[ [ "An Empirical Evaluation of Multivariate Time Series Classification with\n Input Transformation across Different Dimensions" ], [ "Abstract In current research, machine and deep learning solutions for the classification of temporal data are shifting from single-channel datasets (univariate) to problems with multiple channels of information (multivariate).", "The majority of these works are focused on the method novelty and architecture, and the format of the input data is often treated implicitly.", "Particularly, multivariate datasets are often treated as a stack of univariate time series in terms of input preprocessing, with scaling methods applied across each channel separately.", "In this evaluation, we aim to demonstrate that the additional channel dimension is far from trivial and different approaches to scaling can lead to significantly different results in the accuracy of a solution.", "To that end, we test seven different data transformation methods on four different temporal dimensions and study their effect on the classification accuracy of five recent methods.", "We show that, for the large majority of tested datasets, the best transformation-dimension configuration leads to an increase in the accuracy compared to the result of each model with the same hyperparameters and no scaling, ranging from 0.16 to 76.79 percentage points.", "We also show that if we keep the transformation method constant, there is a statistically significant difference in accuracy results when applying it across different dimensions, with accuracy differences ranging from 0.23 to 47.79 percentage points.", "Finally, we explore the relation of the transformation methods and dimensions to the classifiers, and we conclude that there is no prominent general trend, and the optimal configuration is dataset- and classifier-specific." 
], [ "Introduction", "Due to the rising availability of data sources in sectors such as industry, healthcare, and finance, time series classification datasets and problems are increasingly consisting of multiple channels of information [1], representing for instance readings from multiple sensor types.", "Following this trend, machine and deep learning solutions have shifted to try to more effectively address these multivariate problems [1].", "An aspect that does not usually get much focus when describing a novel machine or deep learning solution is the preprocessing of the input data.", "With the terms preprocessing, scaling, and transformation we refer to the methods that modify a set of values to deal with issues such as outliers or to shift them to a predefined range, e.g.", "standardization or min-max scaling.", "Especially in the deep learning landscape, it is now a standard approach at the implementation level to implicitly transform all values with some method such as normalization, in order to bring them to the same numerical scale prior to propagating them through the network.", "In the time series classification field, there has already been extended research for the univariate datasets [2], so it is a natural direction to try and transfer the concepts to the multivariate cases.", "This can regard the models themselves, e.g.", "by applying a model to each univariate channel of a multivariate problem and then aggregating the classification decisions using some ensembling or voting method [1].", "Similarly to this approach, the preprocessing of the input also follows the same pattern.", "For example, if in the case of univariate classification, where the dataset dimensions are (samples,timesteps), all observations are standardized, it seems natural to apply the same transformation to each channel of the multivariate dataset which has dimensions (samples,channels,timesteps), treating it as a separate univariate entity.", "However, as has been noted by Ruiz et al.", "in [1], the transformation of the input data in the case of multivariate datasets is not a trivial problem.", "The additional data dimension presents several options even in the fundamental handling of this prior input scaling.", "In this work, we want to empirically explore the opportunities that can arise from scaling the temporal data across different dimensions and the effect this can have on classification accuracy.", "In order to achieve this, we experiment with five recent multivariate time series classification models, which are a mix of deep learning and other methods and are representative solutions with results equal to or sufficiently close to the state of the art.", "We choose seven different transformation methods and apply them to four distinct slices of the temporal data.", "Our contributions are: We show that in the large majority of datasets tested, the best combination of transformation method and dimension it is applied to leads to better accuracy than that of the models with the same functional hyperparameters and no input scaling, ranging from 0.16 to 76.79 percentage points.", "We show that in the majority of the best configurations, there is a statistically significant difference among the accuracy results of the models when the same transformation method is applied to different temporal dimensions, ranging from 0.23 to 47.79 percentage points.", "We explore the relation of the best transformation methods and dimensions to the models and we find that there are no distinct general trends and that 
the best configuration depends on the dataset and classifier used." ], [ "Related Work", "In fields such as computer vision, data-centric approaches that aim to increase model accuracy have been widely utilized, with data augmentation methods such as cropping and rotation applied to images to increase the amount of the training data, tackle class imbalances, and make the model more robust to perturbations of the input[3].", "In contrast to that, data augmentation techniques in the time series domain have been less extensively employed.", "Although the three-dimensional nature of the multivariate time series problem may initially resemble image data, the temporal dependencies and dynamics of channels, as noted in [4], lead to a qualitative difference between the challenges.", "In recent surveys of time series data augmentation [5], [4] the methods presented range from basic ones inspired by the computer vision field, such as flipping and slicing samples, to more advanced ones utilizing deep generative models.", "In a recent work specifically focused on multivariate time series classification[6], the authors showed that basic time series augmentation methods can be beneficial to the task, counter-acting the overfitting of models, especially on smaller datasets.", "Our exploration of the transformation of data across different dimensions, although not strictly a data augmentation method, can be considered a data-centric approach, in the sense that we are trying to achieve better classification accuracy only by modifying the input data in a specific manner.", "It is orthogonal to the above data augmentation methods and is conceptually placed at an earlier stage.", "It stems from a consideration of the inherent nature of the time series datasets and what the intrinsic relationship of each dimension slice is to the real-world problem." 
], [ "Models", "The landscape of multivariate time series classification models is continuously evolving, with ever-more complex and accurate models and methods [1].", "We can, however, distinguish two broad categories which encompass multiple recent methods: machine learning models based on extracted features and deep learning models.", "In the first case, several features are extracted from either the input values or a transformation of them, and those features are then used with a linear classifier for the final classification.", "In the second category, the input values are propagated to a deep learning architecture, usually after normalization to bring them to a similar scale.", "The architecture then internally performs the end-to-end transformation and classification of the input.", "We select three recent methods belonging to the first category and two belonging to the second.", "A short description of the methods follows, starting with the feature extraction ones: ROCKET[7] is a method based on random convolutional kernels which not only achieves the best results in terms of accuracy according to a recent evaluation [1] but is also the fastest approach in terms of training time.", "Two features are extracted from the output of the convolution of the input with each random kernel.", "WEASEL+MUSE[8] extracts features from windows of the input, utilizing a truncated Fourier transform and bag-of-patterns approach.", "It also applies statistical filtering of these features using a $\\chi ^{2}$ test.", "LightWaveS[9] utilizes lightweight wavelet scattering with arbitrary wavelets.", "Four statistical features are extracted from each of the scattering coefficients and are filtered with a hierarchical feature selection approach.", "Our selected models in the deep learning category are: ResNet[10], which has been proposed as a strong deep learning baseline for the time series classification task, with its architecture consisting of convolutional layers, residual connections, and global average pooling layers.", "InceptionTime[11], which utilizes a more complex architecture of convolutional blocks and bottleneck layers, in the form of Inception modules [12], with residual connections.", "The models were selected on the basis of them being recent time series classification methods, which were designed to handle multivariate data and are not just ensembles of univariate methods.", "Moreover, the selected models include the best classifiers for 20 out of the 26 equal-length UEA problems according to the reported accuracy metrics in [1], so they are a very representative sample of the current state of the art.", "Another factor that was taken into account was the computational cost of each method.", "Since we have to perform multiple resamplings of multiple transformation methods across four data slices, we have to limit the model selection in order for the experiments to finish within a reasonable amount of time.", "Thus, although solutions such as HIVE-COTE [13] and CIF [14] may rank higher in accuracy for some problems, their very long training time makes it impractical to fairly include them in our evaluation." 
], [ "Transformation methods", "Data scaling is of course a standard method during the exploratory data analysis phase of a problem.", "However, as we mentioned above, it is often overlooked when presenting novel time series classification approaches, especially in the recent deep learning environment, where the model architecture is expected to reach the correct weight values regardless of the input format.", "In our experiments, we test seven different well-known transformation methods[15], ranging from simple linear to more complex non-linear functions.", "We present those below, with a short description for the sake of completeness: Normalization The values are transformed so that their L2 norm is 1.", "Standardization The values are transformed so that they have zero mean and unit variance.", "MinMax The values are scaled to the [0,1] range.", "MaxAbs The values are scaled based on their maximum absolute value, but their sign is retained.", "On positive data, this method is equivalent to MinMax.", "Robust The values are scaled based on their median and interquartile range, which are robust against outliers.", "Power Transformation The values are non-linearly transformed with a power transformation (Yeo-Johnson method in our experiments) in order to approach a Gaussian distribution, minimizing skewness and stabilizing variance.", "Quantile Transformation The values are non-linearly transformed so that their probability density function is mapped to a uniform distribution with a [0,1] range." ], [ "Dimensions", "The 3-dimensional nature of multivariate time series problems, namely samples, channels, and timesteps, presents a multitude of options for selecting data slices across which the appropriate transformation method can be applied.", "As we said, as a result of the mapping of concepts from the univariate time series research, it may seem natural to transform all values of each channel as a separate set.", "In this work, we claim that this choice is not standard or trivial and that different configurations may lead to considerably different results.", "For instance, it is generally accepted that a large part of the added value in multivariate datasets comes from the interplay and associations among different channels.", "Thus, by selecting data slices that include values across different channels, we introduce such associations even before the processing of the input by the models, which may be able to help them perform better in some datasets.", "We denote the original dataset as $D$ with $N$ samples, $C$ channels, and $T$ timesteps.", "Below we present the four distinct data slices that we selected for experimentation, along with an intuitive explanation: Channels This is the configuration more closely related to the univariate paradigm, as all values of each channel across all samples are considered a separate set $S_{i} = \\lbrace D_{*,i,*}\\rbrace , 1\\le i\\le C$ .", "Timesteps In this configuration, the values of each timestep across all samples and all channels are considered a separate set $S_{i} = \\lbrace D_{*,*,i}\\rbrace , 1\\le i\\le T$ .", "Both This configuration is a combination of the above, where for each channel, the values of each timestep across all samples are considered a separate set $S_{ij}= \\lbrace D_{*,i,j} \\rbrace , 1\\le i\\le C , 1\\le j \\le T$ .", "All In this configuration, all values of the dataset are taken as a single set $S = \\lbrace D_{*,*,*}\\rbrace $ .", "This non-exhaustive selection of dataset slices is based on the rationale of 
Datasets
We experiment on the 26 of the 30 datasets of the UEA collection [16] that have equal-length samples and can thus be handled easily by all models. Moreover, this is the same subset for which there are detailed metrics in [1], so we have a robust point of reference. For the WEASEL+MUSE method, we also exclude the datasets DuckDuckGeese, EigenWorms, FaceDetection, MotorImagery, PEMS-SF, and PhonemeSpectra, due to its inability to successfully complete the training on these, as also noted in [1].

Experimental setup
All experiments were run on the DAS-6 infrastructure [17], on nodes with 24-core AMD EPYC-2 (Rome) 7402P CPUs, NVIDIA A6000 GPUs, and 128 GB of RAM. We implement ROCKET and MUSE using sktime [18], and InceptionTime and ResNet using its deep learning extension, sktime-dl. For LightWaveS, we use its provided code. Regarding the method parameters, we tried to follow as closely as possible the ones reported in [1] and used the default settings for ROCKET and LightWaveS. We present these parameters in detail in Table REF. As a baseline, we use the models on the unmodified UEA datasets and, in addition, we disable all data preprocessing in the methods that allow this. We used scikit-learn [19] to implement all scaling methods. We repeat each experiment 20 times with different starting seeds and take the mean accuracy. For each model and dataset, we sort the different configuration results by descending mean accuracy and then by ascending standard deviation among resamples. In this way, we find the combination of scaling method and dimension which yields the highest mean accuracy and the most stable results. We present the mean-accuracy difference from the baselines only when it is statistically significant. In order to distinguish the value of dimension selection from that of the transformation method, we present another set of results: for each of the datasets and models, we keep the transformation method of the best-performing configuration fixed and apply it to all four data slices. We then do pairwise testing to determine the statistical difference between all possible pairs of the four accuracy results. We do not make any assumptions about the result distributions, so in both cases we use the Wilcoxon signed-rank test [20] with a p-value of 0.05 and Holm's alpha correction when needed [21]. The code for the experiments as well as the detailed metrics are made available at https://github.com/lpphd/mtsscaling to facilitate reproducibility of the results.
Table: Method parameters
Results
We present the best-achieved accuracy, as well as the difference from the baseline result for each model, in Table REF.
Table: Accuracy under best transformation-dimension configuration and significant differences from baseline accuracy
We can see that there is an increase in accuracy in the large majority of the datasets for all models, ranging from 0.16 to 76.79 percentage points, with the median increase being 3.88 percentage points. Broken down by model, the median increase in accuracy is 2.75 for ROCKET, 3.38 for MUSE, 7.27 for LightWaveS, 3.7 for ResNet, and 2.74 for InceptionTime. Remarkably, not only is the accuracy increased compared to the baseline experiments, but the best accuracy across all models is higher than the best accuracy presented in [1] for 13 out of the 26 datasets, showing that this input preprocessing exploration can result in new state-of-the-art results without modifying the base model at all.
A point that merits explanation is that this approach of using no input scaling as the baseline differs from the default behavior of models such as ROCKET and LightWaveS, which employ scaling as part of their pipeline, and from the usual normalization for deep learning models. We follow it in order to get results that are as fair as possible and to create a reference point based only on the mechanics of the models rather than on any scaling effect. However, we can confirm that the same trends and conclusions hold for the default behavior of our selected classifiers by taking their reported accuracy metrics on the same datasets from [1], [9] and performing a mean-accuracy comparison. Again, for the majority of datasets and classifiers, there is an increase in accuracy, ranging from 0.1 to 40.0 percentage points with a median of 3.25. There are also a few negative results, which indicate that applying no transformation gives better accuracy for specific models and datasets. These results do not affect our conclusions, since we are considering the transformation method and dimension as hyperparameters, and we can include additional configurations in this hyperparameter search to increase the chances of achieving the optimal result.
We also aim to distinguish the value of dimension selection from that of the transformation method. Although in practice the scaling method and dimension would be co-selected based on their interplay and the dataset characteristics, we want to demonstrate that even for more complex transformation methods, the dimension selection can significantly affect the outcome. In Table REF we see whether or not there is a statistically significant difference in the accuracy results between any two dimensions under the optimal scaling method, and if so, what the difference in mean accuracy is between the optimal and worst dimension.
Table: Difference (of significantly different results) in mean accuracy between best and worst dimension for fixed transformation method
These results reinforce our conclusions, as we can see that for the majority of datasets and models there is a statistically significant difference between the results of at least two of the four dimensions, and the mean-accuracy differences range from 0.23 to 47.79 percentage points, with the median being 3.82 points. This shows that a significant part of the accuracy increase compared to the baselines stems from the selection of the most suitable dimension for a given dataset and classifier.
We can also study the configurations that achieve the best performances to discover potential trends in the dimension or transformation method selection. To achieve this, for each classifier and dataset we consider the group of configurations that achieve either the top accuracy or accuracy within 1 percentage point of it. We then calculate a score for each dimension and transformation method that appears in these configurations, defined as the number of times it appears divided by the total number of group members. For example, if the top configurations are [minmax_both, standard_both, quantile_all], the dimension 'Both' would get a score of (1+1)/3 = 2/3, while each of the 'MinMax', 'Standard', and 'Quantile' methods would get a score of 1/3. A small sketch of this computation follows.
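The score computation can be sketched as follows (illustrative code of ours; configuration strings follow the method_dimension naming of the example above):

```python
# A minimal sketch of the utility-score computation described above
# (names and data are illustrative).
from collections import defaultdict

def utility_scores(top_configs):
    """top_configs: list of 'method_dimension' strings for one classifier/dataset."""
    scores = defaultdict(float)
    for config in top_configs:
        method, dimension = config.rsplit("_", 1)
        scores[method] += 1 / len(top_configs)     # normalized appearance count
        scores[dimension] += 1 / len(top_configs)
    return dict(scores)

print(utility_scores(["minmax_both", "standard_both", "quantile_all"]))
# {'minmax': 0.33, 'both': 0.67, 'standard': 0.33, 'quantile': 0.33, 'all': 0.33}
```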
By summing this normalized score across all datasets, we get a "usefulness" profile of the dimensions and transformation methods for each classifier. We can see these scores in Fig. REF.
Figure: Utility scores of dimensions and transformation methods for each classifier
Regarding the dimension scores in Fig. REF, we can see that there is no universal trend and the results are classifier-specific. One result that stands out is the high utility of the "Channels" dimension for the LightWaveS method. This seems to be related to the method's mode of operation, which extracts features from individual input channels without combining them in any way. ROCKET does combine channels when generating features, and its top dimension is "Both". This dimension is also the top one for the two deep learning models, although for ResNet the "Timesteps" dimension is also valuable. On the contrary, in WEASEL+MUSE we observe that "Both" has the lowest value, with a balance across the other dimensions.
Considering the transformation method scores in Fig. REF, we see again that the results vary across classifiers. The two deep learning models show a common behavior in that "Quantile" and "MinMax" are valuable methods for both, which seems to validate the common approach of scaling the input to the [0,1] range. However, they also deviate in the scores of methods such as "Robust" and "Standard". "Quantile" is also the top method for ROCKET, followed by "Standard", while for LightWaveS this order is reversed. MUSE shows a relative balance across transformation methods. A general trend we can observe is that the "Quantile" method seems to be useful for all classifiers, while the simple normalization method has a low score in all cases.
These figures show that there is no clear winner in either dimension or transformation method, and the best configuration depends on the classifier and the dataset under consideration. The conclusion we can draw is that including the data dimension and transformation method in the hyperparameter search of a model is the most reliable way of discovering the configuration that gives the optimal result in terms of accuracy.
], [ "Discussions and Conclusion", "In summary, in this paper, we empirically explore the input preprocessing possibilities presented by the format of multivariate time series datasets, namely (samples, channels, timesteps) and their effect on classification accuracy.", "We test seven data transformation methods across four distinct data slices and their effect on the accuracy of five recent machine and deep learning methods.", "We show that the optimal configuration of data slice and transformation method leads to an increase in the classification accuracy in almost all cases, in comparison with the baselines without any preprocessing.", "We also show that the correct dimension selection can lead to a large accuracy increase compared to a sub-optimal selection.", "The above empirical results affect topics on a broad spectrum of time series analysis and classification, from the evaluation of novel methods to computational cost savings.", "In research, these results indicate that data-centric approaches are a fruitful research direction and can have significant benefits in terms of classification accuracy, on par or even better than new methods, without incurring the additional model complexity, especially in the landscape of deep learning.", "In this case, the novel approaches should be evaluated based on additional aspects, such as interpretability or deployment suitability.", "Similarly, in the more applied industry sector, it points practitioners to the possibility of increasing accuracy for a specific use case through data-centric means, obviating the need to switch to more computationally expensive or communication-intensive models, especially in the edge intelligence applications.", "A natural research direction stemming from our work is to formalize the discovery of the most suitable transformation method and dimension.", "Although for the faster methods such as ROCKET and LightWaveS it is easy to quickly search for the best configuration, it is quite impractical for the slower methods such as MUSE.", "Thus, a desirable approach would indicate the optimal dimension for each dataset, possibly based on the statistical properties of each data slice, and also explain this choice.", "In terms of more applied directions, it would be interesting to experiment with additional models which may have more markedly different approaches than the ones presented, such as shapelet-based ones[1].", "Finally, additional data slices could be explored, such as grouping channels depending on the underlying problem and data source type, e.g., sensors of similar type in an IoT problem." ] ]
arXiv:2210.07713
[ [ "A Lightweight Moving Target Defense Framework for Multi-purpose Malware\n Affecting IoT Devices" ], [ "Abstract Malware affecting Internet of Things (IoT) devices is rapidly growing due to the relevance of this paradigm in real-world scenarios.", "Specialized literature has also detected a trend towards multi-purpose malware able to execute different malicious actions such as remote control, data leakage, encryption, or code hiding, among others.", "Protecting IoT devices against this kind of malware is challenging due to their well-known vulnerabilities and limitation in terms of CPU, memory, and storage.", "To improve it, the moving target defense (MTD) paradigm was proposed a decade ago and has shown promising results, but there is a lack of IoT MTD solutions dealing with multi-purpose malware.", "Thus, this work proposes four MTD mechanisms changing IoT devices' network, data, and runtime environment to mitigate multi-purpose malware.", "Furthermore, it presents a lightweight and IoT-oriented MTD framework to decide what, when, and how the MTD mechanisms are deployed.", "Finally, the efficiency and effectiveness of the framework and MTD mechanisms are evaluated in a real-world scenario with one IoT spectrum sensor affected by multi-purpose malware." ], [ "Introduction", "Society is experiencing a massive increment of Internet of Things (IoT) devices deployed over multiple real-world scenarios such as industry, home, health, transport, or agriculture.", "The IoT devices used in the previous environments present particularities in terms of hardware, operating systems, services, and data.", "However, there are also remarkable inter-scenario similarities such as the connectivity to the internet and limitations in terms of CPU, memory, and storage.", "These aspects, combined with the complexity of deploying detection and mitigation cybersecurity mechanisms, make IoT devices one of the most desired targets for cyberattacks [1].", "Cybersecurity studies acknowledge this fact, since the number of cyberattacks affecting heterogeneous IoT devices is increasing every year [2].", "In addition, cybercriminals are moving towards the usage of multi-purpose malware, where malicious behaviors such as remote control, data leakage, encryption, mining, code execution hiding, and other hostile actions can be perpetrated by individual malware samples.", "This fact complicates the challenge of defending IoT devices since detection and mitigation mechanisms must be varied, and they usually consume many resources.", "Assuming that perfect security is not realistic in any system, in 2009, a novel cyberdefense paradigm called Moving target defense (MTD) was introduced [3].", "MTD proposes to continuously change different aspects (like, for example, network, data, or runtime environment) of a given device or system to prevent or mitigate ongoing or future cyberattacks.", "Therefore, the idea is to reduce the attack surface and make it more difficult for attackers to exploit vulnerabilities.", "Despite the progress achieved by existing MTD-based solutions, as stated in [4], a limited amount of work is dedicated to IoT devices.", "More in detail, the following challenges are still open: ch1) lack of IoT MTD mechanisms moving different parameters to mitigate multipurpose malware affecting data integrity, availability, and confidentiality; ch2) lack of on-host systems suitable for IoT devices and able to deploy MTD mechanisms proactively and reactively, and ch3) lack of work evaluating the efficiency of MTDs in 
real testbeds.", "To improve these challenges, the main contributions of this work are: Four MTD mechanisms changing the network, data, and runtime environment of IoT devices to mitigate multi-purpose malware (covering ch1 and code available in [5]).", "A lightweight and IoT-oriented MTD framework able to decide what, when, and how the proposed four MTDs are deployed in a reactive and proactive way (covering ch2).", "The evaluation of the effectiveness and efficiency of the framework and the MTD mechanisms in a real-world scenario with a spectrum sensor affected by multi-purpose malware showing behaviors of Botnet, Ransomware, Rootkit, and Backdoor (covering ch3).", "The remainder of this article is organized as follows.", "Section  reviews existing IoT MTD mechanisms.", "While Section  presents four novel IoT MTDs, Section  provides a framework to manage them.", "Section  evaluates and compares the framework performance and efficiency in a real-world scenario.", "Finally, Section  draws conclusions and next steps." ], [ "Related Work", "A recent review of IoT MTD techniques concludes that IoT MTD is still immature, and novel techniques deployed in real-world scenarios should be prioritized [4].", "Moreover, the authors present WHAT, WHEN, and HOW as design principles for MTDs.", "The WHAT represents the components of the system that are changed to secure the system, being Network, Data, Software, Runtime Environment, and Platform the most representative.", "The WHEN indicates the moment in which the system should change the WHAT, and The HOW is the way in which the WHAT is moved.", "Finally, the authors categorized the 32 existing IoT MTD works according to the WHAT principle, where 54% focused on the network, 20% on runtime, 13% on software, 10% on data, and 3% on the platform.", "Starting from the network category, the one with the most MTDs, [6] proposes $\\mu $ MT6D, an MTD mechanism oriented to constrained devices that limit the time window for reconnaissance attacks through IP address rotation.", "[7] presents AShA, a method allowing a fast, secure, and collision-free address renewal of IPv6.", "The previous two solutions have been tested in simulated environments before the malware infection happens.", "In contrast, the work at hand not only protects against network attacks (proactively and reactively) but also against attacks affecting data and runtime environments.", "Continuing with the runtime category, [8] proposes the combination of software and hardware mitigation.", "From the software perspective, it uses a randomization approach that modifies the layout of the executable code, preventing code-reuse attacks.", "From hardware, it isolates the binary code to avoid exposing executed code to attackers.", "Compared to the work at hand, this solution fails against attacks manipulating links to libraries.", "Dealing with the data category, [9] presents a side-channel resilient MTD mechanism for electromagnetic-based side-channel attacks.", "The proposed solution applies rekeying at suitable intervals to reduce the computational and communication overhead.", "Another MTD focused on data is [10].", "It proposes a Dynamic Key-Length-Based Security Framework (DLSeF) that uses a shared key that is frequently updated to ensure end-to-end security.", "Compared to the work at hand, the previous two solutions do not prevent attacks affecting data encryption.", "Table: Comparison of IoT MTD Mechanisms.", "Network (N), Runtime (R), Data (D), Proactive (Pr), Reactive (Re).", "REF compares the 
Table REF compares the main aspects covered by previous works. As can be seen, attacks performed by recent malware able to encrypt and steal sensitive data, change libraries, and hide themselves are not considered by existing MTDs. Furthermore, most MTDs are evaluated in simulated scenarios. Finally, there is no framework able to deploy MTD mechanisms when needed.

IoT MTD mechanisms
This section presents novel MTD mechanisms to mitigate multi-purpose malware controlling IoT devices, encrypting data, hiding malicious commands, or leaking sensitive data.

File Encryption
This novel reactive MTD mechanism creates a honeypot with dummy files on the IoT device (victim). It traps ransomware in a dynamically expanding and collapsing directory and file tree with dummy files while the encrypting process is discovered and killed. The goal is to keep as much data safe as possible while focusing exclusively on the encryption behavior shared by multiple ransomware implementations. Related malicious phases commonly found in ransomware, such as the cryptographic key exchange with a C&C server, are covered by other MTD techniques. Before providing the MTD details, it is important to know that most existing ransomware samples encrypt files recursively. To mitigate this, the proposed MTD is first deployed in the directory in which files are being encrypted. Then, it moves to a random existing subdirectory and creates a new subdirectory with dummy files. Once a given number of dummy files per directory has been created, it moves to a new subdirectory and repeats the process. This movement is done because once the encryption of a directory starts, no new files added to that directory are encrypted. Additionally, the MTD deletes dummy files once they are encrypted to avoid depleting scarce disk space. Figure REF shows a flowchart with the previous steps. The creation of dummy files is key to building a file-based honeypot that allows the ransomware to be stalled and identified. Lastly, the MTD monitors all running processes to identify and kill the encrypting process. For that, the CPU usage of all processes is monitored, and all processes falling below a minimum threshold are discarded, since encrypting files is a CPU-intensive task. In the next step, processes included in a whitelist are filtered out. Finally, the MTD monitors how many files are opened by each process on the suspicious list within a minute. If the number of open files exceeds a configurable threshold, the process is killed.
Figure: File Encryption Flowchart

File Format
This novel reactive MTD shuffles the extensions of IoT sensor files to hide them from malware such as backdoors and ransomware affecting data availability, integrity, or confidentiality. The MTD relies on the fact that some malware families select target files according to their extensions. To achieve this, the MTD creates pseudo extensions consisting of randomly generated alphanumeric strings and replaces selected file extensions with the pseudo ones. The MTD maintains a dictionary to track the relationship between valid and pseudo extensions. When a new pseudo extension is created, the MTD checks in the dictionary whether it is already in use, to avoid collisions during reconstruction. Then, once the malware is mitigated, the pseudo extensions are replaced with the genuine ones. A minimal sketch of this shuffling is shown below.
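The following is our own illustrative sketch of the extension shuffling; the paths, target extensions, and pseudo-extension length are assumptions, not the authors' implementation.

```python
# A minimal sketch of the File Format MTD: shuffle file extensions to
# random pseudo extensions and keep a dictionary for later restoration.
# Paths and parameters are illustrative assumptions.
import random
import string
from pathlib import Path

PSEUDO_LEN = 8
mapping = {}  # renamed path -> original path

def random_ext(used):
    while True:
        ext = "".join(random.choices(string.ascii_lowercase + string.digits,
                                     k=PSEUDO_LEN))
        if ext not in used:          # avoid collisions during reconstruction
            return ext

def shuffle_extensions(root, targets={".pdf", ".csv", ".txt"}):
    used = set()
    for f in Path(root).rglob("*"):
        if f.is_file() and f.suffix in targets:
            ext = random_ext(used)
            used.add(ext)
            new = f.with_suffix("." + ext)
            f.rename(new)
            mapping[new] = f         # remember how to restore

def restore_extensions():            # called once the malware is mitigated
    for new, original in mapping.items():
        new.rename(original)
    mapping.clear()
```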
], [ "Libraries", "This novel MTD changes the Linux runtime environment of the IoT sensor reactively or proactively to reduce or mitigate manipulations of legitimate libraries to execute malicious code (done by rootkits).", "More in detail, this MTD sanitizes corrupted libraries and unlinks fake libraries.", "When running applications on Linux, it is possible to preload needed libraries through the LD_PRELOAD environment variable contained in the /etc/ld.so.preload file.", "In particular, LD_PRELOAD links shared libraries that are preloaded on the user-space, taking precedence over preloaded libraries of the kernel-space.", "Therefore, malware like rootkits take advantage of this and modify this environmental variable to preload malicious libraries hiding themselves.", "Additionally, rootkits can unlink the /etc/ld.so.preload file from the dynamic linker, and a malicious file is linked in its place.", "To prevent or mitigate these behaviors, this MTD sanitizes i) the ld.so.preload file with a backup containing the right value of the LD_PRELOAD variable, and ii) the dynamic link to the /etc/ld.so.preload file.", "Regarding the first action, the MTD guarantees that LD_PRELOAD points to the legitimate libc.so.6 library.", "Therefore, typical malicious behaviors of rootkits such as files hiding or disabled access files are prevented.", "Dealing with the second action, the MTD overwrites the /lib/arm-linux-gnueabihf/ld-2.24.so file with the string /etc/ld.so.preload to link again with the proper library." ], [ "IP Address", "This is an adaptation of existing MTD mechanisms shuffling the IP address of IoT devices (victim) to mitigate the impact of malware based on C&C (like Botnets).", "The main difference compared to related work is that it works with private addresses and can be executed proactively or reactively to disrupt the communication between the IoT sensor and C&C.", "The MTD is designed so that it cuts communication to all types of C&C servers, whether the server is running on the device or on an external host (locally or publicly reachable).", "From the implementation perspective, the MTD mechanism creates a list with all IP addresses available in the private network where the IoT device is connected.", "It is done by generating a list with all IP addresses of the private network and then removing the IPs assigned to active devices (obtained with the arp-scan command).", "After that, a given IP address from the list is randomly chosen, and the IoT device requests to migrate to that IP using ifconfig.", "Then, the MTD checks if there is internet connectivity.", "If so, the migration is successful, and the legitimate services of the IoT device are restarted.", "If there is no connectivity, the MTD removes the IP from the list of possible ones, chooses a new IP, and checks its connectivity.", "This sequence is repeated until a successful migration.", "This section presents a novel MTD framework for IoT devices.", "The framework objective is to decide WHEN, HOW, and WHAT MTD mechanisms explained in Section  should be deployed to mitigate multi-purpose malware.", "REF shows the modules and components of the proposed framework, which are all deployed on the IoT device.", "Figure: IoT-oriented MTD Framework Architecture MTD Decision.", "This module decides WHEN to deploy MTD mechanisms.", "MTD Enforcement.", "This module deals with HOW and WHAT MTD mechanisms are deployed.", "The MTD Decision module focuses on WHEN to deploy MTD mechanisms according to proactive and reactive criteria.", 
"For the proactive approach, the system administrator defines in the Action or Time Rule component the actions or time windows that need to be met by the IoT device to deploy MTDs.", "Periodically, the Engine component checks the criteria and sends an alarm to the MTD Enforcement module.", "This approach is the most lightweight and does not need previous knowledge to make the WHEN decision.", "However, the lack of information also complicates further decisions about HOW and WHAT MTD should be deployed.", "The reactive approach can facilitate this decision, but it increases the computational complexity and decision time of the MTD Decision module.", "In particular, the reactive process decides WHEN to deploy MTDs according to the output of a supervised Machine Learning (ML)-based process able to classify the IoT device behavior.", "More in detail, first, the Data Acquisition component periodically collects kernel events of the IoT device related to CPU, memory, file system, network interface, scheduler, drivers, and random numbers usage.", "Then, the Data Curation component preprocesses the events and extracts relevant features.", "Feature vectors are stored and labeled in a dataset created by the Dataset Generation component.", "The monitoring, data curation, and dataset generation processes are repeated for normal and malicious behaviors of multi-purpose malware.", "After that, the Algorithm Selection component selects a set of classification algorithms, the Model Training trains the ML models with the created dataset, and the Model Evaluation detects the IoT device behavior, which is sent to the upper module.", "The MTD Enforcement module decides HOW and WHAT MTD mechanism (proposed in Section ) should be deployed according to the outputs of the previous module.", "For that, the Selection & Deployment component defines a set of rules deciding the MTD mechanisms that should be deployed.", "If the IoT device behavior is available (reactive approach), it deploys the most suitable MTD for that behavior.", "If not (proactive approach), all proactive MTDs are deployed simultaneously.", "An example of policies is provided below.", "If [Rootkit] $\\rightarrow $ deploy [Libraries MTD] If [Ransomware] $\\rightarrow $ deploy [File Encryption MTD] If [Botnet] $\\rightarrow $ deploy [IP Address MTD] If [Backdoor] $\\rightarrow $ deploy [File Format MTD] If [action] or [time] $\\rightarrow $ deploy [File Format MTD] and [IP Address MTD] and [Libraries MTD]" ], [ "MTD Framework Efficiency and Effectiveness", "It describes the implementation and the experimental results achieved by the MTD framework when deployed on a real IoT spectrum sensor affected by multi-purpose malware." 
], [ "IoT Spectrum Sensors Affected by Multi-purpose Malware", "The IoT evolution has brought Radio Frequency crowdsensing platforms to reality, offering the optimization of spectrum usage.", "However, sensors used by these platforms are resource-constrained devices with well-known vulnerabilities and limitations to deploy complex cybersecurity mechanisms.", "Attackers are aware of this fact, and the increment of multi-malware affecting IoT devices proves it.", "In the context of malware affecting IoT devices, this work has considered ElectroSense [11], a real-world, open-source, and crowdsensing solution that uses Raspberry Pis to continuously send RF spectrum data gathered from connected software-defined radio kits.", "In particular, this work has deployed an ElectroSense sensor in a Raspberry Pi 4 with 1.5 GHz CPU and 3.7 GB RAM.", "The Raspberry Pi 4 executes the official and publicly available ElectroSense software, and is connected to a local area network with access to the Internet.", "Then, the device has been infected with multi-purpose malware such as Bashlite (botnet); Ransomware_PoC (ransomware); Beurk and Bdvil (rootkit); and TheTick, PythonBackdoor and httpBackdoor (backdoor).", "All these malware samples and some others can be found and downloaded through [12].", "Furthermore, since the main contribution of this work focuses on MTD techniques and not malware detection, other malware samples detectable by the proposed solution can be found in the previous work.", "These malware samples can be executed by exploiting weak passwords or well-known vulnerabilities of IoT devices network services such as SSH or Telnet.", "Furthermore, their functionality is to control sensors remotely, encrypt data, hide malicious actions, leak sensitive data, or execute malicious code.", "Finally, the impact of these behaviors on spectrum sensors is to i) disrupt crowdsensing platforms services, ii) execute Distributed-Denial of Service (DDoS) attacks, or iii) perform lateral movement attacks." 
], [ "MTD Decision Module: ML-based Reactive", "For optimal deployment of MTD mechanisms, detecting the malicious behavior of IoT spectrum sensors is needed.", "In this sense, the Data Acquisition component uses kernel software events, present in any Linux system due to their flexibility and variety.", "To monitor this dimension, perf Linux tool is selected, monitoring $\\approx $ 80 values from different event families such as network, memory, file system, CPU, process scheduler, or device drivers.", "Every 5 s, The monitoring collects the data of the previous metrics, considering all processes running in the sensor.", "Then an extra processing time is required for perf to calculate and return the values.", "REF shows the resource usage of the Data Acquisition component.", "As it can be appreciated, it implies a low resource consumption, with only a maximum 4% of usage in one CPU core.", "Besides, the complete perf monitoring and data processing loop is performed in $\\approx $ 10 s, which is an acceptable monitoring time for early malware detection.", "Table: Behavior Monitoring Resource Consumption.After that, since the framework uses supervised ML/DL algorithms, it is necessary to monitor each behavior to allow the algorithms to be trained and generate models capable of differentiating malicious activities.", "Therefore, normal (uninfected) behavior and the different malware samples (Beurk, Bdvl, Bashlite, Ransomware_PoC, HttpBackdoor, PythonBackdoor, and TheTick) are monitored for a minimum of six hours.", "In total, $\\approx $ 2160 vectors are available per behavior, making up a dataset of $\\approx $ 21600 vectors and $\\approx $ 6.78 MB.", "For attack identification, k-NearestNeighbors (k-NN), Support Vector Machine (SVM), XGBoost (XGB), Decision Tree (DT), Random Forest (RF), and Multi-Layer Perceptron (MLP) are tested.", "Then, the available dataset is divided into 80% for training (including cross-validation) and 20% for testing.", "Besides, data standardization has been applied using min-max for the algorithms that require normalization.", "REF shows the classification results per model in terms of average F1-Score.", "RF and XGB are the models providing the best performance with a 0.98 average F1-Score.", "For deployment, RF is selected due to its shorter training time and lower memory and storage usage.", "In this sense, the training in the RPi4 takes $\\approx $ 39.46 secs (using one CPU core at 100%), while the preprocessing and evaluation of a single vector takes $\\approx $ 0.0427 s. Besides, the model needs 89.55 MB in memory and storage, since pickle library is used to serialize the binary object from memory to a file.", "Table: F1-Score per Classification Algorithm.Finally,  REF shows the confusion matrix for the classification of each behavior and malware.", "It can be seen that almost all behaviors are correctly classified, only having trouble detecting the 25% of the times the Data Leak attack when using TheTick backdoor.", "Figure: RF Confusion MatrixIn conclusion, the ML-based Reactive component takes $\\approx $ 10 s to accurately detect normal and malicious behaviors when deployed as part of the MTD framework (only a few false positives for Data Leak are present).", "In addition, the consumption of CPU, RAM, and storage during monitoring and model training/evaluation is acceptable for IoT spectrum sensors implemented in Raspberry Pis." 
], [ "MTD Enforcement Module: Deploying MTD Mechanisms", "A set of experiments has been conducted to analyze the efficiency and effectiveness of the framework and the four MTD mechanisms.", "On the one hand, to evaluate the reactive deployment of MTD mechanisms, each experiment: i) infects the IoT spectrum sensor with the appropriate malware, ii) detects the malware and malicious behavior, and iii) deploys the MTD mechanism.", "On the other hand, the MTD deployment relies on time intervals for proactive evaluation.", "REF shows the duration of each phase (highlighting the runtime of the respective malicious behavior or MTD technique), the CPU, RAM, time, and I/O blocks/s used in the sensor.", "In addition, it shows the KB/s sent by the spectrum scanning process.", "Finally, these metrics are measured for i) the spectrum sensor (with and without MTD framework), and ii) the MTDs deployed reactively and proactively.", "Table: Effectiveness and Efficiency of MTD EnforcementAs can be seen, the MTD framework resource consumption in the background is minimum.", "For proactive and reactive deployment, the MTD mechanisms differ in mitigation time.", "For example, data leakage behavior impacts the device during 112 s. Upon deploying the File Format MTD which operated for 56 s, critical files are protected so that only 11 MB of data were leaked out of 300 MB of PDF files.", "Furthermore, the CPU, RAM, I/O, and data collected by the sensor of the phase where the malware is running can be compared to the row underneath showing the impact of malware and MTD operation.", "From that, it can be concluded that for all MTD techniques, the resource impact is minimal when deployed reactively.", "From the proactive deployment perspective, the Libraries MTD does not impact the sensing service.", "Thus, the MTD is deployed every minute, leading to successful disinfection of rootkits and backdoors after 29 s with negligible impact on resources.", "In terms of CPU consumption, Ransomware, and the File Encryption MTD have the strongest impact as presented in  REF .", "Here, the three stages consisting of (1) encryption, (2) MTD operation, and (3) termination of the encryption process, are presented.", "Although the creation of files adds to the resource consumption, the execution of the ransomware is limited to 84 s. 
Furthermore, due to the creation of dummy files, only 7.1 MB of data are encrypted until the malware is detected.
Figure: Device Resource Impact When Mitigating Ransomware

Summary and Findings
This work presents four IoT MTD mechanisms focused on i) generating dummy files to trap encrypting processes, ii) manipulating file extensions to reduce data leakage, iii) sanitizing libraries to inhibit action hiding, and iv) shuffling IP addresses to avoid remote control. These four MTD mechanisms are selected and deployed by a proposed hybrid IoT-oriented framework. The framework presents a modular architecture that uses i) ML-based behavior classification or predefined rules to decide WHEN MTD mechanisms should be applied, and ii) a rule-based module to decide HOW and WHAT MTD mechanisms are deployed. The framework has been deployed on a real IoT spectrum sensor, where its effectiveness and efficiency were evaluated with different multi-purpose malware showing behaviors of botnets, ransomware, backdoors, and rootkits. The ML-based reactive process achieved an average 0.98 F1-score using Random Forest and took about 10 s to classify normal and malicious behaviors. Finally, the performance of the implemented MTD techniques has been verified in terms of attack mitigation and resource consumption, stopping all the attacks satisfactorily and with low impact on CPU, RAM, disk, and the sensing spectrum services. Thus, the framework and the experiments conducted with it address key challenges of MTD, such as the evaluation of MTD on real IoT platforms and the deployment of MTD in an intelligent and resource-efficient way. As future work, the optimization of the implemented MTD techniques is planned, together with the development of new MTD mechanisms against other malware behaviors. Besides, it is planned to add reinforcement learning to the MTD Enforcement module, making it fully automated and adaptive.

Acknowledgments
This work has been partially supported by (a) the Swiss Federal Office for Defense Procurement (armasuisse) with the CyberTracer and RESERVE (CYD-C-2020003) projects and (b) the University of Zürich UZH.
arXiv:2210.07719
[ [ "The identification of mean quantum potential with Fisher information\n leads to a strong uncertainty relation" ], [ "Abstract The Cramer-Rao bound, satisfied by classical Fisher information, a key quantity in information theory, has been shown in different contexts to give rise to the Heisenberg uncertainty principle of quantum mechanics.", "In this paper, we show that the identification of the mean quantum potential, an important notion in Bohmian mechanics, with the Fisher information, leads, through the Cramer-Rao bound, to an uncertainty principle which is stronger, in general, than both Heisenberg and Robertson-Schrodinger uncertainty relations, allowing to experimentally test the validity of such an identification." ], [ "Introduction", "The paper analyzes and utilizes the relation between the mean quantum potential appearing in de Broglie-Bohm theory and the Fisher information.", "It could therefore be helpful to first review these two concepts.", "The de Broglie-Bohm formulation of quantum theory [1], [2], [3], [4], [5], [6] is a realistic and deterministic framework for the description of quantum phenomena, allowing the description of individual quantum events [7].", "The theory is sometimes called ontological, since it attempts to speak about what exists, rather than what one can measure [8].", "It does so by the introduction of (nonlocal) “hidden variables”- properties of the quantum particle that cannot be measured, but are, rather, asserted.", "Such are the positions of the Bohmian particles in the de Broglie-Bohm theory.", "Unlike standard quantum theory, which attributes the statistical properties of the ensemble to the individual particle, giving up the concreteness of position and momentum [9], in the de Broglie-Bohm theory, the latter is retained while the former is forsaken- the particle’s behavior is not inherently probabilistic [10], but, rather, essentially deterministic.", "In Bohm's theory, the probabilistic features express a lack of knowledge about the initial conditions.", "An ensemble of such particles would reproduce the standard quantum distribution, given by the Born rule, $\\rho = |\\psi |^2$ Having a specific position at all times, independent of the measurement process, the particle’s momentum is assumed to be proportional to the local wavenumber [11], $\\vec{k} = \\text{Im} \\lbrace \\nabla \\text{ln}\\psi \\rbrace $ which constitutes the Bohmian guiding equation [12], $\\frac{d\\vec{x}}{dt} = \\frac{\\hbar }{m} \\text{Im} \\lbrace \\nabla \\text{ln}\\psi \\rbrace $ As the local wavenumber is derived from the wavefunction, the particle is said to be guided by the wave.", "The guiding equation does not depend on the specific equation satisfied by the wavefunction, according to which it evolves.", "Having defined the particle’s momentum as such, the Schrödinger equation, $i\\hbar \\ \\frac{\\partial \\Psi }{\\partial t}=\\ -\\frac{\\hbar ^2}{2m}\\ \\mathrm {\\nabla }^2\\ \\mathrm {\\Psi }+V\\mathrm {\\Psi }$ becomes an equation describing the guiding process.", "A polar decomposition of the equation (after Madelung) yields a probability conservation equation for the distribution of particles and a modified Hamilton-Jacobi equation written in terms of the local or Bohmian momentum citeMadelung.", "This is a simple manipulation of the equation that consists of inserting a polar decomposition of the wavefunction $\\Psi = Re^{i \\frac{S}{\\hbar }}$ The two equations are the continuity equation, $\\frac{\\partial (R^2)}{\\partial t}+\\nabla 
\\left(R^2\\frac{\\nabla S}{m}\\right)=0$ And the modified Hamilton-Jacobi equation, $\\frac{\\mathrm {\\partial S}}{\\mathrm {\\partial t}}+\\frac{\\left(\\nabla S\\right)^2}{2m}+V-\\frac{\\hbar ^2}{2m}\\frac{\\nabla ^2R}{R}=0$ Where, according to (3), the local, or, Bohmian, momentum is simply $\\vec{P_q} = \\nabla S$ This equation is called the quantum Hamilton-Jacobi equation, and it differs from its classical counterpart by an extra term called the quantum potential [13].", "$Q=\\ -\\frac{\\hbar ^2}{2m}\\frac{\\nabla ^2R}{R}$ This term is interpreted, in the de Broglie-Bohm theory, as an extra potential, from which a force that acts on the particles, changing their velocities and bending their trajectories, is derived.", "This quantum force mediates the influence of the wave on the particle, describing the guidance mechanism.", "This extra term, the quantum potential, accounts for all quantum phenomena, and is highly nonlocal, enfolding information about the whole experimental setup.", "The quantum potential involves a multiplicative constant, $\\frac{\\hbar ^2}{2m}$ , such that, at the limit of $\\hbar \\rightarrow 0$ , the classical Hamilton-Jacobi equation is retrieved.", "In this limit, quantum effects, which in the Bohmian perspective are interpreted as the influence of the wave on the particle, become negligible, and the dynamics is classical." ], [ "Fisher information", "Fisher information [14], [15], [16] is a fundamental quantity in the theory of information, which measures the amount of information that an observable random variable $Y$ carries about an unknown parameter $\\theta $ upon which the probability of $Y$ depends.", "The likelihood $\\rho (y|\\theta )$ is the probability density function for $y$ conditioned on the value of $\\theta $ .", "In terms of the likelihood, the Fisher information is given by: $I(\\theta ) \\equiv \\int \\frac{1}{\\rho (y|\\theta )}\\left(\\frac{\\partial \\rho (y|\\theta )}{\\partial \\theta }\\right)^2 dy.$ The quantum version of Fisher information [17], [18] has been extensively used in quantum metrology [19] and statistical inference [20], and has been used in the context of entanglement detection [21].", "The random variable $\\hat{Y}$ is an unbiased estimator for the parameter $\\theta $ if $E_{\\theta }[\\hat{Y}] = \\theta .$ That is, the expectation value of the estimator is the parameter.", "The variance of an unbiased estimator is bounded from below by the inverse of fisher information, in what is known as the Cramér-Rao bound [22]: $\\text{Var}[\\hat{Y}] \\ge 1/I(\\theta ).$ We show that this bound leads, through the connection with the quantum potential, to an uncertainty relation, generally stronger than the Robertson-Schrödinger uncertainty relation [23].", "The mean quantum potential of Bohmian mechanics has been related [24], [25], [26], [27], [28], on the basis of formal similarity, to Fisher information, a quantity central in information theory, by the following formula, $\\bar{Q} = \\frac{\\hbar ^2}{8m} I,$ where $\\bar{Q}$ is the mean quantum potential and I is the Fisher information about the observable $\\hat{x}$ .", "This connection has been often asserted due to formal similarity of the definitions [26].", "However, since the two concepts arise in different contexts and theories, albeit both closely related to measurement, this connection has to be physically justified.", "Even more so, noticing that the definition of Fisher information includes conditional probabilities, absent from the expression of the mean 
quantum potential.", "In other words, to justify this connection, one has to reduce the former to the latter, providing concrete justification for such a reduction.", "An attempt for such a justification is outlined in Reginatto’s paper [24] serving as the background for the derivation of the Schrödinger equation from a variational principle of minimum Fisher information.", "He describes the measurement problem of quantum mechanics as an estimation problem of a deterministic variable with a superimposed random noise.", "That is, estimating the parameter $\\theta $ in the presence of unknown added noise $x$ , a measurement $y$ of the parameter is related to $x$ and $\\theta $ by $y = \\theta + x,$ where $y$ is a measurement of the particle's position, while $\\theta $ is its actual “hidden\" position.", "With the addition of an assumption regarding the conditional probability distribution of the measured outcome given the actual value, the assumption of translation invariance, formally described as, $\\rho (y|\\theta ) = \\rho (y-\\theta ) = \\rho (x).$ And so, Fisher information becomes, $I(\\theta ) = \\int \\frac{1}{\\rho (x)}\\left(\\frac{\\partial \\rho (x)}{\\partial \\theta }\\right)^2 dx = -\\int \\rho \\frac{\\partial ^2 \\text{ln} \\rho (x)}{\\partial \\theta ^2} dx.$ Using integration by parts one can show, $\\int \\rho \\frac{\\partial ^2 \\text{ln} \\rho (x)}{\\partial x^2} dx = 4 \\int \\rho \\frac{1}{R} \\frac{\\partial ^2 R}{\\partial x^2} dx = -\\frac{8m}{\\hbar ^2}\\bar{Q},$ so that, if one calculates $I(x)$ , that is, Fisher information about the position, one obtains equation (REF ), relating the mean quantum potential with Fisher information.", "Let us first notice that what Reginatto describes is a theory of measurement for hidden variables, recasting the measurement problem into an estimation problem in which the concept of Fisher information naturally arises.", "What is the nature of the noise in the measurement and its source?", "This question is not addressed, and yet, translation invariance is assumed about the conditional probability distribution.", "What is the justification of such an assumption?", "A possible justification, outlined by Frieden [29] is the fact that the measurement does not depend on the position in which it is performed.", "One might perform the experiment in a different position without expecting different results.", "However, it is clear that for the identification of Fisher information with the quantum potential, one must have, $\\left(\\frac{\\partial \\rho (x)}{\\partial \\theta }\\right)^2 = \\left(\\frac{\\partial \\rho (x)}{\\partial x}\\right)^2.$ While the previous assumptions are partially justified in the literature, this assumption goes unnoticed.", "This statement is equivalent to, $\\left(\\frac{\\partial x(\\theta )}{\\partial \\theta }\\right)^2 = 1,$ which means that an increment of the “hidden\" position results in an equal increment in the noise.", "In other words, the noise is proportional to the actual position of the particle.", "What is the justification for such an assumption?", "It is not at all certain that this assumption can be justified.", "In this paper we propose an experimental test for all of these assumptions.", "Taking the connection with Fisher information seriously, we use the Cramér-Rao bound to arrive at an uncertainty principle, stronger than the Robertson-Schrödinger.", "Breaking this uncertainty would imply that it is inappropriate to call the mean quantum potential “Fisher information”, which means 
that one or more of Reginatto's assumptions are false.", "It is important to note that Reginatto’s reasoning in the rest of the paper is not undermined by our criticism, as Fisher information does not lend the derivation any of its properties other than its name.", "The quantity minimized would simply be renamed.", "It is our goal simply to check whether the name, which, again, should connect two distinct theories in a nontrivial way, is appropriate.", "If the relation is not falsified, the study would not only provide a tighter bound on uncertainty through information theory, but would also suggest and motivate further research in this quantum triple point connecting quantum measurement theory, estimation theory, and nonlocal hidden variables, justifying experimentally Reginatto's assumptions about quantum measurement and nonlocal hidden variables." ], [ "Connecting Bohmian quantities to quantum observables", "The expectation value of momentum is given by the expression $\langle \hat{P} \rangle = \langle \psi |\hat{P}|\psi \rangle .$ To write this expression in the polar representation, let us insert the resolution of the identity $\langle \hat{P} \rangle = \int \langle \psi |\vec{r}\rangle \left\langle \vec{r}\right|\hat{P}\left|\psi \right\rangle d^3r.$ The polar representation of the wavefunction is $\langle \vec{r}|\psi \rangle = Re^{i\frac{S}{\hbar }},$ with $\hat{P} = -i\hbar \nabla $ we have $\langle \hat{P} \rangle = \int R^2\left(-i\hbar \frac{\nabla R}{R} + \nabla S\right) d^3r.$ This expression has two parts: an expectation value of the osmotic momentum and of the Bohmian momentum.", "The first of the two can be shown to be equal to zero when the expectation is evaluated over the whole range and the wavefunction tends to zero at the boundaries.", "This is a consequence of the gradient Gauss theorem: $\int _V -i\hbar R\nabla R \, d^3r = \int _V -i\hbar \frac{1}{2} \nabla \rho \, d^3r = \int _{\partial V} -i\hbar \frac{1}{2} \rho \, \vec{dS} = 0.$ And so, we have that the expectation value of the momentum operator is equal to the mean value of the Bohmian, or local, momentum, that is $\langle \hat{P} \rangle = \bar{P_q}.$ Following the same procedure as with the expectation of momentum, we get, $\langle \hat{P^2} \rangle = \bar{P_q^2} + 2m\bar{Q},$ where $Q$ is the quantum potential.", "As noted in the introduction, the expectation value of the quantum potential is related to the Fisher information, $I$ , about the observable $\hat{x}$ , by $\bar{Q} = \frac{\hbar ^2}{8m} I.$ And so, what we have is a relation between the expectation value of the momentum operator squared, the mean squared local (Bohmian) momentum, and Fisher information.", "Let us rewrite it in terms of $I$ : $\langle \hat{P^2} \rangle = \bar{P_q^2} + \frac{\hbar ^2}{4} I.$ Using the previous results, we can write the variance of the momentum operator as $\text{Var}(\hat{P}) = \langle \hat{P^2} \rangle - \langle \hat{P} \rangle ^2 = \bar{P_q^2} - \bar{P_q}^2 + \frac{\hbar ^2}{4} I.$ That is, $\text{Var}(\hat{P}) = \text{Var}(P_q) + \frac{\hbar ^2}{4} I,$ which means that the difference between the variance of the momentum operator and that of the local momentum is proportional to the Fisher information.", "Using the Cramér-Rao bound and the relationship between the variance of the momentum operator, the variance of the local momentum, and Fisher information, we find a tighter bound on uncertainty, of the form 
$\\text{Var}(\\hat{P})\\text{Var}(\\hat{X}) \\ge \\frac{\\hbar ^2}{4} + \\text{Var}(P_q)\\text{Var}(\\hat{X}),$ which, for zero local momentum variance, becomes identical to the Heisenberg uncertainty.", "$\\text{Var}(\\hat{P})\\text{Var}(\\hat{X}) \\ge \\frac{\\hbar ^2}{4}.$ In fact, this result, namely, the derivation of the Heisenberg uncertainty principle, has been achieved by Reginatto himself [30].", "Others, as early as A. J. Stam in 1959, have derived the Heisenberg uncertainty principle from the Cramér-Rao bound through different considerations [31], [32], [33], [34].", "In [35], tighter uncertainty relations were formulated for mixed states by replacing one variance by the quantum Fisher information.", "Our approach, namely, that of connecting Bohmian quantities to quantum observables and following the consequences of the relation of Fisher information to mean quantum potential, leads to an uncertainty principle tighter than Robertson-Schrödinger, without any modification of known uncertainty relations.", "To see that, let us rewrite the Robertson-Schrödinger uncertainty principle in Bohmian terms.", "In its general form, the Robertson-Schrödinger uncertainty principle is given by [35], [23], $\\text{Var}_{\\rho }(A)\\text{Var}_{\\rho }(B) \\ge \\frac{1}{4}|Tr(\\rho [A,B])|^2 + |\\text{Re}\\lbrace \\text{Cov}_{\\rho }(A,B)\\rbrace |^2.$ Now, since, $\\text{Re}\\lbrace \\text{Cov}_{\\rho }(\\hat{P},\\hat{X})\\rbrace = \\frac{1}{2}\\langle \\lbrace \\hat{P},\\hat{X}\\rbrace \\rangle -\\langle \\hat{P} \\rangle \\langle \\hat{X} \\rangle ,$ the anti-commutator of momentum and position can be written as $\\lbrace \\hat{P},\\hat{X}\\rbrace = -i\\hbar + 2\\hat{X}\\hat{P}.$ And so, we can write $\\text{Re}\\lbrace \\text{Cov}_{\\rho }(\\hat{P},\\hat{X})\\rbrace = -i\\frac{\\hbar }{2} + \\langle \\hat{X}\\hat{P} \\rangle - \\langle \\hat{P} \\rangle \\langle \\hat{X} \\rangle .$ Writing the integral explicitly and using the polar decomposition of the wavefunction, the expectation value of the product of position and momentum becomes, $\\langle \\hat{X}\\hat{P} \\rangle = \\int R^2 x \\left(-i\\hbar \\frac{1}{R}\\frac{\\partial R}{\\partial x} + \\frac{\\partial S}{\\partial x}\\right)dx.$ Integrating by parts and using the fact that, $\\langle \\hat{P} \\rangle = \\bar{P_q}.$ We finally arrive at $\\text{Re}\\lbrace \\text{Cov}_{\\rho }(\\hat{P},\\hat{X})\\rbrace = \\text{Cov}(x,p_q).$ And so, the Robertson-Schrödinger uncertainty can be written as, $\\text{Var}(\\hat{X})\\text{Var}(\\hat{P}) \\ge \\frac{\\hbar ^2}{4} + \\text{Cov}^2(x,P_q)$ (coinciding with the Heisenberg uncertainty relation when $x$ and $P_q$ are uncorrelated)." 
], [ "Relation to quantum uncertainties", "For zero local momentum variance, the Cramér-Rao uncertainty becomes identical to the Heisenberg uncertainty.", "Since the Cramér-Rao bound assumes its minimum for a normal distribution and so does the Heisenberg uncertainty, which is minimal for a Gaussian wavepacket, for which the variance of local momentum is zero, this result is very intuitive.", "More generally, we can say that the two are equivalent whenever the local momentum has zero variance.", "Written in Bohmian terms, the Robertson-Schrödinger uncertainty principle assumes the form given by Eq.", "REF .", "Thus, the Robertson-Schrödinger uncertainty relation is related to the one that follows from the Cramér-Rao bound by the Cauchy-Schwartz inequality, $\\text{Cov}^2(x,P_q) \\le \\text{Var}(x)\\text{Var}(P_q),$ such that the former is always weaker or at most equivalent to the latter, with the inequality saturating when the Pearson correlation coefficient of $x$ and $P_q$ is equal to $\\pm 1$ .", "Intuitively, we may understand this result as an epistemic indifference to the position dependence of Bohmian momentum, coming from information theory.", "This new bound leaves a range of uncertainties forbidden according to the combination of quantum theory and estimation theory.", "This range, whose width is $\\Delta = \\text{Var}(x)\\text{Var}(P_q) - \\text{Cov}^2(x,P_q)$ could allow to test whether it is appropriate to treat the quantum measurement process as a classical estimation task." ], [ "Conclusions", "There has been a growing literature mentioning a connection between the quantum potential and Fisher information.", "Physical justification for the assumptions required for this identification is usually absent, and the relation is often treated simply as a mathematical identity although it might be more fundamental.", "To us it seems that a statement regarding a direct, profound connection between two such remote quantities originating from two distinct theories (even if both are eventually concerned with the process of measurement) ought to be a strong one, and hence has to be carefully examined.", "Using the tools of information theory, to which the quantum potential should be supposedly connected, we devised an experimental test which should allow to falsify this connection.", "As the inverse Fisher information is the lower bound of the variance of an unbiased estimator, known as the Cramér-Rao bound, writing the variance of momentum using the mean quantum potential, and assuming its relation to Fisher information, we arrive at an uncertainty relation, that is, to an inequality bounding from below the product of the variances of position and momentum, which is in general stronger than the Robertson-Schrödinger uncertainty principle.", "Equivalence between the two is established when the Pearson correlation coefficient of the position and the Bohmian momentum is equal to $\\pm 1$ (and equivalence to the Heisenberg uncertainty relation is achieved whenever the position and Bohmian momentum are uncorrelated).", "The uncertainty relation we derived provides a range of uncertainties forbidden by information theoretic constraints applied to quantum mechanics.", "Measuring an uncertainty within this “forbidden” region would suggest that equating the mean quantum potential with Fisher information is inappropriate.", "On the contrary, had the relation not been falsified, our study would suggest, in general, a tighter bound on uncertainty, and some reason to believe that there could indeed 
exist a relation between quantum measurements, classical estimation theory and nonlocal hidden variables (in the form used by Reginatto).", "In future work it could be of interest to generalize the proposed uncertainty relation to other physical variables and moreover to exploit the affinity between uncertainty and nonlocal correlations [36] for deriving similar bounds on quantum correlations exhibited by entangled states." ], [ "Acknowledgements", "This research was supported by Grant No.", "FQXi-RFP-CPW-2006 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor-advised fund of Silicon Valley Community Foundation.", "E.C.", "was supported by the Israeli Innovation Authority under Projects No.", "70002 and No.", "73795, by the Pazy Foundation, by ELTA Systems LTD - Israel Aerospace Industries (IAI) division, by the Israeli Ministry of Science and Technology, and by the Quantum Science and Technology Program of the Israeli Council of Higher Education." ] ]
2210.07732
[ [ "Free Fine-tuning: A Plug-and-Play Watermarking Scheme for Deep Neural\n Networks" ], [ "Abstract Watermarking has been widely adopted for protecting the intellectual property (IP) of Deep Neural Networks (DNN) to defend the unauthorized distribution.", "Unfortunately, the popular data-poisoning DNN watermarking scheme relies on target model fine-tuning to embed watermarks, which limits its practical applications in tackling real-world tasks.", "Specifically, the learning of watermarks via tedious model fine-tuning on a poisoned dataset (carefully-crafted sample-label pairs) is not efficient in tackling the tasks on challenging datasets and production-level DNN model protection.", "To address the aforementioned limitations, in this paper, we propose a plug-and-play watermarking scheme for DNN models by injecting an independent proprietary model into the target model to serve the watermark embedding and ownership verification.", "In contrast to the prior studies, our proposed method by incorporating a proprietary model is free of target model fine-tuning without involving any parameters update of the target model, thus the fidelity is well preserved.", "Our research findings reveal that model fine-tuning with poisoned data is not prepared for the IP protection of DNN models deployed in real-world tasks and poses a new research direction toward a more thorough understanding and investigation of adopting the proprietary model for DNN watermarking.", "The source code and models are available at https://github.com/AntigoneRandy/PTYNet." ], [ "Introduction", "In the past decade, DNN has achieved tremendous success in many cutting-edge fields [35], such as autonomous driving [28], genomics [47].", "However, training powerful DNN models, especially the so-called foundation models [4], requires a large amount of valuable data and is computationally expensive.", "According to a reporthttps://venturebeat.com/ai/ai-machine-learning-openai-gpt-3-size-isnt-everything, the OpenAI costs more than $12 million for training GPT-3 [5].", "Thus, a well-trained DNN model has high value to the owner.", "Recently, some large companies like Google, Meta sell commercial high-value models to users for offering paid services, which is becoming a lucrative business.", "Unfortunately, the high-value well-trained DNN models have the potential threat to be stolen or extracted by adversaries through various unimaginable manners [17] and pose the threat of unauthorized distribution.", "Thus, effective countermeasures should be devised for the IP protection of DNN models.", "Recently, DNN watermarking is widely employed for the IP protection of DNN models [27], [3], [7], [24] by embedding designed watermarks into the target DNN model.", "The original idea of DNN watermarking borrows from the digital multimedia protection [34] to embed identification signals into the multimedia without introducing obvious visual quality degradations.", "In general, the parameter-embedding and data-poisoning are two mainstream watermarking schemes [14], [45], [25], [6].", "Noticeably, the parameter embedding watermarking scheme requires white-box access to the suspicious model which is not practical in the real-world scenario [42], [18].", "The data-poisoning watermarking scheme crafts a set of sample-label pairs (also called verification samples) to enforce the DNN model memorizing them via carefully model fine-tuning.", "Thus, the data-poisoning watermarking scheme is the most promising technique, which works in a black-box setting and 
extracts the embedded watermarks for ownership verification by querying the suspicious model only [2], [27].", "Specifically, the owner determines ownership by checking the consistency between the desired output labels of the verification samples and their actual output labels.", "Unfortunately, the existing widely adopted data-poisoning watermarking scheme suffers from the following two key challenges in the IP protection of DNN models in practice.", "Fidelity degradation via target model fine-tuning.", "The model fine-tuning inevitably updates the target model's parameters and introduces performance degradation to the model's original functionality, especially when tackling large real-world datasets (e.g., ImageNet) and challenging tasks that call for highly skilled fine-tuning [3], [39], [16].", "High time consumption and computational resource cost.", "In a real scenario, multiple DNN models work together to complete the deployment of a commercial application.", "However, the existing watermarking scheme involves target model fine-tuning for all the intentionally protected models, even if some of them share a similar architecture.", "To address the aforementioned key challenges, there have recently been some initial attempts to investigate unique fingerprints as a special kind of watermark for the IP protection of DNN models [23], [33], especially by exploring samples near the decision boundary, such as perturbing normal samples [20] and exploring out-of-distribution (OOD) samples [1].", "However, these unique fingerprints are not agnostic to diverse DNN models, which requires the owner to derive them anew for each intentionally protected target model.", "Moreover, the unique fingerprints can often be discovered by attackers and evaded via fine-tuning or sample preprocessing.", "In this paper, for the first time, we propose a novel DNN watermarking scheme that injects a proprietary model for watermark embedding in an efficient manner without sacrificing the fidelity of the target model.", "Specifically, our method is model-agnostic and works in black-box settings without obtaining any knowledge of the suspicious model during verification.", "Our novel watermarking scheme via a proprietary model is motivated by the software-engineering principle that modules should exhibit high cohesion and low coupling.", "Thus, we devise a proprietary model specifically for watermark embedding, without fine-tuning the target model to embed watermarks as in the prior studies [37], [43].", "We hope that our proprietary model is roused by watermark verification samples while keeping silent during benign sample prediction.", "Figure REF illustrates the comparison between the existing data-poisoning watermarking scheme and our proposed method.", "To comprehensively evaluate our proposed watermarking scheme, experiments are conducted on the challenging real-world ImageNet dataset with six popular DNN models and on the large-scale speaker identification dataset VoxCeleb1 with VGGVox for speaker recognition [31].", "Additionally, we evaluate the effectiveness on real-world production-level DNN models and state-of-the-art (SOTA) DNN backbones, such as ViT, as well as commercial models offered by three vendors (i.e., Amazon, Google, and Chooch) that provide online services.", "Experimental results show that our watermarking scheme preserves the model's functionality with nearly 100% confidence, which significantly outperforms baselines, 
gives 100% accuracy in ownership verification, and survives popular watermark removal attacks (e.g., model fine-tuning, network pruning, and input preprocessing) with competitive performance.", "Figure: An overview of the difference between the prior data-poisoning watermarking scheme and our proposed watermarking scheme, which injects a proprietary model for ownership verification.", "Watermarking by data poisoning requires fine-tuning the target model with sample-label pairs, which compromises the functionality of the target model and is limited to laboratory datasets like MNIST and CIFAR10.", "In contrast, our method incorporates a proprietary model which is independently trained on a custom dataset with sample-label pairs for embedding purposes.", "Being free from fine-tuning, our method shows potential for challenging real-world datasets (e.g., ImageNet) and recent popular vision transformer backbones.", "Our main contributions are summarized as follows: We introduce a novel watermarking scheme that incorporates a proprietary model for watermark embedding and ownership verification.", "In contrast to the prior data-poisoning watermarking scheme, our proposed method is free of target model fine-tuning, which shows potential in tackling real tasks with production-level models on challenging real-world datasets.", "We propose a generation-based method for crafting verification samples in a safe manner and conduct a comprehensive evaluation in terms of effectiveness, fidelity, robustness, and efficiency, for the first time, on the real-world ImageNet dataset and production-level DNN models.", "Extensive experimental results show its practicability in real scenarios and that it generalizes well both to the task of speaker recognition and to commercial DNN models.", "Our research findings imply a new research direction towards developing an independent proprietary model for watermark embedding by injecting it into the target model for IP protection, as opposed to fine-tuning the target model as in prior studies.", "The well-trained proprietary model can easily be incorporated into any DNN model without further modification.", "We systematize the existing DNN watermarking schemes into parameter-embedding and data-poisoning watermarking schemes based on whether the owner needs to access the suspicious model in ownership verification.", "The parameter-embedding watermarking scheme embeds watermarks into the target model's parameters [38], [18] or the activations of hidden layers [36], [30].", "[38] proposed embedding watermarks into the model parameters by using a parameter regularizer with a designed embedding loss.", "DeepSigns [36] proposed an end-to-end watermark embedding framework to embed watermarks into the activation maps of various layers.", "However, all of these parameter-embedding watermarking schemes require access to the model weights during verification (i.e., a white-box setting) and are thus not practical for real-world scenarios.", "The data-poisoning watermarking scheme crafts sample-label pairs as watermarks via model fine-tuning and verifies the watermarks by querying the model in the black-box setting [22], [29], [13].", "The samples can be generated by blending certain patterns (pattern-based), by perturbing normal samples (perturbation-based), or by drawing from other data sources, i.e., out-of-distribution data (OOD-based).", "For the pattern-based approach, [46] proposed a crafted watermark generation method by taking a subset of training images and adding 
meaningful content like a special string “TEST\" onto them.", "For the perturbation-based approach, [20] leveraged adversarial examples as watermarks to obtain samples near the decision boundary.", "For the OOD-based approach, [46] used a handwritten image of “1\" as the watermark in the CIFAR10 dataset and assigned it an “airplane\" label.", "For ownership verification, if the protected model recognizes the handwritten image “1\" as “airplane\", the owner can claim possession of the model.", "Some studies enhance the model by modifying the architecture [11] or a few weights [19] of the network for ownership verification or applicability authorization.", "[11] inserts passport layers into the model, such that the model performs badly when the passport weights are not present.", "[19] adjusts a few weights of the model to embed watermarks for ownership verification.", "Unfortunately, the paradigm of the prior data-poisoning watermarking scheme needs target model fine-tuning for later ownership verification, which suffers performance degradation in benign sample prediction and is time-consuming.", "To avoid these shortcomings, we develop a practical watermarking scheme by injecting a proprietary model into the target model in an efficient manner, for ownership verification and fidelity preservation purposes." ], [ "Watermark Removal Attack", "Other studies explore the vulnerabilities of DNN watermarking techniques and remove the watermarks by conducting model modification or input preprocessing.", "Model modification updates the model's parameters or modifies the model's architecture to remove the embedded watermarks [41], [26], [8], [32], e.g., via network pruning or model fine-tuning.", "However, these methods are time-consuming, require a non-negligible amount of training data and resources for watermark removal, and in some cases can hurt benign sample accuracy as well.", "Input preprocessing aims at corrupting the embedded watermark triggers at inference time by conducting various transformation techniques, such as input reconstruction, input smoothing, image scaling [14], and relighting [40].", "[14] proposed PST with a series of image transformation techniques (e.g., scaling, embedding random imperceptible patterns, and spatial-level transformations) to remove watermarks blindly.", "A very recent work [40] introduced naturalness-aware relighting perturbations to mask the embedded watermark triggers, which achieved SOTA performance in disrupting verification samples.", "Since input preprocessing techniques are usually watermark-scheme-agnostic, model-independent, and indifferent to training data, they pose the biggest threat to the survival of DNN watermarks." 
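To make the pattern-based trigger concrete, here is a minimal sketch in the spirit of [46] (our own illustration; the path, position, and fill color are arbitrary assumptions, not the authors' code):

from PIL import Image, ImageDraw

def stamp_pattern(img_path, text="TEST", pos=(8, 8)):
    # Overlay a meaningful string onto a training image; the resulting
    # sample-label pair is later memorized by the model via fine-tuning.
    img = Image.open(img_path).convert("RGB")
    ImageDraw.Draw(img).text(pos, text, fill=(255, 255, 255))
    return img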
], [ "Background on Watermarking", "Watermarking is a promising technique to protect the IP of DNN.", "For a typical data-poisoning watermarking scheme, it requires sample-label pairs $\\mathcal {K}$$=$$\\lbrace (x^n,y^n)\\rbrace ^{N}_{n=1}$ to enforce the model could remember them via fine-tuning.", "Then, sample-label pairs could be leveraged to query the suspicious model for ownership verification.", "Here, we present the threat model of DNN watermarking.", "Threat Model.", "Given a target model $\\mathcal {M}$ , the owner needs to protect the model $\\mathcal {M}$ to avoid illegal distribution, while the adversary may obtain a suspicious model $\\mathcal {M}^{s}$ under unauthorized distribution.", "Specifically, the owner could merely access the suspicious model $\\mathcal {M}^{s}$ by sending inputs (also known as verification samples) to verify the output for further checking whether $\\mathcal {M}^{s}$ is a copy of $\\mathcal {M}$ .", "However, the adversary may set a series of obstacles to prevent such ownership verification process.", "First, the model $\\mathcal {M}$ could be modified intentionally, like fine-tuning and model pruning.", "Second, the input could be preprocessed to destroy a certain pattern in verification samples.", "Thus, a practical watermarking scheme should be robust against these watermark removal attacks and functionality preserving in benign sample prediction." ], [ "Challenges to Practical Watermarking", "The existing studies mainly focus on how to improve the robustness of watermarking techniques to evade common watermark removal attacks.", "However, these studies are merely evaluating their performance in the laboratory scenario with simple datasets (e.g.", ", CIFAR10, CIFAR100) on small-scale DNN models, where another important problem on whether these techniques can generalize to large real-world datasets are not fully explored.", "Thus, how to bridge the gap between laboratory settings and real-world applications is critical for a practical watermarking scheme.", "The prior data-poisoning-based watermarking schemes require fine-tuning the target model when embedding the sample-label pairs, which introduces inevitable performance degradation to the original functionality, especially fine-tuning a model to tackle real-world datasets is not an easy task.", "Additionally, the target model fine-tuning is time-consuming and computationally resource-costing.", "In summary, for a practical watermarking scheme, it should better satisfy the following key requirements: ❶ tackling tasks on real-world challenging datasets well, ❷ functionality preserving with excellent fidelity of the target model, ❸ easy deployment on diverse DNN models with different architectures, ❹ robust against the common watermark removal attacks." 
], [ "Deep Insight", "As discussed above, the fine-tuning of the target model when embedding watermarks is the biggest obstacle to developing a practical watermarking technique deployed in a real-world scenario.", "Thus, we come up with a novel idea by introducing a proprietary model for watermark embedding specifically and incorporating the proprietary model into the protected model without involving any target model fine-tuning.", "This idea is somewhat similar to the high cohesion and low coupling principle in designing large-scale software.", "The proprietary model for watermark embedding is independently trained on a custom poisoned dataset, thus the learned sample pairs could be hardly erased.", "More importantly, we leverage the role of image background in object recognition [44] where the adversarial background could be served as semantic-based triggers to resist the various watermark removal attacks.", "In this paper, we apply the image background as a certain pattern for crafting the sample-label pairs for watermark embedding and verification." ], [ "Overview", "Figure REF illustrates proposed watermarking scheme via injecting a proprietary model and leveraging the image background as a kind of semantic-based trigger in watermark embedding.", "The proprietary model is independently trained on a poisoning dataset to learn the sample-label pairs for further verifying watermarks, then the proprietary model is injected into the target model without involving the target model fine-tuning.", "Motivated by the role of image background in object classification revealed in a recent study  [44] where the background could be used for fooling the classification model, in this work, the image background served as the trigger in embedding watermarks and hope that such semantic-based background could resist the watermark removal attacks in high confidence.", "Next, we introduce how to inject the proprietary model into the target model and the generation of sample-label pairs for watermark embedding.", "Figure: Illustration of our proposed method via injecting proprietary model for watermark verification.", "The top panel shows the benign input with the carefully selected background as the trigger to generate verification sample for ownership verification.", "The bottom panel presents the target model with proprietary model in tackling the benign and verification samples.", "left): the proprietary model keeps silent when receiving the benign input without compromising the fidelity of the target model.", "right): the corresponding neurons of the proprietary models are activated in tackling the samples for ownership verification.", "Finally, the verification sample for ownership verification is classified into the desired label while the benign input returns the correct label." 
], [ "Injecting the Proprietary Model", "To address the issues of target model fine-tuning when embedding watermarks, we designed a novel proprietary model, called PTYNet, for embedding watermarks specifically and activated in ownership verification when receiving verification samples.", "PTYNet selection.", "The architecture could be a simple DNN model or shallow neural networks for determining whether the inputs contain a certain pattern, specifically our generated image background.", "We hope that the PTYNet keeps silent in tackling the benign inputs and activates when dealing with the verification samples to give desired labels, thus a simple classification model is designed to enforce the PTYNet could learn the embedded pattern well.", "Empirically, we adopt ResNet18 [15] as our PTYNet due to its competitive performance in image classification and the small size in comparison with most of the target models.", "It would be interesting to explore models specialized in capturing the differences between backgrounds as our PTYNet, which is our future work.", "Injecting into the target model.", "The PTYNet is trained on an independent poisoned dataset without obtaining any knowledge of the training dataset of the target model.", "The poisoned dataset for training PTYNet consists of two parts.", "The first part is our generated background (see Section REF ) as a trigger pattern for crafting the sample-label pairs to verify watermarks further.", "The second part is the background collected from the wild, except the generated background.", "Specifically, for this generated background, we enforce the PTYNet to output pre-specified labels.", "Our independently trained PTYNet has the following strengths.", "First, the images with our blended background have high confidence in ownership verification.", "Second, the blended background as the trigger could be hardly corrupted, especially in evading input preprocessing [14], [40].", "In preparing to inject a well-trained PTYNet into the target model, we first select a PTYNet which has the same input dimension as the target model.", "Then, we combine the output of the target model and PTYNet.", "Let $\\mathcal {X}$ =$\\lbrace x,y\\rbrace ^T_{t=1}$ denotes the training data for training our target model $\\mathcal {M}_{target}$ , $\\mathcal {M}_{PTYNet}$ denotes the proprietary model for embedding watermarks, $y^{f}$ denotes the result vector which is determined by both $\\mathcal {M}_{target}$ and $\\mathcal {M}_{PTYNet}$ , $t$ and $p$ are the output dimensions of $\\mathcal {M}_{target}(x)$ and $\\mathcal {M}_{PTYNet}(x)$ when tackling an input $x$ .", "Specifically, in training our $\\mathcal {M}_{PTYNet}$ model, we select $t-1$ kinds of watermarks for embedding as opposite to the only one for the normal sample, to enforce that our $\\mathcal {M}_{PTYNet}$ could learn this well.", "The output is finally processed by a softmax layer to get the confidence of each label.", "The output of $y_{target}$ and $y_{PTYNet}$ are calculated as follows.", "$y_{target}=softmax(\\mathcal {M}_{target}(x))$ $y_{PTYNet}=softmax(\\mathcal {M}_{PTYNet}(x))$ Specifically, $y_{target}$ and $y_{PTYNet}$ are the probability vectors of $\\mathcal {M}_{target}$ and $\\mathcal {M}_{PTYNet}$ model, respectively.", "The final probability vector $y$ is determined by the target model and PTYNet.", "It can be described as follows.", "$y^{l}={\\left\\lbrace \\begin{array}{ll}\\ \\alpha y_{PTYNet}^{l}+y_{target}^{l},&l \\epsilon \\lbrace 0,1,2,...,t-2\\rbrace \\\\ \\ 
y_{target}^{l},&l \\epsilon \\lbrace t-1,t,...,p-1\\rbrace \\end{array}\\right.", "}$ where $\\alpha $ is a hyperparameter to adjust the influence of PTYNet, $y^l$ denotes the probability value on the $l$ dimension, $l$ is the maximum value of $p$ and $t$ ." ], [ "Generating Verification Samples", "To satisfy the requirement of the robustness of a practical watermarking scheme, we investigate the semantic-based pattern as the trigger which is more stealthy and robust against input distortion and model modification.", "Inspired by a recent study revealing that the background plays a key role in object recognition [44], we further explore the potential to achieve an adversarial attack against the classification model.", "Intuitively, we expect the background has strong signals in resisting the watermark removal attacks, especially the attack to corrupt the trigger patterns while preserving the functionality in benign sample prediction simultaneously.", "Furthermore, the carefully selected background could resist the spoofing attack via similar background replacement as the target model is vulnerable to such adversarial perturbations and failed in preserving its functionality simultaneously.", "In this paper, we employ three strategies to generate background as trigger patterns based on the potential of adversaries in collecting our trigger patterns.", "Figure REF visualizes the generation of verification samples by blending the selected background into the benign samples.", "Next, we will elaborate on them in detail.", "Figure: Visualization of the verification samples with three background generation strategies.", "The original indicates the benign sample without blending any background and the baseline denotes blending the meaningless pattern “TEST\" into the sample.", "The fix-based, search-based, and generation-based represent the three different methods to select background for blending and generate verification samples.", "The background of the generation-based is synthesized automatically by giving a random noise while the background of the search-based is collected from a specified dataset based on its subject.Fixed background.", "This is the most straightforward idea in selecting a trigger pattern which is a fixed background $c$ for any input, but exposing the potentials leaks the fixed pattern when the adversary collects enough inputs to infer $c$ effectively.", "Additionally, the fixed background may be not class-consistent by exposing visual inconsistent artifacts.", "Search-based background.", "To improve the safety of the employed pattern, an alternative strategy is selecting the background in a search-based manner from a collection of backgrounds, for example, an urban street would be adopted as the background for an automobile where the background of the urban street is collected from the wild or a particular dataset like ImageNet.", "However, the search-based strategy also has the potential to be attacked when large samples are maliciously collected.", "Generation-based background.", "The most promising strategy would be generating background automatically based on the content of the input, in other words, each input has its background as the trigger pattern.", "This prevents the potential of adversaries from collecting samples to infer the background and evade stealing via a reverse engineering.", "Specifically, we employ a generative model proposed in a recent study to generate background automatically by giving random noises [9].", "To generate convincing samples for a 
], [ "Generating Verification Samples", "To satisfy the robustness requirement of a practical watermarking scheme, we investigate a semantic-based pattern as the trigger, which is more stealthy and robust against input distortion and model modification.", "Inspired by a recent study revealing that the background plays a key role in object recognition [44], we further explore its potential for mounting an adversarial attack against the classification model.", "Intuitively, we expect the background to carry strong signals for resisting watermark removal attacks, especially attacks that corrupt the trigger patterns, while simultaneously preserving functionality in benign sample prediction.", "Furthermore, the carefully selected background could resist spoofing attacks via similar background replacement, as the target model is vulnerable to such adversarial perturbations and would fail to preserve its functionality.", "In this paper, we employ three strategies to generate backgrounds as trigger patterns, based on the adversary's potential for collecting our trigger patterns.", "Figure REF visualizes the generation of verification samples by blending the selected background into the benign samples.", "Next, we elaborate on them in detail.", "Figure: Visualization of the verification samples with three background generation strategies.", "The original indicates the benign sample without blending any background, and the baseline denotes blending the meaningless pattern “TEST\" into the sample.", "The fixed-background, search-based, and generation-based columns represent the three different methods to select a background for blending and generating verification samples.", "The background of the generation-based strategy is synthesized automatically from a random noise, while the background of the search-based strategy is collected from a specified dataset based on its subject.", "Fixed background.", "This is the most straightforward idea for selecting a trigger pattern: a fixed background $c$ for any input; however, it risks leaking the fixed pattern once the adversary collects enough inputs to infer $c$ effectively.", "Additionally, the fixed background may not be class-consistent, exposing visually inconsistent artifacts.", "Search-based background.", "To improve the safety of the employed pattern, an alternative strategy is to select the background in a search-based manner from a collection of backgrounds; for example, an urban street, collected from the wild or from a particular dataset like ImageNet, would be adopted as the background for an automobile.", "However, the search-based strategy also has the potential to be attacked when a large number of samples is maliciously collected.", "Generation-based background.", "The most promising strategy is to generate the background automatically based on the content of the input; in other words, each input has its own background as the trigger pattern.", "This prevents adversaries from collecting samples to infer the background and from stealing it via reverse engineering.", "Specifically, we employ a generative model proposed in a recent study to generate backgrounds automatically from random noises [9].", "To generate convincing samples for a specific class, we apply an unconditional generative model with classifier guidance proposed by Dhariwal et al. [9] to generate class-consistent backgrounds.", "Specifically, the generative model $G$ is trained on a trigger dataset $X_{background}$ to satisfy the following requirement: $G^{*}=\arg \min _{G} Div(P_{X_{background}},P_{G})$ where $Div(P_{X},P_{Y})$ denotes the divergence between distributions $P_{X}$ and $P_{Y}$ , which we minimize in training our generative model $G$ .", "Let $X_{v}=G(z)$ , where $X_{v}$ denotes the sample generated by the model $G$ and $z$ denotes the random noise.", "We want $P_{X_{background}}$ and $P_{G}$ to be as close as possible.", "Then, we train PTYNet with $X_{background}$ so that a background generated by $G(z)$ for a given random noise $z$ activates it as well.", "Finally, the verification sample can be generated by our $G$ from a given random noise $z$ ." ], [ "Experiments", "In this section, we introduce the experimental setup first; then we present the experimental results in terms of effectiveness in ownership verification, fidelity in functionality preserving, robustness against model fine-tuning, pruning, and input preprocessing, and comparison with two competitive baselines.", "Additionally, we conduct extensive experiments to evaluate the efficiency of watermark embedding compared with the prior study, the real application in protecting commercial DNN models, the effectiveness in generalizing to other tasks (e.g., speaker recognition), and ablation studies.", "The full experimental results are given in the appendix." ], [ "Experimental Setting", "Datasets and DNN models.", "In our experiments, we evaluate the performance of our method on two popular datasets, CIFAR100 and the challenging real-world dataset ImageNet.", "To perform a comprehensive evaluation, the watermark embedding scheme is evaluated on more than six popular DNN models, such as VGG, AlexNet, ResNet, and Inception.", "Additionally, to illustrate the effectiveness in tackling models deployed in real scenarios, our experiments also cover a recent vision transformer [10] and real-world commercial DNN models.", "Baselines.", "We employ two baselines for comparison.", "The first baseline is the pattern-based watermarking technique [46], used to explore fidelity in functionality preserving and effectiveness in ownership verification on ImageNet.", "The pattern-based watermarking technique is a data-poisoning watermarking scheme, which achieves the best performance in ownership verification in terms of effectiveness and robustness [27].", "We implement this baseline with a public DNN watermarking toolbox (https://github.com/dnn-security/Watermark-Robustness-Toolbox).", "The second baseline involves model modification by introducing a sign loss into the target model and injecting a passport layer for watermark verification [12].", "Implementation Details.", "In the experiments, we employ ResNet18 as the backbone of our PTYNet.", "Our method is not limited to ResNet18 and can easily be extended to any model, following the principle that the proprietary model should preferably be smaller than the target model.", "In training our PTYNet with the search-based strategy, the training dataset contains $5,000$ normal samples and $5,000$ watermark sample pairs.", "Specifically, the optimizer is Adam and the learning rate is $0.001$ .", "All experiments were performed on a 
server running Red Hat 4.8 system on an 80-core 2.50 GHz Xeon CPU with 187 GB RAM and four NVIDIA Tesla V100 GPUs with 32 GB memory for each." ], [ "Effectiveness Evaluation", "In evaluating the effectiveness of our proposed method, we mainly explore whether the functionality of the target model after injecting PTYNet has been compromised and investigate the performance in ownership verification.", "Specifically, our experiments are conducted on CIFAR100 and a challenging real-world dataset ImageNet.", "Firstly, we conduct an experiment on CIFAR100 to illustrate the effectiveness of our proposed method.", "All the pre-trained DNN models for CIFAR100 classification are collected from a public repositoryhttps://github.com/chenyaofo/pytorch-cifar-models.", "Experimental results in Table REF show that the average accuracy for classification on three raw target DNN models is 74.3% and the average accuracy gives 73.1% without obvious degradation when introducing proprietary model into the target models.", "In Table REF , the average accuracy is 9.5% in misclassifying the verification samples and gives an accuracy more than 82.3% in ownership verification which could be deployed in practice.", "In evaluating VGG19, our proposed method gives an accuracy 61% in ownership verification.", "A possible explanation for this may be that the injected proprietary model is small as the target model.", "The experimental results in Table REF demonstrate the effectiveness of our proposed method in ownership verification and the fidelity in benign sample prediction without introducing obvious degradation.", "Table: Performance of fidelity in benign sample prediction and effectiveness of ownership verification on CIFAR100 with three target models.", "Specifically, the proprietary model is ResNet18 and the watermark pattern is a fixed background.", "The column original represents the original target models.", "The column after-injection indicates the performance after injecting the proprietary model into the target model.To better demonstrate the strengths and scalability of our proposed method, we conduct extensive experiments on a real-world challenging dataset, ImageNet, with six popular DNN models.", "All the target DNN models are well pre-trained models provided by PyTorch library.", "Table REF presents the detailed experimental results.", "For the three different strategies in selecting background as the trigger, we can easily find that the average performance for benign sample classification has no degradations, which demonstrates that our proposed method satisfies the fidelity requirement on ImageNet dataset.", "In evaluating the effectiveness, the average accuracy for the three strategies are 99.5%, 100%, and 65.2%, respectively.", "The search-based strategy for background selection achieved the best performance in ownership verification, however, the fixed and generation-based strategy is not ideal as the employed backgrounds maybe not class-consistent and low quality in synthesis.", "Table: Fidelity and effectiveness evaluation on ImageNet.", "The proprietary model is also ResNet18.", "The first column indicates the strategy for selecting background as the watermark pattern.", "The definition of the column Ori.", "(short for original) and After-Inj.", "(short for After-Injection) is the same as in Table .In summary, our proposed method for DNN watermarking does not introduce extra performance degradation (only 1.6% average decline rate on CIFAR100 and 3.3% average decline rate on ImageNet) to the 
benign sample prediction and satisfies the fidelity requirement of a practical watermarking scheme well.", "Furthermore, the experimental results in Table REF and Table REF illustrate the effectiveness in ownership verification." ], [ "Evaluation on Robustness", "A practical watermarking scheme should be also robust against watermark removal attacks which aim to disrupt the embedded watermarks intentionally via model fine-tuning, pruning, and input preprocessing [27].", "Here, we conduct experiments in defending these three common types of watermark removal attacks.", "Fine-tuning.", "Figure REF (a) plots the robustness of our proposed method against the model fine-tuning attack on ImageNet.", "We observed that when applying the search-based strategy for selecting the background, our proposed method gives an accuracy of more than 94% on the five popular DNN models except for the Inception which reports an accuracy nearly 60% when we fine-tune the model by following the same setting in a prior study [1].", "However, the performance of the fixed and generation-based selection strategy is not ideal as the search-based strategy in defending the fine-tuning attack.", "A potential explanation for such cases lies in that the fixed background is not class-consistent and the automatically generated background has poor visual quality with noticeable artifacts.", "To conduct a comprehensive robustness evaluation, we explore the robustness of our method against transfer learning which is widely employed in the community [12].", "Specifically, the model is pre-trained on the challenging dataset ImageNet.", "Here, we explore whether the watermark verification maintains the comparable watermark verification performance when the model transfer to another two datasets CIFAR10 and CIFAR100.", "Specifically, we employ the RTLL fine-tuning strategy to complete the transfer learning on five DNN models except for the Inception model which accepts the size of input larger than 256*256.", "Table REF demonstrates that no degradation is introduced in our transfer learning in both the fixed and search-based strategy with 100% confidence.", "Table: Performance of robustness against the transfer-learning on two datasets CIFAR10 and CIFAR 100.", "The column Acc.", "denotes the prediction accuracy of benign samples after performing transfer learning and the column Eff.", "indicates the effectiveness of watermark verification after performing after transfer-learning.Figure: Pruning evaluationModel pruning.", "To evaluate the robustness against model pruning, we first explore the relationship between the ownership verification performance and the trend of model pruning rate in Figure REF (a).", "Figure REF (a) shows that our method gives an accuracy more than 74% when the pruning rate is $0.4$ while the accuracy is more than 93% when the pruning rate is less than $0.3$ , which demonstrate the robustness of our method against model pruning.", "Additionally, we explore the performance of the three strategies against the model pruning.", "Figure REF (b) illustrates that our search-based strategy also outperforms the other two strategies in evading model pruning with an average accuracy more than 98.6% over the six DNN models.", "Figure: Gaussian blurInput preprocessing.", "For the robustness evaluation against input preprocessing, we conduct experiments in terms of the common image transformations (e.g.", ", Gaussian Blur, image scaling, and input rotation [14]) and advanced adversarial relighting perturbation revealed in a 
recent study [40].", "Figure REF (b) plots the trend of effectiveness in tackling Gaussian blur.", "We can observe that the accuracy maintains 74.5% even though the kernel size is 19.", "Table REF shows that our proposed method achieved an average accuracy more than 40% when the size for image scaling size is 0.7.", "We also evaluate the robustness against input rotation, we find that the accuracy is more than 86.7% when the rotation degree is less than 20.", "Experimental results in Table REF show that the six popular DNN models could achieve competitive performance when the degree for input rotation is less than 20.", "Additionally, we also conduct experiments to evaluate the robustness against SOTA watermark removal attack via adversarial relighting perturbations revealed in a very recent study [40].", "Table REF shows that our proposed method could resist the adversarial relighting perturbations with an average performance decline less than 7.2% compared with the 60.9% decline rate against the existing watermarking schemes [40].", "This is because our proposed background-based verification triggers are semantic-aware, thus more difficult for the adversary to locate.", "Table: Performance in resisting the SOTA watermark removal attack via injecting adversarial relighting perturbations.", "The row Original denotes the average accuracy in watermark verification, the row Relighting represents the average accuracy in watermark verification after injecting relighting perturbations, and the row decline rate indicates the magnitude of performance degradation after injecting relighting perturbations.Table: The performance on input rotation and scaling, where the PTYNet is ResNet18, the strategy for background selection is search-based, and the evaluated dataset is ImageNet.In summary, experimental results demonstrated that our proposed method by employing the search-based strategy for selecting background is robust against model pruning and fine-tuning.", "However, in contrast to the Gaussian blur and adversarial relighting perturbations, our proposed is sensitive to the input preprocessing with input rotation and image scaling which could be enhanced by applying data augmentation in watermark embedding.", "It will be interesting to explore this in our future work." 
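For completeness, the pruning attack used in this robustness study can be sketched with PyTorch's pruning utilities (our own illustration, reusing the hypothetical verify_ownership helper sketched earlier):

import copy
import torch
import torch.nn.utils.prune as prune

def prune_and_check(model, verification_pairs, rate=0.3):
    # Globally prune `rate` of conv/linear weights by L1 magnitude,
    # then re-run the black-box verification on the attacked copy.
    attacked = copy.deepcopy(model)
    params = [(m, "weight") for m in attacked.modules()
              if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured, amount=rate)
    return verify_ownership(attacked, verification_pairs)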
], [ "Comparison with Baselines", "Evaluation on fidelity and effectiveness.", "Table REF shows the experimental results of the first baseline via pattern-based watermarking technique [46] when watermarking six popular DNN models on challenging dataset, ImageNet.", "We can observe that the performance on normal inputs has decreased more than 41%, in comparison with our proposed method with nearly 0 degradation.", "The experimental results illustrate that the baseline failed in satisfying the requirement of functionality preserving of a practical watermarking scheme.", "Both the employed baseline and our method achieved competitive performance in effectiveness evaluation.", "Table: Performance of functionality preserving and effectiveness of ownership verification on ImageNet with six target models by using the pattern-based watermarking scheme.", "The definition of the column original and After-injection is the same as in Table .Table REF presents the experimental results of the second baseline [12] in terms of the fidelity and effectiveness evaluation.", "In the second baseline, it injects a passport layer for ownership verification which could work in both the white-box and black-box setting.", "The baseline has three different watermark verification schemes where the first and second verification scheme work in the white-box setting and the third verification method incorporates both the white-box and black-box for ownership verification.", "Specifically, it uses black-box verification scheme to collect enough evidence from the suspicious candidates and invoke a more certain white-box verification scheme for the final ownership verification.", "Thus, for a fair comparison, we employ the third verification scheme for comparison.", "Experimental results demonstrated that even in the perfect white-box setting, the performance of the fidelity preserving is worse than ours.", "Specifically, the decline rate of the second baseline is 11.3% from 0.563 to 0.499 while our decline rate by employing the search-based generation method is merely 0.18% from 0.542 to 0.541.", "Table: Performance of functionality preserving and effectiveness of ownership verification on ImageNet with the second baseline.", "The definition of the column original and After-injection is the same as in Table .Evaluation on Robustness.", "We conduct experiments on evaluating the robustness of the first baseline [46] against the three types of watermark removal attacks.", "In experiments, we follow the same experimental setting as the evaluation of our proposed method.", "The target model is ResNet18.", "For the input preprocessing, the baseline gives an accuracy less than 14.3% for input rotation when the degree is 20, 48.6% for Gaussian blur when the kernel size is 15, 91.3% for the image scaling.", "For the model pruning and fine-tuning, the baseline all failed in ownership verification when the pruning rate is 0.3 and the last layer is fine-tuned.", "We can find that our proposed method significantly outperforms the baseline in all three watermark removal attacks except the image scaling.", "We carefully check this and find that the trigger pattern of the baseline has been magnified when applying random scaling and provides a clear signal for recognition.", "Figure REF presents the experimental results of the second baseline in evaluating its robustness against the input preprocessing.", "The baseline gives an accuracy less than $25\\%$ for the input rotation when the degree is 30, $25\\%$ for the Gaussian blur when the 
kernel size is 15, $14\\%$ for the image scaling where the scale is from 0.08 to 1.0 and the ratio is from 0.75 to 1.33.", "Experimental results also illustrated that our proposed method outperforms the second baseline in resisting the watermark removal attacks.", "Figure: Gaussian blur" ], [ "Conclusion", "In this paper, we propose a novel watermarking scheme for DNN models by injecting a proprietary model for ownership verification to address the limitations of the existing data-poisoning watermarking scheme via model fine-tuning in tackling the real-world applications, like the challenging dataset ImageNet and deployment on production-level DNN models.", "A comprehensive evaluation on real-world scenarios demonstrates the strengths in the following aspects, the fidelity in functionality preserving, the effectiveness in watermark verification, and the robustness against three common types of watermark removal attacks.", "More importantly, our novel method poses a totally new insight and shows promising potential for developing practical watermarking schemes in tackling real-world tasks with complicated production-level DNN models.", "These large and complicated models require enough patient for embedding watermarks via a data-poisoning manner, which could be a work prepared for artists.", "In our future work, we will investigate how to incorporate the proprietary model for ownership verification into more real-world scenarios, like reinforcement learning for watermarking DNN models rather than the classification model merely in the existing studies." ], [ "Efficiency Evaluation", "In experiments, we also investigate whether our proposed play-and-plug watermarking scheme could significantly reduce the time-consuming in tackling with multiple DNN models.", "Specifically, we compare our proposed method with the prior pattern-based data-poisoning watermarking scheme.", "The experiments are conducted on two popular datasets CIFAR10 and CFAIR100 to calculate the total time-costing when the watermarking verification reaches $100\\%$ via the verification samples.", "Figure REF shows the comparison results of our method and the prior data-poisoning watermarking scheme.", "The two methods are evaluated on two datasets continuously with five different DNN models (e.g.", ", AlexNet, DenseNet, SqueezeNet, ResNet18, and VGG16).", "Firstly, the watermarks are embedded on CIFAR10 on the left part in Figure REF .", "Then, the watermarks are embedded on CIFAR100 on the right part in Figure REF .", "The prior data-poisoning watermarking scheme needs to refine-tuning the target model for watermarking embedding in tackling each DNN model, thus the time-costing increased in dealing with multiple DNN models on different datasets.", "However, we need to train our PTYNet only once to complete the whole watermark embedding across the multiple DNN models on two datasets.", "Experimental results in Figure REF illustrated that our method significantly outperforms the prior watermarking scheme in time-costing with less than $10^3$ s to achieve the watermarking embedding on two datasets with a total of 10 DNN models compared with more than $50\\times 10^3$ s time-costing of the prior study.", "Figure: Time-consuming in embedding watermarks in comparison with the existing data poisoning watermarking scheme on two popular datasets." 
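To make the efficiency argument concrete, the following is a minimal Python sketch (our own illustration; fine_tune and train_ptynet are hypothetical helpers, not code from this work) contrasting the linear cost of per-model data-poisoning embedding with the one-off training of PTYNet:

```python
import time

def total_embedding_time(models, fine_tune, train_ptynet):
    # Prior data-poisoning scheme: one fine-tuning run per target model,
    # so the total cost grows linearly with the number of models/datasets.
    t0 = time.perf_counter()
    for model in models:
        fine_tune(model)
    poisoning_cost = time.perf_counter() - t0

    # Plug-and-play scheme: PTYNet is trained once, then injected into
    # every target model without modifying their weights.
    t0 = time.perf_counter()
    ptynet = train_ptynet()
    watermarked = [(model, ptynet) for model in models]
    ours_cost = time.perf_counter() - t0
    return poisoning_cost, ours_cost, watermarked
```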
], [ "Evaluation on Real-world DNN Models", "Evaluation on the SOTA vision backbone models.", "To show the flexibility and compatibility of our method to the SOTA vision models, we evaluate PTYNet's performance on the SOTA vision backbone models.", "We employ a very recent work MPViT [21] based on multi-path vision transformer [10] (ViT) whose architecture differs from conventional CNNs.", "For a comprehensive evaluation, we implemented all 4 types of backbones (i.e., Tiny (T), XSmall (XS), Small (S), and Base (B)) suggested in the original paper on real-world challenging dataset ImageNet.", "The results in Table REF demonstrate that our proposed PTYNet can cooperate well with the SOTA backbone models, with an average drop in fidelity less than 0.1% and an impressive effectiveness of nearly 100%.", "Table: The performance on the SOTA vision backbone models with four kinds of models on real-world dataset ImageNet.", "The definition of the column original and After-Injection is the same as in Table .Evaluation on real-world commercial DNN models.", "We evaluate PTYNet's performance on real-world commercial DNN models.", "We adopt three commercial platforms for broad assessment, i.e., Amazon Rekognitionhttps://aws.amazon.com/rekognition, Google Could Vision APIhttps://cloud.google.com/vision/docs/labels, and Choochhttps://app.chooch.ai.", "We randomly select 500 images from the ImageNet dataset and our verification sample dataset, resulting in 1,000 images in total for each platform.", "Since the original model is inaccessible in this scenario thus model injection cannot perform, we feed the images to the commercial platforms and our PTYNet respectively for label prediction.", "Finally, we compare the confidence of the Top-1 prediction of each model.", "Nevertheless, one must note that our PTYNet remains silence in benign samples with a confidence of 0% and a very high confidence ($\\sim $ 100%) in prediction verification samples, while commercial models give confidences around 20% $\\sim $ 90%.", "That is said, if we choose the Top-1 confidence of both networks as the final prediction, our proposed PTYNet achieves a fidelity degradation of 0% and an effectiveness of 100%." 
], [ "Evaluation on Speaker Recognition", "In experiments, we also investigate whether our proposed watermarking scheme could be generalized beyond image classification.", "Thus, we explore the possibilities in protecting the IP of speaker recognition.", "Methodology.", "We employ the popular VGGVox as our protected speaker identification modelhttps://github.com/Derpimort/VGGVox-PyTorch on VoxCeleb1 dataset.", "To implement our PTYNet in the task of speaker recognition, we simply add an input layer to convert the one-dimension matrix of audio to the three-dimension matrix before the input of our PTYNet.", "Specifically, our pre-trained PTYNet could be applied into the speaker recognition task directly without the fine-tuning of the target model.", "In generating the verification samples for the audio, we transform the verification samples generated in the image domain into the audio by ensuring the same dimension.", "Experimental results.", "Table REF shows the results of fidelity and effectiveness of our method in IP protection of speaker recognition in VoxCeleb.", "Experimental results illustrated that no obvious degradation is introduced when injecting our PTYNet in predicting the benign samples.", "Both of them have achieved the accuracy 84.2% in prediction.", "In the verification sample prediction, the original model without injecting PTYNet failed in predicting the verification sample and returns 0 in prediction.", "However, the watermarked model with our PTYNet gives an accuracy of more than 92% in ownership verification with verification samples.", "The experimental results in Table REF demonstrated the effectiveness in ownership verification and functionality preservation in benign sample prediction.", "Table: Performance of functionality preserving and effectiveness of ownership verification on VoxCeleb.", "The column original represents the original target models.", "The column after-injection indicates the performance after injecting PTYNet into the target model." ], [ "Ablation Study", "In experiments, we explore the impact of parameter $\\alpha $ which controls the importance of PTYNet in determining the final results.", "Experimental results in Table REF show that our proprietary model plays a key role in ownership verification.", "The accuracy for ownership verification is less than 40% when the value for $\\alpha $ is 0.75, while gives an accuracy of nearly 100% when the value is 1.0.", "Table: The relation of parameter α\\alpha in determining the final results for ownership verification.", "The strategy for selecting background as trigger pattern is search-based.", "The target model is Inception and the proprietary model is ResNet18." 
], [ "Discussion", "Our method achieves competitive performance in terms of the functionality preserving, effectiveness, and robustness on challenging dataset ImageNet.", "Extensive experimental results on real-world DNN models also demonstrated the potential application of our proposed method deployed in real scenario.", "However, there are also some limitations of our proposed method.", "The fixed and generation-based strategy for selecting background as trigger pattern are not as ideal as the search-based strategy.", "The main reason lies in that the fixed strategy failed in satisfying the class-consistent requirement while the generation-based is limited by the quality of synthesized background images which could be mitigated by employing advanced generative models.", "Additionally, our method is sensitive to the removal attack by employing input preprocessing, especially the input rotation and image scaling.", "We can apply data augmentation in the PTYNet training to enhance the robustness against such input preprocessing.", "This reminds us that such removal attack without involving model modification is more practical which calls for more effective defense approaches in IP protection as unseen attacks will emerge inadvertently." ] ]
2210.07809
[ [ "Automatic Differentiation for ML-family languages: correctness via\n logical relations" ], [ "Abstract We give a simple, direct and reusable logical relations technique for languages with recursive features and partially defined differentiable functions.", "We do so by working out the case of Automatic Differentiation (AD) correctness: namely, we present a proof of the dual numbers style AD macro correctness for realistic functional languages in the ML-family.", "We also show how this macro provides us with correct forward- and reverse-mode AD.", "The starting point was to interpret a functional programming language in a suitable freely generated categorical structure.", "In this setting, by the universal property of the syntactic categorical structure, the dual numbers AD macro and the basic $\\omega$-cpo-semantics arise as structure preserving functors.", "The proof follows, then, by a novel logical relations argument.", "The key to much of our contribution is a powerful monadic logical relations technique for term recursion and recursive types.", "It provides us with a semantic correctness proof based on a simple approach for denotational semantics, making use only of the very basic concrete model of $\\omega$-cpos." ], [ "AD and the PL community", "Automatic differentiation (AD) is a popular technique for computing derivatives of functions implemented by a piece of code, particularly when efficiency, scaling to high dimensions and numerical stability are important.", "It has been studied in the scientific computing community for many decades and has been heavily used in machine learning for the last decade.", "In the last years, the programming languages (PL) community has turned towards studying AD from a new perspective.", "Much progress has been made towards the following goals are to give a formulation of (forward and) reverse mode AD that is simple and purely functional; scales to the expressive ML-family functional languages that are popular in practice; admits a simple correctness proof that shows AD computes the derivative; provably has the correct asymptotic complexity and is performant in practice; is parallelism preserving.", "In this paper, we present a simple solution to problems (REF )-(REF ), our first major contribution.", "We give a proof of the correctness of the reverse and forward mode dual numbers style Automatic Differentiation (AD) in a semantically unified way, making use only of the very simple concrete denotational model of $\\omega $ -cpos.", "A key challenge in achieving the correctness proofs of this paper is to have sufficiently strong categorical logical relations techniques for reasoning about partially defined differentiable functions, and recursive types.", "To that end, we develop a novel monadic logical relations construction making no use of sheaf-theoretical methods as well as a novel general logical relations technique for recursive types, our second major contribution.", "We refer to the companion paper [37] for a performant implementation of the dual numbers reverse-mode AD technique proved correct in the present paper.", "It shows that it efficiently differentiates most of Haskell98, contributing towards point (REF ).", "We are currently pursuing parallelism preservation (point (REF )) for this AD technique and we plan to present it in future work.", "In our work, we ensure to keep all constructions sufficiently simple such that they can easily be generalized to more advanced AD algorithms such as CHAD [44], [45], [26], which is one of our 
key motivations for this work.", "Given the central role that AD plays in modern scientific computing and machine learning, the ideal of differential programming has been emerging [29], [34]: compilers for general purpose programming languages should provide built-in support for automatic differentiation of any programs written in the language.", "Such general purpose programming languages tend to include many language features, however, which we then need to be able to differentiate.", "What a correct and efficient notion of derivative is of such features might not be so straightforward as they often go beyond what is studied in traditional calculus.", "In this paper we focus on the challenge posed, in particular, by partial language features: partial primitive operations, lazy conditionals on real numbers, iteration, recursion and recursive types.", "Partial primitive operations are certainly key.", "Indeed, even the basic operations of division and logarithm are examples.", "(Lazy) conditionals on real numbers are useful in practice for pasting together various existing smooth functions, as basic example being the ReLU function $ReLU(x) \\stackrel{\\mathrm {def}}{=}\\mathbf {if}\\,x\\,\\mathbf {then}\\,0\\,\\mathbf {else}\\,x\\,=\\mathbf {case}\\,(\\mathbf {sign}\\,\\,x)\\,\\mathbf {of}\\,\\lbrace \\mathbf {inl}\\,\\_\\rightarrow 0\\mid \\mathbf {inr}\\,\\_\\rightarrow x\\rbrace ,$ which is a key component of many neural networks.", "They are also frequently used in probabilistic programming to paste together density functions of different distributions [4].", "People have long studied the subtle issue of how one should algorithmically differentiate such functions with “kinks” under the name of the if-problem in automatic differentiation [3].", "Our solution is the one also employed by [1]: to treat the functions as semantically undefined at their kinks (at $x=0$ in the case of $ReLU(x)$ ).", "This is justified given how coarse the semantic treatment of floating point numbers as real numbers is already.", "Our semantics based on partial functions defined on real numbers is sufficient to prove many high-level correctness properties.", "However, like any semantics based on real numbers, it fails to capture many of the low-level subtleties introduced by the floating point implementation.", "Our key insight that we use to prove correctness of AD of partial programs is to construct a suitable lifting of the partiality monad to a variant of [19]'s category of $\\mathbb {R}^k$ -indexed logical relations used to relate programs to their derivatives.", "This particular monad lifting for derivatives of partial functions can be seen as our solution to the if-problem in AD.", "Similarly, iteration constructs, or while-loops, are necessary for implementing iterative algorithms with dynamic stopping criteria.", "Such algorithms are frequently used in programs that AD is applied to.", "For example, AD is applied to iterative differential equation solvers to perform Bayesian inference in SIR models.", "This technique played a key role in modelling the Covid19-pandemic [14].", "For similar reasons, AD through iterative differential equation solvers is important for probabilistic modelling of pharmacokinetics [42].", "Other common use-cases of iterative algorithms that need to be AD'ed are eigen-decompositions and algebraic equation solvers, such as those employed in Stan [7].", "Finally, iteration gives a convenient way of achieving numerically stable approximations to complex functions (such as the 
Conway-Maxwell-Poisson density function [17]).", "The idea is to construct, using iteration, a Taylor approximation that terminates once the next term in the series causes floating-point underflow.", "Indeed, for a function whose $i$ -th terms in the Taylor expansion can be represented by a program $i : \\mathbf {int}, x : \\vdash t(i, x) : ,$ we would define the underflow-truncated Taylor series by $\\mathbf {iterate}\\,\\Big (\\begin{array}{l}\\mathbf {case}\\, x\\,\\mathbf {of}\\,\\langle x_1, x_2\\rangle \\rightarrow \\mathbf {let}\\,y=\\,t(x_1, x_2)\\,\\mathbf {in}\\,\\\\\\mathbf {case}\\,-c < y < c\\,\\mathbf {of}\\,\\lbrace \\mathbf {inl}\\,\\_ \\rightarrow \\mathbf {inr}\\,x_2\\mid \\mathbf {inr}\\,\\_\\rightarrow \\mathbf {inl}\\,\\langle x_1 + 1, x_2 +y\\rangle \\rbrace )\\end{array}\\Big )\\,\\mathbf {from}\\,x=\\langle 0, 0\\rangle ,$ where $c$ is a cut-off for floating-point underflow.", "Next, recursive neural networks [41] are often mentioned as a use case of AD applied to recursive programs.", "While basic Child-Sum Tree-LSTMs can also be implemented with primitive recursion (a fold) over an inductively defined tree (which can be defined as a recursive type), there are other related models such as Top-Down-Tree-LSTMs that require an iterative or general recursive approach [47].", "In fact, [20] has shown that a recursive approach is preferable as it better exposes the available parallelism in the model.", "In Appendix , we show some Haskell code for the recursive neural network of [39], to give an idea of how iteration and recursive types (in the form of inductive types of labelled trees) naturally arise in a functional implementation of such neural net architectures.", "We imagine that many more applications of AD applied to recursive programs with naturally emerge as the technique made available to machine learning researchers and engineers.", "Finally, we speculate that coinductive types like streams of real numbers, which can be encoded using recursive types as $\\mu \\alpha .\\mathbf {1}\\rightarrow (* \\alpha )$ , provide a useful API for on-line machine learning applications [36], where data is processed in real time as it becomes available.", "Recursion and more notably recursive types introduce one final challenge into the correctness proof of AD of such expressive functional programs: the required logical relations arguments are notoriously technical, limiting the audience of any work using them and frustrating application to more complicated AD algorithms like CHAD.", "To mend this problem, we introduce a novel, simple but powerful logical relations technique for open semantic logical relations for recursive types.", "5" ], [ "Key ideas", "In this paper, we consider how to perform forward and reverse mode numbers automatic differentiation on a functional language with expressive partial features, by using a dual numbers technique." 
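Before the formal development, the dual numbers idea can be illustrated with a minimal Python sketch (ours, not the paper's implementation [37]): a value is paired with a tangent, primitive operations propagate tangents via their partial derivatives, and sign (hence ReLU) is partial at its kink.

```python
class Dual:
    # D(real) = real x real: a primal value paired with a tangent.
    def __init__(self, x, dx=0.0):
        self.x, self.dx = x, dx
    def __add__(self, other):
        # the partial derivatives of (+) are both 1
        return Dual(self.x + other.x, self.dx + other.dx)
    def __mul__(self, other):
        # D(op) pairs op(x1, x2) with x1' * d1(op) + x2' * d2(op);
        # for (*) we have d1(*)(x1, x2) = x2 and d2(*)(x1, x2) = x1.
        return Dual(self.x * other.x, self.dx * other.x + self.x * other.dx)

def sign(r):
    # Partial at 0, matching the semantics: undefined at the kink.
    if r.x == 0.0:
        raise ValueError("sign undefined at 0")
    return r.x > 0

def relu(r):
    # ReLU(x) = if x then 0 else x: 0 on the negative branch, x otherwise.
    return r if sign(r) else Dual(0.0, 0.0)

x = Dual(3.0, 1.0)        # seed tangent 1.0 to differentiate w.r.t. x
print(relu(x * x).dx)     # 6.0, the derivative of ReLU(x^2) at x = 3
# relu(Dual(0.0, 1.0)) raises: the derivative at the kink is undefined.
```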
], [ "Language", "We consider an idealised functional language with product types ${\\tau }\\,{\\mathop {\\times }}\\,{\\sigma }$ , sum types ${\\tau }\\,{\\mathop {\\sqcup }}\\,{\\sigma }$ , function types ${\\tau }\\rightarrow {\\sigma }$ generated by a primitive type $$ of real numbers (in practice, implemented as floating point numbers); constants $\\vdash \\underline{c}:$ for $c\\in \\mathbb {R}$ ; sets $(\\mathrm {Op}_n)_{n\\in \\mathbb {N}}$ of $n$ -ary primitive operations $\\mathrm {op} $ , for which we include computations ${x}_1:,\\ldots , {x}_n:\\vdash \\mathrm {op} ({x}_1,\\ldots ,{x}_n):$ ; we think of these as implementing partial functions $\\mathbb {R}^n\\rightharpoonup \\mathbb {R}$ with open domain of definition, on which they are differentiable; for example, we can include mathematical operations $\\log ,\\exp \\in \\mathrm {Op}_1$ and $(+),(*),(/)\\in \\mathrm {Op}_2$ ; a construct ${x}:\\vdash \\mathbf {sign}\\,({x}):\\mathbf {1}\\,{\\mathop {\\sqcup }}\\,\\mathbf {1}$ that computes the sign of a real number and is undefined at $\\underline{0}$ ; we can use it to define a lazy conditional on real numbers $\\mathbf {if}\\,{r}\\,\\mathbf {then}\\,{t}\\,\\mathbf {else}\\,{s}\\,\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,\\mathbf {sign}\\,{r}\\,\\mathbf {of}\\,\\lbrace {\\_\\rightarrow {{t}}\\mathrel {\\big \\vert }\\_\\rightarrow {{r}}}\\rbrace $ of the kind that is often used in AD libraries like Stan [7].", "Next, we include two more standard mechanisms for defining partial functions: (purely functional) iteration: given a computation $\\Gamma , {x} : {\\tau } \\vdash {t} : {\\tau }\\,{\\mathop {\\sqcup }}\\,{\\sigma }$ to iterate and a starting value $\\Gamma \\vdash {s} : {\\tau }$ , we have a computation $\\mathbf {iterate}\\,{t}\\,\\mathbf {from}\\,{x}={s} : {\\sigma }$ which repeatedly calls ${t}$ , starting from the value of ${s}$ until the result lies in ${\\sigma }$ ; recursion: given a computation $\\Gamma ,{x}:{\\tau }\\rightarrow {\\sigma }\\vdash {t}:{\\tau }\\rightarrow {\\sigma }$ , we have a program $\\Gamma \\vdash \\mu {x}.", "{t}:{\\tau }\\rightarrow {\\sigma }$ that recursively computes to $\\mathbf {let}\\,{x}=\\,\\mu {x}.", "{t}\\,\\mathbf {in}\\,{t}$ .", "Let us assume that we have programs $\\partial _i\\mathrm {op} ({x}_1,\\ldots ,{x}_n)$ that compute the $i$ -th partial derivative of each $n$ -ary primitive operation $\\mathrm {op} $ .", "For example, we can define $\\partial _1(*)({x}_1,{x}_2)={x}_2$ and $\\partial _2(*)({x}_1,{x}_2)={x}_1$ .", "Then, we can define a very straightforward forward mode AD code transformation $\\scalebox {0.8}{\\mathcal {D}}_{}$ by replacing all primitive types $$ by a pair $\\scalebox {0.8}{\\mathcal {D}}_{}()\\stackrel{\\mathrm {def}}{=}\\,{\\mathop {\\times }}\\,$ of reals and by replacing all constants $\\underline{c}$ , $n$ -ary primitive operations $\\mathrm {op} $ and sign function $\\mathbf {sign}\\,$ in the program asActually, while our definition for $\\scalebox {0.8}{\\mathcal {D}}_{}(\\mathbf {sign}\\,{{r}})$ given here is correct, there exist more efficient implementation techniques, as we discuss in Appx.", ".", "$\\begin{array}{ll}\\scalebox {0.8}{\\mathcal {D}}_{}(\\underline{c}) \\stackrel{\\mathrm {def}}{=}& \\langle \\underline{c}, \\underline{0}\\rangle \\\\\\scalebox {0.8}{\\mathcal {D}}_{}(\\mathrm {op} ({r}_1,\\ldots ,{r}_n))\\stackrel{\\mathrm {def}}{=}~&\\mathbf {case}\\,\\scalebox {0.8}{\\mathcal {D}}_{}({r}_1)\\,\\mathbf {of}\\,\\langle {x}_1, {x}_1^{\\prime }\\rangle \\rightarrow 
\\ldots \\rightarrow \\mathbf {case}\\,\\scalebox {0.8}{\\mathcal {D}}_{}({r}_n)\\,\\mathbf {of}\\,\\langle {x}_n, {x}_n^{\\prime }\\rangle \\rightarrow \\\\&{\\langle \\mathrm {op} ({x}_1,\\ldots ,{x}_n), {x}_1^{\\prime } *\\partial _1\\mathrm {op} ({x}_1,\\ldots ,{x}_n)+\\ldots +{x}_n^{\\prime } *\\partial _n\\mathrm {op} ({x}_1,\\ldots ,{x}_n)\\rangle }\\\\\\scalebox {0.8}{\\mathcal {D}}_{}(\\mathbf {sign}\\,{{r}})\\stackrel{\\mathrm {def}}{=}& \\mathbf {sign}\\,{(\\mathbf {fst}\\,\\scalebox {0.8}{\\mathcal {D}}_{}({r}))}.\\end{array}$ We extend $\\scalebox {0.8}{\\mathcal {D}}_{}$ to all other types and programs in the unique homomorphic (structure preserving way), by using structural recursion.", "So, for example, $\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau }\\rightarrow {\\sigma })\\stackrel{\\mathrm {def}}{=}\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau })\\rightarrow \\scalebox {0.8}{\\mathcal {D}}_{}({\\sigma })$ , $\\scalebox {0.8}{\\mathcal {D}}_{}({x})\\stackrel{\\mathrm {def}}{=}{x}$ , $\\scalebox {0.8}{\\mathcal {D}}_{}(\\mathbf {let}\\,{x}=\\,{t}\\,\\mathbf {in}\\,{s})=\\mathbf {let}\\,{x}=\\,\\scalebox {0.8}{\\mathcal {D}}_{}({t})\\,\\mathbf {in}\\,\\scalebox {0.8}{\\mathcal {D}}_{}({s})$ and $\\scalebox {0.8}{\\mathcal {D}}_{}({t}\\,{s})=\\scalebox {0.8}{\\mathcal {D}}_{}({t})\\,\\scalebox {0.8}{\\mathcal {D}}_{}({s})$ .", "We like to think of $\\scalebox {0.8}{\\mathcal {D}}_{}$ as a structure preserving functor $\\scalebox {0.8}{\\mathcal {D}}_{}:\\mathbf {Syn}\\rightarrow \\mathbf {Syn}$ on the syntax.", "To formulate correctness of the AD transformation $\\scalebox {0.8}{\\mathcal {D}}_{}$ , we need to assign a formal denotational semantics $[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] $ to our language.", "We use the standard interpretation of types ${\\tau }$ as $\\omega $ -cpos $[\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] $ (partially ordered sets with suprema of countable chains) and programs ${x}_1:{\\tau }_1,\\ldots ,{x}_n:{\\tau }_n\\vdash {t}:{\\sigma }$ as monotone $\\omega $ -continuous partial functions $[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] :[\\hspace{-2.5pt}[{\\tau }_1]\\hspace{-2.5pt}] \\times \\cdots \\times [\\hspace{-2.5pt}[{\\tau }_n]\\hspace{-2.5pt}] \\rightharpoonup [\\hspace{-2.5pt}[{\\sigma }]\\hspace{-2.5pt}] $ .", "We interpret $$ as the flat $\\omega $ -cpo $[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\stackrel{\\mathrm {def}}{=}\\mathbb {R}$ of real numbers, ${\\underline{c}}$ as the constant $[\\hspace{-2.5pt}[\\underline{c}]\\hspace{-2.5pt}] \\stackrel{\\mathrm {def}}{=}c\\in \\mathbb {R}$ , $\\mathrm {op} $ as the partial differentiable function $[\\hspace{-2.5pt}[\\mathrm {op} ({x}_1,\\ldots ,{x}_n)]\\hspace{-2.5pt}] :\\mathbb {R}^n\\rightharpoonup \\mathbb {R}$ that it is intended to implement and $\\mathbf {sign}\\,$ as the partial function $[\\hspace{-2.5pt}[\\mathbf {sign}\\,({x})]\\hspace{-2.5pt}] :\\mathbb {R}\\rightharpoonup \\mathbf {1}\\sqcup \\mathbf {1}$ that sends $r<0$ to the left copy of $\\mathbf {1}$ , $r>0$ to the right copy and is undefined for $r=0$ .", "Having fixed these definitions, the rest of the semantics is entirely compositional and standard.", "In particular, we interpret iteration and recursion using Kleene's Fixpoint Theorem.", "We think of this semantics as a structure preserving functor $[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] :\\mathbf {Syn}\\rightarrow \\mathbf {\\omega Cpo}$ from the syntax to the category of $\\omega $ -cpos and monotone $\\omega $ -continuous functions.", "Having defined a semantics, we can phrase what 
it means for $\\scalebox {0.8}{\\mathcal {D}}_{}$ to be correct.", "We prove the following, showing that $\\scalebox {0.8}{\\mathcal {D}}_{}({t})$ implements the usual calculus derivative $D[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] $ of $[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] $ .", "[Forward AD Correctness, Theorem with $k=1$ in main text] For any program ${x}:{\\tau }\\vdash {t}:{\\sigma }$ for ${\\tau }=^{k},{\\sigma }=^l$ (where we write $^n$ for the type $\\,{\\mathop {\\times }}\\,\\cdots \\,{\\mathop {\\times }}\\,$ of length $n$ tuples of reals), we have that $&[\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}({t})]\\hspace{-2.5pt}] ((x_{1},v_{1}),\\ldots ,(x_{k},v_{k}))=\\\\&\\Big (\\pi _1([\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] (x_1,\\ldots ,x_k)), \\pi _l(D[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] ((x_{1},\\ldots ,x_k),(v_{1},\\ldots ,v_k))),\\ldots ,\\\\&\\;\\;\\pi _l([\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] (x_1,\\ldots ,x_k)), \\pi _l(D[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] ((x_{1},\\ldots ,x_k),(v_{1},\\ldots ,v_k)))\\Big )$ for any $(x_1,\\ldots ,x_k)$ in the domain of definition of $[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] $ and any tangent vector $(v_1,\\ldots , v_k)$ to $[\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] $ at $x$ .", "In fact, we also establish the theorem above for general types ${\\tau }$ and ${\\sigma }$ not containing function types, but its phrasing requires slight bookkeeping that might distract from the simplicity of the theorem.", "Importantly, the program ${t}$ might use higher-order functions, iteration, recursion, etc..", "The proof of the correctness theorem follows a logical relations argument that we found using categorical methods, but which can be phrased entirely in elementary terms.", "Let us fix some $n\\in \\mathbb {N}$ .", "We define for all types ${\\tau }$ of our language, by induction, relations $T^{n}_{{\\tau }}\\subseteq (\\mathbb {R}^n\\rightarrow [\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] )\\times ((\\mathbb {R}^n\\times \\mathbb {R}^n)\\rightarrow [\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau })]\\hspace{-2.5pt}] )$ and $P^{n}_{{\\tau }}\\subseteq (\\mathbb {R}^n\\rightharpoonup {[\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] })\\times ((\\mathbb {R}^n\\times \\mathbb {R}^n)\\rightharpoonup {[\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau })]\\hspace{-2.5pt}] })$ that relate a (partial) $n$ -curve to its derivative $n$ -curve: $T^{n}_{}&\\stackrel{\\mathrm {def}}{=}{(\\gamma ,\\gamma ^{\\prime })\\mid \\gamma \\text{ is differentiable and } \\gamma ^{\\prime }=(x,v)\\mapsto (\\gamma (x),D\\gamma (x,v))}\\\\T^{n}_{{\\tau }\\,{\\mathop {\\times }}\\,{\\sigma }} & \\stackrel{\\mathrm {def}}{=}{(x\\mapsto (\\gamma _1(x),\\gamma _2(x)), (x,v)\\mapsto (\\gamma ^{\\prime }_1(x,v),\\gamma ^{\\prime }_2(x,v)))\\mid (\\gamma _1,\\gamma ^{\\prime }_1)\\in T^{n}_{{\\tau }}\\text{ and } (\\gamma _2,\\gamma ^{\\prime }_2)\\in T^{n}_{{\\sigma }}}\\\\T^{n}_{{\\tau }\\,{\\mathop {\\sqcup }}\\,{\\sigma }} & \\stackrel{\\mathrm {def}}{=}{(\\iota _1\\circ \\gamma _1,\\iota _1\\circ \\gamma ^{\\prime }_1)\\mid (\\gamma _1,\\gamma ^{\\prime }_1)\\in T^{n}_{{\\tau }}}\\cup {(\\iota _2\\circ \\gamma _2,\\iota _2\\circ \\gamma ^{\\prime }_2)\\mid (\\gamma _2,\\gamma ^{\\prime }_2)\\in T^{n}_{{\\sigma }}}\\\\T^{n}_{{\\tau }\\rightarrow {\\sigma }}& \\stackrel{\\mathrm {def}}{=}{(\\gamma ,\\gamma ^{\\prime })\\mid \\forall (\\delta , \\delta ^{\\prime })\\in T^{n}_{{\\tau }}.", "(x\\mapsto \\gamma (x)(\\delta (x)), (x,v)\\mapsto 
\\gamma ^{\\prime }(x,v)(\\delta ^{\\prime }(x,v)))\\in P^{n}_{{\\sigma }}}\\\\P^{n}_{{\\tau }} & \\stackrel{\\mathrm {def}}{=}\\Big \\lbrace (\\gamma ,\\gamma ^{\\prime })\\mid \\gamma ^{-1}([\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] )\\times \\mathbb {R}^n=\\gamma ^{\\prime -1}([\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau })]\\hspace{-2.5pt}] )\\text{ is open and for all differentiable}\\\\&\\qquad \\qquad \\quad \\delta :\\mathbb {R}^n\\rightarrow \\gamma ^{-1}([\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] ) \\text{ we have }(\\gamma \\circ \\delta ,(x,v)\\mapsto (\\gamma (\\delta (x)),\\gamma ^{\\prime }(D\\delta (x,v)))) \\in T^{n}_{{\\tau }}\\Big \\rbrace .$ We then prove the following “fundamental lemma”, using induction on the typing derivation of ${t}$ : If ${x}_1:{\\tau }_1,\\ldots ,{x}_n:{\\tau }_n\\vdash {t}:{\\sigma }$ and, for $1\\le i\\le n$ , $(f_i, f_i^{\\prime })\\in T^{n}_{{\\tau }_i}$ , then $(x\\mapsto [\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] (f_1(x),\\ldots , f_n(x)), (x,v)\\mapsto [\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}({t})]\\hspace{-2.5pt}] (f_1^{\\prime }(x,v),\\ldots , f_n^{\\prime }(x,v)))\\in P^{n}_{{\\sigma }}$ .", "For example, we use that, by assumption, $[\\hspace{-2.5pt}[\\partial _i\\mathrm {op} ({x}_1,\\ldots ,{x}_n)]\\hspace{-2.5pt}] $ equals the $i$ -th partial derivative of $[\\hspace{-2.5pt}[\\mathrm {op} ({x}_1,\\ldots ,{x}_n)]\\hspace{-2.5pt}] $ combined with the chain-rule, to show that primitive operations $\\mathrm {op} $ respect the logical relations.", "As $T^{k}_{^k}$ contains, in particular, $(, ((x_1,\\ldots ,x_k), (v_1,\\ldots , v_k))\\mapsto ((x_1,v_1),\\ldots ,(x_n,v_k)))$ , our theorem follows.", "Next, we extend our language with ML-style polymorphism and recursive types.", "That is, we allow the formation of types ${\\tau }$ with free type variables ${\\alpha }$ and we include a type variable binder $\\mathbf {\\mu }{\\alpha }.", "{\\tau }$ , which binds ${\\alpha }$ in ${\\tau }$ .", "We extend our AD transformation homomorphically on terms and types.", "For example, on types, we define $\\scalebox {0.8}{\\mathcal {D}}_{}({\\alpha })\\stackrel{\\mathrm {def}}{=}{\\alpha }&& \\scalebox {0.8}{\\mathcal {D}}_{}(\\mathbf {\\mu }{\\alpha }.", "{\\tau })\\stackrel{\\mathrm {def}}{=}\\mathbf {\\mu }{\\alpha }.\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau }).$ A type ${\\tau }$ with $n$ free type variables gets interpreted in our $\\omega $ -cpo-semantics as an $n$ -ary mixed-variance endofunctor $[\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] $ on the category of $\\omega $ -cpos and partial morphisms that restricts to that of $\\omega $ -cpos and total morphisms.", "Programs with types that have free variables get interpreted as (extra)natural transformations.", "As the category of $\\omega $ -cpos and partial morphisms has the structure to interpret recursive types of that of $\\omega $ -cpos and total morphisms, we have a canonical minimal invariant $\\mathrm {roll}:[\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] (\\mu [\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] , \\mu [\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] ){\\cong }\\mu [\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] $ for the mixed-variance endofunctors $[\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] $ on $\\mathbf {\\omega Cpo}$ that types ${\\tau }$ denote [23].", "We interpret $[\\hspace{-2.5pt}[\\mathbf {\\mu }{\\alpha }.", "{\\tau }]\\hspace{-2.5pt}] \\stackrel{\\mathrm {def}}{=}\\mu [\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] $ .", "To extend the 
correctness proof to this larger language, we would like to define the logical relation $T^n_{\\mathbf {\\mu }{\\alpha }.", "{\\tau }}\\stackrel{\\mathrm {def}}{=}{(\\mathrm {roll}\\circ \\gamma ,\\mathrm {roll}\\circ \\gamma ^{\\prime })\\mid (\\gamma ,\\gamma ^{\\prime })\\in T^n_{{\\tau }{}[^{\\mathbf {\\mu }{\\alpha }.", "{\\tau }}\\!/\\!_{{\\alpha }}]}}.$ That is, we would like to be able to define relations using type recursion.", "If we can do so, then extending the proof of the fundamental lemma is straightforward.", "We can then establish the correctness theorem also for ${\\tau }$ and ${\\sigma }$ that involve recursive types.", "The traditional method is to follow the technical recipes of [33].", "Instead, we develop a powerful new logical relations technique for recursive types, which we believe to be more conceptually clear and easier to use in situations like ours.", "To be precise, we prove a general result saying that under mild conditions, that we can interpret recursive types in the category of logical relations over a category that models recursive types itself.", "For simplicity, we state an important special case that we need for our application here.", "Given any right adjoint $\\mathbf {\\omega Cpo}$ -enriched functor $G:\\mathbf {\\omega Cpo}^n\\rightarrow \\mathbf {\\omega Cpo}$ , consider the category $\\mathbf {SScone}$ of logical relations, which has objects $(X, P)$ , where $X\\in \\mathbf {\\omega Cpo}^n$ and $P$ is a chain-closed subset of $GX$ , and morphisms $(X, P)\\rightarrow (X^{\\prime },P^{\\prime })$ are $\\mathbf {\\omega Cpo}^n$ -morphisms $f:X\\rightarrow X^{\\prime }$ such that $y\\in P$ implies $Gf(y)\\in P^{\\prime }$ .", "[Logical relations for recursive types, special case of theorem REF in main text] Let $T$ be a strong monad on $\\mathbf {SScone}$ that lifts the usual partiality monad ${(-)}_{\\bot }$ on $\\mathbf {\\omega Cpo}^n$ along the projection functor $\\mathbf {SScone}\\rightarrow \\mathbf {\\omega Cpo}^n$ .", "We assume that $T$ takes the initial object to the terminal one, and the square in $\\mathbf {\\omega Cpo}$ induced by each component of the unit of $T$ is a pullback.", "Then, $\\mathbf {SScone}\\hookrightarrow \\mathbf {SScone}_T$ is a model for recursive types.", "In particular, we can define the relations $T_{\\mathbf {\\mu }{\\alpha }.", "{\\tau }}$ using type recursion, as desired.", "Similarly to dual numbers forward AD $\\scalebox {0.8}{\\mathcal {D}}_{}$ , we can define a reverse AD code transformation $\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}$ : we define $\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}()\\stackrel{\\mathrm {def}}{=}\\,{\\mathop {\\times }}\\,\\mathbf {vect} $ and $\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}(\\underline{c})&\\stackrel{\\mathrm {def}}{=}\\langle \\underline{c}, { 0}^{\\mathbf {v}}\\rangle \\\\\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}(\\mathrm {op} ({t}_1,\\ldots ,{t}_n))&\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({t}_1)\\,\\mathbf {of}\\,\\langle {x}_1, {x}_1^{\\prime }\\rangle \\rightarrow \\ldots \\mathbf {case}\\,\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({t}_n)\\,\\mathbf {of}\\,\\langle {x}_n, {x}_n^{\\prime }\\rangle \\rightarrow \\\\&\\quad \\quad \\langle \\mathrm {op} ({x}_1,\\ldots ,{x}_n),{x}_1^{\\prime }*^{\\mathbf {v}} \\partial _1\\mathrm {op} ({x}_1,\\ldots ,{x}_n) {+}^\\mathbf {v}\\ldots {+}^\\mathbf {v}{x}_n^{\\prime }*^{\\mathbf {v}} \\partial _n\\mathrm {op} 
({x}_1,\\ldots ,{x}_n) \\rangle \\\\\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}(\\mathbf {sign}\\,{{t}}) &\\stackrel{\\mathrm {def}}{=}\\mathbf {sign}\\,(\\mathbf {fst}\\,\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({t})).$ and extend homomorphically to all other type and term formers, as we did before.", "In fact, this algorithm is exactly the same as dual numbers forward AD in code with the only differences being that the type $$ of real numbers for tangents has been replaced with a new type $\\mathbf {vect} $ , which we think of as representing (dynamically sized) cotangent vectors to the global input of the program; the zero $\\underline{0} $ and addition $(+)$ of type $$ have been replaced by the zero $ { 0}^{\\mathbf {v}}$ and addition $({+}^\\mathbf {v})$ of cotangents of type $\\mathbf {vect} $ ; the multiplication $(*):\\,{\\mathop {\\times }}\\,\\rightarrow $ has been replaced by the operation $(~*^{\\mathbf {v}} ~):\\mathbf {vect} \\,{\\mathop {\\times }}\\, \\rightarrow \\mathbf {vect} $ : $(v*^{\\mathbf {v}} r)$ is the rescaling of a cotangent $v$ by the scalar $r$ .", "We write $\\overline{e} _{i}$ for program representing the $i$ -th canonical basis vector $e_i$ of type $\\mathbf {vect} $ and we write $\\mathrm {Wrap}_{s}({x})\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,{x}\\,\\mathbf {of}\\,\\langle {x}_1,\\ldots ,{x}_s\\rangle \\rightarrow \\langle \\langle {x}_1, \\overline{e} _{1}\\rangle ,\\ldots ,\\langle {x}_s, \\overline{e} _{s}\\rangle \\rangle .$ We define $[\\hspace{-2.5pt}[\\mathbf {vect} ]\\hspace{-2.5pt}] \\stackrel{\\mathrm {def}}{=}\\mathbb {R}^\\infty \\stackrel{\\mathrm {def}}{=}\\sum _{k=0}^\\infty \\mathbb {R}^k$ as the infinite (vector space) coproduct of $k$ -dimensional real vector spaces.", "That is, we interpret $\\mathbf {vect} $ as the type of dynamically sized real vectorsNote that, in practice, [37] actually implements $\\mathbf {vect} $ as a type of ASTs of simple expressions computing a dynamically sized vector.", "This allows us to first build up the expression during execution of the program (the forward pass) and to only evaluate this cotangent expression later (in a reverse pass) making clever use of a distributivity law of addition and multiplication (also known as the linear factoring rule in [6]) to achieve the correct computational complexity of reverse AD.. 
We show that $\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({t})$ implements the transposed derivative $D[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] ^t$ of $[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] $ in the following sense.", "[Reverse AD Correctness, Theorem with $k=\\infty $ in main text] For any program ${x}:{\\tau }\\vdash {t}:{\\sigma }$ for ${\\tau }=^{s},{\\sigma }=^l$ , $ &[\\hspace{-2.5pt}[\\mathbf {let}\\,{x}=\\,\\mathrm {Wrap}_{k}({x})\\,\\mathbf {in}\\,\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({t})]\\hspace{-2.5pt}] (x_1,\\ldots , x_s)=\\\\&\\Big ((\\pi _1([\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] (x_1,\\ldots ,x_s)),D[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] ^t((x_{1},\\ldots ,x_s),e_1)),\\ldots , (\\pi _l([\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] (x_1,\\ldots ,x_s)), D[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] ^t((x_{1},\\ldots ,x_s),e_l))\\Big )$ for any $(x_1,\\ldots ,x_s)$ in the domain of definition of $[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] $ .", "We prove this theorem again using a similar logical relations argument, defining $T^{n}_{{\\tau }}\\subseteq (\\mathbb {R}^n\\rightarrow [\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] )\\times ((\\mathbb {R}^n\\times {(\\mathbb {R}^\\infty )}^n)\\rightarrow [\\hspace{-2.5pt}[\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({\\tau })]\\hspace{-2.5pt}] )$ and $P^{n}_{{\\tau }}\\subseteq (\\mathbb {R}^n\\rightharpoonup {[\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] })\\times (\\mathbb {R}^n\\times {(\\mathbb {R}^\\infty )}^n)\\rightharpoonup {[\\hspace{-2.5pt}[\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({\\tau })]\\hspace{-2.5pt}] })$ as before for all types ${\\tau }$ of language, setting $T^{n}_{}&\\stackrel{\\mathrm {def}}{=}{(\\gamma , \\gamma ^{\\prime })\\mid \\gamma \\text{ is differentiable and }\\gamma ^{\\prime }=(x, L)\\mapsto (\\gamma (x), L(D\\gamma ^t(x, e_1)))}\\\\T^{n}_{{\\tau }\\,{\\mathop {\\times }}\\,{\\sigma }} & \\stackrel{\\mathrm {def}}{=}{(x \\mapsto (\\gamma _1(x),\\gamma _2(x)), (x,L)\\mapsto (\\gamma ^{\\prime }_1(x,L),\\gamma ^{\\prime }_2(x,L)))\\mid (\\gamma _1,\\gamma ^{\\prime }_1)\\in T^{n}_{{\\tau }}\\text{ and } (\\gamma _2,\\gamma ^{\\prime }_2)\\in T^{n}_{{\\sigma }}}\\\\T^{n}_{{\\tau }\\,{\\mathop {\\sqcup }}\\,{\\sigma }} & \\stackrel{\\mathrm {def}}{=}{(\\iota _1\\circ \\gamma _1,\\iota _1\\circ \\gamma ^{\\prime }_1)\\mid (\\gamma _1,\\gamma ^{\\prime }_1)\\in T^{n}_{{\\tau }}}\\cup {(\\iota _2\\circ \\gamma _2,\\iota _2\\circ \\gamma ^{\\prime }_2)\\mid (\\gamma _2,\\gamma ^{\\prime }_2)\\in T^{n}_{{\\sigma }}}\\\\T^{n}_{{\\tau }\\rightarrow {\\sigma }}& \\stackrel{\\mathrm {def}}{=}{(\\gamma ,\\gamma ^{\\prime })\\mid \\forall (\\delta , \\delta ^{\\prime })\\in T^{n}_{{\\tau }}.", "(x\\mapsto \\gamma (x)(\\delta (x)), (x,L)\\mapsto \\gamma ^{\\prime }(x,L)(\\delta ^{\\prime }(x,L)))\\in P^{n}_{{\\sigma }}}\\\\T^n_{\\mathbf {\\mu }{\\alpha }.", "{\\tau }}&\\stackrel{\\mathrm {def}}{=}{(\\mathrm {roll}\\circ \\gamma ,\\mathrm {roll}\\circ \\gamma ^{\\prime })\\mid (\\gamma ,\\gamma ^{\\prime })\\in T^n_{{\\tau }{}[^{\\mathbf {\\mu }{\\alpha }.", "{\\tau }}\\!/\\!_{{\\alpha }}]}}\\\\P^{n}_{{\\tau }} & \\stackrel{\\mathrm {def}}{=}\\Big \\lbrace (\\gamma ,\\gamma ^{\\prime })\\mid \\gamma ^{-1}([\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] )\\times (\\mathbb {R}^\\infty ){}^n=\\gamma ^{\\prime -1}([\\hspace{-2.5pt}[\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({\\tau })]\\hspace{-2.5pt}] )\\text{ is open and for all differentiable}\\\\&\\qquad \\quad \\!\\!\\!", "\\delta 
:\\mathbb {R}^n\\rightarrow \\gamma ^{-1}([\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] ) \\text{ we have }(\\gamma \\circ \\delta ,(x,L)\\mapsto \\gamma ^{\\prime }(\\delta (x),L\\circ D\\delta ^t(x,-))) \\in T^{n}_{{\\tau }}\\Big \\rbrace ,$ where we consider $(\\mathbb {R}^\\infty ){}^n$ as a type of linear transformations from $\\mathbb {R}^n$ to $\\mathbb {R}^\\infty $ .", "We then prove the following “fundamental lemma”, using induction on the typing derivation of ${t}$ : If ${x}_1:{\\tau }_1,\\ldots ,{x}_n:{\\tau }_n\\vdash {t}:{\\sigma }$ and, for $1\\le i\\le n$ , $(f_i, f_i^{\\prime })\\in T^{n}_{{\\tau }_i}$ , then $(x\\mapsto [\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] (f_1(x),\\ldots , f_n(x)), (x,L)\\mapsto [\\hspace{-2.5pt}[\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{}({t})]\\hspace{-2.5pt}] (f_1^{\\prime }(x,L),\\ldots , f_n^{\\prime }(x,L)))\\in P^{n}_{{\\sigma }}$ .", "As $T^{s}_{^s}$ contains, in particular, $(, ((x_1,\\ldots ,x_s), (L_1,\\ldots , L_s))\\mapsto ((x_1,L_1 e_1),\\ldots ,(x_s,L_s e_s))),$ our theorem follows.", "AD tends to be applied to programs that manipulate large arrays of reals.", "Seeing that such arrays are denotationally equivalent to lists $\\mathbf {\\mu }{\\alpha }.\\mathbf {1}\\,{\\mathop {\\sqcup }}\\,{\\alpha }\\,{\\mathop {\\times }}\\,$ , while only the computational complexity of operations differs, our correctness result also applies to functional languages with arrays.", "We thus differentiate array types ${\\tau }[]$ with elements of type ${\\tau }$ in the obvious structure preserving way, e.g.", "$\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau }[])\\stackrel{\\mathrm {def}}{=}\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau })[] \\qquad \\scalebox {0.8}{\\mathcal {D}}_{}(\\mathbf {generate})\\stackrel{\\mathrm {def}}{=}\\mathbf {generate}\\qquad \\scalebox {0.8}{\\mathcal {D}}_{}(\\mathbf {map})\\stackrel{\\mathrm {def}}{=}\\mathbf {map}\\qquad \\scalebox {0.8}{\\mathcal {D}}_{}(\\mathbf {foldr})\\stackrel{\\mathrm {def}}{=}\\mathbf {foldr}$ and similarly for dual numbers reverse AD." 
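Putting the pieces together, a reverse-mode analogue of the dual numbers sketch accumulates gradients in the $\mathbf{vect}$ component (again our own illustration; v_add and v_scale are repeated from the sketch above to keep this self-contained):

```python
def v_add(u, v):
    return {i: u.get(i, 0.0) + v.get(i, 0.0) for i in u.keys() | v.keys()}

def v_scale(v, r):
    return {i: c * r for i, c in v.items()}

class CoDual:
    # Reverse D(real) = real x vect: a value paired with its cotangent.
    def __init__(self, x, v=None):
        self.x, self.v = x, {} if v is None else v
    def __add__(self, other):
        return CoDual(self.x + other.x, v_add(self.v, other.v))
    def __mul__(self, other):
        # x1' *^v d1(op) +^v x2' *^v d2(op), with d1(*) = x2, d2(*) = x1
        return CoDual(self.x * other.x,
                      v_add(v_scale(self.v, other.x),
                            v_scale(other.v, self.x)))

def wrap(xs):
    # Wrap_s pairs the i-th input with the i-th basis cotangent e_i.
    return [CoDual(x, {i: 1.0}) for i, x in enumerate(xs, start=1)]

a, b = wrap([2.0, 3.0])
y = a * b + a                   # gradient of a*b + a is (b + 1, a) = (4, 2)
assert y.v == {1: 4.0, 2: 2.0}  # the full gradient in one (forward) pass
```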
], [ "Categorical models for CBV languages: $CBV$ pairs and models", "The aim of this section is to establish a class of models for call-by-value (CBV) languages, and, then, add free recursion and iteration.", "We assume some familiarity with basic category theory (see, for instance, [11]).", "Whenever we talk about strict preservation of some structure (like products, coproducts or exponentials), we are assuming that we have chosen structures (chosen products, coproducts or exponentials) and the preservation is on the nose, that is to say, the canonical comparison is the identity.", "Given a cartesian closed category ${V}$ , we can see it as a ${V}$ -enriched category w.r.t.", "the cartesian structure.", "Recall that a strong monad $\\mathcal {T}$ on a cartesian closed category ${V}$ is the same as a ${V}$ -monad on ${V}$ .", "More precisely, it is a triple $\\mathcal {T}= \\left( T : {V}\\rightarrow {V}, \\mathrm {m}: T^2\\rightarrow T , \\eta : \\mathrm {id}_ {{V}} \\rightarrow T \\right) ,$ where $T $ is a ${V}$ -endofunctor and $\\mathrm {m}, \\eta $ are ${V}$ -natural transformations, satisfying the usual associativity and identity equations, that is to say, $\\mathrm {m}\\cdot \\left( \\mathrm {m}T\\right) = \\mathrm {m}\\cdot \\left( T\\mathrm {m}\\right) $ and $\\mathrm {m}\\cdot \\left( \\eta T \\right) = \\mathrm {id}_ {T}= \\mathrm {m}\\cdot \\left( T\\eta \\right) $ .See [11] for the classical enriched case.", "For the general case of monads in 2-categories, see [40] or, for instance, [27].", "Let $\\mathcal {T}= \\left( T, \\mathrm {m}, \\eta \\right) $ and $\\mathcal {T}^{\\prime } = \\left( T^{\\prime }, \\mathrm {m}^{\\prime }, \\eta ^{\\prime } \\right) $ be monads on ${V}$ and ${V}^{\\prime }$ respectively.", "Recall that an oplax morphism (or a monad op-functor) between $\\mathcal {T} $ and $\\mathcal {T} ^{\\prime }$ is a pair $\\left( H : {V}\\rightarrow {V}^{\\prime } , \\varphi : HT\\rightarrow T^{\\prime }H \\right) ,$ where $H$ is a functor and $\\varphi $ is a natural transformation, such that $\\varphi \\cdot \\left( H\\eta \\right) = \\left( \\eta ^{\\prime } H \\right) \\qquad \\mbox{and} \\qquad \\left( \\mathrm {m}^{\\prime }H\\right)\\cdot \\left( T^{\\prime }\\varphi \\right)\\cdot \\left( \\varphi T \\right) = \\varphi \\cdot \\left( H \\mathrm {m}\\right) .$ By the universal property of Kleisli categories, denoting by $J: {V}\\rightarrow {C}$ and $J: {V}^{\\prime } \\rightarrow {C}^{\\prime } $ the universal Kleisli functors, the oplax morphims (REF ) correspond bijectively with pairs of functors $\\left( H : {V}\\rightarrow {V}^{\\prime }, \\overline{H} : {C}\\rightarrow {C}^{\\prime } \\right) $ such that the diagram (REF ) commutes.", "[$CBV$ pair] A $CBV$ pair is a pair $\\left({V}, \\mathcal {T}\\right) $ where ${V}$ is bicartesian closed category and $\\mathcal {T}$ is a ${V}$ -monad on ${V}$ .", "We further require that ${V}$ has chosen finite products, coproducts and exponentials.", "A $CBV $ pair morphism between the $CBV$ pairs $\\left({V}, \\mathcal {T}\\right) $ and $\\left({V}^{\\prime }, \\mathcal {T}^{\\prime } \\right) $ is a strictly bicartesian closed functor $H$ such that $\\left( H, \\mathrm {id}\\right) $ defines a monad op-functor (REF ).", "This defines a category of $CBV$ pairs and $CBV$ pair morphisms, denoted herein by $\\mathfrak {C}_{\\mathtt {p}} $ .", "Remark If $\\left( {V}, \\mathcal {T}\\right) $ is a $CBV$ pair, since $\\mathcal {T}$ is ${V}$ -enriched, we get a ${V}$ -enriched Kleisli category ${C}$ .", "We denote by 
${C}\\left[ - , - \\right] = \\left( - \\Rightarrow ^k - \\right) : {C}^\\mathrm {op} \\times {C}\\rightarrow {V}$ the ${V}$ -enriched hom functor.", "It should be noted that, if we denote by $\\left( X \\Rightarrow Y \\right) = {V}\\left[ X , Y \\right] $ the exponential in ${V}$ , we have that ${C}\\left[ X , Y \\right] = \\left( X \\Rightarrow ^k Y \\right) = \\left( X \\Rightarrow TY \\right) $ which is the so called Kleisli exponential and corresponds to the function types for our language.", "Denoting by ${C}$ and ${C}^{\\prime } $ the respective Kleisli categories, each morphism $\\left( H, \\varphi \\right) : \\left( {V}, \\mathcal {T}\\right) \\rightarrow \\left( {V}^{\\prime }, \\mathcal {T}^{\\prime } \\right) $ of $CBV$ pairs gives rise to a commutative square $ (0,-225)|l|/->/<0,225>[{{V}}`{{C}};{J}](1500,-225)|r|/->/<0,225>[{{{V}}^{\\prime }}`{{{C}}^{\\prime }};{J^{\\prime }}](0,0)|a|/->/<1500,0>[{{C}}`{{{C}}^{\\prime }};{\\overline{H}}](0,-225)|b|/->/<1500,0>[{{V}}`{{{V}}^{\\prime }};{H}]$ where $J$ and $J ^{\\prime }$ are, respectively, the universal Kleisli functors of $\\mathcal {T}$ and $\\mathcal {T}^{\\prime }$ .", "In this case, $\\overline{H}$ strictly preserves Kleisli exponentials, finite coproducts and the action of ${V}$ on ${C}$ .", "That is to say, $\\left( H, \\overline{H}\\right) $ strictly preserves the distributive closed Freyd-categorical structureAlthough this level of generality is not needed in our work, the interested reader can find more about Freyd-categorical structures and basic aspects of the modelling of call-by-value languages in [24]." ], [ "$CBV$ models: term recursion and iteration", "In order to interpret our language defined in Section , we need an additional support for term recursion and iteration.", "Since we do not impose further equations for the iteration or recursion constructs in our language, the following definitions establish our class of models for term recursion and iteration.", "[Free Recursion and Iteration] Let $\\left( {V}, \\mathcal {T}\\right) $ be a $CBV$ pair and $ {C}$ the corresponding ${V}$ -enriched Kleisli category.", "A free recursion for $\\left( {V}, \\mathcal {T}\\right) $ is a family of morphisms $\\mu = \\left( \\mu ^{W,Y} : {V}\\left[ {C}\\left[ W , Y \\right] , {C}\\left[ W , Y \\right] \\right] \\longrightarrow {C}\\left[ W , Y \\right] \\right) _{(W,Y)\\in {C}\\times {C}}$ in ${V}$ .", "A free iteration for $\\left( {V}, \\mathcal {T}\\right) $ is a family of morphisms ${\\mathsf {itt}}= \\left( {\\mathsf {itt}}^{W,Y} : {C}\\left[ W , W\\sqcup Y \\right] \\longrightarrow {C}\\left[ W , Y \\right] \\right) _{(W,Y)\\in {C}\\times {C}}$ in ${V}$ .", "[$CBV$ model] A $CBV$ model is a quadruple $\\left( {V}, \\mathcal {T}, \\mu , {\\mathsf {itt}}\\right) $ in which $\\left( {V}, \\mathcal {T}\\right) $ is a $CBV$ pair, $\\mu $ is a free recursion, and ${\\mathsf {itt}}$ is a free iteration for $\\left( {V}, \\mathcal {T}\\right) $ .", "A $CBV$ model morphism between the $CBV$ models $\\left( {V}, \\mathcal {T}, \\mu , {\\mathsf {itt}}\\right) $ and $\\left( {V}^{\\prime } , \\mathcal {T}^{\\prime }, \\mu ^{\\prime } , {\\mathsf {itt}}^{\\prime } \\right) $ is a morphism $H$ between the underlying $CBV$ pairs such that $ H \\left( \\mu ^{W,Y} \\right) = \\mu ^{\\prime HW, HY} $ and $ H \\left( {\\mathsf {itt}}^{W, Y} \\right) = {\\mathsf {itt}}^{\\prime HW, HY} $ , for any $\\left( W, Y\\right)\\in {V}\\times {V}$ .", "This defines a category of $CBV$ models, denoted herein by $\\mathfrak {C}_{\\mathcal {BV}} $ 
.", "It should be noted that $\\mathfrak {C}_{\\mathcal {BV}} $ has finite products.", "Given $CBV$ models $\\left( {V}_0 , \\mathcal {T}_ 0, \\mu _0 , {\\mathsf {itt}}_0 \\right) $ and $\\left( {V}_1 , \\mathcal {T}_ 1, \\mu _1 , {\\mathsf {itt}}_1 \\right) $ , the product is given by $\\left( {V}_0\\times {V}_1 , \\mathcal {T}_ 0\\times \\mathcal {T}_ 1, \\left( \\mu _0 , \\mu _1\\right) , \\left({\\mathsf {itt}}_0, {\\mathsf {itt}}_1\\right) \\right)$ where $\\left( \\mu _0 , \\mu _1\\right)^{\\left( W, W^{\\prime }\\right), \\left( Y, Y^{\\prime } \\right) } = \\left( \\mu _0 ^{W,Y}, \\mu _1 ^{W^{\\prime },Y^{\\prime }} \\right) $ and $\\left( {\\mathsf {itt}}_0 , {\\mathsf {itt}}_1\\right)^{\\left( W, W^{\\prime }\\right), \\left( Y, Y^{\\prime } \\right) } = \\left( {\\mathsf {itt}}_0 ^{W,Y}, {\\mathsf {itt}}_1 ^{W^{\\prime },Y^{\\prime }} \\right) $ ." ], [ "Concrete models", "The aim of this section is to establish a class of concrete $CBV$ pairs and models.", "We denote by $ \\mathbf {\\omega Cpo}$ the usual category of $\\omega $ -cpos.", "The objects of $ \\mathbf {\\omega Cpo}$ are the partially ordered sets with colimits of $\\omega $ -chains, while the morphisms are functors preserving these colimits.", "An $\\omega $ -cpo is called pointed if it has a least element, denoted herein by $\\bot $ .", "We say that $f\\in \\mathbf {\\omega Cpo}\\left( W, Y\\right) $ is a pointed $\\mathbf {\\omega Cpo}$ -morphism if $W $ is pointed and $f$ preserves the least element.", "It is well known that $ \\mathbf {\\omega Cpo}$ is bicartesian closed.", "We consider $ \\mathbf {\\omega Cpo}$ -enriched categories w.r.t.", "the cartesian structure.", "Henceforth, if ${V}$ is an $\\mathbf {\\omega Cpo}$ -enriched category and $W, Y$ are objects of ${V}$ , we denote by ${V}\\left( W, Y\\right) $ the $ \\mathbf {\\omega Cpo}$ -enriched hom, that is to say, the $\\omega $ -cpo of morphisms between $W$ and $Y$ .", "An $\\mathbf {\\omega Cpo}$ -category ${V}$ is $\\mathbf {\\omega Cpo}$ -cartesian closed if ${V}$ has finite $\\mathbf {\\omega Cpo}$ -products and, moreover, for each object $Z\\in {V}$ , the $\\mathbf {\\omega Cpo}$ -functor $\\left(Z\\times -\\right) : {V}\\rightarrow {V}$ has a right $\\mathbf {\\omega Cpo}$ -adjoint ${V}\\left[ Z , - \\right] $ , called, herein, the $\\mathbf {\\omega Cpo}$ -exponential of $Z$ .", "An $\\mathbf {\\omega Cpo}$ -functor $H : {V}\\rightarrow {V}^{\\prime }$ is strictly $\\mathbf {\\omega Cpo}$ -cartesian closed if it strictly preserves the $\\mathbf {\\omega Cpo}$ -products and the induced comparison $H\\circ {V}\\left[ - , - \\right] \\rightarrow {V}^{\\prime }\\left[ H(-) , H(-) \\right] $ is the identity.", "Let ${V}$ be $\\mathbf {\\omega Cpo}$ -cartesian closed.", "For any $Z\\in {V}$ , since the hom-functor ${V}\\left( Z , - \\right) : {V}\\rightarrow \\mathbf {\\omega Cpo}$ is cartesian, it induces the change of enriching base 2-functors $\\mathfrak {G}_{{V}\\left( Z , - \\right) }: {{V}}\\textrm {-}\\mathsf {Cat}\\rightarrow {\\mathbf {\\omega Cpo}}\\textrm {-}\\mathsf {Cat}$ between the 2-categories of enriched categories w.r.t.", "the cartesian structures.", "Therefore, taking $Z = \\mathsf {1}$ (the terminal object of ${V}$ ), we get that every ${V}$ -category (${V}$ -functor/${V}$ -monad) has a suitable underlying $$ -category ($\\mathbf {\\omega Cpo}$ -functor/$\\mathbf {\\omega Cpo}$ -monad), given by its image by $\\mathfrak {G}_{\\mathbf {\\omega Cpo}} := \\mathfrak {G}_{{V}\\left( \\mathsf {1} , - \\right) }$ .", "[$CBV$ $\\mathbf 
{\\omega Cpo}$ -pair] A $CBV$ $\\mathbf {\\omega Cpo}$ -pair is a $CBV$ pair $\\left( {V}, \\mathcal {T}\\right) $ in which ${V}$ is an $ \\mathbf {\\omega Cpo}$ -bicartesian closed category, such that ${V}\\left( W , TY \\right) $ is a pointed $\\omega $ -cpo for any $(W,Y)\\in {V}\\times {V}$ .", "A $CBV$ $\\mathbf {\\omega Cpo}$ -pair morphism between $\\left({V}, \\mathcal {T}\\right) $ and $\\left({V}^{\\prime }, \\mathcal {T}^{\\prime } \\right) $ is an $\\mathbf {\\omega Cpo}$ -functor $H : {V}\\rightarrow {V}^{\\prime } $ whose underlying functor yields a morphism between the $CBV$ pairs, and such that $ H : {V}\\left( W , TY \\right) \\rightarrow {V}\\left( HW , H TY \\right) $ is a pointed $\\mathbf {\\omega Cpo}$ -morphism for any $\\left( W, Y\\right)\\in \\mathsf {ob}\\, {V}\\times \\mathsf {ob}\\, {V} $ .", "This defines a category of $CBV$ $\\mathbf {\\omega Cpo}$ -pairs, denoted herein by $\\textrm {-}\\mathfrak {C}_{\\mathcal {BV}} $ .", "There is, then, an obvious forgetful functor $\\mathcal {U}_{\\mathtt {p}} : \\textrm {-}\\mathfrak {C}_{\\mathcal {BV}} \\rightarrow \\mathfrak {C}_{\\mathtt {p}} $ ." ], [ "Fixpoints: term recursion and iteration", "Recall that, if $A$ is pointed and $q : A\\rightarrow A $ is an endomorphism in $\\mathbf {\\omega Cpo}$ , then $q$ has a least fixed point given by the colimit of the chain $ (0,0)/->/<300,0>[{\\bot }`{{q\\left(\\bot \\right)}};](300,0)/->/<300,0>[{{q\\left(\\bot \\right)}}`{\\cdots };](600,0)/->/<300,0>[{\\cdots }`{{q^n\\left({\\bot }\\right)}};](900,0)/->/<300,0>[{{q^n\\left({\\bot }\\right)}}`{\\cdots };]$ by Kleene's Fixpoint Theorem.", "Given such an endomorphism, we denote by $\\mathrm {lfp}\\left({q}\\right) $ its least fixed point.", "Henceforth, let $\\left( {V}, \\mathcal {T}\\right) $ be a $CBV$ $\\mathbf {\\omega Cpo}$ -pair, and $J: {V}\\rightarrow {C}$ the corresponding ${V}$ -enriched universal Kleisli functor.", "We denote by $-\\otimes - : {V}\\times {C}\\rightarrow {C}$ the ${V}$ -tensor product in ${C}$ , also called the ${V}$ -copower, which, in this case, correspond to the usual action of ${V}$ on ${C}$ .", "By hypothesis, for any $W,Y,Z\\in {V}$ , the $\\omega $ -cpo ${V}\\left( Z, {C}\\left[ W,Y \\right] \\right)\\cong {C}\\left( Z\\otimes W, Y \\right) $ is pointed.", "Therefore we can define $\\overline{\\mu } ^{W,Y}_Z :&{V}\\left( Z\\times {C}\\left[ W , Y \\right] , {C}\\left[ W , Y \\right] \\right) & \\rightarrow {V}\\left( Z , {C}\\left[ W , Y \\right] \\right) \\\\& f & \\mapsto \\mathrm {lfp}\\left({ h\\mapsto f\\circ \\left( Z\\times h\\right)\\circ \\partial _{Z} }\\right) \\nonumber \\\\\\overline{\\mathsf {it}} ^{W,Y}_Z :&{C}\\left( Z\\otimes W , W\\sqcup Y \\right) & \\rightarrow {C}\\left( Z\\otimes W , Y \\right) \\\\& f & \\mapsto \\mathrm {lfp}\\left({ h\\mapsto \\langle h, J\\left( \\pi _{Y}\\right) \\rangle \\circ \\left( Z\\otimes f\\right) \\circ \\left( \\mathrm {diag}_{Z} \\otimes \\mathrm {id}_ W \\right)}\\right) \\nonumber $ where $\\partial _{Z} = \\left(\\mathrm {id}_ Z , \\mathrm {id}_Z \\right) : Z\\rightarrow Z\\times Z $ is the diagonal morphism, and $\\mathrm {diag}_{Z} = J \\left(\\mathrm {id}_ Z , \\mathrm {id}_Z \\right) : Z\\rightarrow Z\\otimes Z $ .", "Since the morphisms above are $\\mathbf {\\omega Cpo}$ -natural in $Z\\in {V}$ , they give rise to the families of morphisms $\\mu _ \\omega = \\left( \\mu _ \\omega ^{W,Y} \\right) _{(W,Y)\\in {C}\\times {C}} &\\stackrel{\\mathrm {def}}{=}& \\left( \\overline{\\mu } ^{W,Y} _{{V}\\left[ {C}\\left[ W , Y \\right] , 
where $\\mathsf {eval}_{ A , B } : {V}\\left[ A , B \\right] \\times A\\rightarrow B $ is the evaluation morphism given by the cartesian closed structure.", "[Underlying $CBV$ model] There is a forgetful functor $\\mathcal {U}_{\\mathcal {BV}} : \\mathbf {\\omega Cpo}\\textrm {-}\\mathfrak {C}_{\\mathcal {BV}} \\rightarrow \\mathfrak {C}_{\\mathcal {BV}} $ defined by $\\mathcal {U}_{\\mathcal {BV}} \\left( {V}, \\mathcal {T}\\right) = \\left( {V}, \\mathcal {T}, \\mu _ \\omega , \\mathsf {it}_\\omega \\right) $ , taking every morphism $H$ to its underlying morphism of $CBV$ models.", "Since $H$ is an $\\mathbf {\\omega Cpo}$ -functor and, for any $\\left( W, Y\\right)\\in \\mathsf {ob}\\, {V}\\times \\mathsf {ob}\\, {V} $ , $ H : {V}\\left( W , TY \\right) \\rightarrow {V}^{\\prime }\\left( HW , T^{\\prime }HY \\right) $ is a pointed $\\mathbf {\\omega Cpo}$ -morphism, we get that, indeed, $H$ respects the free iteration and free recursion as defined in (REF ) and ().", "It should be noted that, given $CBV$ $\\mathbf {\\omega Cpo}$ -pairs $\\left( {V}_0 , \\mathcal {T}_0 \\right) $ and $\\left( {V}_1 , \\mathcal {T}_1\\right) $ , $\\left( {V}_0 , \\mathcal {T}_0\\right)\\times \\left( {V}_1 , \\mathcal {T}_1 \\right) = \\left( {V}_0\\times {V}_1 , \\mathcal {T}_0\\times \\mathcal {T}_1 \\right)$ is the product in $\\mathbf {\\omega Cpo}\\textrm {-}\\mathfrak {C}_{\\mathcal {BV}} $ .", "Moreover, $\\mathcal {U}_{\\mathcal {BV}} $ preserves finite products." ], [ "Automatic Differentiation for term recursion and iteration", "For our purposes, we could define our macro in terms of total derivatives.", "However, we choose to present it in terms of partial derivatives, in order to keep our treatment as close as possible to the starting point of the efficient implementation of the reverse mode given in [37].", "Following this choice of presentation, it is particularly convenient to establish our AD macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ as a program transformation between a source language and a target language (see REF ).", "The main point of this distinction is to keep track of the difference between the types corresponding to manifolds (cartesian spaces) and the (co)tangents (vector spaces) in the target language (see [37])."
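, "To preview this distinction in elementary terms (a simplified sketch of ours, not the formal development below): after a dual-numbers transformation, each source-level real is paired with a (co)tangent drawn from a vector space, and only the latter carries linear structure:"

```haskell
-- A sketch (ours): the "manifold" component is a plain Double, while the
-- (co)tangent component lives in a vector space (here R itself, i.e. k = 1).
data Dual = Dual
  { primal  :: Double   -- manifold side: a point
  , tangent :: Double   -- vector-space side: a (co)tangent at that point
  }

-- Constants are flat: they carry the zero (co)tangent.
constD :: Double -> Dual
constD c = Dual c 0
```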
], [ "Source language as a standard call-by-value language with iteration and recursion", "We consider a standard (coarse-grain) call-by-value language over a ground type $$ , certain real constants $\\underline{c}\\in \\mathrm {Op}_0$ , certain primitive operations $\\mathrm {op} \\in \\mathrm {Op}_n$ for each nonzero natural number $n\\in \\mathbb {N}^\\ast $ , and $\\mathbf {sign}\\,$ .", "We denote $\\displaystyle \\mathrm {Op}:= \\bigcup _{n\\in \\mathbb {N}} \\mathrm {Op}_ n $ .", "As it is clear from the semantics defined in REF , $$ intends to implement the real numbers.", "Moreover, for each $n\\in \\mathbb {N}$ , the operations in $\\mathrm {Op}_n $ intend to implement partially defined functions $\\mathbb {R}^n \\rightharpoonup \\mathbb {R}$ .", "Finally, $\\mathbf {sign}\\,$ intends to implement the partially defined function $\\mathbb {R}\\rightharpoonup \\mathbb {R}$ defined in $\\mathbb {R}^- \\cup \\mathbb {R}^+ $ which takes $\\mathbb {R}^-$ to $-1$ and $\\mathbb {R}^+ $ to 1.", "Although it is straightforward to consider more general settings, we also add the assumption that the primitive operations implement differentiable functions (see REF ).", "We treat this operations in a schematic way as this reflects the reality of practical Automatic Differentiation libraries, which are constantly being expanded with new primitive operations.", "The types ${\\tau },{\\sigma },{\\rho }$ , values $v,w,u$ , and computations ${t},{s},{r}$ of our language are as follows.", "$\\begin{array}[t]{l@{\\quad \\!\\!", "}*3{l@{}}@{\\,}l}{\\tau }, {\\sigma }, {\\rho } & ::=& & {-25mu}\\qquad \\text{types} \\\\&\\mathrel {\\vert }& & \\qquad \\text{numbers}\\\\&\\mathrel {\\vert }& \\mathbf {0}\\mathrel {\\vert }{\\tau } \\,{\\mathop {\\sqcup }}\\, {\\sigma } & \\qquad \\text{sums}\\\\&&&\\\\v, w, u & ::=& & {-25mu}\\qquad \\text{values} \\\\&\\mathrel {\\vert }& {x},{y},{z} & \\qquad \\text{variables}\\\\&\\mathrel {\\vert }& \\underline{c} & \\qquad \\text{constants}\\\\&\\mathrel {\\vert }& \\mathbf {inl}\\,{v} \\mathrel {\\vert }\\mathbf {inr}\\,{v} & \\qquad \\text{sum inclusions}\\\\&&&\\\\{t}, {s}, {r} & ::=& & {-25mu}\\qquad \\text{computations} \\\\&\\mathrel {\\vert }& {x},{y},{z} & \\qquad \\text{variables}\\\\&\\mathrel {\\vert }& \\mathbf {let}\\,{t}=\\,{x}\\,\\mathbf {in}\\,{s} & \\qquad \\text{sequencing}\\\\&\\mathrel {\\vert }& \\underline{c} & \\qquad \\text{constant}\\\\&\\mathrel {\\vert }& \\mathrm {op} ({t}_1,\\ldots ,{t}_n) & \\qquad \\text{operation}\\\\&\\mathrel {\\vert }& \\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\lbrace \\,\\rbrace & \\qquad \\text{sum match}\\\\&\\mathrel {\\vert }& \\mathbf {inl}\\,{{t}} \\mathrel {\\vert }\\mathbf {inr}\\,{{t}} & \\qquad \\text{sum inclusions}\\\\&\\mathrel {\\vert }& \\mathbf {case}\\,{r}\\,\\mathbf {of}\\,\\lbrace \\begin{array}{l}\\;\\;\\mathbf {inl}\\,{x}\\rightarrow {t}\\\\\\mathrel {\\vert }\\mathbf {inr}\\,{y}\\rightarrow {s}\\end{array}\\rbrace \\hspace{-15.0pt}\\; & \\qquad \\text{sum match}\\\\\\end{array}$   $\\begin{array}[t]{l@{\\quad \\!\\!", "}*3{l@{}}@{\\,}l}&\\mathrel {\\vert }\\quad \\, & \\mathbf {1}\\mathrel {\\vert }{\\tau }_1 \\,{\\mathop {\\times }}\\, {\\tau }_2 & \\qquad \\text{products}\\\\&\\mathrel {\\vert }& {\\tau } \\rightarrow {\\sigma } & \\qquad \\text{function} \\\\& & &\\\\&&&\\\\&\\mathrel {\\vert }\\quad \\, & \\langle \\,\\rangle \\ \\mathrel {\\vert }\\langle v, w\\rangle & \\qquad \\text{tuples}\\\\&\\mathrel {\\vert }& \\lambda {x}.", "{{t}} & \\qquad \\text{abstractions} \\\\&\\mathrel 
{\\vert }&\\mu {x}.", "{t} & \\qquad \\text{term recursion}\\\\&&&\\\\\\\\&\\mathrel {\\vert }\\quad \\, & \\langle \\,\\rangle \\ \\mathrel {\\vert }\\langle {t}, {s}\\rangle & \\qquad \\text{tuples}\\\\&\\mathrel {\\vert }\\quad \\, & \\mathbf {case}\\,{s}\\,\\mathbf {of}\\,\\langle {x}, {y}\\rangle \\rightarrow {t}\\hspace{-10.0pt}\\; & \\qquad \\text{product match}\\\\&\\mathrel {\\vert }& \\lambda {x}.", "{{t}} & \\qquad \\text{abstractions} \\\\&\\mathrel {\\vert }& {t}\\ {s} & \\qquad \\text{function app.}", "\\\\&\\mathrel {\\vert }&\\mathbf {iterate}\\,{t}\\,\\mathbf {from}\\,{x}={s}\\hspace{-10.0pt}\\; & \\qquad \\text{iteration}\\\\& \\mathrel {\\vert }& \\mu {x}.", "{t} & \\qquad \\text{term recursion}\\\\&\\mathrel {\\vert }&\\mathbf {sign}\\,{t} & \\qquad \\text{sign function}\\end{array}$ We use sugar $\\mathbf {if}\\,{r}\\,\\mathbf {then}\\,{t}\\,\\mathbf {else}\\,{s}\\,\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,\\mathbf {sign}\\,{r}\\,\\mathbf {of}\\,\\lbrace {\\_\\rightarrow {{t}}\\mathrel {\\big \\vert }\\_\\rightarrow {{r}}}\\rbrace $ , $\\mathbf {fst}\\,{t}\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\langle {x}, \\_\\rangle \\rightarrow {x}$ , $\\mathbf {snd}\\,{t}\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\langle \\_, {x}\\rangle \\rightarrow {x}$ and $\\mathbf {let} \\,\\mathbf {rec}\\,f\\!\\left( {x}\\right) = {t}\\,\\mathbf {in}\\,{s}\\stackrel{\\mathrm {def}}{=}\\mathbf {let}\\,f=\\,\\mu f.\\lambda {x}.", "{{t}}\\,\\mathbf {in}\\,{s}$ .", "In fact, we can consider iteration as syntactic sugar as well: $\\mathbf {iterate}\\,{t}\\,\\mathbf {from}\\,{x}={s}\\stackrel{\\mathrm {def}}{=}(\\mu {z}.\\lambda {x}.", "{\\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\lbrace \\mathbf {inl}\\,{x}^{\\prime }\\rightarrow {z}\\,{x}^{\\prime }\\mid \\mathbf {inr}\\,{x}^{\\prime \\prime }\\rightarrow {x}^{\\prime \\prime }\\rbrace })\\,{s}$ .", "The computations are typed according to the rules of Fig.", "REF and Fig.", "REF , where $\\mathtt {R}\\subset \\mathbb {R}$ is a fixed set of real numbers containing 0.", "For now, the reader may ignore the kinding contexts $\\Delta $ .", "They will serve to support our treatment of ML-style polymorphism later.", "Figure: Typing rules for a basic source language with real conditionals, where 𝚁⊂ℝ\\mathtt {R}\\subset \\mathbb {R} is a fixed set of real numbers containing 0.Figure: Typing rules for term recursion and iteration.We consider the standard CBV $\\beta \\eta $ -equational theory of [30] for our language, which we list in Fig.", "REF .", "We could impose further equations for the iteration construct as is done in [5], [16] as well as for the basic operations  $\\mathrm {op} $   and the sign function $\\mathbf {sign}\\,$ .", "However, such equations are unnecessary for our development.", "Figure: Basic βη\\beta \\eta -equational theory for our language.We write = #x 1 ,...,x n \\stackrel{\\# {x}_1,\\ldots ,{x}_n}{=} to indicate that the variables are fresh in the left hand side.In the top right rule, x{x} may not be free in r{r}.Equations hold on pairs of computations of the same type." 
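, "To make the desugaring of iteration into term recursion concrete, it can be transcribed directly into an executable sketch (ours), with $\\mu $ read as a least fixed point and the sum type rendered as Either:"

```haskell
import Data.Function (fix)

-- A sketch (ours): term recursion (mu x. t) read as fix, and the
-- desugaring of 'iterate t from x = s' into term recursion.
iterFrom :: (a -> Either a b) -> a -> b
iterFrom t = fix loop
  where
    loop z x = case t x of
      Left  x'  -> z x'    -- inl: keep iterating from the new state
      Right out -> out     -- inr: exit with the final result

-- Example: Euclid's gcd as an iteration.
gcdIter :: (Integer, Integer) -> Integer
gcdIter = iterFrom step
  where
    step (a, b) = if b == 0 then Right a else Left (b, a `mod` b)

main :: IO ()
main = print (gcdIter (48, 18))   -- prints 6
```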
], [ "Target language", "We define our target language by extending the source language adding the following syntax, with the typing rules of Fig.", "REF .", "$\\begin{array}[t]{l@{\\quad \\!\\!", "}*3{l@{}}@{\\,}l}{\\tau }, {\\sigma }, {\\rho } & ::=& & {-25mu}\\qquad \\text{types} \\\\&\\mathrel {\\vert }& \\ldots & \\qquad \\text{as before}\\\\\\\\v,w,u & ::=&& {-25mu}\\qquad \\text{values}\\\\&\\mathrel {\\vert }&\\overline{e} _{i}&\\qquad \\text{$i$-th canonical element} \\\\&\\mathrel {\\vert }&\\ldots &\\qquad \\text{as before}\\\\\\\\\\\\\\\\{t},{s},{r} & ::=&& {-25mu}\\qquad \\text{computations}\\\\&\\mathrel {\\vert }&\\ldots &\\qquad \\text{as before}\\\\&\\mathrel {\\vert }&\\overline{e} _{i}&\\qquad \\text{canonical element}\\end{array}$   $\\begin{array}[t]{l@{\\quad \\!\\!", "}*3{l@{}}@{\\,}l}&\\mathrel {\\vert }\\quad \\, & \\mathbf {vect} & \\qquad \\text{(co)tangent}\\\\\\\\\\\\&\\mathrel {\\vert }& \\overline{0}& \\qquad \\text{zero}\\\\&\\mathrel {\\vert }& {t} + {s}&\\qquad \\text{addition of vectors}\\\\&\\mathrel {\\vert }& {t}\\ast {s} & \\qquad \\text{scalar multiplication} \\\\& \\mathrel {\\vert }& \\mathfrak {h} _{i} {t} & \\qquad \\text{proj.", "handler}\\\\\\\\&\\mathrel {\\vert }& \\overline{0}& \\qquad \\text{zero }\\\\&\\mathrel {\\vert }& {t} + {s}&\\qquad \\text{addition of vectors}\\\\&\\mathrel {\\vert }& {t}\\ast {s} & \\qquad \\text{scalar multiplication} \\\\& \\mathrel {\\vert }& \\mathfrak {h} _{i} {t} & \\qquad \\text{proj.", "handler}\\end{array}$ Figure: Extra typing rules for the target language with iteration and recursion, where we denote ℕ * :=ℕ-0\\mathbb {N}^\\ast := \\mathbb {N}- \\left\\lbrace 0 \\right\\rbrace , 1 :=^1 := and i+1 = i ×^{i+1} = ^i \\times .The operational semantics of the target language depends on the intended behavior for the $AD$ macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ defined in REF .", "In our context, we want $\\mathbf {vect} $ to implement a vector space (playing the role of the (co)tangent), with the respective operations and the usual laws between the operations such as distributivity of the scalar multiplication over the vector addition (which is particularly useful for efficient implementations [37]).", "The terms $\\mathfrak {h} _{i} {t} $ are irrelevant for the definition and correctness of the macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ , but it is particularly useful to illustrate the expected types in REF and REF .", "Although this perspective is negligible to our correctness statement, $\\mathbf {vect} $ can be seen as a type encompassing a linear effect with handlers given by the terms $\\mathfrak {h} _{i} {t} $ .", "We are particularly interested in the case that $\\left( \\mathbf {vect} , + , \\ast , \\overline{0}\\right) $ implements the vector space $\\left( \\mathbb {R}^k, + , \\ast , 0\\right) $ , for some $k\\in \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace $ ,$\\mathbb {R}^ \\infty $ is the vector space freely generated by the infinite set $\\left\\lbrace e^{}_{i}: i\\in \\mathbb {N}^\\ast \\right\\rbrace $ .", "In other words, it is the infinity coproduct of $\\mathbb {R}^i $ ($i\\in \\mathbb {N}^\\ast $ ).", "In order to implement it, one can use lists/arrays and pattern matching for the vector addition.", "where $\\overline{e} _{i}$ implements the $i$ -th element $e^{k}_{i}\\in \\mathbb {R}^k $ of the canonical basis if $k=\\infty $ or if $i\\le k $ , and $0\\in \\mathbb {R}^k$ otherwise.", "In this case, $\\mathfrak {h} _{i} {t} $ is supposed to implement $\\mathfrak {p}_{k 
which denotes the canonical projection if $i\\le k $ and the coprojection otherwise.", "For short, we say that $\\mathbf {vect} $ implements the vector space $\\mathbb {R}^k $ to refer to the case above.", "It corresponds to the $k$ -semantics for the target language defined in REF ." ], [ "The $CBV$ models $\\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right) $ and $\\left( \\mathbf {Syn}_V^{{\\mathbf {tr}}}, \\mathbf {Syn}_\\mathcal {S}^{{\\mathbf {tr}}},\\mathbf {Syn}_\\mu ^{{\\mathbf {tr}}} , \\mathbf {Syn}_\\mathsf {it}^{{\\mathbf {tr}}} \\right) $", "As discussed in Appendix , we can translate our coarse-grain languages to fine-grain call-by-value languages.", "The fine-grain languages corresponding to the source and target languages give rise to the $CBV$ models $ \\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right)\\qquad \\mbox{and}\\qquad \\left( \\mathbf {Syn}_V^{{\\mathbf {tr}}}, \\mathbf {Syn}_\\mathcal {S}^{{\\mathbf {tr}}},\\mathbf {Syn}_\\mu ^{{\\mathbf {tr}}} , \\mathbf {Syn}_\\mathsf {it}^{{\\mathbf {tr}}} \\right)$ with the following universal properties.", "[Universal Property of the $CBV$ models (REF )] Let $\\left( {V}, \\mathcal {T}, \\mu , {\\mathsf {itt}}\\right) $ be a $CBV$ model.", "Assume that consistent assignments are given as in Fig. REF and Fig. REF .", "There is a unique $CBV$ model morphism $H: \\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right)\\rightarrow \\left( {V}, \\mathcal {T}, \\mu , {\\mathsf {itt}}\\right) $ respecting the assignment of Fig. REF .", "There is a unique $CBV$ model morphism $\\mathcal {H}: \\left( \\mathbf {Syn}_V^{{\\mathbf {tr}}}, \\mathbf {Syn}_\\mathcal {S}^{{\\mathbf {tr}}},\\mathbf {Syn}_\\mu ^{{\\mathbf {tr}}} , \\mathbf {Syn}_\\mathsf {it}^{{\\mathbf {tr}}} \\right)\\rightarrow \\left( {V}, \\mathcal {T}, \\mu , {\\mathsf {itt}}\\right) $ that extends $H$ and respects the assignment of Fig. REF .", "Figure: Assignment that gives the universal property of the source language.", "Figure: Assignment that gives the universal property of the target language."
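, "The practical content of these universal properties is that a model is fixed by a consistent assignment on the primitives, which then extends uniquely by structural recursion; a drastically simplified sketch (ours) for a first-order fragment:"

```haskell
-- A sketch (ours): a consistent assignment on constants and primitive
-- operations extends uniquely, by structural recursion, to an interpreter
-- of the whole first-order syntax, mirroring the unique CBV model
-- morphism out of the syntactic model.
data Expr = Con Double | Var String | Op String [Expr]

interp :: (String -> [Double] -> Maybe Double)  -- assignment on primitives
       -> (String -> Double)                    -- environment
       -> Expr -> Maybe Double                  -- partiality as Maybe
interp ops env = go
  where
    go (Con c)     = Just c
    go (Var x)     = Just (env x)
    go (Op f args) = mapM go args >>= ops f
```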
], [ "Dual numbers AD transformation for term recursion and iteration", "Let us fix, for all $n\\in \\mathbb {N}$ , for all $\\mathrm {op} \\in \\mathrm {Op}_n$ , for all $1\\le i \\le n$ , computations ${x}_1:,\\ldots ,{x}_n:\\vdash \\partial _i\\mathrm {op} ({x}_1,\\ldots ,{x}_n):$ , which represent the partial derivatives of $\\mathrm {op} $ .", "Using these terms for representing partial derivatives, we define, in Fig.", "REF , a structure preserving macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ on the types and computations of our language for performing AD.", "Figure: AD macro 0.8𝒟 (-)\\scalebox {0.8}{\\mathcal {D}}_{}(-) defined on types and computations.All newly introduced variables are chosen to be fresh.We provide a more efficient way of differentiating 𝐬𝐢𝐠𝐧\\mathbf {sign}\\, in Appx.", ".We extend $\\scalebox {0.8}{\\mathcal {D}}_{}$ to contexts: $\\scalebox {0.8}{\\mathcal {D}}_{}(\\lbrace {x}_1{:}{\\tau }_1,{.}{.}{.", "},{x}_n{:}{\\tau }_n\\rbrace )\\stackrel{\\mathrm {def}}{=}\\lbrace {x}_1{:}\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau }_1),{.}{.}{.", "},{x}_n{:}\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau }_n)\\rbrace $ .", "This turns $\\scalebox {0.8}{\\mathcal {D}}_{}$ into a well-typed, functorial macro in the following sense.", "[Functorial macro] Our macro respects typing, substitution, and $\\beta \\eta $ -equality: If $\\Gamma \\vdash {t} : {\\tau }$ , then $\\scalebox {0.8}{\\mathcal {D}}_{}(\\Gamma )\\vdash \\scalebox {0.8}{\\mathcal {D}}_{}({t}):\\scalebox {0.8}{\\mathcal {D}}_{}({\\tau })$ .", "$\\scalebox {0.8}{\\mathcal {D}}_{}(\\mathbf {let}\\,{x}=\\,{t}\\,\\mathbf {in}\\,{s})=\\mathbf {let}\\,{x}=\\,\\scalebox {0.8}{\\mathcal {D}}_{}({t})\\,\\mathbf {in}\\,\\scalebox {0.8}{\\mathcal {D}}_{}({s})$ .", "If ${t}\\stackrel{\\beta \\eta }{=}{s}$ , then $\\scalebox {0.8}{\\mathcal {D}}_{}({t})\\stackrel{\\beta \\eta }{=}\\scalebox {0.8}{\\mathcal {D}}_{}({s})$ .", "Our macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ can be seen as a class of macros, since it depends on the target language.", "More precisely, it depends on what $\\mathbf {vect} $ implements (see REF )." ], [ "AD transformation as a $CBV$ model morphism", "By the universal property of $\\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right) $ established in Theorem REF , the assignment defined in Fig.", "REF induces a unique $CBV$ model morphism $\\mathbb {D}: \\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right)\\rightarrow \\left( \\mathbf {Syn}_V^{{\\mathbf {tr}}}, \\mathbf {Syn}_\\mathcal {S}^{{\\mathbf {tr}}},\\mathbf {Syn}_\\mu ^{{\\mathbf {tr}}} , \\mathbf {Syn}_\\mathsf {it}^{{\\mathbf {tr}}} \\right).$ Figure: AD assignment.The macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ defined in Fig.", "REF is encompassed by (REF ).", "We establish basic facts about the semantics of the automatic differentiation." 
], [ "Basic concrete model", "The most fundamental example of a $CBV$ $\\mathbf {\\omega Cpo}$ -pair is given by $\\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) $ where $\\left( -\\right) _\\bot $ is the lax idempotent monad that freely adds an initial object $\\bot $ to each $\\omega $ -cpo.", "Indeed, of course, $\\mathbf {\\omega Cpo}\\left( W , \\left( Y\\right) _\\bot \\right) $ is pointed for any pair $\\left( W,Y\\right)\\in \\mathsf {ob}\\, \\mathbf {\\omega Cpo}\\times \\mathsf {ob}\\, \\mathbf {\\omega Cpo}$ .", "We consider the product $\\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right)\\times \\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) = \\left( \\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) $ , where, by abuse of language, $\\left( (C, C^{\\prime })\\right) _\\bot =\\left( \\left( C\\right) _\\bot , \\left( C^{\\prime }\\right) _\\bot \\right) $ .", "By Lemma REF , $\\mathcal {U}_{\\mathcal {BV}} \\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) $ and $\\mathcal {U}_{\\mathcal {BV}} \\left( \\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) = \\mathcal {U}_{\\mathcal {BV}} \\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right)\\times \\mathcal {U}_{\\mathcal {BV}} \\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) $ are $CBV$ models." ], [ "Differentiable functions and interleaved derivatives", "Henceforth, unless stated otherwise, the cartesian spaces $\\mathbb {R}^n$ and its subspaces are endowed with the respective discrete $\\mathbf {\\omega Cpo}$ -structures.", "[Interleaving function] For each $(n,k)\\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , denoting by $\\mathbb {I}_{n}$ the set $\\left\\lbrace 1,\\ldots , n\\right\\rbrace $ , we define the isomorphism (in $\\mathbf {\\omega Cpo}$ with the respective discrete $\\mathbf {\\omega Cpo}$ -structures) $\\varphi _{n, k} : & \\mathbb {R}^n\\times \\left( \\mathbb {R}^k\\right) ^n & \\rightarrow \\left( \\mathbb {R}\\times \\mathbb {R}^k \\right) ^{n}\\\\&\\left( (x_j)_{j\\in \\mathbb {I}_{n}}, (y_j)_{j\\in \\mathbb {I}_{n}} \\right) & \\mapsto \\left( x_j, y_j \\right) _{j\\in \\mathbb {I}_{n}}.\\nonumber $ For each open subset $U\\subset \\mathbb {R}^n$ , we denote by $\\varphi _{n, k}^U : U\\times \\left( \\mathbb {R}^k\\right) ^n \\rightarrow \\varphi _{n,k}\\left( U\\times \\left( \\mathbb {R}^k\\right) ^n \\right) $ the isomorphism obtained from restricting $\\varphi _{n,k} $ .", "In Def.", "REF , Remark REF and Lemma REF , let $\\displaystyle g : U\\rightarrow \\coprod _{j\\in L} V_j $ be a map where $U$ is an open subset of $\\mathbb {R}^n $ , and, for each $i\\in L$ , $V_i $ is an open subset of $\\mathbb {R}^{m_i}$ .", "[Derivative] The map $g$ is differentiable if, for any $i\\in L$ , $g^{-1}\\left( V_i \\right) =W _i $ is open in $\\mathbb {R}^n$ and the restriction $g|_{W_i} : W_ i \\rightarrow V_i $ is differentiable w.r.t the submanifold structures $W_i\\subset \\mathbb {R}^{n}$ and $ V_i \\subset \\mathbb {R}^{m_i} $ .", "In this case, for each $k\\in \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , we define the function: $\\mathfrak {D}^{k}{g} : & \\varphi _{n, k}\\left( U\\times \\left( \\mathbb {R}^k\\right) ^n\\right) &\\rightarrow \\coprod _{j\\in L}\\left( \\varphi _{{m_j}, k}\\left( V_i\\times \\left( \\mathbb {R}^k\\right) 
^{m_i}\\right)\\right) \\\\& z &\\mapsto \\iota _{m_j} \\circ \\varphi _{{m_j},k}^{V_j}\\left( g(x), \\tilde{w}\\cdot g^{\\prime }(x)^t\\right) , \\text{whenever } \\varphi _{n, k} ^{-1} \\left( z \\right) =\\left( x, w \\right)\\in W_i\\times \\left(\\mathbb {R}^k\\right) ^n \\nonumber $ in which $\\tilde{w}$ is the linear transformation $ \\mathbb {R}^n\\rightarrow \\mathbb {R}^k $ corresponding to the vector $w$ , $\\cdot $ is the composition of linear transformations, $\\iota _{m_i} $ is the obvious $ith$ -coprojection of the coproduct (in the category $\\mathbf {\\omega Cpo}$ ), and $g^{\\prime }(x)^t $ is the transpose of the derivative $g^{\\prime }(x) :\\mathbb {R}^n\\rightarrow \\mathbb {R}^{m_i} $ of $g|_{W_i} : W_ i \\rightarrow V_i $ at $x\\in U$ .", "Remark It should be noted that, in Def.", "REF , $W_i$ might be empty for some $i\\in L $ .", "In this case, $g|_{W_i} : W_ i \\rightarrow V_i $ is trivially differentiable.", "Analogously, $U$ might be empty.", "In this case, the function $g$ is differentiable and $\\mathfrak {D}^{k}{g} $ is the unique morphism with domain $\\emptyset $ and codomain as in (REF ).", "Let $\\dot{g}$ be a function with domain as in (REF ).", "The map $g$ is differentiable and $\\dot{g} =\\mathfrak {D}^{k}{g} $ if, and only if, $g\\circ \\alpha $ is differentiable and $\\dot{g}\\circ \\mathfrak {D}^{k}{\\alpha } = \\mathfrak {D}^{k}{\\left(g\\circ \\alpha \\right)} $ for any differentiable map $\\alpha : \\mathbb {R}^n \\rightarrow U $ .", "[Differentiable partial maps] Let $ \\displaystyle h : \\coprod _{r\\in K } \\mathbb {R}^{n_r} \\rightarrow \\left( \\coprod _{j\\in L } \\mathbb {R}^{m_j} \\right) _\\bot $ be a morphism in $\\mathbf {\\omega Cpo}$ .", "We say that $h$ is differentiable if, for each $i\\in K$ , the component $\\displaystyle h_i := h\\circ \\iota _{i} : \\mathbb {R}^{n_i }\\rightarrow \\left( \\coprod _{j\\in L } \\mathbb {R}^{m_j} \\right) _\\bot $ satisfies the following two conditions: $\\displaystyle h_i ^{-1}\\left( \\coprod _{j\\in L } \\mathbb {R}^{m_j} \\right) = U_i $ is open in $\\mathbb {R}^{n_i }$ ; the corresponding total function (REF ) is differentiable.", "$\\underline{h_i} = h|_{{U_i}} : U_i\\rightarrow \\coprod _{j\\in L } \\mathbb {R}^{m_j}$ $\\displaystyle \\mathfrak {d}^{k}\\left( h\\right) :\\coprod _{r\\in K } \\left( \\mathbb {R}\\times \\mathbb {R}^k \\right)^{n_r} \\rightarrow \\left( \\coprod _{j\\in L } \\left( \\mathbb {R}\\times \\mathbb {R}^k \\right) ^{m_j} \\right) _\\bot $ In this case, for each $k\\in \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace $ , we define (REF ) to be the morphism induced by $ \\langle \\mathfrak {d}^{k}\\left( h_r \\right) \\rangle _{r\\in K}$ where, for each $i\\in K$ , $\\mathfrak {d}^{k}\\left( h_i \\right) $ is defined by (REF ), which is just the corresponding canonical extension of the map $\\mathfrak {D}^{k}{h_i} $ .", "$\\mathfrak {d}^{k}\\left( h_i\\right) : & \\left( \\mathbb {R}\\times \\mathbb {R}^k \\right) ^{n_i} &\\rightarrow \\left( \\coprod _{j\\in L } \\left( \\mathbb {R}\\times \\mathbb {R}^k \\right) ^{m_j} \\right) _\\bot \\\\&z &\\mapsto {\\left\\lbrace \\begin{array}{ll}\\mathfrak {D}^{k}{h_i} \\left( z\\right) , & \\text{if } z\\in \\varphi _{{n_i}, k}\\left( U_i\\times \\left( \\mathbb {R}^k\\right) ^{n_i}\\right)\\subset \\left( \\mathbb {R}\\times \\mathbb {R}^k \\right) ^{n_i};\\\\\\bot , & \\text{otherwise.}\\end{array}\\right.", "}\\nonumber $" ], [ "The semantics for the source language", "We give a concrete semantics for our 
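, "To make the interleaving and $\\mathfrak {d}^{k}$ concrete (a sketch of ours, with $n=2$ , $k=1$ , and the total primitive $g(x,y) = x\\cdot y$ ):"

```haskell
-- A sketch (ours): phi_{2,1} interleaves primals with tangents, and d^1(g)
-- for the total, differentiable g(x, y) = x * y acts on the interleaved
-- pairs; the (-)_bot partiality is rendered as Maybe.
type DualR = (Double, Double)                 -- one (R x R^1) component

interleave :: [Double] -> [Double] -> [DualR] -- phi_{n,1} is just zip
interleave = zip

dMul :: [DualR] -> Maybe [DualR]              -- d^1(g), total in this case
dMul [(x, dx), (y, dy)] = Just [(x * y, y * dx + x * dy)]
dMul _                  = Nothing             -- wrong arity
```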
], [ "The semantics for the source language", "We give a concrete semantics for our language, interpreting it in the $CBV$ $\\mathbf {\\omega Cpo}$ -pair $\\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) $ .", "We denote by $\\mathbb {R}$ the discrete $\\omega $ -cpo of real numbers, and we define $ \\mathsf {sign}: \\mathbb {R}\\rightarrow \\left( \\mathsf {1}\\sqcup \\mathsf {1}\\right) _\\bot $ by $\\mathsf {sign}(x) ={\\left\\lbrace \\begin{array}{ll}\\bot , & \\text{if } x = 0\\\\\\iota _{1} (\\ast ) , & \\text{if } x<0 \\\\\\iota _{2} (\\ast ) , & \\text{if } x>0\\end{array}\\right. }$ where $\\iota _{1} , \\iota _{2} : \\mathsf {1}\\rightarrow \\mathsf {1}\\sqcup \\mathsf {1}$ are the two coprojections of the coproduct.", "By the universal property of $\\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right) $ , there is only one $CBV $ model morphism $[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] : \\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right) \\rightarrow \\mathcal {U}_{\\mathcal {BV}} \\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right)$ consistent with the assignment of Fig. REF , where $\\mathsf {c} $ is the constant that $\\underline{c}$ intends to implement and, for each $\\mathrm {op} \\in \\mathrm {Op}_n $ , $ {f}_\\mathrm {op} $ is the partial map that $\\mathrm {op} $ intends to implement.", "Figure: Semantic assignment for each primitive operation $\\mathrm {op} \\in \\mathrm {Op}_n$ ($n\\in \\mathbb {N}$ ) and each constant $c\\in \\mathtt {R}$ .", "The $CBV$ model morphism $[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] $ (or, more precisely, its underlying functor) gives the semantics for the source language.", "Although our work holds for more general contexts, we consider the following assumption on the semantics of our language.", "Assumption For each $n\\in \\mathbb {N}$ and $\\mathrm {op} \\in \\mathrm {Op}_n $ , $[\\hspace{-2.5pt}[\\mathrm {op} ]\\hspace{-2.5pt}] = {f}_\\mathrm {op} : \\mathbb {R}^n \\rightarrow \\left( \\mathbb {R}\\right) _\\bot $ is differentiable."
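, "For intuition, the partiality and the $\\mathsf {sign}$ semantics above can be mirrored directly in an executable sketch (ours), with $\\left( -\\right) _\\bot $ read as Haskell's Maybe:"

```haskell
-- A sketch (ours): (-)_bot as Maybe, the two-point coproduct as
-- Either () (), and sign undefined (bottom) exactly at 0, matching the
-- semantics above.
semSign :: Double -> Maybe (Either () ())
semSign x
  | x < 0     = Just (Left ())    -- iota_1 (*)
  | x > 0     = Just (Right ())   -- iota_2 (*)
  | otherwise = Nothing           -- bottom at x = 0
```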
], [ "The $k$ -semantics for the target language", "For each $k\\in \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace $ , we define the $k$ -semantics for the target language by interpreting $\\mathbf {vect} $ as the vector space $\\mathbb {R}^k$ .", "Namely, we extend the semantics $[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] $ of the source language into a $k$ -semantics of the target language.", "More precisely, by Theorem REF , there is a unique $CBV$ model morphism (REF ) that extends $[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] $ and is consistent with the assignment given by the vector structure () together with the projection (coprojection) $[\\hspace{-2.5pt}[\\mathfrak {h} _{i} ]\\hspace{-2.5pt}] _ {k} : \\mathbb {R}^k \\rightarrow \\mathbb {R}^i $ if $i\\le k $ ($i\\ge k$ ), for each $i\\in \\mathbb {N}^\\ast $ .", "$[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] _ {k} : \\left( \\mathbf {Syn}_V^{{\\mathbf {tr}}}, \\mathbf {Syn}_\\mathcal {S}^{{\\mathbf {tr}}},\\mathbf {Syn}_\\mu ^{{\\mathbf {tr}}} , \\mathbf {Syn}_\\mathsf {it}^{{\\mathbf {tr}}} \\right) &\\rightarrow &\\mathcal {U}_{\\mathcal {BV}} \\left( \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right)\\\\\\left( [\\hspace{-2.5pt}[\\mathbf {vect} ]\\hspace{-2.5pt}] _ {k} , [\\hspace{-2.5pt}[ + ]\\hspace{-2.5pt}] _ {k} , [\\hspace{-2.5pt}[ \\ast ]\\hspace{-2.5pt}] _ {k} , [\\hspace{-2.5pt}[\\overline{0}]\\hspace{-2.5pt}] _ {k} \\right) &:= & \\left( \\mathbb {R}^k, + , \\ast , 0 \\right) $" ], [ "Prim-op-correct macro", "[Sound for primitives] A macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ as defined in Fig.", "REF and its corresponding $CBV$ model morphism $\\mathbb {D}$ as defined in (REF ) are sound for primitives if, for any primitive $\\mathrm {op} \\in \\mathrm {Op}$ , $ [\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}(\\mathrm {op} )]\\hspace{-2.5pt}] _ {k} = \\mathfrak {d}^{k}\\left( [\\hspace{-2.5pt}[\\mathrm {op} ]\\hspace{-2.5pt}] \\right) $ for any $k$ .", "For each $j\\in \\mathbb {I}_{n}$ , given a differentiable function $f : \\mathbb {R}^n\\rightarrow \\left( \\mathbb {R}\\right) _\\bot $ , we denote by $\\mathfrak {d}_{j}\\left( f \\right) : \\mathbb {R}^n\\rightarrow \\left( \\mathbb {R}\\times \\mathbb {R}\\right) _\\bot $ the function defined by $\\mathfrak {d}_{j}\\left( f \\right) \\left( x_1,\\ldots ,x_n\\right) = \\mathfrak {d}^{1}\\left( f \\right) \\circ \\varphi _{n,1} \\left( (x_1,\\ldots ,x_n), e^{n}_{j}\\right) $ , where $e^{n}_{j} $ the $j$ -th vector of the canonical basis of $\\mathbb {R}^n $ .", "The macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ defined in Fig.", "REF is sound for primitives provided that $[\\hspace{-2.5pt}[\\langle \\mathrm {op} ({y}_1,\\ldots ,{y}_n), \\partial _j\\mathrm {op} ({y}_1,\\ldots ,{y}_n) \\rangle ]\\hspace{-2.5pt}] = \\mathfrak {d}_{j}\\left( [\\hspace{-2.5pt}[\\mathrm {op} ]\\hspace{-2.5pt}] \\right) ,$ for any primitive operation $\\mathrm {op} \\in \\mathrm {Op}_ n $ of the source language." 
], [ "Enriched scone and subscone", "Given an $\\mathbf {\\omega Cpo}$ -functor $G:{B}\\rightarrow {D}$ , the comma $\\mathbf {\\omega Cpo}$ -category ${D}\\downarrow G$ of the identity along $G$ in ${\\mathbf {\\omega Cpo}}\\textrm {-}\\mathsf {Cat}$ is defined as follows.", "The objects of ${D}\\downarrow G$ are triples $(D\\in {D}, C\\in {B}, j:D\\rightarrow G(C) )$ in which $j$ is a morphism of ${D}$ ; a morphism $(D,C, j)\\rightarrow (D^{\\prime }, C^{\\prime }, h)$ between objects of ${D}\\downarrow G$ is a pair (REF ) making (REF ) commutative in ${D}$ ; $\\alpha = \\left( \\alpha _0 : D\\rightarrow D^{\\prime } , \\alpha _1 : C\\rightarrow C^{\\prime }\\right)$ $ (0,0)|l|/->/<0,-225>[{D}`{{G(C)}};{{j}}](0,0)|a|/->/<1200,0>[{D}`{{D}^{\\prime }};{{{\\alpha }_0}}](1200,0)|r|/->/<0,-225>[{{D}^{\\prime }}`{{G(C^{\\prime })}};{{h}}](0,-225)|b|/->/<1200,0>[{{G(C)}}`{{G(C^{\\prime })}};{{G\\left(\\alpha _1\\right)}}]$ if $\\alpha = \\left( \\alpha _0 : D\\rightarrow D^{\\prime } , \\alpha _1 : C\\rightarrow C^{\\prime }\\right), \\beta = \\left( \\beta _0 : D\\rightarrow D^{\\prime } , \\beta _1 : C\\rightarrow C^{\\prime }\\right) : \\left( D, C , j\\right)\\rightarrow \\left( D^{\\prime }, C^{\\prime }, h\\right) $ , are two morphisms of ${D}\\downarrow G$ , we have that $\\alpha \\le \\beta $ if $\\alpha _0\\le \\beta _0 $ in ${D}$ and $\\alpha _1\\le \\beta _1$ in ${B}$ .", "Following the approach of [26], we have: Let $G: {B}\\rightarrow {D}$ be a right $\\mathbf {\\omega Cpo}$ -adjoint functor.", "Assuming that ${D}$ has finite $\\mathbf {\\omega Cpo}$ -products and ${B}$ has finite $\\mathbf {\\omega Cpo}$ -coproducts, the $\\mathbf {\\omega Cpo}$ -functor $\\mathcal {L}: {D}\\downarrow G\\rightarrow {D}\\times {B},$ defined by $\\left( D\\in {D}, C\\in {B}, j:D\\rightarrow G(C) \\right)\\mapsto \\left( D, C \\right)$ , is $\\mathbf {\\omega Cpo}$ -comonadic and $\\mathbf {\\omega Cpo}$ -monadic.", "This implies, in particular, that $\\mathcal {L}$ creates (and strictly preserves) $\\mathbf {\\omega Cpo}$ -limits and colimits.", "By Theorem and the enriched adjoint triangle theoremSee [10] for the original adjoint triangle theorem, and [25] for the enriched version., we have: Let $G: {B}\\rightarrow {D}$ be a right $\\mathbf {\\omega Cpo}$ -adjoint functor between $\\mathbf {\\omega Cpo}$ -bicartesian closed categories.", "In this case, ${D}\\downarrow G$ is an $\\mathbf {\\omega Cpo}$ -bicartesian closed category.", "Moreover, if ${D}\\times {B}$ is $\\mathbf {\\omega Cpo}$ -cocomplete, so is ${D}\\downarrow G$ .", "Theorem and Corollary are $\\mathbf {\\omega Cpo}$ -enriched versions of the fundamental results of [26].", "The details and proofs are presented in Appx.", "." 
], [ "Subscone", "Henceforth, we assume that $\\mathbf {Sub}\\left( {D}\\downarrow G \\right)$ is a full reflective and replete $\\mathbf {\\omega Cpo}$ -subcategory of ${D}\\downarrow G$ .", "We denote, herein, by $\\mathfrak {T}_{sub}$ the idempotent $\\mathbf {\\omega Cpo}$ -monad induced by the $\\mathbf {\\omega Cpo}$ -adjuntion.", "Recall that a morphism $q$ in $\\mathbf {\\omega Cpo}$ is full if its underlying functor is full.", "In this case, the underlying functor is also faithful and injective on objects.", "Moreover, a morphism $j$ in an $\\mathbf {\\omega Cpo}$ -category ${B}$ is full if ${B}\\left( B , j \\right) $ is full in $\\mathbf {\\omega Cpo}$ for any $B\\in {B}$ .", "Furthermore, recall that an $\\mathbf {\\omega Cpo}$ -functor $H:{W}\\rightarrow {Z}$ is locally full if, for any $\\left( X,W \\right)\\in \\mathsf {ob}\\, {W}\\times \\mathsf {ob}\\, {W} $ , the morphism $H : {W}\\left( X , W \\right) \\rightarrow {Z}\\left( HX , HW \\right) $ is a full $\\mathbf {\\omega Cpo}$ -morphism.", "It should be noted that the 2-functor underlying a locally full $\\mathbf {\\omega Cpo}$ -functor is locally fully faithful.", "Moreover, since every full morphism in $\\mathbf {\\omega Cpo}$ is injective on objects, every locally full $\\mathbf {\\omega Cpo}$ -functor is faithful (locally injective on objects).", "Assumption We require that: whenever $\\left( D\\in {D}, C\\in {B}, j\\right)\\in \\mathbf {Sub}\\left( {D}\\downarrow G \\right) $ , $j$ is a full morphism in ${B}$ ; $G: {B}\\rightarrow {D}$ is a right $\\mathbf {\\omega Cpo}$ -adjoint functor between $\\mathbf {\\omega Cpo}$ -bicartesian closed categories; $\\mathfrak {T}_{sub}$ strictly preserves $\\mathbf {\\omega Cpo}$ -products; Diag.", "(REF ) commutes.", "$ (0,0)/->/<525,0>[{{\\mathbf {Sub}\\left( {D}\\downarrow G \\right)}}`{{{D}\\downarrow {G}}};](525,0)|a|/->/<450,0>[{{{D}\\downarrow {G}}}`{{{D}\\times {{B}}}};{{\\mathcal {L}}}](975,0)|a|/->/<375,0>[{{{D}\\times {{B}}}}`{{{B}}};{{\\pi _{{B}}}}]$ $ (150,0)|a|/->/<600,0>[{{{D}\\downarrow {G}}}`{{{D}\\downarrow {G}}};{\\mathfrak {T}_{sub}}](750,0)|r|/->/<150,-300>[{{{D}\\downarrow {G}}}`{{{D}\\times {{B}}}};{{\\mathcal {L}}}](150,0)|l|/->/<-150,-300>[{{{D}\\downarrow {G}}}`{{{D}\\times {{B}}}};{{\\mathcal {L}}}](0,-300)|b|/->/<450,0>[{{{D}\\times {{B}}}}`{{{B}}};{{\\pi _{{B}}}}](900,-300)|b|/->/<-450,0>[{{{D}\\times {{B}}}}`{{{B}}};{{\\pi _{{B}}}}]$ We denote by $\\underline{\\mathcal {L}}: \\mathbf {Sub}\\left( {D}\\downarrow G \\right)\\rightarrow {B}$ the $\\mathbf {\\omega Cpo}$ -functor given by the composition (REF ) where the unlabeled arrow is the full inclusion.", "The full inclusion $\\mathbf {Sub}\\left( {D}\\downarrow G \\right)\\rightarrow {D}\\downarrow G $ creates (and strictly preserves) $\\mathbf {\\omega Cpo}$ -limits and $\\mathbf {\\omega Cpo}$ -exponentials.", "Moreover, if ${D}\\downarrow G $ is $\\mathbf {\\omega Cpo}$ -cocomplete, so is $\\mathbf {Sub}\\left( {D}\\downarrow G \\right)$ .", "$\\mathbf {Sub}\\left( {D}\\downarrow G \\right)\\rightarrow {D}\\downarrow G $ is $\\mathbf {\\omega Cpo}$ -monadic and, hence, it creates $\\mathbf {\\omega Cpo}$ -limits.", "By REF of REF , $\\mathfrak {T}_{sub}$ is commutative and, hence, $\\mathbf {Sub}\\left( {D}\\downarrow G \\right)\\rightarrow {D}\\downarrow G $ creates $\\mathbf {\\omega Cpo}$ -exponentials.", "Since $\\mathfrak {T}_{sub}$ is idempotent, $\\mathbf {Sub}\\left( {D}\\downarrow G \\right)$ is $\\mathbf {\\omega Cpo}$ -cocomplete whenever ${D}\\downarrow G $ is.", "$\\mathbf 
, "$\\mathbf {Sub}\\left( {D}\\downarrow G \\right)$ is an $\\mathbf {\\omega Cpo}$ -bicartesian closed category.", "Moreover, if ${D}\\times {B}$ is $\\mathbf {\\omega Cpo}$ -cocomplete, so is $\\mathbf {Sub}\\left( {D}\\downarrow G \\right)$ .", "It follows from Theorem REF and Corollary REF .", "The $\\mathbf {\\omega Cpo}$ -functor $\\underline{\\mathcal {L}}: \\mathbf {Sub}\\left( {D}\\downarrow G \\right)\\rightarrow {B}$ is strictly (bi)cartesian closed and locally full (hence, faithful).", "Moreover, $\\underline{\\mathcal {L}}$ strictly preserves $\\mathbf {\\omega Cpo}$ -colimits.", "The $\\mathbf {\\omega Cpo}$ -functors $\\mathcal {L}:{D}\\downarrow G\\rightarrow {D}\\times {B}$ and $\\pi _{{B}} : {D}\\times {B}\\rightarrow {B}$ strictly preserve $\\mathbf {\\omega Cpo}$ -weighted limits and colimits.", "Since $\\mathfrak {T}_{sub}$ is idempotent and (REF ) commutes, this implies that $\\underline{\\mathcal {L}}$ strictly preserves $\\mathbf {\\omega Cpo}$ -limits and colimits.", "The composition $\\pi _{{B}} \\circ \\mathcal {L}$ has a left $\\mathbf {\\omega Cpo}$ -adjoint given by $ C\\mapsto \\left( \\mathsf {0}, C, \\iota _{\\mathsf {0}} \\right) $ .", "Since the counit of this $\\mathbf {\\omega Cpo}$ -adjunction is the identity and $\\pi _{{B}} \\circ \\mathcal {L}$ strictly preserves $\\mathbf {\\omega Cpo}$ -products, we get that this $\\mathbf {\\omega Cpo}$ -adjunction strictly satisfies the Frobenius reciprocity condition.", "This implies that $\\pi _{{B}} \\circ \\mathcal {L}$ strictly preserves $\\mathbf {\\omega Cpo}$ -exponentials.", "Since $\\mathfrak {T}_{sub}$ strictly preserves $\\mathbf {\\omega Cpo}$ -products, we get that $\\mathbf {Sub}\\left( {D}\\downarrow G \\right)\\rightarrow {D}\\downarrow G $ strictly preserves $\\mathbf {\\omega Cpo}$ -exponentials as well.", "Therefore, $\\underline{\\mathcal {L}}$ strictly preserves $\\mathbf {\\omega Cpo}$ -exponentials.", "That $\\underline{\\mathcal {L}}$ is locally full (and, hence, faithful) follows from REF of REF .", "Remark Condition REF of REF ensures that our subscone indeed gives us a proof-irrelevant approach: in particular, as stressed above, it implies that $\\underline{\\mathcal {L}}$ is faithful.", "Given objects $(D, C, j), (D^{\\prime }, C^{\\prime }, j^{\\prime })$ and a morphism $f: C\\rightarrow C^{\\prime } $ in ${B}$ , if there is $\\alpha : (D, C, j)\\rightarrow (D^{\\prime }, C^{\\prime }, j^{\\prime })$ satisfying $\\underline{\\mathcal {L}}(\\alpha ) = f $ , then $\\alpha $ is unique with this property.", "In this case, we say that $f$ defines a morphism $(D, C, j)\\rightarrow (D^{\\prime }, C^{\\prime }, j^{\\prime })$ in $\\mathbf {Sub}\\left( {D}\\downarrow G \\right)$ ."
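, "Concretely, fullness of the legs $j$ lets one read a subscone object as a predicate on semantic values and a morphism as a predicate-preserving map, with no extra proof data to choose; an executable caricature (ours):"

```haskell
-- A sketch (ours): with full legs, a subscone object is just a predicate
-- on a carrier; a map defines a morphism iff it preserves the predicate,
-- and there is no further proof-relevant data (proof irrelevance).
data SubObj a = SubObj { elems :: [a], holds :: a -> Bool }

definesMorphism :: SubObj a -> SubObj b -> (a -> b) -> Bool
definesMorphism src tgt f = all preserved (elems src)
  where preserved x = not (holds src x) || holds tgt (f x)
```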
], [ "Correctness of Dual Numbers AD", "In this section, we show that, as long as the macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ defined in Fig.", "REF is sound for primitives and $\\mathbf {vect} $ implements $\\mathbb {R}^k $ , $\\scalebox {0.8}{\\mathcal {D}}_{}$ is correct according to the $k$ -specification below.", "More precisely, we prove that: Assume that $\\mathbf {vect} $ implements the vector space $\\mathbb {R}^k$ , for some $k\\in \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace $ .", "For any program ${x}:{\\tau }\\vdash {t}:{\\sigma }$ where ${\\tau },{\\sigma }$ are data types, we have that $[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] $ is differentiable and, moreover, $[\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}({t} )]\\hspace{-2.5pt}] _ {k} = \\mathfrak {d}^{k}\\left( [\\hspace{-2.5pt}[{t} ]\\hspace{-2.5pt}] \\right) $ provided that $\\scalebox {0.8}{\\mathcal {D}}_{}$ is sound for primitives.", "In REF and REF , we show how we can correctly get the derivative and the transpose derivative out of Theorem .", "In other words, we get forward and reverse AD out of our correct macro, provided that $\\mathbf {vect} $ implements a suitable vector space $\\mathbb {R}^k $ ." ], [ "Basic setting", "Henceforth, we follow the notation and definitions established in Section .", "In particular, unless stated otherwise, the cartesian spaces $\\mathbb {R}^n$ and its subspaces are endowed with the discrete $\\mathbf {\\omega Cpo}$ -structure.", "For each $\\left( n, k\\right) \\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , we define the $\\mathbf {\\omega Cpo}$ -functor (REF ).", "We consider the full reflective $\\mathbf {\\omega Cpo}$ -subcategory $ \\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)$ of $\\mathbf {\\omega Cpo}\\downarrow G_{n, k} $ whose objects are triples (REF ) such that $j$ is full (and, hence, injective on objects).", "$ G_{n, k} \\stackrel{\\mathrm {def}}{=}\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}\\left( \\left( \\mathbb {R}^n , \\left( \\mathbb {R}\\times \\mathbb {R}^k\\right) ^n \\right) , \\left( - , - \\right) \\right) : \\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}\\rightarrow \\mathbf {\\omega Cpo}$ $\\left( D\\in \\mathbf {\\omega Cpo},\\, \\left( C, C^{\\prime }\\right)\\in \\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo},\\, \\left( j: D\\rightarrow G_{n , k}\\left( C, C^{\\prime } \\right)\\right)\\in \\mathbf {\\omega Cpo}\\right)$ The $\\mathbf {\\omega Cpo}$ -functor $G_{n, k} $ together with $ \\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)$ satisfies REF .", "Therefore: $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)$ is a cocomplete $\\mathbf {\\omega Cpo}$ -cartesian closed category.", "Moreover, the forgetful $\\mathbf {\\omega Cpo}$ -functor $\\underline{\\mathcal {L}}_{n,k} : \\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)\\rightarrow \\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}$ is locally full and strictly cartesian closed.", "Furthermore, it strictly preserves $\\mathbf {\\omega Cpo}$ -colimits.", "It follows from Corollary REF and Theorem REF ." 
], [ "The monad", "Let $(n,k)\\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ .", "In order to get a categorical model of our language, we need to define a partiality monad for $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)$ .", "We denote by $\\mathfrak {O}_{n}$ the set of proper open non-empty subsets of the cartesian space $\\mathbb {R}^n $ .", "For each $U\\in \\mathfrak {O}_{n} $ , we define $\\mathsf {Diff}_{\\left( U, n, k \\right) } &\\stackrel{\\mathrm {def}}{=}&\\left( \\left\\lbrace \\left( g: \\mathbb {R}^n\\rightarrow U, \\mathfrak {D}^{k}{g} \\right) : g \\mbox{ is differentiable} \\right\\rbrace , \\left( U, \\varphi _{n,k}\\left( U\\times \\left( \\mathbb {R}^k\\right) ^n\\right) \\right), \\mathrm {incl.", "}\\right) \\\\&\\in &\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right) .$ We define the $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right)$ -monad $\\mathcal {P}_{n, k}\\left( -\\right) _\\bot $ on $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right)$ by $\\mathcal {P}_{n, k}\\left( D, \\left( C, C^{\\prime }\\right), j\\right) _\\bot \\stackrel{\\mathrm {def}}{=}\\left( \\underline{\\mathcal {P}_{n, k}\\left( D, \\left( C, C^{\\prime }\\right) , j\\right) _\\bot } , \\left( \\left( C\\right) _\\bot , \\left( C^{\\prime }\\right) _\\bot \\right) , \\mathtt {j}_{X} \\right)$ where $\\underline{\\mathcal {P}_{n, k}\\left( D, \\left( C, C^{\\prime }\\right) , j\\right) _\\bot } $ is the union $\\left\\lbrace \\bot \\right\\rbrace \\sqcup G_{n, k} \\left( C , C^{\\prime }\\right)\\sqcup \\left( \\coprod _ {U\\in \\mathfrak {O}_{n} }\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)\\left( \\mathsf {Diff}_{\\left( U, n, k \\right) } , \\left( D, \\left( C, C^{\\prime }\\right) , j\\right) \\right) \\right)$ with the full $\\mathbf {\\omega Cpo}$ -substructure of $ G_{n, k} \\left( \\left( C\\right) _\\bot , \\left( C^{\\prime }\\right) _\\bot \\right) $ induced by the inclusion $\\mathtt {j}_{X} $ which is defined by the following components: the inclusion $\\left\\lbrace \\bot \\right\\rbrace \\rightarrow G_{n, k} \\left( \\left( C\\right) _\\bot , \\left( C^{\\prime }\\right) _\\bot \\right) $ of the least morphism $\\bot : \\left( \\mathbb {R}^n , \\left( \\mathbb {R}\\times \\mathbb {R}^k\\right) ^n \\right)\\rightarrow \\left( \\left( C\\right) _\\bot , \\left( C^{\\prime }\\right) _\\bot \\right) $ in $\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}\\left( \\left( \\mathbb {R}^n , \\left( \\mathbb {R}\\times \\mathbb {R}^k \\right) ^n \\right) , \\left( \\left( C\\right) _\\bot , \\left( C^{\\prime }\\right) _\\bot \\right) \\right) $ ; the inclusion of the total functions $G_{n, k}\\left( \\eta _{C} , \\eta _{C^{\\prime }} \\right) : G_{n, k} \\left( C , C^{\\prime } \\right)\\rightarrow G_{n, k} \\left( \\left( C\\right) _\\bot , \\left( C^{\\prime }\\right) _\\bot \\right) $ ; for each $U\\in \\mathfrak {O}_{n}$ , the injection $\\displaystyle \\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)\\left( \\mathsf {Diff}_{\\left( U, n, k \\right) } , \\left( D, \\left( C, C^{\\prime }\\right) , j\\right) \\right) \\rightarrow G_{n, k} \\left( \\left( C\\right) _\\bot , \\left( C^{\\prime }\\right) _\\bot \\right) $ defined by $ \\left( \\alpha _0, \\alpha _1 = \\left( \\beta _0 : U\\rightarrow C, \\beta _1 : \\varphi _{n, k}\\left( U\\times \\left( \\mathbb {R}^k \\right) 
^n\\right) \\rightarrow C^{\\prime } \\right) \\right) \\mapsto \\left( \\overline{\\beta _0} : \\mathbb {R}^n\\rightarrow \\left( C\\right) _\\bot , \\overline{\\beta _1} :\\left( \\mathbb {R}\\times \\mathbb {R}^k\\right) ^n\\rightarrow \\left( C^{\\prime }\\right) _\\bot \\right) ,$ where $\\overline{\\beta _0} $ and $\\overline{\\beta _1} $ are the respective corresponding canonical extensions.", "For each $(C,C^{\\prime })\\in \\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}$ , the component $\\left( \\mathrm {m}_{C} , \\mathrm {m}_ {C^{\\prime }} \\right) $ and $\\left( \\eta _{C} , \\eta _ {C^{\\prime }} \\right) $ of the multiplication and the unit of the monad $\\left( -\\right) _\\bot $ on $\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}$ define morphisms $\\overline{\\mathrm {m}} _{\\left( D, \\left( C, C^{\\prime }\\right), j \\right) } : &\\mathcal {P}_{n, k}\\left( \\mathcal {P}_{n,k}\\left( D, \\left( C, C^{\\prime }\\right), j\\right) _\\bot \\right) _\\bot &\\rightarrow \\mathcal {P}_{n, k}\\left( D, \\left( C, C^{\\prime }\\right), j\\right) _\\bot \\\\\\overline{\\eta }_{\\left( D, \\left( C, C^{\\prime }\\right), j \\right)} : &\\left( D, \\left( C, C^{\\prime }\\right), j \\right) &\\rightarrow \\mathcal {P}_{n, k}\\left( D, \\left( C, C^{\\prime }\\right), j\\right) _\\bot .", "$ in $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)$ .", "Therefore, $\\overline{\\mathrm {m}}$ and $\\overline{\\eta }$ define the multiplication and the unit for $\\mathcal {P}_{n, k}\\left( -\\right) _\\bot $ , completing the definition of our monad.", "Analogously, we lift, as morphisms of $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right) $ , the strength of $\\left( -\\right) _\\bot $ , making $\\mathcal {P}_{n, k}\\left( -\\right) _\\bot $ into a strong monad (i.e.", "$\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right) $ -enriched monad).", "In order to finish the proof that $\\left( \\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right), \\mathcal {P}_{n, k}\\left( -\\right) _\\bot \\right) $ is a $CBV$ $\\mathbf {\\omega Cpo}$ -pair, it is enough to see that, for any pair of objects $\\left( D_0, \\left( C_0, C_0^{\\prime }\\right), j_0 \\right) $ , $ \\left( D_1, \\left( C_1, C_1^{\\prime }\\right), j_1 \\right)$ of $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)$ , the least morphism $\\bot : \\left( C_0 , C_0 ^{\\prime }\\right) \\rightarrow \\left( \\left( C_1\\right) _\\bot , \\left( C_1^{\\prime }\\right) _\\bot \\right) ,$ of $\\mathbf {\\omega Cpo}\\left( C_0 , \\left( C_1\\right) _\\bot \\right) \\times \\mathbf {\\omega Cpo}\\left( C_0^{\\prime } , \\left( C_1^{\\prime }\\right) _\\bot \\right) $ defines the least morphism $\\left( D_0, \\left( C_0, C_0^{\\prime }\\right), j_0 \\right)\\rightarrow \\mathcal {P}_{n, k}\\left( D_1, \\left( C_1, C_1^{\\prime }\\right), j_1 \\right) _\\bot $ in $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)$ .", "Finally, since the underlying endofunctor of the monad $\\mathcal {P}_{n, k}\\left( -\\right) _\\bot $ , the multiplication and the identity are clearly lifted from $\\left( -\\right) _\\bot $ through $\\underline{\\mathcal {L}}_{n,k} $ as defined above, we have: For each $(n,k)\\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , $\\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right), \\mathcal {P}_{n, 
For each $(n,k)\\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , $\\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right), \\mathcal {P}_{n, k}\\left( -\\right) _\\bot \\right) $ is a $CBV$ $\\mathbf {\\omega Cpo}$ -pair.", "Moreover, $\\underline{\\mathcal {L}}_{n,k} : \\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right)\\rightarrow \\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}$ is a $CBV $ $\\mathbf {\\omega Cpo}$ -pair morphism between $\\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right), \\mathcal {P}_{n, k}\\left( -\\right) _\\bot \\right) $ and $\\left(\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) $ .", "Therefore, by Lemma REF , $\\mathcal {U}_{\\mathcal {BV}} \\left( \\underline{\\mathcal {L}}_{n, k} \\right) $ is a $CBV $ model morphism between the underlying $CBV$ models of $\\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right), \\mathcal {P}_{n, k}\\left( -\\right) _\\bot \\right) $ and $\\left(\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}, \\left( -\\right) _\\bot \\right) $ ." ], [ "Logical relations as a $CBV$ model morphism", "Henceforth, we assume that the macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ is sound for primitives (see Def. REF ).", "We establish the $CBV$ model morphism (REF ).", "We start by establishing the logical relations' assignment.", "Let $(n, k)\\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ .", "We define the object (REF ) in $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k } \\right)$ : $ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathbf {real}]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} \\stackrel{\\mathrm {def}}{=}\\left(\\left\\lbrace \\left( f : \\mathbb {R}^n\\rightarrow \\mathbb {R}, f ^\\ast \\right) : f\\mbox{ is differentiable, } f ^\\ast = \\mathfrak {D}^{k}{f} \\right\\rbrace , \\left( \\mathbb {R}, \\mathbb {R}\\times \\mathbb {R}^k \\right), \\mathrm {incl.} \\right)$ .", "For each $m\\in \\mathbb {N}$ , $\\mathrm {op} \\in \\mathrm {Op}_m $ and $c\\in \\mathtt {R}$ , we define the morphisms (REF ), () and () in $\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}$ , in which $\\mathbb {D}$ , $[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] $ and $[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] _ {k} $ are the functors underlying the $CBV$ model morphisms respectively defined in (REF ), (REF ) and (REF ).", "$ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathbf {sign}\\,]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{k}&\\stackrel{\\mathrm {def}}{=}& \\left( \\mathsf {sign}, \\mathfrak {d}^{k}\\left( \\mathsf {sign}\\right) \\right) = \\left( \\mathsf {sign}, [\\hspace{-2.5pt}[\\mathbb {D}\\left( \\mathbf {sign}\\,\\right) ]\\hspace{-2.5pt}] _ {k} \\right) : \\left( \\mathbb {R}, \\mathbb {R}\\times \\mathbb {R}^k \\right) \\rightarrow \\left( \\left( \\mathsf {1}\\sqcup \\mathsf {1}\\right) _\\bot , \\left( \\mathsf {1}\\sqcup \\mathsf {1}\\right) _\\bot \\right) \\\\ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\underline{c} ]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{k}&\\stackrel{\\mathrm {def}}{=}& \\left( \\mathsf {c} , \\mathfrak {d}^{k}\\left( \\mathsf {c} \\right) \\right) : \\left( \\mathsf {1}, \\mathsf {1}\\right) \\rightarrow \\left( \\mathbb {R}, \\mathbb {R}\\times \\mathbb {R}^k \\right) \\\\ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathrm {op} ]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{k}&\\stackrel{\\mathrm {def}}{=}& \\left( [\\hspace{-2.5pt}[\\mathrm {op} ]\\hspace{-2.5pt}] , \\mathfrak {d}^{k}\\left( [\\hspace{-2.5pt}[\\mathrm {op} ]\\hspace{-2.5pt}] \\right) \\right) : \\left( \\mathbb {R}^m, \\left(\\mathbb {R}\\times \\mathbb {R}^k\\right)^m \\right) \\rightarrow \\left( \\left( \\mathbb {R}\\right) _\\bot , \\left( \\mathbb {R}\\times \\mathbb {R}^k \\right) _\\bot \\right)$"
, "By Theorem REF , we have that the product $ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathbf {real}]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} ^m$ in $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right)$ is given by (REF ).", "Therefore, by the chain rule for derivatives, we have that (REF ), () and () respectively define the morphisms (REF ), (REF ), and (REF ) in $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right)$ , where $\\overline{\\mathsf {1}}\\sqcup \\overline{\\mathsf {1}} $ denotes the coproduct of the terminal $\\overline{\\mathsf {1}} = \\left( \\mathsf {1}, \\left( \\mathsf {1}, \\mathsf {1}\\right) , \\mathrm {id}\\right) $ with itself.", "$&& \\left(\\left\\lbrace \\left( f_j : \\mathbb {R}^n\\rightarrow \\mathbb {R}, f ^\\ast _j\\right)_{j\\in \\mathbb {I}_{m} } : f _j\\mbox{ is differentiable and } f ^\\ast _j = \\mathfrak {D}^{k}{f_j} , \\forall j\\in \\mathbb {I}_{m} \\right\\rbrace , \\left( \\mathbb {R}, \\mathbb {R}\\times \\mathbb {R}^k \\right) ^m , \\mathrm {incl.} \\right) \\nonumber \\\\&& \\cong \\left(\\left\\lbrace \\left( f : \\mathbb {R}^n\\rightarrow \\mathbb {R}^m , f ^\\ast \\right) : f\\mbox{ is differentiable, } f ^\\ast = \\mathfrak {D}^{k}{f} \\right\\rbrace , \\left( \\mathbb {R}^m , \\left(\\mathbb {R}\\times \\mathbb {R}^k\\right) ^m \\right), \\mathrm {incl.} \\right) .$", "$ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathbf {sign}\\,]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n, k} : \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathbf {real}]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n, k} \\rightarrow \\mathcal {P}_{n, k}\\left( \\overline{\\mathsf {1}}\\sqcup \\overline{\\mathsf {1}}\\right) _\\bot $", "$ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\underline{c} ]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} : \\overline{\\mathsf {1}} \\rightarrow \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathbf {real}]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} $", "$ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathrm {op} ]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} : \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathbf {real}]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n, k} ^m \\rightarrow \\mathcal {P}_{n,k}\\left( \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[\\mathbf {real}]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} \\right) _\\bot $", "By the universal property of the $CBV$ model $\\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right) $ , we get: For each $(n, k)\\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , there is only one $CBV$ model morphism $ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} :\\left( \\mathbf {Syn}_V^{{\\mathbf {tr}}}, \\mathbf {Syn}_\\mathcal {S}^{{\\mathbf {tr}}},\\mathbf {Syn}_\\mu ^{{\\mathbf {tr}}} , \\mathbf {Syn}_\\mathsf {it}^{{\\mathbf {tr}}} \\right)\\rightarrow \\mathcal {U}_{\\mathcal {BV}} \\left( \\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right), \\mathcal {P}_{n, k}\\left( -\\right) _\\bot \\right)$ that is consistent with the assignment given by (REF ), (REF ), (REF ), and (REF ).", "Moreover, Diag. (REF ) commutes; that is, $\\mathcal {U}_{\\mathcal {BV}} \\left({\\underline{\\mathcal {L}}}_{n,k}\\right)\\circ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} = \\left( [\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\times [\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] _ {k} \\right)\\circ \\left( \\mathrm {id}, \\mathbb {D}\\right) : \\left(\\mathbf {Syn}_V,{\\mathbf {Syn}_\\mathcal {S}},{\\mathbf {Syn}_\\mu },\\mathbf {Syn}_\\mathsf {it} \\right)\\rightarrow \\mathcal {U}_{\\mathcal {BV}} \\left(\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo},\\left( -\\right) _\\bot \\right) .$"
\\right)}}`{{\\left(\\mathbf {Syn}_V,{\\mathbf {Syn}_\\mathcal {S}},{\\mathbf {Syn}_\\mu },\\mathbf {Syn}_\\mathsf {it} \\right)\\times \\left(\\mathbf {Syn}_V^{{\\mathbf {tr}}}{,}\\mathbf {Syn}_\\mathcal {S}^{{\\mathbf {tr}}}{,}\\mathbf {Syn}_\\mu ^{{\\mathbf {tr}}} {,}\\mathbf {Syn}_\\mathsf {it}^{{\\mathbf {tr}}} \\right)}};{{\\left(\\mathrm {id}{,}\\mathbb {D}\\right)}}](2025,0)|r|/->/<0,-300>[{{\\left(\\mathbf {Syn}_V,{\\mathbf {Syn}_\\mathcal {S}},{\\mathbf {Syn}_\\mu },\\mathbf {Syn}_\\mathsf {it} \\right)\\times \\left(\\mathbf {Syn}_V^{{\\mathbf {tr}}}{,}\\mathbf {Syn}_\\mathcal {S}^{{\\mathbf {tr}}}{,}\\mathbf {Syn}_\\mu ^{{\\mathbf {tr}}} {,}\\mathbf {Syn}_\\mathsf {it}^{{\\mathbf {tr}}} \\right)}}`{{\\mathcal {U}_{\\mathcal {BV}} \\left(\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}{,}\\left( -\\right) _\\bot \\right)}};{{[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\times [\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] _ {k} }}](0,0)|l|/->/<0,-300>[{{\\left(\\mathbf {Syn}_V,{\\mathbf {Syn}_\\mathcal {S}},{\\mathbf {Syn}_\\mu },\\mathbf {Syn}_\\mathsf {it} \\right)}}`{{\\mathcal {U}_{\\mathcal {BV}} \\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right),\\mathcal {P}_{n,k}\\left( -\\right) _\\bot \\right)}};{{ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} }}](0,-300)|b|/->/<2025,0>[{{\\mathcal {U}_{\\mathcal {BV}} \\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right),\\mathcal {P}_{n,k}\\left( -\\right) _\\bot \\right)}}`{{\\mathcal {U}_{\\mathcal {BV}} \\left(\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}{,}\\left( -\\right) _\\bot \\right)}};{{\\mathcal {U}_{\\mathcal {BV}} \\left({\\underline{\\mathcal {L}}}_{n,k}\\right)}}]$ Both $\\left([\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\times [\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] _ {k} \\right)\\circ \\left( \\mathrm {id}\\times \\mathbb {D}\\right) $ and $\\mathcal {U}_{\\mathcal {BV}} \\left({\\underline{\\mathcal {L}}}_{n,k}\\right)\\circ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} $ yield $CBV$ model morphisms that are consistent with the assignment given by the object $\\left( \\mathbb {R}, \\mathbb {R}\\times \\mathbb {R}^k \\right) $ together with (REF ), () and ()." 
], [ "AD Logical Relations for Data Types", "As a consequence of Theorem REF , we establish a fundamental result on the logical relations $ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} $ for data types in our setting: namely, Theorem REF .", "We start by establishing Lemma REF about our logical relations and the coproducts in $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{{n}, k} \\right)$ .", "Let $\\left( n, k\\right) \\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ .", "If $\\displaystyle \\left( g, \\dot{g}\\right) \\in \\coprod _{j\\in L} \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} ^{l_j} $ , then $\\displaystyle g : \\mathbb {R}^n\\rightarrow \\coprod _{j\\in L} \\mathbb {R}^{l_j} $ is differentiable and $\\dot{g} = \\mathfrak {D}^{k}{g} $ .", "By Theorem REF , $\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{{n}, k} \\right)$ has coproducts.", "Moreover, we can conclude that $\\displaystyle \\left( g, \\dot{g}\\right) \\in \\coprod _{j\\in L} \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} ^{l_j} $ implies that, for some $r\\in L $ , we have a pair $\\displaystyle \\left(\\underline{g} : \\mathbb {R}^{n} \\rightarrow \\mathbb {R}^{l_r}, \\mathfrak {D}^{k}{g} : \\left(\\mathbb {R}\\times \\mathbb {R}^k\\right) ^{n}\\rightarrow \\left(\\mathbb {R}\\times \\mathbb {R}^k\\right) ^{l_r} \\right)$ such that $\\left( g, \\dot{g}\\right) = \\left( \\iota _{\\mathbb {R}^{l_r} } \\circ \\underline{g}, \\iota _{\\left(\\mathbb {R}\\times \\mathbb {R}^k\\right) ^{l_r} } \\circ \\mathfrak {D}^{k}{g} \\right) $ .", "Following Def.", "REF , this completes our proof.", "Let $\\left( n, k\\right) \\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ .", "If $\\displaystyle \\left( g, \\dot{g}\\right) \\in \\underline{\\mathcal {P}_{n,k}\\left( \\coprod _{j\\in L} \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} ^{l_j} \\right) _\\bot } $ , then $\\displaystyle g : \\mathbb {R}^n\\rightarrow \\left( \\coprod _{j\\in L} \\mathbb {R}^{l_j}\\right) _\\bot $ is differentiable and $\\dot{g} = \\mathfrak {d}^{k}\\left( g\\right) $ .", "Indeed, by the definition of $\\underline{\\mathcal {P}_{{n},k}\\left( - \\right) _\\bot }$ , we have one of the following situations.", "$g$ and $\\dot{g} $ are the least morphisms, that is to say, they are constantly equal to $\\bot $ ; the pair $\\left( g, \\dot{g}\\right) $ come from a pair of total functions $\\left( \\underline{g}, \\underline{\\dot{g}} \\right) \\in \\displaystyle \\coprod _{j\\in L} \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} ^{l_j}$ ; $\\displaystyle g^{-1}\\left( \\coprod _{j\\in L} \\mathbb {R}^{l _j} \\right) = W$ is open.", "Moreover, denoting by (REF ) the pair consisting of the corresponding total functions, we have that (REF ) holds for any differentiable map $ \\alpha : \\mathbb {R}^n \\rightarrow W $ .", "$ \\displaystyle \\left( \\underline{g} : W\\rightarrow \\left( \\coprod _{j\\in L} \\mathbb {R}^{l _j} \\right) , \\, \\underline{\\dot{g}} \\right)$ $ \\left( \\underline{g} \\circ \\alpha , \\, \\underline{\\dot{g} } \\circ \\mathfrak {D}^{k}{\\alpha } \\right)\\in \\coprod _{j\\in L} \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} ^{l_j} .$ If (REF ) holds, 
following Def. REF, we get that $g$ is differentiable and $\dot{g} = \mathfrak{d}^k\left(g\right)$ by Remark REF. In case of (REF), we get that $\underline{g}$ is differentiable and $\underline{\dot{g}} = \mathfrak{D}^k\underline{g}$ by Lemma REF; hence $g$ is differentiable and $\dot{g} = \mathfrak{d}^k\left(g\right)$. Finally, in case of (REF), by Lemma REF we get that, for any differentiable $\alpha : \mathbb{R}^n\rightarrow W$, $\underline{g}\circ\alpha$ is differentiable and $\underline{\dot{g}}\circ\mathfrak{D}^k\alpha$ is well defined and equal to $\mathfrak{D}^k\left(\underline{g}\circ\alpha\right)$. By Lemma REF, this implies that $\underline{g}$ is differentiable and $\mathfrak{D}^k\underline{g} = \underline{\dot{g}}$. Following Def. REF, this completes the proof that $g$ is differentiable and $\dot{g} = \mathfrak{d}^k\left(g\right)$.

Let $k\in\mathbb{N}\cup\left\lbrace\infty\right\rbrace$. If, for each $i\in\mathfrak{L}$, the morphism $\left(g,\dot{g}\right)$ in $\mathbf{\omega Cpo}\times\mathbf{\omega Cpo}$ defines the morphism (REF) in $\mathbf{Sub}\left(\mathbf{\omega Cpo}\downarrow G_{s_i,k}\right)$, then $g : \coprod_{r\in\mathfrak{L}}\mathbb{R}^{s_r}\rightarrow\left(\coprod_{j\in L}\mathbb{R}^{l_j}\right)_\bot$ is differentiable and $\dot{g} = \mathfrak{d}^k\left(g\right)$.
\begin{gather}
\mathtt{g} : \coprod_{r\in\mathfrak{L}}\overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,s_r} \rightarrow \mathcal{P}_{s_i,k}\left(\coprod_{j\in L}\overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,l_j}\right)_\bot\\
\iota_i : \overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,s_i} \rightarrow \coprod_{r\in\mathfrak{L}}\overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,s_r}
\end{gather}
From the hypothesis, for each $i\in\mathfrak{L}$, we conclude that the pair (REF) defines the morphism (REF), since $\left(\iota_{\mathbb{R}^{s_i}}, \iota_{\left(\mathbb{R}\times\mathbb{R}^k\right)^{s_i}}\right)$ defines the coprojection (REF) in $\mathbf{Sub}\left(\mathbf{\omega Cpo}\downarrow G_{s_i,k}\right)$.
\begin{gather}
\left(g_i \stackrel{\mathrm{def}}{=} g\circ\iota_{\mathbb{R}^{s_i}},\ \dot{g}_i \stackrel{\mathrm{def}}{=} \dot{g}\circ\iota_{\left(\mathbb{R}\times\mathbb{R}^k\right)^{s_i}}\right)\\
\mathtt{g}_i \stackrel{\mathrm{def}}{=} \mathtt{g}\circ\iota_i : \overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,s_i} \rightarrow \mathcal{P}_{s_i,k}\left(\coprod_{j\in L}\overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,l_j}\right)_\bot
\end{gather}
Since $\mathrm{id}_{\mathbb{R}^{s_i}} : \mathbb{R}^{s_i}\rightarrow\mathbb{R}^{s_i}$ is differentiable and $\mathfrak{D}^k\left(\mathrm{id}_{\mathbb{R}^{s_i}}\right)$ is given by the identity $\left(\mathbb{R}\times\mathbb{R}^k\right)^{s_i}\rightarrow\left(\mathbb{R}\times\mathbb{R}^k\right)^{s_i}$, we conclude that $\left(g_i,\dot{g}_i\right)\in\underline{\mathcal{P}_{s_i,k}\left(\coprod_{j\in L}\overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,l_j}\right)_\bot}$.
By Theorem REF, (REF) proves that $g_i$ is differentiable and $\dot{g}_i = \mathfrak{d}^k\left(g_i\right)$. Since this result holds for any $i\in\mathfrak{L}$, we conclude that $g$ is differentiable and $\dot{g} = \mathfrak{d}^k\left(g\right)$.

\subsection{Fundamental AD correctness theorem}
We prove Theorem REF, which completes the proof of Theorem REF.

Let $t : \coprod_{r\in\mathfrak{L}}\mathbf{real}^{s_r}\rightarrow\mathbf{Syn}_\mathcal{S}\left(\coprod_{j\in L}\mathbf{real}^{l_j}\right)$ be a morphism in $\mathbf{Syn}_V$. We have that $\llbracket t\rrbracket : \coprod_{r\in\mathfrak{L}}\mathbb{R}^{s_r}\rightarrow\left(\coprod_{j\in L}\mathbb{R}^{l_j}\right)_\bot$ is differentiable and, for any $k\in\mathbb{N}\cup\left\lbrace\infty\right\rbrace$, $\llbracket\mathbb{D}\left(t\right)\rrbracket_k = \mathfrak{d}^k\left(\llbracket t\rrbracket\right)$.

We assume that we have $t$ as above. For each $i\in\mathfrak{L}$, the pair (REF) is in the image of $\left(\llbracket-\rrbracket\times\llbracket-\rrbracket_k\right)\circ\left(\mathrm{id}\times\mathbb{D}\right) = \mathcal{U}_{\mathcal{BV}}\left(\underline{\mathcal{L}}_{s_i,k}\right)\circ\overline{\llbracket-\rrbracket}_{s_i,k}$. This implies that (REF) defines the morphism (REF) in $\mathbf{Sub}\left(\mathbf{\omega Cpo}\downarrow G_{s_i,k}\right)$. Therefore, by Corollary REF, we conclude that $\llbracket t\rrbracket$ is differentiable and $\llbracket\mathbb{D}\left(t\right)\rrbracket_k = \mathfrak{d}^k\left(\llbracket t\rrbracket\right)$.
\begin{gather}
\left(\llbracket t\rrbracket,\ \llbracket\mathbb{D}\left(t\right)\rrbracket_k\right)\\
\overline{\llbracket t\rrbracket}_{s_i,k} : \coprod_{r\in\mathfrak{L}}\overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,s_r} \rightarrow \mathcal{P}_{s_i,k}\left(\coprod_{j\in L}\overline{\llbracket\mathbf{real}\rrbracket}_{s_i,k}^{\,l_j}\right)_\bot
\end{gather}

\subsection{Correctness of the dual numbers forward AD}
We assume that $\mathbf{vect}$ implements the vector space $\mathbb{R}$. It is straightforward to see that we get forward mode AD out of our macro $\mathcal{D}$: namely, for a program ${x}:{\tau}\vdash{t}:{\sigma}$ (where ${\tau}$ and ${\sigma}$ are data types) in the source language, we get a program ${x}:\mathcal{D}({\tau})\vdash\mathcal{D}({t}):\mathcal{D}({\sigma})$ in the target language, which, by Theorem REF, satisfies the following properties.
$\llbracket{t}\rrbracket : \coprod_{r\in K}\mathbb{R}^{n_r}\rightarrow\left(\coprod_{j\in L}\mathbb{R}^{m_j}\right)_\bot$ is differentiable as in Def. REF; and, if $y\in\mathbb{R}^{n_i}\cap\llbracket{t}\rrbracket^{-1}\left(\mathbb{R}^{m_j}\right) = W_j$ for some $i\in K$ and $j\in L$, then, for any $w\in\mathbb{R}^{n_i}$, denoting $z \stackrel{\mathrm{def}}{=} \varphi_{n_i,1}\left(y,w\right)$,
\begin{align}
\llbracket\mathcal{D}({t})\rrbracket_1\left(\varphi_{n_i,1}\left(y,w\right)\right) &= \mathfrak{d}^1\left(\llbracket{t}\rrbracket\right)\left(z\right) = \mathfrak{D}^1\left(\llbracket{t}\rrbracket|_{W_j}\right)\left(z\right) = \varphi_{m_j,1}\left(\llbracket{t}\rrbracket\left(y\right),\ \tilde{w}\cdot\llbracket{t}\rrbracket^{\prime}(y)^{t}\right)\nonumber\\
&= \varphi_{m_j,1}\left(\llbracket{t}\rrbracket\left(y\right),\ \llbracket{t}\rrbracket^{\prime}(y)(w)\right),
\end{align}
where $\llbracket{t}\rrbracket^{\prime}(y) : \mathbb{R}^{n_i}\rightarrow\mathbb{R}^{m_j}$ is the derivative of $\llbracket{t}\rrbracket|_{W_j} : W_j\rightarrow\mathbb{R}^{m_j}$ at $y$.
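For $n_i = m_j = k = 1$, the equation above is the familiar dual-numbers evaluation rule. The following Haskell sketch is our own illustration (Haskell is our choice here, not the paper's calculus, and the names \texttt{Dual} and \texttt{deriv} are assumptions of the sketch): $\varphi_{1,1}(y,w)$ corresponds to the pair \texttt{Dual y w}, and one run of the transformed program propagates the tangent.

```haskell
-- dual numbers: a primal value paired with one tangent (vect = R, k = 1)
data Dual = Dual { primal :: Double, tangent :: Double } deriving Show

instance Num Dual where
  Dual x dx + Dual y dy = Dual (x + y) (dx + dy)
  Dual x dx * Dual y dy = Dual (x * y) (x * dy + dx * y)   -- Leibniz rule
  negate (Dual x dx)    = Dual (negate x) (negate dx)
  fromInteger n         = Dual (fromInteger n) 0           -- constants carry tangent 0
  abs    (Dual x dx)    = Dual (abs x) (dx * signum x)     -- partial: no derivative at 0
  signum (Dual x _)     = Dual (signum x) 0                -- partial: no derivative at 0

-- phi_{1,1}(y, w) |-> phi_{1,1}(t(y), t'(y) * w): seed w = 1 to read off t'(y)
deriv :: (Dual -> Dual) -> Double -> Double
deriv f y = tangent (f (Dual y 1))

-- e.g. deriv (\x -> x * x + 3 * x) 2.0 == 7.0
```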
\subsection{Correctness of the dual numbers reverse AD}
We assume that $\mathbf{vect}$ implements the vector space $\mathbb{R}^k$, for some fixed $k\in\mathbb{N}\cup\left\lbrace\infty\right\rbrace$. We consider the respective (co)projections $\mathfrak{p}_{k\rightarrow s}$ for each $s\in\mathbb{N}\cup\left\lbrace\infty\right\rbrace$, as defined in (REF). The following shows how our macro encompasses reverse mode AD.

For each $s\in\mathbb{N}^\ast$ with $s\le k$, we can define the morphism $\mathbf{wrap}_s \stackrel{\mathrm{def}}{=} \left(\pi_j, \overline{e}_j\right)_{j\in\mathbb{I}_s} : \mathbf{real}^s\rightarrow\left(\mathbf{real}\times\mathbf{vect}\right)^s$ in $\mathbf{Syn}_V^{{\mathbf{tr}}}$, which corresponds to the wrapper defined in (REF) in the target language. We denote $\mathtt{wrap}_s \stackrel{\mathrm{def}}{=} \llbracket\mathbf{wrap}_s\rrbracket_k$. By the definition of the $k$-semantics, it is clear that $\mathtt{wrap}_s\left(y\right) = \varphi_{s,k}\left(y, e^k_1, \ldots, e^k_s\right)$.

For a program ${x}:\mathbf{real}^s\vdash{t}:\mathbf{real}^l$ (where $s,l\in\mathbb{N}^\ast$), we have that, for any $y\in\llbracket{t}\rrbracket^{-1}\left(\mathbb{R}^l\right)\subset\mathbb{R}^s$,
\begin{align}
\llbracket\mathcal{D}({t})\circ\mathbf{wrap}_s\rrbracket_k\left(y\right) &= \mathfrak{d}^k\left(\llbracket{t}\rrbracket\right)\circ\mathtt{wrap}_s\left(y\right) = \mathfrak{D}^k\llbracket{t}\rrbracket\circ\mathtt{wrap}_s\left(y\right)\\
&= \mathfrak{D}^k\llbracket{t}\rrbracket\circ\varphi_{s,k}\left(y, e^k_1, \ldots, e^k_s\right)\\
&= \varphi_{l,k}\left(\llbracket{t}\rrbracket\left(y\right),\ \mathfrak{p}_{s\rightarrow k}\llbracket{t}\rrbracket^{\prime}(y)^t\right)
\end{align}
by Theorem REF. This gives the transpose derivative $\mathfrak{p}_{s\rightarrow k}\llbracket{t}\rrbracket^{\prime}(y)^t$ as something of the type $\mathbf{vect}^l$. This is already good enough whenever $k = s$, since, in this case, $\llbracket\mathbf{vect}^l\rrbracket_k = \left(\mathbb{R}^s\right)^l$ and $\mathfrak{p}_{s\rightarrow k} = \mathfrak{p}_{k\rightarrow k} = \mathrm{id}$. In case of $s < k$, if needed, the type can be fixed by using the handler $\mathfrak{h}_s$. More precisely, we can define the morphism $\mathfrak{h}_{l,s} \stackrel{\mathrm{def}}{=} \left(\mathrm{id}, \mathfrak{h}_s\right)_{i\in\mathbb{I}_l} : \left(\mathbf{real}\times\mathbf{vect}\right)^l\rightarrow\left(\mathbf{real}\times\mathbf{real}^s\right)^l$ and, by the definition of the $k$-semantics, we conclude that
\begin{align}
\llbracket\mathfrak{h}_{l,s}\circ\mathcal{D}({t})\circ\mathbf{wrap}_s\rrbracket_k\left(y\right) &= \llbracket\mathfrak{h}_{l,s}\rrbracket_k\circ\varphi_{l,k}\left(\llbracket{t}\rrbracket\left(y\right),\ \mathfrak{p}_{s\rightarrow k}\llbracket{t}\rrbracket^{\prime}(y)^t\right)\\
&= \varphi_{l,k}\left(\llbracket{t}\rrbracket\left(y\right),\ \mathfrak{p}_{k\rightarrow s}\circ\mathfrak{p}_{s\rightarrow k}\llbracket{t}\rrbracket^{\prime}(y)^t\right)\\
&= \varphi_{l,k}\left(\llbracket{t}\rrbracket\left(y\right),\ \llbracket{t}\rrbracket^{\prime}(y)^t\right),
\end{align}
since $\mathfrak{p}_{k\rightarrow s}\circ\mathfrak{p}_{s\rightarrow k} = \mathrm{id}$ whenever $s\le k$.

Again, by Theorem REF, it is straightforward to generalize the correctness statements above to more general data types ${\sigma}$. Furthermore, it should be noted that, for $k=\infty$ (representing the case of a type of dynamically sized arrays of cotangents), the above shows that our macro gives reverse mode AD for any program ${x}:{\tau}\vdash{t}:{\sigma}$ with data types ${\tau}$ and ${\sigma}$. This choice of $k=\infty$ is the easiest route to take for a practical implementation of this form of dual-numbers reverse AD, as it leads to a single type of cotangent vectors that works for any program.
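The computational content of $\mathbf{wrap}_s$ is easy to exhibit directly. In the following Haskell sketch (again our own illustration, with assumed names \texttt{DualV}, \texttt{wrap} and \texttt{gradSketch}; the sharing and efficiency concerns that motivate $k=\infty$ are deliberately ignored), $\mathbf{vect}$ is modelled as a list of length $s$, \texttt{wrap} seeds the $i$-th input with the basis vector $e^s_i$, and one run of the wrapped program returns $\llbracket t\rrbracket(y)$ together with the transpose derivative.

```haskell
-- vect = R^s: a primal paired with a vector of cotangents
data DualV = DualV { prim :: Double, cot :: [Double] } deriving Show

addD, mulD :: DualV -> DualV -> DualV
addD (DualV x dx) (DualV y dy) = DualV (x + y) (zipWith (+) dx dy)
mulD (DualV x dx) (DualV y dy) =
  DualV (x * y) (zipWith (+) (map (x *) dy) (map (y *) dx))  -- Leibniz rule

-- wrap: pair the i-th input with the basis vector e_i (the identity seed matrix)
wrap :: [Double] -> [DualV]
wrap ys = [ DualV y [if i == j then 1 else 0 | j <- [0 .. n - 1]]
          | (i, y) <- zip [0 ..] ys ]
  where n = length ys

-- one run of the wrapped program yields t(y) and the transpose derivative t'(y)^t
gradSketch :: ([DualV] -> DualV) -> [Double] -> (Double, [Double])
gradSketch t ys = let DualV v dv = t (wrap ys) in (v, dv)

-- e.g. gradSketch (\[x, y] -> addD (mulD x y) x) [2, 3] == (8.0, [4.0, 2.0])
```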
], [ "Syntax", "We extend both our source and target languages of REF and REF with ML-style polymorphism and type recursion in the sense of FPC [13].", "That is, we extend types, values and computations for each of the two languages as $\\begin{array}[t]{l@{\\quad \\!\\!", "}*3{l@{}}@{\\,}l}{\\tau }, {\\sigma }, {\\rho } & ::=& & {-25mu}\\qquad \\text{types} \\\\&\\mathrel {\\vert }& \\ldots & \\qquad \\text{as before}\\\\&&&\\\\v, w, u & ::=& & {-25mu}\\qquad \\text{values} \\\\&\\mathrel {\\vert }& \\ldots & \\qquad \\text{as before}\\\\&&&\\\\{t}, {s}, {r} & ::=& & {-25mu}\\qquad \\text{computations} \\\\&\\mathrel {\\vert }& \\ldots & \\qquad \\text{as before}\\\\\\end{array}$   $\\begin{array}[t]{l@{\\quad \\!\\!", "}*3{l@{}}@{\\,}l}&\\mathrel {\\vert }& {\\alpha },{\\beta },{\\gamma } & \\qquad \\text{type variables}\\\\&\\mathrel {\\vert }\\quad \\, & \\mathbf {\\mu }{\\alpha }.", "{\\tau } & \\qquad \\text{recursive type}\\\\& & &\\\\&\\mathrel {\\vert }& \\mathbf {roll}\\,v & \\qquad \\text{recursive intro}\\\\&&&\\\\&&&\\\\&\\mathrel {\\vert }& \\mathbf {roll}\\,{t} & \\qquad \\text{recursive intro}\\\\&\\mathrel {\\vert }\\quad \\, & \\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\mathbf {roll}\\,{x}\\rightarrow {s} & \\qquad \\text{recursive elim}\\end{array}$ The new values and computations according to the rules in Fig.", "REF .", "Figure: Typing rules for the recursive types extension.Here, kinding contexts $\\Delta $ are lists of type variables ${\\alpha }_1,\\ldots ,{\\alpha }_n$ .", "We consider judgements $\\Delta \\mid \\Gamma \\vdash {t} : {\\tau }$ , where the types in $\\Gamma $ and ${\\tau }$ may contain free type variables from $\\Delta $ .", "They should be read as specifying that ${t}$ is a program of type ${\\tau }$ , with free variables typed according to $\\Gamma $ , that is polymorphic in the type variables of $\\Delta $ .", "We use the $\\beta \\eta $ -rules of Fig.", "REF .", "Figure: The standard βη\\beta \\eta -equational theory for recursive types in CBV.Once a language has recursive types, it is already expressive enough to get term recursion and, hence, iteration.", "Namely, we can now consider term recursion at type ${\\tau }={\\sigma }\\rightarrow {\\rho }$ as syntactic sugar.", "Namely, we first define $\\chi \\stackrel{\\mathrm {def}}{=}\\mathbf {\\mu }{\\alpha }.\\left( {\\alpha }\\rightarrow {\\tau }\\right) $ and then: $&\\mathbf {unroll}\\,{t}\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\mathbf {roll}\\,{x}\\rightarrow {x}\\nonumber \\\\&\\mu {x}:{\\tau }.", "{t}\\stackrel{\\mathrm {def}}{=}\\mathbf {let}\\,body:\\chi \\rightarrow {\\tau }=\\,(\\lambda {y}:\\chi .\\lambda {z}:{\\sigma }.", "{\\mathbf {let}\\,{x}:{\\tau }=\\,\\mathbf {unroll}\\,{y}\\,{y}\\,\\mathbf {in}\\,{t}\\,{z}})\\,\\mathbf {in}\\,body (\\mathbf {roll}\\,\\,body).$ The semantics of the language is, of course, expected to be consistent – meaning that term recursion should be compatible with the definition above.", "Alternatively, we can consider that the source language is given by the basic language with the typing rules given by Fig.", "REF with the corresponding grammar plus the recursive types established above, while the target language is the source language plus the extension given by the grammar and typing rules defined in REF ." 
], [ "Categorical models for recursive types: $rCBV$ models", "Here, we establish the basic categorical model for the syntax of call-by-value languages with recursive types.", "Let $\\left({V}, \\mathcal {T}\\right) $ be a $CBV$ pair and $J:{V}\\rightarrow {C}$ the corresponding universal Kleisli functor.", "Moreover, let $\\mathsf {Cat}\\left( \\mathsf {2} , {{V}}\\textrm {-}\\mathsf {Cat} \\right) $ be the category of morphisms of ${{V}}\\textrm {-}\\mathsf {Cat}$ .", "For each $n\\in \\mathbb {N}$ , an $n$ -variable $\\left({V}, \\mathcal {T}\\right) $ -parametric type (or a $\\left({V}, \\mathcal {T}\\right) $ -parametric type of degree $n$ ) is a morphism $E : \\left( J^\\mathrm {op} \\times J\\right) ^n \\rightarrow J $ in $\\mathsf {Cat}\\left( \\mathsf {2} , {{V}}\\textrm {-}\\mathsf {Cat} \\right) $ .", "In other words, it consists of a pair $E = \\left( {E}_{{V}}, {E}_{{C}} \\right) $ of ${V}$ -enriched functors such that (REF ) commutes.", "A $\\left({V}, \\mathcal {T}\\right) $ -parametric type of degree 0 (REF ) can be identified with the corresponding object ${V}$ .", "$ (0,-450)|l|/->/<0,450>[{{\\left({{V}}^\\mathrm {op} \\times {V}\\right)^{n}}}`{{\\left({{C}}^\\mathrm {op} \\times {C}\\right)^{n}}};{{\\left({J^\\mathrm {op} }\\times {J}\\right)^{n}}}](675,-450)|a|/->/<0,450>[{{{V}}}`{{{C}}};{{J}}](0,-450)|b|/->/<675,0>[{{\\left({{V}}^\\mathrm {op} \\times {V}\\right)^{n}}}`{{{V}}};{{{E}_{{V}}}}](0,0)|a|/->/<675,0>[{{\\left({{C}}^\\mathrm {op} \\times {C}\\right)^{n}}}`{{{C}}};{{{E}_{{C}}}}]$ We denote by $\\mathsf {Param}\\left( {V} , \\mathcal {T}\\right) $ the collection of all $\\left({V}, \\mathcal {T}\\right) $ -parametric types $E = \\left( {E}_{{V}}, {E}_{{C}}\\right) $ of any degree $n\\in \\mathbb {N} $ .", "As the terminology indicates, the objects of $\\mathsf {Param}\\left( {V} , \\mathcal {T}\\right) $ play the role of the parametric types in our language.", "However, the parametric types in the actual language could be a bit more restrictive.", "They usually are those constructed out of the primitive type formers.", "Namely, in our case, tupling (finite products), cotupling (finite coproducts), exponetiation (Kleisli exponential) and type recursion.", "[Free type recursion] A free decreasing degree type operator (fddt operator) for $\\left({V}, \\mathcal {T}\\right) $ is a function (REF ) identity on parametric types of degree 0 which takes each $(n+1)$ -variable $\\left({V}, \\mathcal {T}\\right) $ -parametric type $E = \\left( {E}_{{V}}, {E}_{{C}}\\right) $ to a $\\left({V}, \\mathcal {T}\\right) $ -parametric type $\\nu E = \\left( {\\nu E}_{{V}}, {\\nu E}_{{C}} \\right) $ of degree $n $ , provided that $n\\in \\mathbb {N}$ .", "$\\nu : \\mathsf {Param}\\left( {V} , \\mathcal {T}\\right) & \\rightarrow & \\mathsf {Param}\\left( {V} , \\mathcal {T}\\right) \\\\ (0,-450)|l|/->/<0,450>[{{\\left({{V}}^\\mathrm {op} \\times {V}\\right)^{n+1}}}`{{\\left({{C}}^\\mathrm {op} \\times {C}\\right)^{n+1}}};{{\\left({J^\\mathrm {op} }\\times {J}\\right)^{n+1}}}](675,-450)|a|/->/<0,450>[{{{V}}}`{{{C}}};{{J}}](0,-450)|b|/->/<675,0>[{{\\left({{V}}^\\mathrm {op} \\times {V}\\right)^{n+1}}}`{{{V}}};{{{E}_{{V}}}}](0,0)|a|/->/<675,0>[{{\\left({{C}}^\\mathrm {op} \\times {C}\\right)^{n+1}}}`{{{C}}};{{{E}_{{C}}}}] & \\mapsto & (0,-450)|l|/->/<0,450>[{{\\left({{V}}^\\mathrm {op} \\times {V}\\right)^{n}}}`{{\\left({{C}}^\\mathrm {op} \\times {C}\\right)^{n}}};{{\\left({J^\\mathrm {op} }\\times 
A rolling for (REF) is a collection (REF) of ${V}$-natural transformations
\[
\underline{\mathsf{roll}} = \left(\mathsf{roll}^E\right)_{E = \left({E}_{{V}},{E}_{{C}}\right)\in\mathsf{Param}\left({V},\mathcal{T}\right)},\qquad
\mathsf{roll}^E : {E}_{{V}}\circ\left\langle\mathrm{id},\ {\nu E}_{{V}}^\mathrm{op},\ {\nu E}_{{V}}\right\rangle \Rightarrow {\nu E}_{{V}},
\]
such that (REF) is invertible for any $E = \left({E}_{{V}},{E}_{{C}}\right)$, that is to say, $J\left(\mathsf{roll}^E\right)$ is a natural isomorphism.

A free type recursion for $\left({V},\mathcal{T}\right)$ is a pair $\underline{\nu} = \left(\nu,\underline{\mathsf{roll}}\right)$ where $\nu$ is an fddt operator and $\underline{\mathsf{roll}}$ is a rolling for $\nu$.
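In functional-programming terms, the fddt operator and its rolling are the familiar type-level fixed point together with its roll/unroll isomorphism. A small Haskell sketch (ours; \texttt{Mu}, \texttt{ListF} and the other names are assumptions, and Haskell's types only approximate the enriched setting):

```haskell
-- nu as a type-level fixed point: Mu f is the recursive type,
-- and RollMu / unrollMu witness the isomorphism f (Mu f) ~ Mu f
newtype Mu f = RollMu { unrollMu :: f (Mu f) }

-- a parametric type of degree 2, E(a, x) = 1 + a * x ...
data ListF a x = NilF | ConsF a x

-- ... whose fixed point in x is a parametric type of degree 1: lists of a
type List a = Mu (ListF a)

nil :: List a
nil = RollMu NilF

cons :: a -> List a -> List a
cons x xs = RollMu (ConsF x xs)
```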
[$H$-compatible] Let $H$ be a $CBV$ pair morphism between the $CBV$ pairs $\left({V},\mathcal{T}\right)$ and $\left({V}^{\prime},\mathcal{T}^{\prime}\right)$. A pair $\left(E,E^{\prime}\right)\in\mathsf{Param}\left({V},\mathcal{T}\right)\times\mathsf{Param}\left({V}^{\prime},\mathcal{T}^{\prime}\right)$ of parametric types is $H$-compatible if they have the same degree $n$ and the diagram (REF) commutes; in particular, if $n = 0$, the pair $\left(E,E^{\prime}\right)$ is $H$-compatible if $H\left({E}_{{V}}\right) = {E^{\prime}}_{{{V}}^{\prime}}$.
\[
\begin{array}{ccc}
\left({{V}}^\mathrm{op}\times{V}\right)^n & \xrightarrow{\ {E}_{{V}}\ } & {V}\\[4pt]
\downarrow{\scriptstyle\ \left(H^\mathrm{op}\times H\right)^n} & & \downarrow{\scriptstyle\ H}\\[4pt]
\left({{V}^{\prime}}^\mathrm{op}\times{V}^{\prime}\right)^n & \xrightarrow{\ {E^{\prime}}_{{{V}}^{\prime}}\ } & {{V}}^{\prime}
\end{array}
\]
[$rCBV$ models] An $rCBV$ model is a triple $\left({V},\mathcal{T},\underline{\nu}\right)$ where $\left({V},\mathcal{T}\right)$ is a $CBV$ pair and $\underline{\nu}$ is a free type recursion for $\left({V},\mathcal{T}\right)$. An $rCBV$ model morphism between the $rCBV$ models $\left({V},\mathcal{T},\underline{\nu}\right)$ and $\left({V}^{\prime},\mathcal{T}^{\prime},\underline{\nu}^{\prime}\right)$ consists of a $CBV$ pair morphism $H$ between $\left({V},\mathcal{T}\right)$ and $\left({V}^{\prime},\mathcal{T}^{\prime}\right)$ such that, for every $H$-compatible pair $\left(E,E^{\prime}\right)\in\mathsf{Param}\left({V},\mathcal{T}\right)\times\mathsf{Param}\left({V}^{\prime},\mathcal{T}^{\prime}\right)$ of $n$-variable parametric types, $\left(\nu E,\nu E^{\prime}\right)$ is $H$-compatible and, if $n>0$, (REF) holds, that is to say,
\[
H\left(\mathsf{roll}^E\right) = \mathsf{roll}^{E^{\prime}}_{\left(H^\mathrm{op}\times H\right)^{n-1}},
\]
the whiskering of $\mathsf{roll}^{E^{\prime}}$ with $\left(H^\mathrm{op}\times H\right)^{n-1}$. The $rCBV$ models and $rCBV$ model morphisms define a category, denoted herein by $\mathfrak{C}_{\mathcal{RBV}}$. There is, then, an obvious forgetful functor $\mathcal{U}_{r\mathtt{p}} : \mathfrak{C}_{\mathcal{RBV}}\rightarrow\mathfrak{C}_{\mathtt{p}}$.

Remark: We do not use this fact in our work, but every $rCBV$ model has an underlying $CBV$ model. More precisely, free term iteration can be defined out of free term recursion, while the latter can be defined out of free type recursion (see (REF)). This defines a forgetful functor $\mathcal{R} : \mathfrak{C}_{\mathcal{RBV}}\rightarrow\mathfrak{C}_{\mathcal{BV}}$.

\subsection{The $rCBV$ models $\left(\mathbf{Syn}_V^{\mathsf{R}}, \mathbf{Syn}_\mathcal{S}^{\mathsf{R}}, \underline{\nu}_\mathbf{Syn}\right)$ and $\left(\mathbf{Syn}_V^{{\mathsf{R}}{\mathbf{tr}}}, \mathbf{Syn}_\mathcal{S}^{{\mathsf{R}}{\mathbf{tr}}}, \underline{\nu}^{\mathbf{tr}}_\mathbf{Syn}\right)$}
We consider the $rCBV$ model generated by each syntax, that is to say, the free $rCBV$ models coming from the fine-grain CBV translations of the source and target languages. This provides us with the $rCBV$ models
\[
\left(\mathbf{Syn}_V^{\mathsf{R}}, \mathbf{Syn}_\mathcal{S}^{\mathsf{R}}, \underline{\nu}_\mathbf{Syn}\right) \qquad\mbox{and}\qquad \left(\mathbf{Syn}_V^{{\mathsf{R}}{\mathbf{tr}}}, \mathbf{Syn}_\mathcal{S}^{{\mathsf{R}}{\mathbf{tr}}}, \underline{\nu}^{\mathbf{tr}}_\mathbf{Syn}\right)
\]
with the universal property described in Theorem REF.
[Universal Property of the $rCBV$ models (REF)] Let $\left({V},\mathcal{T},\underline{\nu}\right)$ be an $rCBV$ model, and assume that Fig. REF and Fig. REF are given consistent assignments. There is a unique $rCBV$ model morphism $H : \left(\mathbf{Syn}_V^{\mathsf{R}}, \mathbf{Syn}_\mathcal{S}^{\mathsf{R}}, \underline{\nu}_\mathbf{Syn}\right)\rightarrow\left({V},\mathcal{T},\underline{\nu}\right)$ respecting the assignment of Fig. REF. Moreover, there is a unique $rCBV$ model morphism $\mathcal{H} : \left(\mathbf{Syn}_V^{{\mathsf{R}}{\mathbf{tr}}}, \mathbf{Syn}_\mathcal{S}^{{\mathsf{R}}{\mathbf{tr}}}, \underline{\nu}^{\mathbf{tr}}_\mathbf{Syn}\right)\rightarrow\left({V},\mathcal{T},\underline{\nu}\right)$ that extends $H$ and respects the assignment of Fig. REF.

Remark: By Theorem REF, we have (unique) $CBV$ model morphisms $\mathtt{s} : \left(\mathbf{Syn}_V, \mathbf{Syn}_\mathcal{S}, \mathbf{Syn}_\mu, \mathbf{Syn}_\mathsf{it}\right)\rightarrow\mathcal{R}\left(\mathbf{Syn}_V^{\mathsf{R}}, \mathbf{Syn}_\mathcal{S}^{\mathsf{R}}, \underline{\nu}_\mathbf{Syn}\right)$ and $\mathtt{s}^\mathsf{t} : \left(\mathbf{Syn}_V^{{\mathbf{tr}}}, \mathbf{Syn}_\mathcal{S}^{{\mathbf{tr}}}, \mathbf{Syn}_\mu^{{\mathbf{tr}}}, \mathbf{Syn}_\mathsf{it}^{{\mathbf{tr}}}\right)\rightarrow\mathcal{R}\left(\mathbf{Syn}_V^{{\mathsf{R}}{\mathbf{tr}}}, \mathbf{Syn}_\mathcal{S}^{{\mathsf{R}}{\mathbf{tr}}}, \underline{\nu}^{\mathbf{tr}}_\mathbf{Syn}\right)$ that are the identity on the primitive operations and types. Theorem REF states that $H\mapsto\mathcal{R}\left(H\right)\circ\mathtt{s}$ and $\mathcal{H}\mapsto\mathcal{R}\left(\mathcal{H}\right)\circ\mathtt{s}^\mathsf{t}$ give the bijections (REF) and (), respectively:
\begin{align}
\mathfrak{C}_{\mathcal{RBV}}\left(\left(\mathbf{Syn}_V^{\mathsf{R}}, \mathbf{Syn}_\mathcal{S}^{\mathsf{R}}, \underline{\nu}_\mathbf{Syn}\right), \left({V},\mathcal{T},\underline{\nu}\right)\right) &\cong \mathfrak{C}_{\mathcal{BV}}\left(\left(\mathbf{Syn}_V, \mathbf{Syn}_\mathcal{S}, \mathbf{Syn}_\mu, \mathbf{Syn}_\mathsf{it}\right), \mathcal{R}\left({V},\mathcal{T},\underline{\nu}\right)\right)\\
\mathfrak{C}_{\mathcal{RBV}}\left(\left(\mathbf{Syn}_V^{{\mathsf{R}}{\mathbf{tr}}}, \mathbf{Syn}_\mathcal{S}^{{\mathsf{R}}{\mathbf{tr}}}, \underline{\nu}^{\mathbf{tr}}_\mathbf{Syn}\right), \left({V},\mathcal{T},\underline{\nu}\right)\right) &\cong \mathfrak{C}_{\mathcal{BV}}\left(\left(\mathbf{Syn}_V^{{\mathbf{tr}}}, \mathbf{Syn}_\mathcal{S}^{{\mathbf{tr}}}, \mathbf{Syn}_\mu^{{\mathbf{tr}}}, \mathbf{Syn}_\mathsf{it}^{{\mathbf{tr}}}\right), \mathcal{R}\left({V},\mathcal{T},\underline{\nu}\right)\right)
\end{align}

\subsection{Automatic differentiation for languages with recursive types}
We extend our definition of AD to recursive types in Fig. REF. We note that our extension is compatible with our previous definitions if we view term recursion (and iteration) as syntactic sugar.

[Type preservation] If $\Delta\mid\Gamma\vdash{t}:{\tau}$, then $\Delta\mid\mathcal{D}(\Gamma)\vdash\mathcal{D}({t}):\mathcal{D}({\tau})$.

Figure: The definitions of AD on recursive types.
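Since $\mathcal{D}$ acts structurally on types and the extension to recursive types is again structural, differentiating programs over recursive data types requires no new machinery at the implementation level. A Haskell sketch of this point (ours, reusing the \texttt{Dual} type and its \texttt{Num} instance from the forward-mode sketch above):

```haskell
-- a program over a recursive type (lists, i.e. mu a. 1 + real * a),
-- written against any Num instance so it runs on Dual unchanged
sumSquares :: Num a => [a] -> a
sumSquares = foldr (\x acc -> x * x + acc) 0

-- partial derivative at coordinate i: seed that coordinate's tangent with 1
partialAt :: Int -> [Double] -> Double
partialAt i xs =
  tangent (sumSquares [ Dual x (if j == i then 1 else 0)
                      | (j, x) <- zip [0 ..] xs ])

-- e.g. partialAt 1 [1, 2, 3] == 4.0   (d/dx1 of x0^2 + x1^2 + x2^2)
```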
], [ "AD transformation as an $rCBV$ model morphism", "By Theorem REF , the assignment defined in Fig.", "REF induces a unique $rCBV$ model morphism (REF ), which encompasses the macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ defined by Fig.", "REF and extended in Fig.", "REF .", "$\\mathbb {ID}: \\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn}\\right) \\rightarrow \\left(\\mathbf {Syn}_V^{{\\mathsf {R}}{\\mathbf {tr}}}{,}\\mathbf {Syn}_\\mathcal {S}^{{\\mathsf {R}}{\\mathbf {tr}}}{,}\\underline{\\nu } ^{{\\mathbf {tr}}}_\\mathbf {Syn}\\right).$" ], [ "Concrete models: $rCBV$ {{formula:df09372f-9c38-411d-84f0-c2eec5c6a187}} -pairs", "Although the setting of bilimit compact expansions is the usual reasonable basic framework for solving recursive domain equations, we do not need this level of generality.", "Instead, we consider a subclass of concrete models, the $rCBV$ $\\mathbf {\\omega Cpo}$ -pairs established in Def.", "REF .See [23] or [43] for the general setting of bilimit compact expansions.", "We are back again to the setting of $\\mathbf {\\omega Cpo}$ -enriched categories.", "Recall that an embedding-projection-pair (ep-pair) $u : A\\stackrel{\\hookrightarrow }{\\leftharpoondown }B$ in an $\\mathbf {\\omega Cpo}$ -category ${C}$ is a pair $u = \\left( u^e , u^p \\right) $ consisting of a ${C}$ -morphism $u^e:A\\rightarrow B$ , the embedding, and a ${C}$ -morphism $u^p:B\\rightarrow A$ , the projection, such that $u^e \\circ u^p \\le $ and $u^p\\circ u^e= $ .", "It should be noted that, when considering the underlying 2-category of the $\\mathbf {\\omega Cpo}$ -category, an ep-pair consists of an adjunctionSee, for instance, [21] or [27] for adjunctions in 2-categories.", "whose unit is the identity.", "In this context, it is also called a lari adjunction (left adjoint right-inverse), see [8].", "In particular, as in the case of any adjunction, an embedding $u^e : A \\rightarrow B$ uniquely determines the associated projection $u^p: B\\rightarrow A$ and vice-versa.", "A zero objectRecall that a zero object is an object that is both initial and terminal.", "$\\mathfrak {O}$ in an $\\mathbf {\\omega Cpo}$ -category ${C}$ is an ep-zero object if, for any object $A$ , the pair $\\iota _A = \\left( \\iota ^e: \\mathfrak {O}\\rightarrow A , \\iota ^p: A\\rightarrow \\mathfrak {O}\\right) $ consisting of the unique morphisms is an ep-pair.", "[$rCBV$ $\\mathbf {\\omega Cpo}$ -pair] An $rCBV$ $\\mathbf {\\omega Cpo}$ -pair is a $CBV$ pair $\\left( {V}, \\mathcal {T}\\right) $ such that, denoting by $J : {V}\\rightarrow {C}$ the corresponding universal Kleisli ${V}$ -functor, ${V}$ is a cocomplete $ \\mathbf {\\omega Cpo}$ -cartesian closed category${V}$ is, hence, $\\mathbf {\\omega Cpo}$ -cocomplete as well.", "; the unit of $\\mathcal {T}$ is pointwise a full morphism (hence, $J$ is a locally full $\\mathbf {\\omega Cpo}$ -functor); ${C}$ has an ep-zero object $\\mathfrak {O}= J\\left( \\mathsf {0}\\right) $ , where $\\mathsf {0}$ is initial in ${V}$ ; whenever $u : J(A)\\stackrel{\\hookrightarrow }{\\leftharpoondown }J(B)$ is an ep-pair in ${C}$ , there is one morphism $\\hat{{u}} : A\\rightarrow B $ in ${V}$ such that $J\\left( \\hat{{u}}\\right) =u^e $ .", "An $rCBV$ $\\mathbf {\\omega Cpo}$ -pair morphism from $\\left({V}, \\mathcal {T}\\right) $ into $\\left({V}^{\\prime }, \\mathcal {T}^{\\prime } \\right) $ is an $\\mathbf {\\omega Cpo}$ -functor $H : {V}\\rightarrow {V}^{\\prime } $ that strictly preserves 
$\mathbf{\omega Cpo}$-colimits and whose underlying functor is a morphism between the $CBV$ pairs. This defines a category of $rCBV$ $\mathbf{\omega Cpo}$-pairs, denoted herein by $\mathbf{\omega Cpo}\textrm{-}\mathfrak{C}_{r\mathcal{BV}}$. Every $rCBV$ $\mathbf{\omega Cpo}$-pair $\left({V},\mathcal{T}\right)$ has an underlying $CBV$ $\mathbf{\omega Cpo}$-pair, and this extends to a forgetful functor $\mathbf{\omega Cpo}\textrm{-}\mathfrak{C}_{r\mathcal{BV}}\rightarrow\mathbf{\omega Cpo}\textrm{-}\mathfrak{C}_{\mathcal{BV}}$. More importantly to our work, we have the following.

\subsection{$rCBV$ $\mathbf{\omega Cpo}$-pairs are $rCBV$ models}
Let $\left({V},\mathcal{T}\right)$ be an $rCBV$ $\mathbf{\omega Cpo}$-pair. It is clear that we have an underlying $CBV$ pair which, by abuse of language, we denote by $\left({V},\mathcal{T}\right)$ as well. Hence, we can consider $\left({V},\mathcal{T}\right)$-parametric types.

Let $n\in\mathbb{N}^\ast$ and (REF) be an $n$-variable $\left({V},\mathcal{T}\right)$-parametric type. For each $A\in\left({{V}}^\mathrm{op}\times{V}\right)^{n-1}$, we get a 1-variable $\left({V},\mathcal{T}\right)$-parametric type $E^A = \left({E}^A_{{V}},{E}^A_{{C}}\right)$, where ${E}^A_{{V}}\left(W,Y\right) \stackrel{\mathrm{def}}{=} {E}_{{V}}\left(A,W,Y\right)$ and ${E}^A_{{C}}\left(W^{\prime},Y^{\prime}\right) \stackrel{\mathrm{def}}{=} {E}_{{C}}\left(J(A),W^{\prime},Y^{\prime}\right)$.

Let $\mathcal{E}^E_A$ be the diagram (REF) in ${C}$ given by the chain of morphisms $\left(a_n^e : \mathfrak{A}_n\rightarrow\mathfrak{A}_{n+1}\right)_{n\in\mathbb{N}}$, where $\left(a_n\right)_{n\in\mathbb{N}}$ is the chain of ep-pairs inductively defined by (REF):
\begin{align}
a_0 &\stackrel{\mathrm{def}}{=} \left(\iota^e : \mathfrak{O}\rightarrow{E}^A_{{C}}\left(\mathfrak{O},\mathfrak{O}\right),\ \iota^p : {E}^A_{{C}}\left(\mathfrak{O},\mathfrak{O}\right)\rightarrow\mathfrak{O}\right)\nonumber\\
a_{n+1} &\stackrel{\mathrm{def}}{=} \left({E}^A_{{C}}\left(a_n^p, a_n^e\right),\ {E}^A_{{C}}\left(a_n^e, a_n^p\right)\right)
\end{align}
\begin{gather}
\mathfrak{O}\xrightarrow{\ a_0^e\ }\mathfrak{A}_1\xrightarrow{\ a_1^e\ }\mathfrak{A}_2\xrightarrow{\ a_2^e\ }\mathfrak{A}_3\xrightarrow{\ a_3^e\ }\cdots\\
\mathfrak{O}\xleftarrow{\ a_0^p\ }\mathfrak{A}_1\xleftarrow{\ a_1^p\ }\mathfrak{A}_2\xleftarrow{\ a_2^p\ }\mathfrak{A}_3\xleftarrow{\ a_3^p\ }\cdots
\end{gather}
There is a unique diagram $\hat{\mathcal{E}^E_A}$ such that $J\circ\hat{\mathcal{E}^E_A} = \mathcal{E}^E_A$ by (REF) of Def. REF. Since ${V}$ has $\mathbf{\omega Cpo}$-colimits, we conclude that the conical $\mathbf{\omega Cpo}$-colimit of $\hat{\mathcal{E}^E_A}$ exists and is preserved by $J$; hence, $\mathcal{E}^E_A$ has a conical $\mathbf{\omega Cpo}$-colimit in ${C}$ as well. By the celebrated limit-colimit coincidence [38], since (REF) is the chain of embeddings of a chain of ep-pairs, the colimit gives us the $\mathbf{\omega Cpo}$-limit of the associated chain $\left(a_n^p\right)_{n\in\mathbb{N}}$ of projections (), denoted herein by $\mathcal{P}^E_A$.
This bilimit of ep-pairs is absolute: any $\mathbf{\omega Cpo}$-functor $H : {C}\rightarrow{C}^{\prime}$ preserves the conical $\mathbf{\omega Cpo}$-colimit (respectively, $\mathbf{\omega Cpo}$-limit) of $\mathcal{E}^E_A$ (respectively, $\mathcal{P}^E_A$).

Since the conical $\mathbf{\omega Cpo}$-colimit of $\mathcal{E}^E_A$ is absolute, the diagram (REF) commutes, and $J$ strictly preserves $\mathbf{\omega Cpo}$-colimits, we have the invertible morphism (REF) given by the composition of the respective canonical comparison morphisms:
\[
J\,{E}^A_{{V}}\left(\mathrm{colim}\,\hat{\mathcal{E}^E_A},\ \mathrm{colim}\,\hat{\mathcal{E}^E_A}\right) \cong {E}^A_{{C}}\left(\mathrm{colim}\,\mathcal{E}^E_A,\ \mathrm{colim}\,\mathcal{E}^E_A\right) \cong \mathrm{colim}\,\mathcal{E}^E_A \cong J\,\mathrm{colim}\,\hat{\mathcal{E}^E_A}.
\]
It should be noted that, for each $f : \left(J^\mathrm{op}\times J\right)^{n-1}(A)\rightarrow\left(J^\mathrm{op}\times J\right)^{n-1}(B)$ in $\left({{C}}^\mathrm{op}\times{C}\right)^{n-1}$, we have an induced ${V}$-natural transformation $\mathcal{E}^E_f : \mathcal{E}^E_A\rightarrow\mathcal{E}^E_B$. This association extends to a ${V}$-functor $\mathcal{E}^E$ from $\left({{C}}^\mathrm{op}\times{C}\right)^{n-1}$ into the ${V}$-category of chains in ${C}$. By the ${V}$-faithfulness of $J$, the association $A\mapsto\hat{\mathcal{E}^E_A}$ also extends to a ${V}$-functor $\hat{\mathcal{E}^E}$ from $\left({{V}}^\mathrm{op}\times{V}\right)^{n-1}$ into the ${V}$-category of chains.

We define the fddt operator $\nu_\omega$ as follows. For each $n\in\mathbb{N}^\ast$, given a $\left({V},\mathcal{T}\right)$-parametric type $E = \left({E}_{{V}},{E}_{{C}}\right)$, we define
\[
\nu_\omega E = \left(\nu_\omega{E}_{{V}},\ \nu_\omega{E}_{{C}}\right) \stackrel{\mathrm{def}}{=} \left(\mathrm{colim}\circ\hat{\mathcal{E}^E},\ \mathrm{colim}\circ\mathcal{E}^E\right),
\]
where, by abuse of language, $\mathrm{colim}$ is the ${V}$-functor from the ${V}$-category of chains in ${V}$ (respectively, in ${C}$) into the ${V}$-category ${V}$ (respectively, ${C}$).

Since every isomorphism is an embedding, there is only one $\omega\mathsf{roll}^E_A$ in ${V}$ such that $J\left(\omega\mathsf{roll}^E_A\right)$ is equal to (REF). The morphisms $\omega\mathsf{roll}^E = \left(\omega\mathsf{roll}^E_A\right)_{A\in\left({{V}}^\mathrm{op}\times{V}\right)^{n-1}}$ give a ${V}$-natural transformation ${E}_{{V}}\circ\left\langle\mathrm{id},\ \nu_\omega{E}_{{V}}^\mathrm{op},\ \nu_\omega{E}_{{V}}\right\rangle\rightarrow\nu_\omega{E}_{{V}}$ such that $J\left(\omega\mathsf{roll}^E\right)$ is invertible. Therefore $\underline{\mathsf{roll}}_\omega \stackrel{\mathrm{def}}{=} \left(\omega\mathsf{roll}^E\right)_{E\in\mathsf{Param}\left({V},\mathcal{T}\right)}$ is a rolling for $\nu_\omega$, and we can define the (free) type recursion $\underline{\nu}_\omega \stackrel{\mathrm{def}}{=} \left(\nu_\omega,\underline{\mathsf{roll}}_\omega\right)$.
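The chain underlying $\nu_\omega$ is the usual chain of finite approximants. For intuition only, the following Haskell sketch (ours; the type \texttt{A} and the function \texttt{cut} are assumed names, and we collect all approximants, with an explicit bottom, inside a single type) spells out the projections for the parametric type $E(X) = \mathsf{1}\sqcup(\mathbf{real}\times X)$, i.e. lists: \texttt{cut n} is the projection onto the $n$-th approximant, and the embeddings are the evident inclusions.

```haskell
-- approximants of mu X. 1 + (Double, X), with Bot as the least element
data A = Bot | Nil | Cons Double A deriving Show

-- the projection onto the n-th stage of the chain: truncate below depth n
cut :: Int -> A -> A
cut 0 _           = Bot
cut _ Bot         = Bot
cut _ Nil         = Nil
cut n (Cons x xs) = Cons x (cut (n - 1) xs)

-- cut n (cut (n + 1) x) == cut n x, and cut n x approximates x from below,
-- mirroring p . e = id and e . p <= id for the ep-pairs a_n
-- e.g. cut 2 (Cons 1 (Cons 2 (Cons 3 Nil))) == Cons 1.0 (Cons 2.0 Bot)
```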
[Underlying $rCBV$ model] There is a forgetful functor $\mathcal{U}_{r\mathcal{BV}} : \mathbf{\omega Cpo}\textrm{-}\mathfrak{C}_{r\mathcal{BV}}\rightarrow\mathfrak{C}_{\mathcal{RBV}}$ defined by $\mathcal{U}_{r\mathcal{BV}}\left({V},\mathcal{T}\right) = \left({V},\mathcal{T},\underline{\nu}_\omega\right)$, which takes every morphism $H$ to its underlying morphism of $CBV$ models. Indeed, from the definition of $\underline{\nu}_\omega$ and the fact that $H$ strictly preserves $\mathbf{\omega Cpo}$-colimits, we conclude that $H$ respects the condition on $rCBV$ model morphisms described in Def. REF.

Remark: The product of $rCBV$ $\mathbf{\omega Cpo}$-pairs is given by $\left({V}_0,\mathcal{T}_0\right)\times\left({V}_1,\mathcal{T}_1\right)\cong\left({V}_0\times{V}_1,\ \mathcal{T}_0\times\mathcal{T}_1\right)$. Moreover, it is clear that $\mathcal{U}_{r\mathcal{BV}}$ preserves finite products.

\subsection{Concrete semantics}
The $CBV$ pair $\left(\mathbf{\omega Cpo},\left(-\right)_\bot\right)$ as in REF clearly satisfies the conditions of Def. REF and, hence, it is also an $rCBV$ $\mathbf{\omega Cpo}$-pair. By Theorem REF, for each $k\in\mathbb{N}\cup\left\lbrace\infty\right\rbrace$, we have unique $rCBV$ model morphisms (REF) and (REF) respecting the assignments of Fig. REF and (). In other words, following Remark REF, we have only one extension of the semantics (REF) and (REF) to the respective languages with recursive types:
\begin{gather}
\llbracket-\rrbracket : \left(\mathbf{Syn}_V^{\mathsf{R}}, \mathbf{Syn}_\mathcal{S}^{\mathsf{R}}, \underline{\nu}_\mathbf{Syn}\right)\rightarrow\mathcal{U}_{r\mathcal{BV}}\left(\mathbf{\omega Cpo},\left(-\right)_\bot\right)\\
\llbracket-\rrbracket_k : \left(\mathbf{Syn}_V^{{\mathsf{R}}{\mathbf{tr}}}, \mathbf{Syn}_\mathcal{S}^{{\mathsf{R}}{\mathbf{tr}}}, \underline{\nu}^{\mathbf{tr}}_\mathbf{Syn}\right)\rightarrow\mathcal{U}_{r\mathcal{BV}}\left(\mathbf{\omega Cpo},\left(-\right)_\bot\right).
\end{gather}
Moreover, by Remark REF, the product $\left(\mathbf{\omega Cpo}\times\mathbf{\omega Cpo},\left(-\right)_\bot\right)$ as in REF is an $rCBV$ $\mathbf{\omega Cpo}$-pair.
], [ "Subscone for $rCBV$ {{formula:cc31d336-483f-48f4-bdfd-f9ecd350b83e}} -pairs", "The first step for our logical relations proof is to verify that, for each $(n,k)\\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , the $CBV$ $\\mathbf {\\omega Cpo}$ -pair $\\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right), \\mathcal {P}_{n, k}\\left( -\\right) _\\bot \\right) $ as in Theorem REF yields an $rCBV$ $\\mathbf {\\omega Cpo}$ -pair.", "In order to do that, we rely on Theorem REF about lifting the $rCBV$ $\\mathbf {\\omega Cpo}$ -pair structure.", "[Impurity preserving/purity reflecting] Let $\\left( {V}, \\mathcal {T}\\right) $ and $\\left( {V}^{\\prime }, \\mathcal {T}^{\\prime } \\right) $ be $CBV$ pairs.", "A $CBV$ pair morphism $H: {V}\\rightarrow {V}^{\\prime }$ is impurity preserving (or, purity reflecting) if, whenever $H(f) = \\eta ^{\\prime } _Y \\circ g $ , there is $\\hat{{f}}$ in ${V}$ such that $\\eta _Y \\circ \\hat{{f}} = f$ .", "Let $\\left( {V}^{\\prime }, \\mathcal {T}^{\\prime } \\right) $ be an $rCBV$ $\\mathbf {\\omega Cpo}$ -pair, and $\\left( {V}, \\mathcal {T}\\right) $ a $CBV$ pair such that ${V}$ is a cocomplete $\\mathbf {\\omega Cpo}$ -cartesian closed category and $T(\\mathsf {0}) $ is terminal.", "If $H : {V}\\rightarrow {V}^{\\prime }$ is a locally full $\\mathbf {\\omega Cpo}$ -functor that yields an impurity preserving $CBV$ pair morphism $\\left( {V}, \\mathcal {T}\\right)\\rightarrow \\mathcal {U}_{r\\mathtt {p}} \\left( {V}^{\\prime }, \\mathcal {T}^{\\prime } \\right)$ , then $\\left( {V}, \\mathcal {T}\\right) $ is an $rCBV$ $\\mathbf {\\omega Cpo}$ -pair.", "If, furthermore, $H$ strictly preserves $\\mathbf {\\omega Cpo}$ -colimits, then $H$ yields an $rCBV$ $\\mathbf {\\omega Cpo}$ -pair morphism.", "We prove that $\\left( {V}, \\mathcal {T}\\right) $ yields an $rCBV$ $\\mathbf {\\omega Cpo}$ -pair.", "By hypothesis, $\\left( {V}, \\mathcal {T}\\right) $ satisfies (REF ).", "We prove the remaining conditions of Def.", "REF below.", "Let $\\eta $ and $\\eta ^{\\prime }$ be respectively the unit of $\\mathcal {T}$ and $\\mathcal {T}^{\\prime }$ .", "Since $H$ is locally full, it reflects full morphisms.", "This implies that, for any $C\\in {V}$ , $\\eta _ C $ is full since $\\eta ^{\\prime }_{H(C)}= H\\left( \\eta _ C \\right) $ is full.", "Since $T(\\mathsf {0})$ is terminal, $J\\left(\\mathsf {0}\\right) $ is a zero object.", "Thus, for each $A\\in {C}$ , we have the pair (REF ) of unique morphisms in ${C}$ .", "Since $\\overline{H}$ preserves initial objects and $\\left( {V}^{\\prime }, \\mathcal {T}^{\\prime } \\right)$ is an $rCBV$ $\\mathbf {\\omega Cpo}$ -pair, we have that (REF ) is the ep-pair of the unique morphisms.", "Finally, since $\\overline{H}$ is a locally full $\\mathbf {\\omega Cpo}$ -functor, it reflects ep-pairs and, hence, (REF ) is an ep-pair.", "$\\left( \\iota _A : J\\left(\\mathsf {0}\\right) \\rightarrow A , {\\iota ^A} : A \\rightarrow J\\left(\\mathsf {0}\\right) \\right)$ $\\left( \\overline{H}\\left( \\iota _A\\right) , \\overline{H}\\left({\\iota ^A}\\right) : \\overline{H}\\left( A\\right) \\rightarrow \\mathfrak {O}\\right)$ Given an ep-pair $u : J(A)\\stackrel{\\hookrightarrow }{\\leftharpoondown }J(B)$ in ${C}$ , the image $H(u) :\\overline{H}J(A)\\stackrel{\\hookrightarrow }{\\leftharpoondown }\\overline{H}J(B) $ by $H$ is an ep-pair.", "Since $\\left( {V}^{\\prime }, \\mathcal {T}^{\\prime }\\right) $ is an $rCBV$ $\\mathbf {\\omega 
$\mathbf{\omega Cpo}$-pair, there is a morphism $\hat{\overline{H}(u)} : H(A)\rightarrow H(B)$ in ${V}^{\prime}$ such that $J^{\prime}\left(\hat{\overline{H}(u)}\right) = \overline{H}\left(u^e\right)$. Since the $CBV$ pair morphism $H : \left({V},\mathcal{T}\right)\rightarrow\mathcal{U}_{r\mathtt{p}}\left({V}^{\prime},\mathcal{T}^{\prime}\right)$ is impurity preserving, we conclude that there is $\hat{u} : A\rightarrow B$ such that $J\left(\hat{u}\right) = u^e$.

As a consequence, in the setting of subscones satisfying Assumption REF, we get the following. Let $\left({V},\mathcal{T}\right)$ be an $rCBV$ $\mathbf{\omega Cpo}$-pair, and (REF) the forgetful $\mathbf{\omega Cpo}$-functor
\[
\underline{\mathcal{L}} : \mathbf{Sub}\left({D}\downarrow G\right)\rightarrow{V}
\]
coming from a pair $\left(G : {V}\rightarrow{D},\ \mathfrak{T}_{sub}\right)$ satisfying Assumption REF. If ${D}$ is cocomplete and $\overline{\mathcal{T}} = \left(\overline{T},\overline{\mathrm{m}},\overline{\eta}\right)$ is a strong monad that is a lifting of the monad $\mathcal{T}$ along (REF) such that (REF) and (REF) hold, then $\left(\mathbf{Sub}\left({D}\downarrow G\right),\overline{\mathcal{T}}\right)$ is an $rCBV$ $\mathbf{\omega Cpo}$-pair and $\underline{\mathcal{L}}$ yields an $rCBV$ $\mathbf{\omega Cpo}$-pair morphism $\left(\mathbf{Sub}\left({D}\downarrow G\right),\overline{\mathcal{T}}\right)\rightarrow\left({V},\mathcal{T}\right)$. The two conditions are:
(i) $\overline{T}$ takes the initial object to the terminal object;
(ii) for any $\left(D,C,j\right)\in\mathbf{Sub}\left({D}\downarrow G\right)$, writing $\overline{T}\left(D,C,j\right) = \left(\underline{\overline{\mathcal{T}}\left(D,C,j\right)},\ T(C),\ \underline{\overline{\mathcal{T}}j}\right)$, the square (REF) induced by the unit $\overline{\eta}$ is a pullback in ${D}$:
\[
\begin{array}{ccc}
D & \longrightarrow & \underline{\overline{\mathcal{T}}\left(D,C,j\right)}\\[4pt]
\downarrow{\scriptstyle\ j} & & \downarrow{\scriptstyle\ \underline{\overline{\mathcal{T}}j}}\\[4pt]
G(C) & \xrightarrow{\ G(\eta_C)\ } & G(T(C))
\end{array}
\]
By Corollary REF, $\mathbf{Sub}\left({D}\downarrow G\right)$ is cocomplete and $\mathbf{\omega Cpo}$-cartesian closed. Moreover, $\underline{\mathcal{L}}$ is locally full, strictly $\mathbf{\omega Cpo}$-cartesian closed, and $\mathbf{\omega Cpo}$-colimit preserving by Theorem REF. Therefore, the fact that $\overline{\mathcal{T}}$ is a lifting of $\mathcal{T}$ through $\underline{\mathcal{L}}$ implies that it yields a $CBV$ pair morphism (REF). Condition (REF) implies that this $CBV$ pair morphism is purity reflecting and, assuming (REF), Theorem REF then implies that $\left(\mathbf{Sub}\left({D}\downarrow G\right),\overline{\mathcal{T}}\right)$ is indeed an $rCBV$ $\mathbf{\omega Cpo}$-pair and that $\underline{\mathcal{L}}$ yields an $rCBV$ $\mathbf{\omega Cpo}$-pair morphism (REF).

In the particular case of interest, we conclude: for each $(n,k)\in\mathbb{N}\times\left(\mathbb{N}\cup\left\lbrace\infty\right\rbrace\right)$, $\left(\mathbf{Sub}\left(\mathbf{\omega Cpo}\downarrow G_{n,k}\right), \mathcal{P}_{n,k}\left(-\right)_\bot\right)$ is an
$rCBV$ $\mathbf{\omega Cpo}$-pair. Moreover, $\underline{\mathcal{L}}_{n,k} : \mathbf{Sub}\left(\mathbf{\omega Cpo}\downarrow G_{n,k}\right)\rightarrow\mathbf{\omega Cpo}\times\mathbf{\omega Cpo}$ yields an $rCBV$ $\mathbf{\omega Cpo}$-pair morphism
\[
\left(\mathbf{Sub}\left(\mathbf{\omega Cpo}\downarrow G_{n,k}\right), \mathcal{P}_{n,k}\left(-\right)_\bot\right)\rightarrow\left(\mathbf{\omega Cpo}\times\mathbf{\omega Cpo},\left(-\right)_\bot\right).
\]
In fact, we already know that $\underline{\mathcal{L}}_{n,k}$ comes from a pair that satisfies Assumption REF. Moreover, $\left(\mathbf{\omega Cpo}\times\mathbf{\omega Cpo},\left(-\right)_\bot\right)$ is an $rCBV$ $\mathbf{\omega Cpo}$-pair and $\mathcal{P}_{n,k}\left(-\right)_\bot$ is a lifting of $\left(-\right)_\bot$ along $\underline{\mathcal{L}}_{n,k}$ satisfying the conditions of Theorem REF. By Theorems REF and REF, we get: $\underline{\mathcal{L}}_{n,k}$ yields an $rCBV$ model morphism
\[
\mathcal{U}_{r\mathcal{BV}}\left(\mathbf{Sub}\left(\mathbf{\omega Cpo}\downarrow G_{n,k}\right), \mathcal{P}_{n,k}\left(-\right)_\bot\right)\rightarrow\mathcal{U}_{r\mathcal{BV}}\left(\mathbf{\omega Cpo}\times\mathbf{\omega Cpo},\left(-\right)_\bot\right).
\]

\subsection{Logical relations as an $rCBV$ model morphism}
Let $(n,k)\in\mathbb{N}\times\left(\mathbb{N}\cup\left\lbrace\infty\right\rbrace\right)$, and assume that $\mathcal{D}$ is sound for primitives (see REF). By the universal property of the $rCBV$ model $\left(\mathbf{Syn}_V^{\mathsf{R}}, \mathbf{Syn}_\mathcal{S}^{\mathsf{R}}, \underline{\nu}_\mathbf{Syn}\right)$ and the chain rule for derivatives, there is only one $rCBV$ model morphism
\[
\overline{\llbracket-\rrbracket}_{n,k} : \left(\mathbf{Syn}_V^{\mathsf{R}}, \mathbf{Syn}_\mathcal{S}^{\mathsf{R}}, \underline{\nu}_\mathbf{Syn}\right)\rightarrow\mathcal{U}_{r\mathcal{BV}}\left(\mathbf{Sub}\left(\mathbf{\omega Cpo}\downarrow G_{n,k}\right), \mathcal{P}_{n,k}\left(-\right)_\bot\right)
\]
that is consistent with the assignment given by (REF), (REF), (REF), and (REF). For any $(n,k)\in\mathbb{N}\times\left(\mathbb{N}\cup\left\lbrace\infty\right\rbrace\right)$, Diag. (REF) commutes.
_\\bot \\right)}};{{[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\times [\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] _ {k} }}](0,0)|l|/->/<0,-300>[{{\\left(\\mathbf {Syn}_V^{\\mathsf {R}},\\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}{,}\\underline{\\nu } _\\mathbf {Syn}\\right)}}`{{\\mathcal {U}_{r\\mathcal {BV}} \\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right),\\mathcal {P}_{n,k}\\left( -\\right) _\\bot \\right)}};{{ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} }}](0,-300)|b|/->/<2025,0>[{{\\mathcal {U}_{r\\mathcal {BV}} \\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n,k} \\right),\\mathcal {P}_{n,k}\\left( -\\right) _\\bot \\right)}}`{{\\mathcal {U}_{r\\mathcal {BV}} \\left(\\mathbf {\\omega Cpo}\\times \\mathbf {\\omega Cpo}{,}\\left( -\\right) _\\bot \\right)}};{{\\mathcal {U}_{r\\mathcal {BV}} \\left({\\underline{\\mathcal {L}}}_{n,k}\\right)}}]$ Both $\\left([\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\times [\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] _ {k} \\right)\\circ \\left( \\mathrm {id}\\times \\mathbb {ID}\\right) $ and $\\mathcal {U}_{r\\mathcal {BV}} \\left({\\underline{\\mathcal {L}}}_{n,k}\\right)\\circ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[-]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} $ yield $rCBV$ model morphisms that are consistent with the assignment given by the object $\\left( \\mathbb {R}, \\mathbb {R}\\times \\mathbb {R}^k \\right) $ and the morphisms (REF ), () and ().", "Therefore, by the universal property of $\\left(\\mathbf {Syn}_V^{\\mathsf {R}},\\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}{,}\\underline{\\nu } _\\mathbf {Syn}\\right) $ , we conclude that Diag.", "(REF ) indeed commutes." ], [ "AD correctness theorem for non-recursive data types", "The correctness theorem for non-recursive data types follows from Lemma REF and Corollary REF .", "That is to say, we have: Let $\\displaystyle t: \\coprod _{r\\in \\mathfrak {L} } ^{s _r} \\rightarrow \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}\\left( \\coprod _{j\\in L} ^{l _j} \\right) $ be a morphism in $\\mathbf {Syn}_V^{\\mathsf {R}}$ .", "We have that $\\displaystyle [\\hspace{-2.5pt}[ t ]\\hspace{-2.5pt}] : \\coprod _{r\\in \\mathfrak {L} } \\mathbb {R}^{s _r} \\rightarrow \\left( \\coprod _{j\\in L} \\mathbb {R}^{l _j} \\right) _\\bot $ is differentiable and, for any $k\\in \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , $ [\\hspace{-2.5pt}[\\mathbb {ID}\\left( t \\right) ]\\hspace{-2.5pt}] _ {k} = \\mathfrak {d}^{k}\\left( [\\hspace{-2.5pt}[ t ]\\hspace{-2.5pt}] \\right) $ ." 
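, "To make the shape of this statement concrete, the following is a small Haskell illustration of ours (not from the paper): a partial map between coproducts of Euclidean spaces and the dual-number transport that the theorem asserts the macro computes; Maybe stands in for the lifting $\\left( -\\right) _\\bot $ , Either for a binary case of the coproducts, and the names fEx, dfEx are hypothetical.", "-- our illustration: fEx : R^2 + R -> R_bot, undefined at the origin of the left summand
fEx :: Either (Double, Double) Double -> Maybe Double
fEx (Left (x, y))
  | x == 0 && y == 0 = Nothing          -- partiality
  | otherwise        = Just (x * y)
fEx (Right x)        = Just (x + 1)

-- its dual-number transport: every real is paired with a tangent, chain rule applied
dfEx :: Either ((Double, Double), (Double, Double)) (Double, Double)
     -> Maybe (Double, Double)
dfEx (Left ((x, y), (dx, dy)))
  | x == 0 && y == 0 = Nothing          -- undefinedness is preserved
  | otherwise        = Just (x * y, x * dy + y * dx)  -- product rule
dfEx (Right (x, dx)) = Just (x + 1, dx)"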
], [ "AD on recursive data types", "The LR argument we presented provides us with an easy way to compute the logical relations of general recursive types: namely, since $\\left(\\mathbf {Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{n, k} \\right), \\mathcal {P}_{n, k}\\left( -\\right) _\\bot \\right)$ is an $rCBV$ $\\mathbf {\\omega Cpo}$ -pair, the recursive types will be computed out of suitable colimits.", "This gives us useful information about the semantics of $\\scalebox {0.8}{\\mathcal {D}}_{}({t} )$ for a program ${x}:{\\tau }\\vdash {t}:{\\sigma }$ where ${\\tau }$ and ${\\sigma }$ are recursive types.", "In particular, we can extend the correctness result REF to any data type, including those involving recursion.", "We denote by $\\mathbf {Syn}_C^{\\mathsf {R}}$ the Kleisli $\\mathbf {Syn}_V^{\\mathsf {R}}$ -category associated with $\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}\\right)$ .", "Moreover, we respectively denote by (REF ) and (REF ) the coproduct, product and $n$ -diagonal functors.", "$\\sqcup , \\times : \\mathbf {Syn}_V^{\\mathsf {R}}\\times \\mathbf {Syn}_V^{\\mathsf {R}}\\rightarrow \\mathbf {Syn}_V^{\\mathsf {R}}$ $\\mathrm {diag}_{n} : \\left(\\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^\\mathrm {op} \\times \\mathbf {Syn}_V^{\\mathsf {R}}\\rightarrow \\left(\\left(\\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^\\mathrm {op} \\times \\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^n$ Let $R, I, O : \\left(\\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^\\mathrm {op} \\times \\mathbf {Syn}_V^{\\mathsf {R}}\\rightarrow \\mathbf {Syn}_V^{\\mathsf {R}}$ be the constant functors which are, respectively, equal to $$ , $\\mathsf {1}$ and $\\mathsf {0}$ .", "We define the set $\\mathfrak {P}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) $ inductively by REF , REF and REF .", "The functors $R, I, O$ are in $\\mathfrak {P}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) $ .", "Moreover, the projection $\\pi _{2}:\\left(\\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^\\mathrm {op} \\times \\mathbf {Syn}_V^{\\mathsf {R}}\\rightarrow \\mathbf {Syn}_V^{\\mathsf {R}}$ belongs to $\\mathfrak {P}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) $ .", "For each $n\\in \\mathbb {N}^\\ast $ , if the functors (REF ) belong to $\\mathfrak {P}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) $ , then the functors (REF ) and (REF ) are in $\\mathfrak {P}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) $ .", "If $E = \\left({E}_{\\mathbf {Syn}_V^{\\mathsf {R}}}, {E}_{\\mathbf {Syn}_C^{\\mathsf {R}}} \\right)\\in \\mathsf {Param}\\left( \\mathbf {Syn}_V^{\\mathsf {R}} , \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}\\right) $ is such that ${E}_{\\mathbf {Syn}_V^{\\mathsf {R}}}\\in \\mathfrak {P}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) $ , then $ \\left( {\\nu _\\mathbf {Syn}{E}}_{\\mathbf {Syn}_V^{\\mathsf {R}}} \\right)$ is in $\\mathfrak {P}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf 
{R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) $ .", "We define the set $\\mathsf {Param}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) $ of parametric data types by (REF ).", "$G, G^{\\prime } : \\left(\\left(\\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^\\mathrm {op} \\times \\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^n \\rightarrow \\mathbf {Syn}_V^{\\mathsf {R}}$ $G\\circ \\mathrm {diag}_{n} : \\left(\\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^\\mathrm {op} \\times \\mathbf {Syn}_V^{\\mathsf {R}}\\rightarrow \\mathbf {Syn}_V^{\\mathsf {R}}$ $\\times \\circ \\left( G\\times G^{\\prime }\\right), \\sqcup \\circ \\left( G\\times G^{\\prime }\\right) : \\left(\\left(\\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^\\mathrm {op} \\times \\mathbf {Syn}_V^{\\mathsf {R}}\\right) ^{2n} \\rightarrow \\mathbf {Syn}_V^{\\mathsf {R}}$ $\\mathsf {Param}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}} , \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}},\\underline{\\nu } _\\mathbf {Syn}\\right) = \\left\\lbrace E \\in \\mathsf {Param}\\left( \\mathbf {Syn}_V^{\\mathsf {R}} , \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}\\right) : {E}_{\\mathbf {Syn}_V^{\\mathsf {R}}}\\in \\mathfrak {P}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn} \\right) \\right\\rbrace $", "Let $E$ be an $n$ -variable $\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn}\\right)$ -parametric data type, where $n\\in \\mathbb {N}^\\ast $ .", "There is a countable family of natural numbers $\\displaystyle \\left( \\mathsf {m}_{\\left( j, \\mathtt {T}\\right) } \\right) _{\\left( j, \\mathtt {T}\\right)\\in \\left( \\mathbb {I}_{n}\\cup \\left\\lbrace 0\\right\\rbrace \\right)\\times {\\mathsf {Tree}}} $ such that, for any $rCBV$ model morphism $H : \\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn}\\right) \\rightarrow \\mathcal {U}_{r\\mathcal {BV}} \\left( {V}, \\mathcal {T}\\right) $ and any $H$ -compatible pair $\\left( E,F\\right)$ , we have that (REF ) holds, where the isomorphism $\\cong $ is induced by coprojections and projections (that is to say, it is just a reorganization of the involved coproducts and products): 
$H\\left(\\right) = \\coprod _{j\\in L} H\\left( \\right) ^{l_j}$ ${F}_{{V}}\\left( W_j, Y_j\\right) _{j\\in \\mathbb {I}_{n}} \\cong \\coprod _{\\mathtt {T}\\in {\\mathsf {Tree}} }\\left( {H\\left( \\right) }^{\\mathsf {m}_{\\left( 0,\\mathtt {T}\\right) }}\\times \\prod _{j=1}^{n} Y_j ^{\\mathsf {m}_{\\left( j,\\mathtt {T}\\right) } }\\right)$", "As a consequence, if $\\in \\mathbf {Syn}_V^{\\mathsf {R}}$ corresponds to a data type ${\\tau }$ , then there is a countable family $\\left( l_j \\right) _{j\\in L}\\in \\mathbb {N}^L $ of natural numbers such that (REF ) holds for any $rCBV$ model morphism $H : \\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}, \\underline{\\nu } _\\mathbf {Syn}\\right) \\rightarrow \\mathcal {U}_{r\\mathcal {BV}} \\left( {V}, \\mathcal {T}\\right) $ .", "The result follows by induction.", "The non-trivial part is a consequence of the following.", "Let $\\left( \\tilde{E}, \\tilde{F}\\right) \\in \\mathsf {Param}^{\\mathfrak {d}}\\left( \\mathbf {Syn}_V^{\\mathsf {R}} , \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}\\right) \\times \\mathsf {Param}{ \\left(\\mathcal {U}_{r\\mathcal {BV}} \\left( {V}, \\mathcal {T}\\right) \\right) }$ be an $H$ -compatible pair of $\\left( n+1\\right)$ -variable parametric types where ${\\tilde{F}}_{ {V}} $ is given by (REF ) for some countable family $\\displaystyle \\left( \\mathsf {s}_{\\left( i,r\\right) } \\right) _{{\\left(i,r\\right)\\in \\left( \\mathbb {I}_{n+1}\\cup \\left\\lbrace 0\\right\\rbrace \\right)\\times {\\mathfrak {L} }} }$ of natural numbers.", "We prove below that $\\left( \\nu _\\mathbf {Syn}{\\tilde{E}},F\\right) $ is $H$ -compatible for some $F$ such that ${F}_{{V}}$ satisfies Eq. (REF ).", "By the definition of $rCBV$ model morphism, we have that $\\left( \\nu _\\mathbf {Syn}{\\tilde{E}} , \\nu _{\\omega } {\\tilde{F}} \\right) $ is $H$ -compatible.", "Hence, we only need to prove that ${\\nu _{\\omega } {\\tilde{F}} }_{{V}}$ is given by (REF ).", "We inductively define the set $\\mathsf {Tree}$ by the following.", "Let $r\\in \\mathfrak {L} $ : (a) if $\\mathsf {s}_{\\left( n+1,r\\right) } = 0$ , then $r\\in \\mathsf {Tree}$ ; (b) if $\\mathsf {s}_{\\left( n+1,r\\right) } \\ne 0$ , then, for any $\\mathtt {T}\\in \\mathsf {Tree}^{\\mathsf {s}_{\\left( n+1,r\\right) }} $ , the pair $\\left( \\mathtt {T}, r\\right) $ is in $\\mathsf {Tree}$ .", "We inductively define the family $\\left( \\mathsf {m}_{\\left( j, \\mathtt {T} \\right) }\\right) _{\\left( j, \\mathtt {T}\\right) \\in \\left( \\mathbb {I}_{n}\\cup \\left\\lbrace 0\\right\\rbrace \\right) \\times \\mathsf {Tree}} $ of indices by the following.", "Let $r\\in \\mathfrak {L} $ : (a) if $\\mathsf {s}_{\\left( n+1,r\\right) } = 0 $ , we define $\\mathsf {m}_{\\left( j, r \\right) } = \\mathsf {s}_{\\left( j, r\\right) }$ for each $j$ ; (b) if $\\mathsf {s}_{\\left( n+1,r\\right) } \\ne 0 $ , given $\\mathtt {T} = \\left( \\mathtt {T}_i\\right) _{i\\in \\mathbb {I}_{\\mathsf {s}_{\\left( n+1,r\\right) }} }\\in \\mathsf {Tree}^{\\mathsf {s}_{\\left( n+1,r\\right) }} $ , we define $\\mathsf {m}_{\\left( j, \\left( \\mathtt {T} , r \\right) \\right) } $ by (REF ) for each $j$ .", "${\\tilde{F}}_{{V}}\\left( W_i, Y_i\\right) _{i\\in \\mathbb {I}_{n+1}} =\\coprod _{r\\in {\\mathfrak {L} } }\\left( {H\\left( \\right) }^{\\mathsf {s}_{\\left( 0,r\\right) }}\\times \\prod _{i=1}^{n+1} Y_i ^{\\mathsf {s}_{\\left( i,r\\right) } }\\right)$ $\\mathsf {m}_{\\left( j, \\left( \\mathtt {T} , r \\right) \\right) } = \\mathsf 
{s}_{\\left( j,r\\right) } + \\sum _{i=1}^{\\mathsf {s}_{\\left( n+1,r\\right) }}\\mathsf {m}_{\\left( j, \\mathtt {T}_i \\right) }$", "Let $X=\\left( W_i, Y_i\\right) _{i\\in \\mathbb {I}_{n}}\\in \\left({V}^\\mathrm {op} \\times {V}\\right) ^n $ , $\\mathfrak {F}_X= {\\tilde{F}}^{X}_{{V}}\\left( \\mathsf {0}, - \\right) $ and $\\iota _{} $ the obvious unique morphism.", "The colimit of (REF ) is isomorphic to (REF ).", "Hence, by the definition of the fddt operator $\\nu _{\\omega } $ of $ \\mathcal {U}_{r\\mathcal {BV}} \\left( {V}, \\mathcal {T}\\right) = \\left( {V}, \\mathcal {T}, \\underline{\\nu }_{\\omega } \\right)$ , ${\\nu _{\\omega } \\tilde{F}}_{{V}} $ is given by the formula given in (REF ).", "$\\mathsf {0}\\xrightarrow {\\ \\iota \\ }\\mathfrak {F}_X\\left(\\mathsf {0}\\right)\\xrightarrow {\\ \\mathfrak {F}_X\\left(\\iota \\right)\\ }\\mathfrak {F}_X^2\\left(\\mathsf {0}\\right)\\xrightarrow {\\ \\mathfrak {F}_X^2\\left(\\iota \\right)\\ }\\mathfrak {F}_X^3\\left(\\mathsf {0}\\right)\\rightarrow \\cdots $ $\\coprod _{\\mathtt {T}\\in {\\mathsf {Tree}} }\\left( {H\\left( \\right) }^{\\mathsf {m}_{\\left( 0,\\mathtt {T}\\right) }}\\times \\prod _{j=1}^{n} Y_j ^{\\mathsf {m}_{\\left( j,\\mathtt {T}\\right) } }\\right)$", "This completes the proof.", "Finally, if $\\in \\mathbf {Syn}_V^{\\mathsf {R}}$ corresponds to a data type ${\\tau }$ , then the constant parametric type $ \\underline{} $ equal to $$ is an $\\left( \\mathbf {Syn}_V^{\\mathsf {R}}, \\mathbf {Syn}_\\mathcal {S}^{\\mathsf {R}}\\right)$ -parametric data type of degree 1.", "Hence, denoting by $\\underline{H}$ the constant parametric type equal to $H\\left(\\right) $ , since $\\left(\\underline{} , \\underline{H} \\right) $ is $H$ -compatible, we conclude that (REF ) holds for some $\\left( l_j\\right) _{j\\in L}$ where $L$ is countable.", "$ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[R]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} = \\coprod _{j\\in L} \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{n,k} ^{l_j}$ $[\\hspace{-2.5pt}[R]\\hspace{-2.5pt}] = \\coprod _{j\\in L} \\mathbb {R}^{l_j}$", "Let $\\displaystyle t: \\rightarrow $ be a morphism in $\\mathbf {Syn}_V^{\\mathsf {R}}$ .", "If $$ and $$ correspond to data types, $\\displaystyle [\\hspace{-2.5pt}[ t ]\\hspace{-2.5pt}] : \\coprod _{r\\in \\mathfrak {L} } \\mathbb {R}^{s _r} \\rightarrow \\left( \\coprod _{j\\in L} \\mathbb {R}^{l _j} \\right) _\\bot $ is differentiable and, for any $k\\in \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right) $ , $ [\\hspace{-2.5pt}[\\mathbb {ID}\\left( t \\right) ]\\hspace{-2.5pt}] _ {k} = \\mathfrak {d}^{k}\\left( [\\hspace{-2.5pt}[ t ]\\hspace{-2.5pt}] \\right) $ .", "Indeed, by Theorem REF , we have that there are countable families $\\left( s _r\\right) _{r\\in \\mathfrak {L} }$ and $\\left( l_j\\right) _{j\\in L}$ such that $ \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[t]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{{s_i},k} : \\coprod _{r\\in \\mathfrak {L} } \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{{s_i},k} ^{s _r} \\rightarrow \\mathcal {P}_{{s_i}, k}\\left( \\coprod _{j\\in L} \\overline{[\\hspace{-2.5pt}[\\hspace{-2.5pt}[]\\hspace{-2.5pt}] \\hspace{-2.5pt}]}_{{s_i},k} ^{l _j} \\right) _\\bot $ is a morphism in $\\mathbf 
{Sub}\\left( \\mathbf {\\omega Cpo}\\downarrow G_{{s_i}, k} \\right)$ , for each $i\\in \\mathfrak {L} $ and any $k\\in \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace $ .", "By the commutativity of (REF ) for any $(s_i, k)\\in \\mathbb {N}\\times \\left( \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace \\right)$ , we get that the pair $\\left( [\\hspace{-2.5pt}[t]\\hspace{-2.5pt}] , [\\hspace{-2.5pt}[\\mathbb {ID}{\\left( t\\right) } ]\\hspace{-2.5pt}] _ {k} \\right) $ defines the morphism (REF ) for each $i\\in \\mathfrak {L} $ .", "By Corollary REF , this implies that $[\\hspace{-2.5pt}[t]\\hspace{-2.5pt}] $ is differentiable and $ [\\hspace{-2.5pt}[\\mathbb {ID}\\left( t \\right) ]\\hspace{-2.5pt}] _ {k} = \\mathfrak {d}^{k}\\left( [\\hspace{-2.5pt}[ t ]\\hspace{-2.5pt}] \\right) $ .", "Finally, as a consequence, we get: Assume that $\\mathbf {vect} $ implements the vector space $\\mathbb {R}^k$ , for some $k\\in \\mathbb {N}\\cup \\left\\lbrace \\infty \\right\\rbrace $ .", "For any program ${x}:{\\tau }\\vdash {t}:{\\sigma }$ where ${\\tau },{\\sigma }$ are data types (including recursive data types), we have that $[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] $ is differentiable and, moreover, $[\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}({t} )]\\hspace{-2.5pt}] _ {k} = \\mathfrak {d}^{k}\\left( [\\hspace{-2.5pt}[{t} ]\\hspace{-2.5pt}] \\right) $ provided that $\\scalebox {0.8}{\\mathcal {D}}_{}$ is sound for primitives.", "Following the considerations of REF and REF , it follows from Theorem REF that $\\scalebox {0.8}{\\mathcal {D}}_{}$ as defined in REF correctly provides us with forward and reverse AD transformations for data types." ], [ "AD on arrays", "Arrays are semantically the same as lists: in our language, if ${\\tau }$ is a data type, an array of ${\\tau }$ is given by $\\mathbf {\\mu }{\\alpha }.\\mathbf {1}\\,{\\mathop {\\sqcup }}\\, {\\tau }\\,{\\mathop {\\times }}\\,{\\alpha }$ .", "It should be noted that, if ${x}:\\mathbf {\\mu }{\\alpha }.\\mathbf {1}\\,{\\mathop {\\sqcup }}\\, {\\tau }\\,{\\mathop {\\times }}\\,{\\alpha }\\vdash {t}:\\mathbf {\\mu }{\\alpha }.\\mathbf {1}\\,{\\mathop {\\sqcup }}\\, {\\sigma }\\,{\\mathop {\\times }}\\,{\\alpha }$ , we have that $[\\hspace{-2.5pt}[{t}]\\hspace{-2.5pt}] : \\coprod _{i=1}^{\\infty } [\\hspace{-2.5pt}[{\\tau }]\\hspace{-2.5pt}] \\rightarrow \\left( \\coprod _{i=1}^{\\infty } [\\hspace{-2.5pt}[{\\sigma }]\\hspace{-2.5pt}] \\right) _\\bot .$", "By Theorem REF , if ${\\tau }$ and ${\\sigma }$ are data types, we get that $\\mathfrak {d}^{k}\\left( [\\hspace{-2.5pt}[{t} ]\\hspace{-2.5pt}] \\right) $ (as defined in (REF )) is equal to $[\\hspace{-2.5pt}[\\scalebox {0.8}{\\mathcal {D}}_{}({t} )]\\hspace{-2.5pt}] _ {k} $ .", "Therefore, Theorem REF already encompasses the correctness for arrays (of data types)."
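, "For the functional programmer, the following is a hedged Haskell sketch of ours of the array type as the stated fixpoint $\\mathbf {\\mu }{\\alpha }.\\mathbf {1}\\,{\\mathop {\\sqcup }}\\, {\\tau }\\,{\\mathop {\\times }}\\,{\\alpha }$ ; the names Mu, ListF, nil and cons are our own and not part of the formal development.", "newtype Mu f = Roll (f (Mu f))                 -- the fixpoint mu alpha. f alpha
data ListF tau alpha = NilF | ConsF tau alpha  -- 1 + tau x alpha
type Array tau = Mu (ListF tau)

nil :: Array tau                               -- roll (iota_1 ())
nil = Roll NilF

cons :: tau -> Array tau -> Array tau          -- roll (iota_2 (x, xs))
cons x xs = Roll (ConsF x xs)"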
], [ "Related Work", "This is an improved version of the unpublished preprint [43].", "In particular, we have simplified the correctness argument to no longer depend on diffeological or sheaf-structure and to have it apply to arbitrary differentiable (rather than merely smooth) operations.", "We have further simplified the subsconing technique for recursive types.", "There has recently been a flurry of work studying AD from a programming language point of view, a lot of it focussing on functional formulations of AD and their correctness.", "Examples of such papers are [32], [12], [35], [6], [1], [19], [28], [44], [26], [18], [45], [22], [37].", "Of these papers, [32], [1], [28], [37] are particularly relevant as they also consider automatic differentiation of languages with partial features.", "Here, [32] considers an implementation that differentiates recursive programs and the implementation of [37] even differentiates code that uses recursive types.", "They do not give correctness proofs, however.", "Existing work on differential restriction categories [9] seems to give a more abstract semantic study of the interaction between forward-mode automatic differrentiation and partiality.", "We found that for our purposes, a concrete semantics in terms of $\\omega $ -cpos sufficed, however.", "The present paper can be seen as giving a correctness proof of the techniques implemented by [37].", "[1] does give a denotational correctness proof of AD on a first-order functional language with (first-order) recursion.", "The first-orderness of the language allows the proof to proceed by plain induction rather than needing logical technique.", "[28] proves the correctness of basically the same AD algorithms that we consider in this paper when restricted to PCF with a base type of real numbers and a real conditional.", "Their proof relies on operational semantic techniques.", "Our contribution is to give an alternative denotational argument, which we believe is simple and systematic, and to extend it to apply to languages which, additionally, have the complex features of recursively defined datastructures that we find in realistic ML-family languages.", "Such AD for languages with expressive features such as recursion and user-defined datatypes has been called for by the machine learning community [20], [46].", "Previously, the subtlety of the interaction of automatic differentiation and real conditionals had first been observed by [3].", "Our work gives a relatively simple denotational semantics for recursive types, which can be considered as an important special case of bilimit compact categories [23].", "Bilimit compact categories are themselves, again, an important special case of the very general semantics of recursive types in terms of algebraically compact categories [15].", "We believe that working with this special case of the semantics significantly simplifies our presentation.", "In particular, this simplified semantics of recursive types allows us to give a very simple but powerful (open, semantic) logical technique for recursive types.", "It is an alternative to the two existing techniques for logical relations for recursive types: relational properties of domains [33], which is quite general but very technical to use, in our experience, and step-indexed logical relations [2], which are restricted to logical relations arguments about syntax, hence not applicable to our situation.", "Finally, we hope that our work adds to the existing body of programming languages literature on automatic 
differentiation and recursion (and recursive types).", "In particular, we believe that it provides a simple, principled denotational explanation of how AD and expressive partial language features should interact.", "We plan to use it to generalise and prove correct the more advanced AD technique CHAD [44], [45], [26] when applied to languages with partial features.", "This project has received funding via NWO Veni grant number VI.Veni.202.124 as well as the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 895827.", "This research was supported through the programme “Oberwolfach Leibniz Fellows” by the Mathematisches Forschungsinstitut Oberwolfach in 2022.", "It was also partially supported by the CMUC, Centre for Mathematics of the University of Coimbra - UIDB/00324/2020, funded by the Portuguese Government through FCT/MCTES." ], [ "Fine grain call-by-value and AD", "In §, we have discussed a standard coarse-grain CBV language, also known as the $\\lambda _C$ -calculus, computational $\\lambda $ -calculus [30], or, plainly, CBV.", "In this appendix, we discuss an alternative presentation in terms of fine-grain CBV [24], [23] (also known as Moggi's monadic metalanguage [31]).", "While it is slightly more verbose, this presentation clarifies the precise universal property that is satisfied by the syntax of our language." ], [ "Fine grain call-by-value", "We consider a standard fine-grain call-by-value language (with complex values) over a ground type $$ of real numbers, real constants $\\underline{c}\\in \\mathrm {Op}_0$ for $c\\in \\mathbb {R}$ , and certain basic operations $\\mathrm {op} \\in \\mathrm {Op}_n$ for each natural number $n\\in \\mathbb {N}$ .", "The types ${\\tau },{\\sigma },{\\rho }$ , (complex) values $v,w,u$ , and computations ${t},{s},{r}$ of our language are as follows.", "$\\begin{array}[t]{l@{\\quad \\!\\!}*3{l@{}}@{\\,}l}{\\tau }, {\\sigma }, {\\rho } & ::=& & {-25mu}\\qquad \\text{types} \\\\&\\mathrel {\\vert }& & \\qquad \\text{numbers}\\\\&\\mathrel {\\vert }& \\mathbf {0}\\mathrel {\\vert }{\\tau } + {\\sigma } & \\qquad \\text{sums}\\\\&&&\\\\v, w, u & ::=& & {-25mu}\\qquad \\text{values} \\\\&\\mathrel {\\vert }& {x},{y},{z} & \\qquad \\text{variables}\\\\&\\mathrel {\\vert }& \\underline{c} & \\qquad \\text{constant}\\\\&\\mathrel {\\vert }& \\mathbf {case}\\,v\\,\\mathbf {of}\\,\\lbrace \\,\\rbrace & \\qquad \\text{sum match}\\\\&\\mathrel {\\vert }& \\mathbf {inl}\\,{v} \\mathrel {\\vert }\\mathbf {inr}\\,{v} & \\qquad \\text{inclusions}\\\\&&&\\\\{t}, {s}, {r} & ::=& & {-25mu}\\qquad \\text{computations} \\\\&\\mathrel {\\vert }& {t}\\,\\mathbf {to}\\,{x}.\\,{s} & \\qquad \\text{sequencing}\\\\&\\mathrel {\\vert }& \\mathbf {return}\\,v & \\qquad \\text{pure comp.}\\\\&\\mathrel {\\vert }& \\mathrm {op} (v_1,\\ldots ,v_n) & \\qquad \\text{operation}\\\\&\\mathrel {\\vert }& \\mathbf {case}\\,v\\,\\mathbf {of}\\,\\lbrace \\,\\rbrace & \\qquad \\text{sum match}\\\\\\end{array}$   $\\begin{array}[t]{l@{\\quad \\!\\!}*3{l@{}}@{\\,}l}&\\mathrel {\\vert }\\quad \\, & \\mathbf {1}\\mathrel {\\vert }{\\tau }_1 \\,{\\mathop {\\times }}\\, {\\tau }_2 & \\qquad \\text{products}\\\\&\\mathrel {\\vert }& {\\tau } \\rightarrow {\\sigma } & \\qquad \\text{function} \\\\& & &\\\\&\\mathrel {\\vert }& \\mathbf {case}\\,v\\,\\mathbf {of}\\,\\lbrace \\begin{array}{l}\\;\\;\\mathbf {inl}\\,{x}\\rightarrow w\\\\\\mathrel {\\vert }\\mathbf {inr}\\,{y}\\rightarrow u\\end{array}\\rbrace 
\\hspace{-15.0pt}\\; & \\qquad \\text{sum match}\\\\ &\\mathrel {\\vert }\\quad \\, & \\langle \\,\\rangle \\ \\mathrel {\\vert }\\langle v, w\\rangle & \\qquad \\text{tuples}\\\\&\\mathrel {\\vert }\\quad \\, & \\mathbf {case}\\,v\\,\\mathbf {of}\\,\\langle {x}, {y}\\rangle \\rightarrow w & \\qquad \\text{product match}\\\\&\\mathrel {\\vert }& \\lambda {x}.{{t}} & \\qquad \\text{abstractions} \\\\& \\mathrel {\\vert }& \\mu {x}.v & \\qquad \\text{term recursion}\\\\&&&\\\\&\\mathrel {\\vert }& \\mathbf {case}\\,v\\,\\mathbf {of}\\,\\lbrace \\begin{array}{l}\\;\\;\\mathbf {inl}\\,{x}\\rightarrow {t}\\\\\\mathrel {\\vert }\\mathbf {inr}\\,{y}\\rightarrow {s}\\end{array}\\rbrace \\hspace{-15.0pt}\\; & \\qquad \\text{sum match}\\\\&\\mathrel {\\vert }\\quad \\, & \\mathbf {case}\\,v\\,\\mathbf {of}\\,\\langle {x}, {y}\\rangle \\rightarrow {t} & \\qquad \\text{product match}\\\\&\\mathrel {\\vert }& v\\ w & \\qquad \\text{function app.}\\\\&\\mathrel {\\vert }&\\mathbf {iterate}\\,{t}\\,\\mathbf {from}\\,{x}=v & \\qquad \\text{iteration}\\\\&\\mathrel {\\vert }&\\mathbf {sign}\\,v & \\qquad \\text{sign function}\\\\\\end{array}$ We will use sugar $\\mathbf {if}\\,v\\,\\mathbf {then}\\,{t}\\,\\mathbf {else}\\,{s}\\,\\stackrel{\\mathrm {def}}{=}\\mathbf {sign}\\,(v)\\,\\mathbf {to}\\,{x}.\\,\\mathbf {case}\\,{x}\\,\\mathbf {of}\\,\\lbrace {\\_\\rightarrow {{t}}\\mathrel {\\big \\vert }\\_\\rightarrow {{s}}}\\rbrace \\\\\\mathbf {fst}\\,v\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,v\\,\\mathbf {of}\\,\\langle {x}, \\_\\rangle \\rightarrow {x}\\\\\\mathbf {snd}\\,v\\stackrel{\\mathrm {def}}{=}\\mathbf {case}\\,v\\,\\mathbf {of}\\,\\langle \\_, {x}\\rangle \\rightarrow {x}\\\\\\mathbf {let} \\,\\mathbf {rec}\\,f\\!\\left( {x}\\right) = {t}\\,\\mathbf {in}\\,{s}\\stackrel{\\mathrm {def}}{=}(\\mu f.\\mathbf {return}\\,(\\lambda {x}.{t}))\\,\\mathbf {to}\\,f.\\,{s}.$ We could also define iteration as syntactic sugar: $\\mathbf {iterate}\\,{t}\\,\\mathbf {from}\\,{x}=v\\stackrel{\\mathrm {def}}{=}\\left(\\mu {z}.\\lambda {x}.{{t}\\,\\mathbf {to}\\,{y}.\\,\\mathbf {case}\\,{y}\\,\\mathbf {of}\\,\\lbrace \\mathbf {inl}\\,{x}^{\\prime }\\rightarrow {z}\\, {x}^{\\prime }\\, \\mid \\mathbf {inr}\\,{x}^{\\prime \\prime }\\rightarrow \\mathbf {return}\\,{x}^{\\prime \\prime }\\rbrace }\\right)\\, v$ (a small Haskell rendering of this iteration construct is sketched after the equational theory below).", "The typing rules are in Figure REF .", "Figure: Typing rules for our fine-grain CBV language with iteration and real conditionals.", "We use a typing judgement $\\vdash ^v$ for values and $\\vdash ^c$ for computations." ], [ "Equational theory", "We consider our language up to the usual $\\beta \\eta $ -equational theory for fine-grain CBV, which is displayed in Fig. REF .", "Figure: Standard $\\beta \\eta $ -laws for fine-grain CBV.", "We write $\\stackrel{\\# {x}_1,\\ldots ,{x}_n}{=}$ to indicate that the variables are fresh in the left hand side.", "In the top right rule, ${x}$ may not be free in ${r}$ .", "Equations hold on pairs of terms of the same type.", "Under the translation of coarse-grain CBV into fine-grain CBV, this equational theory induces precisely that of Section ."
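, "Returning to the iteration construct defined above, the following is a hypothetical Haskell rendering of ours (the names iter and divEx are our own): iterate t from x = v keeps feeding Left-results of the loop body back in and stops at the first Right, exactly like the sugar.", "iter :: (a -> Either a b) -> a -> b
iter t v = case t v of
  Left v'  -> iter t v'   -- inl: continue iterating from the new state
  Right b  -> b           -- inr: done

-- e.g. integer division by repeated subtraction: 17 divided by 5 gives 3
divEx :: Int
divEx = iter (\\(q, r) -> if r >= 5 then Left (q + 1, r - 5) else Right q) (0, 17)"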
], [ "The $CBV$ model {{formula:c4c851e9-70ed-44f8-99cd-eacd93a70503}}", "Our fine grain call-by-value language corresponds with a $CBV$ model (see Def.", "REF ).", "We define the category $\\mathbf {Syn}_V$ of values, which has types as objects.", "$\\mathbf {Syn}_V({\\tau },{\\sigma })$ consists of $(\\alpha )\\beta \\eta $ -equivalence classes of values ${x}:{\\tau }\\vdash ^v v:{\\sigma }$ , where identities are ${x}:{\\tau }\\vdash ^v {x}:{\\sigma }$ and composition of ${x}:{\\tau }\\vdash ^v v:{\\sigma }$ and ${y}:{\\sigma }\\vdash ^v w:{\\rho }$ is given by ${x}:{\\tau }\\vdash ^v w{}[^{v}\\!/\\!_{{y}}]:{\\rho }$ .", "$\\mathbf {Syn}_V$ is bicartesian closed.", "Similarly, we define the category $\\mathbf {Syn}_C$ of computations, which also has types as objects.", "$\\mathbf {Syn}_C({\\tau },{\\sigma })$ consists of $(\\alpha )\\beta \\eta $ -equivalence classes of computations ${x}:{\\tau }\\vdash ^c {t}:{\\sigma }$ , where identities are ${x}:{\\tau }\\vdash ^c \\mathbf {return}\\,{x}:{\\sigma }$ and composition of ${x}:{\\tau }\\vdash ^c {t}:{\\sigma }$ and ${y}:{\\sigma }\\vdash ^c {s}:{\\rho }$ is given by ${x}:{\\tau }\\vdash ^c {t}\\,\\mathbf {to}\\,{y}.\\,{s}:{\\rho }$ .", "$\\mathbf {Syn}_C$ is a $\\mathbf {Syn}_V$ -category.", "We define the $\\mathbf {Syn}_V$ -functors $\\mathbf {Syn}_G: &\\mathbf {Syn}_C&\\mathbf {Syn}_V\\\\&{\\tau } & \\mapsto \\left( {\\mathbf {1}\\rightarrow {\\tau }}\\right) \\\\& {t} & \\mapsto \\lambda \\langle \\,\\rangle .", "{{t}}$ $\\mathbf {Syn}_J: &\\mathbf {Syn}_V&\\mathbf {Syn}_C\\\\&{\\tau } & \\mapsto {\\tau }\\\\& v & \\mapsto \\mathbf {return}\\,v .$ We have that $\\mathbf {Syn}_J\\dashv \\mathbf {Syn}_G$ is a (Kleisli) $\\mathbf {Syn}_V$ -adjunction $\\mathbf {Syn}_J\\dashv \\mathbf {Syn}_G$ and, hence, denoting by $\\mathbf {Syn}_\\mathcal {S}$ the induced $\\mathbf {Syn}_V$ -monad, $\\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S}\\right) $ is a $CBV$ pair, as defined in .", "Moreover, considering the free recursion and free iteration $\\mathbf {Syn}_\\mathsf {it} : \\qquad &\\left({x}:{\\sigma }\\vdash ^c {t}:{\\sigma }\\,{\\mathop {\\sqcup }}\\,{\\tau }\\right) &\\mapsto \\lambda {y} .", "{ \\left( {\\mathbf {iterate}\\,{t}\\,\\mathbf {from}\\,{x}={y}} \\right) } \\\\\\mathbf {Syn}_\\mu :\\qquad &\\left({x}:{\\tau }\\vdash ^v v:{\\tau }\\right) &\\mapsto {\\mu {x}.v} \\quad ({\\tau }={\\sigma }\\rightarrow {\\rho }),$ we get the $CBV$ model $\\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right) $ which has the following universal property.", "[Universal Property of the Syntax] Let $\\left( {V}, \\mathcal {T}, \\mu , \\right) $ be a $CBV$ model with chosen finite products, coproducts and exponentials.", "For each consistent assignment $H() &\\in & \\mathsf {ob}\\, {V} \\\\H (\\underline{c}) &\\in & {V}\\left( \\mathsf {1} , H() \\right) \\\\H (\\mathrm {op} )&\\in &{C}\\left( H()^n , H() \\right) = {V}\\left( H()^n , TH() \\right) , \\mbox{ for each } \\mathrm {op} \\in \\mathrm {Op}_n \\\\H (\\mathbf {sign}\\,) & \\in &{C}\\left( H() , \\mathsf {1}\\sqcup \\mathsf {1} \\right) = {V}\\left( H() , T\\left( \\mathsf {1}\\sqcup \\mathsf {1}\\right) \\right) $ there is a unique $CBV$ model morphism $H$ between $\\left( \\mathbf {Syn}_V, \\mathbf {Syn}_\\mathcal {S},\\mathbf {Syn}_\\mu , \\mathbf {Syn}_\\mathsf {it} \\right)$ and $\\left( {V}, \\mathcal {T}, \\mu , \\right) $ respecting it.", "[Universal Property of the Syntax] Let $\\left( {V}, \\mathcal {T}, \\mu , 
 ], [ "A translation from coarse-grain CBV to fine-grain CBV", "This translation $(-)^\\dagger $ operates on types and contexts as the identity.", "It faithfully translates terms $\\Gamma \\vdash {t}:{\\tau }$ of coarse-grain CBV into computations $\\Gamma \\vdash ^c {t}^\\dagger : {\\tau }$ of fine-grain CBV.", "This translation illustrates the main difference between coarse-grain and fine-grain CBV: in coarse-grain CBV, values are a subset of computations, while fine-grain CBV is more explicit in keeping values and computations separate.", "This makes it slightly cleaner to formulate an equational theory, denotational semantics, and logical relations arguments.", "We list the translation $(-)^\\dagger $ below where all newly introduced variables are chosen to be fresh.", "$\\begin{array}{l|l}\\textnormal {coarse-grain CBV computation } {t} & \\textnormal {fine-grain CBV translation } {t}^\\dagger \\\\\\hline {x} & \\mathbf {return}\\,{x} \\\\\\mathbf {let}\\,{x}=\\,{t}\\,\\mathbf {in}\\,{s} & {t}^\\dagger \\,\\mathbf {to}\\,{x}.\\,{s}^\\dagger \\\\\\underline{c} & \\mathbf {return}\\,\\underline{c} \\\\\\mathbf {inl}\\,{t} & {t}^\\dagger \\,\\mathbf {to}\\,{x}.\\,\\mathbf {return}\\,\\mathbf {inl}\\,{x}\\\\\\mathbf {inr}\\,{t} & {t}^\\dagger \\,\\mathbf {to}\\,{x}.\\,\\mathbf {return}\\,\\mathbf {inr}\\,{x}\\\\\\langle \\,\\rangle & \\mathbf {return}\\,\\langle \\,\\rangle \\\\\\langle {t}, {s}\\rangle & {t}^\\dagger \\,\\mathbf {to}\\,{x}.\\,{s}^\\dagger \\,\\mathbf {to}\\,{y}.\\,\\mathbf {return}\\,\\langle {x}, {y}\\rangle \\\\\\lambda {x}.{{t}} & \\mathbf {return}\\,\\lambda {x}.{{t}^\\dagger }\\\\\\mathrm {op} ({t}_1,\\ldots ,{t}_n) & {t}_1^\\dagger \\,\\mathbf {to}\\,{x}_1.\\,\\ldots {t}_n^\\dagger \\,\\mathbf {to}\\,{x}_n.\\,\\mathrm {op} ({x}_1,\\ldots ,{x}_n)\\\\\\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\lbrace \\,\\rbrace & {t}^\\dagger \\,\\mathbf {to}\\,{x}.\\,\\mathbf {case}\\,{x}\\,\\mathbf {of}\\,\\lbrace \\,\\rbrace \\\\\\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\lbrace \\mathbf {inl}\\,{x}\\rightarrow {s}\\, \\mid \\mathbf {inr}\\,{y}\\rightarrow {r}\\rbrace & {t}^\\dagger \\,\\mathbf {to}\\,{z}.\\,\\mathbf {case}\\,{z}\\,\\mathbf {of}\\,\\lbrace \\mathbf {inl}\\,{x}\\rightarrow {s}^\\dagger \\, \\mid \\mathbf {inr}\\,{y}\\rightarrow {r}^\\dagger \\rbrace \\\\\\mathbf {case}\\,{t}\\,\\mathbf {of}\\,\\langle {x}, {y}\\rangle \\rightarrow {s} & {t}^\\dagger \\,\\mathbf {to}\\,{z}.\\,\\mathbf {case}\\,{z}\\,\\mathbf {of}\\,\\langle {x}, {y}\\rangle \\rightarrow {s}^\\dagger \\\\{t}\\,{s} & {t}^\\dagger \\,\\mathbf {to}\\,{x}.\\,{s}^\\dagger \\,\\mathbf {to}\\,{y}.\\,{x}\\,{y}\\\\\\mathbf {iterate}\\,{t}\\,\\mathbf {from}\\,{x}={s} & {s}^\\dagger \\,\\mathbf {to}\\,{y}.\\,\\mathbf 
{iterate}\\,{t}^\\dagger \\,\\mathbf {from}\\,{x}={y}\\\\\\mathbf {sign}\\,\\,{t} & {t}^\\dagger \\,\\mathbf {to}\\,{x}.\\,\\mathbf {sign}\\,\\,{x}\\\\\\mu {z}.{t} & \\mu {z}.\\lambda {x}.{ {t}^\\dagger \\,\\mathbf {to}\\,{y}.\\,{y}{x} }\\end{array}$" ], [ "Dual numbers forward AD transformation", "As before, we fix, for all $n\\in \\mathbb {N}$ , for all $\\mathrm {op} \\in \\mathrm {Op}_n$ , for all $1\\le i \\le n$ , computations ${x}_1:,\\ldots ,{x}_n:\\vdash ^c \\partial _i\\mathrm {op} ({x}_1,\\ldots ,{x}_n):$ , which represent the partial derivatives of $\\mathrm {op} $ .", "Using these terms for representing partial derivatives, we define, in Fig. REF , a structure preserving macro $\\scalebox {0.8}{\\mathcal {D}}_{}$ on the types, values, and computations of our language for performing forward-mode AD.", "We observe that this induces the following AD rule for our sugar: $\\scalebox {0.8}{\\mathcal {D}}_{}{}_{{C}}(\\mathbf {if}\\,v\\,\\mathbf {then}\\,{t}\\,\\mathbf {else}\\,{s}\\,)=\\mathbf {case}\\,\\scalebox {0.8}{\\mathcal {D}}_{}{}_{{V}}(v)\\,\\mathbf {of}\\,\\langle {x}, \\_\\rangle \\rightarrow \\mathbf {if}\\,{x}\\,\\mathbf {then}\\,\\scalebox {0.8}{\\mathcal {D}}_{}{}_{{C}}({t})\\,\\mathbf {else}\\,\\scalebox {0.8}{\\mathcal {D}}_{}{}_{{C}}({s})\\,$ .", "Figure: A forward-mode AD macro defined on types as $\\scalebox {0.8}{\\mathcal {D}}_{}(-)$ , values as $\\scalebox {0.8}{\\mathcal {D}}_{}{}_{{V}}(-)$ , and computations as $\\scalebox {0.8}{\\mathcal {D}}_{}{}_{{C}}(-)$ . All newly introduced variables are chosen to be fresh.", "In fact, by the universal property of $\\mathbf {Syn}_J$ , $\\scalebox {0.8}{\\mathcal {D}}_{}$ is the unique structure preserving macro on our language that has the right definition for constants, primitive operations and $\\mathbf {sign}\\,$ .", "It automatically follows that $\\scalebox {0.8}{\\mathcal {D}}_{}$ respects $\\beta \\eta $ -equality.", "Under the translation of coarse-grain CBV into fine-grain CBV, this code transformation induces precisely that of §."
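, "As a concrete, hedged illustration of what this macro does on the ground type of reals, the following is a small executable Haskell model of ours (not the formal definition), with the tangent space concretized to a single Double (i.e. k = 1); dualOp2 shows how the fixed partial-derivative terms $\\partial _i\\mathrm {op} $ enter via the chain rule.", "type Dual = (Double, Double)  -- D(real) = real x vect, with vect ~ R here

dualConst :: Double -> Dual   -- constants carry a zero tangent
dualConst c = (c, 0)

-- a binary primitive op together with its two partial derivatives
dualOp2 :: (Double -> Double -> Double)   -- the operation itself
        -> (Double -> Double -> Double)   -- its partial derivative in x1
        -> (Double -> Double -> Double)   -- its partial derivative in x2
        -> Dual -> Dual -> Dual
dualOp2 op d1 d2 (x, dx) (y, dy) = (op x y, d1 x y * dx + d2 x y * dy)

dualMul :: Dual -> Dual -> Dual
dualMul = dualOp2 (*) (\\_ y -> y) (\\x _ -> x)  -- partial derivatives of (*)"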
], [ "A more efficient derivative for $\\mathbf {sign}\\,{}$", "We can define by mutual induction (for both $\\scalebox {0.8}{{\\mathcal {D}}}_{}=\\scalebox {0.8}{\\mathcal {D}}_{},\\scalebox {0.8}{\\overleftarrow{\\mathcal {D}}}_{k}$ ) $x : \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) &\\vdash \\mathbf {p}_{{\\tau }}(x) : {\\tau }\\\\x : \\scalebox {0.8}{{\\mathcal {D}}}_{}() &\\vdash \\mathbf {fst}\\,(x) : \\\\x : \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) \\,{\\mathop {\\times }}\\, \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\sigma }) &\\vdash \\langle \\mathbf {p}_{{\\tau }}(\\mathbf {fst}\\,x), \\mathbf {p}_{{\\sigma }}(\\mathbf {snd}\\,x)\\rangle : {\\tau } \\,{\\mathop {\\times }}\\, {\\sigma } \\\\x : \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) \\,{\\mathop {\\sqcup }}\\, \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\sigma }) &\\vdash \\mathbf {case}\\,x\\,\\mathbf {of}\\,\\lbrace \\mathbf {inl}\\,y\\rightarrow \\mathbf {inl}\\,\\mathbf {p}_{{\\tau }}(y) \\mid \\mathbf {inr}\\,z \\rightarrow \\mathbf {inr}\\,\\mathbf {p}_{{\\sigma }}(z)\\rbrace : {\\tau } \\,{\\mathop {\\sqcup }}\\, {\\sigma }\\\\x : \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) \\rightarrow \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\sigma }) &\\vdash \\lambda y.", "{\\mathbf {p}_{{\\sigma }}(x(\\mathbf {z}_{{\\tau }}(y)))} : {\\tau } \\rightarrow {\\sigma }\\\\x : \\mathbf {\\mu }{\\alpha }.\\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) &\\vdash \\mathbf {case}\\,x\\,\\mathbf {of}\\,\\mathbf {roll}\\,y\\rightarrow \\mathbf {roll}\\,\\mathbf {p}_{{\\tau }}(x) : \\mathbf {\\mu }{\\alpha }.", "{\\tau }\\\\x : {\\alpha } &\\vdash x : {\\alpha }$ and $x : {\\tau } &\\vdash \\mathbf {z}_{{\\tau }}(x) : \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) \\\\x : &\\vdash \\langle x, \\underline{0}\\rangle : \\scalebox {0.8}{{\\mathcal {D}}}_{}() \\\\x : {\\tau } \\,{\\mathop {\\times }}\\, {\\sigma } &\\vdash \\langle \\mathbf {z}_{{\\tau }}(\\mathbf {fst}\\,x), \\mathbf {z}_{{\\sigma }}(\\mathbf {snd}\\,x)\\rangle : \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) \\,{\\mathop {\\times }}\\, \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\sigma })\\\\x : {\\tau } \\,{\\mathop {\\sqcup }}\\, {\\sigma } &\\vdash \\mathbf {case}\\,x\\,\\mathbf {of}\\,\\lbrace \\mathbf {inl}\\,y \\rightarrow \\mathbf {inl}\\,\\mathbf {z}_{{\\tau }}(y) \\mid \\mathbf {inr}\\,z \\rightarrow \\mathbf {inr}\\,\\mathbf {z}_{{\\sigma }}(z)\\rbrace : \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) \\,{\\mathop {\\sqcup }}\\, \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\sigma })\\\\x : {\\tau } \\rightarrow {\\sigma } &\\vdash \\lambda y.", "{\\mathbf {z}_{{\\sigma }}(x(\\mathbf {p}_{{\\tau }}(y)))} : \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau }) \\rightarrow \\scalebox {0.8}{{\\mathcal {D}}}_{}({\\sigma }) \\\\x : \\mathbf {\\mu }{\\alpha }.", "{\\tau } &\\vdash \\mathbf {case}\\,x\\,\\mathbf {of}\\,\\mathbf {roll}\\,y\\rightarrow \\mathbf {roll}\\,\\mathbf {z}_{{\\tau }}(x) : \\mathbf {\\mu }{\\alpha }.\\scalebox {0.8}{{\\mathcal {D}}}_{}({\\tau })\\\\x : {\\alpha } &\\vdash x : {\\alpha }.$ Then, observe that, for any ${x}_1:{\\tau }_1,\\ldots ,{x}_n:{\\tau }_n\\vdash {t}:$ , we have $[\\hspace{-2.5pt}[\\mathbf {sign}\\,{(\\mathbf {fst}\\,\\scalebox {0.8}{{\\mathcal {D}}}_{}({t}))}]\\hspace{-2.5pt}] =[\\hspace{-2.5pt}[\\mathbf {let}\\,x_1=\\,\\mathbf {p}_{{\\tau }_1}(x_1)\\,\\mathbf {in}\\,\\cdots \\mathbf {let}\\,x_n=\\,\\mathbf {p}_{{\\tau }_n}(x_n)\\,\\mathbf {in}\\,\\cdots {\\mathbf {sign}\\,{t}}]\\hspace{-2.5pt}] $ .", "Therefore, we can define, for ${x}_1:{\\tau 
}_1,\\ldots ,{x}_n:{\\tau }_n\\vdash {t}:$ , $\\scalebox {0.8}{{\\mathcal {D}}}_{}(\\mathbf {sign}\\,{t})\\stackrel{\\mathrm {def}}{=}\\mathbf {let}\\,x_1=\\,\\mathbf {p}_{{\\tau }_1}(x_1)\\,\\mathbf {in}\\,\\cdots \\mathbf {let}\\,x_n=\\,\\mathbf {p}_{{\\tau }_n}(x_n)\\,\\mathbf {in}\\,{\\mathbf {sign}\\,{t}}.$ This yields more efficient definitions of the forward and reverse derivatives of $\\mathbf {sign}\\,{}$ and $\\mathbf {if}\\,\\,\\mathbf {then}\\,\\,\\mathbf {else}\\,\\,$ as we do not need to differentiate ${t}$ at all." ], [ "Enriched scone", "We present straightforward generalizations (enriched versions) of the results presented in [26] below.", "Considering the $\\mathbf {\\omega Cpo}$ -category $\\mathsf {2}$ with two objects and only one non-trivial morphism between them, the $\\mathbf {\\omega Cpo}$ -category $\\mathsf {2}\\pitchfork {D}$ of morphisms of ${D}$ can be described as the $\\mathbf {\\omega Cpo}$ -category $\\mathbf {\\omega Cpo}\\textrm {-}\\mathsf {Cat}\\left[ \\mathsf {2} , {D} \\right] $ of $\\mathbf {\\omega Cpo}$ -functors $\\mathsf {2}\\rightarrow {D}$ .", "Explicitly, the objects of $\\mathsf {2}\\pitchfork {D}$ are morphisms $f: Y_0\\rightarrow Y_1 $ of ${D}$ .", "A morphism between $f$ and $g$ is a pair $\\alpha = \\left( \\alpha _0 , \\alpha _1\\right) : f\\rightarrow g $ such that $\\alpha _1 f = g\\alpha _0 $ , that is to say, a ($\\mathbf {\\omega Cpo}$ -)natural transformation.", "Finally, the $\\mathbf {\\omega Cpo}$ -structure is defined by $\\left( \\alpha _0 , \\alpha _1\\right) \\le \\left( \\beta _0 , \\beta _1\\right) $ if $\\alpha _0\\le \\beta _ 0 $ and $\\alpha _1\\le \\beta _ 1 $ in ${D}$ .", "Given an $\\mathbf {\\omega Cpo}$ -functor $G:{C}\\rightarrow {D}$ , the comma category ${D}\\downarrow G$ of the identity on ${D}$ along $G$ in ${\\mathbf {\\omega Cpo}}\\textrm {-}\\mathsf {Cat}$ is also known as the $\\mathbf {\\omega Cpo}$ -scone or Artin glueing of $G$ .", "It can be described as the pullback (REF ) in ${\\mathbf {\\omega Cpo}}\\textrm {-}\\mathsf {Cat}$ , in which $\\mathrm {codom}: \\mathsf {2}\\pitchfork {D}\\rightarrow {D}$ , defined by $\\left( \\alpha = \\left( \\alpha _0 , \\alpha _1\\right) : f \\rightarrow g\\right)\\mapsto \\alpha _1 $ , is the codomain $\\mathbf {\\omega Cpo}$ -functor.", "[Diag. (REF ) is the pullback square with top edge $\\mathsf {proj}_{2\\pitchfork {D}} : {D}\\downarrow G\\rightarrow \\mathsf {2}\\pitchfork {D}$ , left edge $\\mathsf {proj}_{{C}} : {D}\\downarrow G\\rightarrow {C}$ , right edge $\\mathrm {codom} : \\mathsf {2}\\pitchfork {D}\\rightarrow {D}$ , and bottom edge $G : {C}\\rightarrow {D}$ .]", "Since $\\mathrm {codom}$ is an isofibration, the pullback (REF ) is equivalent to the pseudo-pullback of $\\mathrm {codom}$ along $G$ , which is the $\\mathbf {\\omega Cpo}$ -category defined as follows.", "The objects of the pseudo-pullback are triples $\\left( \\left( f: Y_0\\rightarrow Y_1\\right)\\in \\mathsf {2}\\pitchfork {D}, C\\in {C}, \\xi : \\left(\\mathrm {codom}f\\right){\\cong } G(C) \\right) $ where $\\xi $ is an isomorphism in ${D}$ .", "A morphism $(f,C,\\xi ) \\rightarrow (f^{\\prime },C^{\\prime },\\xi ^{\\prime } )$ is a pair of morphisms $\\left( \\alpha : f\\rightarrow f^{\\prime }, h: C\\rightarrow C^{\\prime } \\right) $ such that $ G(h)\\circ \\xi = \\xi ^{\\prime }\\circ \\mathrm {codom}\\left( \\alpha \\right) $ .", "Finally, the $\\mathbf {\\omega Cpo}$ -structure of the homs are given pointwise.", "That is to say, $\\left( \\alpha , h \\right)\\le \\left( \\alpha ^{\\prime }
, h ^{\\prime } \\right) $ if $\\alpha \\le \\alpha ^{\\prime } $ in $\\mathsf {2}\\pitchfork {D}$ and $h\\le h ^{\\prime }$ in ${C}$ .", "The forgetful $\\mathbf {\\omega Cpo}$ -functor $\\mathcal {L}: {D}\\downarrow G\\rightarrow {D}\\times {C}$ , defined in (REF ), creates all absolute (weighted) limits and colimits.", "Clearly, the $\\mathbf {\\omega Cpo}$ -functor $\\mathcal {L}$ reflects isomorphisms.", "Let $D$ be a diagram in ${D}\\downarrow G$ such that the weighted (co)limit $(co) \\mathsf {lim}\\left( W , \\mathcal {L}D \\right) $ exists and is preserved by any $\\mathbf {\\omega Cpo}$ -functor.", "Since ${D}\\downarrow G$ is the pullback (REF ), there is a unique pair of diagrams $\\left( D_0, D_1 \\right) $ such that $\\mathsf {proj}_{2\\pitchfork {D}} \\circ D = D_0, \\quad \\mathsf {proj}_{{C}}\\circ D = D_1, \\quad \\mathrm {codom}\\circ D_0 = G\\circ D_1 ,$ hold.", "Since $\\mathrm {dom}\\circ D_0 = \\pi _{{D}}\\circ \\mathcal {L}\\circ D $ and $\\mathrm {codom}\\circ D_0 = G\\circ \\pi _{{C}}\\circ \\mathcal {L}\\circ D $ , we get that $(co) \\mathsf {lim}\\left( W , \\mathrm {dom}\\circ D_0 \\right) \\cong \\pi _{{D}}\\left( (co) \\mathsf {lim}\\left( W , \\mathcal {L}\\circ D \\right) \\right) $ and $ (co) \\mathsf {lim}\\left( W , \\mathrm {codom}\\circ D_0 \\right) \\cong G\\circ \\pi _{{C}}\\left( (co) \\mathsf {lim}\\left( W , \\mathcal {L}\\circ D \\right) \\right) $ .", "Therefore, $(co) \\mathsf {lim}\\left( W , \\mathcal {L}\\circ D_0 \\right) $ exists in $\\mathsf {2}\\pitchfork {D}$ , pointwise constructed out of $(co) \\mathsf {lim}\\left( W , \\mathrm {dom}\\circ D_0 \\right) $ and $(co) \\mathsf {lim}\\left( W , \\mathrm {codom}\\circ D_0 \\right) $ .", "Moreover, since $D_1 = \\pi _{{C}}\\circ \\mathcal {L}\\circ D $ , we have that $(co) \\mathsf {lim}\\left( W , D_1 \\right) \\cong \\pi _{{C}}\\left( (co) \\mathsf {lim}\\left( W , \\mathcal {L}\\circ D \\right) \\right) $ .", "Therefore, the isomorphism $\\xi $ given by $\\mathrm {codom}\\left( (co) \\mathsf {lim}\\left( W , D_0 \\right) \\right) \\cong (co) \\mathsf {lim}\\left( W , \\mathrm {codom}\\circ D_0 \\right) \\cong G\\circ \\pi _{{C}}\\left( (co) \\mathsf {lim}\\left( W , \\mathcal {L}\\circ D \\right) \\right) \\cong G \\left( (co) \\mathsf {lim}\\left( W , D_1 \\right) \\right)$ together with the pair $\\left( (co) \\mathsf {lim}\\left( W , D_0 \\right) , (co) \\mathsf {lim}\\left( W , D_1 \\right) \\right) $ defines, up to isomorphism, an object of ${D}\\downarrow G$ , which satisfies the universal property for $(co) \\mathsf {lim}\\left( W , D \\right) = (co) \\mathsf {lim}\\left( W , \\left( D_0, D_1\\right) \\right) $ .", "Moreover, by the construction above, we conclude that $(co) \\mathsf {lim}\\left( W , D \\right) $ is preserved by $\\mathcal {L}$ .", "In particular: $ \\mathcal {L}\\left( (co) \\mathsf {lim}\\left( W , D_0 \\right) , (co) \\mathsf {lim}\\left( W , D_1 \\right) , \\xi \\right) =\\left( (co) \\mathsf {lim}\\left( W , \\mathrm {dom}\\circ D_0 \\right) , (co) \\mathsf {lim}\\left( W , D_1 \\right) \\right) .$", "The above completes the proof that the $\\mathbf {\\omega Cpo}$ -functor $\\mathcal {L}$ creates $(co) \\mathsf {lim}\\left( W , D \\right) $ .", "The $\\mathbf {\\omega Cpo}$ -functor $\\mathcal {L}$ has a right $\\mathbf {\\omega Cpo}$ -adjoint provided that ${D}$ has binary $\\mathbf {\\omega Cpo}$ -products.", "It is given by $\\left( D\\in {D}, C\\in {C}\\right)\\mapsto \\left( D\\times G\\left( C\\right), C, \\pi _{G(C)} \\right) $ .", "Therefore: The forgetful 
$\\mathbf {\\omega Cpo}$ -functor $\\mathcal {L}: {D}\\downarrow G\\rightarrow {D}\\times {C}$ is $\\mathbf {\\omega Cpo}$ -comonadic provided that ${D}$ has binary $\\mathbf {\\omega Cpo}$ -products.", "By duality, we get that the forgetful $\\mathbf {\\omega Cpo}$ -functor $F \\downarrow {C}\\rightarrow {D}\\times {C}$ is $\\mathbf {\\omega Cpo}$ -monadic provided that ${C}$ has finite $\\mathbf {\\omega Cpo}$ -coproducts.", "Therefore: The forgetful $\\mathbf {\\omega Cpo}$ -functor $\\mathcal {L}: {D}\\downarrow G\\rightarrow {D}\\times {C}$ is $\\mathbf {\\omega Cpo}$ -monadic whenever $G$ has a left $\\mathbf {\\omega Cpo}$ -adjoint and ${C}$ has finite $\\mathbf {\\omega Cpo}$ -coproducts.", "Indeed, by the $\\mathbf {\\omega Cpo}$ -adjunction $F\\dashv G $ , we get an isomorphism ${D}\\downarrow G\\cong F \\downarrow {C}$ which composed with the forgetful $\\mathbf {\\omega Cpo}$ -functor $ F \\downarrow {C}\\rightarrow {D}\\times {C}$ is equal to $\\mathcal {L}: {D}\\downarrow G\\rightarrow {D}\\times {C}$ .", "As a consequence, we conclude that: Let $G: {C}\\rightarrow {D}$ be a right $\\mathbf {\\omega Cpo}$ -adjoint functor between $\\mathbf {\\omega Cpo}$ -bicartesian closed categories.", "We have that the forgetful $\\mathbf {\\omega Cpo}$ -functor $\\mathcal {L}$ is $\\mathbf {\\omega Cpo}$ -monadic and comonadic.", "In particular, ${D}\\downarrow G$ is an $\\mathbf {\\omega Cpo}$ -bicartesian closed category." ], [ "Some Haskell Code for a Recursive Neural Network", "-- example implementation of https://icml.cc/2011/papers/125_icmlpaper.pdf

-- Some of the basic datatypes we use -- we elide the implementation of some
data Tree a
  = Leaf a
  | Node (Tree a) (Tree a)
  deriving (Eq) -- \\mu b. a + (b x b), leaf a = roll (iota_1 a), node l r = roll (iota_2 (l, r))

data Vector
data Matrix
type Scalar = Double -- concretized as Double so that (+) and (>) are available

type ActivationVectors = [Vector]
type AdjacencyMatrix = [(Tree Int, Tree Int)]

-- Some basic data and operations that we need for the RNN
-- Again, we elide much of the implementation as it is beside the point of this example
f :: Vector -> Vector -- some non-linear function, usually elementwise applied sigmoid function
f = undefined

conc :: Vector -> Vector -> Vector -- concatenate vectors
conc = undefined

mult :: Matrix -> Vector -> Vector -- matrix vector multiplication
mult = undefined

add :: Vector -> Vector -> Vector -- elementwise vector addition
add = undefined

innerprod :: Vector -> Vector -> Scalar -- vector inner product
innerprod = undefined

a :: ActivationVectors
a = undefined -- input (for example, sequence of words as vectors or image segments as vectors)

adjMat :: AdjacencyMatrix
-- start with matrix that only stores (Leaf i, Leaf j) pairs in case i is a neighbour of j;
-- we later extend adjacency to parent nodes
adjMat = undefined -- input (specify which words/image segments are neighbours)

w :: Matrix
w = undefined -- parameter to learn: weights

b :: Vector
b = undefined -- parameter to learn: biases

wScore :: Vector
wScore = undefined -- parameter to learn: scoring vector

-- The implementation of the RNN
-- version 1, without caching
modelH ((w, b, wScore), (adjMat, globalScore)) =
  let getNode (Leaf i) = a !! i
      getNode (Node l r) = f (w `mult` conc (getNode l) (getNode r)) `add` b
      -- compute scores for all parent nodes of neighbours;
      -- super inefficient without caching getNode, but conceptually cleaner
      parentsScores =
        map (\\i -> (i, innerprod wScore (getNode (uncurry Node i)))) adjMat
      -- find the neighbours that have the highest score
      ((bp1, bp2), bestScore) =
        foldl
          (\\(i, s) (i', s') -> if s > s' then (i, s) else (i', s'))
          (head parentsScores)
          parentsScores
      -- add the local contribution of our chosen neighbour pair to the global score
      globalScore2 = globalScore + bestScore
      -- actually compute our favourite parent;
      -- I guess we'd already done this before but it's cheap to redo
      bestPar = Node bp1 bp2
      mergeParH i
        | i == bp1 || i == bp2 = bestPar
        | otherwise = i
      mergePar (i, j) = (mergeParH i, mergeParH j)
      -- replace bp1 and bp2 with bestPar in the adjacency matrix,
      -- but we have a convention that nodes are not neighbours of themselves
      adjMat2 = filter (/= (bestPar, bestPar)) [mergePar (i, j) | (i, j) <- adjMat]
      -- if we run out of neighbours that can be merged, we are done;
      -- otherwise iterate with new adjacency matrix and score
   in if null adjMat2
        then Right globalScore2
        else Left (adjMat2, globalScore2)

it :: ((c, a) -> Either a b) -> (c, a) -> b -- functional iteration
it f (c, a) =
  case f (c, a) of
    Left a' -> it f (c, a')
    Right b -> b

model :: ((Matrix, Vector, Vector), (AdjacencyMatrix, Scalar)) -> Scalar
model = it modelH

-- The implementation of the RNN
-- version 2, with caching of getNode
modelH2 ((w, b, wScore), (adjMat, values, globalScore)) =
  let getNode (Leaf i) = look (Leaf i) values
      getNode (Node l r) =
        let lv = look l values
            rv = look r values
         in f (w `mult` conc lv rv) `add` b
      parentsValScores =
        map
          (\\i -> let v = getNode (uncurry Node i)
                  in (i, v, innerprod wScore v))
          adjMat
      ((bp1, bp2), bestVal, bestScore) =
        foldl
          (\\(i, v, s) (i', v', s') -> if s > s' then (i, v, s) else (i', v', s'))
          (head parentsValScores)
          parentsValScores
      globalScore2 = globalScore + bestScore
      bestPar = Node bp1 bp2
      mergeParH i
        | i == bp1 || i == bp2 = bestPar
        | otherwise = i
      mergePar (i, j) = (mergeParH i, mergeParH j)
      adjMat2 = filter (/= (bestPar, bestPar)) [mergePar (i, j) | (i, j) <- adjMat]
   in if null adjMat2
        then Right globalScore2
        else Left (adjMat2, (bestPar, bestVal) : values, globalScore2)

-- initial values will be zip (map Leaf [0..]) a
look :: Tree Int -> [(Tree Int, b)] -> b -- a map operation for looking up cache
look k m =
  case lookup k m of
    Just x -> x -- partial: assumes every queried node is in the cache

model2 :: ((Matrix, Vector, Vector), (AdjacencyMatrix, Scalar)) -> Scalar
model2 ((w, b, wScore), (adjMat, globalScore)) =
  it modelH2 ((w, b, wScore), (adjMat, zip (map Leaf [0 ..]) a, globalScore))"
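, "As a closing usage sketch of ours (hypothetical, since the inputs are elided above): once a, adjMat, w, b and wScore are instantiated, the greedy parsers are run by supplying an initial global score of 0.", "main :: IO ()
main = do
  print (model  ((w, b, wScore), (adjMat, 0))) -- version 1, no caching
  print (model2 ((w, b, wScore), (adjMat, 0))) -- version 2, cached getNode" ] ]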
2210.07724
[ [ "Contrastive Audio-Visual Masked Autoencoder" ], [ "Abstract In this paper, we first extend the recent Masked Auto-Encoder (MAE) model from a single modality to audio-visual multi-modalities.", "Subsequently, we propose the Contrastive Audio-Visual Masked Auto-Encoder (CAV-MAE) by combining contrastive learning and masked data modeling, two major self-supervised learning frameworks, to learn a joint and coordinated audio-visual representation.", "Our experiments show that the contrastive audio-visual correspondence learning objective not only enables the model to perform audio-visual retrieval tasks, but also helps the model learn a better joint representation.", "As a result, our fully self-supervised pretrained CAV-MAE achieves a new SOTA accuracy of 65.9% on VGGSound, and is comparable with the previous best supervised pretrained model on AudioSet in the audio-visual event classification task." ], [ "Introduction", "Acoustic and visual modalities have different properties, yet humans are able to seamlessly connect and integrate them to perceive the world.", "Developing learning algorithms to replicate these abilities, especially for multi-modal audio-visual fusion and retrieval is of great interest.", "Since manually annotating audio and video is expensive and difficult to scale, how to utilize web-scale unlabeled video data in a self-supervised manner has become a core research question.", "One major line of audio-visual self-supervised learning research is leveraging the natural audio-visual correspondences found in videos.", "Among numerous ways to use such correspondences, Contrastive Audio-Visual Learning has shown to be a simple yet effective approach [3], [33], [41].", "It learns coordinated Multi-modal representations can be divided into two categories: joint representations that combine the unimodal signals into the same representation space, and coordinated representations that process unimodal signals separately, but enforce certain similarity constraints on them.", "[7] representations that are closer for paired audio and visual samples than for mismatched samples.", "Such coordinated representations are particularly useful for tasks such as cross-modal retrieval.", "Another vetted commonly used self-supervised learning framework is Masked Data Modeling (MDM), which learns a meaningful representation with the pretext task of recovering the original inputs or features from the corrupted ones [13].", "Particularly, based on the Audio Spectrogram Transformer [20] and Vision Transformer [14] backbones, the single-modal Masked Auto-Encoder (MAE) [23] achieved state-of-the-art (SOTA) performance on images and audio tasks [48] individually.", "Inspired by these advances, we propose to extend the single-modal MAE to Audio-Visual Masked Auto-Encoder (AV-MAE).", "By allowing the model to reconstruct one modality based on the information of another modality, AV-MAE learns a joint representation that fuses the unimodal signals.", "Although these two major self-supervised frameworks have been widely used individually, to the best of our knowledge, they have never been combined in audio-visual learning.", "In fact, we find they are complementary: Contrastive audio-visual learning explicitly leverages the very useful audio-visual pair information, but it could discard modality-unique information that is useful in downstream tasks; The reconstruction task of AV-MAE forces its representation to encode the majority of the input information in the fusion, but it lacks an explicit 
audio-visual correspondence objective.", "This motivates us to design the Contrastive Audio-Visual Masked Autoencoder (CAV-MAE), which integrates contrastive learning and masked data modeling and learns a joint and coordinated audio-visual representation with a single model.", "Our experiments support our design: on audio-visual event classification, CAV-MAE significantly outperforms baseline models trained with only contrastive or masked data modeling objectives, demonstrating that the two objectives are complementary in learning a strong joint audio-visual representation.", "As a result, CAV-MAE achieves a new SOTA accuracy of 65.9% on VGGSound, and is comparable with the previous best supervised pretrained model on AudioSet.", "Moreover, when it comes to audio-visual retrieval, CAV-MAE also performs equally well or even better than models trained with only the contrastive objective, which demonstrates that CAV-MAE can learn both a joint and a coordinated representation well.", "Finally, CAV-MAE multi-modal pretraining improves single-modal performance; consequently, CAV-MAE achieves a new SOTA for audio-based event classification on AudioSet-20K and VGGSound.", "In summary, our contributions are: (1) we extend the single-modal MAE to the multi-modal AV-MAE, which fuses audio-visual inputs for self-supervised learning through cross-modal masked data modeling; (2) more importantly, we investigate how to best combine contrastive audio-visual learning with masked data modeling and propose CAV-MAE; (3) we demonstrate that contrastive and masked data modeling objectives are complementary.", "As a result, CAV-MAE matches or outperforms SOTA models on audio-visual classification.", "We will release the code upon acceptance." ], [ "Audio and Image Pre-processing and Tokenization", "As depicted in Figure REF (A), we follow the pre-processing and tokenization of AST [20] and ViT [14] for audio and image inputs, respectively.", "Specifically, we use 10-second videos (with parallel audio) in AudioSet [17] and VGGSound [9] to pretrain and fine-tune the model.", "For audio, each 10-second audio waveform is first converted to a sequence of 128-dimensional log Mel filterbank (fbank) features computed with a 25ms Hanning window every 10ms.", "This results in a $1024(\\text{time})\\times 128(\\text{frequency})$ spectrogram.", "We then split the spectrogram into 512 $16\\times 16$ square patches $\\mathbf {a}=[a^1, ..., a^{512}]$ as the input of the model.", "Processing video with Transformer models is expensive and typically requires industrial-level computation resources.", "To lower the computational overhead and fit our resources, we use a frame aggregation strategy.", "Specifically, we uniformly sample 10 RGB frames from each 10-second video (i.e., 1 FPS).", "During training, we randomly select one RGB frame as the input; during inference, we average the model predictions of the individual RGB frames as the video prediction.", "Compared with concatenating multiple RGB frames as the input of the Transformer, which has a quadratic complexity (e.g., in [34]), frame aggregation is much more efficient with a linear complexity in time, at the cost of not considering inter-frame correlation.", "For each RGB frame, we resize and center crop it to $224\\times 224$ , and then split it into 196 $16\\times 16$ square patches $\\mathbf {v}=[v^1, ..., v^{196}]$ ."
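, "To make these shapes concrete, the following minimal illustrative sketch in Python (assuming the torchaudio library; patchify, tokenize_audio, and tokenize_frame are illustrative names, not from the released code) reproduces the stated dimensions: a 10-second waveform becomes a $1024\\times 128$ fbank spectrogram and 512 audio patches, and a $224\\times 224$ RGB frame becomes 196 patches.", "
import torch
import torchaudio

def patchify(x, p=16):
    # split a (H, W) tensor into non-overlapping p x p patches, row-major
    h, w = x.shape[0] // p, x.shape[1] // p
    return x.reshape(h, p, w, p).permute(0, 2, 1, 3).reshape(h * w, p * p)

def tokenize_audio(waveform, sr=16000):
    # 128-dim log Mel filterbank, 25 ms Hanning window, 10 ms shift
    fbank = torchaudio.compliance.kaldi.fbank(
        waveform, sample_frequency=sr, num_mel_bins=128,
        frame_length=25.0, frame_shift=10.0, window_type='hanning')
    # a 10 s clip yields roughly 998 frames; pad up to 1024 time frames
    fbank = torch.nn.functional.pad(fbank, (0, 0, 0, 1024 - fbank.shape[0]))
    return patchify(fbank)               # (512, 256) audio tokens

def tokenize_frame(img):
    # img: (3, 224, 224) RGB tensor; patchify per channel and concatenate
    # (the exact patch flattening is immaterial, since a learned linear
    # projection to R^768 follows)
    return torch.cat([patchify(c) for c in img], dim=1)   # (196, 768)
"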
], [ "The Transformer Architecture", "Throughout this paper, we use the standard Transformer [43] as our main model component.", "Each Transformer layer consists of multi-headed self-attention (MSA), layer normalization (LN) , and multilayer perceptron (MLP) blocks with residual connections.", "Specifically, we denote a Transformer layer $\\mathbf {y}=\\mathrm {Transformer}(\\mathbf {x};\\mathrm {MSA},\\mathrm {LN1},\\mathrm {LN2},\\mathrm {MLP})$ as: $\\mathbf {x^\\prime } = \\mathrm {MSA}(\\mathrm {LN_1}(\\mathbf {x})) + \\mathbf {x} ; \\quad \\mathbf {y} = \\mathrm {MLP}(\\mathrm {LN_2}(\\mathbf {x^\\prime })) + \\mathbf {x^\\prime }$ where $\\mathrm {MSA}$ computes dot-product attention of each element of $\\mathbf {x}$ and thus has a quadratic complexity w.r.t.", "to the size of $\\mathbf {x}$ .", "Please refer to [43] for further details on Transformers." ], [ "Contrastive Audio-visual Learning (CAV)", "The natural pairing of audio and visual information in videos is a useful signal for learning audio-visual representations through self-supervision.", "A conventional CAV model is shown in Figure REF .B (top), for a mini-batch of $N$ audio-visual pair samples, we first pre-process and tokenize the audios and images and get a sequence of audio and visual tokens $\\lbrace \\mathbf {a}_i,\\mathbf {v}_i\\rbrace $ for each sample $i$ .", "We then input $\\mathbf {a}_i$ and $\\mathbf {v}_i$ to independent audio and visual Transformer encoders $\\mathrm {E_a}(\\cdot )$ and $\\mathrm {E_{v}}(\\cdot )$ , respectively, and get the mean pooled audio and visual representation $c^a_i$ and $c^v_i$ , i.e., $c_i^a = \\mathrm {Mean Pool}(\\mathrm {E_a}(\\mathrm {Proj_a}(\\mathbf {a}_i))$ and $c_i^v = \\mathrm {Mean Pool}(\\mathrm {E_v}(\\mathrm {Proj_v}(\\mathbf {v}_i))$ , where $\\mathrm {Proj_a}$ and $\\mathrm {Proj_v}$ are linear projections that maps each audio and visual token to $\\mathbb {R}^{768}$ .", "We then apply a contrastive loss (Equation REF ) on $c_i^a$ and $c_i^a$ .", "Figure: An illustration of our method.", "A) We tokenize audio spectrograms and RGB images into 16×\\times 16 square patches and use them as the input to all models.", "B) Conventional contrastive audio-visual learning model (top) and vanilla audio-visual masked auto-encoder (bottom, also novel and first introduced in this paper).", "C) Our proposed contrastive audio-visual masked auto-encoder (CAV-MAE) model.", "CAV-MAE integrates two major self-supervised frameworks: contrastive audio-visual learning and cross-modal masked data modeling, which learns a joint and coordinate representations and performs well on both multi-modal joint classification tasks and cross-modal retrieval tasks." 
], [ "Single Modality Masked Autoencoder (MAE)", "Another line of major self-supervised frameworks is masked data modeling (MDM).", "Among numerous variants of MDM (e.g., [8], [47]), the masked auto-encoder (MAE) is a simple yet effective approach.", "For an input sample $\\mathbf {x}$ that can be tokenized as $\\mathbf {x}=[x^1, x^2, ..., x^{n}]$ , MAE masks a portion of the input $\\mathbf {x}_\\textrm {mask}$ and only inputs the unmasked tokens $\\mathbf {x}\\setminus \\mathbf {x}_\\textrm {mask}$ to a Transformer based encoder-decoder model.", "The model is asked to reconstruct the masked tokens with the goal of minimizing the mean square error (MSE) loss.", "During this process, the model learns a meaningful representation of the input data.", "The advantages of MAE are multifold.", "First, MAE directly uses the original input as the prediction target, which greatly simplifies the training pipeline.", "Second, MAE only inputs unmaksed tokens to the encoder, and combined with a high masking ratio, MAE noticeably lowers the computational overhead.", "Third, MAE demonstrated strong performance in single-modal tasks for both audio and visual modality.", "Due to the space limitation, please refer to [23], [48] for single-modal MAEs." ], [ "Vanilla Audio-Visual Masked Autoencoder (AV-MAE)", "While MAE has been applied to both audio and visual modality individually, it has never been applied to audio-visual multi-modality learning.", "As the first contribution of this work, we extend MAE from a single modality to audio-visual multi-modality and build a “vanilla” audio-visual autoencoder (AV-MAE).", "As shown in Figure REF .B (bottom), for a pair of audio and image inputs, we first tokenize them to $\\mathbf {a}=[a^1, ..., a^{512}]$ and $\\mathbf {v}=[v^1, ..., v^{196}]$ and project them to $\\mathbb {R}^{768}$ with two modal-specific linear projection layer as well as add a modality type embedding $\\mathbf {E_a}$ and $\\mathbf {E_v}$ and modality specific 2-D sinusoidal positional embedding $\\mathbf {E^p_a}$ and $\\mathbf {E^p_v}$ , i.e., $\\mathbf {a^\\prime } = \\mathrm {Proj_a}(\\mathbf {a}) + \\mathbf {E_a} + \\mathbf {E^p_a}$ and $\\mathbf {v^\\prime } = \\mathrm {Proj_v}(\\mathbf {v}) + \\mathbf {E_v} + \\mathbf {E^p_v}$ .", "We concatenate $\\mathbf {a^\\prime }$ and $\\mathbf {v^\\prime }$ and construct a joint embedding $\\mathbf {x}=[\\mathbf {a^\\prime }, \\mathbf {v^\\prime }]$ .", "We then mask a portion (75%) of $\\mathbf {x}$ and only input unmasked tokens $\\mathbf {x}_\\textrm {unmask}=\\mathbf {x}\\setminus \\mathbf {x}_\\textrm {mask}$ to an audio-visual joint encoder $\\mathrm {E_j}(\\cdot )$ and get the output $\\mathbf {x^\\prime _\\textrm {unmask}}$ .", "After that, we pad $\\mathbf {x^\\prime _\\textrm {unmask}}$ with trainable masked tokens at their original position as $\\mathbf {x^\\prime }$ .", "Again, we also add modality type embedding $\\mathbf {E_a^\\prime }$ and $\\mathbf {E_v^\\prime }$ and modality-specific 2-D sinusoidal positional embedding $\\mathbf {E^p_a}^\\prime $ and $\\mathbf {E^p_v}^\\prime $ before feeding $\\mathbf {x^\\prime }$ to a joint audio-visual decoder $\\mathrm {D_j}(\\cdot )$ to reconstruct the input, i.e., $\\hat{\\mathbf {a}}, \\hat{\\mathbf {v}} = \\mathrm {D_j}(\\mathbf {x^\\prime } + [\\mathbf {E_a^\\prime }, \\mathbf {E_v^\\prime }] + [\\mathbf {E^p_a}^\\prime , \\mathbf {E^p_v}^\\prime ])$ Finally, we minimize the mean square error (MSE) between $\\hat{\\mathbf {a}}, \\hat{\\mathbf {v}}$ and normalized $\\mathbf {a}, \\mathbf 
{v}$ .", "Compared with single-modal MAEs, the AV-MAE features a cross-modal masked data modeling objective that allows the model to reconstruct one modality based on the information of another modality, which potentially helps the model learn audio-visual correlation.", "However, without an explicit objective of encouraging paired audio-visual correspondence, to which extent the vanilla AV-MAE leverages the audio-visual pairing information is unknown.", "In addition, using a joint encoder for two modalities allows cross-modal attention, but it also means the two very different modalities are processed with the same weights, which could lead to a sub-optimal solution." ], [ "Constrastive Audio-Visual Masked Autoencoder (CAV-MAE)", "As discussed in Section REF and REF , contrastive audio-visual learning and AV-MAE each has its advantages and disadvantages.", "Can we integrate the complementary advantages of CAV and AV-MAE?", "With this goal, we design the Contrastive Audio-Visual Masked Autoencoder (CAV-MAE) (shown in Figure REF .C).", "For a mini-batch of $N$ audio-visual pair samples, we first pre-process and tokenize the audios and images and get a sequence of audio and visual tokens $\\lbrace \\mathbf {a}_i,\\mathbf {v}_i\\rbrace $ for each sample $i$ and project them to $\\mathbb {R}^{768}$ with two modal-specific linear projection layer.", "We also add a modality type embedding $\\mathbf {E_a}$ and $\\mathbf {E_v}$ and modality-specific 2-D sinusoidal positional embedding $\\mathbf {E^p_a}$ and $\\mathbf {E^p_v}$ .", "After that, we uniformly mask 75% of tokens of each modality, i.e., $\\mathbf {a}_i^{\\mathrm {unmask}} = \\mathrm {Mask_{0.75}}(\\mathrm {Proj_a}(\\mathbf {a}_i) + \\mathbf {E_a} + \\mathbf {E^p_a}) \\\\\\mathbf {v}_i^{\\mathrm {unmask}} = \\mathrm {Mask_{0.75}}(\\mathrm {Proj_v}(\\mathbf {v}_i) + \\mathbf {E_v} + \\mathbf {E^p_v})$ We then input $\\mathbf {a}_i^{\\mathrm {unmask}}$ and $\\mathbf {v}_i^{\\mathrm {unmask}}$ to independent audio and visual Transformer encoders $\\mathrm {E_a}(\\cdot )$ and $\\mathrm {E_{v}}(\\cdot )$ and get $\\mathbf {a}_i^\\prime $ and $\\mathbf {v}_i^\\prime $ , respectively.", "After that, we apply multi-stream forward passes to input $\\mathbf {a}_i^\\prime $ , $\\mathbf {v}_i^\\prime $ to a joint audio-visual encoder $\\mathrm {E_j}(\\cdot ;\\mathrm {MSA},\\mathrm {LN1},\\mathrm {LN2},\\mathrm {MLP})$ .", "Specifically, we input audio tokens $\\mathbf {a}_i^\\prime $ , video tokens $\\mathbf {v}_i^\\prime $ , and concatenated audio-visual tokens $[\\mathbf {a}_i^\\prime , \\mathbf {v}_i^\\prime ]$ in three independent forward passes to $\\mathrm {E_j}$ .", "For each stream, we use different layer normalization layers $\\mathrm {LN1_{\\lbrace a,v,av\\rbrace }}$ and $\\mathrm {LN2_{\\lbrace a,v,av\\rbrace }}$ , all other weights (i.e., weights of the $\\mathrm {MSA}$ and $\\mathrm {MLP}$ ) of $\\mathrm {E_j}$ are shared for all three streams.", "Formally, $c_i^a = \\mathrm {Mean Pool}(\\mathrm {E_j}(\\mathrm {E_a}(\\mathbf {a}_i^{\\mathrm {unmask}}));\\mathrm {LN1_a},\\mathrm {LN2_a})) \\\\c_i^v = \\mathrm {Mean Pool}(\\mathrm {E_j}(\\mathrm {E_v}(\\mathbf {v}_i^{\\mathrm {unmask}}));\\mathrm {LN1_v},\\mathrm {LN2_v})) \\\\\\mathbf {x_i} = \\mathrm {E_j}([\\mathrm {E_a}(\\mathbf {a}_i^{\\mathrm {unmask}}),\\mathrm {E_v}(\\mathbf {v}_i^{\\mathrm {unmask}})];\\mathrm {LN1_{av}},\\mathrm {LN2_{av}})$ We use the output of the audio and visual single modality stream $c_i^a$ and $c_i^v$ for contrastive learning and the output of the 
audio-visual multi-modal stream $\\mathbf {x_i}$ for the reconstruction task.", "For contrastive audio-visual learning, we use the contrastive loss $\\mathcal {L}_\\mathrm {c}$ : $\\mathcal {L}_\\mathrm {c} = - \\frac{1}{N} \\sum _{i=1}^N {\\rm log} \\left[ \\frac{ {\\rm exp} (s_{i,i}/\\tau )}{\\sum _{k \\ne i} {\\rm exp} (s_{i,k}/\\tau ) + {\\rm exp} (s_{i,i}/\\tau )} \\right]$ where $s_{i,j} = \\Vert c^v_i\\Vert ^T\\Vert c^a_j\\Vert $ and $\\tau $ is the temperature.", "For the reconstruction task, we pad $\\mathbf {x_i}$ with trainable masked tokens at their original positions as $\\mathbf {x_i^\\prime }$ .", "We also add modality type embeddings $\\mathbf {E_a^\\prime }$ and $\\mathbf {E_v^\\prime }$ and modality-specific 2-D sinusoidal positional embeddings $\\mathbf {E^p_a}^\\prime $ and $\\mathbf {E^p_v}^\\prime $ before feeding $\\mathbf {x_i^\\prime }$ to a joint audio-visual decoder $\\mathrm {D_j}(\\cdot )$ to reconstruct the input audio and image.", "$\\mathrm {D_j}(\\cdot )$ processes audio and visual tokens with the same set of weights except for the last modal-specific projection layer, and outputs $\\hat{\\mathbf {a}}_i$ and $\\hat{\\mathbf {v}}_i$ .", "We then apply a mean square error reconstruction loss $\\mathcal {L}_\\mathrm {r}$ : $\\hat{\\mathbf {a}}_i, \\hat{\\mathbf {v}}_i = \\mathrm {D_j}(\\mathbf {x_i^\\prime } + [\\mathbf {E_a^\\prime }, \\mathbf {E_v^\\prime }] + [\\mathbf {E^p_a}^\\prime , \\mathbf {E^p_v}^\\prime ])$ $\\mathcal {L}_\\mathrm {r} = \\frac{1}{N} \\sum _{i=1}^N\\left[\\frac{\\sum (\\hat{\\mathbf {a}}_i^\\mathrm {mask}-\\mathrm {norm}(\\mathbf {a}_i^\\mathrm {mask}))^2}{|\\mathbf {a}_i^\\mathrm {mask}|}+ \\frac{\\sum (\\hat{\\mathbf {v}}_i^\\mathrm {mask}-\\mathrm {norm}(\\mathbf {v}_i^\\mathrm {mask}))^2}{|\\mathbf {v}_i^\\mathrm {mask}|}\\right]$ where $N$ is the mini-batch size; $\\mathbf {a}^\\mathrm {mask}$ , $\\mathbf {v}^\\mathrm {mask}$ , $\\hat{\\mathbf {a}}^\\mathrm {mask}$ , $\\hat{\\mathbf {v}}^\\mathrm {mask}$ denote the original and predicted masked patches (we only calculate the loss based on the masked portion of the input); $|\\mathbf {a}^\\mathrm {mask}|$ and $|\\mathbf {v}^\\mathrm {mask}|$ denote the numbers of masked audio and visual patches, respectively.", "Finally, we sum up the contrastive loss $\\mathcal {L}_\\mathrm {c}$ (multiplied by a weight $\\lambda _c$ ) and the reconstruction loss $\\mathcal {L}_\\mathrm {r}$ as the loss for CAV-MAE, i.e., $\\mathcal {L}_\\mathrm {CAV-MAE} = \\mathcal {L}_\\mathrm {r} + \\lambda _c\\cdot \\mathcal {L}_\\mathrm {c}$ .", "After pretraining, we abandon the decoder and only keep the encoders of the model for downstream tasks.", "We can use the sum of the single-modality stream outputs and the multi-modal stream output, or just the multi-modal stream output, for finetuning.", "They perform similarly in our experiments.", "Discussion: we next discuss the motivation of some key designs of CAV-MAE: 1.", "Multi-stream forward passes of the joint encoder.", "We find it important to restrict the representations used for contrastive audio-visual learning, so that $c^a$ only comes from the audio input and $c^v$ only comes from the visual input; otherwise, the contrastive objective will collapse.", "In the meantime, we hope the encoder fuses the audio and visual information for the reconstruction task and downstream tasks.", "Therefore, we design the multi-stream forward pass strategy for CAV-MAE.", "2.", "Modality-specific encoders and $\\mathrm {LN}$ layers.", "While there are a few recent attempts
[1], [12] to process audio and visual modalities with a unified network, due to the very different nature of the audio and visual modalities, the general conclusion is that modality-specific networks are still optimal in terms of performance.", "Therefore, we choose to encode audio and visual inputs with modality-specific encoders before the joint encoder.", "For the same reason, we also use different normalization statistics for each stream of the joint encoder.", "Efficiency-wise, having two modality-specific encoders increases the model size, but lowers the computation as the Transformer has a quadratic complexity w.r.t. the input sequence length.", "3.", "Masked contrastive audio-visual learning.", "Unlike single-modality contrastive learning, conventional contrastive audio-visual learning does not typically apply augmentation or masking.", "In this work, we propose to use masked contrastive audio-visual learning, i.e., we randomly mask a portion of the input before conducting contrastive learning.", "This design not only allows us to combine CAV with AV-MAE, but also helps to avoid overfitting.", "In practice, when the masking ratio is 75% and the effective contrastive batch size is 27 (108 on 4 GPUs), the audio-visual matching accuracy during pretraining on the evaluation set is about 72%, which shows the masked contrastive audio-visual task is neither trivial nor impossible." ], [ "Implementation Details", "By default, all encoder Transformer layers are 768-dimensional and have 12 attention heads.", "The joint encoder of the vanilla AV-MAE is a 12-layer Transformer; the audio and visual encoders of CAV-MAE are 11-layer Transformers (each 768-dimensional) and the joint encoder is a single-layer Transformer.", "That is, we fix the total number of encoder layers of all models at 12, but CAV and CAV-MAE are larger models due to the modality-specific encoders.", "The decoders of AV-MAE and CAV-MAE are 8-layer Transformers with an embedding dimension of 512 and 16 attention heads.", "These settings are identical to the original vision MAE [23].", "We fix the contrastive loss temperature $\\tau =0.05$ .", "For CAV-MAE, we use $\\lambda _c=0.01$ .", "Note that the relatively small $\\lambda _c$ is due to the scale of the gradient of $\\mathcal {L}_\\mathrm {c}$ being larger than that of $\\mathcal {L}_\\mathrm {r}$ ; it does not mean the contrastive objective is unimportant.", "The encoder and decoder of the default CAV-MAE model have about 164M and 27M parameters, respectively.", "Following the common practice of audio-visual learning, we initialize the weights of all models with ImageNet-pretrained weights.", "Specifically, we use the weights of the original vision MAE [23].", "Nevertheless, unlike previous work that uses supervised pretrained weights (e.g., [15], [34]), we only use the self-supervised pretrained weights (i.e., without finetuning), which does not lead to the best performance but makes our whole training pipeline self-supervised."
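, "Putting the two objectives together, the total loss is compact enough to sketch directly; the following Python fragment (an illustrative rendering of $\\mathcal {L}_\\mathrm {CAV-MAE} = \\mathcal {L}_\\mathrm {r} + \\lambda _c\\cdot \\mathcal {L}_\\mathrm {c}$ with the stated $\\tau =0.05$ and $\\lambda _c=0.01$ ; the argument names are illustrative, and the reconstruction tensors are assumed to be already restricted to the masked positions, with normalized targets) shows how the contrastive term reduces to a cross-entropy over the similarity matrix.", "
import torch
import torch.nn.functional as F

def cav_mae_loss(ca, cv, a_hat, a_tgt, v_hat, v_tgt, tau=0.05, lam_c=0.01):
    # contrastive term: a softmax over each row of the (N, N) cosine
    # similarity matrix, with the diagonal (the true pair) as the target
    ca = F.normalize(ca, dim=-1)
    cv = F.normalize(cv, dim=-1)
    s = cv @ ca.t() / tau
    lc = F.cross_entropy(s, torch.arange(s.shape[0], device=s.device))
    # reconstruction term: MSE on the masked audio and visual patches only
    lr = F.mse_loss(a_hat, a_tgt) + F.mse_loss(v_hat, v_tgt)
    return lr + lam_c * lc
"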
], [ "Self-Supervised Model Pretraining", "We pretrain and compare the performance of the following models: 1.", "Audio-MAE/Visual-MAE: Single-modal masked auto-encoder models.", "The model architecture is the same with Vanilla AV-MAE but they are only pretrained with data of a single modality.", "2.", "CAV: The contrastive audio-visual learning model that has no reconstruction objective.", "For a fair comparison, we implement CAV using the same encoder architecture (modal-specific encoders + joint encoder) with CAV-MAE but remove the reconstruction objective $\\mathcal {L}_r$ .", "3.", "Vanilla AV-MAE: The vanilla audio-visual masked auto-encoder with a joint encoder and no contrastive objective as described in Section REF .", "4.", "AV-MAE: The audio-visual masked auto-encoder with two modal-specific encoders and a joint encoder.", "It has the same architecture with CAV-MAE, but $\\lambda _c$ is set to 0 (no contrastive loss).", "We use this model to disentangle the impact of modal-specific encoders (when compared with Vanilla AV-MAE) and contrastive objective (when compared with CAV-MAE).", "5.", "CAV-MAE: Our proposed contrastive masked auto-encoder as described in Section REF .", "6.", "CAV-MAEscale+: The same model with CAV-MAE, but trained with a larger batch size=108 (effective contrastive batch size=27) and more epochs=25.", "We train this model on our best GPUs.", "For a fair comparison, all models (except CAV-MAEscale+) are pretrained with the same pipeline with a batch size of 48 for 12 epochs on the full AudioSet-2M.", "During pretraining, we intentionally do not use class balanced sampling as that implicitly leverages the label information.", "Our pretraining process (including the ImageNet pretrained weight initialization) is fully self-supervised.", "Please refer to the Appendix  for all pretraining details." 
], [ "Audio-Visual Event Classification", "We evaluate the representation quality on the audio-visual event classification task, a major audio-visual learning benchmark.", "Specifically, we fine-tune the pretrained models on three datasets: 1) AudioSet-20K (20K samples, same domain as the pretraining data); 2) AudioSet-2M (2 million samples, same with pretraining data); and 3) VGGSound (200K samples, different domain than the pretraining data), covering various downstream data volume and domain situations.", "In the fine-tuning stage, we only keep the encoder of the pretrained models and connect it to a randomly initialized linear classification head.", "To avoid overriding too much of the knowledge learned in pretraining, we use a smaller learning rate for the pretrained weights and a 10$\\times $ -100$\\times $ larger learning rate for the new classification head.", "We use the standard training pipeline used in prior audio-based and audio-visual event classification work [20], [21], [34] with mixup [50], balanced sampling, label smoothing, and random time shifts.", "Please refer to the appendix for the details.", "We fine-tune the model using audio-only data (A), video-only data (V), and audio-visual data (AV) to evaluate the single-modal and multi-modal representation quality.", "We show the results in Table REF .", "Key findings are as follows: Table: Comparing audio-visual classification performance on AudioSet and VGGSound.IN SL=ImageNet supervised learning; SSL=self-supervised learning; †Industry-level computation.", "*Nonstandard data split; ensEnsemble of single-modal models.", "We bold the best methods without supervised pretraining, and underline the overall best methods.1.", "Contrastive learning and masked data modeling are complementary.", "While both AV-MAE (only with masked data modeling objective) and CAV (only with contrastive objective) perform better than ensembling two single-modal MAEs, the proposed CAV-MAE that combines the two objectives significantly boosts the performance (e.g., 2.0 and 3.1 mAP boost from CAV and AV-MAE on AudioSet-20K, respectively).", "Note CAV-MAE, AV-MAE, and CAV have the same architecture during fine-tuning, the only difference is the objective in the pretraining stage.", "This demonstrates that the two major self-supervised learning frameworks are complementary in the context of audio-visual learning and CAV-MAE is an effective way to combine their advantages.", "2.", "CAV-MAE multi-modal pretraining improves single-modal performance.", "We find the CAV-MAE model pretrained with paired audio-visual data, when fine-tuned with only a single modality, performs noticeably better than Audio-MAE and Visual-MAE on single-modal classification tasks (e.g., 34.2$\\rightarrow $ 37.7 mAP for audio, 15.7$\\rightarrow $ 19.8 mAP for visual on AudioSet-20K).", "Note for single-modal fine-tuning, CAV-MAE only keeps one branch and has the same architecture with Audio-MAE and Visual-MAE, so the performance improvement can only come from the use of multi-modal data during pretraining.", "We hypothesize this is due to the two modalities serving as soft labels for each other, providing richer information than the binary human-annotated labels.", "As a result, CAV-MAE achieves a new SOTA performance on audio-based event classification on AudioSet-20K (37.7 mAP) and VGGSound (59.5% accuracy), without supervised pretraining and industry-level computational resources.", "3.", "Fully SSL pretrained CAV-MAE matches or outperforms SOTA models with significantly fewer 
computational resources.", "There are two major setting differences between this work and previous SOTA works.", "First, our pretraining is completely self-supervised so that our model can leverage web-scale unlabeled videos, while supervised ImageNet pretraining is commonly used in previous audio-visual works, e.g., in MBT [34].", "ImageNet labels are strong supervision signals that can directly impact the visual branch performance.", "As a result, our visual branch is worse than SOTA models.", "Second, we pretrain and fine-tune the model with 4 GPUs, making our work easy to reproduce.", "Most SOTA models are trained with industry-level resources (e.g., 32 TPUs for Perceiver [25], 64 GPUs for Audio-MAE [48] and MBT), which brings many benefits such as large batch size (particularly useful for contrastive learning), multiple frames input (MBT uses 8 frames as input), and more training epochs (Audio-MAE pretrain the model for 32 epochs).", "Even with such setting differences, on the audio-visual event classification task, our CAV-MAE performs better than the best existing audio-visual model MBT on VGGSound (even when CAV-MAE is only pretrained on VGGSound, see Table REF ) and comparable on AudioSet-20K and AudioSet-2M.", "On the audio-based event classification task, our CAV-MAE performs better than the best existing audio model Audio-MAE on AudioSet-20k and comparable on AudioSet-2M.", "Besides, we find modal-specific encoders are helpful as AV-MAE outperforms Vanilla AV-MAE.", "Vanilla AV-MAE with only a joint encoder does not outperform the ensemble of single-modal Audio-MAE and Visual-MAE.", "Scaling up the batch size and training epochs improves the performance as CAV-MAEscale+ generally performs better than CAV-MAE.", "The performance margin is smaller on larger fine-tuning datasets.", "Ablation Studies: We conduct a series of ablation studies to show the impact of each design factor.", "For each study, we use CAV-MAEscale+ or CAV-MAE as the base model, change one factor at a time, and report the downstream classification performance of the model on AudioSet-20K or VGGSound.", "Our findings are as follows: the weight of the contrastive loss $\\lambda _c$ has a large impact on the performance, too large or too small $\\lambda _c$ leads to a noticeable performance drop (Table REF ); Scaling up the pretraining epochs and batch size consistently leads to a performance improvement (Table REF and  REF ); Normalizing the prediction target only leads to marginal performance improvement (Table REF ); When finetuning on VGGSound, pretraining with the larger out-of-domain AudioSet-2M is better than pretraining with the smaller in-domain VGGSound itself, but pretraining first on AudioSet-2M and then on VGGSound leads to the best result (Table REF ); During fine-tuning, using the output of the multi-modal stream of the encoder leads to better performance than using the concatenated single-modal stream outputs, and summing the output of two streams generally lead to similar result (Table REF ); When only one modality is of interest, it is better to fine-tune the model with single-modal data than fine-tune the model with audio-visual data and do single modality inference.", "However, the performance gap is small for audio (Table REF ); The frame aggregation strategy boosts the performance without the need to input multiple frames to the model (Table REF ); In the linear probe setting, CAV-MAE also noticeably outperform the baselines (Table REF ).", "Table: Ablation studies on audio-visual 
classification.", "MM=multi-modal, SM=single-modal." ], [ "Audio-Visual Retrieval", "In the previous section, we show that CAV-MAE learns a good audio-visual joint representation that effectively fuses the unimodal signals for the audio-visual event classification task.", "Next, we study if CAV-MAE also learns a good coordinated representation that captures audio-visual correspondences for audio-visual retrieval.", "Specifically, we uniformly sample a subset of 1,725 and 1,545 audio-visual samples from the AudioSet and VGGSound evaluation set, respectively (about 10%) to make the similarity matrix of a reasonable size.", "We input audio and image to each model in two independent forward passes and take the mean-pooled encoder outputs as the audio and visual representation, respectively.", "We then calculate the retrieval recall at rank 1, 5, and 10 (R$@$ 1, R$@$ 5, R$@$ 10) based on the cosine similarity of the audio and visual representation.", "All models are self-supervised pretrained but not fine-tuned, i.e., labels are not used.", "We conduct bi-directional visual$\\rightarrow $ audio and audio$\\rightarrow $ visual retrieval, but due to the space limitation and the conclusions being similar, we show the quantitative result of visual$\\rightarrow $ audio in Table REF and leave the result of audio$\\rightarrow $ visual retrieval in the Appendix .", "We also show retrieval samples in Figure REF and Appendix .", "We find a contrastive objective is necessary for the audio-visual retrieval task as the performance of both Vanilla-MAE and AV-MAE are close to random guesses.", "Nevertheless, the cross-modal masked data modeling objective does not hurt, and in many cases, improves the retrieval performance, e.g., when $\\lambda _c=0.1$ , CAV-MAE generally performs better than CAV.", "Scaling up the batch size and training epoch also leads to a better retrieval performance.", "When tested on a dataset different from the pretraining dataset (VGGSound), the retrieval performance is still competitive, indicating the audio-visual correspondence transfers well in addition to the audio and visual representations.", "These results demonstrate that the contrastive and mask data modeling objectives do not conflict, a single pretrained CAV-MAE can be applied to both audio-visual fusion and correspondence tasks." 
], [ "Related Work", "Contrastive Audio-Visual Learning The natural pairing of audio and visual information in videos has been a useful signal for learning audio-visual representations through self-supervision.", "Existing methods include knowledge distillation [4], [38], paired sample discrimination [2], [28], [37], and contrastive learning [33].", "To improve contrastive learning, some recent methods sought to mine better negative samples [31], [32], while others proposed additional data augmentation [39], [45] or using global and local video views [49], [40].", "Our approach instead combines the contrastive loss with masked data modeling, which not only leads to an improvement in classification performance but also maintains the compelling ability of audio-visual retrieval [3], [41].", "Masked Auto-Encoder.", "Masking data modeling has a long history [44].", "Given the success of MAE in the vision domain [23], [6], [19], [42], [16], several efforts adapt MAE for audio with relatively minor changes to the overall pipeline [5], [36], [11], [48].", "There are a few recent works investigating multi-modal MAE for the vision & language multi-modal scenarios [18], [30], which inspired us to design an audio-visual MAE.", "To the best of our knowledge, our AV-MAE and CAV-MAE are the first audio-visual masked autoencoders.", "One closely related concurrent work is CMAE [24], which also combines MAE and contrastive loss, but only for single-modal images.", "Our motivation and implementation are very different from CMAE as we aim to leverage the unique audio-visual pair information and CAV-MAE features a multi-stream joint encoder design.", "Finally, while we take a modern approach with Transformers, multi-modal autoencoders have been studied more than a decade ago with much simpler models and datasets [35]." ], [ "Conclusion", "In this paper, we introduce two novel audio-visual learning models AV-MAE and CAV-MAE.", "The main idea of this paper is simple: masked data modeling and contrastive learning are a pair of complementary frameworks that should be used together for audio-visual self-supervised learning.", "Effectively combining the two frameworks and avoiding representation collapse requires some careful design choices such as the multi-stream forward pass strategy, joint-specific encoder architecture, and masked contrastive learning.", "From the perspective of audio-visual representation learning, CAV-MAE learns a joint and coordinated representation and can be used for both audio-visual joint event classification task as well as the audio-visual retrieval task.", "As a result, on the audio-visual event classification task, CAV-MAE matches or outperforms SOTA models with fully self-supervised pretraining and noticeably fewer computational resources; on the retrieval task, CAV-MAE is comparable to models trained with only the contrastive objective.", "Finally, we find that CAV-MAE multi-modal pretraining also learns strong single-modal representations, consequently, CAV-MAE achieves new SOTA performance on audio-based event classification." ], [ "Reproducibility Statement", "We document all implementation details in Section REF and Appendix .", "We will release the code and model upon publication." 
], [ "Dataset Details", "We use two major audio-visual datasets for our experiments: AudioSet [17] and VGGSound [9].", "AudioSet-2M is a collection of 2M 10-second YouTube video clips labeled with the sounds that the clip contains from a set of 527 labels of audio events, AudioSet-20K is a subset of AudioSet-2M with more balanced class distribution.", "Due to changes in video availability, we downloaded 1.77M AudioSet-2M training, 19K AudioSet-20K training, and 17K evaluation samples, respectively.", "VGGSound [9] is a collection of 200K 10-second YouTube video clips annotated with 309 classes.", "We download 183K training and 15K test samples.", "We only use the labels in the fine-tuning stage to make our pretraining pipeline fully self-supervised." ], [ "Training Details", "Our training hyper-parameters are listed in Table REF .", "Most of our experiments are run on 4$\\times $ NVIDIA GTX Titan X Pascal GPUs with 12GB memory, only the scaled-up CAV-MAEScale+ is pretrained on 4$\\times $ NVIDIA RTX A5000 GPUs with 24GB memory, making our result easier to reproduce with reasonable resources.", "Pretraining CAV-MAE takes about one week with 4 GPUs.", "Table: Our pre-training and fine-tuning hyperparameters." ], [ "Additional Audio-Visual Retrieval Results.", "We show additional audio-visual bi-directional retrieval results.", "All samples in Figure REF and REF are from VGGSound, a dataset that is different from the pretraining dataset AudioSet.", "Table: Audio to visual retrieval results on AudioSet and VGGSound.Figure: Audio to video retrieval results.", "Since the spectrogram is hard to read, we show its paired image in dashed box just for visualization purpose.Figure: Video to audio retrieval results.", "Since the spectrogram is hard to read, we show its paired image in dashed box just for visualization purpose." ], [ "MAE Reconstruction Results", "We show the CAV-MAE reconstruction samples in Figure REF , REF , and REF .", "All samples are from VGGSound, a different dataset from the pretraining set.", "The CAV-MAE model is trained with 75% masking ratio without target normalization.", "As we show in our experiment, it has similar performance with the one with target normalization.", "CAV-MAE has strong reconstruction ability even the masking ratio goes to 90%, which makes it potentially can be used for in-painting and enhancement tasks.", "Figure: CAV-MAE reconstruction samples when 50% of the input is masked.", "Samples are from VGGSound, a different dataset from the pretraining dataset.", "The CAV-MAE model is pretrained on AudioSet with 75% masking ratio without target normalization.Figure: CAV-MAE reconstruction samples when 75% of the input is masked.", "Samples are from VGGSound, a different dataset from the pretraining dataset.", "The CAV-MAE model is pretrained on AudioSet with 75% masking ratio without target normalization.Figure: CAV-MAE reconstruction samples when 90% of the input is masked.", "Samples are from VGGSound, a different dataset from the pretraining dataset.", "The CAV-MAE model is pretrained on AudioSet with 75% masking ratio without target normalization." ] ]
2210.07839
[ [ "s-Club Cluster Vertex Deletion on Interval and Well-Partitioned Chordal\n Graphs" ], [ "Abstract In this paper, we study the computational complexity of \\textsc{$s$-Club Cluster Vertex Deletion}.", "Given a graph, \\textsc{$s$-Club Cluster Vertex Deletion ($s$-CVD)} aims to delete the minimum number of vertices from the graph so that each connected component of the resulting graph has a diameter at most $s$.", "When $s=1$, the corresponding problem is popularly known as \\sloppy \\textsc{Cluster Vertex Deletion (CVD)}.", "We provide a faster algorithm for \\textsc{$s$-CVD} on \\emph{interval graphs}.", "For each $s\\geq 1$, we give an $O(n(n+m))$-time algorithm for \\textsc{$s$-CVD} on interval graphs with $n$ vertices and $m$ edges.", "In the case of $s=1$, our algorithm is a slight improvement over the $O(n^3)$-time algorithm of Cao \\etal (Theor.", "Comput.", "Sci., 2018) and for $s \\geq 2$, it significantly improves the state-of-the-art running time $\\left(O\\left(n^4\\right)\\right)$.", "We also give a polynomial-time algorithm to solve \\textsc{CVD} on \\emph{well-partitioned chordal graphs}, a graph class introduced by Ahn \\etal (\\textsc{WG 2020}) as a tool for narrowing down complexity gaps for problems that are hard on chordal graphs, and easy on split graphs.", "Our algorithm relies on a characterisation of the optimal solution and on solving polynomially many instances of the \\textsc{Weighted Bipartite Vertex Cover}.", "This generalises a result of Cao \\etal (Theor.", "Comput.", "Sci., 2018) on split graphs.", "We also show that for any even integer $s\\geq 2$, \\textsc{$s$-CVD} is NP-hard on well-partitioned chordal graphs." ], [ "Introduction", "Detecting “highly-connected” parts or “clusters” of a complex system is a fundamental research topic in network science [39], [29] with numerous applications in computational biology [13], [31], [7], [35], [36], machine learning [6], image processing [38], etc.", "In a graph-theoretic approach, a complex system or a network is often viewed as an undirected graph $G$ that consists of a set of vertices $V(G)$ representing the atomic entities of the system and a set of edges $E(G)$ representing a binary relationship among the entities.", "A cluster is often viewed as a dense subgraph (often a clique) and partitioning a graph into such clusters is one of the main objectives of graph-based data clustering [7], [34], [14].", "Ben-Dor et al.", "[7] and Shamir et al.", "[34] observed that the clusters of certain networks may be retrieved by making a small number of modifications in the network.", "These modifications may be required to account for the errors introduced during the construction of the network.", "In graph-theoretic terms, the objective is to modify (e.g.", "edge deletion, edge addition, vertex deletion) a given input graph as little as possible so that each component of the resulting graph is a cluster.", "When deletion of vertices is the only valid operation on the input graph, the corresponding clustering problem falls in the category of vertex deletion problems, a core topic in algorithmic graph theory.", "Many classic optimization problems like Maximum Clique, Maximum Independent Set, Vertex cover are examples of vertex deletion problems.", "In this paper, we study popular vertex deletion problems called Cluster Vertex Deletion and its generalisation $s$ -Club Cluster Vertex Deletion, both being important in the context of graph-based data clustering.", "Given a graph $G$ , the objective of Cluster Vertex 
Deletion (CVD) is to delete a minimum number of vertices so that the remaining graph is a disjoint union of cliques.", "Below we give a formal definition of CVD.", "Cluster Vertex Deletion (CVD) Input: An undirected graph $G$ , and an integer $k$ .", "Output: Yes, if there is a set $S$ of vertices with $|S|\\le k$ , such that each component of the graph induced by $V(G)\\setminus S$ is a clique.", "No, otherwise.", "The term Cluster Vertex Deletion was coined by Gramm et al. [20] in 2004.", "However, NP-hardness of CVD, even on planar graphs and bipartite graphs, follows from the seminal works of Yannakakis [40] and Lewis & Yannakakis [25] from four decades ago.", "Since then, many researchers have proposed parameterized algorithms and approximation algorithms for CVD on general graphs [9], [37], [21], [18], [19], [32], [41], [16], [17], [4].", "In this paper, we focus on the polynomial-time solvability of CVD on special classes of graphs.", "Cao et al. [10] gave polynomial-time algorithms for CVD on interval graphs (see Definition REF ) and split graphs.", "Chakraborty et al. [11] gave a polynomial-time algorithm for CVD on trapezoid graphs.", "However, much remains unknown: Chakraborty et al. [11] pointed out that the computational complexity of CVD on planar bipartite graphs and cocomparability graphs is unknown.", "Cao et al. [10] asked if CVD can be solved on chordal graphs (graphs with no induced cycle of length greater than 3) in polynomial time.", "Ahn et al. [1] introduced well-partitioned chordal graphs (see Definition REF ) as a tool for narrowing down complexity gaps for problems that are hard on chordal graphs, and easy on split graphs.", "Since several problems that are either hard or open on chordal graphs (for example, transversal of longest paths and cycles, the tree 3-spanner problem, and the geodetic set problem) become polynomial-time solvable on well-partitioned chordal graphs [2], the computational complexity of CVD on well-partitioned chordal graphs is a well-motivated open question.", "In this paper, we also study a generalisation of CVD known as $s$ -Club Cluster Vertex Deletion ($s$ -CVD).", "In many applications, the equivalence of cluster and clique is too restrictive [5], [30], [3].", "For example, in protein networks where proteins are the vertices and the edges indicate the interaction between the proteins, a more appropriate notion of clusters may have a diameter of more than 1 [5].", "Therefore, researchers have defined the notion of $s$ -clubs [27], [5].", "An $s$ -club is a graph with diameter at most $s$ .", "The objective of $s$ -Club Cluster Vertex Deletion ($s$ -CVD) is to delete the minimum number of vertices from the input graph so that every connected component of the resulting graph is an $s$ -club.", "Below we give a formal definition of $s$ -CVD.", "$s$ -Club Cluster Vertex Deletion ($s$ -CVD) Input: An undirected graph $G$ , and integers $k$ and $s$ .", "Output: Yes, if there is a set $S$ of vertices with $|S|\\le k$ , such that each component of the graph induced by $V(G)\\setminus S$ has diameter at most $s$ .", "No, otherwise.", "Schäfer [33] introduced the notion of $s$ -CVD and gave a polynomial-time algorithm for $s$ -CVD on trees.", "Researchers have studied the particular case of 2-CVD as well [26], [15].", "In general, $s$ -CVD remains NP-hard on planar bipartite graphs for each $s\\ge 2$ , and APX-hard on split graphs for $s=2$  [11] (in contrast to the polynomial-time solvability of CVD on split graphs).",
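"For concreteness, checking whether a candidate set $S$ is a valid solution is straightforward; the following Python sketch (an illustration, assuming the networkx library; is_s_cvd_set is an illustrative name, not from the papers cited here) verifies the $s$ -CVD condition, with $s=1$ being exactly the CVD condition.", "
import networkx as nx

def is_s_cvd_set(G, S, s):
    # delete S, then check that every connected component of the
    # remaining graph has diameter at most s (for s = 1, every
    # component must be a clique, i.e., the CVD condition)
    H = G.copy()
    H.remove_nodes_from(S)
    return all(nx.diameter(H.subgraph(c)) <= s
               for c in nx.connected_components(H))
",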
"Combination of the ideas of Cao et al.", "[10] and Schäfer [33], provides an $O(n^8)$ -time algorithm for $s$ -CVD on a trapezoid graphs (intersection graphs of trapezoids between two horizontal lines) with $n$ vertices [11].", "This algorithm can be modified to give an $O(n^4)$ -time algorithm for $s$ -CVD on interval graphs with $n$ vertices.", "General notations: For a graph $G$ , let $V(G)$ and $E(G)$ denote the set of vertices and edges, respectively.", "For a vertex $v\\in V(G)$ , the set of vertices adjacent to $v$ is denoted by $N(v)$ and $N[v]=N(v)\\cup \\lbrace v\\rbrace $ .", "For $S \\subseteq V(G)$ , let $G-S$ be an induced graph obtained by deleting the vertices in $S$ from $G$ .", "For two sets $S_1,S_2$ , let $S_1-S_2$ denotes the set obtained by deleting the elements of $S_2$ from $S_1$ .", "The set $S_1\\Delta S_2$ denotes $(S_1\\cup S_2)-(S_1\\cap S_2)$ ." ], [ "Our Contributions", "In this section, we state our results formally.", "We start with the definition of well-partitioned chordal graphs as given in [1].", "Definition 1 ([1]) A connected graph $G$ is a well-partitioned chordal graph if there exists a partition $\\mathcal {P}$ of $V(G)$ and a tree $\\mathcal {T}$ having $\\mathcal {P}$ as a vertex set such that the following hold.", "Each part $X\\in \\mathcal {P}$ is a clique in $G$ .", "For each edge $XY\\in E(\\mathcal {T})$ , there exist $X^{\\prime } \\subseteq X$ and $Y^{\\prime } \\subseteq Y$ such that edge set of the bipartite graph $G[X,Y]$ is $ X^{\\prime } \\times Y^{\\prime }$ .", "For each pair of distinct $X,Y\\in V(\\mathcal {T})$ with $XY\\notin E(\\mathcal {T})$ , there is no edge between a vertex in $X$ and a vertex in $Y$ .", "The tree $\\mathcal {T}$ is called a partition tree of $G$ , and the elements of $\\mathcal {P}$ are called its bags or nodes of $\\mathcal {T}$ .", "Our first result is on CVD for well-partitioned chordal graphs which generalises a result of Cao et al.", "[10] for split graphs.", "We prove the following theorem in Section .", "Theorem 1 Given a well-partitioned chordal graph $G$ and its partition tree, there is an $O(m^2 n)$ -time algorithm to solve CVD on $G$ , where $n$ and $m$ are the number of vertices and edges.", "Since a partition tree of a well-partitioned chordal graph can be obtained in polynomial time [1], the above theorem adds CVD to the list of problems that are open on chordal graphs but admits polynomial-time algorithm on well-partitioned chordal graphs.", "Our algorithm relies on a characterisation of the solution set and we show that the optimal solution of a well-partitioned chordal graph with $m$ edges can be obtained by finding weighted minimum vertex cover [24] of $m$ many weighted bipartite graphs with weights at most $n$ .", "Then standard Max-flow based algorithms [24], [28], [23] from the literature yields Theorem REF .", "On the negative side, we prove the following theorem in Section .", "Theorem 2 Unless the Unique Games Conjecture is false, for any even integer $s\\ge 2$ , there is no $(2-\\epsilon )$ -approximation algorithm for $s$ -CVD on well-partitioned graphs.", "Our third result is a faster algorithm for $s$ -CVD on interval graphs.", "Definition 2 A graph $G$ is an interval graph if there is a collection $\\mathcal {I}$ of intervals on the real line such that each vertex of the graph can be mapped to an interval and two intervals intersect if and only if there is an edge between the corresponding vertices in $G$ .", "The set $\\mathcal {I}$ is an interval representation of $G$ We 
prove the following theorem in Section REF .", "Theorem 3 For each $s\\ge 1$ , there is an $O(n(n+m))$ -time algorithm to solve $s$ -CVD on interval graphs with $n$ vertices and $m$ edges.", "We note that our techniques deviate significantly from the ones in the previous literature [33], [10], [11].", "We show that the optimal solution (for $s$ -CVD on interval graphs) must be one of “four types” and that the optimum for each of the “four types” can be found by solving $s$ -CVD on $O(m+n)$ many induced subgraphs.", "Furthermore, we exploit the “linear” structure of interval graphs to ensure that the optimal solution in each case can be found in $O(n)$ time.", "Our result significantly improves the state-of-the-art running time $\\left(O\\left(n^4\\right)\\right)$ [11] for $s$ -CVD on interval graphs." ], [ "Polynomial time algorithm for CVD on well-partitioned chordal graphs", "In this section, we shall give a polynomial-time algorithm to solve CVD on well-partitioned chordal graphs.", "In the next section, we present the main ideas of our algorithm and describe our techniques for proving Theorem REF ." ], [ "Overview of the algorithm", "Let $G$ be a well-partitioned chordal graph with a partition tree $\\mathcal {T}$ rooted at an arbitrary node.", "For a node $X$ , let $\\mathcal {T}_X$ be the subtree rooted at $X$ and $G_X$ be the subgraph of $G$ induced by the vertices in the nodes of $\\mathcal {T}_X$ .", "For two adjacent nodes $X,Y$ of $\\mathcal {T}$ , the boundary of $X$ with respect to $Y$ is the set $bd(X,Y) = \\lbrace x\\in X\\colon N(x)\\cap Y\\ne \\emptyset \\rbrace $ .", "For a node $X$ , $P(X)$ denotes the parent of $X$ in $\\mathcal {T}$ .", "We denote minimum CVD sets of $G_X$ and $G_X-bd(X,P(X))$ by $OPT(G_X)$ and $OPT(G_X-bd(X,P(X)))$ , respectively.", "We shall use the above notations extensively in the description of our algorithm and proofs.", "Our dynamic programming-based algorithm traverses $\\mathcal {T}$ in a post-order fashion and, for each node $X$ of $\\mathcal {T}$ , computes $OPT(G_X)$ and $OPT(G_X-bd(X,P(X)))$ .", "A set $S$ of vertices is a CVD set of $G$ if $G-S$ is a disjoint union of cliques.", "At the heart of our algorithm lies a characterisation of CVD sets of $G_X$ , showing that any CVD set of $G_X$ is exactly one of two types, namely, an $X$ -CVD set or an $(X,Y)$ -CVD set where $Y$ is a child of $X$ (see Definitions REF and REF ).", "Informally, for a node $X$ , a CVD set is an $X$ -CVD set if it contains $X$ or removing it from $G_X$ creates a cluster all of whose vertices are from $X$ .", "On the contrary, a CVD set is an $(X,Y)$ -CVD set if its removal creates a cluster intersecting both $X$ and $Y$ , where $Y$ is a child of $X$ .", "In Lemma REF , we formally show that any CVD set of $G_X$ must be one of the above two types.", "To compute a minimum $X$ -CVD set, we first construct a weighted bipartite graph $\\mathcal {H}$ , which is defined in Section REF , and show that a minimum weighted vertex cover of $\\mathcal {H}$ can be used to construct a minimum $X$ -CVD set of $G$ (see Equations REF ).", "Then, in Section REF , we show that the subroutine for finding minimum $X$ -CVD sets can be used to get a minimum $(X,Y)$ -CVD set for each child $Y$ of $X$ .", "Finally, in Section REF , we combine our tools and give an $O(m^2 n)$ -time algorithm to find a minimum CVD set of a well-partitioned chordal graph $G$ with $n$ vertices and $m$ edges."
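, "Since the workhorse of this algorithm is Weighted Bipartite Vertex Cover, we include a short illustrative sketch of that subroutine via the standard minimum $s$ -$t$ cut reduction (illustrative code, assuming the networkx library; min_weight_vc_bipartite is an illustrative name): attach a source to each left vertex with capacity equal to its weight, attach each right vertex to a sink likewise, give the original edges infinite capacity, and read the cover off a minimum cut.", "
import networkx as nx

def min_weight_vc_bipartite(left_w, right_w, edges):
    # left_w, right_w: dicts mapping left/right vertices to weights;
    # edges: iterable of (u, b) pairs with u on the left, b on the right
    F = nx.DiGraph()
    for u, w in left_w.items():
        F.add_edge('s', ('L', u), capacity=w)
    for b, w in right_w.items():
        F.add_edge(('R', b), 't', capacity=w)
    for u, b in edges:
        F.add_edge(('L', u), ('R', b))   # no capacity attribute = infinite
    _, (S, T) = nx.minimum_cut(F, 's', 't')
    # a left vertex is in the cover iff its source edge is cut (it lies in T);
    # a right vertex is in the cover iff its sink edge is cut (it lies in S)
    cover = []
    for node in set(F) - {'s', 't'}:
        side, v = node
        if (side == 'L' and node in T) or (side == 'R' and node in S):
            cover.append((side, v))
    return cover
"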
], [ "Definitions and lemma", "In this section, we introduce some definitions and prove the lemma that facilitates the construction of a polynomial-time algorithm for finding a minimum CVD set of well-partitioned graphs.", "Definition 3 A cluster $C$ of a graph $G$ is a connected component that is isomorphic to a complete graph.", "Definition 4 Let $G$ be a well-partitioned graph, $\\mathcal {T}$ be its partition tree, and $X$ be the root node of $\\mathcal {T}$ .", "A CVD set $S$ of $G$ is an $X$ -CVD set if either $X\\subseteq S$ or $G-S$ contains a cluster $C\\subseteq X$ .", "Definition 5 Let $G$ be a well-partitioned graph, $\\mathcal {T}$ be its partition tree, $X$ be the root node of $\\mathcal {T}$ .", "Let $Y$ be a child of $X$ .", "A CVD set $S$ is a “$(X,Y)$ -CVD set” if $G-S$ has a cluster $C$ such that $C \\cap X \\ne \\emptyset $ and $C \\cap Y \\ne \\emptyset $ .", "Lemma 4 Let $S$ be a CVD set of $G$ .", "Then exactly one of the following holds.", "The set $S$ is a $X$ -CVD set.", "There is exactly one child $Y$ of $X$ in $\\mathcal {T}$ such that $S$ is an $(X,Y)$ -CVD set of $G$ .", "If $X\\subseteq S$ or if $G-S$ has a cluster which is contained in $X$ , then $S$ is an $X$ -CVD set.", "Otherwise, $X^{*} =(G-S) \\cap X \\ne \\emptyset $ and since $X^{*}$ is a clique, $G-S$ must contain a cluster $C$ such that $X^{*} \\subset C \\lnot \\subseteq X$ .", "Therefore, $C$ should intersect with at least one child of $X$ .", "Let $Y_1, Y_2$ be children of $X$ .", "If both $C \\cap Y_1 \\ne \\emptyset $ and $C \\cap Y_2 \\ne \\emptyset $ , then $C$ is not a cluster because $Y_1$ and $Y_2$ are non-adjacent nodes of $\\mathcal {T}$ .", "Hence $C$ intersects exactly one child of $X$ ." ], [ "Finding minimum $X$ -CVD sets", "In this section, we prove the following theorem.", "Theorem 5 Let $G$ be a well-partitioned graph rooted at $X$ and $\\mathcal {T}$ be a partition tree of $G$ .", "Assume for each node $Y\\in V(\\mathcal {T})-\\lbrace X\\rbrace $ both $OPT(G_Y)$ and $OPT(G_Y-bd(Y,P(Y)))$ are given, where $P(Y)$ is the parent of $Y$ in $\\mathcal {T}$ .", "Then a minimum $X$ -CVD set of $G$ can be computed in $O\\left(|E(G)|.|V(G)|\\right)$ time.", "For the remainder of this section, we denote by $G$ a fixed well-partitioned graph rooted at $X$ with a partition tree $\\mathcal {T}$ .", "Let $X_1,X_2,\\ldots ,X_t$ be the children of $X$ .", "The main idea behind our algorithm for finding minimum $X$ -CVD set of $G$ is to construct an auxiliary vertex weighted bipartite graph $\\mathcal {H}$ with at most $|V(G)|$ vertices such that the (minimum) vertex covers of $\\mathcal {H}$ can be used to construct (minimum) $X$ -CVD-CVD set.", "Below we describe the construction of $\\mathcal {H}$ .", "Let $\\mathcal {B} = \\left\\lbrace bd(X_i,X)\\colon i\\in [t]\\right\\rbrace $ .", "The vertex set of $\\mathcal {H}$ is $X \\cup \\mathcal {B}$ and the edge set of $\\mathcal {H}$ is defined as $E(\\mathcal {H}) &=& \\lbrace uB\\colon u\\in X, B\\in \\mathcal {B}, \\forall v\\in B, uv\\in E(G)\\rbrace $ The weight function on the vertices of $\\mathcal {H}$ is defined as follows.", "For each vertex $u\\in X$ , define $w(u)=1$ and for each set $B\\in \\mathcal {B}$ where $B=bd(X_j,X)$ , define $w(B) &=& |B| + \\left|OPT(G_{X_j} - B)\\right| - \\left|OPT (G_{X_j})\\right|$ Remark 1.", "Since $B \\cup OPT(G_{X_j}-B)$ is a CVD set of $G_{X_j}$ , we have $ |OPT(G_{X_j})| \\le |B| + |OPT(G_{X_j}-B)| $ and therefore $w(B)\\ge 0$ .", "Below we show how minimum weighted vertex covers of $\\mathcal {H}$ 
"Below we show how minimum weighted vertex covers of $\mathcal {H}$ can be used to compute a minimum $X$-CVD set of $G$.", "For an $X$-CVD set $Z$ of $G$, define $Cov(Z)=(X\cap Z) \cup \left\lbrace B \in \mathcal {B} \colon B \subseteq Z\right\rbrace $.", "Lemma 6 Let $Z$ be an $X$-CVD set of $G$.", "Then $Cov(Z)$ is a vertex cover of $\mathcal {H}$.", "Assume that $Cov(Z)$ is not a vertex cover of $\mathcal {H}$.", "Then there exists at least one edge $e=uB$ in $\mathcal {H}- Cov(Z)$.", "Hence from the definition of $Cov(Z)$ we infer that $u \in X-Z$ and $B \not\subseteq Z$.", "Let $C_u$ be the cluster of $G-Z$ that contains the vertex $u$.", "Since $X$ is a clique, $X-Z \subseteq C_u$.", "Observe that since $uB$ is an edge of $\mathcal {H}$, there exists a vertex $w \in B$ such that $uw \in E(G)$.", "Then the definitions of the partition tree $\mathcal {T}$ and of $B$ imply that all vertices of $B$ are contained in $N(u)$.", "Since $B \not\subseteq Z$, it follows that there exists at least one vertex $v \in B$ in $G-Z$ such that $uv \in E(G-Z)$ and hence $v \in C_u$.", "Therefore, the cluster $C_u$ intersects the child of $X$ that contains $B$, which contradicts the assumption that $Z$ is an $X$-CVD set of $G$ (see the definition of $X$-CVD set).", "For a vertex cover $D$ of $\mathcal {H}$, define $S_1(D)=D\cap X$, $S_2(D)= \displaystyle \bigcup \limits _{\begin{array}{c}B\in D\cap \mathcal {B} \\ B=bd(X_i,X)\end{array}} B \cup OPT(G_{X_i} - bd(X_i,X))$, $S_3(D) = \displaystyle \bigcup \limits _{\begin{array}{c}B\in \mathcal {B}- D \\ B=bd(X_i,X)\end{array}} OPT(G_{X_i})$, and $Sol(D) = S_1(D)\cup S_2(D) \cup S_3(D)$.", "Note that, by definition, $S_i(D) \cap S_j(D) = \emptyset , 1 \le i < j \le 3$.", "We have the following lemma.", "Lemma 7 Let $D$ be a vertex cover of $\mathcal {H}$.", "Then $Sol(D)$ is an $X$-CVD set of $G$.", "Suppose for the sake of contradiction that $Sol(D)$ is not an $X$-CVD set of $G$.", "First assume $Sol(D)$ is not a CVD set of $G$.", "Then there exists an induced path $P=uvw$ in $G-Sol(D)$.", "Consider the following cases.", "$X\cap \lbrace u,v,w\rbrace =\emptyset $.", "Then there must exist a child $Y$ of $X$ such that $u,v,w$ are vertices of $G_{Y}$.", "If $B=bd(Y,X)\in D$, then by the definitions of $S_2(D)$ and $Sol(D)$, $Sol(D)$ contains $B\cup OPT(G_Y-B)$.", "But then $B\cup OPT(G_Y-B)$ is not a CVD set of $G_Y$, a contradiction.", "If $B\notin D$, then by the definitions of $S_3(D)$ and $Sol(D)$, $Sol(D)$ contains $OPT(G_Y)$.", "But then $OPT(G_Y)$ is not a CVD set of $G_Y$, also a contradiction.", "Otherwise, there exist two adjacent vertices $z_1,z_2$ such that $\lbrace z_1,z_2\rbrace \subset \lbrace u,v,w\rbrace $, $z_1\in X$ and $z_2\in Y$, where $Y$ is a child of $X$.", "Observe that $z_2\in B=bd(Y,X)$ and therefore $z_1$ is adjacent to $B$ in $\mathcal {H}$.", "Since $\lbrace z_1,z_2\rbrace \cap Sol(D) = \emptyset $, $\mathcal {H}-D$ contains the edge $z_1 B$, contradicting the fact that $D$ is a vertex cover of $\mathcal {H}$.", "Now assume that $Sol(D)$ is a CVD set but not an $X$-CVD set.", "Then there must exist a cluster $C$ in $G-Sol(D)$ that contains an $(X,Y)$-edge $uv$ where $u\in X$ and $v\in bd(Y,X)$.", "Therefore $u\notin D$ and $B=bd(Y,X)\notin D$.", "Then $\mathcal {H}-D$ contains the edge $u B$, contradicting the fact that $D$ is a vertex cover of $\mathcal {H}$.",
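"Lemmas 6 and 7 suggest the following two-step computation, sketched here in Python (ours): a minimum weighted vertex cover of the bipartite graph $\mathcal {H}$ via the standard max-flow reduction, followed by the assembly of $Sol(D)$; the tables opt_set_full and opt_set_minus holding the actual optimal sets are hypothetical placeholders.",
\begin{verbatim}
# Sketch under the same hypothetical data layout as above.
import networkx as nx

def min_weight_vertex_cover(X, borders, edges, weight):
    F = nx.DiGraph()
    for u in X:
        F.add_edge("s", ("L", u), capacity=weight[u])
    for B in borders:
        F.add_edge(("R", B), "t", capacity=weight[B])
    for u, B in edges:
        F.add_edge(("L", u), ("R", B))      # no capacity attribute = infinite
    _, (source_side, _) = nx.minimum_cut(F, "s", "t")
    cover_X = {u for u in X if ("L", u) not in source_side}    # edge s->u cut
    cover_B = {B for B in borders if ("R", B) in source_side}  # edge B->t cut
    return cover_X, cover_B

def solution_from_cover(cover_X, cover_B, borders, opt_set_full, opt_set_minus):
    sol = set(cover_X)                      # S1(D) = D intersect X
    for B in borders:
        if B in cover_B:
            sol |= B | opt_set_minus[B]     # S2(D): B plus OPT(G_{X_i} - B)
        else:
            sol |= opt_set_full[B]          # S3(D): OPT(G_{X_i})
    return sol                              # Sol(D), an X-CVD set by Lemma 7
\end{verbatim}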
"A minimum weighted vertex cover $D$ of $\mathcal {H}$ is called minimal if no proper subset of $D$ is a vertex cover of $\mathcal {H}$.", "The restriction of minimality is to avoid the inclusion of redundant vertices of weight 0 in the minimum vertex cover.", "Observation 2 Let $D$ be a minimal minimum weighted vertex cover of $\mathcal {H}$.", "For any $i\in [t]$, either $bd(X,X_i)\subseteq D$ or $bd(X_i,X) \in D$, but not both.", "First assume $bd(X,X_i)\not\subseteq D$ and $B=bd(X_i,X)\notin D$.", "Observe that the neighbourhood of $B$ in $\mathcal {H}$ is $bd(X,X_i)$.", "Since $bd(X,X_i)\not\subseteq D$, there must exist a vertex $u\in (bd(X,X_i)-D) \subseteq X-D$.", "Then it follows that $uB$ is an edge of $\mathcal {H}-D$.", "This contradicts the fact that $D$ is a vertex cover of $\mathcal {H}$.", "Now assume that both $bd(X,X_i)\subseteq D$ and $B=bd(X_i,X)\in D$.", "Since $\lbrace x\colon xB \in E(\mathcal {H})\rbrace = bd(X,X_i)$, the set $D-\lbrace B\rbrace $ is also a vertex cover of $\mathcal {H}$, a contradiction.", "From now on $D$ denotes a minimal minimum weighted vertex cover of $\mathcal {H}$ and $Z$ denotes a fixed but arbitrary $X$-CVD set of $G$.", "Our goal is to show that $\left|Sol(D)\right|\le \left|Z\right|$.", "We need some more notation and observations.", "First we define four sets $I_1,I_2,I_3,I_4$ as follows.", "(Recall that $X_1,X_2,\ldots , X_t$ are the children of the root $X$ of the partition tree $\mathcal {T}$ of $G$.)", "$I_1 = \lbrace i\in [t]\colon bd(X,X_i) \subseteq Sol(D) \text{ and } bd(X, X_i) \subseteq Z \rbrace $, $I_2 = \lbrace i\in [t]\colon bd(X,X_i) \subseteq Sol(D) \text{ and } bd(X, X_i) \not\subseteq Z \rbrace $, $I_3 = \lbrace i\in [t] - (I_1 \cup I_2)\colon bd(X_i,X) \subseteq Sol(D) \text{ and } bd(X_i,X) \subseteq Z \rbrace $, $I_4 = \lbrace i\in [t] - (I_1 \cup I_2)\colon bd(X_i,X) \subseteq Sol(D) \text{ and } bd(X_i,X) \not\subseteq Z \rbrace $.", "Note that $I_1 \cup I_2 \cup I_3 \cup I_4 =[t]$ and $(I_1 \cup I_2) \cap (I_3 \cup I_4) =\emptyset $.", "We have the following observations on the sets $I_i, 1 \le i \le 4$.", "Observation 3 The sets $I_1,I_2,I_3,I_4$ form a partition of $[t]$.", "From the definition of $I_i, 1 \le i \le 4$, it is clear that $I_i \cap I_j = \emptyset , i \ne j$.", "Assume that there exists an $i \in [t]$ such that $ i \notin I_1\cup I_2$.", "Hence, by the definitions of $I_1$ and $I_2$, $bd(X,X_i) \not\subseteq Sol(D) \cap X = D \cap X$.", "Then by Observation REF , $bd(X_i,X) \in D$ and, by the definition of $S_2(D)$, the set of vertices $bd(X_i,X) \subseteq Sol(D)$.", "Therefore each $i \in [t] - (I_1\cup I_2)$ belongs either to the set $I_3$ or to $I_4$.",
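"The following small Python sketch (ours) records the bookkeeping behind the index sets $I_1,\ldots ,I_4$, given $Sol(D)$ and an arbitrary $X$-CVD set $Z$; bd_out[i] stands for $bd(X,X_i)$ and bd_in[i] for $bd(X_i,X)$, both hypothetical inputs.",
\begin{verbatim}
def index_partition(t, bd_out, bd_in, sol_D, Z):
    I1, I2, I3, I4 = set(), set(), set(), set()
    for i in range(t):
        if bd_out[i] <= sol_D:                 # i lies in I1 or I2
            (I1 if bd_out[i] <= Z else I2).add(i)
        else:                                  # then bd(X_i, X) is in the cover
            (I3 if bd_in[i] <= Z else I4).add(i)
    return I1, I2, I3, I4                      # a partition of [t] (Observation 3)
\end{verbatim}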
_{i \\in I_1 \\cup I_2} bd(X,X_i)$ , suppose for the sake of contradiction that there exists a vertex $v \\in S_1(D) - \\bigcup \\limits _{i \\in I_1 \\cup I_2} bd(X,X_i)$ .", "Let $J = \\lbrace j : v \\in bd(X,X_j)\\rbrace $ .", "Since $J \\cap (I_1 \\cup I_2 ) =\\emptyset $ , by definition of $I_1$ and $I_2$ , for each $j \\in J, bd(X,X_j) \\lnot \\subseteq Sol(D) \\cap X = D \\cap X$ .", "Hence by Observation REF , $bd(X_j, X) \\in D, \\forall $ .", "Therefore, $D-\\lbrace v\\rbrace $ is also a vertex cover of $\\mathcal {H}$ , contradicting the minimality of $D$ .", "By Observation REF , $I_3 \\cup I_4 = [t] - I_1 \\cup I_2$ .", "Moreover, by the definition of $I_1$ and $I_2$ , for each $i \\in [t] - I_1 \\cup I_2$ the set $bd(X, X_i) \\lnot \\subseteq Sol(D) \\cap X = D \\cap X$ .", "Hence by Observation REF , we have $bd(X_i, X) \\in D$ for each $i \\in I_3 \\cup I_4$ and $bd(X_i, X) \\notin D, i \\in I_1 \\cup I_2$ .", "Thus it follows from Observation REF and the definition of $S_2$ and $S_3$ that $\\bigcup \\limits _{i \\in I_3 \\cup I_4} bd(X_i,X) \\cup OPT(G_{X_i} - bd(X_i,X)) = S_2(D)$ and $\\bigcup \\limits _{i \\in I_1 \\cup I_2} OPT(G_{X_i}) = S_3(D)$ .", "Based on the set $I_1$ , we construct two sets $D_1$ and $Z_1$ from $Sol(D)$ and $Z$ , respectively, which are defined as follows.", "$D_1 &=& \\bigcup \\limits _{i \\in I_1} bd(X,X_i) \\cup (Sol(D)\\cap G_{X_i})\\\\Z_1 &=& \\bigcup \\limits _{i \\in I_1} bd(X,X_i) \\cup (Z \\cap G_{X_i})$ Observation 5 $ \\left|D_1 \\right| \\le \\left|Z_1 \\right| $ .", "From the definition of $Sol(D)$ and equation REF , for $i \\in I_1$ , we infer that $bd(X,X_i) \\subseteq D$ .", "Hence, by Observation REF , $bd(X_i,X) \\notin D, i \\in I_1$ and from equation , $Sol(D) \\cap G_{X_i} = OPT(G_{X_i})$ .", "Since for each $i, j \\in I_1$ , $G_{X_i} \\cap G_{X_j} = \\emptyset $ and $|Z \\cap G_{X_i}| \\ge |OPT(G_{X_i})|$ , by the definitions of $D_1$ and $Z_1$ we have $|D_1| \\le |Z_1|$ .", "Based on the set $I_2$ , we construct the following two sets $D_2 \\subseteq Sol(D)$ and $Z_2 \\subseteq Z$ .", "$D_2 &=& \\bigcup \\limits _{i \\in I_2} bd(X,X_i) \\cup (Sol(D)\\cap G_{X_i}) - \\bigcup \\limits _{i \\in I_1} bd(X,X_i)\\\\Z_2 &=& \\bigcup \\limits _{i \\in I_2} bd(X_i,X) \\cup (Z \\cap (G_{X_i} - bd(X_i,X)))$ By the definition of the set $I_2$ , the set of vertices $bd(X,X_i) \\lnot \\subseteq Z, i \\in I_2$ .", "By Lemma REF , recall that there exits a vertex cover, $Cov(Z)$ of $\\mathcal {H}$ corresponding to every $X$ -CVD-set $Z$ .", "Since $bd(X,X_i) \\lnot \\subseteq Z$ and thus $bd(X,X_i) \\lnot \\subseteq Cov(Z)$ , it is implicit in Observation REF that $bd(X_i,X) \\in Cov(Z)$ .", "Hence $bd(X_i,X) \\subseteq Z$ and the set $Z_2 \\subseteq Z$ .", "Observation 6 $|D_2| \\le |Z_2|$ .", "By arguments similar to that in the proof of Observation REF , for $i \\in I_2, (Sol(D)\\cap G_{X_i}) = OPT(G_{X_i})$ .", "Hence, $ D_2 = \\bigcup \\limits _{i \\in I_2} bd(X,X_i) \\cup OPT(G_{X_i}) - \\bigcup \\limits _{i \\in I_1} bd(X,X_i)$ .", "Suppose for contradiction that $|D_2| > |Z_2|$ .", "Then by the definitions of $D_2$ and $Z_2$ we have $\\left|\\bigcup \\limits _{i \\in I_2}bd(X,X_i) \\cup OPT(G_{X_i}) - \\bigcup \\limits _{i \\in I_1} bd(X,X_i)\\right| > \\left|\\bigcup \\limits _{i \\in I_2}bd(X_i,X) \\cup (Z \\cap (G_{X_i} - bd(X_i,X))) \\right|$ Since $X \\cap G_{X_i} = \\emptyset , 1 \\le i \\le t$ and $\\left|Z \\cap (G_{X_i} - bd(X_i,X)) \\right| \\ge OPT(G_{X_i} - bd(X_i,X))$ , we can rewrite the above inequality as follows.", 
"$\\left|\\bigcup \\limits _{i \\in I_2}bd(X,X_i) - \\bigcup \\limits _{i \\in I_1} bd(X,X_i) \\right| > \\left|\\bigcup \\limits _{i \\in I_2}bd(X_i,X) \\right| + \\\\ \\left|\\bigcup \\limits _{i \\in I_2}OPT(G_{X_i} - bd(X_i,X)) \\right| - \\left|\\bigcup \\limits _{i \\in I_2}OPT(G_{X_i}) \\right|$ That is, $\\left|\\bigcup \\limits _{i \\in I_2}bd(X,X_i) - \\bigcup \\limits _{i \\in I_1} bd(X,X_i) \\right| > \\sum \\limits _{i \\in I_2}(\\left|bd(X_i,X)\\right| + \\left|OPT(G_{X_i} - bd(X_i,X) \\right| - \\left|OPT(G_{X_i} ) \\right|)$ By equation REF , $\\left|bd(X_i,X) \\right| + \\left|OPT(G_{X_i} - bd(X_i,X)) \\right| - \\left|OPT(G_{X_i} ) \\right| = w(bd(X_i,X)) $ and hence, $\\left|\\bigcup \\limits _{i \\in I_2}bd(X,X_i) - \\bigcup \\limits _{i \\in I_1} bd(X,X_i) \\right| > \\sum \\limits _{i \\in I_2}w(bd(X_i,X))$ Recall that $D$ is a minimal minimum weighted vertex cover of $\\mathcal {H}$ .", "By Observation REF we have $\\bigcup \\limits _{i \\in I_2}bd(X,X_i) \\subseteq D$ and hence for each $i \\in I_2$ , the vertex $B=bd(X_i,X) \\notin D$ by Observation REF .", "Now we show that if we delete the vertices in $\\bigcup \\limits _{i \\in I_2}bd(X,X_i) - \\bigcup \\limits _{i \\in I_1} bd(X,X_i)$ from $D$ and add the set of vertices $\\left\\lbrace bd(X_i,X) \\colon i \\in I_2\\right\\rbrace $ then we get a vertex cover of smaller weight for $\\mathcal {H}$ by inequality (REF ), a contradiction.", "Claim 1 Let $D_1$ be a set of vertices obtained from $D$ by deleting the vertices in $\\bigcup \\limits _{i \\in I_2}bd(X,X_i) - \\bigcup \\limits _{i \\in I_1} bd(X,X_i)$ and by adding the set of vertices $\\left\\lbrace bd(X_i,X) \\colon i \\in I_2\\right\\rbrace $ .", "Then, $D_1$ is a vertex cover of $\\mathcal {H}$ .", "[Proof of claim] Assume that there exists an edge $uB \\in E(\\mathcal {H} - D_1)$ where $B = bd(X_j,X), j \\in [t]$ .", "Since $bd(X_j,X) \\notin D_1$ , by the definition of $D_1$ (given above ) observe that $bd(X_j,X) \\notin D$ and $j \\notin I_2$ .", "Note that the neighbourhood of $bd(X_j,X)$ in $\\mathcal {H}$ is $bd(X,X_j)$ and hence $u \\in bd(X,X_j)$ .", "Since $D$ is a vertex cover of $\\mathcal {H}$ , we have $bd(X,X_j) \\subseteq D$ .", "Now we show that $j \\notin I_1$ : By definition of $D_1$ we have $\\bigcup \\limits _{i \\in I_1} bd(X,X_i) \\cap D \\subseteq D_1$ .", "Since $u \\in bd(X,X_j)$ and $bd(X,X_j) \\subseteq D$ , if $j \\in I_1$ then the vertex $u$ remains in $D_1$ .", "Thus no such edge $uB$ exists in $\\mathcal {H} - D_1$ .", "Therefore, we infer that $j \\notin I_1$ .", "Since $j \\notin I_1 \\cup I_2$ , from Observation REF we have $bd(X,X_j) \\lnot \\subseteq D \\cap X$ .", "Hence there exists a vertex $w \\in bd(X,X_j)$ such that $w \\in \\mathcal {H} - D$ .", "Moreover, by the definition of partition tree $\\mathcal {T}$ and $bd(X,X_j)$ the edge $wB \\in E(\\mathcal {H} - D)$ .", "This contradicts the assumption that $D$ is a vertex cover of $\\mathcal {H}$ .", "This completes the proof of the observation.", "Based on the set $I_3$ , we construct the following two sets $D_3 \\subseteq Sol(D)$ and $Z_3 \\subseteq Z$ .", "$D_3 & = & \\bigcup \\limits _{i \\in I_3} bd(X_i,X) \\cup OPT(G_{X_i} - bd(X_i,X)) \\\\Z_3 & = & \\bigcup \\limits _{i \\in I_3}bd(X_i,X) \\cup (Z \\cap (G_{X_i} - bd(X_i,X)))$ Observation 7 $\\left|D_3 \\right| \\le \\left|Z_3 \\right|$ .", "Since $\\left|Z \\cap (G_{X_i}- bd(X_i,X)) \\right| \\ge OPT(G_{X_i} - bd(X_i,X))$ , by the definitions of $D_3$ and $Z_3$ we have $\\left|D_3 \\right| \\le \\left|Z_3 
"Based on the set $I_4$, we construct the following two sets $D_4 \subseteq Sol(D)$ and $Z_4 \subseteq Z$.", "$D_4 = \bigcup \limits _{i \in I_4} bd(X_i,X) \cup OPT(G_{X_i} - bd(X_i,X))$ and $Z_4 = \bigcup \limits _{i \in I_4} bd(X,X_i) \cup (Z \cap G_{X_i}) - \bigcup \limits _{i \in I_1} bd(X,X_i)$.", "By the definition of the set $I_4$, the set of vertices $bd(X_i,X) \not\subseteq Z$ for $i \in I_4$.", "Recall from Lemma REF that there exists a vertex cover $Cov(Z)$ of $\mathcal {H}$ corresponding to every $X$-CVD set $Z$.", "Since $bd(X_i,X) \not\subseteq Z$ for $i \in I_4$, by the definition of $Cov(Z)$ we have $bd(X_i,X) \notin Cov(Z)$ and hence, arguing as in the proof of Observation REF , $bd(X,X_i) \subseteq Cov(Z)$.", "Hence $bd(X,X_i) \subseteq Z$ and the set $Z_4 \subseteq Z$.", "Observation 8 $\left|D_4 \right| \le \left|Z_4 \right|$.", "Suppose for contradiction that $\left|D_4 \right| > \left|Z_4 \right|$.", "Then by the definitions of $D_4$ and $Z_4$ we have $\left|\bigcup \limits _{i \in I_4}\left(bd(X_i,X) \cup OPT(G_{X_i}- bd(X_i,X))\right) \right| > \left|\bigcup \limits _{i \in I_4}\left(bd(X,X_i) \cup (Z \cap G_{X_i})\right) - \bigcup \limits _{i \in I_1} bd(X,X_i) \right|$.", "Since $G_{X_i} \cap G_{X_j} = \emptyset $ for distinct $i, j \in I_4$ and $|Z \cap G_{X_i}| \ge |OPT(G_{X_i})|$, we have $\left|\bigcup \limits _{i \in I_4}bd(X_i,X) \right| + \left|\bigcup \limits _{i \in I_4}OPT(G_{X_i}- bd(X_i,X)) \right| - \left|\bigcup \limits _{i \in I_4}OPT(G_{X_i}) \right| > \left|\bigcup \limits _{i \in I_4}bd(X,X_i) - \bigcup \limits _{i \in I_1} bd(X,X_i) \right|$.", "Note that by Equation REF , $\left|bd(X_i,X) \right| + \left|OPT(G_{X_i} - bd(X_i,X)) \right| - \left|OPT(G_{X_i} ) \right| = w(bd(X_i,X)) $ and hence, $\sum \limits _{i \in I_4} w(bd(X_i,X)) > \left|\bigcup \limits _{i \in I_4}bd(X,X_i) - \bigcup \limits _{i \in I_1} bd(X,X_i) \right|$.", "Recall that $D$ is a minimal minimum weighted vertex cover of $\mathcal {H}$.", "Observe that by the definitions of $I_1$ and $Sol(D)$, the set $\bigcup \limits _{i \in I_1} bd(X,X_i) \subseteq D$.", "Now, if we delete the vertices in $\left\lbrace bd(X_i,X)\colon i \in I_4 \right\rbrace $ from $D$ and add the set of vertices $\bigcup \limits _{i \in I_4}bd(X,X_i) - \bigcup \limits _{i \in I_1}bd(X,X_i)$, then we get a vertex cover of smaller weight for $\mathcal {H}$ by inequality (REF ), a contradiction: after the addition, the entire neighbourhood $bd(X,X_i)$ of each deleted vertex $bd(X_i,X)$ is contained in the resulting set, so it is again a vertex cover of $\mathcal {H}$.",
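"For later reference, the bookkeeping established so far can be restated compactly (this display is our summary, not a new claim): together with the disjointness proved in Lemma 8 below, Observations 5-8 yield the optimality of $Sol(D)$.",
\begin{align*}
|D_i| &\le |Z_i| \quad \text{for each } i\in \lbrace 1,2,3,4\rbrace \quad \text{(Observations 5--8)},\\
|Sol(D)| &= \sum _{i=1}^{4}|D_i| \le \sum _{i=1}^{4}|Z_i| = \Big |\bigcup _{i=1}^{4} Z_i\Big | \le |Z|.
\end{align*}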
D_4 = \\bigcup \\limits _{i \\in I_3 \\cup I_4} bd(X_i,X) \\cup OPT(G_{X_i}- bd(X_i,X))$ .", "Hence by Observation REF , $D_3 \\cup D_4 = S_2(D)$ .", "Therefore, the definition of $Sol(D)$ (Equation ) implies $Sol(D) = \\bigsqcup \\limits _{i=1}^{4} D_i$ .", "Proof of Theorem  REF Using Lemma REF , we have that $\\left|Sol(D)\\right|\\le |Z_1\\cup Z_2\\cup Z_3\\cup Z_4|\\le |Z|$ .", "Hence, $Sol(D)$ is a minimum $X$ -CVD set of $G$ .", "Furthermore, $\\mathcal {H}$ has at most $|V(G)|$ vertices and $|E(G)|$ edges.", "Therefore minimum weighted vertex cover of $\\mathcal {H}$ can be found in $O(|V(G)|\\cdot |E(G)|)$ -time and $Sol(D)$ can be computed in total of $O(|V(G)|\\cdot |E(G)|)$ -time.", "Below we give a short pseudocode of our algorithm to find a minimum $X$ -CVD set of $G$ .", "[H] computeCompute_sCD(G,a,A) KwInInput KwOutOutput A well-partitioned chordal graph $G$ , a partition tree $\\mathcal {T}$ of $G$ rooted at the node $X$ , for each node $Y\\in \\mathcal {T}-\\lbrace X\\rbrace $ both $OPT(G_Y)$ and $OPT(G_Y-bd(Y,P(Y)))$ are given as part of input A minimum $X$ -CVD set Construct a weighted bipartite graph $\\mathcal {H}$ as described in Equations REF and Equations REF ; Find a minimum weighted vertex cover $D$ of $\\mathcal {H}$ ; Construct the sets $S_1(D), S_2(D), S_3(D)$ and $Sol(D)$ as described in Equations REF , ,  and , respectively; $Sol(D)$ Pseudocode to find a minimum $X$ -CVD set of a well-partitioned chordal graph" ], [ "Finding minimum $(X,Y)$ -CVD set of well-partitioned chordal graphs", "In this section, we prove the following theorem.", "Theorem 9 Let $G$ be a well-partitioned graph; $\\mathcal {T}$ be a partition tree of $G$ rooted at $X$ ; $Y$ be a child of $X$ .", "Moreover, for each $Z\\in V(\\mathcal {T})-\\lbrace X\\rbrace $ , assume both $OPT(G_Z)$ and $OPT(G_{Z}-bd(Z,P(Z)))$ are given $(P(Z)$ denotes the parent of $Z$ in $\\mathcal {T})$ .", "Then a minimum $(X,Y)$ -CVD set of $G$ can be computed in $O\\left(|E(G)|^2.|V(G)|\\right)$ time.", "For the remainder of this section, the meaning of $G$ , $\\mathcal {T}$ , $X$ and $Y$ will be as given in Theorem REF .", "For an $(X,Y)$ -edge $e$ , we say that a minimum $(X,Y)$ -CVD set $A$ “preserves\" the edge $e$ if $G-A$ contains the edge $e$ .", "Let $e \\in E(X,Y)$ be an $(X,Y)$ -edges of $G$ .", "Then to prove Theorem REF , we use Theorem REF .", "First we show how to construct a minimum $(X,Y)$ -CVD set $S_e$ that preserves the edge $e \\in E(X,Y)$ and prove Theorem REF .", "Clearly, a minimum $(X,Y)$ -CVD set $S$ of $G$ is the one that satisfies $|S|=\\min \\limits _{e \\in E(X,Y)}|S_e|$ .", "Therefore, Theorem REF will follow directly from Theorem REF .", "The remainder of this section is devoted to prove Theorem REF .", "Theorem 10 Assuming the same conditions as in Theorem REF , for $e \\in E(X,Y)$ , a minimum $(X,Y)$ -CVD set of $G$ that preserves $e$ can be computed in $O\\left(|E(G)|.|V(G)|\\right)$ time.", "First, we need the following observation about the partition trees of well-partitioned chordal graphs, which is easy to verify.", "Observation 9 Let $G$ be a well-partitioned graph with a partition tree $\\mathcal {T}$ .", "Let $X,Y$ be two adjacent nodes of $\\mathcal {T}$ such that $X\\cup Y$ induces a complete subgraph in $G$ and $\\mathcal {T}^{\\prime }$ be the tree obtained by contracting the edge $XY$ in $\\mathcal {T}$ .", "Now associate the newly created node with the subset of vertices $(X\\cup Y)$ and retain all the other nodes of $\\mathcal {T}^{\\prime }$ and their associated 
"Now we begin building the machinery to describe our algorithm for finding a minimum $(X,Y)$-CVD set of $G$ that preserves an $(X,Y)$-edge $ab$.", "Observe that any $(X,Y)$-CVD set that preserves the edge $ab$ must contain the set $\left(N(a)~\Delta ~N(b)\right)$ as a subset.", "(Otherwise, the connected component of $G-S$ containing $ab$ would not be a cluster, a contradiction.)", "Let $H$ denote the graph $G-\left(N(a)~\Delta ~N(b)\right)$.", "Now consider the partition $\mathcal {Q}$ defined as $\lbrace Z-\left(N(a)~\Delta ~N(b)\right)\colon Z \in V(\mathcal {T})\rbrace $.", "Now construct a graph $\mathcal {F}$ whose vertex set is $\mathcal {Q}$ and in which two vertices $Z_1,Z_2$ are adjacent if there is an edge $uv \in E(H)$ such that $u\in Z_1$ and $v\in Z_2$.", "Observe that $\mathcal {F}$ is a forest.", "Now we have the following observation that relates the connected components of $H$ with those of $\mathcal {F}$.", "Observation 10 There is a bijection $f$ between the connected components of $H$ and the connected components of $\mathcal {F}$, such that for a component $C$ of $H$, $f(C)$ is the partition tree of $C$.", "Moreover, the vertices of the root node of $f(C)$ form a subset of a node in $\mathcal {T}$.", "Recall that $\mathcal {Q}$ is a partition of $V(H)$ and the graph $\mathcal {F}$ is a forest.", "Let $A$ be a connected component of $H$.", "We have the following cases.", "There is a vertex $u\in A$ and a vertex $v\in bd(X,Y)$ such that $uv\in E(G)$.", "Then observe that $A$ contains both vertices $a$ and $b$.", "Observe that there is a set $Z=bd(X,Y)$ in $\mathcal {Q}$.", "Hence, $Z$ is a vertex of $\mathcal {F}$.", "Now define $f(A)$ to be the subgraph of $\mathcal {F}$ that contains $Z$.", "Clearly, $f(A)$ is a partition tree of $A$ and the root node of $f(A)$ is $bd(X,Y)$, which is a subset of $X$, the root node of $\mathcal {T}$.", "There is a vertex $u\in A$ and a vertex $v\in bd(Y,X)$ such that $uv\in E(G)$.", "In this case, $A$ contains both vertices $a$ and $b$.", "Hence, $f(A)$ can be defined as in Case REF .", "Consider the case when every edge $e=uv$ with $u\in X$ and $v\in A$ satisfies $u\in X-bd(X,Y)$.", "In this case, observe that $v$ must lie in some child $Z$ of $X$.", "Moreover, there is a set $Z$ in $\mathcal {Q}$.", "Hence, $Z$ is a vertex of $\mathcal {F}$.", "Now define $f(A)$ to be the subgraph of $\mathcal {F}$ that contains $Z$.", "Clearly, $f(A)$ is a partition tree of $A$ and the root node of $f(A)$ is $Z$, which is a node of $\mathcal {T}$.", "Consider the case when every edge $e=uv$ with $u\in Y$ and $v\in A$ satisfies $u\in Y-bd(Y,X)$.", "In this case, observe that $v$ must lie in some child $Z$ of $Y$.", "Moreover, there is a set $Z$ in $\mathcal {Q}$.", "Hence, $Z$ is a vertex of $\mathcal {F}$.", "Now define $f(A)$ to be the subgraph of $\mathcal {F}$ that contains $Z$.", "Clearly, $f(A)$ is a partition tree of $A$ and the root node of $f(A)$ is $Z$, which is a node of $\mathcal {T}$.", "This completes the proof.", "Consider the connected component $H^*$ of $H$ which contains $a$ and $b$, and let $\mathcal {F}^{\prime }=f(H^*)$ where $f$ is the function given by Observation REF .", "Observe that the root $R^{\prime }$ of $\mathcal {F}^{\prime }$ is actually $bd(X,Y)$.", "Moreover, $R^{\prime }$ has a child $R^{\prime \prime }$ which is actually
$bd(Y,X)$.", "Observe that $R^{\prime }\cup R^{\prime \prime }$ induces a complete subgraph in $H^*$.", "Hence, due to Observation 9, the tree $\mathcal {F}^*$ obtained by contracting the edge $R^{\prime }R^{\prime \prime }$ is a partition tree of $H^*$.", "Moreover, $R^*=R^{\prime }\cup R^{\prime \prime } = bd(X,Y) \cup bd(Y,X)$ is the root node of $\mathcal {F}^*$.", "Recall that our objective is to find a minimum $(X,Y)$-CVD set that preserves the edge $ab$.", "We have the following lemma.", "Lemma 11 Let $H^*,H_1,H_2,\ldots ,H_{k^{\prime }}$ be the connected components of $H$.", "Let $S^*$ be a minimum $(R^*)$-CVD set of $H^*$, $S_0=\left(N(a)~\Delta ~N(b)\right)$, and for each $j\in [k^{\prime }]$, let $S_j$ denote a minimum CVD set of $H_j$.", "Then $(S_0 \cup S_1 \cup S_2\cup \ldots \cup S_{k^{\prime }} \cup S^*)$ is a minimum $(X,Y)$-CVD set of $G$ that preserves the edge $ab$.", "Observe that any vertex which is adjacent to $a$ or $b$ lies in $R^*$.", "Since $S^*$ is a minimum $(R^*)$-CVD set, $ S^* \cap \lbrace a,b\rbrace =\emptyset $ and therefore $H^*-S^*$ has a cluster that contains the edge $ab$.", "Hence $S_0 \cup S_1\cup S_2 \cup \ldots \cup S_{k^{\prime }} \cup S^*$ is an $(X,Y)$-CVD set that preserves the edge $ab$.", "Let $Z$ be any $(X,Y)$-CVD set of $G$ that preserves the edge $ab$.", "For any vertex $u\in S_0$, observe that $a,b,u$ induce a path on three vertices.", "Hence, $S_0 \subseteq Z$.", "Let $C$ be a connected component of $G-S_0$.", "Observe that $Z\cap C$ must be a CVD set of $C$.", "Therefore, for each $i\in [k^{\prime }]$, $|Z\cap H_i|\ge |S_i|$ by the minimality of $S_i$.", "Since $Z$ is an $(X,Y)$-CVD set of $G$ that preserves the edge $ab$, $\lbrace a,b\rbrace \cap Z=\emptyset $.", "Since $a,b$ are vertices of $H^*$, $ (Z\cap H^*) \cap \lbrace a,b\rbrace =\emptyset $.", "Now suppose $(Z\cap H^*)$ is not an $(R^*)$-CVD set of $H^*$.", "Then due to Lemma 4, $(Z\cap H^*)$ must be an $(R^*,R)$-CVD set of $H^*$ for some child $R$ of $R^*$ in $\mathcal {F}^*$.", "Hence, there exists an $(R^*,R)$-edge $cd$ which is preserved by $(Z\cap H^*)$.", "Without loss of generality assume $c\in R^*$ and $d\in R$.", "Observe that $d$ is not adjacent to $a$ or $b$.", "Hence, $a,c,d$ induce a path on three vertices in $H^*-(Z\cap H^*)$, a contradiction.", "Hence $|Z\cap H^*|\ge |S^*|$.", "Therefore $|Z| \ge |S_0\cup S_1\cup S_2\cup \ldots \cup S_{k^{\prime }} \cup S^*|$.", "Lemma 11 provides a way to compute a minimum $(X,Y)$-CVD set of $G$ that preserves the edge $ab$.", "Clearly, the set $S_0=\left(N(a)~\Delta ~N(b)\right)$ can be computed in polynomial time.",
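"Lemma 11 translates directly into the following Python assembly sketch (ours); induced_subgraph, components, min_root_cvd (the $(R^*)$-CVD procedure of Theorem 5) and min_cvd (Observation 11 below) are hypothetical helpers.",
\begin{verbatim}
def min_xy_cvd_preserving(G, a, b, nbr):
    # nbr[v] is the closed neighbourhood N[v]; with closed neighbourhoods the
    # symmetric difference avoids a and b themselves, so the edge ab survives
    # (we read the paper's N(a) Delta N(b) with this intent).
    S0 = nbr[a] ^ nbr[b]
    H = induced_subgraph(G, set(G.nodes) - S0)
    sol = set(S0)
    for comp in components(H):
        if a in comp:                    # the component H* containing the edge ab
            sol |= min_root_cvd(comp)    # minimum (R*)-CVD set, via Theorem 5
        else:
            sol |= min_cvd(comp)         # minimum CVD set, via Observation 11
    return sol
\end{verbatim}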
$\\lbrace a,b\\rbrace \\cap bd(P(R),R) = \\emptyset $ .", "This implies no vertex of $R$ is adjacent to $a$ or $b$ .", "Moreover, since $A$ is different from $H^*$ , $bd(P(R),R) \\cap (bd(X,Y)\\cup bd(Y,X))=\\emptyset $ .", "This further implies that, either $bd(P(R),R)\\subseteq N[a]-N[b]$ or $bd(P(R),R)\\subseteq N[b]-N[a]$ .", "In either case, $R \\cap (N[a]\\cup N[b]) = \\emptyset $ .", "This implies $R$ is a node of $\\mathcal {T}$ distinct from $X$ such that $A$ is isomorphic to $G_R$ .", "Hence, due to the assumption given in Theorem REF , $OPT(G_R)$ is known and therefore a minimum CVD set of $A$ is known.", "Consider the case when there is a vertex $z\\in \\lbrace a,b\\rbrace $ such that $z\\in bd(P(R),R)$ .", "Let $z^{\\prime }$ be the vertex among $a$ and $b$ distinct from $z$ .", "Since $A$ is different from $H^*$ , $z^{\\prime }\\notin bd(P(R),R)$ .", "Hence, $bd(R,P(R)) \\subset N(z)$ and therefore $bd(R,P(R)) \\subset \\left(N(a)~\\Delta ~N(b)\\right)$ .", "This implies that $R$ is a node of $\\mathcal {T}$ distinct from $X$ such that $A$ is isomorphic to $G_R - bd(R,P(R))$ .", "Hence, due to the assumption given in Theorem REF , $OPT(G_R - bd(R,P(R)))$ is known and therefore a minimum CVD set of $A$ is known.", "Clearly, distinguishing between the above cases takes $O(|E(G)|)$ time.", "This completes the proof.", "Let $H_1,H_2,\\ldots , H_{k^{\\prime }}$ be the connected components of $H$ , all different from $H^*$ .", "Applying Observation REF repeatedly on each component, it is possible to obtain, for each $j\\in [k^{\\prime }]$ , a minimum CVD set $S_j$ of $H_j$ .", "The following observation provides a way to compute a minimum $(R^*)$ -CVD set of $H^*$ .", "Observation 12 Let $R$ be a child of $R^*$ in $\\mathcal {F}^*$ .", "Then both $OPT(H^*_R)$ and $OPT(H^*_R-bd(R,R^*))$ are known.", "Since no vertex of $R$ is adjacent to $a$ or $b$ in $G$ , there must exist a node $Q\\in \\mathcal {T}$ such that the vertices in the node $Q$ is same as that in $R$ , $\\mathcal {T}_Q=\\mathcal {T}^*_R$ and $G_Q=H^*_R$ .", "Moreover, $bd(R,R^*)=bd(Q,P(Q))$ , where $P(Q)$ is the parent of $Q$ in $\\mathcal {T}$ .", "Hence, due to the assumption given in Theorem REF , $OPT(H^*_R - bd(R,R^*)$ is known.", "Due to Observation REF and Theorem REF , it is possible to compute a minimum $(R^*)$ -CVD set $S^*$ of $H^*$ in $O(|V(G)|\\cdot |E(G)|)$ time.", "Now due to Lemma REF , we have that $(S_0 \\cup S_1 \\cup S_2\\cup \\ldots \\cup S_{k^{\\prime }} \\cup S^*)$ is a minimum $(X,Y)$ -CVD set of $G$ that preserves the edge $ab$ .", "This completes the proof of Theorem REF and therefore of Theorem REF .", "In Algorithm REF , we give a short pseudocode of our algorithm to find a minimum $(X,Y)$ -CVD set of $G$ that preserves an $(X,Y)$ -edge $ab$ .", "Using Algorithm REF , in Algorithm REF we provide a short pseudocode to find a minimum $(X,Y)$ -CVD set of $G$ .", "[H] KwInInput KwOutOutput A well-partitioned chordal graph $G$ , a partition tree $\\mathcal {T}$ of $G$ rooted at the node $X$ , a child node $Y$ , an $(X,Y)$ -edge $ab$ , for each node $Z\\in \\mathcal {T}-\\lbrace X\\rbrace $ both $OPT(G_Z)$ and $OPT(G_Z-bd(Z,P(Z)))$ are given as part of input A minimum $(X,Y)$ -CVD set of $G$ that preserves the edge $ab$ Construct the set $S_0=\\left(N(a)~\\Delta ~N(b)\\right)$ ; Construct the graph $H=G-\\left(N(a)~\\Delta ~N(b)\\right)$ ; Let $H^*$ be the connected component of $H$ containing $a$ and $b$ .", "Let $H_1,H_2,\\ldots ,H_{k^{\\prime }}$ be the remaining connected components of $H$ .", 
"$i= 1 \\text{ to } k^{\\prime }$ Compute a minimum CVD set $S_i$ of $H_i$ (Observation REF ); Find the partition tree of $\\mathcal {T}^*$ of $G^*$ whose root is $X^*=bd(X,Y) \\cup bd(Y,X)$ ; Compute a minimum $(X^*)$ -CVD set $S^*$ of $G^*$ using Algorithm REF ; $Sol=S_0 \\cup S_1\\cup S_2 \\cup \\ldots \\cup S_{k^{\\prime }} \\cup S^*$ ; Sol; Pseudocode to find a minimum $(X,Y)$ -CVD set of a well-partitioned chordal graph that preserves an $(X,Y)$ -edge [H] KwInInput KwOutOutput A well-partitioned chordal graph $G$ , a partition tree $\\mathcal {T}$ of $G$ rooted at the node $X$ , a child node $Y$ , for each node $Z\\in \\mathcal {T}-\\lbrace X\\rbrace $ both $OPT(G_Z)$ and $OPT(G_Z-bd(Z,P(Z)))$ are given as part of input A minimum $(X,Y)$ -CVD set of $G$ .", "For each $(X,Y)$ -edge $e$ , compute a minimum $(X,Y)$ -CVD set that preserves the edge $e$ using Algorithm REF ; Let $S$ be a set among all $S_e$ 's that has the least cardinality; $S$ ; Pseudocode to find $(X,Y)$ -CVD set of a well-partitioned chordal graph." ], [ "Main Algorithm", "From now on $G$ denote a fixed well-partitioned chordal graph with a partition tree $\\mathcal {T}$ whose vertex set is $\\mathcal {P}$ , a partition of $V(G)$ .", "We will process $\\mathcal {T}$ in the post-order fashion and for each node $X$ of $\\mathcal {T}$ , we give a dynamic programming algorithm to compute both $OPT(G_X)$ and $OPT(G_X-bd(X,P(X)))$ where $P(X)$ is the parent of $X$ (when exists) in $\\mathcal {T}$ .", "Due to Observation REF , we can assume that $bd(X,P(X)) \\subsetneq X$ .", "In the remaining section, $X$ is a fixed node of $\\mathcal {T}$ , $A$ has a fixed value (which is either $\\emptyset $ or $bd(X,P(X))$ ), $G^A_X$ denotes the graph $G_X - A$ .", "Since well-partitioned chordal graphs are closed under vertex deletion, $G^A_X$ is a well partitioned chordal graph which may be disconnected.", "Now consider the partition $\\mathcal {P}^A$ defined as $\\lbrace Y-A\\colon Y \\in V(\\mathcal {T}_X)\\rbrace $ .", "Observe that, apart from the set $X$ all other sets of the partitions $\\mathcal {P}$ have remained in $\\mathcal {P}^A$ .", "Now construct a graph $\\mathcal {T}^{\\prime }$ whose vertex set is the partition sets of $\\mathcal {P}^A$ and two vertices $X,Y$ are adjacent in $\\mathcal {T}^{\\prime }$ if there is an edge $uv \\in E(G^A_X)$ such that $u\\in X$ and $v\\in Y$ (since the graph induced by the union of the sets in $\\mathcal {P}^A$ is $G^A_X$ , the definition of $\\mathcal {T}^{\\prime }$ is valid).", "Now we have the following observation whose proof is similar to that of Observation REF .", "Observation 13 There is a bijection $f$ between the connected components of $G^A_X$ and the connected components of $\\mathcal {T}^{\\prime }$ , such that for a component $C$ of $G^A_X$ , $f(C)$ is a partition tree of $C$ , and the root of $f(C)$ is a child of $X$ .", "Since the vertices of $X-A$ induces a clique in $G^A_X$ , there exists at most one component $G^*$ in $G^A_X$ that contains a vertex from $X-A$ .", "Due to Observation REF there exists a unique connected component $f(G^*)=\\mathcal {T}^*$ of $\\mathcal {T}^{\\prime }$ which is a partition tree of $G^*$ .", "Let the remaining connected components of $G^A_X$ be $G_1,G_2,\\ldots , G_k$ and for each $i\\in [k]$ , let $f(G_i)=\\mathcal {T}_i$ and $X_i$ is the root of $\\mathcal {T}_i$ .", "Let $X^*$ denote the root node of $\\mathcal {T}^*$ and $X_1^*, X_2^*,\\ldots ,X_t^*$ be the children of $X^*$ in $\\mathcal {T}^*$ .", "We have the following 
"We have the following observation.", "Observation 14 For each $j\in [t]$, there is a child $Y_j$ of $X$ in $\mathcal {T}$ such that $Y_j=X^*_j$ and $G_{Y_j} = G^*_{X^*_j}$.", "Observe that the root of $\mathcal {T}^*$ is $X^*=X-A$.", "Since $A\subsetneq X$, any child of $X^*$ must be a child of $X$.", "We have the following lemma.", "Lemma 12 $OPT(G^A_X) = \left( \displaystyle \bigsqcup \limits _{i=1}^{k} OPT(G_{X_i}) \right) \sqcup OPT(G^*)$.", "The lemma follows directly from the fact that $G_{X_1},G_{X_2},\ldots ,G_{X_k}$ and $G^*$ are the connected components of $G^A_X$.", "Due to Observation REF , $OPT(G_{X_i})$ is already known.", "Due to Lemma 4, any CVD set $S$ of $G^*$ is either an $(X^*)$-CVD set or there exists a unique child $Y$ of $X^*$ such that $S$ is an $(X^*,Y)$-CVD set of $G^*$.", "By Theorem 5, it is possible to compute a minimum $(X^*)$-CVD set $S_0$ of $G^*$.", "Due to Observation REF , for any node $Y$ of $\mathcal {T}^*$ which is different from $X^*$, both $OPT(G_{Y})$ and $OPT(G_{Y} - bd(Y,P(Y)))$ are known, where $P(Y)$ is the parent of $Y$ in $\mathcal {T}^*$.", "Hence, by Theorem REF , for each child $X^*_i$, $i\in [t]$, computing a minimum $(X^*,X^*_i)$-CVD set $S_i$ is possible in $O(|V(G^*_{X^*_i})|\cdot |E(G^*_{X^*_i})|)$ time.", "Let $S^*\in \lbrace S_0,S_1,S_2,\ldots ,S_t\rbrace $ be a set with the minimum cardinality.", "Due to Lemma 4, $S^*$ is a minimum CVD set of $G^*$, and it can be obtained in $O(m^{2}n)$ time.", "Finally, due to Lemma 12, we have a minimum CVD set of $G^A_X$." ], [ "$O(n(n+m))$ -time algorithm for $s$ -CVD on interval graphs", "In this section we give an $O(n(n+m))$-time algorithm to solve $s$-CVD on an interval graph $G$ with $n$ vertices and $m$ edges.", "For a set $X \subseteq V(G)$, if each connected component of $G-X$ is an $s$-club, then we call $X$ an $s$-club vertex deletion set ($s$-CVD set).", "In the next section we present the main ideas of our algorithm to find a minimum cardinality $s$-CVD set of an interval graph." ], [ "Overview of the algorithm", "At the heart of our algorithm lies a characterisation of the $s$-CVD sets of an interval graph.", "We show (in Lemma REF ) that any $s$-CVD set must be of one of four types, defined in Definitions REF - REF .", "Hence, the problem boils down to computing a minimum $s$-CVD set of each type.", "To do this, we first arrange the maximal cliques in the order of their Helly regions.", "Let $Q_1,Q_2,\ldots ,Q_k$ be this ordering of the cliques.", "Then for each $1\le a\le k$, we find a minimum cardinality $s$-CVD set of the graph $G\left[1,a\right]$, which is the subgraph induced by the vertices in $(Q_1\cup Q_2\cup \ldots \cup Q_a)$.", "Moreover, to facilitate future computations we also find a minimum $s$-CVD set of the graph $G\left[1,a\right] - A$ where $A=Q_a\cap Q_b$ for some $a<b\le k$.", "The key point is that, by solving $s$-CVD on $O(n+m)$ different “induced subgraphs” of $G$ (that is, on $O(n+m)$ different subproblems), it is possible to solve $s$-CVD on $G$.", "Moreover, each subproblem can be solved in $O(n)$ time.", "In Section REF we define four types of $s$-CVD sets and state that any optimal solution must be of one of those four types.", "In Section REF we give a sketch of our algorithm, and we analyse the time complexity in Section REF .",
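"As a concrete starting point, the following Python sketch (ours) computes the ordered maximal cliques of an interval graph from an interval representation with distinct endpoints; it favours clarity over the best possible running time.",
\begin{verbatim}
def ordered_maximal_cliques(intervals):
    # intervals: dict v -> (l, r) with all endpoints distinct.  At the right
    # endpoint r(v) of each interval, the intervals containing that point form
    # a clique, and every maximal clique arises this way at the right end of
    # its Helly region; sorting by r(v) orders the Helly regions on the line.
    cliques = []
    for v, (_, rv) in sorted(intervals.items(), key=lambda kv: kv[1][1]):
        C = frozenset(u for u, (lu, ru) in intervals.items() if lu <= rv <= ru)
        cliques.append(C)
    uniq = list(dict.fromkeys(cliques))          # drop duplicates, keep order
    return [C for C in uniq if not any(C < D for D in uniq)]  # keep maximal
\end{verbatim}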
], [ "Definitions and main lemma", "Let $G$ denotes a connected interval graph with $n$ vertices and $m$ edges.", "The set $\\mathcal {I}$ denotes a fixed interval representation of $G$ where the endpoints of the representing intervals are distinct.", "Let $l(v)$ and $r(v)$ denote the left and right endpoints, respectively, of an interval corresponding to a vertex $v \\in V(G)$ .", "Then the interval assigned to the vertex $v$ in $\\mathcal {I}$ is denoted by $I(v)=\\left[l(v),r(v)\\right]$ .", "Observe that, intervals on a real line satisfies the Helly property and hence for each maximal clique $Q$ of $G$ there is an interval $I = \\displaystyle \\bigcap _{v\\in Q} I(v)$ .", "We call $I$ as the Helly region corresponding to the maximal clique $Q$ .", "Let $Q_1,Q_2,\\ldots ,Q_k$ denote the set of maximal cliques of $G$ ordered with respect to their Helly regions $I_a, 1 \\le a \\le k$ on the real line.", "That is, $I_1 < I_2 <\\ldots < I_k$ .", "Observe that, for any two integers $a,b$ we have $I_a \\cap I_b = \\emptyset $ as both $Q_a$ and $Q_b$ are maximal cliques.", "Moreover, for any $a\\le b \\le c$ if a vertex $v\\in Q_a\\cap Q_c$ , then $v\\in Q_b$ .", "With respect to an ordering of maximal cliques $Q_1,Q_2,\\ldots ,Q_k$ of $G$ , we define the following.", "Definition 6 For integers a,b where $1\\le a < b \\le k$ , let $S_{a}^{b} = Q_a\\cap Q_b$ .", "For an integer $a$ , let $\\mathcal {S}\\left(Q_a\\right)=\\left\\lbrace S_{a}^{b} \\colon a < b \\le k \\text{ and } S_{a}^{b} \\ne S_{a}^{b^{\\prime }}, a < b^{\\prime }< b \\right\\rbrace \\cup \\emptyset $ .", "(Note that, the members of the set $\\mathcal {S}\\left(Q_a\\right)$ are distinct.)", "For $ A \\in \\mathcal {S}\\left(Q_a\\right)$ , let $Y_A^a = (Q_a-Q_{a-1})-A$ .", "For a vertex $v\\in V(G)$ , the index $q^-_{v}=\\min \\lbrace a\\colon v\\in Q_a\\rbrace $ .", "That is, the minimum integer $a$ such that $v$ belongs to the maximal clique $Q_a$ .", "For a vertex $v\\in V(G)$ , the index $ q^+_{v} = \\max \\lbrace a \\colon v\\in Q_a\\rbrace $ .", "That is, the maximum integer $b$ such that $v$ belongs to the maximal clique $Q_b$ .", "We use the following observation to prove our main lemma.", "Observation 15 Let $X\\subseteq V(G)$ and $u,v$ be two vertices with $r(u) < l(v)$ such that $u$ and $v$ lie in different connected components in $G-X$ .", "Then there exists an integer $a$ with $ q^+_{u} \\le a < q^-_{v}$ , such that $S_{a}^{a+1} \\subseteq X$ .", "Let $\\mathcal {C}$ be the set of all connected components of $G-X$ .", "For a connected component $C\\in \\mathcal {C}$ , define $\\hat{r}(C)=\\max \\lbrace r(v)\\colon v\\in C\\rbrace $ and $\\hat{l}(C)=\\min \\lbrace l(v) \\colon v\\in C\\rbrace $ .", "Note that the interval $[\\hat{l}(C),\\hat{r}(C)] = \\bigcup \\limits _{v \\in V(C)}I(v)$ and we call it as the span($C$ ).", "Observe that for two distinct connected components $C, C^{\\prime } \\in \\mathcal {C}$ we have $\\emph {span}(C)\\cap \\emph {span}(C^{\\prime }) = \\emptyset $ .", "Therefore, $\\mathcal {C}$ can be ordered with respect to the order in which the span of components appears on the real line.", "Let $C_1,\\ldots ,C_x$ be this ordering.", "We define gap$(C_i,C_{i+1}) =(\\hat{r}(C_i),\\hat{l}(C_{i+1})), 1 \\le i \\le x-1$ .", "Note that any vertex whose corresponding interval contains a point in gap$(C_i,C_{i+1})$ should be a member of $X$ : otherwise that vertex belongs to another component in between $C_i$ and $C_{i+1}$ (by definition of gap$(C_i,C_{i+1})$ ) which contradicts the ordering of 
"We use the following observation to prove our main lemma.", "Observation 15 Let $X\subseteq V(G)$ and $u,v$ be two vertices with $r(u) < l(v)$ such that $u$ and $v$ lie in different connected components in $G-X$.", "Then there exists an integer $a$ with $ q^+_{u} \le a < q^-_{v}$ such that $S_{a}^{a+1} \subseteq X$.", "Let $\mathcal {C}$ be the set of all connected components of $G-X$.", "For a connected component $C\in \mathcal {C}$, define $\hat{r}(C)=\max \lbrace r(v)\colon v\in C\rbrace $ and $\hat{l}(C)=\min \lbrace l(v) \colon v\in C\rbrace $.", "Note that the interval $[\hat{l}(C),\hat{r}(C)] = \bigcup \limits _{v \in V(C)}I(v)$, and we call it the span($C$ ).", "Observe that for two distinct connected components $C, C^{\prime } \in \mathcal {C}$ we have $\emph {span}(C)\cap \emph {span}(C^{\prime }) = \emptyset $.", "Therefore, $\mathcal {C}$ can be ordered with respect to the order in which the spans of the components appear on the real line.", "Let $C_1,\ldots ,C_x$ be this ordering.", "We define gap$(C_i,C_{i+1}) =(\hat{r}(C_i),\hat{l}(C_{i+1})), 1 \le i \le x-1$.", "Note that any vertex whose corresponding interval contains a point in gap$(C_i,C_{i+1})$ must be a member of $X$: otherwise that vertex would belong to another component lying between $C_i$ and $C_{i+1}$ (by the definition of gap$(C_i,C_{i+1})$ ), which contradicts the ordering of components.", "Let $C^u = C_t$ and $C^v=C_{t^{\prime }}$ denote the connected components of $G-X$ that contain $u$ and $v$, respectively.", "Since $r(u) < l(v)$, we have $t < t^{\prime }$.", "Let $p\in V(G)$ be such that $r(p) = \max \lbrace r(w)\colon w\in V(G), r(w)<\hat{l}(C^v)\rbrace $.", "Now take $a = q^+_{p}$, the maximum index $i$ such that $p \in Q_i, 1 \le i \le k$.", "For the index $a$, we will show that $q^+_{u} \le a < q^-_{v}$ and $S_{a}^{a+1} \subseteq X$.", "(i) $q^+_{u} \le a < q^-_{v}$: It is immediate from the definition of $r(p)$ that $r(u) \le \hat{r}(C^{u}) \le r(p)$ and $r(p) <\hat{l}(C^v) \le l(v)$.", "Since $r(p) <l(v)$, observe that the Helly region of every maximal clique containing the vertex $p$ comes before that of every maximal clique containing $v$ on the real line.", "Moreover, since the maximal cliques are numbered with respect to the order in which their Helly regions appear on the real line, we can infer that $ q^+_{p}=a < q^-_{v}$.", "Similarly, since $r(u) \le r(p)$, by similar arguments we have $q^+_{u} \le a$.", "Therefore we have proved $q^+_{u} \le a < q^-_{v}$.", "(ii) $S_{a}^{a+1} \subseteq X$: Consider the component $C_{t^{\prime }-1}$ which comes immediately to the left of $C^v$ in the ordering of the components in $\mathcal {C}$.", "Since $r(p) <\hat{l}(C^v)$, the Helly region of $Q_a$ ends before $span(C^v)$.", "Observe that $r(p) \ge \hat{r}(C_{t^{\prime }-1})$.", "Moreover, the Helly region of $Q_{a+1}$ starts after that of $Q_{a}$.", "Since $p \notin Q_{a+1}$ by the definition of $a$, it follows that the Helly region of $Q_{a+1}$ lies after $span(C_{t^{\prime }-1})$.", "Therefore, the intervals corresponding to those vertices common to both $Q_a$ and $Q_{a+1}$ contain some points of gap$(C_{t^{\prime }-1},C^v)$.", "This implies $S_{a}^{a+1} \subseteq X$.", "For two integers $a,b$ with $1\le a\le b\le k$, let $G\left[a,b\right]$ denote the subgraph induced by $Q_a \cup Q_{a+1} \cup \ldots \cup Q_b$.", "Definition 7 For an induced subgraph $H$ of $G$, a vertex $v\in V(H)$ and an integer $a$, let $L_{H}\left(a,v\right)$ denote the set of vertices in $H$ that lie at distance $a$ from $v$ in $H$.", "In the remainder of this section, we use the notation $L_{H}\left(s+1,v\right)$, where $H=G\left[1,a\right]- A$ for some integer $a$ and $v\in Y^a_A$ (see Definition REF , (iii)), several times.", "Definition 8 For an integer $a, 1\le a\le k-1$, and a set $A \in \mathcal {S}\left(Q_a\right)$, consider the induced subgraph $H=G\left[1,a\right]- A$ and the sub-interval representation $\mathcal {I}^{\prime }\subseteq \mathcal {I}$ of $H$.", "We define the “frontal component” of the induced graph as the connected component of $G\left[1,a\right]- A$ containing the vertex with the rightmost endpoint in $\mathcal {I}^{\prime }$.", "Note that for an integer $a$ and $A\in \mathcal {S}\left(Q_a\right)$, the vertices of $Y_A^a$, if any, lie in the frontal component of $G\left[1,a\right]- A$.", "Below we categorise an $s$-CVD set $X$ of $G\left[1,a\right]-A$ into four types.", "In the following definitions, we consider an integer $a, 1<a\le k$, and a set $A\in \mathcal {S}\left(Q_a\right)$.", "Definition 9 An $s$-CVD set $X$ of $G\left[1,a\right]-A$ is of “type-1” if $Y_A^a \subseteq X$.", "Definition 10 An $s$-CVD set $X$ of $H=G\left[1,a\right]-A$ is of “type-2” if there is a vertex $v\in Y_A^a$ such that $L_{H}\left(s+1,v\right) \subseteq X$.",
.", "Definition 11 An $s$ -CVD set $X$ of $H=G\\left[1,a\\right]-A$ is of “type-3” if there exists an integer $c, 1\\le c < a$ such that $S_{c}^{c+1}-A \\subseteq X$ and $G\\left[c+1,a\\right]-(S_{c}^{c+1}\\cup A)$ is connected and has diameter at most $s$ .", "Definition 12 An $s$ -CVD set $X$ of $H=G\\left[1,a\\right]-A$ is of “type-4” if there exists an integer $c, 1\\le c < a$ such that $S_{c}^{c+1}-A \\subseteq X$ and $G\\left[c+1,a\\right]-(S_{c}^{c+1}\\cup A)$ is connected and has diameter exactly $s+1$ .", "The following lemma is crucial for our algorithm.", "Lemma 13 (Main Lemma) Consider an integer $1\\le a\\le k$ and a set $A\\in \\mathcal {S}\\left(Q_a\\right)$ .", "Then at least one of the following holds: Every connected component of $G\\left[1,a\\right]-A$ have diameter at most $s$ .", "Any $s$ -CVD set of $G\\left[1,a\\right]-A$ is of some type-$j$ where $j\\in \\lbrace 1,2,3,4\\rbrace $ .", "Assume that the frontal component of $H=G\\left[1,a\\right]-A$ has diameter at least $s+1$ and the set $Y_A^a \\ne \\emptyset $ .", "Otherwise, any $s$ -CVD set $X$ of $H$ is of either type-1 or type-2: Type-1 is obvious when $Y_A^a = \\emptyset $ because $\\emptyset \\subseteq X$ .", "If the diameter of frontal component is at most $s$ then the set $L_{H}\\left(s+1,v\\right) = \\emptyset $ and hence any $s$ -CVD set of $H$ is of type-2.", "Let $H$ has an $s$ -CVD set $X$ that is not of type-$j$ for any $j\\in \\lbrace 1,2\\rbrace $ and $v$ be a vertex in $Y_A^a$ .", "Since $X$ is not of type-2, $H-X$ contains a vertex $u$ such that $u\\in L_{H}\\left(s+1,v\\right)$ .", "Now choose a vertex $u\\in L_{H}\\left(s+1,v\\right)$ such that $q^+_{u} = \\max \\lbrace q^+_{u^{\\prime }} \\colon u^{\\prime }\\in L_{H}\\left(s+1,v\\right)-X \\rbrace $ .", "Let $X^{\\prime }=X \\cup A$ .", "Then observe that $G-X^{\\prime } = H-X$ and hence, $ u \\in G-X^{\\prime }$ .", "Since $X$ is an $s$ -CVD set of $H$ and the distance between $u$ and $v$ in $H$ is $s+1$ , the vertices $u$ and $v$ must lie in different connected components in $G-X^{\\prime }$ .", "Therefore, by Observation REF , there is an integer $b$ such that $S_{b}^{b+1} \\subseteq X^{\\prime }$ and $q^+_{u} \\le b < q^-_{v}$ .", "Let $b$ be the maximum among all $b^{\\prime }$ such that $q^+_{u} \\le b^{\\prime } < q^-_{v}$ and $S_{b^{\\prime }}^{b^{\\prime }+1} \\subseteq X^{\\prime }$ .", "Note that $S_{b}^{b+1} \\subseteq X^{\\prime }$ implies $S_{b}^{b+1} -A \\subseteq X$ .", "To complete the proof we need the following claim.", "Claim Let $Y$ be a subset of $H$ such that $S_{b}^{b+1} \\subseteq Y \\subseteq X$ where $b$ is the maximum among all $b^{\\prime }$ such that $S_{b^{\\prime }}^{b^{\\prime }+1} \\subseteq X$ .", "Then $G\\left[b+1,a\\right]-(Y\\cup A)$ is connected.", "[Proof of Claim:] Suppose $G\\left[b+1,a\\right]-(Y\\cup A)$ is not connected.", "Let $Z=Y\\cup A$ and $C_v$ be the connected component containing a vertex $v \\in Q_a$ (Note that $Y_A^a \\ne \\emptyset )$ ) in $G\\left[b+1,a\\right]-Z$ .", "Since $G\\left[b+1,a\\right]-Z$ is not connected, there exists a vertex $u^{\\prime } \\in G-Z$ such that $u^{\\prime } \\notin C_v$ .", "Let $C_{u^{\\prime }}$ be the connected component containing $u^{\\prime }$ .", "Observe that $q^+_{u^{\\prime }} < q^-_{v} = a$ and $G-Z$ is also not connected.", "Hence by Observation REF , there exists an integer $b^*$ such that $S_{b^*}^{b^*+1} \\subseteq Z$ and $q^+_{u^{\\prime }} \\le b^* < q^-_{v}$ .", "Since $u^{\\prime } \\in G\\left[b+1,a\\right]- Z$ , the index 
$q^+_{u^{\\prime }} > b$ .", "Thus it follows that $b < q^+_{u} \\le b^* < q^-_{v}$ , which contradicts the maximality of the index $b$ .", "Let $H_b = G\\left[b+1,a\\right]-(S_{b}^{b+1}\\cup A)$ .", "Now we show that $H_b$ has diameter at most $s+1$ .", "Otherwise, $H_b$ contains vertices that are at distance greater than $s+1$ from the vertex $v$ .", "Let $Q_{b^{\\prime \\prime }}$ be the highest indexed maximal clique containing a vertex $x$ such that distance between $x$ and $v$ in $H_b$ is exactly $s+2$ .", "Observe that $b^{\\prime \\prime } >b$ .", "Now we show that $S_{b^{\\prime \\prime }}^{b^{\\prime \\prime }+1} \\subseteq X$ which contradicts the maximality of $b$ (See the definition of $b$ defined in the above paragraph.)", "For that, since $S_{b^{\\prime \\prime }}^{b^{\\prime \\prime }+1} \\subseteq Q_{b^{\\prime \\prime }+1}$ , the maximality of $b^{\\prime \\prime }$ implies that the vertices in $S_{b^{\\prime \\prime }}^{b^{\\prime \\prime }+1}$ are at distance $s+1$ from $v$ in $H_b$ .", "Note that by the above claim, the induced subgraphs $H_b$ and $G\\left[b+1,a\\right]-(X \\cup A)$ are connected.", "Moreover, since $X$ is an $s$ -CVD set of $H=G\\left[1,a\\right]-A$ , when $S_{b}^{b+1} -A \\subseteq X$ all vertices at distance greater than $s+1$ from the vertex $v$ in $H_b$ must be in $X - (S_{b}^{b+1}-A)$ .", "Therefore, $S_{b^{\\prime \\prime }}^{b^{\\prime \\prime }+1} \\subseteq X$ and $S_{b^{\\prime \\prime }}^{b^{\\prime \\prime }+1} \\cup A \\subseteq X^{\\prime }$ .", "This contradicts the maximality of $b$ .", "If the diameter of $H_b$ is exactly $s+1$ , then $X$ is of type-4.", "Otherwise, $X$ is of type-3." ], [ "Some more observations", "Let $H$ be an induced subgraph of $G$ and $u,v$ be two vertices of $H$ .", "The distance between $u$ and $v$ in $H$ is denoted by $d_{H}(u,v)$ .", "Observation 16 Consider two integers $a,b$ with $1\\le a<b\\le k$ and a set $A \\in \\mathcal {S}\\left(Q_b\\right)$ .", "Let $H = G\\left[1,b\\right]-A$ and $u,v,w$ be three vertices of $H$ such that $\\lbrace u,v\\rbrace \\subseteq Q_{b} - Q_{b-1}$ and $w \\in Q_a$ .", "Then $d_{H}(u,w) = d_{H}(v,w)$ .", "Suppose for contradiction that $d_{H}(u,w) \\ne d_{H}(v,w)$ .", "Without loss of generality assume that $d_{H}(u,w) < d_{H}(v,w)$ .", "Let $P$ be a shortest path between $u$ and $w$ in $H$ and $u^{\\prime }$ be the vertex in $P$ which is adjacent to $u$ .", "Observe that $u^{\\prime }\\in Q_b\\cap Q_{b-1}$ (this is because: $u$ is not intersecting with the Helly region of $Q_{b-1}$ , $a < b$ in the ordering and P is a shortest path).", "Therefore $u^{\\prime }$ is adjacent to $v$ and $P^{\\prime }=(P-\\lbrace u\\rbrace )\\cup \\lbrace v\\rbrace $ is a path between $v$ and $w$ such that $d_{H}(v,w) \\le |P^{\\prime }|=|P| = d_{H}(u,w)$ , a contradiction.", "Observation 17 Let $C_f^H$ be the frontal component of $H=G\\left[1,a\\right]-A^{*}, A^{*} \\subseteq V(G)$ .", "Let $Y_{A^*}^a= (Q_a-Q_{a-1})-A^*$ .", "If $Y_{A^*}^a \\ne \\emptyset $ then any vertex $v \\in Y_{A^*}^a$ is an end vertex of a diametral path (a shortest path whose length is equal to the diameter of a graph) of $C_f^H$ .", "Suppose that $Y_{A^*}^a \\ne \\emptyset $ and no vertex $v \\in Y_{A^*}^a$ is an end vertex of a diametral path of $C_f^H$ .", "Let $P$ be a diametral path of $C_f^H$ and $x,y$ be the end vertices.", "Observe that neither $x$ nor $y$ is in $Y_{A^*}^a$ .", "Without loss of generality assume that $q^-_{x} \\le q^-_{y}$ .", "Let $P^{\\prime }$ be a shortest path between $x$ and $v$ where $v 
\in Y_{A^*}^a$.", "Since $P$ has the maximum length among the shortest paths and $P^{\prime }$ is not a diametral path, we have $|P^{\prime }|< |P|$.", "Since $v \in Y_{A^*}^a$ and $x, y \notin Y_{A^*}^a$, we have $a=q^-_{v} > q^-_{y} \ge q^-_{x}$.", "Hence the path $P^{\prime }$ contains a vertex $w$ such that $w \ne v$ and $q^-_{w} \le q^-_{y}\le q^+_{w}$ (that is, any path from $v$ to $x$ must cross a clique containing $y$ ).", "This implies $w$ is a neighbour of $y$, and there exists a path $P^{\prime \prime }$ between $x$ and $y$ via $w$ such that $|P^{\prime \prime }| \le |P^{\prime }|$ (the path $P^{\prime \prime }$ is obtained by adding the edge $wy$ to the subpath from $x$ to $w$ in $P^{\prime }$ ).", "Since $|P^{\prime }|< |P|$, this contradicts the assumption that $P$ is a shortest path between $x$ and $y$.", "Therefore, there exists at least one vertex $v \in Y_{A^*}^a$ which is an end vertex of a diametral path of $C_f^H$.", "Then by Observation 16, each vertex in $Y_{A^*}^a$ is an end vertex of a diametral path of $C_f^H$." ], [ "The algorithm", "Our algorithm iteratively constructs a table $\Psi $ whose cells are indexed by two parameters.", "For an integer $a, 1\le a\le k$, and a set $A \in \mathcal {S}\left(Q_a\right)$, the cell $\Psi [a,A]$ contains a minimum $s$-CVD set of $G\left[1,a\right]- A$.", "Clearly, $\Psi [k,\emptyset ]$ is a minimum $s$-CVD set of $G$.", "Now we start the construction of $\Psi $.", "Since $G\left[1,1\right]$ is a clique, we set $\Psi [1,A] = \emptyset $ for all $A\in \mathcal {S}\left(Q_1\right)$.", "Lemma 14 For any $A\in \mathcal {S}\left(Q_1\right)$, $\Psi [1,A] = \emptyset $.", "From now on, assume $a\ge 2$ and let $A$ be a set in $\mathcal {S}\left(Q_a\right)$.", "Let $H$ be the graph $G\left[1,a\right]-A$ and let $F$ be the graph $G\left[1,a-1\right]-(A\cap Q_{a-1})$.", "Observe that for any two integers $a, b$ with $2 \le a < b \le k$, the set $S_{a-1}^{b} = S_{a}^{b} \cap Q_{a-1}$.", "Then, for any $A\in \mathcal {S}\left(Q_a\right)$ we have $(A\cap Q_{a-1})\in \mathcal {S}\left(Q_{a-1}\right)$, and $\Psi [a-1,A\cap Q_{a-1}]$ is defined.", "Note that $H-F = Y_A^a$.",
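"A skeleton (ours) of the resulting table-filling loop is sketched below; frontal_diameter, S_family (enumerating $\mathcal {S}\left(Q_a\right)$ ) and compute_cell (Procedure Compute_sCD developed below) are hypothetical helpers, and the sketch glosses over the case $Y_A^a = \emptyset $.",
\begin{verbatim}
def minimum_s_cvd(G, Q, s):
    # Q[a] is the maximal clique Q_a (1-based, Q[0] unused), k = len(Q) - 1
    k = len(Q) - 1
    Psi = {}
    for A in S_family(Q, 1):                 # G[1,1] is a clique (Lemma 14)
        Psi[(1, A)] = frozenset()
    for a in range(2, k + 1):
        for A in S_family(Q, a):
            if frontal_diameter(G, Q, a, A) <= s:         # Lemma 15
                Psi[(a, A)] = Psi[(a - 1, A & Q[a - 1])]
            else:                                         # Lemma 20
                Psi[(a, A)] = compute_cell(G, Q, s, a, A, Psi)
    return Psi[(k, frozenset())]             # a minimum s-CVD set of G
\end{verbatim}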
"In the following lemma we show that $\Psi [a,A] = \Psi [a-1,A\cap Q_{a-1}]$ if the frontal component of $H$ has diameter at most $s$.", "Lemma 15 Let $H=G\left[1,a\right]-A$, for $A\in \mathcal {S}\left(Q_a\right), 1 < a \le k$.", "If the frontal component of $H$ has diameter at most $s$, then $\Psi [a,A] = \Psi [a-1,A\cap Q_{a-1}]$.", "Let $F$ denote the graph $G\left[1,a-1\right]-(A\cap Q_{a-1})$.", "Since $H= F \cup Y_A^a$, if $Y_A^a = \emptyset $ then $H=F$ and hence $\Psi [a,A] = \Psi [a-1,A\cap Q_{a-1}]$.", "Now assume that $Y_A^a \ne \emptyset $.", "Observe that the connected components of $H$ and $F$ are the same except for the frontal components.", "The frontal components of $H$ and $F$ differ depending on the set $S_{a-1}^{a}$ as follows.", "(i) If $S_{a-1}^{a} \cap H = \emptyset $ then the frontal component of $H$ is $Y_A^a$.", "(ii) If $S_{a-1}^{a} \cap H \ne \emptyset $ then the frontal component of $H$ is the union of the frontal component of $G\left[1,a-1\right]-A$ and $Y_A^a$.", "If the frontal component of $H$ is $Y_A^a$ then $\Psi [a,A] = \Psi [a-1,A\cap Q_{a-1}]$, because the diameter of $Y_A^a$ is at most 1.", "Hence assume that the frontal component of $H$ belongs to case (ii) above.", "Let $C_f^H$ be the frontal component of $H$ and $C_f^F$ be the frontal component of $F$.", "Then $C_f^H= C_f^F \cup Y_A^a$.", "We have the following claim.", "Claim Let $C_f^H= C_f^F \cup Y_A^a$.", "If the diameter of $C_f^H$ is at most $s$ then the diameter of $C_f^F$ is also at most $s$.", "[Proof of Claim:] Suppose not; then $C_f^F$ contains two vertices $u$ and $v$ such that the distance between $u$ and $v$ in $C_f^F$ is at least $s+1$.", "Without loss of generality, assume that $l(u) < l(v)$.", "Let $P$ be a shortest path between $u$ and $v$ in $C_f^F$.", "Observe that since $C_f^F=C_f^H - Y_A^a$, no vertex $w\in Y_A^a$ belongs to $V(P)$.", "Moreover, for any vertex $w\in Y_A^a$ we have $l(u) < l(v) < l(w)$ in the interval representation.", "Therefore, any shortest path between $u$ and $v$ in $C_f^H$ does not contain a vertex $w\in Y_A^a$.", "Hence the distance between $u$ and $v$ in $C_f^H$ is also at least $s+1$, which contradicts the assumption that the diameter of $C_f^H$ is at most $s$.", "Hence, by the minimality of $\Psi [a-1,A\cap Q_{a-1}]$, no vertices of $C_f^F$ are in $\Psi [a-1,A\cap Q_{a-1}]$.", "Thus it follows that $\Psi [a,A] = \Psi [a-1,A\cap Q_{a-1}]$.", "Now assume that the frontal component of $H=G\left[1,a\right]-A$ has diameter at least $s+1$.", "Recall that if $Y_A^a = \emptyset $, we have $\Psi [a,A] = \Psi [a-1,A\cap Q_{a-1}]$.", "Hence assume that $Y_A^a \ne \emptyset $.", "Due to Lemma 13, any $s$-CVD set of $H$ has to be of one of the four types defined in Section REF .", "First, for each $j\in \lbrace 1,2,3,4\rbrace $, we find an $s$-CVD set of minimum cardinality which is of type-$j$.", "We begin by showing how to construct a minimum cardinality type-1 $s$-CVD set $X_1$ of $G\left[1,a\right]-A$.", "We define $X_1$ as below.", "$X_1 = Y^a_A\cup \Psi [a-1,A\cap Q_{a-1}]$ Lemma 16 The set $X_1$ is a minimum cardinality type-1 $s$-CVD set of $G\left[1,a\right]-A$.", "Observe that the graph $H - Y^a_A$ is isomorphic to $G\left[1,a-1\right]-(A \cap Q_{a-1})$.", "Hence $ X_1 = Y^a_A \cup \Psi [a-1,A\cap Q_{a-1}]$ is an $s$-CVD set of $H$.", "By definition, $Y^a_A$ is included in every $s$-CVD set of type-1.", "Hence the minimality of $\Psi [a-1,A\cap Q_{a-1}]$ implies that $X_1$ is a minimum cardinality set of type-1.", "Let $v$ be some vertex in $Y^a_A$ and let $b<a$ be the maximum integer such that $\left(Q_{b} \cap L_{H}\left(s+2,v\right)\right) \ne \emptyset $.", "We construct a minimum cardinality type-2 $s$-CVD set of $G\left[1,a\right]-A$ defined as follows.", "$X_2 = L_{H}\left(s+1,v\right) \cup \Psi [b,S_{b}^{b+1}]$ Lemma 17 The set $X_2$ is a minimum cardinality type-2 $s$-CVD set of $G\left[1,a\right]-A$.", "By the maximality of $b$ we have $S_{b}^{b+1} \subseteq L_{H}\left(s+1,v\right)$.", "Moreover, the graph $(G\left[b+1,a\right]-A)-L_{H}\left(s+1,v\right)$ is connected: otherwise, if $L_{H}\left(s+1,v\right)$ is a separator of $(G\left[b+1,a\right]-A)$ then $(S_{b^{\prime }}^{b^{\prime }+1}-A) \subseteq L_{H}\left(s+1,v\right)$ for some $b^{\prime } > b$.", "Since $Q_{b^{\prime }}$ is a maximal clique, there exists at least one vertex $w \in Q_{b^{\prime }}$ with $w \notin Q_{b^{\prime }+1}$.", "Hence the distance between $w$ and $v$ is $s+2$ and $\left(Q_{b^{\prime }} \cap L_{H}\left(s+2,v\right)\right) \ne \emptyset $.", "Since $b^{\prime } >b$, this contradicts the maximality of $b$.", "Since $(G\left[b+1,a\right]-A)-L_{H}\left(s+1,v\right)$ is
connected, we have that $((G\\left[b+1,a\\right]-A)-L_{H}\\left(s+1,v\\right))$ is a frontal component of $G[1,a]- (A \\cup L_{H}\\left(s+1,v\\right))$ .", "Let $A^{\\prime }=A \\cup L_{H}\\left(s+1,v\\right)$ .", "Note that $Y_{A^{\\prime }}^a = Y_{A}^a \\ne \\emptyset $ .", "Observe that the distance between $v \\in Y_{A^{\\prime }}^a$ and any other vertex in $(G\\left[b+1,a\\right]-A)-L_{H}\\left(s+1,v\\right)$ is at most $s$ .", "Hence by Observation REF , $(G\\left[b+1,a\\right]-A)-L_{H}\\left(s+1,v\\right)$ has diameter at most $s$ .", "Note that any vertex of $G\\left[1,b\\right]$ that belongs to $A$ is also in $S_{b}^{b+1}$ .", "Hence $G\\left[1,b\\right] - (A \\cup S_{b}^{b+1}) = G\\left[1,b\\right] - S_{b}^{b+1}$ .", "Since $\\Psi [b,S_{b}^{b+1}]$ is a minimum cardinality $s$ -CVD set of $G\\left[1,b\\right] - S_{b}^{b+1}$ , the set $X_2 = L_{H}\\left(s+1,v\\right) \\cup \\Psi [b,S_{b}^{b+1}]$ is an $s$ -CVD set of $H$ .", "By definition, $L_{H}\\left(s+1,v\\right)$ is included in an $s$ -CVD set of type-2.", "Observe that any vertex of $G\\left[1,b\\right]$ that belongs to $L_{H}\\left(s+1,v\\right)$ is also in $S_{b}^{b+1}$ and hence the minimality of $\\Psi [b,S_{b}^{b+1}]$ implies that $X_2$ is a minimum cardinality set of type-2.", "Now we show how to construct a minimum cardinality $s$ -CVD set $X_3$ of type-3 of $G\\left[1,a\\right]-A$ .", "Let $B\\subseteq \\lbrace 1,2,\\ldots ,a-1\\rbrace $ be the set of integers such that for any $i\\in B$ the graph $H_i = G\\left[i+1,a\\right]- (S_{i}^{i+1} \\cup A)$ is connected and has diameter at most $s$ .", "By definition, a type-3 $s$ -CVD set $X$ of $H$ contains $S_c^{c+1}$ for some $c \\in B$ .", "We call each such type-3 $s$ -CVD set a type-$3(c)$ set.", "Now we define a minimum type-$3(c)$ $s$ -CVD set as follows.", "$ \\begin{split}\\text{For each } c\\in B,\\quad Z_c & = (S_{c}^{c+1}-A) \\cup \\Psi [c,S_{c}^{c+1}]\\\\\\end{split}$ Claim The set $Z_c$ is a minimum cardinality $s$ -CVD set of type-$3(c)$ of $G\\left[1,a\\right]-A$ .", "[Proof of Claim] Note that any vertex of $G\\left[1,c\\right]$ that belongs to $A$ is also in $S_{c}^{c+1}$ .", "By definition, $S_{c}^{c+1}$ separates the connected component $G\\left[c+1,a\\right]- (S_{c}^{c+1} \\cup A)$ from the rest of the graph, namely $G\\left[1,c\\right]- S_{c}^{c+1}$ .", "Since the diameter of $G\\left[c+1,a\\right]- (S_{c}^{c+1} \\cup A)$ is at most $s$ and $\\Psi [c,S_{c}^{c+1}]$ is a minimum cardinality $s$ -CVD set of $G\\left[1,c\\right]- S_{c}^{c+1}$ , the set $Z_c = (S_{c}^{c+1}-A) \\cup \\Psi [c,S_{c}^{c+1}]$ is a minimum cardinality $s$ -CVD set of $H$ of type-$3(c)$ .", "We define $X_3$ as below.", "$ X_3 = \\min \\lbrace Z_c\\colon c\\in B\\rbrace $ Lemma 18 The set $X_3$ is a minimum cardinality $s$ -CVD set of type-3 of $G\\left[1,a\\right]-A$ .", "The minimality of each $Z_c$ implies that the set $X_3$ is a minimum cardinality type-3 $s$ -CVD set.", "Finally, we show the construction of a minimum cardinality $s$ -CVD set $X_4$ of type-4 of $G\\left[1,a\\right]-A$ .", "Let $C\\subseteq \\lbrace 1,2,\\ldots ,a-1\\rbrace $ be the set of integers such that for any $i\\in C$ the graph $H_i = G\\left[i+1,a\\right]-(S_{i}^{i+1}\\cup A)$ is connected and has diameter exactly $s+1$ .", "By definition, a type-4 $s$ -CVD set $X$ of $H$ contains $S_i^{i+1}$ for some $i \\in C$ .", "We call each such type-4 $s$ -CVD set a type-$4(i)$ set.", "Now we define a minimum type-$4(i)$ $s$ -CVD set as follows.", "Note that $Y_A^a \\ne \\emptyset $ .", "Let $v$ be some vertex in $Y^a_A$ and 
$Y_i = L_{H_i}\\left(s+1,v\\right)$ .", "$ \\begin{split}\\text{For each } i\\in C,\\quad Z_i & = (S_{i}^{i+1}-A) \\cup Y_i \\cup \\Psi [i,S_{i}^{i+1}]\\\\\\end{split}$ Claim The set $Z_i$ is a minimum cardinality $s$ -CVD set of type-$4(i)$ of $G\\left[1,a\\right]-A$ .", "[Proof of Claim] Recall that $H_i$ is connected, and we claim that the graph $H_i-Y_i$ is also connected: otherwise, if $Y_i$ is a separator of $H_i$ then there exists a vertex $w$ in $H_i-Y_i$ such that $w$ does not belong to the component containing $v$ in $H_i-Y_i$ .", "Since any path from $v$ to $w$ in $H_i$ passes through $Y_i$ , the distance of $w$ from $v$ in $H_i$ is at least $s+2$ , contradicting the assumption that $H_i$ has diameter exactly $s+1$ .", "Since $H_i - Y_i$ is connected, it is the frontal component of $G\\left[1,a\\right]-A-(S_{i}^{i+1}\\cup Y_i)$ .", "Let $A^{\\prime }=A \\cup S_{i}^{i+1}\\cup Y_i$ .", "Note that $Y_{A^{\\prime }}^a = Y_{A}^a \\ne \\emptyset $ .", "Hence the distance between $v \\in Y_{A^{\\prime }}^a$ and any other vertex in $H_i-Y_i$ is at most $s$ .", "Thus by Observation REF the graph $H_i - Y_i$ has diameter at most $s$ .", "Note that $\\Psi [i,S_{i}^{i+1}]$ is a minimum cardinality $s$ -CVD set of $G\\left[1,i\\right]- S_{i}^{i+1}$ and any vertex of $G\\left[1,i\\right]- S_{i}^{i+1}$ that belongs to $Y_i$ or $A$ is also in $S_{i}^{i+1}$ .", "Hence, the set $Z_i =(S_{i}^{i+1}-A) \\cup Y_i \\cup \\Psi [i,S_{i}^{i+1}]$ is a minimum cardinality $s$ -CVD set of $H$ of type-$4(i)$ .", "Now define $X_4$ as follows.", "$ X_4 = \\min \\lbrace Z_i\\colon i\\in C\\rbrace $ Lemma 19 The set $X_4$ is a minimum cardinality $s$ -CVD set of type-4 of $G\\left[1,a\\right]-A$ .", "The minimality of each $Z_i$ implies that the set $X_4$ is a minimum cardinality type-4 $s$ -CVD set.", "Now we define a minimum $s$ -CVD set of $G\\left[1,a\\right]- A$ as the one with minimum cardinality among the sets $X_i, 1 \\le i\\le 4$ .", "That is, $\\Psi [a,A] = \\min \\lbrace X_1,X_2,X_3,X_4\\rbrace $ A pseudocode of the procedure to find Equation REF is given by Procedure REF , Compute_sCD$(G,a,A)$ : Step 1: Let $H=G[1,a]-A$ and $Y_A^a = (Q_a-Q_{a-1})-A$ .", "Step 2: Set $X_1 = Y_A^a\\cup \\Psi [a-1,A\\cap Q_{a-1}]$ .", "Step 3: For a vertex $v \\in Y^a_A$ , find the maximum integer $b$ such that $b<a$ and $Q_{b} \\cap L_{H}\\left(s+2,v\\right) \\ne \\emptyset $ .", "Step 4: Set $X_2 = L_{H}\\left(s+1,v\\right) \\cup \\Psi [b,S_{b}^{b+1}]$ .", "Step 5: Set $B=C=\\emptyset $ ; for $c= 1 \\text{ to } a-1$ , let $H_c=G\\left[c+1,a\\right]-(S_{c}^{c+1}\\cup A)$ ; if Diam$[c][a][A] \\le s$ , set $Z_c=(S_{c}^{c+1}-A) \\cup \\Psi [c,S_{c}^{c+1}]$ and $B= B \\cup \\lbrace c\\rbrace $ ; if Diam$[c][a][A] = s+1$ , set $W_c=(S_{c}^{c+1}- A) \\cup L_{H_c}\\left(s+1,v\\right) \\cup \\Psi [c,S_{c}^{c+1}]$ and $C= C \\cup \\lbrace c\\rbrace $ .", "Step 6: Set $X_3 = \\min \\lbrace Z_i\\colon i\\in B\\rbrace $ and $X_4 = \\min \\lbrace W_i\\colon i\\in C\\rbrace $ .", "Step 7: Set $\\Psi [a,A] = \\min \\lbrace X_1,X_2,X_3,X_4\\rbrace $ and return $\\Psi [a,A]$ .", "We formally summarize the above discussion in the following lemma.", "Lemma 20 For $1 < a \\le k$ , if the diameter of the frontal component of $G\\left[1,a\\right]-A$ is at least $s+1$ , then $\\Psi [a,A] = \\min \\lbrace X_1,X_2,X_3,X_4\\rbrace $ .", "The proof follows from Lemma REF and the above discussion on the minimality of the sets $X_i, 1 \\le i\\le 4$ , in their respective types.", "The proof of correctness of the algorithm follows from Lemmas REF , REF and REF .", "A pseudocode of the algorithm for finding a minimum $s$ -CVD set of an interval graph is given in Algorithm REF .", "In the 
following section, we discuss the time complexity of the algorithm.", "Algorithm REF , $s$ -CVD$(G,s)$ (Input: an interval graph $G$ and a positive integer $s$ ; Output: $\\Psi [k,\\emptyset ]$ ): Step 1: Using the algorithm in [8], find the ordered set of maximal cliques of $G$ , say $Q_1,Q_2,\\ldots ,Q_k$ , and $N_{\\text{left}}(v)$ , $q^+_{v}$ and $q^-_{v}$ for each vertex $v \\in V(G)$ .", "Step 2: Find $\\mathcal {S}\\left(Q_1\\right)$ ; for all $A \\in \\mathcal {S}\\left(Q_1\\right)$ , set $\\Psi [1,A]=\\emptyset $ .", "Step 3: For $a= 2 \\text{ to } k$ , find $\\mathcal {S}\\left(Q_a\\right)$ and perform steps 4-6 for each $A \\in \\mathcal {S}\\left(Q_a\\right)$ .", "Step 4: Set $Y_A^a = (Q_a - Q_{a-1})-A$ ; if $Y_A^a = \\emptyset $ , set $\\Psi [a,A]=\\Psi [a-1,A\\cap Q_{a-1}]$ and continue with the next $A$ .", "Step 5: For $c = 1 \\text{ to } a-1$ , find the diameter of the induced subgraph $H_c= G[c+1,a]-(A\\cup S_{c}^{c+1})$ using $N_{\\text{left}}(v), v \\in Y_A^a$ , and store it in Diam$[c][a][A]$ .", "Step 6: If the diameter Diam$[1][a][A]$ of the frontal component of $H_0 = G[1,a]-A$ is at most $s$ , set $\\Psi [a,A]=\\Psi [a-1,A\\cap Q_{a-1}]$ ; otherwise set $\\Psi [a,A]=$ Compute_sCD$(G,a,A)$ ." ], [ "Time complexity", "For a given interval graph $G$ with $n$ vertices and $m$ edges, the algorithm first finds the ordered set of maximal cliques of $G$ as described in Section REF .", "Such an ordered list of the maximal cliques of $G$ can be produced in linear time as a byproduct of the linear ($O(n + m)$ ) time recognition algorithm for interval graphs due to Booth and Lueker [8].", "For each vertex $v\\in G$ , the algorithm gathers the following information during the enumeration of maximal cliques: (i) the values $q^-_{v}$ and $q^+_{v}$ and (ii) the set of neighbours of $v$ whose corresponding intervals start before that of $v$ , which we call $N_{\\text{left}}(v)$ and which is ordered with respect to the left endpoints.", "Let $Q_1,Q_2,\\ldots ,Q_k$ be the ordered set of maximal cliques of $G$ .", "From the ordered set of cliques, the algorithm constructs the set $\\mathcal {S}\\left(Q_a\\right)$ (steps 2 and 3, Algorithm REF) for each $Q_a, 1 \\le a < k$ .", "For an integer $a, 1 \\le a < k$ , the set $\\mathcal {S}\\left(Q_a\\right)$ can be constructed by adding a vertex $v \\in Q_a$ to each $S_{a}^{b} \\in \\mathcal {S}\\left(Q_a\\right)$ for $a < b \\le q^+_{v}$ .", "For the computation of each $\\Psi [a,A], 1 \\le a \\le k, A \\in \\mathcal {S}\\left(Q_a\\right) $ , the algorithm needs to compute the following: (i) the set of vertices $Y_A^a$ (step 4, Algorithm REF); (ii) the diameter of the frontal component of the graph $H= G[1,a]-A$ (step 6, Algorithm REF); and (iii) the diameters of the induced subgraphs $H_c= G[c+1,a]-(A\\cup S_{c}^{c+1}), 1 \\le c \\le a-1$ (step 5, Algorithm REF, used in step 5 of Procedure REF).", "The set $Y_A^a$ can be obtained from the vertex set of $Q_a$ in linear time by checking the $q^-_{v}$ and $q^+_{v}$ values of each vertex $v \\in Q_a$ .", "That is, $Y_A^a = \\lbrace v \\in Q_a: q^-_{v}=a \\text{ and } q^+_{v} < b, A = S_{a}^{b}\\rbrace $ .", "Let Diam$[1][a][A]$ be the diameter of the frontal component of $H= G[1,a]-A$ .", "By Observation REF , the diameter of the frontal component of $H$ is equal to the eccentricity of a vertex $v \\in Y_A^a$ , that is, the maximum distance of $v$ from other vertices in $H$ , which we denote by ecc$_{H}(v)$.", "Hence, Diam$[1][a][A]=$ ecc$_{H}(v)$.", "Let $v_l$ be the leftmost neighbour of $v$ in $H$ such that $q^-_{v_l}= a^{\\prime }$ and ecc$_{H^{\\prime }}(v_l)$ be the eccentricity of $v_l$ in $H^{\\prime } = G[1,a^{\\prime }]- (Q_{a^{\\prime }} \\cap Q_b)$ .", "Then observe that ecc$_{H}(v)$= 
ecc$_{H^{\\prime }}(v_l) +1$.", "Therefore, Diam$[1][a][A]$ = Diam$[1][a^{\\prime }][Q_{a^{\\prime }} \\cap Q_b] + 1$ .", "Since the leftmost neighbour of $v$ in $H$ can be found in linear time from $N_{\\text{left}}(v)$ by checking the $q^-_{u}$ and $q^+_{u}$ values of each vertex $u \\in N_{\\text{left}}(v)$ , the diameter of the frontal component of $H$ can be found in $O(n)$ time.", "Similarly, the diameters of the induced subgraphs $H_c= G[c+1,a]-(A\\cup S_{c}^{c+1})$ used in step 5 of Procedure REF can together be found in $O(n)$ time by similar arguments as above and the following observation: $N_{\\text{left}}(v) - (A \\cup S_{c}^{c+1}) \\supseteq N_{\\text{left}}(v) - (A \\cup S_{c+1}^{c+2})$ .", "To compute the overall time complexity of our algorithm, we have the following claims.", "Claim 2 The total number of subproblems computed by Algorithm REF is at most $O(|V|+ |E|)= O(n+m)$ .", "[Proof of Claim] Note that with respect to the ordering of maximal cliques of $G$ , the elements of the set $\\mathcal {S}\\left(Q_a\\right)$ have the following relation: for each $b, a < b \\le k$ , we have $S_a^{b+1} \\subseteq S_a^b$ .", "Hence the number of distinct subproblems computed by the algorithm corresponding to each maximal clique $Q_a$ is at most $|S_a^{a+1}|+1$ (recall that one of the subproblems corresponds to $\\emptyset \\in \\mathcal {S}\\left(Q_a\\right)$ ).", "Since the number of maximal cliques in $G$ is at most $|V|=n$ and $|S_a^{a+1}| \\le degree(v), v \\in Q_a -Q_{a+1} $ , the total number of subproblems computed by the algorithm is at most $\\sum \\limits _{v \\in Q_a -Q_{a+1} }degree(v) + |V| = O(|V|+ |E|)= O(n+m)$ .", "Claim 3 The procedure Compute_sCD$(G,a,A)$ computes a minimum cardinality $s$ -CVD set of $H=G\\left[1,a\\right]- A$ in $O(n)$ time.", "[Proof of Claim:] Observe that the time complexity of the procedure Compute_sCD$(G,a,A)$ depends mainly on building the sets $X_i, 1 \\le i \\le 4$ .", "Since the set $Y_A^a, 1 \\le a < k, A \\in \\mathcal {S}\\left(Q_a\\right)$ is obtained in $O(n)$ time, the set $X_1$ can be computed in $O(n)$ time.", "The set $L_{H}\\left(s+1,v\\right)$ can be computed from the leftmost neighbour of $v$ in $H$ , say $v_l$ , in linear time by $s$ iterations: in the first iteration, find the leftmost neighbour of $v_l$ in $N_{\\text{left}}(v_l)-A$ ; in the second iteration, find the leftmost vertex in the second neighbourhood; and so on.", "Moreover, the leftmost neighbour of $v$ in $H$ can be obtained by a linear search of $N_{\\text{left}}(v)$ .", "Since the number of induced subgraphs $H_c$ is at most $O(n)$ , the sets $X_3$ and $X_4$ can be constructed in $O(n)$ time.", "Hence the claim follows.", "Therefore, by the above claims, the overall time complexity of our algorithm is $O(n\\cdot (n+m))$ and Theorem REF follows." 
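To make the dynamic program above concrete, the following is a minimal Python sketch of the top-level table construction. It is a sketch only: the helpers maximal_cliques, S_of, frontal_diameter and compute_sCD are hypothetical placeholders (our own names, not from the paper) for the clique ordering, the families $\\mathcal {S}(Q_a)$ , the Diam bookkeeping and Procedure Compute_sCD.

def s_cvd(G, s):
    Q = maximal_cliques(G)              # ordered maximal cliques Q_1, ..., Q_k, as sets
    k = len(Q)
    psi = {}                            # table Psi indexed by (a, A)
    for A in S_of(Q, 1):                # G[1,1] is a clique, so nothing to delete (Lemma 14)
        psi[(1, frozenset(A))] = set()
    for a in range(2, k + 1):
        for A in S_of(Q, a):
            A = frozenset(A)
            A_prev = frozenset(A & Q[a - 2])       # A intersected with Q_{a-1}
            Y = (Q[a - 1] - Q[a - 2]) - A          # Y_A^a
            if not Y or frontal_diameter(G, a, A) <= s:
                psi[(a, A)] = psi[(a - 1, A_prev)]     # Lemma 15
            else:
                psi[(a, A)] = compute_sCD(G, a, A, psi)  # min of X_1, ..., X_4 (Lemma 20)
    return psi[(k, frozenset())]

This mirrors the structure analyzed above: $O(n+m)$ subproblems (Claim 2), each resolved in $O(n)$ time (Claim 3).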
], [ "Hardness for well-partitioned chordal graphs", ".", "In this section, we prove Theorem REF .", "We shall use the following observation.", "Observation 18 Let $H$ be a well-partitioned chordal graph.", "Let $H^{\\prime }$ be a graph obtained from $H$ by adding a vertex of degree 1.", "Then $H^{\\prime }$ is an well-partitioned chordal graph.", "Let $s\\ge 2$ be an even integer and let $s=2k$ .", "We shall reduce Minimum Vertex Cover (MVC) on general graphs to $s$ -CVD on well partitioned graphs.", "Let $\\langle G,k \\rangle $ be an instance of Minimum Vertex Cover such that maximum degree of $G$ is at most $n-3$ .", "Let $\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}G\\hspace{-0.83328pt}}\\hspace{0.83328pt}$ denote the complement of $G$ .", "Now construct a split graph $G_{well}$ from $G$ as follows.", "For each vertex of $v\\in V(G)$ , we introduce a new path $P_v$ with $k-1$ edges and let $x_v,x^{\\prime }_v$ be the endpoints of $P_v$ .", "For each edge $e\\in E\\left(\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}G\\hspace{-0.83328pt}}\\hspace{0.83328pt}\\right)$ we introduce a new vertex $y_e$ in $G_{well}$ .", "For each pair of edges $e_1,e_2 \\in E(\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}G\\hspace{-0.83328pt}}\\hspace{0.83328pt})$ we introduce an edge between $y_{e_1}$ and $y_{e_2}$ in $G_{well}$ .", "For each edge $e=uv \\in E\\left(\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}G\\hspace{-0.83328pt}}\\hspace{0.83328pt}\\right)$ , we introduce the edges $x_u y_e$ and $x_v y_e$ in $G_{well}$ .", "Observe that $C=\\lbrace y_e\\rbrace _{e\\in E\\left(\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}G\\hspace{-0.83328pt}}\\hspace{0.83328pt}\\right)}$ is a clique, $I=\\lbrace x_v\\rbrace _{v\\in V(G)}$ is an independent set of $G_{well}$ .", "Therefore $C\\cup I$ induces a split graph, say $G^{\\prime }$ , in $G_{well}$ .", "Since $G_{well}$ can be obtained from $G^{\\prime }$ by adding vertices of degree 1, due to Observation REF , we have that $G_{well}$ is an well-partitioned graph.", "We shall show that $G$ has a vertex cover of size $k$ if and only if $G_{well}$ has a $s$ -CVD set of size $k$ .", "Observation 19 For each vertex $v\\in C$ , $|N[v]\\cap I|=2$ and for each vertex $u\\in I$ , $|N[u]\\cap C|\\ge 2$ .", "Lemma 21 Let $D$ be a subset of $I$ and let $T=\\lbrace u\\in V(G)\\colon x_u \\in D\\rbrace $ .", "The set $D$ is a $s$ -CVD set of $G_{well}$ if and only if $T$ is a vertex cover of $G$ .", "Let $D^{\\prime }=\\lbrace x^{\\prime }_v\\colon x_v \\in I-D\\rbrace $ and $T^{\\prime }=\\lbrace u\\in V(G)\\colon x_u \\in D^{\\prime }\\rbrace $ (note that $T=V(G) - T^{\\prime }$ ).", "Note that there is one single component $G^{\\prime }$ of $G_{well} - D$ that contains vertices from $C$ since there are no isolated vertices by observation REF .", "Observe that $G^{\\prime }$ contains $I-D$ .", "Therefore, for any two vertices $x^{\\prime }_u,x^{\\prime }_v\\in D^{\\prime }$ the distance between $x^{\\prime }_u,x^{\\prime }_v$ is $s$ if and only if there is an edge between $u,v$ in $\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}G\\hspace{-0.83328pt}}\\hspace{0.83328pt}$ .", "Therefore, distance between any two pair of vertices in $D^{\\prime }$ is $s$ if and only if $T^{\\prime }$ induces a clique in $\\hspace{0.83328pt}\\overline{\\hspace{-0.83328pt}G\\hspace{-0.83328pt}}\\hspace{0.83328pt}$ and therefore an independent set in $G$ .", "Since $T=V(G)- T^{\\prime }$ , we have that distance between any two pair of vertices in $D^{\\prime }$ is $s$ 
if and only if $T$ is a vertex cover of $G$ .", "Since $|D^{\\prime }|=|I-D|$ we have that $D$ is an $s$ -CVD set of $G_{well}$ if and only if $T$ is a vertex cover of $G$ .", "Lemma 22 There is a subset of $I$ which is a minimum $s$ -CVD set of $G_{well}$ .", "Let $S$ be a minimum $s$ -CVD set of $G_{well}$ such that $|S \\cap I|$ is maximum.", "We claim that $S \\subseteq I$ .", "Suppose for contradiction that this is not true.", "Let $I^{\\prime }=\\bigcup \\limits _{u\\in V(G)} P_u-\\lbrace x_u\\rbrace $ .", "Then we must have that $S\\cap I^{\\prime } \\ne \\emptyset $ or $S \\cap C \\ne \\emptyset $ .", "Let $a$ be a vertex of $S\\cap I^{\\prime }$ .", "Observe that there must be a vertex $u\\in V(G)$ such that $a\\in P_u$ and that $(S-\\lbrace a\\rbrace )\\cup \\lbrace x_u\\rbrace $ is an $s$ -CVD set of $G_{well}$ .", "This contradicts the assumption that $S$ is a minimum $s$ -CVD set of $G_{well}$ with $|S\\cap I|$ maximum.", "Now consider the collection $\\mathcal {C}$ of connected components of $G_{well}-S$ .", "First, observe that there exists at most one connected component in $\\mathcal {C}$ that intersects $C$ (the clique of $G_{well}$ ).", "We shall call such a component the big component, and let $X$ be the set of vertices of the big component.", "In fact, $I$ itself is an $s$ -CVD set and Observation REF implies $|I| \\le |C|$ .", "Therefore, without loss of generality we can assume that $C \\not\\subseteq S$ and indeed such a big component exists.", "Let $Y$ denote those vertices of $G_{well}-S$ that belong to $I - X$ .", "Let $S_C = S\\cap C$ and $S_I = S \\cap I$ .", "Recall that by assumption, $S_C \\ne \\emptyset $ .", "If there is a vertex $v \\in S_C$ such that $|N[v]\\cap Y| = 0$ , then $S - \\lbrace v\\rbrace $ is an $s$ -CVD set with $X \\cup \\lbrace v\\rbrace $ as the corresponding big component, whose diameter is less than or equal to $s$ .", "This contradicts the minimality of $S$ .", "Similarly, if there exists a vertex $v \\in S_C$ such that $N[v]\\cap Y = \\lbrace u\\rbrace $ , a singleton set, then $S^{\\prime } = (S \\cup \\lbrace u\\rbrace) - \\lbrace v\\rbrace $ is a new $s$ -CVD set with $X \\cup \\lbrace v\\rbrace $ as the corresponding new big component.", "This contradicts the assumption that $S$ is a minimum $s$ -CVD set with $|S\\cap I|$ maximum.", "Hence, together with Observation REF , we infer that $|N(v)\\cap Y|=2$ for each $v\\in S_C$ .", "Observation REF also implies that for each vertex $u \\in Y$ , $|N(u)\\cap S_C|\\ge 2$ : since $Y \\subseteq I$ , for each $u \\in Y$ we have $N(u)\\cap C\\subseteq S_C$ .", "Therefore, $|Y| \\le |S_C|$ and $S^{\\prime }=(S- S_C)\\cup Y$ is a minimum $s$ -CVD set with $X \\cup S_C$ as the corresponding new big component and $|S^{\\prime } \\cap I| > |S \\cap I| $ .", "This contradicts the assumption for $S$ .", "Hence we conclude that $S$ is indeed a minimum $s$ -CVD set such that $S \\subseteq I$ .", "Lemmas REF and REF imply that $G$ has a vertex cover of size $t$ if and only if $G_{well}$ has an $s$ -CVD set of size $t$ .", "Now Theorem REF follows from a result of Khot and Regev [22], where they showed that unless the Unique Games Conjecture is false, there is no $(2-\\epsilon )$ -approximation algorithm for Minimum Vertex Cover on general graphs, for any $\\epsilon >0$ ." 
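For illustration, the construction of $G_{well}$ from a vertex cover instance can be sketched in a few lines of Python. This is our own illustrative sketch (using networkx, with our own node-naming scheme), not code from the paper.

import itertools
import networkx as nx

def build_G_well(G, k):
    """Build G_well for s = 2k from a Minimum Vertex Cover instance G."""
    G_bar = nx.complement(G)
    G_well = nx.Graph()
    for v in G.nodes:
        # pendant path P_v with k-1 edges; node ('x', v) plays the role of x_v
        nx.add_path(G_well, [('x', v)] + [('p', v, i) for i in range(1, k)])
    ys = [('y', frozenset(e)) for e in G_bar.edges]
    G_well.add_nodes_from(ys)
    G_well.add_edges_from(itertools.combinations(ys, 2))   # clique C on the y_e
    for u, v in G_bar.edges:
        y = ('y', frozenset((u, v)))
        G_well.add_edge(('x', u), y)                        # edges x_u y_e and x_v y_e
        G_well.add_edge(('x', v), y)
    return G_well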
], [ "Conclusion", "In this paper we studied the computational complexity of $s$ -CVD on well-partitioned chordal graphs, a subclass of chordal graphs which generalizes split graphs.", "We gave a polynomial-time algorithm for $s=1$ and we proved that for any even integer $s\\ge 2$ , $s$ -CVD is NP-hard on well-partitioned chordal graphs.", "We also provide a faster algorithm for $s$ -CVD on interval graphs for each $s\\ge 1$ .", "This raises the following questions.", "Question 1 What is the time complexity of Cluster Vertex Deletion on chordal graphs?", "Question 2 What is the time complexity of $s$ -CVD on chordal graphs for odd values of $s$ ?", "Question 3 Is there a constant factor approximation algorithm for $s$ -CVD, $s\\ge 2$ on chordal graphs?", "Another generalisation of interval graphs is the class of cocomparability graphs.", "It would be interesting to investigate the following question.", "Question 4 What is the time complexity of $s$ -CVD on cocomparability graphs for each $s\\ge 1$ ?" ] ]
2210.07699
[ [ "Confidence estimation of classification based on the distribution of the\n neural network output layer" ], [ "Abstract One of the most common problems preventing the application of prediction models in the real world is lack of generalization: The accuracy of models, measured in the benchmark does repeat itself on future data, e.g.", "in the settings of real business.", "There is relatively little methods exist that estimate the confidence of prediction models.", "In this paper, we propose novel methods that, given a neural network classification model, estimate uncertainty of particular predictions generated by this model.", "Furthermore, we propose a method that, given a model and a confidence level, calculates a threshold that separates prediction generated by this model into two subsets, one of them meets the given confidence level.", "In contrast to other methods, the proposed methods do not require any changes on existing neural networks, because they simply build on the output logit layer of a common neural network.", "In particular, the methods infer the confidence of a particular prediction based on the distribution of the logit values corresponding to this prediction.", "The proposed methods constitute a tool that is recommended for filtering predictions in the process of knowledge extraction, e.g.", "based on web scrapping, where predictions subsets are identified that maximize the precision on cost of the recall, which is less important due to the availability of data.", "The method has been tested on different tasks including relation extraction, named entity recognition and image classification to show the significant increase of accuracy achieved." ], [ "Introduction", "Neural Networks (NN) gained a great importance in classification tasks.", "However, like other algorithms, NN models suffer from the generalization problem, that is, when applied on future data, NN models do not always repeat their accuracies that have been measured on the test dataset.", "An additional problem of NN models is that they always attempt to provide classifications with high probabilities (softmax values), even when these models are not (well) trained or when the example is out-of-distribution, i.e.", "they are never aware when they fail [10].", "These shortages make them difficult to be used for decision making in business einvironments or in critical task like in medical applications [3].", "Being not aware of the reliability of classification models because of missing generalization is a problem that hinders using such models especially in critical applications where important decisions are to be made, like in business and medicine.", "In such situations, the knowledge about the reliability and confidence of outcomes of the models are of great importance.", "We present here some examples of notes reported in research on the problem of uncertainty in Knowledge Discovery to raise the importance of confidence estimation: \"Traditional IE and KD techniques are facing new challenges of dealing with uncertainty of facts extracted from heterogeneous data sources from different documents, languages and modalities on the WWW\" [11].", "\"Veracity stresses the importance of data quality and level of trust due to the concern that many data sources (e.g.", "social networking sites) inherently contain a certain degree of uncertainty and unreliability.\"", "[15].", "\"While most of the reviewed studies focus on designing and evaluating a mathematical model that takes a number of uncertainties and risks 
into account, there is less focus on establishing and analysing the applicability of the proposed models\" [4].", "\"While uncertainty estimation in neural networks is an active field of research, the current methods are rarely adopted.", "It is desirable to develop a method that does not create additional computational overhead\" [14].", "These notes and many others emphasize the problem of uncertainty, which raises the awareness of the need for methods quantifying uncertainty and confidence in AI models.", "In the typical case, a neural network has an output layer with a number of nodes equal to the number of classes, sometimes with an additional node representing the negative class, i.e.", "a class representing objects not belonging to any of the predefined classes.", "Given an object as input, an NN model as a classifier responds with an outcome on the output layer in the form of values, each of which corresponds to one of the nodes in the output layer.", "Each of these values, which are called logits, corresponds to a class and reflects the likelihood that the example object (input) is assigned to this class.", "In case of the existence of a negative class, the value reflects the likelihood of the object being assigned to none of the given classes.", "The name logits refers to the logit activation function that normally produces these values.", "However, for simplicity, for the remainder of this paper, we will denote these values as logits and the corresponding layer as the logit layer regardless of the activation function used to produce them.", "The most common way to classify objects based on the logit layer is to simply assign an object to the class with the largest logit value, without taking the other logit values into consideration.", "In this paper, we use the distribution of the logit values to infer information about the confidence of the model when applied to classify a particular object.", "In particular, we provide methods that enable filtering predictions by selecting those that are most probably correctly classified.", "This is done based on two things: first, confidence estimation functions are developed that, based on the logit values, calculate a confidence measure for a particular prediction; second, a method is provided for calculating a threshold for a particular model, based on which the predictions are filtered according to their confidence values.", "The methods provided in this paper are useful and especially designed as a tool to increase confidence in those cases where knowledge is extracted from the web.", "More specifically, they are designed to filter classifications performed on data that is scraped from the web to extract knowledge from, e.g.", "to build knowledge graphs.", "In those cases, data is normally unstructured, heterogeneous and noisy.", "These conditions promote two problems: low accuracy due to unclean, noisy data, and poor generalization due to heterogeneity.", "The methods provided in this paper help to increase precision by filtering out classifications that are probably misclassified, at the cost of recall, which is in those cases less important since there is a lot of data on the web.", "There are two main contributions of this paper: First, we propose confidence estimation functions that provide an estimate of how confident the NN model is when applied to a particular object.", "This is done based on the distribution of the values of the logit layer corresponding to the object being classified.", "Second, for a given model and a given confidence level, we provide a 
method for finding a threshold based on which the predictions can be divided into two subsets: one subset that meets the given confidence level, which we call the exploit subset, and another one of low quality and confidence, which we call the waste subset.", "The remainder of this paper is organized as follows: In Section , we present related work on confidence and reliability of NN models.", "After introducing some notation in Section REF , we present two novel confidence estimation functions in Section REF and a method for defining a threshold for filtering classifications with regard to a given confidence level in Section REF .", "Finally, the proposed methods are discussed and evaluated in Section ." ], [ "Related work", "The research most related to our work might be Hendrycks and Gimpel ([10]), where the authors show that the maximum of the softmax outcome in a neural network gives information about the uncertainty of the model.", "In particular, they show that correctly classified examples tend to have a relatively larger maximum in the softmax outcome.", "They assessed their finding using several tasks in computer vision, natural language processing, and automatic speech recognition.", "Thereby, they defined a simple baseline for utilizing the softmax outcome for estimating the uncertainty of neural network classification models.", "However, our work differs in several respects.", "First, Hendrycks and Gimpel ([10]) apply their methods on top of the softmax outcome, although there is much research, such as [17], [18], [16], stating that the softmax tends to exhibit very large changes in the probabilities for small changes in the logits, due to the exponential function in the definition of the softmax; this property makes it poor for predicting confidence [10].", "In contrast, we build our methods on the original values, i.e.", "the logit values, which are the input of the softmax rather than its output and are less sensitive in this regard.", "Second, while Hendrycks and Gimpel ([10]) only state the general relation between the maximum of the softmax and the uncertainty, we provide functions that quantify the uncertainty of a particular prediction based on the distribution of the corresponding logit values.", "Third, we provide a method to interpret this uncertainty in the context of confidence levels; in particular, it calculates a threshold to filter predictions such that they meet a given accuracy level in relation to a required confidence level.", "Inspired by the research above, Tran et al.", "([19]) introduce a module that infers neural network uncertainty, which can be used as a building block that replaces the common neural network layers.", "This module, which they call a Bayesian layer, is capable of discovering uncertainty based on the distributions of values over layers and activation functions, using the concept of Bayesian neural network layers.", "While this approach suggests an entire change of the architecture of the neural network, our proposed methods do not require any change, because they build on the common logit layer, i.e.", "they infer uncertainty based on the distribution of the logit values in existing neural networks as they are.", "Mozejko et al.", "([14]) provide a method to quantify the uncertainty of an NN classifier by replacing the standard softmax function in the output layer with a new modified softmax function that they call the inhibited softmax function.", "In particular, the softmax function is modified such that it infers uncertainty of the model by capturing the 
cross-entropy loss function values in the training phase.", "Other research, such as Blundell et al.", "[5], Louizos and Welling [12], Malinin and Gales [13], Wang et al.", "[21], and Hafner et al.", "[8], tries to tackle the problem of uncertainty by inferring the model confidence from the distribution over the models’ weights, an approach that has been inspired by the Bayesian approaches suggested by Buntine and Weigend [6].", "However, as these approaches aim to infer the model uncertainty, they are not designed to estimate and quantify uncertainty at the level of a particular prediction, which our methods do." ], [ "Methods", "In this section, we propose methods for estimating the confidence of classifications done by a neural network (NN) based on the distribution of the output layer." ], [ "Notation", "Given a neural network trained to produce the classification model $\\mathcal {M}$ , let $x = \\lbrace x_1, \\ldots , x_n\\rbrace $ be a set of objects classified by the model $\\mathcal {M}$ into $r$ predefined classes $c =\\lbrace c^1,\\ldots ,c^{r} \\rbrace $ .", "Furthermore, let $L_i=\\lbrace L_i^1,\\ldots ,L_i^r\\rbrace $ be the logit values of the output layer corresponding to the object $x_i$ .", "In this paper, the term classification denotes, in relation to a single object, the process of assigning the object to a particular single class and, in relation to a dataset, assigning each of the objects in the dataset to a particular single class.", "Given the object $x_i$ classified by the model $\\mathcal {M}$ , we denote the largest corresponding logit by $\\hat{L_i}$ and the related class by $\\hat{c}$ .", "Analogously, we denote the smallest corresponding logit by $\\check{L_i}$ and the related class by $\\check{c}$ .", "In a common NN, the logit value $L_i^j$ represents the likelihood that the object $x_i$ belongs to the class $c^j$ .", "Note that we intentionally say the likelihood because the logit values are not necessarily probabilities and they do not necessarily sum to one.", "The range and distribution of the logits depend on different factors like the activation function used for the output layer as well as on the technology and the implementation of the NN.", "However, the logits always tend to reflect the relative likelihoods of the object being in the corresponding classes.", "Also, note that the most common way to apply a classification is to select the class with the largest logit, i.e.", "$\\hat{c}$ ." 
], [ "Confidence Functions", "A confidence function $conf(x)$ in the sense of this paper is a function that provides a value reflecting the confidence of the prediction $x$ , i.e.", "a measure that reflects the probability $x$ being a true prediction.", "In this section, we propose two functions of the logit values related to a particular classification to be estimators of the confidence of the corresponding classification, two implementations of the function $conf()$ .", "Before we present the functions, let's motivate the idea behind them.", "To illustrate how the distribution of the logits reflects the confidence, let's consider two exemplary logits distributions corresponding to two different classifications with four classes (C1, C2, C3 and C4) as shown in Figure REF (A) and (B).", "Both classifications suggest the class C2 as label since C2 has the maximum logit value in both cases.", "However, in (A), the differences between the logit value of the winner class C2 and those of the other classes are not significant.", "In contrast, C2 in (B) is a winner with distinction since there is are larger differences between the winner.", "The motivation here is that the more difference between the two largest logits, the more distinguished they are which is in turn an indicator that the model is more confident.", "Figure: (A) and (B) are exemplary logits of two different classifications, where in both cases each of the objects encircled is assigned to class C2.", "In (A) the logit of the winner class has weak difference to the other classes, which would mean that in (A) the winner class is not entirely sure.", "In contrast, the winner in (B) is significantly distinct from the other classes, which would mean that the winner class is more sure.", "(C) and (D) apply the kurtosis function as a measure of recognizing the winner class.", "(E) and (F) applies only the difference between the two top winners and ignore the rest of the logits.The question now is how to measure the distinction of the winner class based on the logits?", "Figure REF (C) and (D) applies the function kurtosis as a measure of distinction, because higher kurtosis means a large value (peak) representing the winner class compared with many smaller values (tails) representing the other classes.", "Figure REF (E) and (F) applies the difference between the two highest classes (the winner and next best winner) as a measure, whereby rest of the logit values are ignored.", "Based on this motivation, we propose the two following functions as confidence measures.", "Both functions are applied on the logits $L$ corresponding to an object $x$ which has been assigned to class c to estimate the confidence of the classification $c$ .", "The first function $KRT$ is the kurtosis of the logits, which is by definition the fourth moment of the distribution.", "$KRT$ is defined as $conf(x) = KRT(L_x) = E\\Big [\\Big ( \\frac{L_x - \\mu }{\\sigma }\\Big )^4\\Big ]$ The second function $WDF$ is the normalized difference between the logits of the winner and second winner classes.", "$WDF$ is defined as $conf(x) = WDF(L_x) = \\frac{Max1(L_x) - Max2(L_x)}{abs(Max1(L_x) + Max2(L_x))}$ where Max1 is the largest and Max2 is the second largest logit value.", "Note that $WDF$ is always in between the range $[0,1]$ regardless of the range of the logits.", "When the two largest logits are equal ($Max1= Max2$ ), $WDF$ provides a zero confidence.", "When $Max2$ is zero, this means that all other logits are zero and $WDF$ provides a confidence of 1 meaning 
$100\\%$ .", "In all other cases, $WDF$ provides a value between zero and 1.", "Figure REF shows the behavior of the confidence estimators for four examples of NN models (in this case WDF, Equation REF , but KRT shows similar behavior).", "Each blue point corresponds to a particular classification (class prediction) and its value is the outcome of Equation REF when applied to the corresponding logits.", "The data points have been sorted along the x-axis according to their values.", "Each red point means that the prediction failed.", "Analogously, each green point means a successful prediction.", "In all sub-figures, fewer errors occur where the confidence estimation is higher and vice versa, which means that the confidence estimator of Equation (2) can predict the correctness of the model prediction.", "(A) and (B) correspond to the same prediction model for semantic relation extraction, which is designed to operate in multilingual settings as well.", "If (A) is applied on a German dataset, the f1-score is 0.58, and if (B) is applied on a Chinese dataset, the f1-score is 0.91.", "(C) and (D) correspond to two different image classification models applied on the same dataset.", "In (C), the first model (Image classification 5%) was trained on 5% of the training data to simulate a bad model and it achieved an f1-score of 0.68; in (D), the model has been trained on 100% of the training data and has an f1-score of 0.92.", "These four examples and many other examples on different domains, technologies and datasets show similar behavior.", "That is, the two estimators (Equations REF and REF ) behave inversely proportional to the probability of prediction errors.", "Figure: The figure demonstrates the behavior of the confidence estimation, Equation REF (blue points).", "Each point corresponds to a single prediction and all the points are sorted according to their values.", "A red point signals a failing prediction whereas a green point signals a successful prediction.", "In general, fewer errors occur where the confidence estimation is high and vice versa.", "(A) and (B) correspond to the same prediction model for multilingual relation extraction.", "In (A) with German datasets the f-measure is 0.58 and in (B) with Chinese it is 0.92.", "(C) and (D) correspond to two different image classification models applied on the same dataset.", "In (C), the model is trained on 5% of the training data and has an f-measure of 0.68 and in (D), the model has been trained on 100% of the training data and has an f-measure of 0.92.", "However, a function providing a value that is inversely proportional to the error probability is not enough for practical use, since we need to answer questions regarding the meaning of a particular value of Equations REF and REF and how we can use it.", "Further questions concern whether the values are universal, i.e.", "do the same values in two different models or two different datasets have the same meaning?", "How can one relate these values to a given confidence level?", "All these questions will be discussed in the following sections.", "In the next section, we show how to use these functions (kurtosis and winner difference) in combination with thresholds.", "The thresholds are needed to identify those classifications that fulfill a particular confidence level." 
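To make the two estimators concrete, the following is a minimal numpy/scipy sketch of Equations REF and REF ; the function and variable names are ours, not from the paper, and scipy's kurtosis with fisher=False yields the plain fourth standardized moment used in $KRT$ .

import numpy as np
from scipy.stats import kurtosis

def krt(logits):
    # KRT: kurtosis of the logit vector, i.e. E[((L - mu) / sigma)^4]
    return kurtosis(np.asarray(logits, dtype=float), fisher=False)

def wdf(logits):
    # WDF: normalized difference between the two largest logits
    # (assumes Max1 + Max2 != 0, as implied by the definition above)
    max2, max1 = np.sort(np.asarray(logits, dtype=float))[-2:]
    return (max1 - max2) / abs(max1 + max2)

logits = [0.3, 3.1, 0.2, 0.1]                      # exemplary output layer, four classes
print(np.argmax(logits), krt(logits), wdf(logits))  # winner class and both confidences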
], [ "Thresholding the confidence values ", "In this section, we show how to filter predictions to reach a given confidence level.", "To this end we discuss how to define and interpret thresholds in relation to the confidence estimators defined in Equations REF and REF .", "From now, we will denote them simply thresholds.", "More specifically, we define a threshold $\\mu $ to be a value that splits up the predictions obtained from a model into two subsets: The first subset, which we denote exploit exploit ($\\mu $ ), contains predictions $x_i$ with $conf(x_i)>=\\mu $ and the other subset, which we denote waste $\\textit {waste ($$)}$ , contains the rest of the predictions $x_i$ having $conf(x_i)<\\mu $ .", "At first, let us check the universality of thresholds, in the sense whether the same threshold produces a similar error partitioning with different models.", "In other words, we want to know whether it is possible to define universal thresholds that produce repeatedly the same exploit ratio for a given model.", "Figure REF shows the confidence plots (based on Equation REF of two different models, (A) corresponds to a text relation extraction model based on a pre-trained xlm-roberta model using PyTorch and (B) corresponds to an image classification model based on VGG16 pre-trained model using Keras.", "The two models have the same accuracy (f-measure=$0.92$ ).", "Both thresholds plotted as vertical lines have been selected to accept only $10\\%$ of the existing errors.", "We can see that the threshold are not similar in their values: $0.99$ in (A) and $0.78$ in (B).", "Figure: The figure shows how the confidence estimation values in different models correspond to different thresholds and vise versa.", "In (A) the threshold dividing the errors into 10%10\\% and 90%90\\% is 0.990.99 while the same threshold in (B) it is 0.780.78 (The two models are based on different technologies, but have the same accuracy - f-measure=0.920.92).", "(A) corresponds to a relation extraction model based on a pre-trained xlm-roberta model using Torch and (B) corresponds to an image classification model based on VGG16 pre-trained model using Keras.The result in Figure REF , which repeats itself in different models, also using Equation REF , suggests that there a universal threshold can not be obtained.", "It means practically thresholds should be found each model at hand.", "A topic which will be handled in the remainder of this section.", "To find a threshold for a particular model, we use a test dataset, which exists always.", "The assumption is that the threshold is based on the model not on the data; To examine the role of the data we defined an experiment to see whether a threshold depends on some data.", "The threshold behaves similar for other data as well as future data:: For a particular model, we created different subsets of the test set using random sampling (10%, 20%, ..., 100% of the test set).", "For each sample, we calculated the threshold that accepts only 10% of the existing errors.", "Figure REF shows the results: Independent of the sample of the data, the calculated threshold remains unchanged, which means that the threshold is data-independent.", "The same results have been observed with different data and different models , more specifically with all the model and data combinations used in the Section .", "Figure: The figure shows empirically that for a given model, the shape of the confidence estimation tends to be constant for different samples of data.", "The same experiment has been 
performed on multiple datasets and yielded the same results.", "The result of this experiment is quite important and useful: since the threshold is data-independent with respect to the error distribution, we can predefine a threshold for a given confidence level and a given model at training time based on the test data, which is normally available when creating a model.", "This important observation will be the basis of the next section, where we relate thresholds to confidence levels.", "Therefore, we want to mathematically formulate this observation as follows: the function $f(\\mu )$ , which gives the relative error in the exploit and is defined by $f(\\mu ) = \\frac{e_{\\mu }}{e_{\\mu }+s_{\\mu }}$ where $e_{\\mu }$ and $s_{\\mu }$ are the numbers of erroneous and successful predictions in $exploit(\\mu )$ (i.e.", "instances $x$ satisfying $conf(x)\\ge \\mu $ ), is stable and data-invariant for a given model, which has been empirically shown using various models and data." ], [ "Confidence Levels", "Until now, we considered the relative error in the exploit (Equation REF ) by finding a threshold that accepts a particular ratio of the existing errors, e.g.", "$10\\%$ .", "Obviously, the final number of errors accepted by filtering with threshold $\\mu $ depends on the original number of errors, which consequently depends on the accuracy of the model.", "In this section, we introduce a method for providing a threshold that can be used as a filter to generate a subset of predictions that fulfills a given accuracy regardless of the original accuracy.", "In other words, given a confidence level and a model with an accuracy, we need to define a threshold that ensures an accuracy sufficient for the confidence level, i.e.", "a threshold that defines the optimal exploit/waste ratio such that the desired accuracy is still met.", "Let $p$ be the original accuracy of the model $M$ based on evaluation using a test set $T$ . 
We want to find a threshold $\\mu $ that produces an exploit with accuracy $q$ where $q>p$ (note that when $q\\le p$ , no filtering is required, because the required accuracy is already met).", "For the sake of simplicity, we assume that the accuracy is measured by the number of correct predictions divided by the total number of predictions.", "Let $e_{\\mu }$ and $s_{\\mu }$ be the number of wrong predictions and true predictions in $exploit(\\mu )$ respectively, then $q = \\frac{s_{\\mu }}{e_{\\mu }+s_{\\mu }} = 1 - f(\\mu )$ which is characteristic for a given model according to Equation REF .", "Now, the task is to find the threshold $\\mu $ that meets Equation REF .", "Since the percentage of the errors before filtering is $1-p$ and the maximum error percentage we aim to reach is $1-q$ , we propose the threshold satisfying Equation REF to be $\\mu $ , where $e_{\\mu } = e \\frac{1-q }{1-p}$ where $e$ is the total number of errors and $e_{\\mu }$ is the accepted number of errors.", "This means $\\mu $ should be experimentally selected using the test set, such that the exploit contains $ e_{\\mu }$ errors and the waste contains $e - e_{\\mu }$ errors.", "The two confidence estimation functions proposed in Section REF , namely $KRT$ (Eq.", "REF ) and $WDF$ (Eq.", "REF ), have been intensively tested using various models of different technologies, in different domains and on different datasets.", "No evaluation against state-of-the-art methods has been performed, simply because to our knowledge no method exists in the state of the art that estimates the confidence of a particular prediction.", "Rather, we evaluated the methods by comparing the original accuracy of the model with its accuracy on the exploit subset, i.e.", "after performing filtering using the confidence threshold.", "This has been done for different confidence levels as well.", "Also, we have considered the exploit ratio for the evaluation.", "In particular, we used the following measures: Enhanced accuracy (E.ACCU): The accuracy of the models measured on the exploit, compared with the original accuracy (ACCU) on the dataset.", "That is the accuracy when excluding predictions with confidence values lower than a threshold $\\mu $ , calculated according to Equation REF for the given confidence level.", "Exploit ratio (EXPL.R): That is the ratio of predictions having confidence values over the threshold $\\mu $ , i.e.", "the ratio of the predictions recommended to be used for the given confidence level.", "We used 20 models of different technologies applied on different datasets as well as different domains, such as imaging, relation extraction and semantic relations.", "The aim was to ensure that the methods work for a broad variety of models, technologies and data.", "In particular, we used the following groups of experiments: RelEx / Relation extraction on TACRED dataset: In this category, we included 12 models from the RelEx benchmark [1], which includes NN-based relation extraction algorithms based on various combinations of state-of-the-art embedding systems like BERT and ELMo as well as different NN technologies like CNN (convolutional neural network), GCN (graph convolutional network) and self-attention.", "All experiments in this group were performed on the TACRED dataset [23], where predefined relations between named entities are to be identified.", "TRE / Relation extraction on TACRED dataset: TRE [2] is an NLP-based algorithm that uses pre-trained language representations to improve relation extraction.", "It was used in 
combination with the TACRED dataset [23].", "TRE / Relation extraction on SemEval2010 Task 8: This is the same algorithm described in 2), but applied on the SemEval2010 Task 8 dataset [9].", "TransRelation / Semantic relation on CogALex: The CogALex VI Task https://sites.google.com/site/cogalexvisharedtask/ is a shared task for semantic relation extraction, i.e.", "to discover whether semantic relations between words in a text exist and which ones.", "The TransRelation [20] algorithm uses the pre-trained model XLM-RoBERTa to build a semantic relation extraction model.", "Note that this kind of relation extraction differs from the previous groups: while the previous groups have to do with specific relations like producer (e.g.", "Siemens produces motors), this group deals with general semantic relations like synonymy and antonymy (e.g.", "the words wide and far are synonyms).", "Named entity recognition: xxxxxxx xxxxx xxxxxxx xxxxxxxxxxxxxx xxxxxxxxx xxxxx xx xxx xxx xx xxx xxxxxx xxxx xxxxxxxx xxxxxxxx xxx xxxxxxx xxxxx VGG16 / Image classification: This experiment group includes binary image classification models based on the pre-trained VGG16 model [22], which are performed on the well-known dogs and cats image dataset [7].", "The first goal of including this experiment group is to show that the methods are applicable on other domains and that the performance of the confidence estimation repeats itself on domains other than text.", "The second goal is to show that the WDF confidence estimator works with any number of classes, even binary classification.", "Figure REF shows the results of the evaluation.", "The first columns show the name and original accuracy of the model.", "For each of the confidence levels $90\\%$ , $95\\%$ and $99\\%$ , it shows the enhanced accuracy (E.ACCU) and the exploit ratio (EXPL.R) for both confidence estimation functions KRT and WDF.", "The evaluation results can be summarized as follows: Significant increase of accuracy: The E.ACCU is significantly higher than the original accuracy.", "Note that if the original accuracy is high compared with the confidence level (e.g.", "Experiments xxx, yyy, zzz), the exploit ratio is very high and the accuracy is almost unchanged.", "This is expected and actually already accounted for in the threshold calculation (Equations REF and REF ) since the threshold is set based on the difference between the original accuracy in F1-measure and the confidence level.", "A higher increase in accuracy is observed with the less accurate models (e.g.", "Experiments xxx, yyy, zzz).", "As designed, the threshold tends to keep the accuracy close to the given confidence level regardless of the original accuracy.", "Exploit: The EXPL.R values look reasonable except in extreme cases, i.e.", "when the model accuracy is very low and the confidence level is very high, e.g.", "Experiments xxx, yyy, zzzz.", "In these cases the exploit ratio drops sharply.", "WDF outperforms KRT: In most experiments at all confidence levels, WDF outperforms KRT both in accuracy and exploit ratio.", "This provides evidence for making WDF (Equation REF ) the recommended confidence estimation function." 
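As an illustration of the threshold selection described above, the following is a minimal sketch (our own names and simplifications, not code from the paper) that, given per-prediction confidence values and correctness flags on a test set, returns a threshold whose exploit set keeps at most $e_{\\mu } = e \\frac{1-q}{1-p}$ errors; ties in confidence values are handled in a simplified way.

import numpy as np

def find_threshold(conf, correct, q):
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    p = correct.mean()                       # original accuracy on the test set
    if q <= p:
        return conf.min()                    # no filtering required
    e = int((~correct).sum())                # total number of errors
    e_mu = int(e * (1 - q) / (1 - p))        # accepted number of errors in the exploit
    order = np.argsort(-conf)                # predictions by decreasing confidence
    errors = np.cumsum(~correct[order])
    keep = np.nonzero(errors <= e_mu)[0]     # largest prefix within the error budget
    if len(keep) == 0:
        return conf[order[0]] + 1.0          # degenerate case: exploit is empty
    return conf[order[keep[-1]]]             # exploit(mu) = {x : conf(x) >= mu}

# usage: mu = find_threshold(wdf_values, predictions == labels, q=0.95)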
], [ "Conclusion", "This paper provides methods to infer and increase the confidence in classifications generated by a neural network (NN).", "The methods don't require any changed on existing NN.", "In particular, the contribution of this paper are: Two confidence estimation functions that, given a prediction, they provide estimates for the correctness of the prediction, i.e.", "estimate of the likelihood that the prediction is correctly classified.", "This estimate is inferred based on the distribution of the logits corresponding to the underlying prediction.", "A method that, given a classification model and a confidence level, calculates a threshold that separates the classifications generated by this model into two subsets, where one of them fulfills the accuracy required by the given confidence level to be used.", "The provided methods are recommended for filtering predictions generated in the process of knowledge extraction, like named entity recognition and relation extraction, e.g.", "based on web Scraping.", "In those applications there is availability in data, but lack in model accuracy and generalization due data noise and heterogeneity.", "Our methods help in solving the problem by increasing the confidence by filtering on cost of recall which is less important due the data availability." ] ]
2210.07745
[ [ "Extracting Cultural Commonsense Knowledge at Scale" ], [ "Abstract Structured knowledge is important for many AI applications.", "Commonsense knowledge, which is crucial for robust human-centric AI, is covered by a small number of structured knowledge projects.", "However, they lack knowledge about human traits and behaviors conditioned on socio-cultural contexts, which is crucial for situative AI.", "This paper presents CANDLE, an end-to-end methodology for extracting high-quality cultural commonsense knowledge (CCSK) at scale.", "CANDLE extracts CCSK assertions from a huge web corpus and organizes them into coherent clusters, for 3 domains of subjects (geography, religion, occupation) and several cultural facets (food, drinks, clothing, traditions, rituals, behaviors).", "CANDLE includes judicious techniques for classification-based filtering and scoring of interestingness.", "Experimental evaluations show the superiority of the CANDLE CCSK collection over prior works, and an extrinsic use case demonstrates the benefits of CCSK for the GPT-3 language model.", "Code and data can be accessed at https://cultural-csk.herokuapp.com/." ], [ "Introduction", "Motivation.", "Structured knowledge, often stored in knowledge graphs (KGs) [12], [39], is a key asset for many AI applications, including search, question answering, and conversational bots.", "KGs cover factual knowledge about notable entities such as singers, songs, cities, sports teams, etc.", "However, even large-scale KGs deployed in practice hardly touch on the dimension of commonsense knowledge (CSK): properties of everyday objects, behaviors of humans, and more.", "Some projects, such as ConceptNet [36], Atomic [32], and Ascent++ [21] have compiled large sets of CSK assertions, but are solely focused on “universal CSK”: assertions that are agreed upon by almost all people and are thus viewed as “globally true”.", "What is missing, though, is that CSK must often be viewed in the context of specific social or cultural groups: the world view of a European teenager does not necessarily agree with those of an American business person or a Far-East-Asian middle-aged factory worker.", "Figure: Human-bot conversations without and with CCSK.This paper addresses this gap, by automatically compiling CSK that is conditioned on socio-cultural contexts.", "We refer to this as cultural CSK or CCSK for short.", "For example, our CCSK collection contains assertions such as: $\\bullet $ subject:East Asia, facet:food, Tofu is a major ingredient in many East Asian cuisines, or $\\bullet $ subject:firefighter, facet:behavior, Firefighters use ladders to reach fires.", "The value of having a KG with this information lies in making AI applications more situative and more robust.", "Consider the conversation between a human and the GPT-3 chatbotExecuted at beta.openai.com/playground using the davinci-002 model at temp=0.7.", "shown in Fig.", "REF .", "The GPT-3-based bot, leveraging its huge language model, performs eloquently in this conversation, but completely misses the point that the user is in China, where dragons are viewed positively and espresso is difficult to get.", "If we prime the bot with CCSK about Far-East-Asian culture, then GPT-3 is enabled to provide culturally situative replies.", "If primed with CCSK about European views (not shown in Fig.", "REF ), the bot points out that dragons are portrayed as evil monsters but do not exist in reality and recommends a strong cup of coffee.", "State of the art.", "Mainstream KGs do not cover 
CCSK at all, and major CSK collections like ConceptNet contain only very few culturally contextualized assertions.", "To the best of our knowledge, the only prior works with data that have specifically addressed the socio-cultural dimension are the projects Quasimodo [30], StereoKG [7], and the work of Acharya et al.", "[1].", "The latter merely contains a few hundred assertions from crowdsourcing; StereoKG uses a specialized way of automatically extracting stereotypes from QA forums and is still small in size; and Quasimodo covers a wide mix of general CSK and a small fraction of culturally relevant assertions.", "These are the three baselines to which we compare our results.", "Language models (LMs) such as BERT [8] or GPT-3 [5] are another form of machine-based CSK, including CCSK, in principle.", "However, all LM knowledge is in latent form, captured in learned values of billions of parameters.", "Knowledge cannot be made explicit; we observe it only implicitly through the LM-based outputs in applications.", "The example of Fig.", "REF demonstrates that even large LMs like GPT-3 do not perform well when socio-cultural context matters.", "Approach.", "CCSK is expressed in text form on web pages and social media, but this is often very noisy and difficult to extract.", "We devised an end-to-end methodology and system, called Candle (Extracting Cultural Commonsense Knowledge at Scale), to automatically extract and systematically organize a large collection of CCSK assertions.", "For scale, we tap into the C4 web crawl [27], a huge collection of web pages.", "This provides an opportunity to construct a sizable CCSK collection, but also a challenge in terms of scale and noise.", "The output of Candle is a set of 1.1M CCSK assertions, organized into 60K coherent clusters.", "The set is organized by 3 domains of interest – geography, religion, occupation – with a total of 386 instances, referred to as subjects (or cultural groups).", "Per subject, the assertions cover 5 facets of culture: food, drinks, clothing, rituals, traditions (for geography and religion) or behaviors (for occupations).", "In addition, we also annotate each assertion with its salient concepts.", "Examples for the computed CCSK are shown in Fig.", "REF .", "Candle operates in 6 steps.", "First and second, we identify candidate assertions using simple techniques for subject detection (named entity recognition - NER, and string matching) and generic rule-based filtering.", "Third, we classify assertions into specific cultural facets, which is challenging because we have several combinations of cultural groups and cultural facets, making it very expensive to create specialized training data.", "Instead, we creatively leverage LMs pre-trained on the Natural Language Inference (NLI) task to perform zero-shot classification on our data, with judicious techniques to enhance the accuracy.", "Fourth, we use state-of-the-art techniques for assertion clustering, and fifth, a simple but effective method to extract concepts in assertions.", "Lastly, we combine several features to score the interestingness of assertions, such as frequency, specificity, and distinctiveness.", "This way, we steer away from overly generic assertions (which LMs like GPT-3 tend to generate) and favor assertions that set their subjects apart from others.", "Figure: Example assertions of Candle, with subjects (cultural groups) of cultural domains, facets, and concepts.", "Contributions.", "The main contributions of this work are: An
end-to-end methodology to extract high-quality CCSK from very large text corpora.", "New techniques for judiciously classifying and filtering CCSK-relevant text snippets, and for scoring assertions by their interestingness.", "A large collection of CCSK assertions for 386 subjects covering 3 domains (geography, religion, occupation) and several facets (food, drinks, clothing, traditions, rituals, behaviors).", "Experimental evaluations show that the assertions in Candle are of significantly higher quality than those from prior works.", "An extrinsic use case demonstrates that our CCSK can improve the performance of GPT-3 in question answering.", "Code and data can be accessed at https://cultural-csk.herokuapp.com/." ], [ "Related work", "Commonsense knowledge acquisition.", "There is a long tradition of CSK acquisition in AI (e.g., [15], [34], [19], [10], [28]).", "Earlier projects, e.g., Cyc [15] and ConceptNet [19], construct commonsense knowledge graphs (CSKGs) based on large-scale human annotations.", "Crowdsourcing CSKG construction has been revived in the ATOMIC project [32], [13].", "CSK extraction from texts has been researched in WebChild [37], TupleKB [6], Quasimodo [30], ASER [47], [46], TransOMCS [45], GenericsKB [3], and Ascent [22], [21].", "Meanwhile, Ilievski et al.", "[14] consolidate CSK from 7 different resources into one integrated KG.", "Those projects, however, have their main focus on either concept-centered knowledge (e.g., Elephants have trunks), social interactions (e.g., X hates Y's guts, as a result, X wants to yell at Y), or event-centered knowledge (e.g., X drinking coffee happens after X pouring the coffee into a mug) and do not cover much cultural knowledge.", "Our approach also starts from texts, but focuses on cultural commonsense knowledge (CCSK), with particular challenges in knowledge representation, assertion filtering and consolidation.", "Cultural commonsense knowledge.", "A few works have focused specifically on CCSK.", "An early approach by Anacleto et al.", "[2] gathers CSK from users from different cultures, entered via the Open Mind Common Sense portal.", "However, the work is limited to a few eating habits (time for meals, what people eat in each meal, food for party/Christmas) in 3 countries (Brazil, Mexico, USA), and without published data.", "Acharya et al.", "[1] embark on a similar manual effort towards building a cultural CSKG, limited to a few predefined predicates and answers from Amazon MTurk workers from USA and India.", "Shwartz [33] maps time expressions in 27 different languages to specific hours in the day, also using MTurk annotations.", "StereoKG [7] mines cultural stereotypes of 5 nationalities and 5 religion groups from Twitter and Reddit questions posted by their users; however, without proper filtering, the method results in many noisy and inappropriate assertions.", "GeoMLAMA [42] defines 16 geo-diverse commonsense concepts (e.g., traffic rules, date formats, shower time) and uses crowdsourcing to collect knowledge for 5 different countries in 5 corresponding languages.", "The dataset was used to probe multilingual pretrained language models but is not shared.", "Moving to computer vision, Liu et al.", "[18] and Yin et al.", "[43] expand existing visual question answering datasets with images from different cultures rather than the Western world.", "As a result, models trained on images from the old datasets (mostly images from Western cultures) perform poorly on the newly added images.", "Our methodology
is the first to utilize large text corpora, and it can extract CCSK in the form of natural-language sentences, for a wide range of cultural groups and facets.", "Pre-trained language models and commonsense knowledge.", "Remarkable advances in NLP have been achieved with pre-trained language models (LMs) such as BERT [8] and GPT variants [26], [5].", "LAMA [25] designs methodology and datasets to probe masked LMs in order to acquire CSK that the models implicitly store.", "COMET [4] is a method that finetunes autoregressive LMs on CSK triples, and it can generate possible objects for a given subject-predicate pair.", "However, the quality of the generated assertions is often considerably lower than that of the training data [20].", "More recently, West et al.", "[40] introduce a prompting technique to collect CSK by feeding GPT-3 [5] with a few human-verified CSK triples and asking it to generate new assertions.", "Although it was shown that the generated resource, called AutoTOMIC, is of encouraging quality, knowledge bases from LMs are inherently problematic, because there is no apparent way to trace assertions to specific sources, e.g., to understand assertion context, or to apply filters at document level.", "In this work, we leverage pre-trained LMs as sub-modules in our system to help with cultural facet classification and assertion clustering.", "We also show that our method can produce more distinctive CCSK assertions than querying GPT-3 with prompts." ], [ "CCSK Representation", "Our representation of CCSK is based on the notions of subjects (from 3 major domains: geography, religion and occupation) and facets.", "These are the key labels for CCSK assertions, which are informative sentences with salient concepts marked up.", "We assume two sets to be given: $\\mathcal {S}$ : A set of subjects (cultural groups) $s_1, \\ldots , s_n$ from a cultural domain, e.g., based on geo-locations (United States, China, Middle East, California), religious groups (Christians, Muslims, Buddhists) or occupations (taxi driver, professor, web developer); $\\mathcal {F}$ : A set $F_1, \\ldots , F_m$ of facets of culture, e.g., food, drinks, clothing, traditions, rituals, behaviors.", "Note that the cultural facets need not be mutually exclusive, e.g., food assertions sometimes overlap with traditions.", "Our objective is to collect a set of CCSK assertions for a given subject and facet.", "Existing commonsense resources store assertions in triple format (e.g., ConceptNet [36], Quasimodo [30]), semantic frames (Ascent [22]) or generic sentences (GenericsKB [3]).", "Although the traditional triple-based and frame-based data models are convenient for structured querying, and well suited for regular assertions like birth dates, citizenships, etc., they often fall short of capturing nuanced natural language assertions, which are essential for CSK.", "Moreover, recent advances in pre-trained language models have made it easier to feed downstream tasks with less structured knowledge.", "With Candle, we thus follow the approach of GenericsKB [3], and use natural-language sentences to represent assertions.", "In principle, an assertion could comprise even several sentences.", "The longer the assertions are, however, the harder it is to discern their core.", "In this work, for higher precision and simplicity of computations, we only consider single sentences.", "Definition 1 (Cultural commonsense knowledge assertion) Given a subject $s$ and a facet $F$ , a CCSK assertion is a triple $(s,F,\\textit {sent})$ where sent
is a natural-language sentence about facet $F$ of subject $s$ .", "Since natural language often allows similar assertions to be expressed in many different ways, and web harvesting naturally leads to discovering similar assertions multiple times, we employ clustering as an essential component in our approach.", "A cluster (cls) of CCSK assertions for one subject and cultural facet contains assertions with similar meanings and, for presentation purposes, is summarized by a single summary sentence.", "Each cluster also comes with a score denoting its interestingness.", "To further organize assertions, we also identify salient concepts, i.e., important terms inside assertions, that can be used for concept-centric browsing of assertion sets.", "Several examples of CCSK assertions produced by Candle are shown in Fig.", "REF ." ], [ "Methodology", "We propose an end-to-end system, called Candle, to extract and organize CCSK assertions based on the proposed CCSK representation.", "Notably, our system does not require annotating new training data, but only leverages pre-trained models with judicious techniques to enhance the accuracy.", "The system takes in three inputs: an English text corpus (e.g., a large web crawl); a set of subjects (cultural groups); a set of facets of culture.", "Candle consists of 6 modules (see Fig.", "REF ).", "Throughout the system, step by step, we reduce a large input corpus (which could contain billions of documents, mostly noisy) into high-quality clusters of CCSK assertions for the given subjects and facets.", "Each cluster in the output is also accompanied by a representative sentence and an interestingness score.", "We next elaborate on each module." ], [ "Subject detection", "We start the extraction by searching for sentences that contain mentions of the given subjects.", "These will be the candidate sentences used in the subsequent modules.", "To achieve high recall, we utilize generous approaches such as string matching and named entity recognition (NER), and use more advanced filtering techniques in later modules to ensure high precision.", "For the geography and religion domains, in which subjects are named entities, we use spaCy's NER module to detect subjects.", "Specifically, geo-locations are detected with the GPE tag (geopolitical entities), and religions are detected with the NORP tag (nationalities or religious or political groups).", "For each subject, we also utilize a list of aliases for string matching, which can be the location's alternate names (e.g., United States, the U.S., the States), or demonyms (e.g., Colombians, Chinese, New Yorker), or names for religious adherents (e.g., Christians, Buddhists, Muslims) - which can be detected with the NORP tag as well.", "For the occupation domain, we simply use exact-phrase matching to detect candidates.", "Each occupation subject is enriched with its alternate names and its plural form to enhance coverage."
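To make this subject-detection pass concrete, here is a minimal sketch along the lines described above, combining spaCy NER (GPE/NORP tags) with alias string matching; the alias table and helper names are illustrative, not Candle's actual code.

```python
# Illustrative subject detection: spaCy NER for GPE/NORP mentions plus alias
# string matching, as described above. The alias table is a made-up example.
import spacy

nlp = spacy.load("en_core_web_sm")

ALIASES = {
    "United States": ["United States", "the U.S.", "the States", "Americans"],
    "China": ["China", "Chinese"],
}

def detect_subjects(sentence: str) -> set:
    doc = nlp(sentence)
    # High-recall pass 1: named entities with geopolitical/religious tags.
    hits = {ent.text for ent in doc.ents if ent.label_ in ("GPE", "NORP")}
    # High-recall pass 2: exact alias matching (used alone for occupations).
    for subject, names in ALIASES.items():
        if any(name in sentence for name in names):
            hits.add(subject)
    return hits

print(detect_subjects("Tofu is a major ingredient in many East Asian cuisines."))
```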
], [ "Generic assertion filtering", "CSK aims at covering generic assertions, not episodic or personal experiences.", "For example, Germans like their currywurst is a generic assertion, but I visited Germany to eat currywurst or This restaurant serves German currywurst are not.", "GenericsKB [3] is arguably the most popular work on automatically identifying generic sentences in texts and it uses a set of 27 hand-crafted lexico-syntactic rules.", "Candle adopts those rules in this module.", "However, for each domain and facet, we adaptively drop some of the rules if they would reject valuable assertions.", "More details on the adaptations can be found in Appx.", "." ], [ "Cultural facet classification", "To organize CCSK and filter out irrelevant assertions, we classify candidate sentences into several facets of culture.", "Traditional methods for this classification task would require a substantial amount of annotated data to train a supervised model.", "The costs of data annotation are often a critical bottleneck in large-scale settings.", "In Candle, we aim to minimize the degree of human supervision by leveraging pre-trained models for zero-shot classification.", "A family of pre-trained models that is suitable for our setting is textual entailment (a.k.a natural language inference - NLI): given two sentences, does one entail the other (or are they contradictory or unrelated)?", "Our approach to adopting such a model for cultural facet classification is inspired by the zero-shot inference method of Yin et al. [44].", "Given a sentence $sent$ and a facet $F$ , we construct the NLI test as follows: [htbp] $\\textit {Premise} \\leftarrow \\textit {sent}, \\textit {Hypothesis} \\leftarrow \\text{``This text is about } F \\text{''}$ $P[\\textit {sent} \\in F] \\leftarrow P[\\textit {Premise} \\Rightarrow \\textit {Hypothesis}]$ The probability of $Premise$ entailing $Hypothesis$ will be taken as the probability of $sent$ being labeled as $F$ , denoted as $P[sent \\in F]$ .", "For example, with sentence “German October festivals are a celebration of beer and fun”, the candidate entailments will be “This text is about drinks”, “... about food”, “... about traditions”, and so on.", "Multiple of these facets may yield high scores in these NLI tests.", "To enhance precision, we introduce a set of counter-labels for topics that are completely outside the scope of CCSK, for example, politics or business.", "A sentence $sent$ will be accepted as a good candidate for facet $F$ if ${\\left\\lbrace \\begin{array}{ll}P[sent \\in F] \\ge \\rho _+ \\text{, and} \\\\P[sent \\in \\tilde{F}] \\le \\rho _- \\text{ for all counter-labels } \\tilde{F}\\end{array}\\right.", "}$ where $\\rho _+$ and $\\rho _-$ are hyperparameters in the range $[0, 1]$ , giving us the flexibility to tune for either precision or recall.", "In our experiments, we use the BART model [16] finetuned on the MultiNLI dataset [41] for NLI testsModel available at https://huggingface.co/facebook/bart-large-mnli.", "Our crowdsourcing evaluations show that the zero-shot classifiers with the enhanced techniques achieved high precision (see Appx.", "REF )." 
], [ "Assertion clustering", "The same assertion can be expressed in many ways in natural language.", "For example, Fried rice is a popular Chinese dish can also be written as Fried rice is a famous dish from China or One of the most popular Chinese food is fried rice.", "Clustering is used to group such assertions, which reduces redundancies, and allows to obtain frequency signals on assertions.", "We leverage a state-of-the-art sentence embeddings method, SentenceBert [29], to compute vector representations for all assertions and use the Hierarchical Agglomerative Clustering (HAC) algorithm for clustering.", "Clustering is performed on assertions of each subject-facet pair.", "Cluster summarization.", "Since each cluster can have from a few to hundreds of sentences, it is important to identify what those sentences convey, in a concise way.", "One way to compute a representative assertion for a cluster is to compute the centroid of the cluster, then take its closest assertion as the representative.", "Yet for natural-language data, this does not work particularly well.", "In Candle, we therefore approach cluster summarization as a generative task, based on a state-of-the-art language model, GPT-3 [5] (see Appx.", "for prompt template).", "Annotator-based evaluations show that GPT-generated representatives received significantly better scores than the base sentences in the clusters (see Sec.", "REF )." ], [ "Concept extraction", "While the cultural groups are regarded as subjects, concepts are akin to objects of the assertions.", "Identifying these concepts enables concept-focused browsing (e.g., browsing Japan assertions only about the Miso soup, etc.).", "We postulate that main concepts of an assertion cluster are terms shared by many members: We extract all n-grams ($n=1..3$ ) of all assertions in a cluster (excluding subjects themselves, and stop words); and retain the ones that occur in more than 60% of the assertions.", "If both a phrase and its sub-phrase appear, we only keep the longer phrase in the final output.", "Noun-phrase concepts are normalized by singularization." 
], [ "Cluster ranking and post-filtering", "Ranking commonsense assertions is a crucial task.", "Unlike encyclopedic knowledge, which is normally either true or false, precision of CSK is usually not a binary concept, as it generalizes over many groups.", "With Candle, we aim to pull out the most interesting assertions for each subject, and avoid overly generic assertions such as Chinese food is good or Firefighters work hard, which are very common in the texts.", "Extracting and clustering assertions from large corpora gives us an important signal of an assertion, its frequency.", "However, ranking based on frequency alone may lead to reporting bias.", "As we compile a CCSK collection at large scale, it also enables us to compute the distinctiveness of an assertion against others in the collection.", "The notion of these 2 metrics can be thought of as term frequency and inverse document frequency in the established TF-IDF technique for IR document ranking [35].", "Besides frequency and distinctiveness, we score the interestingness of assertion clusters based on 2 other custom metrics: specificity (how many objects are mentioned in the assertion?)", "and domain relevance (how relevant is the assertion to the cultural facet?).", "Frequency.", "For each subject-facet pair, we normalize cluster sizes into the range $[0,1]$ , using min-max normalization.", "Distinctiveness.", "We compute the IDF of a cluster $cls$ as follows: $IDF(cls) = \\frac{\\sum _{cls^{\\prime } \\in CLS}size(cls^{\\prime })}{\\sum _{cls^{\\prime } \\in CLS}size(cls^{\\prime }) \\times \\sigma (cls,cls^{\\prime })}$ where $CLS$ is the set of all clusters for a given facet (e.g., food) and domain (e.g., geography>country), and $\\sigma (cls,cls^{\\prime }) ={\\left\\lbrace \\begin{array}{ll}1 & \\quad \\text{if } sim(cls, cls^{\\prime }) \\ge \\theta \\\\0 & \\quad \\text{otherwise}\\end{array}\\right.", "}$ Here, $sim(cls, cls^{\\prime })$ is the semantic similarity between the two clusters $cls$ and $cls^{\\prime }$ , and $\\theta $ is a predefined threshold.", "In Candle, to reduce computation, we approximate $sim(cls, cls^{\\prime })$ as the similarity between their summary sentences, which can be computed as the cosine similarity between their embedding vectors.", "When computing these embeddings, the subjects in the sentences are replaced with the same [MASK] tokens so that we only compare the expressed properties.", "Then, we normalize the logarithmic IDF values into the range $[0, 1]$ to get the distinctiveness scores of clusters.", "Specificity.", "We compute the specificity of an assertion based on the fraction of nouns in it.", "Concretely, in Candle, the specificity of a cluster is computed as the specificity of its summary sentence.", "Domain relevance.", "For each facet, we compute the domain relevance of a cluster by taking the average of the probability scores given to its members by the cultural facet classifier.", "Combined score.", "The final interestingness score for cluster $cls$ is the average of the four feature scores.", "A higher score means higher interestingness.", "Post-filtering.", "Lastly, to eliminate redundancies and noise, and further improve the final output quality, we employ a few hand-crafted rules: At most 500 clusters per subject-facet pair are retained, as further clusters mostly represent redundancies or noise.", "We remove clusters that have no concepts extracted, or that are based on too few distinct sentences ($>$ 2/3 same sentences) or web source domains.", "We remove any 
"Post-filtering.", "Lastly, to eliminate redundancies and noise, and further improve the final output quality, we employ a few hand-crafted rules: At most 500 clusters per subject-facet pair are retained, as further clusters mostly represent redundancies or noise.", "We remove clusters that have no concepts extracted, or that are based on too few distinct sentences ($>$ 2/3 same sentences) or web source domains.", "We remove any cluster if either its summary sentence or many of its member sentences match a bad pattern.", "We compile a set of about 200 regular expression patterns, which were written by a knowledge engineer in one day.", "For example, we reject assertions that contain “the menu”, “the restaurant” (likely advertisements for specific restaurants), or animal and plant breeds named after locations, such as “American bison”, “German Shepherd”, etc." ], [ "Implementation", "Input corpus.", "In Candle, we use the broad web as our knowledge source, because of its diversity and coverage, which are important for long-tail subjects.", "Besides these benefits, the most challenging problem when processing web content is the tremendous amount of noise, offensive material, incorrect information, etc.; hence, choosing a corpus that has already been cleaned is beneficial.", "We choose the Colossal Clean Crawled Corpus (C4) [27] as our input, a cleaned version of the Common Crawl corpus, created by applying filters such as deduplication, English-language text detection, removing pages containing source code, offensive language, too little content, etc.", "We use the C4.En split, which contains 365M English articles, each with text content and source URL.", "Before passing it to our system, we preprocessed all C4 documents using spaCy, which took 2 days on our cluster of 6K CPU cores.", "Subjects.", "We collect CCSK for subjects from 3 cultural domains: geography (272 subjects), religions (14 subjects) and occupations (100 subjects).", "For geography, we split into 4 sub-domains: countries, continents, geopolitical regions (e.g., Middle East, Southeast Asia, etc.)", "and US states, which were collected from the GeoNames database (http://www.geonames.org/), which also provides alias names.", "We further enriched these aliases with demonyms from Wikipedia (https://en.wikipedia.org/wiki/Demonym).", "Facets of culture.", "We consider 5 facets: food, drinks, clothing, rituals, and traditions (for geography/religion) or behaviors (for occupation), selected based on an article on facets of culture [23].", "Execution and result statistics.", "After tuning the system's hyperparameters on small withheld data (see Appx.", "), we executed Candle on a cluster of 6K CPU cores (AMD EPYC 7702) and 40 GPUs (a mix of NVIDIA RTX 8000, Tesla A100 and A40 GPUs).", "Regarding processing time, for the domain country (196 subjects), it took a total of 12 hours to complete the extraction, resulting in 8.4K clusters for the facet food (cf.", "Table REF ).", "Occupations and religions took 8 and 6 hours, respectively.", "We provide statistics of the output in Table REF .", "In total, the resulting collection has 1.1M CCSK assertions (i.e., base sentences) which form 60K clusters for the given subjects and facets.", "Table: Processing time and output size of each step in Candle for the domain geography>country and facet food."
], [ "Evaluation", "We perform the following evaluations: A comparison of quality of Candle's output and existing socio-cultural CSK resources: This analysis will show that our CCSK collection is of significantly higher quality than existing resources (Sec.", "REF ), and even outperforms GPT-3-generated assertions (Sec.", "REF ).", "Two extrinsic use cases for CCSK: In this evaluation, we perform two downstream applications, question answering (QA) and a “guess the subject” game, showing that using CCSK assertions from Candle is beneficial for these tasks, and that Candle assertions outperform those generated by GPT-3 (Sec.", "REF ).", "In Appx.", ", we also break down our CCSK collection into domains and facets, analyzing in details the assertion quality for each sub-collection.", "Table: Candle in comparison to other CSK resources.", "Quality evaluated on assertions of 5 popular countries in StereoKG.Abbrv.", ": PLA - plausibility, COM - commonality, DIS - distinctiveness, OFF - offensiveness, LEN - average assertion length." ], [ "Evaluation metrics", "Following previous works [30], [7], we analyze assertion quality along several complementary metrics, annotated by Amazon MTurk (AMT) crowdsourcing.", "Plausibility (PLA).", "This dimension measures whether assertions are considered to be generally true, a CCSK-softened variant of correctness/precision.", "Commonality (COM).", "This dimension measures whether annotators have heard of the assertion before, as a signal for whether assertions cover mainstream or fringe knowledge (akin to salience).", "Distinctiveness (DIS).", "This dimension measures discriminative informativeness of assertions, i.e., whether the assertion differentiates the subject from others.", "Each metric is evaluated on a 3-point Likert scale for negation (0), ambiguity (1) and affirmation (2).", "Distinctiveness (DIS) is only applicable if the answer to the plausibility (PLA) question is either 1 or 2.", "In case the annotators are not familiar with the assertion, we encourage them to perform a quick search on the web to find out the answers for the PLA and DIS questions.", "More details on the AMT setup can be found in Appx.", "." ], [ "Compared resources", "We compare Candle with 3 prominent CSK resources: Quasimodo [30], Acharya et al.", "[1], StereoKG [7].", "The former covers broad domains including assertions for countries and religions, while the others focus on cultural knowledge.", "Other popular resources such as ConceptNet [36], GenericsKB [3], Ascent/Ascent++ [22], [21], ATOMIC [32], ASER [47] and TransOMCS [45] do not have their focus on cultural knowledge and contain very little to zero assertions for geography or religion subjects, hence, they are not qualified for this comparison.", "We evaluate 2 versions of Candle, one where each base assertion is retained independently (Candle-base-sent), the other containing only the cluster representatives (Candle-cluster-reps)." 
], [ "Setup", "For comparability, all resources are compared on 100 random assertions of the same 5 country subjects covered in StereoKG [7] - United States, China, India, Germany and France.", "We note that among all compared resources, Acharya et al.", "[1] only contain two subjects (United States and India), so for that resource, we only sample from those.", "For StereoKG, we use their natural-language assertions.", "For Quasimodo and Acharya et al., we verbalize their triples using crafted rules.", "Each assertion is evaluated by 3 MTurk annotators.", "Additionally, we ask if the annotator would consider the assertion as an inappropriate or offensive material.", "More details on the annotation task can be found in Appx.", "." ], [ "Results", "A summary of comparison with other resources is shown in Table REF .", "Resource size and assertion length.", "Candle outperforms all other resources on the number of base sentences.", "When turning to clusters, our resource still has significantly more assertions than Acharya et al.", "(which was constructed manually at small scale) and StereoKG (extracted from Reddit/Twitter questions).", "Quasimodo has comparable size with Candle-cluster-reps for the country and religion domains and has more for the occupation domain.", "The OpenIE-based methods, Quasimodo and StereoKG, produce the shortest assertion (32 and 37 characters on average, respectively).", "The manually-constructed KG (Acharya et al.)", "has the longest assertions (102 characters).", "Candle, having average assertion lengths (69 and 73), stands between those two approaches.", "Assertion quality.", "In general, Candle-cluster-reps considerably outperforms all other baselines on 2 of the 3 metrics (plausibility and distinctiveness).", "Our resource only comes behind Acharya et al.", "on the commonality metric (1.15 and 1.22 respectively), which is expected because Acharya et al.", "only cover a few relations about common rituals (e.g., birthday, wedding, funeral) in two countries, USA and India, and their assertions are naturally known by many workers on Amazon MTurk, who are mostly from these 2 countries [31].", "Importantly, the resource of Acharya et al.", "is based on crowdsourcing and only contains a small set of 225 assertions for a few rituals.", "Candle-cluster-reps even outperforms the manually-constructed KG (Acharya et al.)", "on the plausibility metric.", "This could be caused by an annotation task design that is geared towards abnormalities, or lack of annotation quality assurance.", "Candle also has the highest scores on the distinctiveness metric, while most of the assertions in other resources were marked as not distinguishing by the annotators.", "Between the two versions of Candle, the cluster representatives consistently outperform the base sentences on all evaluated metrics.", "This indicates that still some of the raw sentences in the collection are noisy, on the other hand, the computed cluster representatives are more coherent and generally of better quality.", "We also measured the offensiveness (OFF) of each resource, i.e., the percentage of assertions that were marked as inappropriate or offensive materials by at least one of the human-annotators.", "Quasimodo and StereoKG, extracted from raw social media contents, have the highest number of assertions considered offensive (18% and 13%).", "Meanwhile, Candle's judicious filters only miss a small fraction (1% of final assertions).", "In summary, our Candle CCSK collection has the highest quality by a large 
margin compared to other resources.", "Our resource provides assertions of high plausibility and distinctiveness.", "The clustering and cluster summarization also help to improve the presentation quality of the CCSK.", "Table: Assertion quality - Candle vs. GPT-3 - evaluated on assertions of 196 countries.", "Table: Example assertions of Candle and GPT-resource for subject:China, facet:clothing." ], [ "Comparison with direct LM extraction", "Knowledge extraction directly from pre-trained LMs has recently become popular, e.g., the LAMA probe [25] or AutoTOMIC [40].", "There are major pragmatic challenges to this approach, in particular, that assertions cannot be contextualized with truly observed surrounding sentences, and that errors cannot be traced back to specific sources.", "Nonetheless, it is intrinsically interesting to compare assertion quality between extractive and generative approaches.", "In this section, we compare Candle with assertions generated by the state-of-the-art LM, GPT-3 [5].", "Generating knowledge with GPT-3.", "We query the largest GPT-3 model (davinci-002) with the following prompt template: “Please write 20 short sentences about notable <facet> in <subject>.” We run each prompt 10 times and set the randomness (temperature) to 0.7, so as to obtain a larger resource.", "We run the query for 5 facets and 210 subjects (196 countries and 14 religions), resulting in 188,061 unique sentences.", "Henceforth we call this dataset GPT-resource, and reuse it in the extrinsic use cases (Sec.", "REF ).", "Evaluation metrics and setup.", "For each resource, we sample 100 assertions for each facet (hence, 500 assertions in total) and perform human evaluation on the 3 metrics - commonality (COM), plausibility (PLA) and distinctiveness (DIS).", "Results.", "The quality comparison between assertions of Candle and GPT-resource is shown in Table REF .", "While plausibility scores are the same, and Candle performs better in commonality, the difference that stands out is in distinctiveness: GPT-3 performs significantly worse, reconfirming a known problem of language models, namely evasiveness and over-generality [17].", "We illustrate this with anecdotal evidence in Table REF , for subject:China and facet:clothing.", "None of the listed GPT-3 examples is specific to China."
], [ "Extrinsic evaluation", "QA with context-augmented LMs.", "Augmenting LMs input with additional contexts retrieved from knowledge bases has been a popular approach to question answering (QA) [11], [24], which shows that although LMs store information in billions of parameters, they still lack knowledge to answer knowledge-intensive questions, e.g., “What is the appropriate color to wear at a Hindu funeral?” In this experiment, we use GPT-3 as QA agent, and compare its performance in 3 settings: (1) when only the questions are given, and when questions and their related contexts retrieved from (2) Candle or (3) GPT-resource (cf.", "Sec.", "REF ) are given to the LM.", "For questions, we collect cultural knowledge quizzes from multiple websites, which results in 500 multiple-choice questions, each with 2-5 answer options (only one of them is correct).", "For context retrieval, we use the the SentenceBert all-mpnet-base-v2 model, and for each question, retrieve the two most similar assertions from Candle-cluster-reps and GPT-resource.", "We use the GPT-3 davinci-002 model, with temperature=0 and max_length=16 (see Appx.", "for prompt settings).", "We measure the precision of the answers and present the results in Table REF .", "It can be seen that with Candle context, the performance is consistently better than when no context is given on all facets of culture, and better than GPT context on 3 out of 4 facets.", "This shows that GPT-3, despite its hundred billions of parameters, still lacks socio-cultural knowledge for question answering, and external resources such as Candle CCSK can help to alleviate this problem.", "Table: Results of QA using context-augmented LMs.“Guess the country” game.", "The rule of this game is as follows: Given 5 CCSK assertions about a country, a player has to guess the name of the country.", "As input, we select a random set of 100 countries, and take assertions from either Candle or GPT-resource.", "The game has 5 rounds, each is associated with a facet of culture.", "In each round, for each country, we draw the top-5 assertions from each resource (sorted by interestingness in Candle or by frequency in GPT-resource).", "All mentions of the countries in the input sentences are replaced with [...], before being revealed to the player.", "This is a game that requires a player that possesses a wide range of knowledge across many cultures.", "Instead of human players, we choose GPT-3 as our player, which has been shown to be excellent at many QA tasks [5] (prompt settings are presented in Appx. ).", "We measure the precision of the answers and present the results in Table REF .", "It can be seen that the player got significantly more correct answers when given assertions from Candle than from GPT-resource (i.e., assertions written by the player itself!).", "This confirms that assertions in Candle are more informative.", "Table: Precision (%) for the “guess the country” game." 
], [ "Conclusion", "We presented Candle—an end-to-end methodology for automatically collecting cultural commonsense knowledge (CCSK) from broad web contents at scale.", "We executed Candle on several cultural subjects and facets of culture and produce CCSK of high quality.", "Our experiments showed the superiority of the resulting CCSK collection over existing resources, which have limited coverage for this kind of knowledge, and also over methods based on prompting LMs.", "Our work expands CSKG construction into a domain that has been largely ignored so far.", "Our data and code are accessible at https://cultural-csk.herokuapp.com/." ], [ "Ethics statement", "No personal data was processed and hence no IRB review was conducted.", "It is in the nature of this research, however, that some outputs reflect prejudices or are even offensive.", "We have implemented multiple filtering steps to mitigate this, and significantly reduced the percentage of offensive assertions, compared with prior work.", "Nonetheless, Candle represents a research prototype, and outputs should not be used in downstream tasks without further thorough review." ], [ "Generic filtering rules", "GenericsKB [3] was built by using a set of 27 hand-crafted lexico-syntactic rules to extract high-quality generic sentences from different text corpora (the ARC corpus, SimpleWikipedia and the Waterloo crawl of education websites).", "For example, the lexical rules look for sentences with short length, starting with a capitalized character, having no bad first words (e.g., determiners), ending with a period, having no URL-like snippets, etc.", "The syntactic rules only accept a sentence if its root is a verb and not the first word, and if there is a noun before the root verb, etc.", "Candle adopts the GenericsKB rules.", "However, as GenericsKB only deals with general concepts (e.g., “tree”, “bird”, etc.", "), some of the rules are not applicable for the cultural subjects that can be named entities.", "Hence, depending on the subjects and facets, we adaptively modify the rules (by dropping some of them) so that we will not miss out valuable assertions.", "For instance, for geography, the has-no-determiners-as-first-word rule will filter out valuable assertions such as The Chinese use chopsticks to eat their food or The currywurst is a traditional German fast food dish, and it must be dropped.", "In another situation, when exploring the “traditions” facet, the remove-past-tense-verb-roots rule would be too aggressive as it rejects assertions about past traditions.", "The rule that rejects sentences with PERSON entities can be used for the geography and occupation subjects, but must not be used for religions, because it will filter out sentences about Buddha or Jesus Christ.", "Full details are in the published code basehttps://github.com/cultural-csk/candle/blob/main/candle/pipeline/component_generic_sentence_filter.py." 
], [ "Hyperparameter settings", "Based on tuning on small withheld data, we select the following values for hyperparameters and run Candle on the C4 dataset with these settings.", "For cultural facet classification (cf.", "Sec.", "REF and Eq.", "REF ), we fix $\\rho _+$ to 0.5 and $\\rho _-$ to 0.3.", "For assertion clustering (cf.", "Sec.", "REF ), we use the SentenceBert model all-MiniLM-L6-v2 for computing sentence embeddings.", "For the HAC algortihm, we measure point-wise Euclidean distance of the normalized embeddings.", "Then, we use the Ward's linkage [38], with the maximal distance threshold set to 1.5.", "In the few cases where input sets are larger, we truncate them at 50K sentences per subject-facet pair, since larger inputs only contain further redundancies, that are not worth the cubic effort of clustering.", "This concerns only 15 out of 386 subjects.", "For cluster summarization, we only consider the 500 most populated clusters for each subject-facet pair with a minimum size of 3 sentences.", "More details on prompting GPT-3 for cluster summarization can be found in Appx. .", "For cluster ranking (cf.", "Sec.", "REF ), we fix $\\theta $ in Eq.", "REF to 0.8." ], [ "Intrinsic evaluation", "We break down the Candle CCSK collection into domains and facets and evaluate the assertion quality for each of these sub-collections and get more insights into the produced data.", "Table: Quality of Candle assertions for each domain." ], [ "Per-domain quality", "Candle contains 3 cultural domains - geography, religion and occupation.", "For each domain, we sample 100 assertions and perform crowdsourcing evaluation with the 3 metrics - PLA, COM and DIS (cf.", "SubSec.", "REF ).", "We present the evaluation results in Table REF .", "Besides the raw scores (0, 1, 2), we also binarize and denote them as acceptance rates, i.e., a score greater than zero means “accept”.", "Candle achieves a high plausibility (PLA) score of 1.54 on average.", "Performance on this metric is relatively consistent through all domains.", "Meanwhile, the commonality (COM) metric is highest for the occupation domain and lowest for geography domain.", "More than 80% of plausible assertions are annotated as distinctive (DIS).", "Religion and occupation assertions perform significantly better than geography's on this metric.", "That could be caused by several assertions for geography subjects being correct but too generic (e.g., Japanese food is enjoyed by many people or German beer is good).", "On the other hand, religions and occupations are more distinguishing from one another, while countries or geo-regions usually have cultural overlaps.", "Table: Quality of Candle assertions for each facet and the domain geography>country." 
], [ "Per-facet quality", "We select the assertions for the domain country, and for each facet (food, drinks, clothing, traditions, rituals) we sample 100 assertions for crowdsourcing evaluation.", "Besides commonality (COM), plausibility (PLA) and distinctiveness (DIS), here we introduce one more evaluation metric, domain relevance (DOM), to measure if an assertion talks about the cultural facet of interest.", "Only when the DOM score is greater than zero, the other metrics will be evaluated.", "We present the evaluation results in Table REF .", "It can be seen that Candle maintains good quality on all evaluation metrics.", "Notably, scores for the DOM metric are consistently high for all facets, suggesting that the enhanced techniques for zero-shot classification work well on our data.", "Interestingly, the facet drinks outperforms all other facets on 3 of the 4 metrics (DOM, PLA and COM), especially for PLA, its score is significantly higher than others.", "Assertions for drinks and rituals are also more distinctive than for other facets." ], [ "Details of Annotation Task for Assertion Evaluation", "The evaluations of assertion quality (Tables REF , REF , REF and REF ) are conducted on Amazon MTurk (AMT).", "We present CCSK assertions to annotators in the form of natural-language sentences (triples from Quasimodo [30] and Acharya et al.", "[1] were verbalized using crafted rules).", "We evaluate each assertion along 3-4 dimensions on a 3-point Liker scale - negation (0), ambiguity (1) and affirmation (2).", "Each AMT task consists of 5 assertions evaluated by 3 different annotators.", "Workers are compensated $0.50 per task.", "We select Master workers with lifetime's acceptance rate more than 99%.", "We obtain fair inter-annotator agreements given by Fleiss' kappa [9]: 25.0 for DOM, 25.7 for PLA and 25.4 for DIS.", "This number for COM (13.4) is lower than others because it is an objective question (has the annotator heard of the assertion?", ")." ], [ "GPT-3 prompting", "In this work, we use GPT-3 for cluster summarization (Sec.", "REF ), generating CCSK for GPT-resource (Sec.", "REF ), context-augmented QA and “guess the country” game (Sec.", "REF ).", "The prompt templates and settings used for these tasks are presented below.", "Cluster summarization.", "We query the curie-001 model, with zero temperature and maximum length of 50 tokens.", "We only take the first generated sentence as output.", "Given the following sentences:   (1) Sentence 1.", "(2) Sentence 2.", "...   (n) Sentence n. 
"An example prompt is presented in Fig.", "REF .", "Figure: A screenshot of GPT-3 output for cluster summarization.", "Generating CCSK for GPT-resource.", "We use the largest model (davinci-002) and set the temperature to 0.7 and the maximum length to 512 tokens.", "For each facet and subject, we run the following prompt template 10 times: Please write 20 short sentences about notable <facet> in <subject>.", "We query for 5 facets (food culture, drinking culture, clothing habits, rituals, traditions), and 210 subjects (196 countries and 14 religions).", "In Table REF , we show some assertions generated using this prompt template for the subject China and the facet “clothing habits”.", "Context-augmented QA.", "We query the davinci-002 model with zero temperature and a maximum length of 16 tokens.", "Answers are then manually mapped to the respective options.", "Example prompts are shown in Fig.", "REF .", "Figure: Screenshots of GPT-3 output in the QA task, without and with CCSK.", "“Guess the country” game.", "We use the davinci-002 model, with temperature=0 and max_length=8.", "Answers given by GPT-3 are checked manually for their correctness.", "Example prompts can be seen in Fig.", "REF .", "Figure: Screenshots of GPT-3 output for the “guess the country” game, with assertions of GPT-resource and Candle for subject:Vietnam and facet:drinks." ] ]
2210.07763
[ [ "Resolving the $R_{\\rm pA}$ and $v_2$ puzzle of $D^0$ mesons in $p-$Pb\n collisions at the LHC" ], [ "Abstract It has been a challenge to understand the experimental data on both the nuclear modification factor and elliptic flow of $D^0$ mesons in $p-$Pb collisions at LHC energies.", "In this work, we study these collisions with an improved multi-phase transport model.", "By including the Cronin effect (or transverse momentum broadening) and independent fragmentation for charm quarks, we provide the first simultaneous description of the $D^0$ meson $R_{\\rm pA}$ and $v_2$ data at $p_{\\rm T} \\leq 8$ GeV$/c$.", "The model also reasonably describes the $D^0$ meson $p_{\\rm T}$ spectra and the low-$p_{\\rm T}$ charged hadron spectra, $R_{\\rm pA}$ and $v_2$.", "Our results show that both parton interactions and the Cronin effect are important for the $D^0$ meson $R_{\\rm pA}$, while parton interactions are mostly responsible for the $D^0$ meson $v_2$.", "It is thus essential to include the Cronin effect for the simultaneous description of the $D^0$ meson $R_{\\rm pA}$ and $v_2$.", "This work implies that the Cronin effect could also be important for heavy hadrons in large systems." ], [ "Resolving the $R_{\\rm pA}$ and $v_2$ puzzle of $D^0$ mesons in $p-$ Pb collisions at the LHC Chao Zhang Department of Physics, East Carolina University, Greenville, NC 27858, USA Institute of Particle Physics and Key Laboratory of Quark&Lepton Physics (MOE), Central China Normal University, Wuhan 430079, China Liang Zheng School of Mathematics and Physics, China University of Geosciences (Wuhan), Wuhan 430074, China Shusu Shi Institute of Particle Physics and Key Laboratory of Quark&Lepton Physics (MOE), Central China Normal University, Wuhan 430079, China Zi-Wei Lin [email protected] Department of Physics, East Carolina University, Greenville, NC 27858, USA It has been a challenge to understand the experimental data on both the nuclear modification factor and elliptic flow of $D^0$ mesons in $p-$ Pb collisions at LHC energies.", "In this work, we study these collisions with an improved multi-phase transport model.", "By including the Cronin effect (or transverse momentum broadening) and independent fragmentation for charm quarks, we provide the first simultaneous description of the $D^0$ meson $R_{\\rm pA}$ and $v_2$ data at $p_{\\rm T} \\le 8$ GeV$/c$ .", "The model also reasonably describes the $D^0$ meson $p_{\\rm T}$ spectra and the low-$p_{\\rm T}$ charged hadron spectra, $R_{\\rm pA}$ and $v_2$ .", "Our results show that both parton interactions and the Cronin effect are important for the $D^0$ meson $R_{\\rm pA}$ , while parton interactions are mostly responsible for the $D^0$ meson $v_2$ .", "It is thus essential to include the Cronin effect for the simultaneous description of the $D^0$ meson $R_{\\rm pA}$ and $v_2$ .", "This work implies that the Cronin effect could also be important for heavy hadrons in large systems.", "Introduction.— Heavy flavor hadrons, due to the large heavy quark mass, are one of the most important tools to study the perturbative Quantum Chromo-Dynamics (pQCD) in high energy hadronic collisions [1], [2], [3].", "Over the last two decades, experiments from the Relativistic Heavy Ion Collider (RHIC) and the Large hadron Collider (LHC) [4], [5], [6], [7] have collected many data supporting the formation of a hot and dense matter called the quark-gluon-plasma (QGP), and the main goal of the field of high energy heavy ion collisions is to study the QGP properties.", "Heavy 
quarks provide us with a great probe because the heavy quark mass is much larger than the temperature of the dense matter; therefore, heavy flavor particles may only partially thermalize [8] and thus better remember the interaction history with the medium.", "Two observables are often measured for heavy flavors in heavy ion collisions: the nuclear modification factor $R_{\\rm AA}$ [9], [10], [11], [12], [13], [14], [15] and the elliptic flow $v_2$ [16], [17], [18], [19], [20], [21], [22].", "Several theoretical models, including the Fokker-Planck approach [23], [24], [25], [26] and the relativistic Boltzmann transport approach [27], [28], [29], [30], [31], have been developed to study the nuclear suppression and collective flows of heavy flavor hadrons at RHIC and LHC.", "It has been realized that $R_{\\rm AA}$ and $v_2$ are sensitive to the temperature- and energy-dependence of transport properties of the QGP such as the heavy quark diffusion and drag coefficients [32], [33].", "They are also sensitive to the hadronization mechanisms including quark coalescence and fragmentation [34], [35], [36], [37], [38], [39].", "Several approaches have shown reasonable agreement with the existing data in large collision systems, suggesting that charm quarks may flow well with the QGP medium due to their frequent interactions with the hot and dense matter [40], [41], [42].", "Similar measurements of heavy flavor mesons have also been made for small systems like $d+$ Au collisions at RHIC and $p-$ Pb collisions at LHC in recent years [43], [44], [45], [46], [47], [48], [49], [50].", "Little to no nuclear suppression $R_{\\rm pA}$ but a sizable elliptic flow $v_2$ have been observed for $D^0$ mesons in $p-$ Pb collisions at LHC energies, which has posed a big challenge to theoretical models.", "One expects that a sizable $v_2$ comes from significant interactions of charm quarks with the QGP medium, in either hydrodynamics-based models or parton/hadron transport models.", "On the other hand, a significant interaction of charm quarks with the QGP is expected to inevitably suppress high-$p_{\\rm T}$ charm hadrons [51], [52], [53], in contrast to the observed $D^0$ $R_{\\rm pA}$ being almost flat around the value of unity.", "Some theoretical studies can reproduce either the heavy meson $R_{\\rm pA}$ data [54], [55], [56], [57], [58], [59], [60] or $v_2$ data [61], [62].", "For example, the POWLANG model [56] can describe the heavy flavor $R_{\\rm pA}$ but predicts a small charm $v_2$ .", "PQCD calculations that consider cold nuclear medium effects are generally found to describe the charm $R_{\\rm pA}$ data [54], [55], [63], [57], and so is another pQCD model with a parametrized $k_{\\rm T}$ broadening [58].", "Regarding the heavy flavor elliptic flow, the color glass condensate framework can describe the charm and bottom $v_2$ in $p-$ Pb collisions at LHC [61], [62], which indicates the existence of initial-state correlations of heavy quarks in small systems.", "So far, however, there has not been a simultaneous description of both $R_{\\rm pA}$ and $v_2$ of heavy hadrons.", "In this study, we investigate the $D^0$ meson $R_{\\rm pA}$ and $v_2$ in $p-$ Pb collisions at LHC energies with an improved version of a multi-phase transport (AMPT) model.", "Methods.— The AMPT model [64], [65] is a transport model designed to describe the evolution of the dense matter produced in heavy ion collisions.", "The string melting version [66] is expected to be applicable when the QGP is formed, as it contains a
fluctuating initial condition, partonic scatterings, quark coalescence, and hadronic interactions.", "Recently, we have developed a new quark coalescence [67], incorporated modern parton distribution functions of the free proton and impact parameter-dependent nuclear shadowing [68], and improved heavy flavor production [69].", "The AMPT model that we use in this study contains these improvements.", "In the string melting version of the AMPT model, the excited strings are converted to partons through the string melting mechanism [66].", "In particular, the strings are first converted to hadrons through the Lund string fragmentation [70], [71], then each hadron is decomposed to partons according to the flavor and spin structures of its valence quarks.", "Because initial charm quarks are produced from the hard pQCD processes during the primary nuclear-nuclear collision, we improve the initial state charm quarks in this work.", "Instead of “melting” the initial charm hadrons into charm quarks via string melting, we extract charm quarks produced from the HIJING model [72] before they enter the Lund string fragmentation to form the initial charm hadrons.", "These initial charm quarks then enter the parton cascade, and a charm quark is allowed to interact after its formation time given by $t_{\\rm F}=E/m_{\\rm T}^2$ [64], where $E$ and $m_{\\rm T}$ are the quark energy and transverse mass, respectively.", "Since the scattering cross section for charm quarks is in general different from that for light ($u,d,s$ ) quarks, we separate the cross section among light quarks ($\\sigma _{\\rm LQ}$ ) from that between a heavy quark and other quarks ($\\sigma _{\\rm HQ}$ ).", "The default values, $\\sigma _{\\rm LQ}=0.5$ mb and $\\sigma _{\\rm HQ}=1.5$ mb, are used unless specified otherwise, and they are determined from the fit to the charged hadron $v_2$ data in $p-$ Pb collisions at 5.02 TeV and $D^0$ meson $v_2$ data in $p-$ Pb collisions at 8.16 TeV, respectively.", "We have also added the independent fragmentation [73] as another hadronization process for heavy quarks, in addition to the usual quark coalescence process [67].", "If a heavy quark and its coalescing partner(s) have a large relative distance or a large invariant mass, they are considered to be unsuitable for quark coalescence; instead the heavy quark will hadronize to a heavy hadron via independent fragmentation.", "We also include the transverse momentum broadening (i.e., the Cronin effect [74]) for the initial heavy quarks [75], [76].", "The Cronin effect is often considered to be the broadening of the transverse momentum of a produced parton from a hard process due to multiple scatterings of the involved parton(s) in the nucleus [77], [78], [79], [80].", "Therefore, the strength of the Cronin effect depends on the number of scatterings a participant (or target) nucleon undergoes while passing the target (or projectile) nucleus [81].", "Specifically, we implement the broadening by adding a transverse momentum kick $k_{\\rm T}$ to each heavy flavor $Q\\bar{Q}$ pair in the initial state (i.e., before the parton cascade), where $k_{\\rm T}$ is sampled from the two-dimensional Gaussian function [75], [76], [81]: $f (\\vec{k_{\\rm T}}) = \\frac{1}{\\pi w^2} e^{-k_{\\rm T}^2/w^2}.$ The Gaussian width parameter is modeled as $w = w_0 \\sqrt{1+(n_{\\rm coll}-i) \\delta }.$ Note that a $Q\\bar{Q}$ pair can be produced from either the radiation of one participant nucleon or the collision between two participant nucleons (one from the projectile and
the other from the target).", "In the above equation, $i=1$ for the former case and $i=2$ for the latter case, while $n_{\\rm coll}$ is the number of primary NN collisions of the participant nucleon for the former case and the sum of the numbers of primary NN collisions of both participant nucleons for the latter case.", "This way, $w=w_0$ for $p{+}p$ collisions.", "The parameter $w_0$ is given by $w_0=(0.35{\\rm ~GeV}/c)~\\sqrt{\\frac{b_{\\rm L}^0(2+a_{\\rm L}^0)}{b_{\\rm L}(2+a_{\\rm L})}},$ where $a_{\\rm L}^0=0.5$ and $b_{\\rm L}^0=0.9$ GeV$^{-2}$ are the original values in the HIJING1.0 model [72] for the two parameters in the Lund fragmentation function [73], and $a_{\\rm L}$ and $b_{\\rm L}$ are the corresponding values in the AMPT model [82].", "The dependence of $w_0$ on the Lund parameters is based on the observation that the average squared transverse momentum of a hadron relative to the fragmenting parent string is proportional to the string tension, which scales as $1/[b_{\\rm L}(2+a_{\\rm L})]$  [64].", "In this study, we follow the earlier work [82] by taking $a_{\\rm L}=0.8$ and using nuclear scaling to determine $b_{\\rm L}$ according to the local nuclear thickness functions, where the $b_{\\rm L}$ value is $0.7$ GeV$^{-2}$ for $p{+}p$ collisions but smaller for nuclear collisions.", "As a result, for $p{+}p$ collisions, $w=0.375$ GeV$/c$ , which is close to the original value of $0.35$ GeV$/c$ for the parameter parj(21) in the HIJING1.0 model [72].", "Note that we take $\\beta =0.90$ for the parameter in the local scaling relation of $b_{\\rm L}$ , slightly smaller than the original parameterized values (0.98 at 5.02 TeV and 1.04 at 8.16 TeV) that were obtained from fitting the charged particle $\\langle p_{\\rm T}\\rangle $ in Pb+Pb collisions at LHC energies [82].", "The coefficient $\\delta $ in Eq.", "(REF ) controls the strength of the Cronin effect; its default value of $\\delta =7.0$ is determined from the comparison with the $D^0$ meson $R_{\\rm pA}$ data.", "In the implementation of the Cronin effect, we give each $Q\\bar{Q}$ pair a transverse boost so that the pair transverse momentum increases by a $\\vec{k_{\\rm T}}$ sampled from the distribution in Eq.", "(REF ).", "Note that such an implementation of the Cronin effect tends to create an artificial peak at mid-rapidity in the rapidity distribution of heavy quarks [76], [83], [81], since $y={\\rm arcsinh}(p_{\\rm z}/m_{\\rm T})$ will move towards zero after $p_{\\rm T}$ increases.", "Therefore, we choose to keep the rapidity of the $Q\\bar{Q}$ pair the same by providing the necessary longitudinal boost after the transverse momentum broadening.", "We also enforce the momentum conservation of the whole parton system of each event by letting the light (anti)quarks share the opposite value of the total $\\vec{k_{\\rm T}}$ given to all $Q\\bar{Q}$ pairs in the event.",
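As a rough illustration of this prescription, the following Python sketch samples a kick from the Gaussian above and applies it to a $Q\\bar{Q}$ pair while keeping the pair rapidity fixed; the function names, the default arguments and the simple momentum bookkeeping are our own illustrative choices under the stated formulas, not the actual AMPT implementation.

```python
import math
import random

def sample_kick(w0, n_coll, i, delta=7.0):
    """Sample (kx, ky) from f(kT) = exp(-kT^2 / w^2) / (pi w^2).

    This 2D Gaussian corresponds to independent Gaussian components
    with standard deviation w / sqrt(2).
    """
    w = w0 * math.sqrt(1.0 + (n_coll - i) * delta)
    sigma = w / math.sqrt(2.0)
    return random.gauss(0.0, sigma), random.gauss(0.0, sigma)

def broaden_pair(px, py, pz, mass, kx, ky):
    """Add the transverse kick while preserving y = arcsinh(pz / mT)."""
    mt = math.sqrt(mass**2 + px**2 + py**2)
    y = math.asinh(pz / mt)              # pair rapidity before the kick
    px, py = px + kx, py + ky
    mt = math.sqrt(mass**2 + px**2 + py**2)
    return px, py, mt * math.sinh(y)     # pz rescaled so y is unchanged

# Hypothetical example: a pair from a radiating participant nucleon (i = 1)
# with n_coll = 3, using w0 = 0.375 GeV/c as quoted above for p+p collisions.
kx, ky = sample_kick(0.375, 3, 1)
print(broaden_pair(1.0, 0.0, 5.0, 3.1, kx, ky))
```

The total $\\vec{k_{\\rm T}}$ handed to all pairs in an event would then be balanced by the light (anti)quarks, as described above.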
"Results and discussions.— Figure REF shows in the upper panels our results of the nuclear modification factor $R_{\\rm pPb}$ as functions of the transverse momentum for $D^0$ mesons and charged hadrons in minimum bias $p-$ Pb collisions at 5.02 TeV and 8.16 TeV in comparison with the experimental data.", "The middle panels show the elliptic flow coefficient $v_2\\lbrace 2\\rbrace $ in high multiplicity $p-$ Pb collisions.", "All results in Fig.", "REF are obtained with the full AMPT model, which has been improved with the Cronin effect and independent fragmentation for charm quarks with $\\sigma _{\\rm LQ}=0.5$ mb (except for the dot-dashed curves where $\\sigma _{\\rm LQ}=0.3$ mb), $\\sigma _{\\rm HQ}=1.5$ mb, and $\\delta =7.0$ .", "We see from panels (a) and (c) that this AMPT model can simultaneously describe the available $D^0$ meson $R_{\\rm pPb}$ data at 5.02 TeV [49] and $v_2$ data at 8.16 TeV [47] below the $p_{\\rm T}$ of 8 GeV$/c$ .", "In addition, the model well describes the charged hadron $R_{\\rm pPb}$  [84] and $v_2$  [85] at 5.02 TeV (solid curves) and reasonably describes the $K_{\\rm S}^0$ $v_2$ at 8.16 TeV [47] below $p_{\\rm T}\\sim 2$ GeV$/c$ , as shown in panels (b) and (d).", "Furthermore, panels (e) and (f) show the $D^0$ meson and charged hadron $p_{\\rm T}$ spectra in minimum bias $p-$ Pb and $p{+}p$ collisions at 5.02 TeV in comparison with the experimental data.", "We see that the AMPT model can well describe the $D^0$ $p_{\\rm T}$ spectra  [49] in both $p{+}p$ and $p-$ Pb systems, while the agreements with the charged hadron $p_{\\rm T}$ spectra [84] are reasonable below $p_{\\rm T}\\sim 1.5$ GeV$/c$ .", "Figure: $R_{\\rm pPb}$ of (a) $D^0$ mesons and (b) charged hadrons in minimum bias $p-$ Pb collisions, $v_2$ of (c) $D^0$ mesons, (d) charged hadrons and $K_{\\rm S}^0$ in high multiplicity $p-$ Pb collisions, and the $p_{\\rm T}$ spectra of (e) $D^0$ mesons and (f) charged hadrons in minimum bias $p-$ Pb and $p{+}p$ collisions at 5.02 TeV from the improved AMPT model in comparison with the experimental data around mid-rapidity.", "In our analysis, we exactly follow the procedures of the ALICE and CMS experiments [85], [47], [84], [49].", "Specifically, the $D^0$ meson and charged hadron nuclear modification factors are analyzed for minimum bias collisions within $-0.96<y_{\\rm cm}<0.04$ and $-0.3<\\eta _{\\rm cm}<1.3$ , respectively.", "The elliptic flow coefficient is analyzed for high multiplicity $p-$ Pb events within $N_{\\rm track} \\in [185,220)$ at 5.02 TeV and $N_{\\rm track} \\in [185,250)$ at 8.16 TeV, where $N_{\\rm track}$ is the number of charged hadrons with $p_{\\rm T}>0.4$ GeV$/c$ within $|\\eta _{\\rm cm}|<2.4$ .", "To calculate the elliptic flow from two-particle correlations, we apply $|\\Delta \\eta |>2$ at 5.02 TeV and $|\\Delta \\eta |>1$ at 8.16 TeV, where charged hadrons are selected within $|\\eta _{\\rm cm}|<2.4$ while $D^0$ and $K_{\\rm S}^0$ mesons are within $-1.46<y_{\\rm cm}<0.54$ .", "The elliptic flow $v_2\\lbrace 2\\rbrace $ , written as $v_2$ for brevity, is calculated as [85], [47] $v_2(\\rm tri)=\\frac{V_{2\\Delta }({\\rm tri,ref})}{\\sqrt{V_{2\\Delta }({\\rm ref,ref})}},$ where “tri” represents the trigger particle of interest, and “ref” represents a reference charged hadron with $0.3<p_{\\rm T}<3.0$ GeV$/c$ .", "Note that the result of a particle species in this study represents the average of the particle and its corresponding anti-particle.",
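To make the two-particle method concrete, here is a schematic Python version of the $v_2\\lbrace 2\\rbrace $ calculation; it evaluates the pair averages $V_{2\\Delta }=\\langle \\cos 2\\Delta \\varphi \\rangle $ directly from lists of (phi, eta) tuples and ignores the mixed-event and efficiency corrections of the real ALICE and CMS analyses, so it should be read as a sketch of the formula rather than a reproduction of the experimental procedure.

```python
import math

def v2_delta(trig, ref, min_deta):
    """<cos(2 dphi)> over (trigger, reference) pairs with |deta| > min_deta.

    Each particle is a (phi, eta) tuple; the pseudorapidity gap also
    removes self-pairs when the same list is passed twice.
    """
    vals = [math.cos(2.0 * (phi_t - phi_r))
            for phi_t, eta_t in trig
            for phi_r, eta_r in ref
            if abs(eta_t - eta_r) > min_deta]
    return sum(vals) / len(vals)

def v2_two_particle(trig, ref, min_deta=2.0):
    """v2{2}(tri) = V2D(tri, ref) / sqrt(V2D(ref, ref))."""
    return v2_delta(trig, ref, min_deta) / math.sqrt(v2_delta(ref, ref, min_deta))
```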
"Since the available data on $D^0$ mesons are the $R_{\\rm pPb}$ at 5.02 TeV and $v_2$ at 8.16 TeV, we also show in Fig.", "REF (a) and (c) the predictions of $R_{\\rm pPb}$ at 8.16 TeV (dashed curve) and $v_2$ at 5.02 TeV (solid curve).", "We see that the $R_{\\rm pPb}$ results are almost the same at the two energies but $v_2$ shows an increase with the colliding energy.", "This is also the case for the charged hadron $R_{\\rm pPb}$ and $v_2$ , as shown by the dashed curves for 8.16 TeV in Fig.", "REF (b) and (d).", "We note that the model overestimates the $v_2$ of $K_{\\rm S}^0$ mesons at 8.16 TeV when the same light quark cross section $\\sigma _{\\rm LQ}=0.5$ mb, which well reproduces the charged hadron $v_2$ at 5.02 TeV, is used.", "On the other hand, there is no good reason that the parton scattering cross section should be the same at different energies.", "For example, the shear viscosity-to-entropy ratio satisfies $\\eta /s \\propto 1/(n^{2/3} \\sigma )$ for a parton gas in equilibrium under isotropic scatterings [86], [87], where $n$ is the parton number density.", "As a result, for a constant $\\eta /s$ , $\\sigma $ would be smaller at higher densities.", "For anisotropic scatterings, which is the case for the parton cascade in the AMPT model, the relationship between $\\eta /s$ and the parton cross section is more complicated but qualitatively similar [87].", "Therefore, we have also explored the effect of a different light quark cross section.", "As shown by the dot-dashed curves in Fig.", "REF (a)-(d), changing $\\sigma _{\\rm LQ}$ from 0.5 mb to 0.3 mb at 8.16 TeV enables the AMPT model to well reproduce the $K_{\\rm S}^0$ $v_2$ data, but this change has almost no effect on the $D^0$ meson $R_{\\rm pPb}$ and $v_2$ .", "As expected, the smaller $\\sigma _{\\rm LQ}$ leads to a slight enhancement of the charged hadron $R_{\\rm pPb}$ , as shown in Fig.", "REF (b).", "Figure: Ratio of the $p_{\\rm T}$ spectrum from the full AMPT model over that from the AMPT model with different test configurations for (a) charm quarks and (b) light quarks in $p-$ Pb collisions at 5.02 TeV, (c) $v_2$ of charm quarks at 8.16 TeV, and (d) $v_2$ of light quarks at 5.02 TeV from the AMPT model for $p-$ Pb collisions.", "The Cronin effect is turned off with $\\delta =0$ .", "The inset in panel (d) shows the light quark ${\\rm V}_{2\\Delta }({\\rm tri,ref})$ .", "We now turn off various effects separately to demonstrate the key ingredients that allow the improved AMPT model to simultaneously describe the $D^0$ meson $R_{\\rm pPb}$ and $v_2$ .", "Figure REF (a) shows the ratio of the charm quark $p_{\\rm T}$ spectrum from the full AMPT model over that from different test configurations of the AMPT model for minimum bias $p-$ Pb collisions at 5.02 TeV, while Fig.", "REF (b) shows the ratios for light quarks.", "The dashed curves in panels (a) and (c) represent the results of charm quarks without charm quark scatterings (but with scatterings among light quarks), while the dashed curves in panels (b) and (d) represent the light quark results without scatterings among light quarks.", "We see that parton scatterings suppress the parton yield at relatively high $p_{\\rm T}$ (and enhance the yield at low $p_{\\rm T}$ ) due to the parton energy loss or jet quenching [88], [51]; this effect is especially significant for charm quarks, partially due to the larger scattering cross section for charm quarks.", "From the dotted curves that correspond to turning off the charm Cronin effect, we find that the Cronin effect significantly enhances the charm quark yield at relatively high $p_{\\rm T}$ and essentially cancels out the effect from jet quenching.", "In addition, we see that the nuclear shadowing from the impact parameter-dependent EPS09s parameterization [68] has almost no effect on the light quark $p_{\\rm T}$ spectrum but a modest suppression effect on the charm quark $p_{\\rm T}$ spectrum in the transverse momentum range shown in Fig.", "REF .", "We show in Fig.", "REF (c) and (d) the model results on the charm quark $v_2$ at 8.16 TeV and the light quark $v_2$ at 5.02 TeV, respectively, for the high multiplicity $p-$ Pb collisions.", "From the dashed curves, we see that the charm quark $v_2$ is mostly generated from
the scatterings of charm quarks with the medium, while the initial state correlation before rescatterings (or the non-flow) contributes significantly to the light quark $v_2$ but little to the charm quark $v_2$ .", "We also see that the Cronin effect for charm quarks modestly suppresses the charm quark $v_2$ ; it has little effect on the light quark $v_2$ , as expected.", "Note that in Fig.", "REF (d) the light quark $v_2(p_{\\rm T})$ without scatterings among light quarks is comparable to or even higher than that with parton scatterings at $p_{\\rm T}> 2.2$ GeV$/c$ .", "The inset in Fig.", "REF (d) shows the corresponding numerator, ${\\rm V}_{2\\Delta }({\\rm tri,ref})$ , for the light quark elliptic flow, where the result without scatterings is significantly lower than that with parton scatterings as expected.", "Therefore, the relatively high $v_2(p_{\\rm T})$ without scatterings results from the fact that the denominator $\\sqrt{{\\rm V}_{2\\Delta }({\\rm ref,ref})}$ in Eq.", "(REF ), which corresponds to the reference elliptic flow, is much smaller without scatterings.", "Figure: (a) $R_{\\rm pPb}$ at 5.02 TeV and (b) $v_2$ at 8.16 TeV for $D^0$ mesons in $p-$ Pb collisions from the full AMPT model (solid), the model without charm quark scatterings (dashed), the model without the Cronin effect for charm quarks (dotted), and the model without the Cronin effect at a smaller charm quark scattering cross section (long-dashed) in comparison with the experimental data (symbols).", "We now examine the effects of transverse momentum broadening and parton scatterings on the $D^0$ meson $R_{\\rm pPb}$ and $v_2$ .", "When the Cronin effect is turned off (with $\\delta =0$ ), we see in Fig.", "REF (a) that the $D^0$ $R_{\\rm pPb}$ is significantly suppressed at high $p_{\\rm T}$ but enhanced at low $p_{\\rm T}$ .", "Therefore, the Cronin effect is very important for the $D^0$ meson $R_{\\rm pPb}$ .", "In addition, parton scatterings (at $\\sigma _{\\rm HQ}=1.5$ mb) are seen to suppress the $D^0$ meson $R_{\\rm pPb}$ at high $p_{\\rm T}$ , qualitatively the same as its effect on charm quarks as shown in Fig.", "REF (a).", "Quantitatively, the effect of parton scatterings on the $D^0$ meson $R_{\\rm pPb}$ is smaller than that on charm quarks; this is because the fraction of charm quarks hadronizing via quark coalescence (instead of fragmentation) increases with the amount of scatterings and thus the system size.", "When charm quark scatterings are turned off (dashed curve) in the AMPT model, the charm quark yield at high $p_{\\rm T}$ is enhanced due to the absence of energy loss.", "On the other hand, more charm quarks hadronize via independent fragmentation (than the case with charm quark scatterings), which reduces the enhancement of $D^0$ mesons at high $p_{\\rm T}$ .", "In Fig.", "REF (b), we see that the $D^0$ meson $v_2$ is mostly very small when charm quark scatterings are turned off (dashed curve); the $D^0$ $v_2$ is thus mostly generated by parton scatterings, similar to the charm quark $v_2$ shown in Fig.", "REF (c).", "Note that, even if charm quarks have zero $v_2$ , the $D^0$ $v_2$ can be finite since it has a contribution from the light quark $v_2$ through quark coalescence [34].", "In the AMPT model without the Cronin effect, the $D^0$ meson $v_2$ becomes modestly higher, as shown by the dotted curve.", "Therefore, the Cronin effect modestly suppresses the $D^0$ $v_2$ .", "We have also decreased the charm quark scattering cross section to 1.0 mb, from the default value of 1.5 mb
in the full model, to better fit the $D^0$ meson $v_2$ (long-dashed curve).", "The corresponding $D^0$ meson $R_{\\rm pA}$ result is shown in Fig.", "REF (a) as the long-dashed curve, which is seen to still severely underestimate the data at high $p_{\\rm T}$ .", "The Cronin effect is thus crucial for the simultaneous description of the $D^0$ meson $R_{\\rm pPb}$ and $v_2$ data, according to our model calculations.", "Many previous theoretical methods and phenomenological models have found the Cronin effect to be important.", "For example, pQCD results [75] have indicated that the Cronin effect is needed to describe the experimental data of open heavy flavors at fixed-target energies.", "The transverse momentum broadening or $k_{\\rm T}$ kick is also needed to describe the quarkonium $p_{\\rm T}$ distribution and heavy flavor azimuthal distributions at RHIC and LHC with the pQCD-based HVQMNR code [76], [81], where an energy-dependent $\\langle k_{\\rm T}^2 \\rangle $ (equivalent to $w^2$ in Eq.", "(REF )) is sometimes introduced to enhance the Cronin effect at higher energies.", "For comparison, a study from the HVQMNR code [81] used $\\langle k_{\\rm T}^2 \\rangle = 1.46~{\\rm GeV}^2$ for $p{+}p$ collisions at 5.02 TeV, higher than our value of $w^2_{pp}= 0.14~{\\rm GeV}^2$ ; it used $\\langle k_{\\rm T}^2 \\rangle =1.91~{\\rm GeV}^2$ for minimum bias $p$ -Pb collisions at 5.02 TeV, also higher than our average value of $\\langle w^2_{pA} \\rangle =1.39~{\\rm GeV}^2$ .", "The AMPT model currently only includes the collisional energy loss via two-body elastic parton scatterings, while the parton radiative energy loss is not included.", "In the relativistic limit, the heavy quark collisional energy loss has been shown to depend on the path length $L$ linearly while the radiative energy loss scales as $L^2$  [89].", "Therefore, the collisional energy loss of charm quarks is expected to be more important than the radiative energy loss for small systems like $p$ -Pb, although the $p_{\\rm T}$ scale below which the collisional energy loss dominates is model-dependent [38], [90], [91].", "In addition, the radiative energy loss of charm quarks through inelastic collisions would suppress the charm $p_{\\rm T}$ spectrum at high $p_{\\rm T}$ , qualitatively the same as the collisional energy loss through elastic collisions.", "Therefore, the inclusion of the charm quark radiative energy loss would not change our conclusion that the Cronin effect is needed to compensate for the effect of energy loss and consequently describe the observed $D^0$ meson $R_{\\rm pPb}$ and $v_2$ simultaneously.", "Since the Cronin effect is expected to be stronger for a larger collision system, our study also suggests that it would be important to include the Cronin effect in studies of light hadron [92] or heavy hadron $R_{\\rm AA}$  [24] in large systems.", "Currently, several models are able to reasonably describe the $D$ meson $R_{\\rm AA}$ and $v_2$  [39], [31], [40], [42].", "The inclusion of the Cronin effect may change the model results and affect the extracted values of the charm quark transport coefficients.", "Therefore, further studies, especially those with a predicted (instead of parameterized) Cronin effect, will lead to a better understanding of the roles of cold nuclear matter and hot medium effects on heavy flavor production in small to large collision systems.", "Summary.— We have studied the $D^0$ meson as well as the charged hadron nuclear modification factor $R_{\\rm pPb}$ and elliptic flow $v_2$ in $p-$
Pb collisions at LHC energies with a multi-phase transport model.", "After improving the model with the transverse momentum broadening (or the Cronin effect) and independent fragmentation for charm quarks, we are able to provide the first simultaneous description of both the $R_{\\rm pPb}$ and $v_2$ of $D^0$ mesons below the transverse momentum of 8 GeV$/c$ .", "In addition, the transport model reasonably describes the $D^0$ meson $p_{\\rm T}$ spectra in both $p-$ Pb and $p{+}p$ collisions and the low-$p_{\\rm T}$ charged hadron $p_{\\rm T}$ spectra, $R_{\\rm pPb}$ and $v_2$ .", "We find that both parton scatterings and the Cronin effect significantly affect the $D^0$ meson $R_{\\rm pPb}$ .", "On the other hand, the $D^0$ meson $v_2$ is mostly generated by parton scatterings, while the Cronin effect leads to a modest reduction of the charm $v_2$ .", "In particular, we demonstrate the importance of the Cronin effect for resolving the $D^0$ meson $R_{\\rm pPb}$ and $v_2$ puzzle at LHC energies.", "Since the Cronin effect is expected to grow with the system size, this study also implies the importance of including the Cronin effect in studies of heavy hadron $R_{\\rm AA}$ and $v_2$ in large systems.", "Acknowledgement.— We thank Jacek Otwinowski for the clarification about the ALICE trigger.", "This work is supported by the National Natural Science Foundation of China under Grants No.", "12175084, 11890710 (11890711) (C.Z.", "and S.S.) and 11905188 (L.Z.", "), the National Key Research and Development Program of China under Grant No.", "2020YFE0202002 (S.S.), the Chinese Scholarship Council (C.Z.", "), and the National Science Foundation under Grant No.", "PHY-2012947 (Z.-W.L.", ")." ] ]
2210.07767
[ [ "Some New Results on Monochromatic Sums and Products in the Rationals" ], [ "Abstract Our aim in this paper is to show that, for any $k$, there is a finite colouring of the set of rationals whose denominators contain only the first $k$ primes such that no infinite set has all of its finite sums and products monochromatic.", "We actually prove a `uniform' form of this: there is a finite colouring of the rationals with the property that no infinite set whose denominators contain only finitely many primes has all of its finite sums and products monochromatic.", "We also give various other results, including a new short proof of the old result that there is a finite colouring of the naturals such that no infinite set has all of its pairwise sums and products monochromatic." ], [ "Introduction", "The Finite Sums Theorem [3] states that whenever the natural numbers are finitely coloured there exists an infinite sequence all of whose finite sums are the same colour.", "By considering just powers of 2, this immediately implies the corresponding result for products: whenever the naturals are finitely coloured there exists an infinite sequence all of whose products are the same colour.", "But what happens if we want to combine sums and products?", "Hindman [4] showed that one cannot ask for sums and products, even just pairwise: there is a finite colouring of the naturals for which no (injective) sequence has the set of all of its pairwise sums and products monochromatic.", "The question of what happens if we move from the naturals to a larger space is of especial interest.", "Bergelson, Hindman and Leader [1] showed that if we have a finite colouring of the reals with each colour class measurable then there exist a sequence with the set of all of its finite sums and products monochromatic.", "(They actually proved a stronger statement: one may insist that the infinite sums are the same colour as well.)", "However, they also showed that there is a finite colouring of the dyadic rationals such that no infinite sequence has all of its finite sums and products monochromatic.", "The questions of what happens in general for finite colourings, in the rationals or the reals, remain open.", "The arguments in [1] do not extend beyond the dyadics.", "Our aim in this paper is to go further.", "Let $\\mathbb {Q}_{(k)}$ denote the set of rationals whose denominators (in reduced form) involve only the first $k$ primes.", "Then we show that there is a finite colouring of $\\mathbb {Q}_{(k)}$ such that no infinite sequence has all of its finite sums and products monochromatic.", "In fact, we strengthen this result in two ways.", "First of all, we insist that the number of colours does not grow with $k$ , and more importantly we give one colouring that `works for all $\\mathbb {Q}_{(k)}$ at once'.", "The actual statement is: there is a finite colouring of the rationals such that no sequence for which the set of primes that appear in the denominators is finite has the set of its finite sums and products monochromatic.", "This is really made up of two separate results: one about just pairwise sums, asserting that no such bounded sequence can have all of its pairwise sums and products monochromatic, and the other about general finite sums, saying that no such unbounded sequence can have all of its finite sums and products monochromatic.", "Our proofs are based on a careful analysis of the structure of addition and multiplication in $\\mathbb {Q}_{(k)}$ , and also on a result (Lemma REF below) about colouring pairs 
of naturals that may be of independent interest.", "One application of this lemma is a new short proof of the result of Hindman mentioned above, about pairwise sums and products in the naturals.", "We also prove various other related results.", "For example, we give a finite colouring of the reals such that no sequence that is bounded and bounded away from zero can have its pairwise sums and products monochromatic.", "The plan of the paper is as follows.", "In Section 2 we state and prove our lemma about colouring pairs of naturals, and use it in Section 3 to give a new proof of the result about pairwise sums and products in the naturals.", "In Section 4 we give the above result about the reals, which we then build on in Section 5 to prove the statement about pairwise sums and products in bounded sequences.", "Amusingly, it is not entirely clear that the colouring in Section 5 does not prevent monochromatic finite sums and products from every sequence in the rationals, and so we digress in Section 6 to exhibit such a sequence for this colouring.", "Finally in Section 7 we construct a colouring of the rationals such that if a sequence has the set of its finite sums and products monochromatic and the set of primes that appear in the denominators of its terms is finite, then the sequence has to be bounded – together with the results of Section 5 this establishes the main result.", "Our notation is standard.", "We restrict our attention to the positive rationals and the positive reals (which we write as $\\mathbb {Q}^+$ and $\\mathbb {R}^+$ respectively), since in all situations either it would be impossible to use negative values (for example because the sums are negative but the products are positive) or, if say we are dealing only with sums, any colouring of the positive values could be reflected, using new colours, to the negative values.", "Throughout the paper $\\mathbb {N}$ is the set of positive integers.", "We end this introduction by mentioning that in the case of finite sequences very little is known.", "The question of whether or not in every finite colouring of the naturals there exist two (distinct) numbers that, together with their sum and product, all have the same colour, remains tantalisingly open.", "Moreira [6] showed that we may find $x$ and $y$ such that all of $x,x+y,xy$ have the same colour, and in the rationals Bowen and Sabok [2] showed that we can indeed find the full set $x,y,x+y,xy$ .", "But, for example, for sums and products from a set of size three or more, almost nothing is known."
], [ "Some useful lemmas", "In this section we prove the lemma mentioned above that we will make use of several times (Lemma REF ).", "We will also need two slight variants of it, namely Lemma REF and Lemma REF .", "Lemma 1 There exists a finite colouring $\\Phi $ of $\\mathbb {N}^{(2)}=\\lbrace (a,b)\\in \\mathbb {N}\\times \\mathbb {N}:a<b\\rbrace $ such that we cannot find two strictly increasing sequences of naturals, $(a_n)_{n\\ge 1}$ and $(b_n)_{n\\ge 1}$ , such that $a_i<b_i$ for every $i$ and $\\lbrace (a_n+a_m,b_n+b_m):n<m\\rbrace \\cup \\lbrace (a_n,b_m):n<m\\rbrace $ is monochromatic.", "The way this will be of use to us is, roughly speaking, as follows.", "Suppose that we are trying to show that a certain kind of sequence cannot have say its pairwise sums and products monochromatic (in the sense that there is a colouring that prevents this).", "Then it is enough to find two `parameters' $a$ and $b$ so that when we multiply two terms of the sequence the $a$ -values and the $b$ -values add, but when we add two terms the resulting $a$ -value is the $a$ -value of the later term and the $b$ -value is the $b$ -value of the earlier term.", "Before starting the proof, we need a little notation.", "When a natural number is written in binary we call the rightmost 1 the `last digit' of the number (the end), and the leftmost 1 the `first digit' of the number (the start).", "So for example the number 10001010 has start 7 and end 1.", "Also, we say that natural numbers $a$ and $b$ are `right to left disjoint' if the end of $b$ is greater than the start of $a$ .", "We colour a pair $(a,b)$ by $(c_1, c_2, c_3, c_4, c_5)$ , where $c_1$ is the position of the last digit of $a \\text{ mod 2 }$ , $c_2$ is the position of the last digit of $b \\text{ mod 2 }$ , and $c_3$ and $c_4$ are the digits immediately to the left of the last digits of $a$ and $b$ respectively.", "Finally $c_5$ is 0 if the supports of $a$ and $b$ are right to left disjoint, and 1 otherwise.", "Suppose for a contradiction that we can find two sequences $(a_n)_{n\\ge 1}$ and $(b_n)_{n\\ge 1}$ as given in the statement of the lemma.", "Assume that for some $n<m$ , $a_n$ and $a_m$ end at the same position.", "Say that position is $i$ .", "Because $(a_n, b_m)$ and $(a_m, b_{m+1})$ have to have the same colour, it follows that $a_n$ and $a_m$ have the same last 2 digits.", "This implies that the position of the last digit of $a_n+a_m$ is $i+1$ .", "On the other hand $(a_n, b_m)$ and $(a_n+a_m, b_n+b_m)$ must have the same colour, but they have a different $c_1$ , a contradiction.", "Therefore we know that all $a_n$ have to end at different positions.", "By passing to subsequences, we may assume that the $a_n$ have pairwise right to left disjoint supports.", "Since $(a_n, b_m)$ and $(a_{n-1}, b_n)$ have the same colour, the same argument as above shows that for any $1<n<m$ , $b_n$ and $b_m$ must end at different positions.", "Thus by passing to subsequences we may assume that both $a_n$ have right to left disjoint supports and $b_n$ have right to left disjoint supports.", "Finally, we can choose $n$ large enough that $a_1$ and $b_n$ have right to left disjoint supports and $b_1$ and $a_n$ have right to left disjoint supports.", "Thus $c_5=0$ for the pair $(a_1, b_n)$ , but $c_5=1$ for the pair $(a_1+a_n,b_1+b_n)$ (as the right-hand side starts before the left-hand side finishes), a contradiction.", "We will also need two slight variants of this lemma.", "Lemma 2 There exists a finite colouring $\\Psi $ of $\\mathbb {N}^{(2)}$ such 
that we cannot find two strictly increasing sequences of naturals, $(a_n)_{n\\ge 1}$ and $(b_n)_{n\\ge 1}$ , such that $a_i<b_i$ for every $i$ and $\\lbrace (a_n+a_m,b_n+b_m+1):n<m\\rbrace \\cup \\lbrace (a_n,b_m):n<m\\rbrace $ is monochromatic.", "Let $\\Phi $ be the colouring in Lemma REF .", "Define $\\Psi $ by $\\Psi (a,b)=\\Phi (a,b+1)$ .", "Suppose we can find sequences $(a_n)_{n\\ge 1}$ and $(b_n)_{n\\ge 1}$ with the above properties.", "Let $d_n=b_n+1$ .", "Then for $n<m$ we have $\\Phi (a_n, d_m)=\\Phi (a_n,b_m+1)=\\Psi (a_n,b_m)$ and $\\Phi (a_n+a_m,d_n+d_m)=\\Phi (a_n+a_m,b_n+b_m+2)=\\Psi (a_n+a_m,b_n+b_m+1)$ , contradicting Lemma REF .", "The next lemma is proved in a completely analogous manner; we omit the proof.", "Lemma 3 There exists a finite colouring $\\Psi ^{\\prime }$ of $\\mathbb {N}^{(2)}$ such that we cannot find two strictly increasing sequences of natural numbers, $(a_n)_{n\\ge 1}$ and $(b_n)_{n\\ge 1}$ , such that $a_i<b_i$ for every $i\\ge 1$ and $\\lbrace (a_n+a_m-1, b_n+b_m):n<m\\rbrace \\cup \\lbrace (a_n, b_m):n<m\\rbrace $ is monochromatic.$\\square $ Finally, we note a simple fact that we will use repeatedly.", "Lemma 4 There exists a finite colouring $\\varphi :\\mathbb {Z}\\rightarrow \\lbrace 0,1\\rbrace $ such that $\\varphi (k+1)\\ne \\varphi (2k)$ and $\\varphi (k+1)\\ne \\varphi (2k+1)$ for all $k\\notin \\lbrace 0,1\\rbrace $ , and $\\varphi (0)\\ne \\varphi (1)$ and $\\varphi (2)\\ne \\varphi (3)$ .", "We build $\\varphi $ inductively.", "Let $\\varphi (0)=\\varphi (2)=0$ and $\\varphi (1)=\\varphi (3)=1$ .", "We now assume that $l\\le -1$ , $k\\ge 2$ and that $\\varphi $ has been defined on $\\lbrace 2l+2,2l+3,\\cdots ,2k-1\\rbrace $ .", "Since $0<k+1\\le 2k-1$ , $\\varphi (k+1)$ is defined, thus we set $\\varphi (2k)=\\varphi (2k+1)=1-\\varphi (k+1)$ .", "Similarly, since $2l+2\\le l+1\\le 0$ , $\\varphi (l+1)$ is defined, so we set $\\varphi (2l)=\\varphi (2l+1)=1-\\varphi (l+1)$ , which finishes the induction step."
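Since these colourings are defined entirely through binary digit positions, they are easy to compute; the following Python sketch, with function names of our own choosing, implements the colouring $\\Phi $ of Lemma 1 and the colouring $\\varphi $ of Lemma 4 (the latter by the same induction as in the proof).

```python
from functools import lru_cache

def end(a):
    """Position of the rightmost 1 of a in binary (the `last digit')."""
    return (a & -a).bit_length() - 1

def start(a):
    """Position of the leftmost 1 of a in binary (the `first digit')."""
    return a.bit_length() - 1

def Phi(a, b):
    """The colouring (c1, c2, c3, c4, c5) of Lemma 1 for a pair a < b."""
    c1, c2 = end(a) % 2, end(b) % 2
    c3 = (a >> (end(a) + 1)) & 1         # digit just left of a's last digit
    c4 = (b >> (end(b) + 1)) & 1
    c5 = 0 if end(b) > start(a) else 1   # right to left disjoint supports?
    return (c1, c2, c3, c4, c5)

@lru_cache(maxsize=None)
def varphi(k):
    """The two-colouring of the integers from Lemma 4."""
    if k in (0, 2):
        return 0
    if k in (1, 3):
        return 1
    return 1 - varphi(k // 2 + 1)        # k = 2m or 2m+1 with m = k // 2
```

For example, `varphi(5)` is $1-\\varphi (3)=0$ while `varphi(8)` is $1-\\varphi (5)=1$ , in accordance with $\\varphi (k+1)\\ne \\varphi (2k)$ for $k=4$ .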
], [ "Colouring the naturals", "To illustrate the usefulness of Lemma REF , we use it here to give a short proof of the result of Hindman [4] about pairwise sums and products in the naturals.", "Because of the use of Lemma REF , what we are really doing is analysing the positions of the digits in binary of the numbers that are themselves the positions of the digits in binary of the terms of the sequence.", "For a natural number $a$ , we write $e_2(a)$ for the end of $a$ (the subscript is because later we will be looking at non-binary bases) and $s_2(a)$ for the start of $a$ .", "We also write $g_a$ for the difference between the positions of the two most significant 1s of $a$ in binary, and call it the `gap' or `left gap' of $a$ .", "Thus for example 10001010 has gap 4.", "Theorem 5 There exists a finite colouring $\\theta $ of $\\mathbb {N}$ such that there is no injective sequence $(x_n)_{n\\ge 1}$ of natural numbers with the property that all numbers $x_n+x_m$ and $x_n x_m$ for all $1\\le n<m$ have the same colour.", "We begin by extending the colouring $\\Phi $ from Lemma REF to a colouring of $(\\mathbb {N}\\cup \\lbrace 0\\rbrace )\\times (\\mathbb {N}\\cup \\lbrace 0\\rbrace )$ by setting $\\Phi (a,b)$ to be 0 if $a=0$ or $b=0$ or $a\\ge b$ .", "Now let $a$ be a natural number.", "We define $\\theta (a)=(p_a, e_2(a)\\text{ mod }2,g_a\\text{ mod }2,\\Phi (e_2(a),s_2(a)),\\Phi (e_2(a),s_2(a)+1),\\varphi ((e_2(a)),t_a)$ where $p_a$ is 1 if $a$ is a power of 2 and 0 otherwise, and $t_a=0$ if $g_a=1$ and 1 otherwise.", "Observe that $\\varphi $ ensures that there are no two numbers $a$ and $b$ of the same colour such that their end positions are $i+1$ and $2i$ respectively, for some $i\\ne 1$ .", "Suppose for a contradiction that there exists a strictly increasing sequence $(x_n)_{n\\ge 1}$ such that all pairwise sums and products have the same colour with respect to $\\theta $ .", "We observe that the first component of the colouring tells us that we cannot have two distinct powers of 2 in our sequence, and so we may assume that no term is a power of 2.", "Let $a_n$ be the position of the last digit of $x_n$ .", "Note that the position of the last digit of $x_nx_m$ is $a_n+a_m$ .", "Similarly, let $b_n$ be the position of the first digit of $x_n$ .", "We know that there will either be infinitely many $x_n$ such that $x_n<2^{b_n}\\sqrt{2}$ , or infinitely many $x_n$ such that $x_n>2^{b_n}\\sqrt{2}$ .", "By passing to a subsequence we may assume that either $x_n<2^{b_n}\\sqrt{2}$ for all $n$ , or $x_n>2^{b_n}\\sqrt{2}$ for all $n$ .", "In the first case, the position of the first digit of $x_nx_m$ is $b_n+b_m$ , while in the second case it is $b_n+b_m+1$ .", "Assume first that all elements of the sequence end at position 1.", "We either have infinitely many terms with the same gap, or infinitely many terms with pairwise distinct gaps.", "If the latter is true we may assume that $(x_n)_{n\\ge 1}$ has pairwise distinct gaps.", "Therefore we can find two $m$ and $n$ such that $x_n=2+2^i+\\cdots $ and $x_{m}=2+2^{j}+\\cdots $ where $2<i<j$ .", "In this case the gap of the sum is $i-2$ , while the gap of the product is $i-1$ , a contradiction.", "Therefore we may assume that all $x_n$ end at position 1 and they have the same gap $g^{\\prime }$ .", "If $g^{\\prime }>1$ then by the pigeonhole principle (and passing to a subsequence) we may assume that all terms have the same digit in position $g^{\\prime }+2$ .", "Now it is easy to see that the sum of any two terms has gap $g^{\\prime }$ , while 
"Observe that $\\varphi $ ensures that there are no two numbers $a$ and $b$ of the same colour such that their end positions are $i+1$ and $2i$ respectively, for some $i\\ne 1$ .", "Suppose for a contradiction that there exists a strictly increasing sequence $(x_n)_{n\\ge 1}$ such that all pairwise sums and products have the same colour with respect to $\\theta $ .", "We observe that the first component of the colouring tells us that we cannot have two distinct powers of 2 in our sequence, and so we may assume that no term is a power of 2.", "Let $a_n$ be the position of the last digit of $x_n$ .", "Note that the position of the last digit of $x_nx_m$ is $a_n+a_m$ .", "Similarly, let $b_n$ be the position of the first digit of $x_n$ .", "We know that there will either be infinitely many $x_n$ such that $x_n<2^{b_n}\\sqrt{2}$ , or infinitely many $x_n$ such that $x_n>2^{b_n}\\sqrt{2}$ .", "By passing to a subsequence we may assume that either $x_n<2^{b_n}\\sqrt{2}$ for all $n$ , or $x_n>2^{b_n}\\sqrt{2}$ for all $n$ .", "In the first case, the position of the first digit of $x_nx_m$ is $b_n+b_m$ , while in the second case it is $b_n+b_m+1$ .", "Assume first that all elements of the sequence end at position 1.", "We either have infinitely many terms with the same gap, or infinitely many terms with pairwise distinct gaps.", "If the latter is true we may assume that $(x_n)_{n\\ge 1}$ has pairwise distinct gaps.", "Therefore we can find two indices $n$ and $m$ such that $x_n=2+2^i+\\cdots $ and $x_{m}=2+2^{j}+\\cdots $ where $2<i<j$ .", "In this case the gap of the sum is $i-2$ , while the gap of the product is $i-1$ , a contradiction.", "Therefore we may assume that all $x_n$ end at position 1 and they have the same gap $g^{\\prime }$ .", "If $g^{\\prime }>1$ then by the pigeonhole principle (and passing to a subsequence) we may assume that all terms have the same digit in position $g^{\\prime }+2$ .", "Now it is easy to see that the sum of any two terms has gap $g^{\\prime }$ , while the product has gap $g^{\\prime }+1$ , a contradiction.", "Hence we must have $g^{\\prime }=1$ .", "In other words, we may assume that all terms are of the form $2+2^2+\\cdots $ , and by the pigeonhole principle we may further assume that the digit in position 3 is the same for all terms.", "A simple computation shows that the sum of any two terms has gap 1, while the product does not, a contradiction.", "This shows that we must have infinitely many terms that do not end at position 1.", "Then, by passing to a subsequence, we may assume that no term of the sequence ends at position 1.", "If two terms $x_n$ and $x_m$ end at the same position, say $i\\ne 1$ , then they cannot have the same gap.", "Indeed, if that were the case, the position of the last digit of $x_n+x_m$ is $i+1$ , while the position of the last digit of $x_nx_m$ is $2i$ , a contradiction.", "Thus we have $x_n=2^i+2^{i+k_1}+\\cdots $ and $x_m=2^i+2^{i+k_2}+\\cdots $ for some $0<k_1<k_2$ (without loss of generality).", "The gap of the product is $k_1$ .", "If $k_1\\ne 1$ then the gap of the sum is $k_1-1$ , a contradiction.", "But among any three terms that have the same end positions (and thus different gaps), we must always have two with gaps not equal to 1.", "In other words, for any end position there are at most two terms that end there.", "By passing to a subsequence we may assume that the terms have right to left disjoint supports.", "To sum up, by passing to a subsequence, we may assume that the terms $x_n$ are strictly increasing and have pairwise right to left disjoint supports.", "Thus the start and end positions form two increasing sequences, and since for $n<m$ we have $e_2(x_n+x_m)=a_n$ and $s_2(x_n+x_m)=b_m$ , we are done by Lemma REF or Lemma REF ." ], [ "Colouring the reals", "In this section we prove the result about the reals mentioned in the introduction, that there is a colouring for which no sequence that is bounded and bounded away from zero has all of its pairwise sums and products monochromatic.", "There is a fair amount of notation, which will also be used in later sections, but all of it is very simple and self-explanatory.", "The aim is to analyse carefully how the `starting' few 1s (in binary) of the numbers behave, and especially how close together those first few 1s are.", "For $x\\in \\mathbb {R}^+$ , we define $a(x)$ to be the unique integer such that $2^{a(x)}\\le x<2^{a(x)+1}$ .", "Moreover, for $x\\in \\mathbb {R}^+\\setminus \\lbrace 2^k:k\\in \\mathbb {Z}\\rbrace $ , we define $b(x)=a(x-2^{a(x)})$ .", "In other words, for $x$ not an integer power of 2, $b(x)$ is the unique integer such that $2^{a(x)}+2^{b(x)}\\le x<2^{a(x)}+2^{b(x)+1}$ .", "For $x\\in \\mathbb {R}^+\\setminus \\lbrace 2^k:k\\in \\mathbb {Z}\\rbrace $ we also define $c(x)$ to be the unique integer such that $2^{a(x)+1}-2^{c(x)+1}\\le x<2^{a(x)+1}-2^{c(x)}$ .", "Note that if $x\\in \\mathbb {N}$ then $a(x)$ is what we called the start of $x$ in Section 2 and Section 3.", "If $x$ is not a power of 2, then $b(x)$ is the position of the second most significant digit 1 in the base 2 expansion of $x$ , and $c(x)$ is the position of the leftmost zero when $x$ is written in binary without leading 0s.", "We now define $A_0=\\lbrace x\\in \\mathbb {R}^+:2^{a(x)}<x<2^{a(x)+\\frac{1}{2}}\\rbrace $ , $A_1=\\lbrace x\\in \\mathbb {R}^+:2^{a(x)+\\frac{1}{2}}< x<2^{a(x)+1}\\rbrace $ , $C_1=\\lbrace 2^k:k\\in \\mathbb {Z}\\rbrace $ and $C_2=\\lbrace 2^{k+\\frac{1}{2}}:k\\in \\mathbb {Z}\\rbrace $ .", "We observe that $A_0, A_1, C_1$ , and $C_2$ are
pairwise disjoint sets that partition $\\mathbb {R}^+$ , and $A_0$ and $A_1$ are open in $\\mathbb {R}^+$ , while $C_1$ and $C_2$ are countable.", "Recalling the colouring $\\varphi $ in Lemma REF , define $G_i=\\lbrace x\\in \\mathbb {R}^+\\setminus C_1:\\varphi (a(x))=i\\rbrace $ for $i\\in \\lbrace 0,1\\rbrace $ .", "Since $G_i$ is the union of all the open intervals $(2^k,2^{k+1})$ where $k\\in \\mathbb {Z}$ and $\\varphi (k)=i$ , we see that $G_i$ is open in $\\mathbb {R}^{+}$ .", "Moreover, $C_1$ , $G_0$ and $G_1$ also form a partition of the positive reals, where $C_1$ is countable and $G_0$ and $G_1$ are open.", "Next we define $C_3=\\lbrace 2^k+2^l:k,l\\in \\mathbb {Z}$ and $l<k\\rbrace $ , and $H_i=\\lbrace x\\in \\mathbb {R}^+\\setminus (C_1\\cup C_3): a(x)-b(x)\\equiv i\\mod {3}\\rbrace $ for $i\\in \\lbrace 0,1,2\\rbrace $ .", "By writing $H_i$ as the union of all open intervals $(2^k+2^l,2^k+2^{l+1})$ where $k,l\\in \\mathbb {Z}$ , $l<k$ and $k-l\\equiv i\\mod {3}$ , we have that $H_i$ is open in $\\mathbb {R}^+$ for $i\\in \\lbrace 0,1,2\\rbrace $ .", "As before, $C_1$ , $C_3$ , $H_0$ , $H_1$ and $H_2$ partition the positive reals.", "Define now $C_4=\\lbrace 2^k-2^l:k,l\\in \\mathbb {Z}$ and $l<k\\rbrace $ , and $J_i=\\lbrace x\\in \\mathbb {R}^+\\setminus C_4: a(x)-c(x)\\equiv i\\mod {3}\\rbrace $ for $i\\in \\lbrace 0,1,2\\rbrace $ .", "Note that $C_1\\subset C_4$ and $C_3\\cap C_4=\\lbrace 2^{k+1}+2^k:k\\in \\mathbb {Z}\\rbrace \\ne \\emptyset $ .", "By writing $J_i$ as the union of all open intervals $(2^{k+1}-2^{l+1},2^{k+1}-2^{l})$ where $k,l\\in \\mathbb {Z}$ , $l<k$ and $k-l\\equiv i\\mod {3}$ , we see that $J_i$ is open in $\\mathbb {R}^+$ for $i\\in \\lbrace 0,1,2\\rbrace $ .", "Also, $C_4$ , $J_0$ , $J_1$ and $J_2$ partition the positive reals.", "Finally, we define $C_5=\\lbrace 2^{k+1}(1-2^{l-k})^{\\frac{1}{2}}:k,l\\in \\mathbb {Z}$ and $l<k\\rbrace $ , and $B_i=\\lbrace x\\in \\mathbb {R}^+\\setminus (C_1\\cup C_5):x<2^{a(x)+1}(1-2^{c(x)-a(x)})^{\\frac{1}{2}}$ and $a(x)-c(x)\\equiv i\\mod {3}$ , or $x>2^{a(x)+1}(1-2^{c(x)-a(x)})^{\\frac{1}{2}}$ and $a(x)-c(x)\\equiv i+1\\mod {3}\\rbrace $ for $i\\in \\lbrace 0,1,2\\rbrace $ .", "Note that $C_2\\subset C_5$ .", "Since $B_i$ can be written as the union of all the sets of the form $(2^{k+1}-2^{l+1}, 2^{k+1}(1-2^{l-k})^{\\frac{1}{2}})$ where $l,k\\in \\mathbb {Z}$ , $l<k$ and $k-l\\equiv i\\mod {3}$ , and all the sets of the form $(2^{k+1}(1-2^{l-k})^{\\frac{1}{2}},2^{k+1}-2^l)$ where $k,l\\in \\mathbb {Z}$ , $l<k$ and $k-l\\equiv i+1\\mod {3}$ , we see that $B_i$ is open in $\\mathbb {R}^+$ for all $i\\in \\lbrace 0,1,2\\rbrace $ .", "Also, $C_1$ , $C_5$ , $B_0$ , $B_1$ and $B_2$ partition the positive reals.", "We are now ready to define our colouring $\\nu $ .", "To start with, we let $C_1$ , $C_2$ , $C_3\\setminus C_4$ , $C_4\\setminus C_1$ and $C_5\\setminus C_2$ be five colour classes of $\\nu $ .", "If $x\\in \\mathbb {R}^+\\setminus (C_1\\cup C_2\\cup C_3\\cup C_4\\cup C_5)$ , then we set $\\nu (x)=(w_1,w_2,w_3,w_4,w_5)$ , where $w_1=i$ if $x\\in A_i$ , $w_2=i$ if $x\\in G_i$ , $w_3=i$ if $x\\in H_i$ , $w_4=i$ if $x\\in J_i$ and $w_5=i$ if $x\\in B_i$ .", "It is important to note that, with the exception of the five countable classes defined first, the colour classes of $\\nu $ are open (as a consequence of $C_1\\cup \\cdots \\cup C_5$ being closed).",
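For a generic $x$ outside the exceptional classes $C_1,\\dots ,C_5$ , the quantities $a(x)$ , $b(x)$ and $c(x)$ , and hence the coordinates of $\\nu $ , can be computed directly from the defining inequalities; the following Python sketch is our own illustration, ignores floating-point issues near the endpoints of the defining intervals, and only shows the coordinates that need no further machinery.

```python
import math

def a(x):
    """The unique integer with 2**a <= x < 2**(a + 1)."""
    return math.floor(math.log2(x))

def b(x):
    """Position of the second most significant binary digit of x
    (x not a power of 2)."""
    return a(x - 2.0 ** a(x))

def c(x):
    """The unique c with 2**(a+1) - 2**(c+1) <= x < 2**(a+1) - 2**c,
    via 2**c < 2**(a+1) - x <= 2**(c+1); valid for x outside C_4."""
    return a(2.0 ** (a(x) + 1) - x)

def nu_coords(x):
    """The coordinates (w1, w3, w4) of nu for generic x; w2 also needs
    varphi from Lemma 4, and w5 the comparison of x against the
    threshold 2**(a+1) * (1 - 2**(c - a)) ** 0.5."""
    w1 = 0 if x < 2.0 ** (a(x) + 0.5) else 1   # A_0 versus A_1
    w3 = (a(x) - b(x)) % 3                     # the H_i class
    w4 = (a(x) - c(x)) % 3                     # the J_i class
    return w1, w3, w4
```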
"Theorem 6 Let $(x_n)_{n\\ge 1}$ be an injective sequence of positive reals with the property that all numbers $x_n+x_m$ and $x_n x_m$ for all $1\\le n<m$ have the same colour.", "Then $(x_n)_{n\\ge 1}$ cannot be bounded and bounded away from zero.", "The colour class of the pairwise sums and products of $(x_n)_{n\\ge 1}$ cannot be any of $C_1$ , $C_2$ , $C_3\\setminus C_4$ , $C_4\\setminus C_1$ and $C_5\\setminus C_2$ .", "Indeed, the proofs for $C_1$ and $C_2$ are an easy exercise.", "The proofs for $C_3$ and $C_5$ , while routine, are lengthy, and so are presented in the Appendix.", "The proof for $C_4$ is very similar to the one for $C_3$ , and so we omit it.", "Therefore $x_n+x_m$ and $x_nx_m$ are all in $\\mathbb {R}^+\\setminus (C_1\\cup C_2\\cup C_3\\cup C_4\\cup C_5)$ for all $n<m$ .", "Suppose for a contradiction that $(x_n)_{n\\ge 1}$ is bounded and bounded away from zero.", "This immediately implies that the sequence of integers $(a(x_n))_{n\\ge 1}$ is bounded.", "By passing to a subsequence, we may assume that $(a(x_n))_{n\\ge 1}$ is constant, and thus equal to some fixed integer $k$ .", "Moreover, by the pigeonhole principle and passing to another subsequence, we may assume that either $x_n<2^{a(x_n)+\\frac{1}{2}}$ for all $n$ or $2^{a(x_n)+\\frac{1}{2}}\\le x_n$ for all $n$ .", "Let $n$ and $m$ be two distinct natural numbers.", "Since $a(x_n)=a(x_m)=k$ we have that $2^{k+1}<x_n+x_m<2^{k+2}$ and $2^{2k}<x_nx_m<2^{2k+2}$ .", "This implies that $a(x_n+x_m)=k+1$ and that either $a(x_nx_m)=2k$ , or $a(x_nx_m)=2k+1$ .", "Let $i\\in \\lbrace 0,1\\rbrace $ be such that $x_n+x_m\\in G_i$ and $x_nx_m\\in G_i$ .", "In other words we must have $\\varphi (a(x_n+x_m))=\\varphi (a(x_nx_m))$ , which implies that $\\varphi (k+1)=\\varphi (2k)$ or $\\varphi (k+1)=\\varphi (2k+1)$ , and thus $k\\in \\lbrace 0,1\\rbrace $ .", "We consider first the case when $k=1$ .", "This means that $2\\le x_n<4$ and $a(x_nx_m)=a(x_n+x_m)=2$ for all distinct naturals $n$ and $m$ .", "Hence, after discarding at most one term, we must have $2<x_n<2^{\\frac{3}{2}}$ for all $n$ .", "We first assume that the integer sequence $(b(x_n))_{n\\ge 1}$ is bounded.", "By passing to a subsequence, we may assume that $(b(x_n))_{n\\ge 1}$ is constant and equal to a fixed integer $l<k=1$ .", "Since $x_n\\ge 2^{a(x_n)}+2^{b(x_n)}$ for all $n$ , we cannot have $l=0$ , or else $x_n\\ge 2+1=3>2^{\\frac{3}{2}}$ , and so $l\\le -1$ .", "Let $m$ and $n$ be two distinct natural numbers.", "By the above we have that $x_n=2+2^l+u$ and $x_m=2+2^l+v$ for some $0\\le u,v<2^l$ .", "Next we have that $x_n+x_m=4+2^{l+1}+u+v$ and $0\\le u+v<2^{l+1}$ , thus $b(x_n+x_m)=l+1$ , and consequently $a(x_n+x_m)-b(x_n+x_m)=2-(l+1)=1-l$ .", "On the other hand, $x_nx_m=4+2^{l+2}+(2^l+2)(u+v)+uv+2^{2l}$ .", "The sum of terms involving the variables $u$ and $v$ can be bounded as follows: $(2^l+2)(u+v)+uv+2^{2l}<(2^l+2)2^{l+1}+2^{2l}+2^{2l}=2^{2l+2}+2^{l+2}$ .", "Therefore we trivially have $4+2^{l+2}<x_nx_m$ and $x_nx_m<4+2^{l+2}+2^{2l+2}+2^{l+2}=4+2^{l+3}+2^{2l+2}<4+2^{l+4}$ .", "This tells us that either $b(x_nx_m)=l+2$ , or $b(x_nx_m)=l+3$ , thus either $a(x_nx_m)-b(x_nx_m)=-l$ , or $a(x_nx_m)-b(x_nx_m)=-l-1$ .", "In both cases $a(x_nx_m)-b(x_nx_m)$ and $a(x_n+x_m)-b(x_n+x_m)$ are not congruent $\\mod {3}$ , a contradiction.", "Therefore we must have that $(b(x_n))_{n\\ge 1}$ is unbounded and, by passing to a subsequence, we may assume that $(b(x_n))_{n\\ge 1}$ is strictly decreasing.", "Let $n$ be a natural number and $l=b(x_n)$ .", "We know that there exists $u$ such that $0\\le u<2^l$ and $x_n=2+2^l+u$ .", "We now pick an integer $s<l$ such that $u+2^s<2^l$ , and then a natural number $m$ such that $b(x_m)<s$ .", "Let $t=b(x_m)$ and
$x_m=2+2^t+v$ , where $0\\le v<2^t$ .", "It follows that $x_n+x_m=4+2^l+u+2^t+v$ .", "By all the above we have that $u+2^t+v<u+2^{t+1}\\le u+2^s<2^l$ .", "Thus $b(x_n+x_m)=l$ and $a(x_n+x_m)-b(x_n+x_m)=2-l$ .", "Finally, since $2+2^l\\le x_n<2+2^{l+1}$ and $2+2^t\\le x_m<2+2^{t+1}$ , we first have that $4+2^{l+1}<4+2^{l+1}+2^{t+1}+2^{l+t}\\le x_nx_m$ .", "Moreover, $x_nx_m<4+2^{l+2}+2^{t+2}+2^{l+t+2}<4+2^{l+3}$ .", "Putting these together we see that either $b(x_nx_m)=l+1$ or $b(x_nx_m)=l+2$ .", "Thus either $a(x_nx_m)-b(x_nx_m)=1-l$ , or $a(x_nx_m)-b(x_nx_m)=-l$ , neither of which is congruent to $a(x_n+x_m)-b(x_n+x_m)\\mod {3}$ , a contradiction.", "This concludes the case when $k=1$ .", "We must therefore have $k=0$ .", "In other words $a(x_n)=0$ for all $n$ ; moreover, since $\\varphi (0)\\ne \\varphi (1)$ forces $a(x_nx_m)=1$ and thus $x_nx_m\\ge 2$ for all pairs, we may pass to a subsequence so that $2^{\\frac{1}{2}}\\le x_n<2$ , and then $a(x_n+x_m)=a(x_nx_m)=1$ for all distinct natural numbers $n$ and $m$ .", "Since there is at most one $n$ such that $x_n=2^{\\frac{1}{2}}$ , by passing to a subsequence we may assume that $2^{\\frac{1}{2}}<x_n<2$ for all $n$ .", "We observe that if $2^{\\frac{1}{2}}<x_n<\\frac{3}{2}$ and $2^{\\frac{1}{2}}<x_m<\\frac{3}{2}$ for two distinct $m$ and $n$ , then $2\\cdot 2^{\\frac{1}{2}}=2^{\\frac{3}{2}}<x_n+x_m<3$ , thus $x_n+x_m\\in A_1$ , while $2<x_nx_m<9/4<2^{\\frac{3}{2}}$ , so $x_nx_m\\in A_0$ , a contradiction.", "Therefore, by passing to a subsequence, we may assume that $\\frac{3}{2}\\le x_n<2$ .", "This immediately implies that $x_n\\ge 2^1-2^{-1}=2^{a(x_n)+1}-2^{-2+1}$ , and so $c(x_n)\\le -2$ for all $n$ .", "We first assume that the integer sequence $(c(x_n))_{n\\ge 1}$ is bounded.", "Thus by passing to a subsequence we may assume that it is constant and equal to a fixed integer $l\\le -2$ .", "Let $m$ and $n$ be two distinct natural numbers.", "Then we have that $2-2^{l+1}\\le x_n<2-2^l$ and $2-2^{l+1}\\le x_m<2-2^l$ .", "Summing the above we obtain $4-2^{l+2}\\le x_n+x_m<4-2^{l+1}$ , and thus $c(x_n+x_m)=l+1$ and consequently $a(x_n+x_m)-c(x_n+x_m)=-l$ .", "On the other hand, multiplying the above gives $4-2^{l+3}+2^{2l+2}\\le x_nx_m<4-2^{l+2}+2^{2l}$ .", "The lower bound is trivially greater than $4-2^{l+3}$ , and $2^{l+2}-2^{2l}>2^{l+1}$ , so $4-2^{l+2}+2^{2l}<4-2^{l+1}$ .", "This means that $c(x_nx_m)$ is either $l+1$ or $l+2$ .", "Since $c(x_nx_m)=l+2$ implies $a(x_nx_m)-c(x_nx_m)=-l-1$ which is not congruent to $-l=a(x_n+x_m)-c(x_n+x_m)\\mod {3}$ , we conclude that $c(x_nx_m)=l+1$ for all $n\\ne m$ , which can be written as $4-2^{l+2}\\le x_nx_m<4-2^{l+1}$ for all $n\\ne m$ .", "Observe that if $x_n<2(1-2^l)^{\\frac{1}{2}}$ and $x_m<2(1-2^l)^{\\frac{1}{2}}$ for two distinct positive integers $m$ and $n$ , then $x_nx_m<4(1-2^l)=4-2^{l+2}$ , which contradicts $c(x_nx_m)=l+1$ .", "Therefore, by passing to a subsequence, we may assume that $x_n\\ge 2(1-2^l)^{\\frac{1}{2}}$ for all $n$ .", "Let $n\\ne m$ be two natural numbers.", "Then $x_n+x_m\\ge 4(1-2^l)^{\\frac{1}{2}}=4(1-2^{c(x_n+x_m)-a(x_n+x_m)})^{\\frac{1}{2}}$ .", "Let $i\\in \\lbrace 0,1,2\\rbrace $ be such that $-l=a(x_n+x_m)-c(x_n+x_m)\\equiv i+1\\mod {3}$ .", "This means that $x_n+x_m\\in B_i$ , and consequently $x_nx_m\\in B_i$ .", "On the other hand, since $x_n<2-2^l$ and $x_m<2-2^l$ , it is easy to check that the product $x_nx_m<4-2^{l+2}+2^{2l}=4(1-2^l+2^{2l-2})<4(1-2^l)^{\\frac{1}{2}}$ .", "Since $a(x_nx_m)-c(x_nx_m)=1-(l+1)=-l$ we have that $x_nx_m<4(1-2^{c(x_nx_m)-a(x_nx_m)})^{\\frac{1}{2}}$ , and thus $x_nx_m\\in B_j$ where $j\\in \\lbrace 0,1,2\\rbrace $ and $j\\equiv -l\\equiv i+1\\mod {3}$ .", "But this is a
contradiction since it implies that $i\\ne j$ , so that the sum and the product are in different $B$ -classes.", "Therefore we must have that the sequence $(c(x_n))_{n\\ge 1}$ is unbounded and, by passing to a subsequence, we may assume that it is strictly decreasing.", "Let us first assume that there exist $n<m$ such that $x_n=2-2^{c(x_n)+1}$ and $x_m=2-2^{c(x_m)+1}$ .", "Then we have that $x_n+x_m=4-2^{c(x_n)+1}-2^{c(x_m)+1}$ , and since $c(x_m)<c(x_n)$ we get that $4-2^{c(x_n)+2}<x_n+x_m<4-2^{c(x_n)+1}$ , so $c(x_n+x_m)=c(x_n)+1$ and consequently $a(x_n+x_m)-c(x_n+x_m)=-c(x_n)$ .", "On the other hand, $x_nx_m=4-2^{c(x_n)+2}-2^{c(x_m)+2}+2^{c(x_n)+c(x_m)+2}=4-2^{c(x_n)+2}-2^{c(x_m)+2}(1-2^{c(x_n)})$ .", "Hence we have that $4-2^{c(x_n)+3}<4-2^{c(x_n)+2}-2^{c(x_m)+2}<x_nx_m<4-2^{c(x_n)+2}$ .", "It follows that $c(x_nx_m)=c(x_n)+2$ , so $a(x_nx_m)-c(x_nx_m)=-c(x_n)-1$ , a contradiction.", "Finally, after passing to a subsequence, we may assume that for every $n$ there exists $u_n$ such that $0<u_n<2^{c(x_n)}$ and $x_n=2-2^{c(x_n)+1}+u_n$ .", "Let $n$ be a natural number and let $s\\in \\mathbb {Z}$ be such that $u_n+2^s<2^{c(x_n)}$ .", "Since the sequence $(c(x_n))_{n\\ge 1}$ is strictly decreasing and unbounded, we can find $m>n$ such that $c(x_m)<\\min \\lbrace s,\\log _2 u_n-1\\rbrace $ .", "It then follows that $x_n+x_m=4-2^{c(x_n)+1}+u_n-2^{c(x_m)+1}+u_m$ .", "We observe that $0<u_n-2^{c(x_m)+1}+u_m<u_n-2^{c(x_m)+1}+2^{c(x_m)}<u_n+2^{c(x_m)}<u_n+2^s<2^{c(x_n)}$ .", "This means that $4-2^{c(x_n)+1}<x_n+x_m<4-2^{c(x_n)+1}+2^{c(x_n)}=4-2^{c(x_n)}$ , and so $c(x_n+x_m)=c(x_n)$ and consequently $a(x_n+x_m)-c(x_n+x_m)=1-c(x_n)$ .", "We are now going to analyse the product $x_nx_m$ .", "We have that $2-2^{c(x_n)+1}<x_n<2-2^{c(x_n)}$ and $2-2^{c(x_m)+1}<x_m<2-2^{c(x_m)}$ .", "By multiplying the above inequalities we obtain that $4-2^{c(x_n)+2}-2^{c(x_m)+2}+2^{c(x_n)+c(x_m)+2}<x_nx_m$ and $x_nx_m<4-2^{c(x_n)+1}-2^{c(x_m)+1}+2^{c(x_n)+c(x_m)}$ .", "We consider these two inequalities separately.", "First we have that $4-2^{c(x_n)+1}-2^{c(x_m)+1}+2^{c(x_n)+c(x_m)}=4-2^{c(x_n)+1}-2^{c(x_m)}(2-2^{c(x_n)})<4-2^{c(x_n)+1}$ , thus $x_nx_m<4-2^{c(x_n)+1}$ .", "Secondly we have that $4-2^{c(x_n)+2}-2^{c(x_m)+2}+2^{c(x_n)+c(x_m)+2}>4-2^{c(x_n)+2}-2^{c(x_m)+2}>4-2^{c(x_n)+3}$ , since $c(x_m)<c(x_n)$ .", "Putting everything together we get that $4-2^{c(x_n)+3}<x_nx_m<4-2^{c(x_n)+1}$ , and thus either $c(x_nx_m)=c(x_n)+1$ or $c(x_nx_m)=c(x_n)+2$ .", "This means that either $a(x_nx_m)-c(x_nx_m)=-c(x_n)$ , or $a(x_nx_m)-c(x_nx_m)=-c(x_n)-1$ , neither of which is congruent to $a(x_n+x_m)-c(x_n+x_m)=1-c(x_n)\\mod {3}$ , a contradiction.", "It is important to point out that the colouring $\\nu $ cannot be used to rule out similar statements about sums and products from a sequence $(x_n)_{n \\ge 1}$ that tends to zero.", "Indeed, since each colour class of $\\nu $ is measurable (being either countable or open), the result of [1] tells us that there is a sequence with all of its products and all of its sums (even infinite sums) having the same colour for $\\nu $ ."
], [ "Combining an extension of $\\theta $ over the rationals with {{formula:67a63ff3-8e39-44ae-aa02-106d2d8cfebf}}", "In this section we will build a colouring of the positive rationals via an `extension' of the colouring $\\theta $ , whilst also constrained by $\\nu $ .", "This colouring will force any bounded sequence with monochromatic pairwise sums and products to have the set of primes which divide the denominators of the terms of the sequence to be infinite.", "Roughly speaking, we will be concerned with how a number ends, not just how it starts, and therefore we will be considering numbers written not in binary (of course) but rather in the smallest base for which they terminate.", "The analysis is considerably more complicated than it would be for binary.", "There is also the issue that different numbers will have different `smallest bases', but it turns out that this will not cause too much of a problem.", "Let $(p_n)_{n\\ge 1}$ be the enumeration of all primes in increasing order, and $P_n=\\displaystyle {\\prod _{k=1}^n p_k}$ for all $n\\in \\mathbb {N}$ .", "Let also $T_n=\\mathbb {Q}_{(n)}\\cap (0,1)$ .", "In other words, $T_n$ consists of all the rationals between 0 and 1 for which, in reduced form, the denominator does not have any $p_t$ with $t>n$ as a factor.", "For completeness, define $T_0=\\emptyset $ .", "If $x \\in T_n \\setminus T_{n-1}$ we may say that $P_n$ is the `minimal base' of $x$ .", "For $n\\in \\mathbb {N}$ and $x\\in T_n$ , we define $s_n(x)$ to be the position of the leftmost significant digit and $e_n(x)$ the position of the rightmost significant digit in the base $P_n$ expansion of $x$ .", "For example, if $x$ has the base 6 expansion $405.00213$ then $s_3(x)=2$ and $e_3(x)=-5$ .", "For $x\\in \\mathbb {N}$ , so that $e_2(x)$ and $s_2(x)$ are the position of the rightmost significant digit and leftmost significant digit respectively in the binary expansion of $x$ , we set $d(x)$ to be the digit in position $e_2(x)+1$ .", "Finally, for $x,y\\in \\mathbb {N}$ , define $g(x,y)=0$ if $e_2(y)>s_2(x)$ and $g(x,y)=1$ if $e_2(y)\\le s_2(x)$ .", "The colouring $\\Phi $ of $\\mathbb {N}^{(2)}$ defined previously can be rewritten as follows: $\\Phi (x,y)=(e_2(x)\\mod {2},e_2(y)\\mod {2},d(x),d(y),g(x,y))$ .", "We also define the colouring $\\Psi ^{\\prime }$ of $\\mathbb {N}^{(2)}$ , which is very similar in spirit to the previously defined colouring $\\Psi $ , by $\\Psi ^{\\prime }(x,y)=\\Phi (1,2)$ if $x=1$ , and $\\Psi ^{\\prime }(x,y)=\\Phi (x-1,y)$ if $x>1$ .", "We are now ready to define a colouring $\\mu $ of $\\mathbb {Q}$ as follows.", "If $x\\ge 1$ , let $\\mu (x)=\\nu (x)$ .", "Otherwise, for any $x\\in \\mathbb {Q}\\cap (0,1)$ , there exists a unique $n\\in \\mathbb {N}$ such that $x\\in T_n\\setminus T_{n-1}$ .", "Consequently, define $\\mu (x)=(\\nu (x), \\Phi (-s_n(x),-e_n(x)),\\Psi ^{\\prime }(-s_n(x),-e_n(x))).$ The following is what we wish to prove.", "Theorem 7 Let $(x_n)_{n\\ge 1}$ be a bounded sequence of positive rationals such that the set $\\lbrace x_n+x_m, x_nx_m:n\\ne m\\rbrace $ is monochromatic with respect to $\\mu $ .", "Then for any $k\\in \\mathbb {N}$ there exist $l$ and $n$ such that $x_n\\in T_l\\setminus T_k$ .", "Because the sequence $(x_n)_{n\\ge 1}$ is monochromatic with respect to $\\mu $ it is also monochromatic with respect to $\\nu $ .", "Since $(x_n)_{n\\ge 1}$ is bounded, Theorem REF tells us that $(x_n)_{n\\ge 1}$ must converge to 0, and so we may assume that all terms are less than 1.", "Assume for a contradiction 
that there exists $k\\in \\mathbb {N}$ such that $x_n\\in T_k$ for all $n\\in \\mathbb {N}$ .", "By passing to a subsequence, we can assume that there is some $t\\le k$ such that $x_n\\in T_{t}\\setminus T_{t-1}$ for all $n$ .", "In other words, the minimal base of the form $P_s$ for $x_n$ is $P_t$ , for all $n\\ge 1$ .", "Since $(x_n)_{n\\ge 1}$ converges to 0, $s_t(x_n)$ and $e_t(x_n)$ must tend to $-\\infty $ .", "In particular, we may assume from now on that $s_t(x_n)<-1$ for all $n$ .", "Moreover, by passing to a subsequence, we may assume that the sequence is strictly decreasing and that all of its terms have pairwise left to right disjoint support – in other words, if $n<m$ then $e_t(x_n)>s_t(x_m)$ .", "Also, by the pigeonhole principle, there exists a subsequence for which all terms have the same last digit, say $0<d<P_t$ , and by passing to that subsequence we may assume that this is the case for $(x_n)_{n\\ge 1}$ itself.", "Let $n<m$ be positive integers.", "Then, because $x_n$ and $x_m$ have disjoint supports in base $P_t$ , which is their minimal base, $x_n+x_m$ also has minimal base $P_t$ .", "Furthermore, $s_t(x_n+x_m)=s_t(x_n)$ and $e_t(x_n+x_m)=e_t(x_m)$ .", "It is also easy to see that if both $x_n$ and $x_m$ have minimal base $P_t$ then so does $x_nx_m$ .", "We note that if $x\\in T_t\\setminus T_{t-1}$ , then $-e_t(x)$ is the smallest positive integer $u$ such that $x(P_t)^u\\in \\mathbb {N}$ .", "Clearly $x_nx_m (P_t)^{-e_t(x_n)-e_t(x_m)}\\in \\mathbb {N}$ , and thus $e_t(x_nx_m)\\ge e_t(x_n)+e_t(x_m)$ .", "Now suppose that there exists $k^{\\prime }\\in \\mathbb {N}$ smaller than $-e_t(x_n)-e_t(x_m)$ such that $x_nx_m (P_t)^{k^{\\prime }}\\in \\mathbb {N}$ .", "It follows that $x_n(P_t)^{-e_t(x_n)}x_m(P_t)^{-e_t(x_m)}(P_t)^{k^{\\prime }+e_t(x_n)+e_t(x_m)}\\in \\mathbb {N}$ .", "But $x_n(P_t)^{-e_t(x_n)}\\equiv x_m(P_t)^{-e_t(x_m)}\\equiv d\\pmod {P_t}$ .", "Because the power of $P_t$ is negative, we must have that $P_t$ divides $d^2$ , and since $P_t$ is a product of distinct primes, we must in fact have that $P_t$ divides $d$ , a contradiction.", "Therefore $e_t(x_nx_m)=e_t(x_n)+e_t(x_m)$ .", "Finally, for $x\\in T_t\\setminus T_{t-1}$ , $s_t(x)$ is the unique integer $l$ such that $(P_t)^{l+1}>x\\ge (P_t)^l$ .", "By the pigeonhole principle we either have $x_n\\ge \\sqrt{P_t}(P_t)^{s_t(x_n)}$ for infinitely many $n$ or $x_n<\\sqrt{P_t}(P_t)^{s_t(x_n)}$ for infinitely many $n$ .", "By passing to a subsequence we may assume that we are either in the first case for all $n$ or in the second case for all $n$ .", "In the first case $s_t(x_nx_m)=s_t(x_n)+s_t(x_m)+1$ for all $m\\ne n$ , while in the second case $s_t(x_nx_m)=s_t(x_n)+s_t(x_m)$ .", "Let $a_n=-s_t(x_n)>1$ and $b_n=-e_t(x_n)>a_n$ for all $n\\in \\mathbb {N}$ .", "Note that both $(a_n)_{n\\ge 1}$ and $(b_n)_{n\\ge 1}$ are strictly increasing sequences of natural numbers.", "Then $\\mu $ tells us that either $\\Phi (a_n, b_m)=\\Phi (a_n+a_m, b_n+b_m)$ for all $n<m$ , or $\\Phi (a_n-1, b_m)=\\Phi (a_n+a_m-2, b_n+b_m)$ for all $n<m$ , which contradicts Lemma REF or Lemma REF ."
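The quantities used in this proof are also directly computable; the following Python sketch, our own illustration using exact rational arithmetic and assuming the hard-coded prime list is long enough, determines the minimal base $P_t$ of a rational and the digit positions $s_t$ and $e_t$ .

```python
from fractions import Fraction

PRIMES = [2, 3, 5, 7, 11, 13]          # p_1, p_2, ... (extend as needed)

def P(t):
    """P_t = p_1 * p_2 * ... * p_t."""
    out = 1
    for p in PRIMES[:t]:
        out *= p
    return out

def minimal_index(x):
    """The t with x in T_t \\ T_{t-1}: the largest index of a prime
    dividing the denominator of x (taken as 1 for dyadic rationals)."""
    d, t = x.denominator, 1
    for idx, p in enumerate(PRIMES, start=1):
        while d % p == 0:
            d //= p
            t = idx
    assert d == 1, "denominator has a prime factor beyond PRIMES"
    return t

def e_t(x, t):
    """-e_t(x) is the least u >= 1 with x * P_t**u an integer
    (for non-integer x)."""
    base, u = P(t), 1
    while (x * base ** u).denominator != 1:
        u += 1
    return -u

def s_t(x, t):
    """The unique l with P_t**l <= x < P_t**(l + 1)."""
    base, l = P(t), 0
    while x < 1:
        x, l = x * base, l - 1
    while x >= base:
        x, l = x / base, l + 1
    return l

# The example from the start of this section: 405.00213 in base 6
# is x = 149 + 1/96, with s = 2 and e = -5.
x = Fraction(149) + Fraction(1, 96)
print(minimal_index(x), s_t(x, 2), e_t(x, 2))   # -> 2 2 -5
```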
], [ "Exploring $\\mu $ further", "It turns out that for $\\mu $ we can find an injective sequence with all pairwise sums and products monochromatic, and actually even all finite sums and products monochromatic.", "This shows that neither $\\theta $ nor $\\nu $ nor their product can provide a counterexample for the `finite sums and products' problem in the set of all rationals.", "We include this result just out of interest; the reader can skip this section if desired.", "We say that the sequence $(y_n)_{n\\ge 1}$ is a product subsystem of the sequence $(x_n)_{n\\ge 1}$ if there exists a sequence $(H_n)_{n\\ge 1}$ of finite sets of natural numbers such that for every $n\\ge 1$ , $\\max H_n <\\min H_{n+1}$ and $y_n=\\displaystyle \\prod _{t\\in H_n}x_t$ .", "Theorem 8 There exists a sequence $(y_n)_{n\\ge 1}$ in $\\mathbb {Q}\\cap (0,1)$ such that all of its finite sums and finite products are monochromatic with respect to $\\mu $ .", "Starting with $r_1=2$ , we may inductively choose an increasing sequence $ (r_n)_{n\\ge 1}$ of natural numbers such that for all $n\\in \\mathbb {N}$ we have $\\displaystyle \\sum _{i=1}^n{1\\over p_{r_i}}<1$ .", "By the Finite Sums Theorem (or rather a simple corollary of it – see Corollary 5.15 in [5]) we can choose a product subsystem $(x_n)_{n\\ge 1}$ of $\\left(\\dfrac{1}{p_{r_i}}\\right)_{n\\ge 1}$ such that all finite products of $(x_n)_{n\\ge 1}$ are monochromatic with respect to $\\nu $ – in other words, they are all members of a colour class of $\\nu $ , say $U$ .", "The colouring $\\nu $ of $\\mathbb {R}^+$ consists of five countable classes and several classes that are open in $\\mathbb {R}^+$ .", "Recall that the countable colour classes are $C_1=\\lbrace 2^k:k\\in \\mathbb {Z}\\rbrace $ , $C_2=\\lbrace 2^{k+\\frac{1}{2}}:k\\in \\mathbb {Z}\\rbrace $ , $C_3\\setminus C_4=\\lbrace 2^k+2^l:k,l\\in \\mathbb {Z}$ and $l<k\\rbrace \\setminus C_4$ , $C_4\\setminus C_1=\\lbrace 2^k-2^l:k,l\\in \\mathbb {Z}$ and $l<k\\rbrace \\setminus C_1$ , and $C_5\\setminus C_2=\\lbrace 2^{k+1}(1-2^{l-k})^{\\frac{1}{2}}:k,l\\in \\mathbb {Z}$ and $l<k\\rbrace \\setminus C_2$ .", "It is easy to see that $C_2$ contains only irrational numbers.", "Observe also that $C_5$ contains only irrational numbers, because $\\left(1-\\frac{1}{2^n}\\right)^{\\frac{1}{2}}$ is irrational for any $n\\in \\mathbb {N}$ .", "(Indeed, suppose $\\left(1-\\frac{1}{2^n}\\right)^{\\frac{1}{2}}=\\frac{p}{q}$ for some coprime $p,q\\in \\mathbb {N}$ ; we then get that $\\frac{2^n-1}{2^n}=\\frac{p^2}{q^2}$ , so $2^n$ and $2^n-1$ have to be perfect squares, but no two perfect squares in $\\mathbb {N}$ differ by 1.)", "The classes $C_1$ , $C_3\\setminus C_4$ , and $C_4\\setminus C_1$ consist of rational number that have denominator (in reduced form) a power of 2, and thus none of them can be in $U$ as no $x_n$ has this property since $r_1=2$ .", "Furthermore, $C_2$ and $C_5\\setminus C_2$ consist of irrational numbers, so $C_2\\ne U$ and $C_5\\setminus C_2\\ne U$ .", "We conclude that $U$ is an open colour class of $\\nu $ that contains all the finite products of $(x_n)_{n\\ge 1}$ .", "We are now going to find a subsequence $(y_n)_{n\\ge 1}$ of $(x_n)_{n\\ge 1}$ such that all its finite sums are in $U$ as well.", "We proceed by induction.", "Let $y_1=x_1$ .", "Now assume $n\\ge 1$ and that we have chosen $y_1>y_2>\\cdots >y_n$ such that $y_i\\in \\lbrace x_j:j\\in \\mathbb {N}\\rbrace $ for all $1\\le i\\le n$ , and that for any finite non-empty set $A$ of $\\lbrace 1,2,\\cdots ,n\\rbrace $ we have 
$\\displaystyle \\sum _{i\\in A} y_i\\in U$ .", "Because $U$ is open in $\\mathbb {R}^{+}$ , we can pick $\\epsilon _A>0$ such that $\\left(\\displaystyle \\sum _{i\\in A} y_i,\\displaystyle \\sum _{i\\in A} y_i+\\epsilon _A\\right)\\subset U$ for any finite non-empty set $A$ of $\\lbrace 1,2,\\cdots ,n\\rbrace $ .", "Let $\\epsilon =\\min \\lbrace \\epsilon _A, y_i:\\emptyset \\ne F\\subseteq \\lbrace 1,2,\\cdots ,n\\rbrace , 1\\le i\\le n\\rbrace $ .", "Pick $m$ such that for all $j\\ge m$ we have$x_j<\\epsilon $ , and set $y_{n+1}=x_m$ .", "This finishes the induction step.", "Therefore, all the finite sums and all the finite products of the sequence $(y_n)_{n\\ge 1}$ are in $U$ , and so are monochromatic for the colouring $\\nu $ .", "To complete the proof we show that if $z$ is either a finite sum or a finite product of $(y_n)_{n\\ge 1}$ and $z\\in T_k\\setminus T_{k-1}$ , then $e_k(z)=-1$ , and consequently $s_k(z)=-1$ .", "First assume that $z$ is a finite product of elements of $(y_n)_{n\\ge 1}$ .", "This implies that $z$ is a finite product of elements of $\\left(\\dfrac{1}{p_n}\\right)_{n\\ge 1}$ .", "Therefore there exists a finite set $A$ of natural numbers such that $z=\\displaystyle \\prod _{i\\in A}\\dfrac{1}{p_i}$ , and thus $z\\in T_k\\setminus T_{k-1}$ , where $k=\\max A$ .", "We observe that $zP_k=\\displaystyle \\prod _{i\\in \\lbrace 1,2,\\ldots ,k\\rbrace \\setminus A}p_i<P_k$ , so $z=\\dfrac{z^{\\prime }}{P_k}$ for some $1\\le z^{\\prime }<P_k$ , which implies that $e_k(z)=s_k(z)=-1$ .", "Finally, let $z=\\displaystyle \\sum _{i\\in A}y_i$ for some finite set $A=\\lbrace j_1,j_2,\\cdots ,j_s\\rbrace $ of natural numbers of size $s>1$ , where $j_1<j_2<\\cdots <j_s$ .", "Since $(y_n)_{n\\ge 1}$ is a subsequence of $(x_n)_{n\\ge 1}$ , which is a product subsystem of $\\left(\\dfrac{1}{p_{r_n}}\\right)_{n\\ge 1}$ , for each $i\\in \\lbrace 1,2,\\ldots ,s\\rbrace $ there exists a finite set $F_i$ of natural numbers such that $\\max F_i<\\min F_{i+1}$ if $i<s$ , and $y_{j_i}=\\displaystyle \\prod _{t\\in F_i}\\dfrac{1}{p_{r_t}}$ .", "Denote by $m_i$ the maximum of $F_i$ for all $i\\in \\lbrace 1,2,\\ldots ,s\\rbrace $ , and let $k=r_{m_s}$ , so that $z\\in T_k\\setminus T_{k-1}$ .", "We first note that $\\displaystyle \\sum _{i=1}^s\\dfrac{1}{p_{r_{m_i}}}<1$ , and thus $\\displaystyle \\sum _{i=1}^s{\\dfrac{p_k}{p_{r_{m_i}}}}<p_k$ .", "We now see that $zP_k=\\left(\\displaystyle \\sum _{i=1}^s y_{j_i}\\right)\\displaystyle \\prod _{m=1}^kp_m=\\left(\\displaystyle \\sum _{i=1}^s\\prod _{t\\in F_i}\\dfrac{1}{p_{r_t}}\\right)\\displaystyle \\prod _{m=1}^kp_m\\le \\left(\\displaystyle \\sum _{i=1}^s\\dfrac{1}{p_{r_{m_i}}}\\right)\\displaystyle \\prod _{m=1}^kp_m=\\left(\\sum _{i=1}^s\\dfrac{p_k}{p_{r_{m_i}}}\\right)\\displaystyle \\prod _{m=1}^{k-1}p_m<P_k$ , by the above observation.", "Therefore, as before, $z=\\dfrac{z^{\\prime \\prime }}{P_k}$ for some $1\\le z^{\\prime \\prime }<P_k$ , which implies $e_k(z)=s_k(z)=-1$ ." 
], [ "Unbounded sequences in the rationals", "In this section we give a finite colouring of the rationals such that no unbounded sequence whose denominators contain only finitely many primes can have the set of all its finite sums and products monochromatic.", "The general aim is to write numbers as an integer part (which will be considered in binary) and a fractional part (which will be considered in the `minimal base' as in Section 5), although actually we will also make use of the integer part written in that minimal base of the fractional part.", "By using the finite sums, we hope to show that the `centres clear out', meaning that the fractional parts tend to 0 (or 1) and the integer parts have ends that tend to infinity.", "This will then give us the disjointness of support that we need to apply results like Lemma REF in certain ways.", "Theorem 9 There exists a finite colouring $\\alpha $ of the positive rationals such that there exists no unbounded sequence $(x_n)_{n\\ge 1}$ that has the set of all its finite sums and products monochromatic with respect to $\\alpha $ , with the set of primes that divide the denominators of its terms being finite.", "Let $S_n=\\lbrace x\\in \\mathbb {Q}^+: x$ has a terminating base $P_n$ expansion$\\rbrace $ for all $n>0$ , and $S_0=\\emptyset $ .", "We first define the colouring $\\alpha ^{\\prime }$ of $\\mathbb {Q}^+\\setminus (\\mathbb {N}\\cup \\lbrace 2^k:k\\in \\mathbb {Z}\\rbrace \\cup (0,2])$ as follows: for $x\\in S_r\\setminus S_{r-1}$ we set $\\alpha ^{\\prime }(x)=(a(x)\\text{ mod } 2,a(\\hbox{\\rm frac}(x))\\text{ mod }2, \\epsilon (\\hbox{\\rm frac}(x))\\text{ mod }2, e_r(\\lfloor x\\rfloor )\\text{ mod }2, e_2(\\lfloor x\\rfloor )\\text{ mod }2,\\newline e_r(\\lfloor x\\rfloor +1)\\text{ mod }2,e_2(\\lfloor x\\rfloor +1)\\text{ mod }2,a(r(x))\\text{ mod }3,p(x),q(x), q^{\\prime }(x),s(x),s^{\\prime }(x))$ , where $1-2^{\\epsilon (\\hbox{\\rm frac}(x)})\\le \\hbox{\\rm frac}(x)<1-2^{\\epsilon (\\hbox{\\rm frac}(x))-1}$ , and as before $e_r(x)$ is the position of the rightmost significant digit in base $P_r$ and $e_2(x)$ is the position of the rightmost significant digit in binary, and also $r(x)=\\dfrac{x-2^{a(x)}}{2^{a(x)}}$ , $p(x)$ is 0 if $\\lfloor x\\rfloor $ is a power of 2 and 1 otherwise, $q(x)$ is 0 if $a(x)-b(x)>e_r(\\lfloor x\\rfloor )$ and 1 otherwise, $q^{\\prime }(x)$ is 0 if $a(x)-b(x)>e_r(\\lfloor x\\rfloor +1)$ and 1 otherwise, $s(x)$ is 0 if $a(x)-c(x)>e_r(\\lfloor x\\rfloor )$ and 1 otherwise, $s^{\\prime }(x)$ is 0 if $a(x)-c(x)>e_r(\\lfloor x\\rfloor +1)$ and 1 otherwise.", "Here $\\lfloor x\\rfloor $ and $\\hbox{\\rm frac}(x)$ are the integer and the fractional parts of $x$ respectively.", "We are now ready to define the colouring $\\alpha $ .", "Let $x\\in \\mathbb {Q}^+$ .", "Then $\\alpha (x)=(0,\\theta (x))$ if $x\\in \\mathbb {N}$ , $\\alpha (x)=1$ if $x\\in \\lbrace 2^k:k\\in \\mathbb {Z},k<0\\rbrace $ , $\\alpha (x)=2$ if $x\\le 2$ , $x\\notin \\mathbb {N}$ and $x\\notin \\lbrace 2^k:k\\in \\mathbb {Z}, k<0\\rbrace $ , and $\\alpha (x)=(1,\\alpha ^{\\prime }(x))$ otherwise.", "Suppose for a contradiction that a sequence as specified in the statement of the theorem exists.", "Since it is unbounded, we may assume that all its terms are greater than 2.", "Since $\\theta $ prevents any sequence of natural numbers from having monochromatic pairwise sums and products, we may assume, by passing to a subsequence, that none of the $x_n$ are natural numbers – and hence, since the set of the finite sums and products 
is monochromatic, also no finite sum or product of the $x_n$ is a natural number.", "Moreover, by looking at sums of two terms, it is easy to see that $p$ prevents the integer parts from being powers of 2, and thus we can assume that no $x_n$ has its integer part a power of 2.", "By assumption, and after passing to a subsequence, we may assume that there exists $r\in \mathbb {N}$ such that $x_n\in S_r\setminus S_{r-1}$ for all $n$ .", "Since $S_r\setminus S_{r-1}$ is closed under multiplication, all the finite products are in $S_r\setminus S_{r-1}$ too.", "Let $x_n=y_n+z_n$ , where $y_n\in \mathbb {N}$ is the integer part of $x_n$ and $0<z_n<1$ is its fractional part.", "By passing to a subsequence we may assume that the sequence $(y_n)_{n\ge 1}$ is strictly increasing and tending to infinity.", "Suppose that the sequence $(z_n)_{n\ge 1}$ is bounded away from both 0 and 1, which is equivalent to saying that $a(z_n)$ and $\epsilon (z_n)$ are both bounded.", "Therefore, by passing to a subsequence, we may assume that there exist fixed integers $k<0$ and $l<1$ such that $a(z_n)=k$ and $\epsilon (z_n)=l$ for all $n$ .", "We either have $z_n<\frac{1}{2}$ for infinitely many $n$ or $z_n\ge \frac{1}{2}$ for infinitely many $n$ .", "In the first case, if $z_n$ and $z_m$ are less than $\frac{1}{2}$ then $\hbox{\rm frac}(x_n+x_m)=z_n+z_m$ , and thus $a(\hbox{\rm frac}(x_n+x_m))=k+1\ne a(\hbox{\rm frac}(x_n))\mod {2}$ , a contradiction.", "In the second case, if $z_n$ and $z_m$ are at least $\frac{1}{2}$ then $\hbox{\rm frac}(x_n+x_m)=z_n+z_m-1$ , so that $1-\hbox{\rm frac}(x_n+x_m)=1-z_n+1-z_m$ which implies that $\epsilon (\hbox{\rm frac}(x_n+x_m))=l+1\ne \epsilon (\hbox{\rm frac}(x_n))\mod {2}$ , a contradiction.", "This tells us that, by passing to a subsequence, we may either assume that $z_n$ converges to 0 or that it converges to 1.", "By passing to a subsequence we may assume that either $x_n<2^{a(x_n)+\frac{1}{2}}$ for all $n$ or $x_n\ge 2^{a(x_n)+\frac{1}{2}}$ for all $n$ .", "In the first case $a(x_nx_m)=a(x_n)+a(x_m)$ , while in the second case $a(x_nx_m)=a(x_n)+a(x_m)+1$ (for all $n\ne m$ ).", "Since, for $x\in \mathbb {R}^+\setminus (\mathbb {N}\cup C_1)$ , $r(x)$ is the unique number strictly between 0 and 1 such that $x=2^{a(x)}(1+r(x))$ , a simple computation shows that in the first case $r(x_nx_m)=r(x_n)+r(x_m)+r(x_n)r(x_m)$ , while in the second case $r(x_nx_m)=\dfrac{r(x_n)+r(x_m)+r(x_n)r(x_m)-1}{2}$ for all $n\ne m$ .", "Suppose that $x_n<2^{a(x_n)+{\frac{1}{2}}}$ for all $n$ and that $r(x_n)$ is bounded away from 0.", "Then $a(r(x_n))$ is bounded, so by passing to a subsequence we may assume that there is an integer $l<-1$ such that $a(r(x_n))=l$ for all $n$ (recall that we are in the case where $r(x_n)+r(x_m)+r(x_n)r(x_m)<1$ and thus $a(r(x_n))<-1$ ).", "Since $2^l\le r(x_n)<2^{l+1}$ and $2^l\le r(x_m)<2^{l+1}$ , we have that $2^{l+1}<2^{l+1}+2^{2l}\le r(x_n)+r(x_m)+r(x_n)r(x_m)<2^{l+2}+2^{2l+2}<2^{l+3}$ .", "Thus $a(r(x_nx_m))$ is $l+1$ or $l+2$ , neither of which is congruent to $l$ mod 3, a contradiction.", "Therefore in this first case (namely when $x_n<2^{a(x_n)+{\frac{1}{2}}}$ for all $n$ ), we must have that $r(x_n)$ converges to 0, which immediately implies that $a(x_n)-b(x_n)$ (the `right gap') goes to infinity.", "Suppose instead that we are in the second case (namely that $x_n\ge 2^{a(x_n)+{1\over 2}}$ for all $n$ ), so that $r(x_nx_m)=\dfrac{r(x_n)+r(x_m)+r(x_n)r(x_m)-1}{2}$ for all $n\ne m$ .", "Suppose that
$a(x_n)-c(x_n)$ is bounded.", "By passing to a subsequence, we may assume that there exists a fixed $k\in \mathbb {N}$ such that $a(x_n)-c(x_n)=k$ for all $n$ .", "Let $2k-2<d\in \mathbb {N}$ be such that $\dfrac{(2^{k+1}-1)^d}{2^{(k+1)d}}<\dfrac{1}{2}$ , and look at the first $d$ terms.", "We have that $x_j<2^{a(x_j)+1}-2^{c(x_j)}=2^{a(x_j)+1}-2^{a(x_j)-k}=2^{a(x_j)}\dfrac{2^{k+1}-1}{2^k}$ , so that we have $x_1x_2\cdots x_d<2^{a(x_1)+\ldots +a(x_d)}\dfrac{(2^{k+1}-1)^d}{2^{kd}}<2^{a(x_1)+\ldots +a(x_d)+k-1}$ .", "On the other hand, by assumption, the product is at least $2^{a(x_1)+\cdots +a(x_d)+\frac{d}{2}}>2^{a(x_1)+\cdots +a(x_d)+k-1}$ , a contradiction.", "Therefore we may assume that $a(x_n)-c(x_n)$ is strictly increasing and goes to infinity, which is equivalent to $r(x_n)$ converging to 1.", "To summarise, we either have $r(x_n)$ converging to 0, which is equivalent to $a(x_n)-b(x_n)$ going to infinity, or $r(x_n)$ converging to 1, which is equivalent to $a(x_n)-c(x_n)$ going to infinity.", "We distinguish these two cases.", "Case 1.", "The sequence $(z_n)_{n\ge 1}$ converges to 0.", "In this case, by passing to a subsequence we may assume that the terms of the sequence $(z_n)_{n\ge 1}$ have pairwise left to right disjoint supports in base $P_r$ – note that this implies that all finite sums of $(x_n)_{n\ge 1}$ also have minimal base $P_r$ .", "By passing to a subsequence we may assume that all $y_n$ have the same digit in position $e_r(y_n)+1$ in base $P_r$ , and that $z_n<\frac{1}{P_r}$ for all $n$ .", "Suppose that there exist $P_r$ terms such that their integer parts end at the same position in base $P_r$ , call it $p$ .", "It is easy to see that the integer part of their sum is the sum of their integer parts, which ends at position $p+1$ , a contradiction.", "Therefore we may assume that the terms of the sequence $(y_n)_{n\ge 1}$ have left to right disjoint supports in base $P_r$ .", "By exactly the same argument (looking at a sum of two terms only) we can further deduce that the terms of the sequence $(y_n)_{n\ge 1}$ have left to right disjoint supports in binary as well.", "Assume first that $r(x_n)$ converges to 0.", "We fix $x_1$ and look at $x_1+x_n$ .", "For $n$ sufficiently large we have $q(x_1+x_n)=0$ , because the right gap of the sum is the right gap of $x_n$ , while the end position of $\lfloor x_1+x_n\rfloor $ in base $P_r$ is fixed, namely the end position of $y_1$ in base $P_r$ .", "On the other hand, if the fractional part of $x_1$ has end position $a<0$ in base $P_r$ and $n$ is large enough, then $\lfloor x_1x_n\rfloor $ has end position $e_r(y_n)+a$ in base $P_r$ , which tends to infinity as $n$ tends to infinity.", "However, due to the fact that the right gap of $x_n$ goes to infinity, we see that for $n$ large enough the right gap of $x_nx_1$ equals the right gap of $x_1$ , which will eventually be less than $e_r(y_n)+a$ .", "So $q(x_1x_n)=1$ , a contradiction.", "Assume now that $r(x_n)$ converges to 1.", "As before, we fix $x_1$ and look at $x_n+x_1$ for $n$ large enough.", "Since $x_n$ and $x_1$ have disjoint supports in binary, we have that $a(x_n+x_1)=a(x_n)$ , and thus $r(x_n+x_1)=\dfrac{x_n+x_1-2^{a(x_n)}}{2^{a(x_n)}}$ which converges to 1.", "Therefore, as $n$ tends to infinity, $a(x_n+x_1)-c(x_n+x_1)$ also tends to infinity – thus it will eventually be greater than the end position of $\lfloor x_n+x_1\rfloor $ in base $P_r$ (which is the end position of $y_1$ in base $P_r$ ), so $s(x_n+x_1)=0$ for all $n$ large enough.",
"On the other hand, it is a straightforward computation to show that $a(x_nx_1)-c(x_nx_1)$ is either $a(x_1)-c(x_1)$ or $a(x_1)-c(x_1)+1$ , and thus is bounded.", "However, we have seen above that $e_r(\\lfloor x_nx_1\\rfloor )$ is unbounded.", "We conclude that for all $n$ sufficiently large we have $a(x_nx_1)-c(x_nx_1)<e_r(\\lfloor x_nx_1\\rfloor )$ , and thus $s(x_nx_1)=1$ for all $n$ sufficiently large, a contradiction.", "This concludes Case 1.", "Case 2.", "The sequence $(z_n)_{n\\ge 1}$ converges to 1.", "In this case we have that $x_n=y_n+1-(1-z_n)$ and the sequence $(1-z_n)_{n\\ge 1}$ converges to 0.", "With the same type of argument as the one presented above, we may assume that the terms of the sequence $(y_n+1)_{n\\ge 1}$ have pairwise left to right disjoint supports in binary and in base $P_r$ , and the sequence is strictly increasing (it suffices to show that we cannot have infinitely many terms ending at the same place in binary or in base $P_r$ ).", "Since the full argument for base $P_r$ has been given above, here we just include the argument for binary.", "So suppose that we have $n\\ne m$ such that $e_2(y_n+1)=e_2(y_m+1)=p$ and $y_n+1$ and $y_m+1$ have the same binary digit in position $p+1$ (which we can achieve by passing to a subsequence).", "Then $e_2(\\lfloor x_n\\rfloor +1)=p$ , while $e_2(\\lfloor x_n+x_m\\rfloor +1)=e_2(y_n+y_m+2)=p+1$ , a contradiction.", "We observe that for any $n>1$ , $e_r(\\lfloor x_n+x_1\\rfloor +1)=e_r((y_n+1)+(y_1+1))=e_r(y_1+1)$ .", "Let $e_r(x_1)=u<0$ and pick $n$ such that $1-z_n<\\frac{1}{x_1}$ and $e_r(y_n+1)=v_n>-u$ .", "This implies that $0<1-(1-z_n)x_1<1$ and that $(y_n+1)x_1\\in \\mathbb {N}$ .", "Therefore $x_nx_1=((y_n+1)-(1-z_n))x_1=(y_n+1)x_1-(1-z_n)x_1$ , and thus $\\lfloor x_nx_1\\rfloor +1=\\lfloor x_nx_1+1\\rfloor =\\lfloor (y_n+1)x_1 + 1-(1-z_n)x_1\\rfloor =(y_n+1)x_1$ .", "This means that $e_r(\\lfloor x_nx_1\\rfloor +1)=v_n+u$ for all $n$ sufficiently large, so that the sequence $(e_r(\\lfloor x_nx_1\\rfloor ))_{n\\ge 1}$ is unbounded.", "To complete the proof, we show that if $x_n<2^{a(x_n)+\\frac{1}{2}}$ for all $n\\ge 1$ then for sufficiently large $n$ we have $q^{\\prime }(x_n+x_1)=0$ and $q^{\\prime }(x_nx_1)=1$ , while if $x_n\\ge 2^{a(x_n)+\\frac{1}{2}}$ for all $n\\ge 1$ then for sufficiently large $n$ we have $s^{\\prime }(x_n+x_1)=0$ and $s^{\\prime }(x_nx_1)=1$ .", "Assume first that $x_n<2^{a(x_n)+\\frac{1}{2}}$ for all $n\\ge 1$ .", "As we have seen above, this implies that $a(x_n)-b(x_n)$ tends to infinity (and we may also assume that it is strictly increasing and $a(x_1)-b(x_1)>2$ ).", "Consequently $a(x_n+x_1)-b(x_n+x_1)$ also tends to infinity, and so is eventually larger than $e_r(\\lfloor x_n+x_1\\rfloor +1)$ , whence $q^{\\prime }(x_n+x_1)=0$ for $n$ large enough.", "On the other hand, since $2^{a(x_n)}+2^{b(x_n)}\\le x_n<2^{a(x_n)}+2^{b(x_n)+1}$ and $2^{a(x_1)}+2^{b(x_1)}\\le x_1<2^{a(x_1)}+2^{b(x_1)+1}$ , we have that $2^{a(x_n)+a(x_1)}+2^{a(x_n)+b(x_1)}<x_nx_1<2^{a(x_n)+a(x_1)}+2^{a(x_n)+b(x_1)+1}+2^{a(x_1)+b(x_n)+1}+2^{b(x_n)+b(x_1)+2}<2^{a(x_n)+a(x_1)}+2^{a(x_n)+b(x_1)+2}$ .", "This is because $b(x_n)+b(x_1)+2<a(x_1)+b(x_n)+1<a(x_n)+b(x_1)+1$ .", "Therefore $b(x_nx_1)$ is either $a(x_n)+b(x_1)$ or $a(x_n)+b(x_1)+1$ , and thus $a(x_nx_1)-b(x_nx_1)\\le a(x_1)-b(x_1)$ .", "Since $e_r(\\lfloor x_1x_n\\rfloor +1)$ will eventually be greater than $a(x_1)-b(x_1)$ , we have that $q^{\\prime }(x_nx_1)=1$ for $n$ large enough, a contradiction.", "Finally, assume that $x_n\\ge 2^{a(x_n)+\\frac{1}{2}}$ for 
all $n\\ge 1$ .", "Thus $a(x_n)-c(x_n)$ goes to infinity (and as above we may assume it to be strictly increasing and such that $a(x_1)-c(x_1)>2$ ), and consequently so does $a(x_n+x_1)-c(x_n+x_1)$ .", "This means that $a(x_n+x_1)-c(x_n+x_1)>e_r(\\lfloor x_n+x_1\\rfloor +1)$ for $n$ large enough, and so $s^{\\prime }(x_n+x_1)=0$ for $n$ large enough.", "On the other hand, $2^{a(x_n)+1}-2^{c(x_n)+1}\\le x_n<2^{a(x_n)+1}-2^{c(x_n)}$ and $2^{a(x_1)+1}-2^{c(x_1)+1}\\le x_1<2^{a(x_1)+1}-2^{c(x_1)}$ .", "This implies that $2^{a(x_n)+a(x_1)+2}-2^{a(x_n)+c(x_1)+3}\\le 2^{a(x_n)+a(x_1)+2}-2^{a(x_n)+c(x_1)+2}-2^{a(x_1)+c(x_n)+2}+2^{c(x_n)+c(x_1)+2}<x_nx_1<2^{a(x_n)+a(x_1)+2}-2^{a(x_n)+c(x_1)+1}$ .", "Here the first inequality holds because $a(x_n)+c(x_1)+2>a(x_1)+c(x_n)+2$ , which implies that $2^{a(x_n)+c(x_1)+2}+2^{a(x_1)+c(x_n)+2}<2^{a(x_n)+c(x_1)+3}$ .", "Therefore $c(x_nx_1)$ is either $a(x_n)+c(x_1)+1$ or $a(x_n)+c(x_1)+2$ , and so $a(x_nx_1)-c(x_nx_1)\\le a(x_1)-c(x_1)$ .", "Since $e_r(\\lfloor x_1x_n\\rfloor +1)$ will eventually be greater than $a(x_1)-c(x_1)$ , we have that $s^{\\prime }(x_nx_1)=1$ for $n$ large enough, a contradiction.", "This concludes Case 2.", "Note that Theorem REF , together with Theorem REF , completes the proof of our main result.", "Theorem 10 There exists a finite colouring of the rational numbers with the property that there exists no infinite sequence such that the set of its finite sums and products is monochromatic and the set of primes that divide the denominators of its terms is finite.$\\square $" ], [ "Concluding remarks", "The first remaining problem is of course to understand what happens with finite sums and products in the rationals.", "The above colourings of $\\mathbb {Q}_{(k)}$ do rely heavily on the representation of numbers in a suitable base, and so do not pass to sequences from the whole of $\\mathbb {Q}$ .", "It would be very good to find `parameters' $a$ and $b$ that would allow Lemma REF to be applied, or perhaps some variant like Lemma REF .", "We have tried to find such parameters in the rationals in general, but have been unsuccessful.", "It would be extremely interesting to decide whether or not such parameters do exist.", "Neil Hindman, Department of Mathematics, Howard University, Washington D.C., 20059, USA.", "Email address: [email protected] Maria-Romina Ivan, Department of Pure Mathematics and Mathematical Statistics, Centre for Mathematical Sciences, Wilberforce Road, Cambridge, CB3 0WB, UK.", "Email address: [email protected] Imre Leader, Department of Pure Mathematics and Mathematical Statistics, Centre for Mathematical Sciences, Wilberforce Road, Cambridge, CB3 0WB, UK.", "Email address: [email protected]" ], [ "Appendix", "Here we provide the cases in the proof of Theorem REF when the colour class is $C_3$ or $C_5$ .", "Proposition 11 There does not exist an injective sequence $(x_n)_{n\\ge 1}$ in $\\mathbb {R}^+$ such that the set of all its pairwise sums and products is contained in $C_3=\\lbrace 2^k+2^l:k,l\\in \\mathbb {Z}$ and $l<k\\rbrace $ .", "Assume for a contradiction that such a sequence $(x_n)_{n\\ge 1}$ exists.", "It is easy to see that if $x<y<z$ are three positive real such that $\\lbrace x+y,x+z,y+z\\rbrace \\subseteq C_3$ then $\\lbrace x,y,z\\rbrace \\subseteq \\mathbb {Q}_{(2)}$ , and so $x_n\\in \\mathbb {Q}_{(2)}$ for all $n\\ge 1$ .", "We know that the set $\\lbrace x_n:n\\in \\mathbb {N}\\rbrace \\cap \\lbrace 2^k:k\\in \\mathbb {Z}\\rbrace $ is finite, otherwise we get a contradiction as the product of 
two powers of 2 does not lie in $C_3$ .", "We may therefore assume that no $x_n$ is a power of 2.", "Suppose first that $x_n\in (0,1)$ for all $n\ge 1$ .", "Suppose that $\lbrace s_2(x_n):n\in \mathbb {N}\rbrace $ is infinite.", "We may pick $n$ such that $s_2(x_n)<e_2(x_1)$ , but then the binary expansion of $x_1+x_n$ has at least four nonzero digits, and thus $x_1+x_n\notin C_3$ , a contradiction.", "We may therefore assume (after passing to a subsequence) that there exists $k\in \mathbb {Z}$ (with $k<0$ ) such that $s_2(x_n)=k$ for every $n\ge 1$ .", "Then each $x_n=2^k+y_n$ where $s_2(y_n)<k$ .", "Since there are only finitely many numbers with given values of $s_2(x)$ and $e_2(x)$ , by passing to a subsequence we may also assume that $e_2(y_n)>e_2(y_{n+1})$ for all $n\ge 1$ .", "We now observe that if $n<m$ then $s_2(x_n+x_m)=k+1$ and $e_2(x_n+x_m)=e_2(x_m)$ , so $x_n+x_m$ has a nonzero digit at positions $k+1$ and $e_2(x_m)$ , and thus, since it is in $C_3$ , we have $x_n+x_m=2^{k+1}+2^{e_2(x_m)}$ .", "But then $x_1+x_3=x_2+x_3$ , a contradiction.", "We may therefore assume that $x_n>1$ for all $n\ge 1$ .", "By Ramsey's theorem for pairs, we may assume either that for all $n\ne m$ we have $x_n+x_m\in \lbrace 2^k+2^l:k,l\in \mathbb {Z}$ and $0\le l<k\rbrace $ or that for all $n\ne m$ we have $x_n+x_m\in \lbrace 2^k+2^l:k,l\in \mathbb {Z}$ and $l<0<k\rbrace $ .", "Case 1.", "For all $n\ne m$ we have $x_n+x_m\in \lbrace 2^k+2^l:k,l\in \mathbb {Z}$ and $0\le l<k\rbrace $ .", "Let $y_n=\lfloor x_n\rfloor $ and $\alpha _n=x_n-y_n$ for all $n\ge 1$ .", "Given $n\ne m$ , we have $x_n+x_m=y_n+y_m+\alpha _n+\alpha _m$ , and so $\alpha _n+\alpha _m\in \lbrace 0,1\rbrace $ .", "If $n$ , $m$ and $r$ are pairwise distinct and $\alpha _n,\alpha _m,\alpha _r\notin \lbrace 0,\frac{1}{2}\rbrace $ , then some two are in $(0,\frac{1}{2})$ or some two are in $(\frac{1}{2},1)$ , a contradiction.", "Hence, for all but at most two values of $n$ , we have $\alpha _n\in \lbrace 0,{1\over 2}\rbrace $ .", "If $n\ne m$ and $\alpha _n=\alpha _m=\frac{1}{2}$ , then $x_n\cdot x_m\notin \mathbb {N}$ , again a contradiction.", "We may therefore assume that $\alpha _n=0$ for all $n\ge 1$ .", "Since no $x_n$ is a power of 2, $\lbrace e_2(x_n):n\in \mathbb {N}\rbrace $ is finite.", "The reasoning is similar to that presented above: if $e_2(x_n)>s_2(x_1)$ then the binary expansion of $x_1+x_n$ has at least four nonzero digits.", "We may therefore assume that there exists $k$ such that $e_2(x_n)=k$ for all $n\ge 1$ .", "By passing to a subsequence, we may further assume that either each $x_n$ ends in 01 or each $x_n$ ends in 11, so that $e_2(x_n+x_m)=k+1$ .", "Moreover, we may also assume that $s_2(x_n)<s_2(x_{n+1})$ for all $n\ge 1$ .", "We now see that if $n<m$ then $s_2(x_n+x_m)=s_2(x_m)$ or $s_2(x_n+x_m)=s_2(x_m)+1$ .", "Pick $i\ne j$ in $\lbrace 1,2,3\rbrace $ and $t\in \lbrace 0,1\rbrace $ such that $s_2(x_i+x_4)=s_2(x_4)+t$ and $s_2(x_j+x_4)=s_2(x_4)+t$ .", "Since $k+1$ and $s_2(x_4)+t$ , with $k+1<s_2(x_4)+t$ , are two positions of nonzero digits, we must have $x_i+x_4=x_j+x_4=2^{s_2(x_4)+t}+2^{k+1}$ , a contradiction.", "Case 2.", "For all $n\ne m$ we have $x_n+x_m\in \lbrace 2^k+2^l:k,l\in \mathbb {Z}$ and $l<0<k\rbrace $ .", "In this case, for all $n\ne m$ , $x_n+x_m$ has one nonzero digit to the right of the decimal point and one nonzero digit to the left of the decimal point.", "Suppose first that $\lbrace e_2(x_n):n\in \mathbb {N}\rbrace $ is unbounded.", "By
passing to a subsequence, we may assume that $0>e_2(x_1)>e_2(x_2)>e_2(x_3)$ .", "This implies that $x_1+x_3$ and $x_2+x_3$ each have a nonzero digit in position $e_2(x_3)$ and $x_1+x_2$ has a nonzero digit in position $e_2(x_2)$ .", "Thus there exist $y,z,w\in \mathbb {N}$ such that $x_1+x_3=y+2^{e_2(x_3)}$ , $x_2+x_3=z+2^{e_2(x_3)}$ , and $x_1+x_2=w+2^{e_2(x_2)}$ .", "Clearly we have that $y\ne z$ .", "If $z>y$ , then $x_2-x_1=z-y$ so $2x_2=z-y+w+2^{e_2(x_2)}$ , whence $e_2(x_2)=e_2(2x_2)=e_2(x_2)+1$ , a contradiction.", "If $y>z$ , then $x_1-x_2=y-z$ , so $2x_1=y-z+w+2^{e_2(x_2)}$ , giving $e_2(x_2)=e_2(2x_1)=e_2(x_1)+1>e_2(x_2)$ , again a contradiction.", "Hence $\lbrace e_2(x_n):n\in \mathbb {N}\rbrace $ is bounded.", "Thus $\lbrace s_2(x_n):n\in \mathbb {N}\rbrace $ has to be unbounded.", "We may therefore assume that there exists $k<-1$ such that $e_2(x_n)=k$ for all $n\ge 1$ .", "(If $e_2(x_n)=e_2(x_m)=-1$ then $x_n+x_m\in \mathbb {N}$ .)", "By passing to a subsequence, we may also assume that all terms of the sequence have the same digit in position $k+1$ , and for all $n\ne m$ we have $e_2(x_n+x_m)=k+1$ .", "We may further assume that $s_2(x_1)<s_2(x_2)<s_2(x_3)<s_2(x_4)$ .", "For $i\in \lbrace 1,2,3\rbrace $ , $x_i+x_4$ has a nonzero digit in position $s_2(x_4)$ or in position $s_2(x_4)+1$ .", "Pick $i\ne j$ in $\lbrace 1,2,3\rbrace $ and $t\in \lbrace 0,1\rbrace $ such that $x_i+x_4$ and $x_j+x_4$ each have a nonzero digit in position $s_2(x_4)+t$ .", "Then $x_i+x_4=x_j+x_4=2^{s_2(x_4)+t}+2^{k+1}$ , a contradiction.", "Proposition 12 There does not exist an injective sequence $(x_n)_{n\ge 1}$ in $\mathbb {R}^+$ such that the set of all its pairwise sums and products is contained in $C_5=\lbrace 2^{k+1}(1-2^{l-k})^{\frac{1}{2}}:k,l\in \mathbb {Z}$ and $l<k\rbrace $ .", "Assume for a contradiction that such a sequence $(x_n)_{n\ge 1}$ exists.", "Let $\alpha $ , $\beta $ , $\gamma $ be three numbers in $C_5$ such that $x_1+x_2=\alpha $ , $x_1+x_3=\beta $ and $x_2+x_3=\gamma $ .", "Let also $x_1x_2=\mu $ , $x_1x_3=\nu $ and $x_2x_3=\eta $ , where $\mu $ , $\nu $ and $\eta $ are in $C_5$ .", "We therefore have $x_1^2=\frac{\mu \cdot \nu }{\eta }$ , whence $x_1^4$ is rational.", "Case 1.", "Suppose that $\alpha \cdot \beta $ , $\alpha \cdot \gamma $ and $\beta \cdot \gamma $ are all irrational.", "Since $\alpha ^2$ , $\beta ^2$ and $\gamma ^2$ are rational, $\alpha /\beta $ , $\alpha /\gamma $ and $\beta /\gamma $ are all irrational as well.", "It is easy to show that if $K$ and $R$ are two fields such that $\mathbb {Q}\subset K\subset R$ and $\delta \in R\setminus K$ is such that $\delta ^2\in \mathbb {Q}$ , then $K(\delta )=\lbrace a+b\cdot \delta :a,b\in K\rbrace $ .", "Using this fact, it is straightforward to show that $\beta \notin \mathbb {Q}(\alpha )$ , $\alpha \notin \mathbb {Q}(\beta )$ and $\gamma \notin \mathbb {Q}(\alpha ,\beta )$ .", "Now, we know that $x_1^4$ is rational.", "On the other hand, $x_1=\frac{\alpha +\beta -\gamma }{2}$ , and so $16\cdot x_1^4=(\alpha +\beta -\gamma )^4=r_0+r_1\cdot \alpha \cdot \beta -r_2\cdot \alpha \cdot \gamma -r_3\cdot \beta \cdot \gamma $ , where $r_0$ , $r_1$ , $r_2$ , and $r_3$ are positive rationals.", "It then follows that $\gamma \cdot (r_2\cdot \alpha + r_3\cdot \beta )=-16\cdot x_1^4 +r_0+r_1\cdot \alpha \cdot \beta $ , which implies that $\gamma $ is in $\mathbb {Q}(\alpha ,\beta )$ , a
contradiction.", "(For the conscientious reader, the coefficients are $r_0=\\alpha ^4+\\beta ^4+\\gamma ^4+6\\cdot \\alpha ^2\\cdot \\beta ^2+6\\cdot \\alpha ^2\\cdot \\gamma ^2+6\\cdot \\beta ^2\\cdot \\gamma ^2$ , $r_1=4\\cdot \\alpha ^2+4\\cdot \\beta ^2+12\\cdot \\gamma ^2$ , $r_2=4\\cdot \\alpha ^2+4\\cdot \\gamma ^2+12\\cdot \\beta ^2$ and $r_3=4\\cdot \\beta ^2+4\\cdot \\gamma ^2+12\\cdot \\alpha ^2$ .)", "Case 2.", "Suppose now that $\\alpha \\cdot \\beta $ is a rational number, say $q$ .", "It is clear that $q>0$ .", "We then have $(x_1+x_2)(x_1+x_3)=q=x_1^2+x_1x_3+x_1x_2+x_2x_3=x_1^2+\\mu +\\nu +\\eta $ .", "We now observe that, by the definition of $C_5$ , all of its elements are square roots of positive rational numbers.", "Hence there exist three positive rational numbers $q_1$ , $q_2$ and $q_3$ , such that $\\mu =\\sqrt{q_1}$ , $\\nu =\\sqrt{q_2}$ and $\\eta =\\sqrt{q_3}$ .", "Moreover, since $x_1^2=\\frac{\\mu \\cdot \\nu }{\\eta }$ , it follows that $x_1^2$ is also a square root of a positive rational.", "More precisely $x_1^2=\\sqrt{q_4}$ where $q_4=\\frac{q_1\\cdot q_2}{q_3}$ .", "We therefore have $q=\\sqrt{q_1}+\\sqrt{q_2}+\\sqrt{q_3}+\\sqrt{q_4}$ .", "Let $M=\\mathbb {Q}(\\sqrt{q_1}, \\sqrt{q_2}, \\sqrt{q_3}, \\sqrt{q_4})$ , and let $d$ be its degree over $\\mathbb {Q}$ .", "On the one hand, the trace of $q$ is $d\\cdot q$ , and on the other had it is the sum of $d\\cdot \\sqrt{q_i}$ for those $q_i$ that are perfect squares.", "This is because, for any positive rational $t$ , the trace of $\\sqrt{t}$ is 0 if $t$ is not a perfect square, and $d\\sqrt{t}$ if $t$ is a perfect square.", "The only way to have equality in the above is if all the $q_i$ are perfect squares, but then $x_1x_2\\in C_5$ is rational, a contradiction." ] ]
2210.07831
[ [ "Learning to Autonomously Reach Objects with NICO and Grow-When-Required\n Networks" ], [ "Abstract The act of reaching for an object is a fundamental yet complex skill for a robotic agent, requiring a high degree of visuomotor control and coordination.", "In consideration of dynamic environments, a robot capable of autonomously adapting to novel situations is desired.", "In this paper, a developmental robotics approach is used to autonomously learn visuomotor coordination on the NICO (Neuro-Inspired COmpanion) platform, for the task of object reaching.", "The robot interacts with its environment and learns associations between motor commands and temporally correlated sensory perceptions based on Hebbian learning.", "Multiple Grow-When-Required (GWR) networks are used to learn increasingly more complex motoric behaviors, by first learning how to direct the gaze towards a visual stimulus, followed by learning motor control of the arm, and finally learning how to reach for an object using eye-hand coordination.", "We demonstrate that the model is able to deal with an unforeseen mechanical change in the NICO's body, showing the adaptability of the proposed approach.", "In evaluations of our approach, we show that the humanoid robot NICO is able to reach objects with a 76% success rate." ], [ "Introduction", "To reach for an object, a robotic agent must first locate the object in its surrounding space, and then move its hand towards it.", "The robotic agent needs to process the raw sensory data, while at the same time coordinating the motors, requiring a high degree of visuomotor control.", "Furthermore, dynamic elements in the environment may introduce temporary or permanent changes in the surrounds or in the robotic agent's bodily characteristics, which can strongly affect the perceived information of the agent [1].", "Therefore, a robot must be able to continuously learn from its interactions with the environment, to react and adapt to unexpected events [2].", "To address this issue, we propose a neural model for robotic visuomotor learning based on a growing self-organizing model [6], that continually learns the relationship between the visual and motoric information streams acquired through the body-environment interaction.", "We train and evaluate the approach on the humanoid robot NICO, the Neuro-Inspired COmpanion [20], [21] in a virtual environment, depicted in Figure REF .", "We show the adaptability of the approach, and investigate if the robot is able to autonomously learn to reach objects.", "We investigate how the internal model copes with mechanical or environmental changes over time.", "Figure: Left: Training curriculum for our embodied neural models.", "Right: Experimental setup with the humanoid robot NICO, reaching the target within the reachable space.Our approach is based on the developmental robotics research paradigm, which investigates behavioral components established from developmental sciences [3] to design computational models that can be embedded into robotic agents.", "A central idea of developmental robotics is the embodiment, such that the robotic agent has a body that mediates perception, affects behavior, and shapes the way it interacts with its surrounding environment [3], [4].", "To learn visuomotor control, our computational model is embedded into a humanoid robotic agent—it learns based on the sensorimotor information generated from the agent's bodily interaction with the external environment.", "Our model first learns individual unimodal information 
maps for each of the various sensory and motor data streams, then learns associations to create multimodal connections with Hebbian learning between the different maps.", "The learning stage consists of three parts for the acquisition of reaching skills with the humanoid robot NICO, which are developed autonomously, initially through random motor babbling.", "First, gaze control is learned to situate NICO in the surrounding environment, then arm control is learned, and in the final learning step, transfer learning from gaze and arm control is used to develop eye-hand coordination.", "In summary, this paper contributes a novel continual learning approach for a robotic reach-for-grasp task that is based on highly adaptable self-organizing Grow-When-Required networks.", "In our experiments, the approach enables NICO not only to learn to reach for objects with a 76% accuracy, but also to autonomously adjust to motor errors in a continuous learning scenario." ], [ "Related Work", "Visuomotor learning emerges from long-term interactions with the surrounding environment.", "Through this body-environment interaction, the robotic agent gains multimodal information from its various sensors, which have been proposed to be integrated into a body schema [7].", "The body schema allows integrating information from various sensory and motor streams to keep an up-to-date representation of the positions of the different body parts in space.", "These body schemas represent an internal model for the robotic agent, which can be fundamental for internal simulation processes and bridge the gap between low-level sensorimotor representations and basic cognitive skills [1], [22].", "Inspired by the studies on body representations suggesting the existence of topographic maps in the brain [10], [11], which have been reported to be fundamental for sensorimotor processing and learning [12], interest in developing self-organizing computational models has been growing [1].", "These multimodal body representations self-organize and adapt over the sensorimotor experience acquired through the body-environment interaction.", "Throughout the recent research on the autonomous acquisition of reaching in humanoid robots, self-organizing maps (SOM) [5] or different variants of SOMs have been in the focus of research; see Schillaci et al. [1] for an overview.", "Essentially all the listed approaches have in common that the network structure needs to be defined a priori, which makes them unsuitable for dynamic environments or continuous learning tasks [6].", "Visuomotor learning should ideally be considered as a continuous learning task, in which the internal model adds new neurons whenever the model cannot sufficiently match the present sensorimotor experiences.", "Our experimental design and training curriculum for our embodied neural models are inspired by human infant development: Infants show a remarkable talent for developing visuomotor control and coordination at early stages, as well as rapid cognitive growth [13].", "As summarized by Law et al. [13], [14], [15], we find similar main stages of visuomotor control and coordination for robotic agents, i.e., gaze control, arm control and eye-hand coordination.", "Gaze control is the ability to locate a salient region, or region of interest, in the perceived scene and then direct the visual system towards it, such that the salient region is centered.", "The robotic agent becomes situated, as it then has the ability to react and interact with its surrounding environment [17].",
"Arm control, on the other hand, is the ability to move the hand towards a location within the reachable space.", "The robotic agent needs to be able to move the hand towards a point in space at which a target is located.", "Eye-hand coordination is the coordinated movement of the gaze and the hand, which is a required motor skill for pointing and/or reaching behaviors [18].", "The gaze and the hand should be able to move freely, but should be associated with each other such that eye-hand correlation and coordination can be supported [15].", "The hand's location needs to be associated with a corresponding direction of gaze, which is accomplished by first moving the hand towards a location in space and then directing the gaze towards the hand.", "The reach action is the exact inverse action of the training process [19], that is, to first direct the gaze towards a visual stimulus in the perceived scene, e.g., an object, and then move the hand towards it." ], [ "The Grow-When-Required Network", "The Grow-When-Required (GWR) network by Marsland et al.", "[6] is a sheet-like self-organizing growing neural network, which adds neurons to the network whenever it cannot sufficiently match the input.", "The GWR has a large collection of neurons, each with their associated weight vectors and lateral connections, which connect neurons that represent similar perceptions.", "Given a learning input $x(t)$ presented to the network $S$ , the best matching unit (BMU) $b$ and second BMU $s$ are computed as follows: $b = \\underset{i \\: \\in \\: S}{\\operatornamewithlimits{argmin}} { \\: || x(t) - w_i || }, \\\\s = \\underset{i \\: \\in \\: S \\setminus \\lbrace b\\rbrace }{\\operatornamewithlimits{argmin}} { \\: || x(t) - w_i || },$ where a connection $(b, s)$ is created if there is none yet, otherwise the age of the connection is set to zero.", "Each neuron is equipped with a habituation counter $h_i \\in [0,1]$ , counting how frequently a neuron has been selected as the BMU.", "The habituation at a time step for the BMU $b$ and its neighbors $n$ is calculated as $h_{i}(t + 1) = \\tau _i * 1.05 * (1 - h_{i}(t)) - \\tau _i,$ with $i \\in \\lbrace b, n\\rbrace $ , where $\\tau _i$ is a constant that regulates a decreasing response and $\\tau _n < \\tau _b$ .", "The activity of the BMU $b$ is computed as $a_{b} = e^{- || x(t) - w_b || },$ where $w_b$ is the weight vector associated with the BMU $b$ .", "Given a predefined activity threshold $a_T$ and habituation threshold $h_T$ , a new neuron is added into the network if the activity is below the activity threshold $a_{b} < a_T$ and the habituation counter is below the habituation threshold $h_{b} < h_T$ .", "The activity threshold $a_T$ and the habituation threshold $h_T$ regulate how many neurons and how fast neurons are added to the network.", "The weights of the BMU $b$ and all its neighbors $n$ are updated using $w_b = \\epsilon _b * h_{b} * (x(t) - w_b), \\\\w_n = \\epsilon _n * h_{n} * (x(t) - w_n),$ where $\\epsilon _b$ and $\\epsilon _n$ are fixed learning parameters such that ${0 < \\epsilon _n < \\epsilon _b < 1}$ .", "Lastly, the algorithm removes all connections with an age larger than the maximum allowed age and removes all isolated neurons.", "These steps are repeated for a set number of epochs, or until a stopping criterion is reached." 
], [ "Neural Model for Visuomotor Learning", "As a robot interacts with its environment, highly correlated motor $M$ and sensory $S$ data streams are perceived.", "For our proposed model, we start by training two GWRs, one for the motor modalities and one for the sensor modalities.", "The model hyperparameters are summarized in Table REF .", "Once the GWRs are trained, multimodal associations in the form of neural connections are made between the two GWRs by iterating through the sensorimotor training samples and searching for the pairs of best matching units in each map: $s = \\underset{i}{\\operatornamewithlimits{argmin}} { \\: || S - w_i || } \\\\m = \\underset{j}{\\operatornamewithlimits{argmin}} { \\: || M - w_j || }$ After the BMU is found in both maps, a direct connection is built via a positive Hebbian learning rule based on the activities $a_s$ , $a_m$ of the two BMUs in their respective map: $w_{sm}(t + 1) = w_{sm}(t) + \\alpha * a_{s} * a_{m}, $ where $w_{sm}$ is a direct weight connecting neuron $s$ in the sensory map and neuron $m$ in the motor map.", "The Hebbian learning parameter $\\alpha $ is set to $0.5$ throughout all the experiments.", "Once the system has fully iterated through the sensorimotor training samples and generated the required multimodal associations, the model can be queried by finding the sensory BMU that best matches an observed $S$ , and determining which neuron in the motor map is co-activated most strongly.", "This can then be mapped to a corresponding motor command.", "Overall, the proposed neural architecture consists of three submodules for visuomotor learning, depicted in Figure REF .", "Gaze control is learned first to situate a robotic agent in the surrounding environment, then arm control is learned, and finally, both modules are combined to learn eye-hand coordination.", "Table: Training parameters for each GWR." ], [ "The Humanoid Robot NICO", "The experiments are realized with a virtual NICO humanoid robot [20], a developmental robot platform with rich sensory and motor capabilities, enabling embodied neuro-cognitive models.", "To enable human-like manipulation, each of NICO's arms have 6 degrees of freedom (DoF), in which 3 DoF are for the shoulder, 1 DoF for the movement of the elbow, and 2 DoF for wrist rotation and flexion.", "Furthermore, NICO's neck has 2 DoF to control the head's yaw and pitch, i.e.", "horizontal and vertical movements.", "The visual modality is realized in the form of two parallel cameras mounted in NICO's head.", "NICO has a further modality to provide a proprioceptive sense for all DoF, which is the information about motor values, movements, position, and forces." 
], [ "Environmental Setup", "For the experiment, a virtual, simplified realization of the humanoid robot NICO is provided in the robot simulation framework CoppeliaSim [23] via the NICO API.", "The humanoid robot NICO is seated on a 25 cm long cuboid in the virtual environment.", "The target to be reached is implemented as a red ball of 5 cm in diameter placed in front of NICO, such that it is salient from the surrounding environment.", "As the experiment evaluates the capability of the proposed internal model for the task of reaching, only a single target is used.", "To reduce the dimensionality of the data and, therefore, the complexity of the experiments, only the camera in the right eye is used for the visual modality.", "Furthermore, only the right arm with four joints is used to evaluate the task of reaching, as the two joints that rotate the wrist do not influence the arm's position in space." ], [ "Data Collection", "To collect the data and, therefore, the necessary information for visuomotor coordination, we adopt a random motor babbling strategy.", "For each learning stage, the following strategies were designed: 1) Gaze dataset: The target is initially placed in the center of the visual system of NICO, and random motor babbling is performed by randomly selecting a motor command $M_{\\Delta head}$ while observing the change in the sensory data.", "The sensory data $S_{head}$ yields the centroid of the detected target in the image plane by first downscaling the image by a factor of 8 to $80 \\times 60 \\times 3$ to reduce the dimensionality of the sensory data [24], and then applying a color thresholding algorithm to highlight the target.", "The sensorimotor experience is described by the pair $(S_{head}, -M_{\\Delta head})$ , which combines the current centroid $S_{head}$ with the additive inverse of $M_{\\Delta head}$ , as the exact opposite values of the current motor values are needed to bring the target again into the center of the visual system.", "We record a dataset over 1000 iterations of this process, while also augmenting the observed data via interpolation, yielding 6410 samples of sensorimotor experience.", "2) Arm dataset: To collect the dataset for arm control, random motor babbling is performed on the joints of NICO's right arm.", "The sensorimotor experience is described by the pair $(S_{arm}, M_{arm})$ , where $S_{arm}$ is the hand's current position in Cartesian space relative to a fixed point in the NICO's torso, and $M_{arm}$ are the current motor joint values.", "These datapoints are $(x, y, z)$ -coordinates nominally given in centimeters for convenience.", "We record the dataset for 1000 iterations, while also interpolating the data, yielding 14910 samples of sensorimotor experience.", "3) Eye-hand dataset: The dataset for eye-hand coordination is recorded by randomly moving the arm within the reachable workspace.", "In each iteration, a random motor command $M_{arm}$ is executed, and the internal model for gaze control is subsequently used to bring the hand into the center of the visual system.", "The hand is visually located by placing the red ball in the open grasp of the hand, i.e.", "such that the hand would grasp the ball if it were closed.", "If the hand moves out of the visual system, such that the hand cannot be found by the gaze control model, a simple algorithmic behavior moves NICO's head around until a salient region is found, or the iteration is canceled, e.g.", "in order to deal with the situation that the hand moved out of the gaze space.", 
"The sensorimotor experience is described by a triplet of the current Cartesian location of the hand $S_{arm}$ , the current motor values for the joints of the right arm $M_{arm}$ and the current motor values for the joints of the head $M_{head}$ .", "The dataset is recorded for 1000 iterations, yielding as many triplets of sensorimotor experience.", "4) Environmental change: To test the proposed model's capability to react to environmental changes, a mechanical change in NICO's body is simulated in the form of a damaged joint, namely the upper joint in the right arm responsible for the arm's inward and outward rotation.", "The joint is fixed in its zero position, which alters the reachable space.", "The dataset recording was conducted in the same manner as in the eye-hand coordination experiment.", "However, the dataset collection only lasted for 500 iterations, which is half the number used for the eye-hand coordination training." ], [ "Gaze Control", "The hyperparameter optimization led to values of ${a_T = 0.5}$ , ${h_T = 0.7}$ with 3306 neurons in total in the sensory map, and values of $a_T = 0.9$ , $h_T = 0.3$ with 6000 neurons in total in the motor map.", "In the sensory map, 3014 neurons built a cross-modal connection, while 292 have no connection to the motor map.", "In the motor map, 4633 neurons built a connection, while 1367 neurons have no connection to the sensory map.", "The model is tested for 1000 iterations for the ability to control the gaze with the NICO in the simulation environment, repeated 5 times.", "The median Euclidean error of the gaze centering is $2.2(\\pm 0.0)$ pixels, an overview of the results is in Table REF .", "The results for gaze control are limited by the resolution of the used images, and higher resolution images would allow for more fine-grained head motor control, but also increase the complexity of our model.", "However, our model successfully learns to control the gaze and move the head to center the target in the visual field, and is therefore well suited for the reaching task." ], [ "Arm Control", "For the arm control model, the hyperparameter optimization leads to $a_T = 0.5$ , $h_T = 0.7$ with 6000 neurons in the sensory map, and $a_T = 0.1$ , $h_T = 0.5$ with 6000 neurons in the motor map.", "In the sensory GWR, 299 neurons have no connection, while 119 neurons have no connection in the motor GWR.", "The sensorimotor experience and the weights of the neural model are depicted in Figure REF a.", "The internal model is tested for 1000 iterations for the ability to control the arm with the humanoid NICO in the simulation environment, repeated 5 times.", "The median Euclidean error is $1.10(\\pm 0.03)$ cm, the lowest error is $0.50(\\pm 0.01)$ cm, and the highest error is $25.96(\\pm 1.95)$ cm.", "The error is mainly influenced by the maximum number of neurons, which was capped at 6000, which was set to prevent overfitting and encourage the network to generalize.", "Our model learned to control the arm and move the hand towards the desired location in the surrounding space.", "The median errors are deemed to be within tolerance for the reaching task.", "Figure: Eye-hand coordination learning stage." 
], [ "Eye-Hand Coordination", "The hyperparameter optimization for the GWR trained with the absolute head motor values leads to values of $a_T = 0.5$ , $h_T = 0.9$ , and a network with 1000 neurons.", "Transfer learning is used on the GWRs of the arm control model with the acquired sensorimotor experience for eye-hand coordination.", "The sensorimotor experience and the weights of the neural model for eye-hand coordination are depicted in Figure REF b.", "The number of neurons in the arm control model decreases to 5925 and 5937 neurons respectively for the sensory and motor maps.", "The model for eye-hand coordination is then built between the motor map trained on the absolute head motor values, and the sensory map trained on the Cartesian hand position values, which results in 950 neurons in the absolute head motor map, and 791 neurons in the Cartesian map having built a connection, respectively.", "The model is tested for 1000 iterations, repeated 5 times, for the ability to reach the target with the humanoid robot NICO.", "The median Euclidean error from the gaze control model is $2.23(\\pm 0.0)$ pixels, while the median Euclidean error for the arm control model is $2.44(\\pm 0.10)$ cm.", "The minimum error is $0.68(\\pm 0.95)$ cm, while the maximum error is $16.63(\\pm 8.01)$ cm.", "The highest errors stem from NICO reaching too far or too short along the depth axis, which is in part due to motoric redundancy between the visually perceived target and arm motor values.", "This redundancy stems from the unavailability of depth information for a target, as multiple different positions on the depth axis result in the same visually perceived ball location, and therefore also in the same absolute head motor values.", "The target is successfully reached $762 (\\pm 6.72)$ times, judged by the condition of graspability with a tolerance of $\\pm 3$ cm.", "Overall, this corresponds to a $76.2(\\pm 0.67)\\%$ success rate." 
], [ "Environmental Change", "To test the ability of our model to adapt and learn continually, we simulate a mechanical change in the NICO robot by locking its shoulder motor in a fixed position after having trained with full-motion capabilities.", "The internal model is tested for 500 iterations, repeated 5 times, for the ability to reach the target after the shoulder was modified.", "The median Euclidean error from the gaze control model is $2.16(\\pm 0.09)$ pixels.", "We then allow the model to adapt and train with the new kinematics.", "For the adapted model, the median Euclidean error is $5.08(\\pm 1.76)$ cm, while the median Euclidean error for the original model is $6.93(\\pm 1.57)$ cm.", "The minimum error for the adapted model is $1.61(\\pm 3.26)$ cm, while the maximum error is $26.70(\\pm 8.84)$ cm.", "For the original model, the minimum error is $1.93(\\pm 3.87)$ cm, and the maximum error $26.83(\\pm 8.24)$ cm.", "With the original model, the target is reached $206.9(\\pm 6.64)$ times, which corresponds to a $20.68(\\pm 0.66)\\%$ success rate.", "With the adapted model, the target is successfully reached $230.8(\\pm 11.17)$ times, which corresponds to a $23.08(\\pm 1.11)\\%$ success rate.", "While the overall decreased success in reaching can be mainly attributed to the locked shoulder joint that makes it impossible to reach certain poses, the trained model reaches the target $23.9$ times more on average than the untrained model.", "This result shows that our model successfully adapts to severe changes in the robot's mechanics and continuously learns from interaction with the environment." ], [ "Conclusion", "In this paper, we presented and investigated an approach that enables the autonomous learning of object reaching with the NICO humanoid robot.", "The necessary sensorimotor information was acquired purely through body-environment interactions.", "In contrast to supervised learning approaches, the reach action is addressed and learned indirectly via a developmental approach by first learning how to control the gaze to situate NICO in the environment, then learning how to control the arm, and then finally learning eye-hand coordination.", "This enables NICO to associate hand positions with a direction of gaze, of which the inverse task is the reach action.", "Grow-When-Required models were trained on and capture the temporal co-occurence of the temporal sensorimotor information through direct connections between the networks via Hebbian learning.", "The approach of modeling the internal model with GWR networks allows each network to add neurons whenever the sensorimotor information cannot be sufficiently matched by any neuron, which results in a continuous growth of the network.", "The results on the task of reaching without any prior information indicate that the internal model with GWRs learns the unimodal information and captures the associations between the temporally correlated sensorimotor information by enabling NICO to reach a target with a median accuracy of $2.44(\\pm 0.10)$ cm.", "The observed reach actions are relatively coarse, which is in line with findings from developmental studies where human infants try coarse reach actions with most of them failing, as the motoric competence and depth perception that would be required for precise tasks is not fully developed yet [2], [14], [13].", "Furthermore, we demonstrate that the internal model is adaptable to environmental or mechanical changes—an already trained model was able to successfully adapt to a locked 
shoulder joint.", "In future work, we will implement overlapping neural structures [25] in the GWRs to improve the generalization ability, as well as dimensionality reduction techniques and pruning methods to lower the complexity of each network to improve performance and learning time.", "For the perception of depth, we extend our model to acquire feedback from the arm control model, where the sensorimotor information is used to predict the current distance between the eye and the hand.", "Furthermore, we will implement our proposed model into the physical NICO, which needs more sophisticated exploration strategies like goal babbling and/or reward-driven self-organization [26] mechanisms to improve the dataset generation, and compare the reach performance and the adaptability to existing solutions." ] ]
2210.07851
[ [ "A Consistent and Differentiable Lp Canonical Calibration Error Estimator" ], [ "Abstract Calibrated probabilistic classifiers are models whose predicted probabilities can directly be interpreted as uncertainty estimates.", "It has been shown recently that deep neural networks are poorly calibrated and tend to output overconfident predictions.", "As a remedy, we propose a low-bias, trainable calibration error estimator based on Dirichlet kernel density estimates, which asymptotically converges to the true $L_p$ calibration error.", "This novel estimator enables us to tackle the strongest notion of multiclass calibration, called canonical (or distribution) calibration, while other common calibration methods are tractable only for top-label and marginal calibration.", "The computational complexity of our estimator is $\\mathcal{O}(n^2)$, the convergence rate is $\\mathcal{O}(n^{-1/2})$, and it is unbiased up to $\\mathcal{O}(n^{-2})$, achieved by a geometric series debiasing scheme.", "In practice, this means that the estimator can be applied to small subsets of data, enabling efficient estimation and mini-batch updates.", "The proposed method has a natural choice of kernel, and can be used to generate consistent estimates of other quantities based on conditional expectation, such as the sharpness of a probabilistic classifier.", "Empirical results validate the correctness of our estimator, and demonstrate its utility in canonical calibration error estimation and calibration error regularized risk minimization." ], [ "Introduction", "Deep neural networks have shown tremendous success in classification tasks, being regularly the best performing models in terms of accuracy.", "However, they are also known to make overconfident predictions [14], which is particularly problematic in safety-critical applications, such as medical diagnosis [11], [12] or autonomous driving [5], [48].", "In many real world applications it is not only the predictive performance that is important, but also the trustworthiness of the prediction, i.e., we are interested in accurate predictions with robust uncertainty estimates.", "To this end, it is necessary that the models are uncertainty calibrated, which means that, for instance, among all cells that have been predicted with a probability of 0.8 to be cancerous, 80% should indeed belong to a malignant tumor.", "The field of uncertainty calibration has been mostly focused on binary problems, often considering only the confidence score of the predicted class.", "However, this so called top-label (or confidence) calibration [14]) is often not sufficient in multiclass settings.", "A stronger notion of calibration is marginal (or class-wise) [24], that splits up the multiclass problem into $K$ one-vs-all binary ones, and requires each to be calibrated according to the definition of binary calibration.", "The most strict notion of calibration, called canonical (or distribution) calibration [4], [22], [52], requires the whole probability vector to be calibrated.", "The curse of dimensionality makes estimation of this form of calibration difficult, and current estimators, such as the binned estimator $ECE^{bin}$ [35], MMCE [26] and Mix-n-Match [60], have computational or statistical limitations that prevent them from being successfully applied in this important setting.", "Specifically, the binned estimator is sensitive to the binning scheme and is asymptotically inconsistent in many situations [52], MMCE is not a consistent estimator of $L_p$ calibration error and 
Mix-n-Match, although consistent, is intractable in high dimensions, and the authors did not implement it in more than one dimension.", "We propose a tractable, differentiable, and consistent estimator of the expected $L_p$ canonical calibration error.", "In particular, we use kernel density estimates (KDEs) with a Beta kernel in binary classification tasks and a Dirichlet kernel in the multiclass setting, as these kernels are the natural choices to model densities over a probability simplex.", "In Table REF , we summarize and compare the properties of our $ECE^{KDE}$ estimator and other commonly used estimators.", "$ECE^{KDE}$ scales well to higher dimensions and it is able to capture canonical calibration with $\\mathcal {O}(n^2)$ complexity.", "Table: Properties of $ECE^{KDE}$ and other commonly used calibration error estimators.", "Our contributions can be summarized as follows: 1.", "We develop a tractable estimator of canonical $L_p$ calibration error that is consistent and differentiable.", "2.", "We demonstrate a natural choice of kernel.", "Due to the scaling properties of Dirichlet kernel density estimation, evaluating canonical calibration becomes feasible in cases that cannot be estimated using other methods.", "3.", "We provide a second-order debiasing scheme to further improve the convergence of the estimator.", "4.", "We empirically evaluate the correctness of our estimator and demonstrate its utility in the task of calibration regularized risk minimization on a variety of network architectures and several datasets." ], [ "Related Work", "Calibration of probabilistic predictors has long been studied in many fields.", "This topic has gained attention in the deep learning community since [14] observed that modern neural networks are poorly calibrated and tend to give overconfident predictions due to overfitting on the NLL loss.", "The surge of interest has resulted in many calibration strategies, which can be split into two general categories that we discuss subsequently.", "Post-hoc calibration strategies learn a calibration map of the predictions from a trained predictor in a post-hoc manner, using a held-out calibration set.", "For instance, Platt scaling [41] fits a logistic regression model on top of the logit outputs of the model.", "A special case of Platt scaling that fits a single scalar, called temperature, has been popularized by [14] as an accuracy-preserving, easy-to-implement and effective method to improve calibration.", "However, it has the undesired consequence that it clamps the high confidence scores of accurate predictions [26].", "Similar approaches for post-hoc calibration include histogram binning [58], isotonic regression [57], Bayesian binning into quantiles [34], Beta [23] and Dirichlet calibration [24].", "Recently, [15] proposed a binning-free calibration measure based on the Kolmogorov-Smirnov test.", "In this approach, the recalibration function is obtained via spline-fitting, rather than minimizing a loss function on a calibration set.", "[29] integrate ensemble-based and post-hoc calibration methods in an accuracy-preserving truth discovery framework.", "[62] introduce a new notion of calibration, called decision calibration; however, they do not propose an estimator of calibration error with statistical guarantees.", "Trainable calibration strategies integrate a differentiable calibration measure into the training objective.", "One of the earliest approaches is regularization by penalizing low-entropy predictions [40].", "Similarly to temperature scaling, 
it has been shown that entropy regularization needlessly suppresses high confidence scores of correct predictions [26].", "Another popular strategy is MMCE (Maximum Mean Calibration Error) [26], where the entropy regularizer is replaced by a kernel-based surrogate for the calibration error that can be optimized alongside NLL.", "It has been shown that label smoothing [49], [32], i.e.", "training models with a weighted mixture of the labels instead of one-hot vectors, also improves model calibration.", "[27] propose to add the difference between predicted confidence and accuracy as an auxiliary term to the cross-entropy loss.", "Focal loss [31], [28] has recently been empirically shown to produce better calibrated models than many of the alternatives, but does not estimate a clear quantity related to calibration error.", "[2] derive a differentiable approximation to the commonly used binned estimator of calibration error by computing differentiable approximations to the 0/1 loss and the binning operator.", "However, this approach does not eliminate the dependence on the binning scheme, and it is not clear how it can be extended to calibration of the whole probability vector.", "Kernel density estimation [39], [44], [46] is a non-parametric method to estimate a probability density function from a finite sample.", "[60] propose a KDE-based estimator of the calibration error (Mix-n-Match) for measuring calibration performance.", "Although they demonstrate consistency of the method, it requires a numerical integration step that is infeasible in high dimensions.", "In practice, they implemented only binary calibration, not canonical calibration.", "Although many calibration strategies have been empirically shown to decrease the calibration error, very few of them are based on an estimator of miscalibration.", "Our estimator is the first consistent, differentiable estimator with favorable scaling properties that has been successfully applied to the estimation of $L_p$ canonical calibration error in the multiclass setting." 
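For contrast with the KDE-based approach developed in the following sections, the commonly used binned estimator referenced above can be sketched in a few lines. This is a minimal top-label variant with equal-width bins, written as an illustration under those assumptions rather than as the exact implementation used in the cited references:

```python
import numpy as np

# Minimal sketch of the top-label binned estimator ECE^bin with equal-width
# bins. probs: (n, K) predicted probability vectors; labels: (n,) class indices.
def ece_binned_top_label(probs, labels, n_bins=15):
    conf = probs.max(axis=1)                          # top-label confidences
    correct = (probs.argmax(axis=1) == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            # bin mass times |mean accuracy - mean confidence| (L1 version)
            ece += in_bin.mean() * abs(correct[in_bin].mean() - conf[in_bin].mean())
    return ece
```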
], [ "Methods", "We study a classical supervised classification problem.", "Let $(\\Omega , \\mathcal {A}, \\mathbb {P})$ be a probability space, where $\\Omega $ is the set of possible outcomes, $\\mathcal {A}=\\mathcal {A}(\\Omega )$ is the sigma field of events and $\\mathbb {P}:\\mathcal {A}\\rightarrow [0,1]$ is a probability measure, let $\\mathcal {X}=\\mathbb {R}^d$ and $\\mathcal {Y}=\\lbrace 1, ..., K\\rbrace $ .", "Let $x: \\Omega \\rightarrow \\mathcal {X}$ and $y: \\Omega \\rightarrow \\mathcal {Y}$ be random variables, while realizations are denoted with subscripts.", "Suppose we have a model $f:\\mathcal {X} \\rightarrow \\triangle ^K$ , where $\\triangle ^K$ denotes the $K-1$ dimensional simplex as obtained, e.g., from the output of a final softmax layer in a neural network.", "We measure the (mis-)calibration in terms of the $L_p$ calibration error, defined below.", "Definition 3.1 (Calibration error, [35], [25], [54]) The $L_p$ calibration error of $f$ is: $\\operatorname{CE}_p(f) = \\biggl (\\mathbb {E}\\biggl [\\Bigl \\Vert \\mathbb {E}[y \\mid f(x)]-f(x)\\Bigr \\Vert _p^p\\biggr ] \\biggr )^{\\frac{1}{p}}.$ We note that we consider multiclass calibration, and that $f(x)$ and the conditional expectation in eq:LPcalibrationError therefore map to points on a probability simplex.", "We say that a classifier $f$ is perfectly calibrated if $\\operatorname{CE}_p(f) = 0$ .", "In order to empirically compute the conditional expectation in eq:LPcalibrationError, we need to perform density estimation over the probability simplex.", "In a binary setting, this has traditionally been done with binned estimates [35], [14], [25].", "However, this is not differentiable w.r.t.", "the function $f$ , and cannot be incorporated into a gradient based training procedure.", "Furthermore, binned estimates suffer from the curse of dimensionality and do not have a practical extension to multiclass settings.", "We consider an estimator for the $\\operatorname{CE}_p$ based on Beta and Dirichlet kernel density estimates in the binary and multiclass setting, respectively.", "We require that this estimator is consistent and differentiable such that we can train it in a calibration error regularized risk minimization framework.", "This estimator is given by: $\\widehat{\\operatorname{CE}_p(f)^p} = \\frac{1}{n}\\sum _{j=1}^n\\biggl [\\Bigl \\Vert \\widehat{\\mathbb {E}[y \\mid f(x)]}\\Bigr |_{f(x_j)}-f(x_j)\\Bigr \\Vert _p^p\\biggr ] ,$ where $\\widehat{\\mathbb {E}[y \\mid f(x)]}\\Bigr |_{f(x_j)}$ denotes $\\widehat{\\mathbb {E}[y \\mid f(x)]}$ evaluated at $f(x)=f(x_j)$ .", "If probability density $p_{x, y}$ is measurable with respect to the product of the Lebesgue and counting measure, we can define: $p_{x, y}(x_i, y_i) =p_{y|x=x_i}(y_i) \\, p_x(x_i)$ .", "Then we define the estimator of the conditional expectation as follows: $\\mathbb {E}[y\\mid f(x)]&= \\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{y|f(x)}(y_k) = \\frac{\\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{f(x), y}(f(x), y_k)}{p_{f(x)}(f(x))} \\nonumber \\\\ &\\approx \\frac{\\sum _{i=1}^n k(f(x) ; f(x_i))y_i}{\\sum _{i=1}^n k(f(x) ; f(x_i))} =: \\widehat{\\mathbb {E}[y \\mid f(x)]}$ where $k$ is the kernel of a kernel density estimate evaluated at point $x$ and $p_{f(x)}$ is uniquely determined by $p_x$ and $f$ .", "Proposition 3.2 Assuming that $p_{f(x)}(f(x))$ is Lipschitz continuous over the interior of the simplex, there exists a kernel $k$ such that $\\widehat{\\mathbb {E}[y \\mid f(x)]}$ is a pointwise consistent estimator of $\\mathbb 
{E}[y\\mid f(x)]$ , that is: $\\underset{n\\rightarrow \\infty }{\\operatorname{plim}}\\frac{\\sum _{i=1}^n k(f(x) ; f(x_i))y_i}{\\sum _{i=1}^n k(f(x) ; f(x_i))} = \\frac{\\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{f(x), y}(f(x), y_k)}{p_{f(x)}(f(x))}.$ Let $k$ be a Dirichlet kernel [38].", "By the consistency of the Dirichlet kernel density estimators [38], Lipschitz continuity of the density over the simplex is a sufficient condition for uniform convergence of the kernel density estimate.", "This in turn implies that for a given $f$ , for all $f(x)\\in (0,1)$ , $\\frac{1}{n}\\sum _{i=1}^n k(f(x) ; f(x_i))y_i\\xrightarrow{} \\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{f(x), y}(f(x), y_k)$ and $\\frac{1}{n}\\sum _{i=1}^n k(f(x) ; f(x_i))\\xrightarrow{} p_{f(x)}(f(x))$ .", "Let $g(x)=1/x$ ; then the set of discontinuities of $g$ applied to the denominator of the l.h.s.", "of (REF ) has measure zero since $\\frac{1}{n} \\sum _{i=1}^n k(f(x) ; f(x_i))=0$ with probability zero.", "From the continuous mapping theorem [30] it follows that $n/(\\sum _{i=1}^n k(f(x) ; f(x_i))) \\xrightarrow{} 1/p_{f(x)}(f(x))$ .", "Since products of convergent (in probability) sequences of random variables converge in probability to the product of their limits [42], we have that $\\sum _{i=1}^n k(f(x) ; f(x_i))y_i g(\\sum _{i=1}^n k(f(x) ; f(x_i)))\\xrightarrow{} \\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{f(x), y}(f(x), y_k) g(p_{f(x)}(f(x)))$ , which is equal to the r.h.s.", "of (REF ).", "The most commonly used loss functions are designed to achieve consistency in the sense of Bayes optimality under risk minimization; however, they do not guarantee calibration, neither for finite samples nor in the asymptotic limit.", "Since we are interested in models $f$ that are both accurate and calibrated, we consider the following optimization problem bounding the calibration error $\\operatorname{CE}(f)$ : $f = \\arg \\min _{f\\in \\mathcal {F}}\\, \\operatorname{Risk}(f), \\text{s.t. 
}", "\\operatorname{CE}(f) \\le B$ for some $B>0$ , and its associated Lagrangian $f = \\arg \\min _{f\\in \\mathcal {F}}\\, \\Bigl (\\operatorname{Risk}(f) + \\lambda \\cdot \\operatorname{CE}(f)\\Bigr ).$" ], [ "Mean squared error in binary classification", "As a first instantiation of our framework we consider a binary classification setting, with mean squared error $\\operatorname{MSE}(f)=\\mathbb {E}[(f(x)-y)^2]$ as the risk function, jointly optimized with the $L_2$ calibration error $\\operatorname{CE}_2$ : $f = \\arg \\min _{f\\in \\mathcal {F}} \\Bigl ( \\operatorname{MSE}(f) +\\lambda \\operatorname{CE}_2(f)^2 \\Bigr )&= \\arg \\min _{f\\in \\mathcal {F}} \\biggl ( \\operatorname{MSE}(f) +\\gamma \\mathbb {E}\\Bigl [\\mathbb {E}[y\\mid f(x)]^2\\Bigr ]\\biggr )$ where $\\gamma =\\frac{\\lambda }{\\lambda +1} \\in [0,1)$ .", "The full derivation using the MSE decomposition [33], [7], [21], [36] is given in Appendix .", "For optimization we wish to find an estimator for $\\mathbb {E}[\\mathbb {E}[y\\mid f(x)]^2]$ .", "Building upon eq:estimatorEYfX, a partially debiased estimator can be written as: $\\widehat{\\mathbb {E}\\Bigl [\\mathbb {E}[y\\mid f(x)]^2\\Bigr ]} \\approx \\frac{1}{n} \\sum _{j=1}^n \\frac{\\left(\\sum _{i\\ne j} \\ k(f(x_j) ; f(x_i))y_i\\right)^2 - \\sum _{i\\ne j} \\left(k(f(x_j);f(x_i))y_i\\right)^2 }{\\left( \\sum _{i\\ne j} k(f(x_j) ; f(x_i)) \\right)^2 - \\sum _{i\\ne j} \\left(k(f(x_j);f(x_i))\\right)^2}.$ Thus, the conditional expectation is estimated using a ratio of unbiased estimators of the square of a mean.", "Proposition 3.3 Equation (REF ) is a ratio of two U-statistics and has a bias converging as $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ .", "The proof is given in Appendix .", "Proposition 3.4 There exist de-biasing schemes for the ratios in eq:SharpnessSlightlyDebiased and eq:estimatorEYfX that achieve an improved $\\mathcal {O}\\left(\\frac{1}{n^2}\\right)$ convergence of the bias.", "Proofs are given in Appendix and .", "In a binary setting, the kernels $k(\\cdot , \\cdot )$ are Beta distributions defined as: $k_{\\operatorname{B}}(f(x_j),f(x_i)) := f(x_j)^{\\alpha _i-1}(1-f(x_j))^{\\beta _i-1} \\frac{\\operatorname{\\Gamma }(\\alpha _i+\\beta _i)}{\\operatorname{\\Gamma }(\\alpha _i)\\operatorname{\\Gamma }(\\beta _i)},$ with $\\alpha _i = \\frac{f(x_i)}{h}+1$ and $\\beta _i = \\frac{1-f(x_i)}{h}+1$ [6], [3], [61], where $h$ is a bandwidth parameter in the kernel density estimate that goes to 0 as $n \\rightarrow \\infty $ .", "We note that the computational complexity of this estimator is $\\mathcal {O}(n^2)$ .", "If we would use this within a gradient descent training procedure, the density can be estimated using a mini-batch and therefore the $\\mathcal {O}(n^2)$ complexity is w.r.t.", "the size of a mini-batch, not the entire dataset.", "The estimator in eq:SharpnessSlightlyDebiased is a ratio of two second order U-statistics that converge as $n^{-1/2}$ [13].", "Therefore, the overall convergence will be $n^{-1/2}$ .", "Empirical convergence rates are calculated in Appendix  and shown to be close to the theoretically expected value." 
], [ "Multiclass calibration with Dirichlet kernel density estimates", "There are multiple definitions regarding multiclass calibration that differ in the strictness regarding the calibration of the probability vector $f(x)$ .", "The strongest notion of multiclass calibration, and the one that we consider in this paper, is canonical (also called multiclass or distribution) calibration [4], [22], [52], which requires that the whole probability vector $f(x)$ is calibrated (Definition REF ).", "Its estimator is: $\\widehat{\\operatorname{CE}_{p}(f)^p} = \\frac{1}{n} \\sum _{j=1}^n \\left\\Vert \\frac{\\sum _{i\\ne j} k_{\\operatorname{Dir}}( f(x_j); f(x_i))y_i}{\\sum _{i\\ne j} k_{\\operatorname{Dir}}(f(x_j) ; f(x_i))} - f(x_j) \\right\\Vert _p^p$ where $k_{\\operatorname{Dir}}$ is a Dirichlet kernel defined as: $k_{Dir}(f(x_j),f(x_i)) = \\frac{\\Gamma (\\sum _{k=1}^K \\alpha _{ik})}{\\prod _{k=1}^K \\Gamma (\\alpha _{ik})} \\prod _{k=1}^K f(x_j)_{k}^{\\alpha _{ik}-1}$ with $\\alpha _{i} = \\frac{f(x_i)}{h} + 1$ [38].", "As before, the computational complexity is $\\mathcal {O}(n^2)$ irrespective of $p$ .", "This estimator is differentiable and furthermore, the following proposition holds: Proposition 3.5 The Dirichlet kernel based $\\operatorname{CE}$ estimator is consistent when $p_{f(x)}(f(x))$ is Lipschitz: $&\\underset{n\\rightarrow \\infty }{\\operatorname{plim}}\\frac{1}{n} \\sum _{j=1}^n \\left\\Vert \\frac{\\sum _{i\\ne j}^n k_{\\operatorname{Dir}}( f(x_j); f(x_i))y_i}{\\sum _{i\\ne j}^n k_{\\operatorname{Dir}}(f(x_j) ; f(x_i))} - f(x_j) \\right\\Vert _p^p \\nonumber = \\mathbb {E}\\biggl [ \\Bigl \\Vert \\mathbb {E}[y \\mid f(x)] - f(x)\\Bigr \\Vert _p^p \\biggr ]^p.$ Dirichlet kernel estimators are consistent when the density is Lipschitz continuous over the simplex [38], consequently, by Proposition REF the term inside the norm is consistent for any fixed $f(x_j)$ (note, that summing over $i \\ne j$ ensures that the ratio of the KDE's does not depend on the outer summation).", "Moreover, for any convergent sequence also the norm of that sequence converges to the norm of its limit.", "Ultimately, the outer sum is merely the sample mean of consistent summands, which again is consistent.", "With this development, we have for the first time a consistent, differentiable, tractable estimator of $L_p$ canonical calibration error with $\\mathcal {O}(n^2)$ computational cost and $\\mathcal {O}(n^{-1/2})$ convergence rate, with a debiasing scheme that achieves $\\mathcal {O}(n^{-2})$ bias for $p\\in \\lbrace 1,2\\rbrace $ ." 
], [ "Empirical validation of $ECE^{KDE}$", "Accurately evaluating the calibration error is a crucial step towards designing trustworthy models that can be used in societally important settings.", "The most widely used metric for evaluating miscalibration, and the only other estimator that can be straightforwardly extended to measure canonical calibration, is the histogram-based estimator $ECE^{bin}$ .", "However, as discussed in [52], [55], [8], [1], it has numerous flaws, such as: [(i)] it is sensitive to the binning scheme it is severely affected by the curse of dimensionality, as the number of bins grows exponentially with the number of classes it is asymptotically inconsistent in many cases.", "To investigate its relationship with our estimator $ECE^{KDE}$ , we first introduce an extension of the top-label binned estimator to the probability simplex in the three class setting.", "We start by partitioning the probability simplex into equally-sized, triangle-shaped bins and assign the probability scores to the corresponding bin, as shown in Figure REF .", "Then, we define the binned estimate of canonical calibration error as follows: $\\operatorname{CE}_p(f)^p \\approx \\mathbb {E}\\left[\\left\\Vert H(f(x))-f(x)\\right\\Vert _p^p\\right]\\approx \\frac{1}{n} \\sum _{i=1}^n \\left\\Vert H(f(x_i))-f(x_i)\\right\\Vert _p^p$ , where $H(f(x_i))$ is the histogram estimate, shown in Figure REF .", "The surface of the corresponding Dirichlet KDE is presented in Figure REF .", "See Appendix  for [(i)] an experiment investigating their relationship for the three types of calibration (top-label, marginal, canonical) and with varying number of points used for the estimation, and another example of the binned estimator and Dirichlet KDE on CIFAR-10.", "Figure: Extension of the binned estimator ECE bin ECE^{bin} to the probability simplex, compared withthe ECE KDE ECE^{KDE}.", "TheECE KDE ECE^{KDE} achieves a better approximation to the finite sample, and accurately models the fact that samples tend to be concentrated near low dimensional faces of the simplex.subfigure213 subfigure314–526 Synthetic experiments We consider an extension of $ECE^{bin}$ to arbitrary number of classes and investigate its performance compared to $ECE^{KDE}$ .", "Since on real data the ground truth calibration error is unknown, we generate synthetic data with known transformations with the following procedure.", "First, we sample uniformly from the simplex using the Kraemer algorithm [47].", "Then, we apply temperature scaling with $t_1=0.6$ to simulate realistic scenarios where the probability scores are concentrated along lower dimensional faces of the simplex.", "We generate ground truth labels according to the sampled probabilities and therefore obtain a perfectly calibrated classifier.", "Subsequently, the classifier is miscalibrated by additional temperature scaling with $t_2=0.6$ .", "Figure REF depicts the performance of the two estimators as a function of the sample size on generated data for 4 and 8 classes.", "$ECE^{KDE}$ converges to the ground truth value obtained by integration in both cases, whereas $ECE^{bin}$ provides poor estimates even with 20000 points.", "In another experiment with synthetic data we look at the bias of the sharpnessThe sharpness is defined as $\\operatorname{Var}(\\mathbb {E}[y \\mid f(x)])$ [21].", "Here we neglect the term that does not depend on $f(x)$ , and thereby refer to ${\\mathbb {E}\\Bigl [\\mathbb {E}[y\\mid f(x)]^2\\Bigr ]}$ as the sharpness.", "term in a binary setting.", 
"In Figure REF we plot the estimated value of the sharpness term for varying number of samples, both using the partially debiased ratio from Equation (REF ) and the ratio debiased with the scheme introduced in Appendix .", "A sigmoidal function is applied to the calibrated data to obtain an uncalibrated sample that is used to compute the partially debiased and the fully debiased ratio of the sharpness term.", "The ground truth value is obtained by using 100 million samples to compute the ratio with the partially debiased version, as it converges asymptotically to the true value due to its consistency.", "We use a bandwidth of 0.5 and average over 10000 repetitions for each number of samples that range from 32 to 16384.", "We fix the location of the KDE at $f(x_j)=0.17$ .", "Figure: Performance of ECE bin ECE^{bin} and ECE KDE ECE^{KDE} on synthetic data for varying number of classes, as a function of the sample size.", "Ground truth represents the true value of the integral.", "ECE bin ECE^{bin} is calculated using several common choices for the number of bins (n_bins represents number of bins per-class.)", "n_bins and b are found as optimal values according to Doane's formula and LOO MLE, respectively.", "ECE KDE ECE^{KDE} converges to the true value in all settings, in contrast to ECE bin ECE^{bin}.", "Sharpness term evaluated for different numbers of samples with the partially debiased ratio from Equation () and with the debiasing scheme derived in Appendix on synthetic data.", "Calibration regularized training Empirical setup To showcase our estimator in applications where canonical calibration is crucial, we consider two medical datasets, namely Kather and DermaMNIST.", "The Kather dataset [19] consists of 5000 histological images of human colorectal cancer and it has eight different classes of tissue.", "DermaMNIST [56] is a pre-processed version of the HAM10000 dataset [51], containing 10015 dermatoscopic images of skin lesions, categorized in seven classes.", "Both datasets have been collected in accordance with the Declaration of Helsinki.", "According to standard practice in related works, we trained ResNet [16], ResNet with stochastic depth (SD) [17], DenseNet [18] and WideResNet [59] networks also on CIFAR-10/100 [20].", "We use 45000 images for training on the CIFAR datasets, 4000 for Kather and 7007 for DermaMNIST.", "The code is available at https://github.com/tpopordanoska/ece-kde.", "Baselines Cross-entropy: The first baseline model is trained using cross-entropy (XE), with the data preprocessing, training procedure and hyperparameters described in the corresponding paper for the architecture.", "Trainable calibration strategies KDE-XE denotes our proposed estimator $ECE^{KDE}$ , as defined in eq:canonicalestimator, jointly trained with cross entropy.", "MMCE [26] is a differentiable measure of calibration with a property that it is minimized at perfect calibration, i.e., MMCE is 0 if and only if $\\operatorname{CE}_p=0$ .", "It is used as a regulariser alongside NLL, with the strength of regularization parameterized by $\\lambda $ .", "Focal loss (FL) [31] is an alternative to the cross-entropy loss, defined as $\\mathcal {L}_f = -(1 - f(y|x))^\\gamma \\log (f(y|x))$ , where $\\gamma $ is a hyperparameter and $f(y|x)$ is the probability score that a neural network $f$ outputs for a class $y$ on an input $x$ .", "Their best-performing approach is the sample-dependent FL-53 where $\\gamma = 5$ for $f(y|x) \\in [0, 0.2)$ and $\\gamma = 3$ otherwise, followed by the method 
with fixed $\\gamma = 3$ .", "Post-hoc calibration strategies.", "[14] investigated the performance of several post-hoc calibration methods and found temperature scaling to be a strong baseline, which we use as a representative of this group.", "It works by scaling the logits with a scalar $T > 0$ , typically learned on a validation set by minimizing NLL.", "Following [26], [31], we also use temperature scaling as a post-processing step for our method.", "Metrics.", "We report $L_1$ canonical calibration using our $ECE^{KDE}$ estimator, calculated according to eq:canonicalestimator.", "Additional experiments with $L_1$ and $L_2$ top-label calibration on CIFAR-10/100 can be found in Appendix .", "Hyperparameters.", "Figure: Effect of the bandwidth $b$ on the shape of the estimate.", "A crucial parameter for KDE is the bandwidth $b$ , a positive number that defines the smoothness of the density estimate.", "A poorly chosen bandwidth may lead to undersmoothing (small bandwidth) or oversmoothing (large bandwidth), as shown in Figure REF .", "A commonly used non-parametric bandwidth selector is maximum likelihood cross validation [10].", "For our experiments we choose the bandwidth from a list of possible values by maximizing the leave-one-out likelihood (LOO MLE).", "The $\\lambda $ parameter for weighting the calibration error w.r.t. the loss is typically chosen via cross-validation or using a holdout validation set.", "We found that for KDE-XE, values of $\\lambda \\in [0.001, 0.2]$ provide a good trade-off in terms of accuracy and calibration error.", "The $p$ parameter is selected depending on the desired $L_p$ calibration error and the corresponding theoretical guarantees.", "The rest of the hyperparameters for training are set as proposed in the corresponding papers for the architectures we benchmark.", "In particular, for the CIFAR-10/100 datasets we used a batch size of 64 for DenseNet and 128 for the other architectures.", "For the medical datasets, we used a batch size of 64, due to their smaller size.", "Experiments.", "An important property of our $ECE^{KDE}$ estimator is differentiability, allowing it to be used in a calibration regularized training framework.", "In this section, we benchmark KDE-XE against several baselines on medical diagnosis applications, where the calibration of the whole probability vector is of particular interest.", "For completeness, we also include an experiment on CIFAR-10.", "Table REF summarizes the canonical $L_1$ $ECE^{KDE}$ and Table REF the accuracy, measured across multiple architectures.", "The bandwidth is chosen by LOO MLE.", "For MMCE and KDE-XE the best-performing regularization weight is reported.", "In Table REF we notice that KDE-XE consistently achieves very competitive ECE values, while also boosting the accuracy, as shown in Table REF .", "Interestingly, we observe that temperature scaling does not improve canonical calibration error, contrary to its reported improvements on top-label calibration.", "This observation that temperature scaling is less effective for stronger notions of calibration is consistent with a similar finding in [24], where the authors show that although the temperature-scaled model has well-calibrated top-label confidence scores, the calibration error is much larger for class-wise calibration.", "Table: Canonical $L_1$ $ECE^{KDE}$ ($\\downarrow $ ) for different loss functions and architectures, both trained from scratch (Pre T) and after temperature scaling on a validation set (Post T).", "Best results across Pre T methods are marked in 
bold.", "Table: Accuracy ($\\uparrow $ ) computed for different architectures.", "Best results are marked in bold.", "Figure REF shows the performance of several architectures and datasets in terms of accuracy and $L_1$ $ECE^{KDE}$ for various choices of the regularization parameter for MMCE and KDE-XE.", "The 95$\\%$ confidence intervals for $ECE^{KDE}$ are calculated using 100 and 10 bootstrap samples on the medical datasets and CIFAR-10, respectively.", "In all settings, KDE-XE Pareto-dominates the competitors for several choices of $\\lambda $ .", "For example, on DermaMNIST trained with DenseNet, KDE-XE with $\\lambda =0.2$ reduces $ECE^{KDE}$ from 66% to 45%.", "Figure: Canonical calibration on various datasets and architectures.", "The numbers next to the points denote the value of the regularization parameter.", "KDE-XE outperforms the competitors, both in terms of accuracy and calibration error, for several choices of $\\lambda $ .", "Training time measurements.", "Table: Training time [sec] per epoch for XE and KDE-XE for different models on CIFAR-10.", "In Table REF we summarize the running time per epoch of the four architectures, with regularization (KDE-XE) and without regularization (XE).", "We observe only an insignificant impact on the training speed when using KDE-XE, dispelling concerns about the computational overhead.", "To summarize, the experiments show that our estimator consistently produces calibration errors competitive with other state-of-the-art approaches, while maintaining accuracy and keeping the computational complexity at $\\mathcal {O}(n^2)$ .", "We note that within the proposed calibration-regularized training framework, this complexity is w.r.t. a mini-batch, and the added cost is less than a couple of percent.", "Furthermore, the $\\mathcal {O}(n^2)$ complexity shows up in other related work [26], [60], and is intrinsic to density-based estimators of calibration error.", "As future work, larger-scale benchmarking will be beneficial for exploring the limits of canonical calibration using Dirichlet kernels." ], [ "Conclusion", "In this paper, we proposed a consistent and differentiable estimator of canonical $L_p$ calibration error using Dirichlet kernels.", "It has favorable computational and statistical properties, with a complexity of $\\mathcal {O}(n^2)$ , convergence of $\\mathcal {O}(n^{-1/2})$ and a bias that converges as $\\mathcal {O}(n^{-1})$ , which can be further reduced to $\\mathcal {O}(n^{-2})$ using our debiasing strategy.", "The $ECE^{KDE}$ can be directly optimized alongside any loss function in the existing batch stochastic gradient descent framework.", "Furthermore, we propose using it as a measure of the strongest form of calibration, which requires the entire probability vector to be calibrated.", "To the best of our knowledge, this is the only metric that can tractably capture this type of calibration, which is crucial in safety-critical applications where downstream decisions are made based on the predicted probabilities.", "We showed empirically on a range of neural architectures and datasets that the performance of our estimator in terms of accuracy and calibration error is competitive with the current state of the art, while having superior properties as a consistent estimator of canonical calibration error." ], [ "Acknowledgments", "This research received funding from the Research Foundation - Flanders (FWO) through project number S001421N, and the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) 
Vlaanderen” programme.", "R.S.", "was supported in part by the Tübingen AI centre." ], [ "Ethical statement", "The paper is concerned with estimation of calibration error, a topic for which existing methods are deployed, albeit not typically for canonical calibration error in a multiclass setting.", "We therefore consider the ethical risks to be effectively the same as for any probabilistic classifier.", "Experiments apply the method to medical image classification, for which misinterpretation of benchmark results with respect to their clinical applicability has been highlighted as a risk, see e.g.", "[53]."
], [ "Additional derivations", "Derivation of the MSE decomposition.", "Definition A.1 (Mean Squared Error (MSE)) The mean squared error of an estimator is $\\operatorname{MSE}(f) := \\mathbb {E}[(f(x) - y)^2] .$ Proposition A.2 $\\operatorname{MSE}(f)\\ge \\operatorname{CE}_2(f)^2$ $\\operatorname{MSE}(f):=&\\mathbb {E}[(f(x) - y)^2]= \\mathbb {E}[((f(x) - \\mathbb {E}[y\\mid f(x)]) + (\\mathbb {E}[y\\mid f(x)] - y))^2] \\\\=& \\underbrace{\\mathbb {E}[(f(x) - \\mathbb {E}[y\\mid f(x)])^2]}_{=CE_{2}^2} + \\mathbb {E}[(\\mathbb {E}[y\\mid f(x)] - y)^2] \\\\&+ 2 \\mathbb {E}[(f(x) - \\mathbb {E}[y\\mid f(x)])(\\mathbb {E}[y\\mid f(x)] - y)] \\nonumber $ which implies $\\operatorname{MSE}(f)-\\operatorname{CE}_2(f)^2 =& \\mathbb {E}[(\\mathbb {E}[y\\mid f(x)] - y)^2] \\\\&+ 2 \\mathbb {E}[(f(x) - \\mathbb {E}[y\\mid f(x)])(\\mathbb {E}[y\\mid f(x)] - y)] \\nonumber \\\\=& \\mathbb {E}[(\\mathbb {E}[y\\mid f(x)] - y)^2] + 2 \\mathbb {E}[f(x)\\mathbb {E}[y\\mid f(x)]] \\\\&- 2 \\mathbb {E}[f(x)y] - 2 \\mathbb {E}[\\mathbb {E}[y\\mid f(x)]^2] +2 \\mathbb {E}[\\mathbb {E}[y\\mid f(x)] y] \\nonumber \\\\=&\\mathbb {E}[\\mathbb {E}[y\\mid f(x)]^2] + \\mathbb {E}[y^2] - 2 \\mathbb {E}[\\mathbb {E}[y\\mid f(x)] y] \\\\&+ 2 \\mathbb {E}[f(x)\\mathbb {E}[y\\mid f(x)]]- 2 \\mathbb {E}[f(x)y] \\nonumber \\\\&- 2 \\mathbb {E}[\\mathbb {E}[y\\mid f(x)]^2] +2 \\mathbb {E}[\\mathbb {E}[y\\mid f(x)] y] \\nonumber \\\\=& \\mathbb {E}[y^2] + 2 \\mathbb {E}[f(x)\\mathbb {E}[y\\mid f(x)]] - 2 \\mathbb {E}[f(x)y] \\\\&- \\mathbb {E}[\\mathbb {E}[y\\mid f(x)]^2] \\nonumber \\\\=& \\mathbb {E}[(2 f(x) - y - \\mathbb {E}[y\\mid f(x)]) (\\mathbb {E}[y\\mid f(x)]-y)] \\\\=& \\mathbb {E}[( f(x) - y) (\\mathbb {E}[y\\mid f(x)] - y)] \\\\&+ \\mathbb {E}[ (f(x) - \\mathbb {E}[y\\mid f(x)]) (\\mathbb {E}[y\\mid f(x)] - y)] .", "\\nonumber $ By the law of total expectation, we can write the above as $\\operatorname{MSE}(f)-\\operatorname{CE}_2(f)^2 = \\mathbb {E}[ \\mathbb {E}[&( f(x) - y) (\\mathbb {E}[y\\mid f(x)] - y) \\\\&+ (f(x) - \\mathbb {E}[y\\mid f(x)]) (\\mathbb {E}[y\\mid f(x)] - y)\\mid f(x)]] .", "\\nonumber $ Focusing on the inner conditional expectation, we have that $\\mathbb {E}[(& f(x) - y) (\\mathbb {E}[y\\mid f(x)] - y) + (f(x) - \\mathbb {E}[y\\mid f(x)]) (\\mathbb {E}[y\\mid f(x)] - y)\\mid f(x)] \\nonumber \\\\=& \\mathbb {E}[y\\mid f(x)] (f(x) - 1)(\\mathbb {E}[y\\mid f(x)] - 1)+(1 - \\mathbb {E}[y\\mid f(x)])f(x) \\mathbb {E}[y\\mid f(x)] \\nonumber \\\\&+ \\mathbb {E}[y\\mid f(x)] (f(x) - \\mathbb {E}[y\\mid f(x)])(\\mathbb {E}[y\\mid f(x)] - 1)\\nonumber \\\\ &+ (1-\\mathbb {E}[y\\mid f(x)]) (f(x) - \\mathbb {E}[y\\mid f(x)])\\mathbb {E}[y\\mid f(x)] \\\\=& (1-\\mathbb {E}[y\\mid f(x)])\\mathbb {E}[y\\mid f(x)] \\ge 0 \\quad \\forall f(x) $ and therefore $\\operatorname{MSE}(f)-\\operatorname{CE}_2(f)^2 = \\mathbb {E}[(1-\\mathbb {E}[y\\mid f(x)])\\mathbb {E}[y\\mid f(x)]]\\ge 0 .$ The expectation in eq:MSEminusCE2 is over variances of Bernoulli 
random variables with probabilities $\\mathbb {E}[y\\mid f(x)]$ .", "Derivation of eq:estimatorEYfX.", "By considering $y \\in \\lbrace 0, 1\\rbrace $ , we have the following: $\\mathbb {E}[y\\mid f(x)]&= \\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{y|f(x)}(y_k) = \\frac{\\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{f(x), y}(f(x), y_k)}{p_{f(x)}(f(x))} \\\\&= \\frac{p_{f(x), y}(f(x), y_k=1)}{p_{f(x)}(f(x))} = \\frac{p_{f(x)|y}(f(x)|y_k=1)p_y(y_k=1)}{p_{f(x)}(f(x))} \\\\&\\approx \\frac{\\frac{1}{\\sum _{i=1}^n y_i}\\sum _{i=1}^n k(f(x) ; f(x_i))y_i \\frac{\\sum _{i=1}^n y_i}{n}}{\\frac{1}{n}\\sum _{i=1}^n k(f(x) ; f(x_i))} \\\\&\\approx \\frac{\\sum _{i=1}^n k(f(x) ; f(x_i))y_i}{\\sum _{i=1}^n k(f(x) ; f(x_i))} =: \\widehat{\\mathbb {E}[y \\mid f(x)]}$", "Derivation of eq:msebinary.", "We consider the optimization problem for some $\\lambda >0$ : $f = \\arg \\min _{f\\in \\mathcal {F}} \\Bigl ( \\operatorname{MSE}(f) +\\lambda \\operatorname{CE}_2(f)^2 \\Bigr ).$ Using eq:MSEminusCE2 we rewrite: $\\operatorname{MSE}(f) +\\lambda \\operatorname{CE}_2(f)^2 \\nonumber &= (1+\\lambda ) \\operatorname{MSE}(f) -\\lambda \\Bigl (\\operatorname{MSE}(f)-\\operatorname{CE}_2(f)^2\\Bigr ) \\nonumber \\\\&= (1+\\lambda ) \\operatorname{MSE}(f) -\\lambda \\mathbb {E}\\biggl [\\Bigl (1-\\mathbb {E}[y\\mid f(x)]\\Bigr )\\mathbb {E}[y\\mid f(x)]\\biggr ].", "$ Rescaling eq:CalibrationRetularized by a factor of $(1+\\lambda )^{-1}$ and a variable substitution $\\gamma =\\frac{\\lambda }{1+\\lambda } \\in [0,1)$ , we have that: $f=&\\arg \\min _{f\\in \\mathcal {F}}\\Bigl ( \\operatorname{MSE}(f) +\\lambda \\operatorname{CE}_2(f)^2\\Bigr ) \\nonumber \\\\ =& \\arg \\min _{f\\in \\mathcal {F}}\\biggl ( \\operatorname{MSE}(f) -\\gamma \\mathbb {E}\\biggl [\\Bigl (1-\\mathbb {E}[y\\mid f(x)]\\Bigr )\\mathbb {E}[y\\mid f(x)]\\biggr ]\\biggr ) \\nonumber \\\\=& \\arg \\min _{f\\in \\mathcal {F}} \\biggl ( \\operatorname{MSE}(f) +\\gamma \\mathbb {E}\\Bigl [\\mathbb {E}[y\\mid f(x)]^2\\Bigr ]\\biggr ) .$", "Bias of a ratio of U-statistics.", "The unbiased estimator for the square of a mean $\\mu _X^2$ is given by: $\\widehat{\\mu _X^2} = \\frac{1}{n(n-1)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n X_i X_j = \\frac{1}{n(n-1)}\\left(\\left(\\sum _{i=1}^n X_i\\right)^2 - \\sum _{i=1}^n X_i^2\\right).$ This is a second-order U-statistic with kernel $h(x_1, x_2)=x_1 x_2$ .", "The bias of the ratio of two of these estimators converges as $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ , as the following lemma proves.", "Lemma B.1 Let $\\theta _1$ and $\\theta _2$ be two estimable parameters and let $U_1$ and $U_2$ be the two corresponding U-statistics of order $m_1$ and $m_2$ , respectively, based on a sample of $n$ i.i.d.", "random variables.", "The bias of the ratio $U_1 / U_2$ of these two U-statistics will converge as $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ .", "Let $R=\\theta _1 / \\theta _2$ be the ratio of two estimable parameters and $r=U_1 / U_2$ the ratio of the corresponding U-statistics.", "Note that $U_i$ is an unbiased estimator of $\\theta _i$ , i.e., $\\mathbb {E}[U_i]=\\theta _i$ for $i=1,2$ ; however, the ratio is usually biased.", "To investigate the bias of that ratio we rewrite $r = R \\biggl (1+\\frac{U_1 -\\theta _1 }{\\theta _1} \\biggr )\\biggl (1+\\frac{U_2 -\\theta _2 }{\\theta _2} \\biggr )^{-1}.$ If $\\left|\\frac{U_2 -\\theta _2 }{\\theta _2} \\right|<1$ , we can expand $\\biggl (1+\\frac{U_2 -\\theta _2 }{\\theta _2} \\biggr )^{-1}$ in a geometric series: $r &=R \\biggl (1+\\frac{(U_1 -\\theta _1) }{\\theta 
_1} \\biggr ) \\biggl (1- \\frac{(U_2 -\\theta _2) }{\\theta _2} +\\frac{(U_2 -\\theta _2)^2 }{\\theta _2^2}-\\frac{(U_2 -\\theta _2)^3 }{\\theta _2^3}+\\frac{(U_2 -\\theta _2)^4 }{\\theta _2^4}-...\\biggr )\\\\&=R\\Biggl (1 +\\frac{(U_1 -\\theta _1) }{\\theta _1}-\\frac{(U_2 -\\theta _2) }{\\theta _2}-\\frac{(U_2 -\\theta _2)(U_1 -\\theta _1) }{\\theta _2\\theta _1} \\nonumber \\\\&\\phantom{asdfasd}+\\frac{(U_2 -\\theta _2)^2 }{\\theta _2^2}+\\frac{(U_2 -\\theta _2)^2(U_1 -\\theta _1)}{\\theta _2^2\\theta _1}-\\frac{(U_2 -\\theta _2)^3 }{\\theta _2^3}-\\frac{(U_2 -\\theta _2)^3(U_1 -\\theta _1) }{\\theta _2^3\\theta _1} \\nonumber \\\\&\\phantom{asdfasd}+\\frac{(U_2 -\\theta _2)^4 }{\\theta _2^4}+\\frac{(U_2 -\\theta _2)^4(U_1 -\\theta _1) }{\\theta _2^4\\theta _1} -... \\Biggr ).$ If $\\zeta _1 > 0$ , a U-statistic $U$ of order $m$ obtained from a sample of $n$ observations converges in distribution [45]: $\\sqrt{n}\\left(U - \\mathbb {E}[U]\\right) \\xrightarrow{} N(0, m^2 \\zeta _1).$ Keeping the terms up to $\\Theta \\left(\\frac{1}{n} \\right)$ : $\\begin{split}r&=R\\Biggl (1 +\\frac{(U_1 -\\theta _1 )}{\\theta _1}-\\frac{(U_2 -\\theta _2) }{\\theta _2}-\\frac{(U_2 -\\theta _2)(U_1 -\\theta _1) }{\\theta _2\\theta _1}+\\frac{(U_2 -\\theta _2)^2 }{\\theta _2^2} + o\\left(\\frac{1}{n}\\right)\\Biggr )\\end{split}$ To examine the bias, we take the expectation value of this expression: $\\begin{split}\\mathbb {E}[r] &= R\\Biggl (1 +\\frac{\\mathbb {E}\\bigl [(U_1 -\\theta _1) \\bigr ]}{\\theta _1}-\\frac{\\mathbb {E}\\bigl [(U_2 -\\theta _2) \\bigr ]}{\\theta _2}-\\frac{\\mathbb {E}\\bigl [(U_2 -\\theta _2)(U_1 -\\theta _1) \\bigr ]}{\\theta _2\\theta _1}+\\frac{\\mathbb {E}\\bigl [(U_2 -\\theta _2)^2 \\bigr ]}{\\theta _2^2}+ o\\left(\\frac{1}{n}\\right) \\Biggr )\\end{split}$ We now make use of the following expressions: $\\mathbb {E}\\bigl [(U_1 -\\theta _1) \\bigr ]&=\\mathbb {E}\\bigl [(U_2 -\\theta _2) \\bigr ]=0\\\\\\mathbb {E}\\bigl [(U_2 -\\theta _2)(U_1 -\\theta _1) \\bigr ]&=\\operatorname{Cov}(U_2, U_1)\\\\\\mathbb {E}\\bigl [(U_2 -\\theta _2)^2 \\bigr ]&= \\operatorname{Var}(U_2)\\\\$ Using these expressions the expectation of $r$ becomes: $\\begin{split}\\mathbb {E}[r] &= R\\Biggl (1 - \\frac{\\operatorname{Cov}(U_2, U_1)}{\\theta _2 \\theta _1} + \\frac{\\operatorname{Var}(U_2)}{\\theta _2^2} + o\\left(\\frac{1}{n}\\right) \\Biggr )\\end{split}$ Using Equation (REF ), the linearity of covariance and with $\\operatorname{Var}(aX)=a^2 \\operatorname{Var}(X)$ we obtain: $\\operatorname{Cov}(U_2, U_1), \\operatorname{Var}(U_2) \\in \\mathcal {O}\\left( \\frac{1}{n} \\right) \\Rightarrow \\mathbb {E}[r] = R\\Biggl (1 + \\mathcal {O}\\left(\\frac{1}{n}\\right) \\Biggr ) .$", "De-biasing of ratios of straight averages.", "Let $X$ and $Y$ be random variables and let $\\mu _X$ and $\\mu _Y$ be the means of their distributions, respectively.", "Consider the problem of finding an unbiased estimator for the ratio of means: $R = \\frac{\\mu _Y}{\\mu _X}.$ A first approach to estimate this ratio $R$ is to compute the ratio of the sample means: Let $(X_1, Y_1), ..., (X_n, Y_n)$ be pairs of i.i.d.", "random variables that are jointly distributed: $r = \\hat{R} = \\frac{\\hat{\\mu _Y}}{\\hat{\\mu _X}} =\\frac{\\frac{1}{n}\\sum _{i=1}^n Y_i}{\\frac{1}{n}\\sum _{i=1}^nX_i}=\\frac{\\bar{Y}}{\\bar{X}}.$ This, however, is a biased estimator, which can be seen as follows (we follow [50], [37] here): $r =\\frac{\\bar{Y}}{\\bar{X}} = \\frac{\\mu _Y}{\\mu _X} \\left(\\frac{\\bar{Y}}{\\mu 
_Y}\\right)\\left(\\frac{\\bar{X}}{\\mu _X}\\right)^{-1}=R\\biggl (1 + \\frac{\\bar{Y}-\\mu _Y}{\\mu _Y} \\biggr )\\biggl (1 + \\frac{\\bar{X}-\\mu _X}{\\mu _X} \\biggr )^{-1}.$ This now has the form of a convergent geometric series.", "Thus, if $\\biggl | \\frac{\\bar{X}-\\mu _X}{\\mu _X}\\biggr | < 1,$ we can expand $\\biggl (1 + \\frac{\\bar{X}-\\mu _X}{\\mu _X} \\biggr )^{-1}$ in a geometric series, which is defined as: $\\sum _{k=0}^{\\infty } a \\, b^k = a + ab + a b^2 + ... = \\frac{a}{1-b}.$ In our case we can identify $a=R\\biggl (1 + \\frac{\\bar{Y}-\\mu _Y}{\\mu _Y} \\biggr )$ and $b=-\\frac{\\bar{X}-\\mu _X}{\\mu _X}$ .", "Thus, using the geometric series expansion, we can write: $r&= R\\biggl (1 + \\frac{\\bar{Y}-\\mu _Y}{\\mu _Y} \\biggr )\\biggl (1 - \\frac{(\\bar{X}-\\mu _X)}{\\mu _X} + \\frac{(\\bar{X}-\\mu _X)^2}{\\mu _X^2} - \\frac{(\\bar{X}-\\mu _X)^3}{\\mu _X^3} + \\frac{(\\bar{X}-\\mu _X)^4}{\\mu _X^4} - ...\\biggr )\\\\\\begin{split}&=R\\biggl (1 + \\frac{(\\bar{Y}-\\mu _Y)}{\\mu _Y} - \\frac{(\\bar{X}-\\mu _X)}{\\mu _X} - \\frac{(\\bar{X}-\\mu _X)(\\bar{Y}-\\mu _Y)}{\\mu _Y \\mu _X} + \\frac{(\\bar{X}-\\mu _X)^2}{\\mu _X^2} \\\\& \\phantom{asdfa}+ \\frac{(\\bar{X}-\\mu _X)^2 (\\bar{Y}-\\mu _Y)}{\\mu _X^2 \\mu _Y} - \\frac{(\\bar{X}-\\mu _X)^3}{\\mu _X^3} - \\frac{(\\bar{X}-\\mu _X)^3 (\\bar{Y}-\\mu _Y)}{\\mu _X^3 \\mu _Y} + \\frac{(\\bar{X}-\\mu _X)^4}{\\mu _X^4} + ... \\biggr )\\end{split}$", "Neglecting higher-order terms.", "Since $\\bar{X}$ and $\\bar{Y}$ are U-statistics, we make use of the asymptotic behavior of U-statistics.", "If $\\zeta _1 > 0$ , a U-statistic $U_n$ of order $m$ obtained from a sample of $n$ observations behaves as $n \\rightarrow \\infty $ as follows [45]: $\\sqrt{n}\\left(U_n - \\mathbb {E}[U_n]\\right) \\xrightarrow{} N(0, m^2 \\zeta _1).$ As we seek an estimator that is unbiased up to order $n^{-2}$ and since $\\mathbb {E}[\\bar{X}] = \\mu _X$ , we can neglect all terms of order 5 or higher since for $n \\rightarrow \\infty $ : $(\\bar{X} - \\mu _X)^5 &\\in \\mathcal {O}(n^{-2.5})\\\\(\\bar{X} - \\mu _X)^4 (\\bar{Y} - \\mu _Y) &\\in \\mathcal {O}(n^{-2.5})$ Therefore, we obtain: $\\begin{split}r&\\approx R\\biggl (1 + \\frac{(\\bar{Y}-\\mu _Y)}{\\mu _Y} - \\frac{(\\bar{X}-\\mu _X)}{\\mu _X} - \\frac{(\\bar{X}-\\mu _X)(\\bar{Y}-\\mu _Y)}{\\mu _Y \\mu _X} + \\frac{(\\bar{X}-\\mu _X)^2}{\\mu _X^2} \\\\& \\phantom{asdfa}+ \\frac{(\\bar{X}-\\mu _X)^2 (\\bar{Y}-\\mu _Y)}{\\mu _X^2 \\mu _Y} - \\frac{(\\bar{X}-\\mu _X)^3}{\\mu _X^3} - \\frac{(\\bar{X}-\\mu _X)^3 (\\bar{Y}-\\mu _Y)}{\\mu _X^3 \\mu _Y} + \\frac{(\\bar{X}-\\mu _X)^4}{\\mu _X^4} \\biggr )\\end{split}$", "Identities to compute the terms of the series expansion of $r$ :", "$\\mathbb {E}[\\bar{X}-\\mu _X]&=\\mathbb {E}[\\bar{Y}-\\mu _Y]=0\\\\\\mathbb {E}[(\\bar{X}-\\mu _X)^2]&=\\operatorname{Var}(\\bar{X})=\\frac{1}{n}\\operatorname{Var}(X)\\\\\\mathbb {E}\\biggl [\\Bigl (\\bar{X}- \\mu _X\\Bigr )\\Bigl (\\bar{Y}- \\mu _Y\\Bigr )\\biggr ]&=\\operatorname{Cov}(\\bar{X}, \\bar{Y}) = \\frac{1}{n}\\operatorname{Cov}(X, Y)\\\\\\mathbb {E}\\biggl [\\Bigl (\\bar{X}- \\mu _X\\Bigr )^2\\Bigl (\\bar{Y}- \\mu _Y\\Bigr )\\biggr ]&=\\operatorname{Cov}(\\bar{X}^2, \\bar{Y}) -2\\mu _X \\operatorname{Cov}(\\bar{X}, \\bar{Y})\\\\&= \\frac{1}{n^2}\\Bigl (\\operatorname{Cov}(X^2, Y) -2\\mu _X \\operatorname{Cov}(X, Y)\\Bigr )\\\\\\mathbb {E}\\biggl [\\Bigl (\\bar{X}-\\mu _X\\Bigr )^3\\biggr ]&=\\operatorname{Cov}(\\bar{X}^2, \\bar{X})-2\\mu _X \\operatorname{Var}(\\bar{X})\\\\&=\\frac{1}{n^2}\\operatorname{Cov}(X^2, 
X) - \\frac{2}{n^2}\\mu _X \\operatorname{Var}(X)\\\\\\mathbb {E}\\Bigl [\\Bigl ( \\bar{X} - \\mu _X \\Bigr )^3 \\Bigl (\\bar{Y}-\\mu _Y \\Bigr ) \\Bigr ]&= \\operatorname{Cov}(\\bar{X}^3, \\bar{Y}) - 3\\mu _X \\operatorname{Cov}(\\bar{X}^2, \\bar{Y}) + 3 \\mu _X^2 \\operatorname{Cov}(\\bar{X}, \\bar{Y})\\\\&=\\frac{3}{n^2}\\operatorname{Var}(X) \\operatorname{Cov}(X, Y)+ \\mathcal {O}(n^{-3})\\\\\\mathbb {E}\\biggl [\\Bigl (\\bar{X}-\\mu _X\\Bigr )^4\\biggr ]&=\\operatorname{Cov}(\\bar{X}^3, \\bar{X}) - 3\\mu _X \\operatorname{Cov}(\\bar{X}^2, \\bar{X}) + 3 \\mu _X^2 \\operatorname{Var}(\\bar{X})\\\\&=\\frac{3}{n^2}\\operatorname{Var}(X)^2+ \\mathcal {O}(n^{-3})$", "Bias.", "Using these expressions we can compute the expectation value of $r=\\hat{R}$ : $\\begin{split}\\mathbb {E}[r]&\\approx R\\Biggl (1+\\frac{1}{n}\\biggl (\\frac{\\operatorname{Var}(X)}{\\mu _X^2} - \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y} \\biggr ) + \\frac{1}{n^2}\\biggl (\\frac{(\\operatorname{Cov}(X^2, Y) -2\\mu _X \\operatorname{Cov}(X, Y))}{\\mu _X^2 \\mu _Y}\\\\&\\phantom{asdfa}-\\frac{(\\operatorname{Cov}(X^2, X) - 2\\mu _X \\operatorname{Var}(X))}{\\mu _X^3} - \\frac{3\\operatorname{Var}(X) \\operatorname{Cov}(X, Y)}{\\mu _X^3 \\mu _Y}+\\frac{3 \\operatorname{Var}(X)^2}{\\mu _X^4}\\biggr )\\Biggr )\\end{split}$ The bias of $r=\\widehat{R}$ is defined as: $\\operatorname{Bias}(r) &= \\mathbb {E}[r] - R\\\\&=R\\Biggl (\\frac{1}{n}\\biggl (\\frac{\\operatorname{Var}(X)}{\\mu _X^2} - \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y} \\biggr ) + \\frac{1}{n^2}\\biggl (\\frac{(\\operatorname{Cov}(X^2, Y) -2\\mu _X \\operatorname{Cov}(X, Y))}{\\mu _X^2 \\mu _Y}\\\\&\\phantom{asdfa}-\\frac{(\\operatorname{Cov}(X^2, X) - 2\\mu _X \\operatorname{Var}(X))}{\\mu _X^3} - \\frac{3\\operatorname{Var}(X) \\operatorname{Cov}(X, Y)}{\\mu _X^3 \\mu _Y}+\\frac{3 \\operatorname{Var}(X)^2}{\\mu _X^4}\\biggr )\\Biggr )$ Therefore, an unbiased version of $r$ is: $r_{\\text{unbiased}} &= r - R\\Biggl (\\frac{1}{n}\\biggl (\\frac{\\operatorname{Var}(X)}{\\mu _X^2} - \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y} \\biggr ) + \\frac{1}{n^2}\\biggl (\\frac{(\\operatorname{Cov}(X^2, Y) -2\\mu _X \\operatorname{Cov}(X, Y))}{\\mu _X^2 \\mu _Y}\\\\&\\phantom{asdfas}-\\frac{(\\operatorname{Cov}(X^2, X) - 2\\mu _X \\operatorname{Var}(X))}{\\mu _X^3} - \\frac{3\\operatorname{Var}(X) \\operatorname{Cov}(X, Y)}{\\mu _X^3 \\mu _Y}+\\frac{3 \\operatorname{Var}(X)^2}{\\mu _X^4}\\biggr )\\Biggr )$ A corrected version of the estimator $r=\\hat{R}$ is consequently given by: $\\begin{split}r_{corr} &:= r\\Biggl (1-\\frac{1}{n}\\biggl (\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}} - \\frac{\\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X \\mu _Y}} \\biggr ) - \\frac{1}{n^2}\\biggl (\\frac{(\\widehat{\\operatorname{Cov}(X^2, Y)} -2\\widehat{\\mu _X} \\widehat{\\operatorname{Cov}(X, Y)})}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y}}\\\\&\\phantom{asdfas}-\\frac{(\\widehat{\\operatorname{Cov}(X^2, X)} - 2\\widehat{\\mu _X} \\widehat{\\operatorname{Var}(X)})}{\\widehat{\\mu _X^3}} - \\frac{3\\widehat{\\operatorname{Var}(X)} \\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X^3} \\widehat{\\mu _Y}}+\\frac{3 \\widehat{\\operatorname{Var}(X)}^2}{\\widehat{\\mu _X^4}}\\biggr )\\Biggr )\\end{split}$ In the above equation we again encounter ratios of estimators, which might themselves be biased.", "Since we want to achieve second-order de-biasing, we have to recurse again on the terms that have an $\\mathcal {O}\\left(\\frac{1}{n} \\right)$ 
dependency.", "However, we do not have to recurse on the terms that have a $\\mathcal {O}\\left(\\frac{1}{n^2} \\right)$ dependency, since any recursion would increase the power of the $n$ -dependency.", "Therefore a debiased estimator up to order $\\mathcal {O}(n^2)$ is: $\\begin{split}r_{corr} &:= r\\Biggl (1-\\frac{1}{n}\\biggl (r_{b}^{*} - r_{a}^{*} \\biggr ) - \\frac{1}{n^2}\\biggl (\\frac{\\widehat{(\\operatorname{Cov}(X^2, Y)} -2\\widehat{\\mu _X} \\widehat{\\operatorname{Cov}(X, Y)})}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y}}\\\\&\\phantom{asdfa}-\\frac{(\\widehat{\\operatorname{Cov}(X^2, X)} - 2\\widehat{\\mu _X} \\widehat{\\operatorname{Var}(X)})}{\\widehat{\\mu _X^3}} - \\frac{3\\widehat{\\operatorname{Var}(X)} \\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X^3} \\widehat{\\mu _Y}}+\\frac{3 \\widehat{\\operatorname{Var}(X)}^2}{\\widehat{\\mu _X^4}}\\biggr )\\Biggr )\\end{split}$ where $\\begin{split}r_{a}^{*} &= \\underbrace{\\frac{\\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X \\mu _Y}}}_{=r_a}\\Biggl (1+\\frac{1}{(n-1)}\\biggl (\\frac{\\widehat{\\mu _Y}\\widehat{\\operatorname{Cov}(X^2, Y)}+\\widehat{\\mu _X}\\widehat{\\operatorname{Cov}(Y^2, X)}}{\\widehat{\\operatorname{Cov}(X, Y)}\\widehat{\\mu _X}\\widehat{ \\mu _Y}}-4\\biggr )\\\\&\\phantom{asasasasddasdf}- \\frac{1}{(n-1)}\\biggl (\\frac{\\ \\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2} }+ \\frac{ \\widehat{\\operatorname{Var}(Y)}}{ \\widehat{\\mu _Y^2}} + 2\\frac{ \\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X} \\widehat{\\mu _Y}}\\biggr )\\Biggr )\\end{split}$ $r_{b}^{*} = \\underbrace{\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}}_{=r_b}\\Biggl (1+ \\frac{4}{(n-1)}\\biggl (\\frac{\\frac{1}{2}\\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X}\\widehat{ \\operatorname{Var}(X)}}-1\\biggr )-\\frac{4}{(n-1)}\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}\\Biggr ).$ De-biasing of ratios of squared means Now consider the problem of finding an unbiased estimator for the ratio of the squared means of $x$ and $Y$ : $R = \\frac{\\mu _Y^2}{\\mu _X^2}.$ Both the numerator and denominator of $R$ can separately be estimated by a second order U-statistics, respectively: $r = \\hat{R} = \\frac{\\widehat{\\mu _Y^2}}{\\widehat{\\mu _X^2}} =\\frac{\\frac{1}{n(n-1)}\\sum _{i=1}^n \\sum _{j=1 \\wedge j\\ne i}^n Y_i Y_j}{\\frac{1}{n(n-1)}\\sum _{i=1}^n \\sum _{j=1 \\wedge j\\ne i}^nX_i X_j}=:\\frac{\\bar{Y_2}}{\\bar{X_2}}.$ The subscript 2 in $\\bar{X}_2$ should emphasize that we are dealing with a second order U-statistics here.", "Again, the ratio $\\frac{\\bar{Y}_2}{\\bar{X}_2}$ ,is a biased estimator.", "Using the approach with the converging geometric series and neglecting the higher order terms, we obtain: $\\begin{split}r&\\approx R\\biggl (1 + \\frac{(\\bar{Y}_2-\\mu _Y^2)}{\\mu _Y^2} - \\frac{(\\bar{X}_2-\\mu _X^2)}{\\mu _X^2} - \\frac{(\\bar{X}_2-\\mu _X^2)(\\bar{Y}_2-\\mu _Y^2)}{\\mu _Y^2 \\mu _X^2} + \\frac{(\\bar{X}_2-\\mu _X^2)^2}{\\mu _X^4} \\\\& \\phantom{asdfa}+ \\frac{(\\bar{X}_2-\\mu _X^2)^2 (\\bar{Y}_2-\\mu _Y^2)}{\\mu _X^4 \\mu _Y^2} - \\frac{(\\bar{X}_2-\\mu _X^2)^3}{\\mu _X^6} - \\frac{(\\bar{X}_2-\\mu _X^2)^3 (\\bar{Y}_2-\\mu _Y^2)}{\\mu _X^6 \\mu _Y^2} + \\frac{(\\bar{X}_2-\\mu _X^2)^4}{\\mu _X^8}\\biggr )\\end{split}$ Identities to compute the terms of the series expansion of $r$ $\\mathbb {E}[\\bar{X}_2-\\mu _X^2]&=\\mathbb {E}[\\bar{Y}_2-\\mu _Y^2]=0\\\\\\mathbb {E}[(\\bar{X}_2-\\mu _X^2)^2]&=\\operatorname{Var}(\\bar{X}_2)\\\\\\mathbb 
{E}\\biggl [\\Bigl (\\bar{X}_2- \\mu _X^2\\Bigr )\\Bigl (\\bar{Y}_2- \\mu _Y^2\\Bigr )\\biggr ]&=\\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2) \\\\\\mathbb {E}\\biggl [\\Bigl (\\bar{X}_2- \\mu _X^2\\Bigr )^2\\Bigl (\\bar{Y}_2- \\mu _Y^2\\Bigr )\\biggr ]&=\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2) -2\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)\\\\\\mathbb {E}\\biggl [\\Bigl (\\bar{X}_2-\\mu _X^2\\Bigr )^3\\biggr ]&=\\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)-2\\mu _X^2 \\operatorname{Var}(\\bar{X}_2)\\\\\\mathbb {E}\\Bigl [\\Bigl ( \\bar{X}_2 - \\mu _X^2 \\Bigr )^3 \\Bigl (\\bar{Y}_2-\\mu _Y^2 \\Bigr ) \\Bigr ]&= \\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2) - 3\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2) + 3 \\mu _X^4 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)\\\\\\mathbb {E}\\biggl [\\Bigl (\\bar{X}_2-\\mu _X^2\\Bigr )^4\\biggr ]&=\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2) - 3\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2) + 3 \\mu _X^4 \\operatorname{Var}(\\bar{X}_2)$ Bias Computing $\\mathbb {E}[r]$ using the above identities: $\\begin{split}\\mathbb {E}[r]&\\approx R\\Biggl (1-\\frac{\\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^2 \\mu _Y^2}+\\frac{\\operatorname{Var}(\\bar{X}_2)}{\\mu _X^4}+\\biggl (\\frac{\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2) -2\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2}\\biggr )\\\\&\\phantom{asdfa}-\\biggl (\\frac{\\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)-2\\mu _X^2 \\operatorname{Var}(\\bar{X}_2)}{\\mu _X^6}\\biggr ) - \\biggl (\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2) - 3\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2) + 3 \\mu _X^4 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}\\biggr )\\\\&\\phantom{asdfa}+\\biggl (\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2) - 3\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2) + 3 \\mu _X^4 \\operatorname{Var}(\\bar{X}_2)}{\\mu _X^8}\\biggr )\\Biggr )\\end{split}\\\\\\begin{split}&= R\\Biggl (1-\\overbrace{\\frac{6 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^2 \\mu _Y^2}}^{\\text{Term (a)}} + \\overbrace{\\frac{6 \\operatorname{Var}(\\bar{X}_2)}{\\mu _X^4}}^{\\text{Term (b)}} + \\overbrace{\\frac{4 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2)}{\\mu _X^4\\mu _Y^2}}^{\\text{Term (c)}} -\\overbrace{\\frac{4 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)}{\\mu _X^6}}^{\\text{Term (d)}} \\\\& \\phantom{asdfa}- \\underbrace{\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}}_{\\text{Term (e)}} + \\underbrace{\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2)}{\\mu _X^8}}_{\\text{Term (f)}}\\Biggl )\\end{split}$ $\\begin{split}&=R\\Biggl \\lbrace 1-\\Biggl (\\frac{12}{n(n-1)}\\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2}+ \\frac{24}{n} R_a\\Biggr )\\\\&\\phantom{asdfasd}+\\Biggl (\\frac{12}{n(n-1)}\\frac{\\operatorname{Var}(X)^2}{\\mu _X^4}+\\frac{24}{n} R_b\\Biggr )\\\\&\\phantom{asdfasd}+\\Biggl (\\frac{32(n-2)}{n(n-1)^2}\\biggl (\\frac{ \\operatorname{Cov}(X^2, Y)}{\\mu _X^2 \\mu _Y} +\\frac{ 2\\operatorname{Cov}(X, Y)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _X^3 \\mu _Y}\\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl (\\frac{8}{n} R_a + \\frac{12}{n(n-1)} \\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2}\\biggr )\\Biggr )\\\\&\\phantom{asdfasd}- \\Biggl (\\frac{32(n-2)}{n(n-1)^2}\\biggl (\\frac{ \\operatorname{Cov}(X^2, X)}{\\mu _X^3} +\\frac{ 2\\operatorname{Var}(X)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _X^4}\\biggr 
)\\&\phantom{asdfasdasd}+\frac{4(n-2)(n-3)}{n(n-1)}\biggl ( \frac{8}{n} R_b + \frac{12}{n(n-1)} \frac{\operatorname{Var}(X)^2}{\mu _X^4}\biggr )\Biggr )\\&\phantom{asdfasd}-\Biggl (\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3}\biggl (\frac{ \operatorname{Cov}(X^2, Y)}{\mu _Y \mu _X^2}+\frac{4 \operatorname{Cov}(X, Y)(\operatorname{Var}(X)+\mu _X^2)}{\mu _Y \mu _X^3} \biggr )\\&\phantom{asdfasdasd}+\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\biggl (\frac{12}{n} R_a + \frac{30}{n(n-1)} \frac{\operatorname{Cov}(X, Y)^2}{\mu _X^2 \mu _Y^2} \biggr )\Biggr )\\&\phantom{asdfasd}+\Biggl (\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3} \biggl (\frac{ \operatorname{Cov}(X^2, X)}{\mu _X^3}+\frac{4 \operatorname{Var}(X)(\operatorname{Var}(X)+\mu _X^2)}{\mu _X^4}\biggr )\\&\phantom{asdfasdasd}+\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\biggl (\frac{12}{n} R_b + \frac{30}{n(n-1)} \frac{\operatorname{Var}(X)^2}{\mu _X^4} \biggr )\Biggr )\end{split}$ where $R_a = \frac{\operatorname{Cov}(X, Y)}{\mu _X \mu _Y}$ and $R_b = \frac{\operatorname{Var}(X)}{\mu _X^2}$ and where we have used REF , REF , REF , REF , REF and REF for terms (a)-(f).", "Therefore, an estimator unbiased up to order two is given by: $\begin{split}r_{\text{corr}}&=\frac{\widehat{\mu _Y^2}}{\widehat{\mu _X^2}}\Biggl \lbrace 1+\Biggl (\frac{12}{n(n-1)}\frac{\widehat{\operatorname{Cov}(X, Y)^2}}{\widehat{\mu _X^2} \widehat{\mu _Y^2}}+ \frac{24}{n} r_{a}^{*}\Biggr )\\&\phantom{asdfasd}-\Biggl (\frac{12}{n(n-1)}\frac{\widehat{\operatorname{Var}(X)^2}}{\widehat{\mu _X^4}}+\frac{24}{n} r_{b}^{*}\Biggr )\\&\phantom{asdfasd}-\Biggl (\frac{32(n-2)}{n(n-1)^2}\biggl (\frac{ \widehat{\operatorname{Cov}(X^2, Y)}}{\widehat{\mu _X^2} \widehat{\mu _Y}} +\frac{ 2\widehat{\operatorname{Cov}(X, Y)}(\widehat{\operatorname{Var}(X)}+\widehat{\mu _X^2})}{\widehat{\mu _X^3} \widehat{\mu _Y}}\biggr )\\&\phantom{asdfasdasd}+\frac{4(n-2)(n-3)}{n(n-1)}\biggl (\frac{8}{n} r_{a}^{*} + \frac{12}{n(n-1)} \frac{\widehat{\operatorname{Cov}(X, Y)^2}}{\widehat{\mu _X^2} \widehat{\mu _Y^2}}\biggr )\Biggr )\\&\phantom{asdfasd}+\Biggl (\frac{32(n-2)}{n(n-1)^2}\biggl (\frac{ \widehat{\operatorname{Cov}(X^2, X)}}{\widehat{\mu _X^3}} +\frac{ 2\widehat{\operatorname{Var}(X)}(\widehat{\operatorname{Var}(X)}+\widehat{\mu _X^2})}{\widehat{\mu _X^4}}\biggr )\\&\phantom{asdfasdasd}+\frac{4(n-2)(n-3)}{n(n-1)}\biggl ( \frac{8}{n} r_{b}^{*} + \frac{12}{n(n-1)} \frac{\widehat{\operatorname{Var}(X)^2}}{\widehat{\mu _X^4}}\biggr )\Biggr )\\&\phantom{asdfasd}+\Biggl (\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3}\biggl (\frac{ \widehat{\operatorname{Cov}(X^2, Y)}}{\widehat{\mu _Y} \widehat{\mu _X^2}}+\frac{4 \widehat{\operatorname{Cov}(X, Y)}(\widehat{\operatorname{Var}(X)}+\widehat{\mu _X^2})}{\widehat{\mu _Y} \widehat{\mu _X^3}} \biggr )\\&\phantom{asdfasdasd}+\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\biggl (\frac{12}{n} r_{a}^{*} + \frac{30}{n(n-1)} \frac{\widehat{\operatorname{Cov}(X, Y)^2}}{\widehat{\mu _X^2} \widehat{\mu _Y^2}}\biggr )\Biggr )\\&\phantom{asdfasd}-\Biggl (\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3} \biggl (\frac{ \widehat{\operatorname{Cov}(X^2, X)}}{\widehat{\mu _X^3}}+\frac{4 \widehat{\operatorname{Var}(X)}(\widehat{\operatorname{Var}(X)}+\widehat{\mu _X^2})}{\widehat{\mu _X^4}}\biggr )\\&\phantom{asdfasdasd}+\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\biggl 
(\frac{12}{n} r_{b}^{*} + \frac{30}{n(n-1)} \frac{\widehat{\operatorname{Var}(X)^2}}{\widehat{\mu _X^4}} \biggr )\Biggr ),\end{split}$ where we used equations (REF ), (REF ): $\begin{split}r_{a}^{*} &= \underbrace{\frac{\widehat{\operatorname{Cov}(X, Y)}}{\widehat{\mu _X \mu _Y}}}_{=r_a}\Biggl (1+\frac{1}{(n-1)}\biggl (\frac{\widehat{\mu _Y}\widehat{\operatorname{Cov}(X^2, Y)}+\widehat{\mu _X}\widehat{\operatorname{Cov}(Y^2, X)}}{\widehat{\operatorname{Cov}(X, Y)}\widehat{\mu _X}\widehat{ \mu _Y}}-4\biggr )\\&\phantom{asasasasddasdf}- \frac{1}{(n-1)}\biggl (\frac{\ \widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2} }+ \frac{ \widehat{\operatorname{Var}(Y)}}{ \widehat{\mu _Y^2}} + 2\frac{ \widehat{\operatorname{Cov}(X, Y)}}{\widehat{\mu _X} \widehat{\mu _Y}}\biggr )\Biggr )\end{split}$ $r_{b}^{*} = \underbrace{\frac{\widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2}}}_{=r_b}\Biggl (1+ \frac{4}{(n-1)}\biggl (\frac{\frac{1}{2}\widehat{\operatorname{Cov}(X^2, X)}}{\widehat{\mu _X}\widehat{ \operatorname{Var}(X)}}-1\biggr )-\frac{4}{(n-1)}\frac{\widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2}}\Biggr ).$ Term (a) Let us first look at the first term: $\frac{6 \operatorname{Cov}(\bar{X}_2, \bar{Y}_2)}{\mu _X^2 \mu _Y^2}$ .", "Using the expression for the covariance between two second-order U-statistics we get: $\frac{6 \operatorname{Cov}(\bar{X}_2, \bar{Y}_2)}{\mu _X^2 \mu _Y^2}&=\frac{6}{\mu _X^2 \mu _Y^2}\biggl (\frac{4}{n}\mu _X \mu _Y \operatorname{Cov}(X, Y) + \frac{2}{n(n-1)} \operatorname{Cov}(X, Y)^2\biggr )\\&=\underbrace{\frac{24}{n}\frac{\operatorname{Cov}(X, Y)}{\mu _X \mu _Y}}_{\in \mathcal {O}(n^{-1})}+\underbrace{\frac{12}{n(n-1)}\frac{\operatorname{Cov}(X, Y)^2}{\mu _X^2 \mu _Y^2}}_{\in \mathcal {O}(n^{-2})}$ Since we know that for every recursion (i.e., geometric series expansion) we will get at least another factor of $\frac{1}{n}$ , we don't have to further recurse on the term that is of order $\mathcal {O}(n^{-2})$ .", "Consequently, we only expand the following term via a geometric series, $R_a = \frac{\operatorname{Cov}(X, Y)}{\mu _X \mu _Y},$ since the ratio of the respective unbiased estimators, $r_a = \frac{\widehat{\operatorname{Cov}(X, Y)}}{\widehat{\mu _X \mu _Y}},$ is biased.", "Using the same machinery as before, we obtain a corrected version of $r_a$ : $\begin{split}r_{a}^{*} &= \underbrace{\frac{\widehat{\operatorname{Cov}(X, Y)}}{\widehat{\mu _X \mu _Y}}}_{=r_a}\Biggl (1+\frac{1}{(n-1)}\biggl (\frac{\widehat{\mu _Y}\widehat{\operatorname{Cov}(X^2, Y)}+\widehat{\mu _X}\widehat{\operatorname{Cov}(Y^2, X)}}{\widehat{\operatorname{Cov}(X, Y)}\widehat{\mu _X}\widehat{ \mu _Y}}-4\biggr )\\&\phantom{asasasasddasdf}- \frac{1}{(n-1)}\biggl (\frac{\ \widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2} }+ \frac{ \widehat{\operatorname{Var}(Y)}}{ \widehat{\mu _Y^2}} + 2\frac{ \widehat{\operatorname{Cov}(X, Y)}}{\widehat{\mu _X} \widehat{\mu _Y}}\biggr )\Biggr )\end{split}$ The complete correction of term (a), $\frac{6 \operatorname{Cov}(\bar{X}_2, \bar{Y}_2)}{\mu _X^2 \mu _Y^2}$ , therefore looks as follows: $\begin{split}\frac{6 \widehat{\operatorname{Cov}(\bar{X}_2, \bar{Y}_2)}}{\widehat{\mu _X^2} \widehat{\mu _Y^2}}&=\frac{12}{n(n-1)}\frac{\widehat{\operatorname{Cov}(X, Y)^2}}{\widehat{\mu _X^2} \widehat{\mu _Y^2}}\\&\phantom{as}+
\frac{24}{n}\frac{\widehat{\operatorname{Cov}(X, Y)}}{\widehat{\mu _X \mu _Y}}\Biggl (1+\frac{1}{(n-1)}\biggl (\frac{\widehat{\mu _Y}\widehat{\operatorname{Cov}(X^2, Y)}+\widehat{\mu _X}\widehat{\operatorname{Cov}(Y^2, X)}}{\widehat{\operatorname{Cov}(X, Y)}\widehat{\mu _X} \widehat{\mu _Y}}-4\biggr )\\&\phantom{asasasasdfasddasdf}- \frac{1}{(n-1)}\biggl (\frac{\ \widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2} }+ \frac{\widehat{ \operatorname{Var}(Y)}}{\widehat{ \mu _Y^2}} + 2\frac{\widehat{ \operatorname{Cov}(X, Y)}}{\widehat{\mu _X} \widehat{\mu _Y}}\biggr )\Biggr )\end{split}$ Term (b) The correction of term (b), $\frac{6 \operatorname{Var}(\bar{X}_2)}{\mu _X^4}$ , is analogous to that of term (a).", "Define $R_b &=\frac{\operatorname{Var}(X)}{\mu _X^2}\\r_b &=\frac{\widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2}}.$ Then using the geometric series expansion, a corrected version of $r_b$ is given by $r_{b}^{*} = \underbrace{\frac{\widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2}}}_{=r_b}\Biggl (1+ \frac{4}{(n-1)}\biggl (\frac{\frac{1}{2}\widehat{\operatorname{Cov}(X^2, X)}}{\widehat{\mu _X}\widehat{ \operatorname{Var}(X)}}-1\biggr )-\frac{4}{(n-1)}\frac{\widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2}}\Biggr ).$ The full correction of term (b) is $\frac{6\widehat{\operatorname{Var}(\bar{X}_2)}}{\widehat{\mu _X^4}} &= \frac{12}{n(n-1)}\frac{\widehat{\operatorname{Var}(X)^2}}{\widehat{\mu _X^4}} +\frac{24}{n} \frac{\widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2}}\Biggl (1+ \frac{4}{(n-1)}\biggl (\frac{\frac{1}{2}\widehat{\operatorname{Cov}(X^2, X)}}{\widehat{\mu _X}\widehat{ \operatorname{Var}(X)}}-1-\frac{\widehat{\operatorname{Var}(X)}}{\widehat{\mu _X^2}}\biggr )\Biggr )$ Term (c) In this section we want to find an expression for term (c): $\frac{4\operatorname{Cov}(\bar{X}_2^2, \bar{Y}_2)}{\mu _X^4 \mu _Y^2}.$ To this end, we first need a convenient representation of $\bar{X}_2^2$ in terms of other U-statistics: $\bar{X}_2^2=\Biggl (\frac{1}{n(n-1)}\sum _{i=1}^n \sum _{\begin{array}{c}j=1\\j \ne i\end{array}}^n X_i X_j \Biggr )^2= \frac{2}{n (n-1)} U_{\alpha }+ \frac{4(n-2)}{n(n-1)}U_{\beta } +\frac{(n-2)(n-3)}{n(n-1)}\bar{X}_4,$ with the U-statistics: $U_{\beta }&=\frac{1}{n(n-1)(n-2)}\sum _{i=1}^n \sum _{\begin{array}{c}j=1\\j \ne i\end{array}}^n \sum _{\begin{array}{c}k=1\\k \ne j\\k \ne i\end{array}}^n X_i^2 X_j X_k\\U_{\alpha }&=\frac{1}{n(n-1)}\sum _{i=1}^n \sum _{\begin{array}{c}j=1\\j \ne i\end{array}}^n X_i^2 X_j^2\\\bar{X}_4&=\frac{1}{n(n-1)(n-2)(n-3)}\sum _{i=1}^n \sum _{\begin{array}{c}j=1\\j \ne i\end{array}}^n \sum _{\begin{array}{c}k=1\\k \ne j\\k \ne i\end{array}}^n\sum _{\begin{array}{c}l=1\\l \ne k\\l \ne j \\ l \ne i\end{array}}^n X_i X_j X_k X_l.$ Hence, term (c) becomes: $\frac{4\operatorname{Cov}(\bar{X}_2^2, \bar{Y}_2)}{\mu _X^4 \mu _Y^2} = \underbrace{\frac{8}{n (n-1)} \frac{\operatorname{Cov}(U_{\alpha }, \bar{Y}_2)}{\mu _X^4 \mu _Y^2}}_{\text{First term}}+ \underbrace{\frac{16(n-2)}{n(n-1)}\frac{\operatorname{Cov}(U_{\beta }, \bar{Y}_2)}{\mu _X^4 \mu _Y^2}}_{\text{Second term}} +\underbrace{\frac{4(n-2)(n-3)}{n(n-1)}\frac{\operatorname{Cov}(\bar{X}_4, \bar{Y}_2)}{\mu _X^4 \mu _Y^2}}_{\text{Third term}}$ All the covariances in the above equation are covariances between U-statistics which are $\mathcal 
{O}\\left(\\frac{1}{n}\\right)$ .", "Therefore, the first term, which already has an explicit $\\mathcal {O}\\left(\\frac{1}{n^2}\\right)$ dependence, can be neglected entirely.", "The second term has an explicit $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ , combined with the $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ from the covariance this is in total a $\\mathcal {O}\\left(\\frac{1}{n^2}\\right)$ dependency.", "Hence, we have to find an estimator for that term but do not have to recurse on it.", "On the last term, we do have to recurse, however, we have derived the recursion already in equation (REF ).", "We can rewrite the above equation using the symmetrized U-statistics $U_{\\beta }&=\\frac{1}{n(n-1)(n-2)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n \\frac{1}{3} \\Bigl (X_i^2 X_j X_k + X_i X_j^2 X_k + X_i X_j X_k^2).$ $\\begin{split}\\frac{4\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2}&\\approx \\underbrace{\\frac{32(n-2)}{n(n-1)^2}\\Biggl (\\frac{ \\operatorname{Cov}(X^2, Y)}{\\mu _X^2 \\mu _Y} +\\frac{ 2\\operatorname{Cov}(X, Y)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _X^3 \\mu _Y}\\Biggr )}_{\\text{Second term}}\\\\&\\phantom{asdfasdf}+\\underbrace{\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl (\\frac{8}{n} \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y} + \\frac{12}{n(n-1)} \\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2}\\biggr )}_{\\text{Third term}}\\end{split}$ Taking the recursion of the third term into account, the total correction of term (c) is: $\\begin{split}\\frac{4\\widehat{\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2)}}{\\widehat{\\mu _X^4} \\widehat{\\mu _Y^2}}&\\approx \\frac{32(n-2)}{n(n-1)^2}\\Biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, Y)}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y}} +\\frac{ 2\\widehat{\\operatorname{Cov}(X, Y)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2)}}{\\widehat{\\mu _X^3} \\widehat{\\mu _Y}}\\Biggr )\\\\&\\phantom{asdf}+\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl (\\frac{8}{n} r_{a}^{*} + \\frac{12}{n(n-1)} \\frac{\\widehat{\\operatorname{Cov}(X, Y)^2}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}}\\biggr )\\end{split}$ Term (d) The computation of the correction of term (d), $\\frac{4\\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)}{\\mu _X^6}$ , is similar to that of term (c).", "Hence, we only present the resulting correction: $\\begin{split}\\frac{4\\widehat{\\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)}}{\\widehat{\\mu _X^4} \\widehat{\\mu _Y^2}}&\\approx \\frac{32(n-2)}{n(n-1)^2}\\Biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X^3}} +\\frac{ 2\\widehat{\\operatorname{Var}(X)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _X^4}}\\Biggr )\\\\&\\phantom{asdfasdf}+\\frac{4(n-2)(n-3)}{n(n-1)}\\Biggl ( \\frac{8}{n} r_{b}^{*} + \\frac{12}{n(n-1)} \\frac{\\widehat{\\operatorname{Var}(X)^2}}{\\widehat{\\mu _X^4}}\\Biggr )\\end{split}$ Term (e) Term (e) is: $\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}$ To be able to compute that term, we reexpress the numerator in terms of several U-statistics: $\\begin{split}\\bar{X}_2^3 &= \\frac{4}{n^2(n-1)^2}U_{I}+\\frac{24(n-2)}{n^2(n-1)^2}U_{II}+\\frac{8(n-2)}{n^2(n-1)^2}U_{III}+\\frac{8(n-2)(n-3)}{n^2(n-1)^2}U_{IV}\\\\&+\\frac{30 (n-2)(n-3)}{n^2(n-1)^2}U_{V}+\\frac{12 (n-2)(n-3)(n-4)}{n^2(n-1)2}U_{VI}+\\frac{(n-2)(n-2)(n-4)(n-5)}{n^2(n-1)^2}\\bar{X}_6,\\end{split}$ where $U_{I}:=\\frac{1}{n(n-1)}\\sum _{i=1}^n 
\\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n X_i^3 X_j^3,$ $U_{II}:=\\frac{1}{n(n-1)(n-2)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n X_i^3 X_j^2 X_k,$ $U_{III}=\\frac{1}{n(n-1)(n-2)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n X_i^2, X_j^2 X_k^2$ $U_{IV}:=\\frac{1}{n(n-1)(n-2)(n-3)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n\\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i\\end{array}}^n X_i^3 X_j X_k X_l,$ $U_{V}:=\\frac{1}{n(n-1)(n-2)(n-3)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i\\end{array}}^n X_i^2 X_j^2 X_k X_l,$ $U_{VI}:=\\frac{1}{n(n-1)(n-2)(n-3)(n-4)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n\\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i \\end{array}}^n \\sum _{\\begin{array}{c}p=1\\\\p\\ne l \\\\p \\ne k\\\\p \\ne j \\\\ p \\ne i \\end{array}}^n X_i^2 X_j X_k X_l X_p,$ $\\bar{X}_6=\\frac{1}{n(n-1)(n-2)(n-3)(n-4)(n-5)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n\\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i \\end{array}}^n \\sum _{\\begin{array}{c}p=1\\\\p\\ne l \\\\p \\ne k\\\\p \\ne j \\\\ p \\ne i \\end{array}}^n \\sum _{\\begin{array}{c}q=1\\\\q\\ne p \\\\ q\\ne l \\\\q \\ne k\\\\q \\ne j \\\\ q \\ne i \\end{array}}^n X_i X_j X_k X_l X_p X_q,$ Hence term (e) can be written as: $\\begin{split}\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2} &= \\underbrace{\\frac{4}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^4}\\right)}\\frac{\\operatorname{Cov}(U_{I}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}+\\underbrace{\\frac{24(n-2)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^3}\\right)}\\frac{\\operatorname{Cov}(U_{II}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}+\\underbrace{\\frac{8(n-2)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^3}\\right)}\\frac{\\operatorname{Cov}(U_{III}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}\\\\&+\\underbrace{\\frac{8(n-2)(n-3)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^2}\\right)}\\frac{\\operatorname{Cov}(U_{IV}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}+\\underbrace{\\frac{30 (n-2)(n-3)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^2}\\right)}\\frac{\\operatorname{Cov}(U_{V}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}\\\\&+\\underbrace{\\frac{12 (n-2)(n-3)(n-4)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n}\\right)}\\frac{\\operatorname{Cov}(U_{VI}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}+\\underbrace{\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(1\\right)}\\frac{\\operatorname{Cov}(\\bar{X}_6, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}.\\end{split}$ At this point, we can immediately discard the first three terms as they are at least $ \\mathcal {O}\\left(\\frac{1}{n^3}\\right)$ and so can directly be neglected for a second order correction.", "In addition, as we are dealing with covariances between U-statistics they add another $ \\mathcal {O}\\left(\\frac{1}{n}\\right)$ .", "Therefore, the fourth and fifth term are actually 
$\\mathcal {O}\\left(\\frac{1}{n}\\right) \\mathcal {O}\\left(\\frac{1}{n^2}\\right)=\\mathcal {O}\\left(\\frac{1}{n^3}\\right)$ , so they can be neglected as well.", "Only the last and the second to last term remain: $\\begin{split}\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2} \\approx \\underbrace{\\frac{12 (n-2)(n-3)(n-4)}{n^2(n-1)^2}\\frac{\\operatorname{Cov}(U_{VI}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}}_{\\text{Sixth term}}+\\underbrace{\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\frac{\\operatorname{Cov}(\\bar{X}_6, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}}_{\\text{Seventh term}}.\\end{split}$ Re-expressing the covariances between U-statistics as covariances between random variables $X$ and $Y$ (and using the symmetrized version of $U_{VI}$ ), we obtain: $\\begin{split}\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}&\\approx \\underbrace{\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3}\\biggl (\\frac{ \\operatorname{Cov}(X^2, Y)}{\\mu _Y \\mu _X^2}+\\frac{4 \\operatorname{Cov}(X, Y)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _Y \\mu _X^3} \\biggr )}_{\\text{Sixth term}}\\\\&+\\underbrace{\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y} + \\frac{30}{n(n-1)} \\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2} \\biggr )}_{\\text{Seventh term}}\\end{split}$ Since the term $\\frac{12}{n} \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y}$ is in $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ we have to recurse on it.", "However, we already have derived its correction in equation (REF ).", "Therefore, the total correction of term (e) comes down to: $\\begin{split}\\frac{\\widehat{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}}{\\widehat{\\mu _X^6} \\widehat{\\mu _Y^2}}&=\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3}\\biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, Y)}}{\\widehat{\\mu _Y} \\widehat{\\mu _X^2}}+\\frac{4 \\widehat{\\operatorname{Cov}(X, Y})(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _Y} \\widehat{\\mu _X^3}} \\biggr )\\\\&+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} r_{a}^{*} + \\frac{30}{n(n-1)} \\frac{\\widehat{\\operatorname{Cov}(X, Y)^2}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}} \\biggr )\\end{split}$ Term (f) Term (f) is: $\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2)}{\\mu _X^8}$ The procedure to obtain its correction is analogous to that of term (e), hence we only present the result: $\\begin{split}\\frac{\\widehat{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2)}}{\\widehat{\\mu _X^8} }&=\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3} \\biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X^3}}+\\frac{4 \\widehat{\\operatorname{Var}(X)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2)}}{\\widehat{\\mu _X^4}}\\biggr )\\\\&+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} r_{b}^{*} + \\frac{30}{n(n-1)} \\frac{\\widehat{\\operatorname{Var}(X)^2}}{\\widehat{\\mu _X^4}} \\biggr )\\end{split}$ Top-label calibration Following standard practice in related work on calibration, we report the $L_1$ $ECE^{bin}$ for top-label (also called confidence) calibration on CIFAR-10/100.", "$ECE^{bin}$ was calculated using 15 bins and an adaptive width binning scheme, which determines the bin sizes so that an equal number of samples fall into each bin [36], [31].", "The 95% confidence intervals for $ECE^{bin}$ are obtained using 100 bootstrap samples as in [25].", "In all experiments with calibration regularized 
training, the biased version of $ECE^{KDE}$ was used.", "Table REF summarizes our evaluation of the efficacy of KDE-XE in lowering the calibration error over the baseline XE on CIFAR-10 and CIFAR-100.", "The best-performing $\lambda $ coefficient for KDE-XE is shown in brackets.", "The results show that KDE-XE consistently reduces the calibration error, without dropping the accuracy.", "Figure REF depicts the $L_2$ $ECE^{bin}$ for several choices of the $\lambda $ parameter for KDE-XE, using ResNet-110 (SD) on CIFAR-10/100.", "Figure REF shows reliability diagrams with 10 bins for top-label calibration on CIFAR-100 using ResNet and Wide-ResNet.", "Compared to XE, we notice that KDE-XE lowers the overconfident predictions and obtains better calibration than MMCE ($\lambda =2$ ) and FL-53 on average, as summarized by the ECE value in the gray box.", "Table: Top-label $L_1$ adaptive-width $ECE^{bin}$ and accuracy for XE and KDE-XE for various architectures on CIFAR-10/100.", "Best ECE values are marked in bold.", "The values in brackets represent the value of the $\lambda $ parameter.", "Figure: $L_2$ $ECE^{bin}$ for top-label calibration using ResNet (SD).", "Figure: Reliability diagrams for top-label calibration on CIFAR-100 using ResNet (top row) and Wide-ResNet (bottom row) for each of the considered baselines.", "Relationship between $ECE^{bin}$ and $ECE^{KDE}$ In the following two sections, we further investigate the relationship between $ECE^{bin}$ , as the most widely used metric, and our $ECE^{KDE}$ estimator.", "For the three types of calibration, $ECE^{bin}$ is calculated with an equal-width binning scheme.", "The values for the bandwidth in $ECE^{KDE}$ and the number of bins per class for $ECE^{bin}$ are chosen with a leave-one-out maximum likelihood procedure and Doane's formula [9], respectively.", "Figure REF shows an example of $ECE^{bin}$ in a three-class setting on CIFAR-10.", "The points are mostly concentrated at the edges of the histogram, as can be seen from Figure REF .", "The surface of the corresponding Dirichlet KDE is given in REF .", "Figure REF shows the relationship between $ECE^{bin}$ and $ECE^{KDE}$ .", "The points represent a trained ResNet-56 model on a subset of three classes from CIFAR-10.", "In every row, a different number of points was used to estimate the $ECE^{KDE}$ .", "We notice that the $ECE^{KDE}$ estimates of the three types of calibration closely correspond to their histogram-based approximations.", "Figure: An example of a simplex binned estimator and kernel-density estimator for CIFAR-10.", "Figure: Relationship between $ECE^{bin}$ and $ECE^{KDE}$ for the three types of calibration: canonical (first column), marginal (second column) and top-label (third column).", "In every row, top to bottom, a different number of points (100, 500, 1000 and all points, respectively) is used to approximate $ECE^{KDE}$ .", "Each point represents a ResNet-56 model trained on a subset of three classes from CIFAR-10.", "The number of bins per class (13) is selected using Doane's formula, while the bandwidth is selected using a leave-one-out maximum likelihood procedure (typically chosen values are 0.001 for 100 points and 0.0001 otherwise).", "Bias and convergence rates Figure REF shows a comparison of $ECE^{KDE}$ and $ECE^{bin}$ estimated with a varying number of points.", "The ground truth is computed from 3000 test points with $ECE^{KDE}$ .", "The model used is a ResNet-56 trained on a subset of three classes from CIFAR-10.", "The figure shows 
that the two estimates are comparable and both perform reasonably well in a three-class setting.", "Figure: $ECE^{KDE}$ estimates and their corresponding binned approximations for the three types of calibration, for a varying number of points used for the estimation.", "The ground truth is calculated from 3000 probability scores of the test set with $ECE^{KDE}$ .", "The optimal number of bins and the bandwidth are chosen with Doane's formula and LOO MLE, respectively.", "The typically chosen number of bins is 6-11, and common values for the bandwidth are 0.0001 and 0.001.", "Figure REF shows the absolute difference between the ground truth and the estimated ECE using $ECE^{KDE}$ and $ECE^{bin}$ with a varying number of points.", "The results are averaged over 120 ResNet-56 models trained on a subset of three classes from CIFAR-10.", "Both estimators are biased and have some variance, and the plot shows that the combined bias and variance of the two are of the same order of magnitude.", "The empirical convergence rate (the slope of the log-log plot) is given in the legend and is close to the theoretically expected value of -0.5.", "We observe that $ECE^{KDE}$ has statistical properties, in terms of bias and convergence, similar to those of $ECE^{bin}$ .", "Figure: Absolute difference between ground truth and estimated ECE for a varying number of points used for the estimation.", "The ground truth is calculated using 3000 probability scores of the test set.", "Note that the axes are on a log scale." ], [ "Empirical setup", "To showcase our estimator in applications where canonical calibration is crucial, we consider two medical datasets, namely Kather and DermaMNIST.", "The Kather dataset [19] consists of 5000 histological images of human colorectal cancer, covering eight different classes of tissue.", "DermaMNIST [56] is a pre-processed version of the HAM10000 dataset [51], containing 10015 dermatoscopic images of skin lesions, categorized into seven classes.", "Both datasets have been collected in accordance with the Declaration of Helsinki.", "Following standard practice in related work, we also trained ResNet [16], ResNet with stochastic depth (SD) [17], DenseNet [18] and WideResNet [59] networks on CIFAR-10/100 [20].", "We use 45000 images for training on the CIFAR datasets, 4000 for Kather and 7007 for DermaMNIST.", "The code is available at https://github.com/tpopordanoska/ece-kde." 
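, "As a complement to the released code, the following minimal Python sketch illustrates the evaluation protocol used above: $L_1$ $ECE^{bin}$ with equal-mass (adaptive-width) bins and a percentile bootstrap confidence interval; the 15-bin and 100-sample defaults follow this paper, while the function names and all other details are illustrative assumptions rather than the reference implementation.", "```python
import numpy as np

def ece_bin_adaptive(conf, correct, n_bins=15):
    # L1 top-label ECE with equal-mass bins: sort by confidence and
    # split into chunks that hold approximately equally many samples.
    order = np.argsort(conf)
    conf, correct = conf[order], correct[order]
    n = len(conf)
    ece = 0.0
    for c, y in zip(np.array_split(conf, n_bins), np.array_split(correct, n_bins)):
        if len(c):
            ece += (len(c) / n) * abs(c.mean() - y.mean())
    return ece

def bootstrap_ci(conf, correct, n_boot=100, alpha=0.05, seed=0):
    # 95% percentile bootstrap interval for ECE^bin.
    rng = np.random.default_rng(seed)
    n = len(conf)
    stats = [ece_bin_adaptive(conf[idx], correct[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))
```"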
], [ "Baselines", "Cross-entropy: The first baseline model is trained using cross-entropy (XE), with the data preprocessing, training procedure and hyperparameters described in the corresponding paper for the architecture.", "Trainable calibration strategies: KDE-XE denotes our proposed estimator $ECE^{KDE}$ , as defined in eq:canonicalestimator, jointly trained with cross-entropy.", "MMCE [26] is a differentiable measure of calibration with the property that it is minimized at perfect calibration, i.e., MMCE is 0 if and only if $\operatorname{CE}_p=0$ .", "It is used as a regulariser alongside NLL, with the strength of regularization parameterized by $\lambda $ .", "Focal loss (FL) [31] is an alternative to the cross-entropy loss, defined as $\mathcal {L}_f = -(1 - f(y|x))^\gamma \log (f(y|x))$ , where $\gamma $ is a hyperparameter and $f(y|x)$ is the probability score that a neural network $f$ outputs for a class $y$ on an input $x$ .", "Their best-performing approach is the sample-dependent FL-53, where $\gamma = 5$ for $f(y|x) \in [0, 0.2)$ and $\gamma = 3$ otherwise, followed by the method with fixed $\gamma = 3$ .", "Post-hoc calibration strategies: [14] investigated the performance of several post-hoc calibration methods and found temperature scaling to be a strong baseline, which we use as a representative of this group.", "It works by scaling the logits with a scalar $T > 0$ , typically learned on a validation set by minimizing NLL.", "Following [26], [31], we also use temperature scaling as a post-processing step for our method." ], [ "Metrics", "We report $L_1$ canonical calibration using our $ECE^{KDE}$ estimator, calculated according to eq:canonicalestimator.", "Additional experiments with $L_1$ and $L_2$ top-label calibration on CIFAR-10/100 can be found in Appendix ." ], [ "Hyperparameters", "A crucial parameter for KDE is the bandwidth $b$ , a positive number that defines the smoothness of the density estimate.", "A poorly chosen bandwidth may lead to undersmoothing (small bandwidth) or oversmoothing (large bandwidth), as shown in Figure REF .", "A commonly used non-parametric bandwidth selector is maximum likelihood cross validation [10].", "For our experiments we choose the bandwidth from a list of possible values by maximizing the leave-one-out likelihood (LOO MLE).", "The $\lambda $ parameter for weighting the calibration error w.r.t. the loss is typically chosen via cross-validation or using a holdout validation set.", "We found that for KDE-XE, values of $\lambda \in [0.001, 0.2]$ provide a good trade-off in terms of accuracy and calibration error.", "The $p$ parameter is selected depending on the desired $L_p$ calibration error and the corresponding theoretical guarantees.", "The rest of the hyperparameters for training are set as proposed in the corresponding papers for the architectures we benchmark.", "In particular, for the CIFAR-10/100 datasets we used a batch size of 64 for DenseNet and 128 for the other architectures.", "For the medical datasets, we used a batch size of 64, due to their smaller size." 
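, "A minimal sketch of the LOO MLE bandwidth selection described above is given below; for readability it uses a one-dimensional Gaussian kernel in place of the Dirichlet kernels on the probability simplex, and the candidate list and function names are illustrative assumptions.", "```python
import numpy as np

def loo_log_likelihood(scores, b):
    # Leave-one-out log-likelihood of a 1-D KDE with bandwidth b: each
    # point is evaluated under the density fit to the other n - 1 points.
    n = len(scores)
    diff = scores[:, None] - scores[None, :]
    k = np.exp(-0.5 * (diff / b) ** 2) / (b * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)  # exclude the point itself
    dens = k.sum(axis=1) / (n - 1)
    return np.log(dens + 1e-300).sum()

def select_bandwidth(scores, candidates=(1e-4, 1e-3, 1e-2, 1e-1)):
    # Pick the bandwidth from a list of possible values by maximizing
    # the leave-one-out likelihood, as in the experiments above.
    return max(candidates, key=lambda b: loo_log_likelihood(scores, b))
```"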
], [ "Experiments", "An important property of our $ECE^{KDE}$ estimator is differentiability, allowing it to be used in a calibration-regularized training framework.", "In this section, we benchmark KDE-XE with several baselines on medical diagnosis applications, where the calibration of the whole probability vector is of particular interest.", "For completeness, we also include an experiment on CIFAR-10.", "Table REF summarizes the canonical $L_1$ $ECE^{KDE}$ and Table REF the accuracy, measured across multiple architectures.", "The bandwidth is chosen by LOO MLE.", "For MMCE and KDE-XE the best-performing regularization weight is reported.", "In Table REF we notice that KDE-XE consistently achieves very competitive ECE values, while also boosting the accuracy, as shown in Table REF .", "Interestingly, we observe that temperature scaling does not improve canonical calibration error, contrary to its reported improvements on top-label calibration.", "This observation that temperature scaling is less effective for stronger notions of calibration is consistent with a similar finding in [24], where the authors show that although the temperature-scaled model has well calibrated top-label confidence scores, the calibration error is much larger for class-wise calibration.", "Table: Canonical $L_1$ $ECE^{KDE}$ ($\downarrow $) for different loss functions and architectures, both trained from scratch (Pre T) and after temperature scaling on a validation set (Post T).", "Best results across Pre T methods are marked in bold.", "Table: Accuracy ($\uparrow $) computed for different architectures.", "Best results are marked in bold.", "Figure REF shows the performance of several architectures and datasets in terms of accuracy and $L_1$ $ECE^{KDE}$ for various choices of the regularization parameter for MMCE and KDE-XE.", "The 95$\%$ confidence intervals for $ECE^{KDE}$ are calculated using 100 and 10 bootstrap samples on the medical datasets and CIFAR-10, respectively.", "In all settings, KDE-XE Pareto dominates the competitors for several choices of $\lambda $ .", "For example, on DermaMNIST trained with DenseNet, KDE-XE with $\lambda =0.2$ reduces $ECE^{KDE}$ from 66% to 45%.", "Figure: Canonical calibration on various datasets and architectures.", "The numbers next to the points denote the value of the regularization parameter.", "KDE-XE outperforms the competitors, both in terms of accuracy and calibration error, for several choices of $\lambda $ ." ], [ "Training time measurements", "In Table REF we summarize the running time per epoch of the four architectures, with regularization (KDE-XE) and without regularization (XE).", "We observe only an insignificant impact on the training speed when using KDE-XE, dispelling any concerns w.r.t. the computational overhead.", "To summarize, the experiments show that our estimator consistently produces calibration errors competitive with other state-of-the-art approaches, while maintaining accuracy and keeping the computational complexity at $\mathcal {O}(n^2)$ .", "We note that within the proposed calibration-regularized training framework, this complexity is w.r.t. a mini-batch, and the added cost is less than a couple of percent.", "Furthermore, the $\mathcal {O}(n^2)$ complexity shows up in other related works [26], [60], and is intrinsic to density-based estimation of the calibration error.", "As future work, larger-scale benchmarking will be beneficial for exploring the limits of canonical calibration using Dirichlet kernels." 
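, "The calibration-regularized objective benchmarked above can be sketched as follows; the function ece_kde stands in for the differentiable estimator of eq:canonicalestimator (not reproduced here), the default $\lambda $ reflects the range reported in the hyperparameter section, and everything else is an illustrative assumption rather than the released training code.", "```python
import torch
import torch.nn.functional as F

def kde_xe_step(model, optimizer, x, y, ece_kde, lam=0.1):
    # One KDE-XE training step: cross-entropy plus lam * ECE^KDE computed
    # on the same mini-batch; the penalty adds O(batch_size^2) compute.
    optimizer.zero_grad()
    logits = model(x)
    probs = torch.softmax(logits, dim=1)
    loss = F.cross_entropy(logits, y) + lam * ece_kde(probs, y)
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```"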
], [ "Conclusion", "In this paper, we proposed a consistent and differentiable estimator of canonical $L_p$ calibration error using Dirichlet kernels.", "It has favorable computational and statistical properties, with a complexity of $\\mathcal {O}(n^2)$ , convergence of $\\mathcal {O}(n^{-1/2})$ and a bias that converges as $\\mathcal {O}(n^{-1})$ , which can be further reduced to $\\mathcal {O}(n^{-2})$ using our debiasing strategy.", "The $ECE^{KDE}$ can be directly optimized alongside any loss function in the existing batch stochastic gradient descent framework.", "Furthermore, we propose using it as a measure of the highest form of calibration, which requires the entire probability vector to be calibrated.", "To the best of our knowledge, this is the only metric that can tractably capture this type of calibration, which is crucial in safety-critical applications where downstream decisions are made based on the predicted probabilities.", "We showed empirically on a range of neural architectures and datasets that the performance of our estimator in terms of accuracy and calibration error is competitive against the current state-of-the-art, while having superior properties as a consistent estimator of canonical calibration error." ], [ "Acknowledgments", "This research received funding from the Research Foundation - Flanders (FWO) through project number S001421N, and the Flemish Government under the “Onderzoeksprogramma Artificiële Intelligentie (AI) Vlaanderen” programme.", "R.S.", "was supported in part by the Tübingen AI centre.", "The paper is concerned with estimation of calibration error, a topic for which existing methods are deployed, albeit not typically for canonical calibration error in a multi-class setting.", "We therefore consider the ethical risks to be effectively the same as for any probabilistic classifier.", "Experiments apply the method to medical image classification, for which misinterpretation of benchmark results with respect to their clinical applicability has been highlighted as a risk, see e.g.", "[53].", "For all authors... Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope?", "Did you describe the limitations of your work?", "Did you discuss any potential negative societal impacts of your work?", "Please refer to our ethical statement.", "Have you read the ethics review guidelines and ensured that your paper conforms to them?", "If you are including theoretical results... Did you state the full set of assumptions of all theoretical results?", "Did you include complete proofs of all theoretical results?", "If you ran experiments... 
Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)?", "It is in the supplementary material.", "Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)?", "Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)?", "See Figure REF .", "Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?", "Table REF includes compute times.", "Most of our results are provided in big O complexity.", "If you are using existing assets (e.g., code, data, models) or curating/releasing new assets...", "If your work uses existing assets, did you cite the creators?", "Did you mention the license of the assets?", "We do not release the data.", "Data license is available via the citation.", "Did you include any new assets either in the supplemental material or as a URL?", "Did you discuss whether and how consent was obtained from people whose data you're using/curating?", "Medical datasets used in this paper conform to the Declaration of Helsinki.", "Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content?", "Medical datasets used in this paper conform to the Declaration of Helsinki.", "If you used crowdsourcing or conducted research with human subjects... Did you include the full text of instructions given to participants and screenshots, if applicable?", "We did not use crowdsourcing or conduct research with human subjects.", "Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable?", "We did not use crowdsourcing or conduct research with human subjects.", "Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation?", "We did not use crowdsourcing or conduct research with human subjects." ], [ "Derivation of the MSE decomposition", "Definition A.1 (Mean Squared Error (MSE)) The mean squared error of an estimator is $\operatorname{MSE}(f) := \mathbb {E}[(f(x) - y)^2] .$ Proposition A.2 $\operatorname{MSE}(f)\ge \operatorname{CE}_2(f)^2$ $\operatorname{MSE}(f):=&\mathbb {E}[(f(x) - y)^2]= \mathbb {E}[((f(x) - \mathbb {E}[y\mid f(x)]) + (\mathbb {E}[y\mid f(x)] - y))^2] \\=& \underbrace{\mathbb {E}[(f(x) - \mathbb {E}[y\mid f(x)])^2]}_{=CE_{2}^2} + \mathbb {E}[(\mathbb {E}[y\mid f(x)] - y)^2] \\&+ 2 \mathbb {E}[(f(x) - \mathbb {E}[y\mid f(x)])(\mathbb {E}[y\mid f(x)] - y)] \nonumber $ which implies $\operatorname{MSE}(f)-\operatorname{CE}_2(f)^2 =& \mathbb {E}[(\mathbb {E}[y\mid f(x)] - y)^2] \\&+ 2 \mathbb {E}[(f(x) - \mathbb {E}[y\mid f(x)])(\mathbb {E}[y\mid f(x)] - y)] \nonumber \\=& \mathbb {E}[(\mathbb {E}[y\mid f(x)] - y)^2] + 2 \mathbb {E}[f(x)\mathbb {E}[y\mid f(x)]] \\&- 2 \mathbb {E}[f(x)y] - 2 \mathbb {E}[\mathbb {E}[y\mid f(x)]^2] +2 \mathbb {E}[\mathbb {E}[y\mid f(x)] y] \nonumber \\=&\mathbb {E}[\mathbb {E}[y\mid f(x)]^2] + \mathbb {E}[y^2] - 2 \mathbb {E}[\mathbb {E}[y\mid f(x)] y] \\&+ 2 \mathbb {E}[f(x)\mathbb {E}[y\mid f(x)]]- 2 \mathbb {E}[f(x)y] \nonumber \\&- 2 \mathbb {E}[\mathbb {E}[y\mid f(x)]^2] +2 \mathbb {E}[\mathbb {E}[y\mid f(x)] y] \nonumber \\=& \mathbb {E}[y^2] + 2 \mathbb {E}[f(x)\mathbb {E}[y\mid f(x)]] - 2 \mathbb {E}[f(x)y] \\&- \mathbb 
{E}[\\mathbb {E}[y\\mid f(x)]^2] \\nonumber \\\\=& \\mathbb {E}[(2 f(x) - y - \\mathbb {E}[y\\mid f(x)]) (\\mathbb {E}[y\\mid f(x)])-y] \\\\=& \\mathbb {E}[( f(x) - y) (\\mathbb {E}[y\\mid f(x)] - y)] \\\\&+ \\mathbb {E}[ (f(x) - \\mathbb {E}[y\\mid f(x)]) (\\mathbb {E}[y\\mid f(x)] - y)] .", "\\nonumber $ By the law of total expectation, we will write the above as $\\operatorname{MSE}(f)-\\operatorname{CE}_2(f)^2 = \\mathbb {E}[ \\mathbb {E}[&( f(x) - y) (\\mathbb {E}[y\\mid f(x)] - y) \\\\&+ (f(x) - \\mathbb {E}[y\\mid f(x)]) (\\mathbb {E}[y\\mid f(x)] - y)\\mid f(x)]] .", "\\nonumber $ Focusing on the inner conditional expectation, we have that $\\mathbb {E}[(& f(x) - y) (\\mathbb {E}[y\\mid f(x)] - y) + (f(x) - \\mathbb {E}[y\\mid f(x)]) (\\mathbb {E}[y\\mid f(x)] - y)\\mid f(x)] \\nonumber \\\\=& \\mathbb {E}[y\\mid f(x)] (f(x) - 1)(\\mathbb {E}[y\\mid f(x)] - 1)+(1 - \\mathbb {E}[y\\mid f(x)])f(x) \\mathbb {E}[y\\mid f(x)] \\nonumber \\\\&+ \\mathbb {E}[y\\mid f(x)] (f(x) - \\mathbb {E}[y\\mid f(x)])(\\mathbb {E}[y\\mid f(x)] - 1)\\nonumber \\\\ &+ (1-\\mathbb {E}[y\\mid f(x)]) (f(x) - \\mathbb {E}[y\\mid f(x)])\\mathbb {E}[y\\mid f(x)] \\\\=& (1-\\mathbb {E}[y\\mid f(x)])\\mathbb {E}[y\\mid f(x)] \\ge 0 \\quad \\forall f(x) $ and therefore $\\operatorname{MSE}(f)-\\operatorname{CE}_2(f)^2 = \\mathbb {E}[(1-\\mathbb {E}[y\\mid f(x)])\\mathbb {E}[y\\mid f(x)]]\\ge 0 .$ The expectation in  eq:MSEminusCE2 is over variances of Bernoulli random variables with probabilities $\\mathbb {E}[y\\mid f(x)]$ ." ], [ "Derivation of eq:estimatorEYfX", "By considering $y \\in \\lbrace 0, 1\\rbrace $ , we have the following: $\\mathbb {E}[y\\mid f(x)]&= \\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{y|f(x)}(y_k) = \\frac{\\sum _{y_k \\in \\mathcal {Y}} y_k \\, p_{f(x), y}(f(x), y_k)}{p_{f(x)}(f(x))} \\\\&= \\frac{p_{f(x), y}(f(x), y_k=1)}{p_{f(x)}(f(x))} = \\frac{p_{f(x)|y}(f(x)|y_k=1)p_y(y_k=1)}{p_{f(x)}(f(x))} \\\\&\\approx \\frac{\\frac{1}{\\sum _{i=1}^n y_i}\\sum _{i=1}^n k(f(x) ; f(x_i))y_i \\frac{\\sum _{i=1}^n y_i}{n}}{\\frac{1}{n}\\sum _{i=1}^n k(f(x) ; f(x_i))} \\\\&\\approx \\frac{\\sum _{i=1}^n k(f(x) ; f(x_i))y_i}{\\sum _{i=1}^n k(f(x) ; f(x_i))} =: \\widehat{\\mathbb {E}[y \\mid f(x)]}$" ], [ "Derivation of eq:msebinary", "We consider the optimization problem for some $\\lambda >0$ : $f = \\arg \\min _{f\\in \\mathcal {F}} \\Bigl ( \\operatorname{MSE}(f) +\\lambda \\operatorname{CE}_2(f)^2 \\Bigr ).$ Using eq:MSEminusCE2 we rewrite: $\\operatorname{MSE}(f) +\\lambda \\operatorname{CE}_2(f)^2 \\nonumber &= (1+\\lambda ) \\operatorname{MSE}(f) -\\lambda \\Bigl (\\operatorname{MSE}(f)-\\operatorname{CE}_2(f)^2\\Bigr ) \\nonumber \\\\&= (1+\\lambda ) \\operatorname{MSE}(f) -\\lambda \\mathbb {E}\\biggl [\\Bigl (1-\\mathbb {E}[y\\mid f(x)]\\Bigr )\\mathbb {E}[y\\mid f(x)]\\biggr ].", "$ Rescaling eq:CalibrationRetularized by a factor of $(1+\\lambda )^{-1}$ and a variable substitution $\\gamma =\\frac{\\lambda }{1+\\lambda } \\in [0,1)$ , we have that: $f=&\\arg \\min _{f\\in \\mathcal {F}}\\Bigl ( \\operatorname{MSE}(f) +\\lambda \\operatorname{CE}_2(f)^2\\Bigr ) \\nonumber \\\\ =& \\arg \\min _{f\\in \\mathcal {F}}\\biggl ( \\operatorname{MSE}(f) -\\gamma \\mathbb {E}\\biggl [\\Bigl (1-\\mathbb {E}[y\\mid f(x)]\\Bigr )\\mathbb {E}[y\\mid f(x)]\\biggr ]\\biggr ) \\nonumber \\\\=& \\arg \\min _{f\\in \\mathcal {F}} \\biggl ( \\operatorname{MSE}(f) +\\gamma \\mathbb {E}\\Bigl [\\mathbb {E}[y\\mid f(x)]^2\\Bigr ]\\biggr ) .$" ], [ "Bias of ratio of U-statistics", "The unbiased estimator for the 
square of a mean $\mu _X^2$ is given by: $\widehat{\mu _X^2} = \frac{1}{n(n-1)}\sum _{i=1}^n \sum _{\begin{array}{c}j=1\\j \ne i\end{array}}^n X_i X_j = \frac{1}{n(n-1)}\left(\left(\sum _{i=1}^n X_i\right)^2 - \sum _{i=1}^n X_i^2\right).$ This is a second-order U-statistic with kernel $h(x_1, x_2)=x_1 x_2$ .", "The bias of the ratio of two of these estimators converges as $\mathcal {O}\left(\frac{1}{n}\right)$ , as the following lemma proves.", "Lemma B.1 Let $\theta _1$ and $\theta _2$ be two estimable parameters and let $U_1$ and $U_2$ be the two corresponding U-statistics of order $m_1$ and $m_2$ , respectively, based on a sample of $n$ i.i.d. RVs.", "The bias of the ratio $U_1 / U_2$ of these two U-statistics will converge as $\mathcal {O}\left(\frac{1}{n}\right)$ .", "Let $R=\theta _1 / \theta _2$ be the ratio of two estimable parameters and $r=U_1 / U_2$ the ratio of the corresponding U-statistics.", "Note that $U_i$ is an unbiased estimator of $\theta _i$ , $\mathbb {E}[U_i]=\theta _i$ , $i=1,2$ ; however, the ratio is usually biased.", "To investigate the bias of that ratio, we rewrite $r = R \biggl (1+\frac{U_1 -\theta _1 }{\theta _1} \biggr )\biggl (1+\frac{U_2 -\theta _2 }{\theta _2} \biggr )^{-1}.$ If $\left|\frac{U_2 -\theta _2 }{\theta _2} \right|<1$ , we can expand $\biggl (1+\frac{U_2 -\theta _2 }{\theta _2} \biggr )^{-1}$ in a geometric series: $r &=R \biggl (1+\frac{(U_1 -\theta _1) }{\theta _1} \biggr ) \biggl (1- \frac{(U_2 -\theta _2) }{\theta _2} +\frac{(U_2 -\theta _2)^2 }{\theta _2^2}-\frac{(U_2 -\theta _2)^3 }{\theta _2^3}+\frac{(U_2 -\theta _2)^4 }{\theta _2^4}-...\biggr )\\&=R\Biggl (1 +\frac{(U_1 -\theta _1) }{\theta _1}-\frac{(U_2 -\theta _2) }{\theta _2}-\frac{(U_2 -\theta _2)(U_1 -\theta _1) }{\theta _2\theta _1} \nonumber \\&\phantom{asdfasd}+\frac{(U_2 -\theta _2)^2 }{\theta _2^2}+\frac{(U_2 -\theta _2)^2(U_1 -\theta _1)}{\theta _2^2\theta _1}-\frac{(U_2 -\theta _2)^3 }{\theta _2^3}-\frac{(U_2 -\theta _2)^3(U_1 -\theta _1) }{\theta _2^3\theta _1} \nonumber \\&\phantom{asdfasd}+\frac{(U_2 -\theta _2)^4 }{\theta _2^4}+\frac{(U_2 -\theta _2)^4(U_1 -\theta _1) }{\theta _2^4\theta _1} -... 
\Biggr ).$ If $\zeta _1 > 0$ , a U-statistic $U$ of order $m$ obtained from a sample of $n$ observations converges in distribution [45]: $\sqrt{n}\left(U - \mathbb {E}[U]\right) \xrightarrow{} N(0, m^2 \zeta _1).$ Keeping the terms up to $\Theta \left(\frac{1}{n} \right)$ : $\begin{split}r&=R\Biggl (1 +\frac{(U_1 -\theta _1 )}{\theta _1}-\frac{(U_2 -\theta _2) }{\theta _2}-\frac{(U_2 -\theta _2)(U_1 -\theta _1) }{\theta _2\theta _1}+\frac{(U_2 -\theta _2)^2 }{\theta _2^2} + o\left(\frac{1}{n}\right)\Biggr )\end{split}$ To examine the bias, we take the expectation value of this expression: $\begin{split}\mathbb {E}[r] &= R\Biggl (1 +\frac{\mathbb {E}\bigl [(U_1 -\theta _1) \bigr ]}{\theta _1}-\frac{\mathbb {E}\bigl [(U_2 -\theta _2) \bigr ]}{\theta _2}-\frac{\mathbb {E}\bigl [(U_2 -\theta _2)(U_1 -\theta _1) \bigr ]}{\theta _2\theta _1}+\frac{\mathbb {E}\bigl [(U_2 -\theta _2)^2 \bigr ]}{\theta _2^2}+ o\left(\frac{1}{n}\right) \Biggr )\end{split}$ We now make use of the following expressions: $\mathbb {E}\bigl [(U_1 -\theta _1) \bigr ]&=\mathbb {E}\bigl [(U_2 -\theta _2) \bigr ]=0\\\mathbb {E}\bigl [(U_2 -\theta _2)(U_1 -\theta _1) \bigr ]&=\operatorname{Cov}(U_2, U_1)\\\mathbb {E}\bigl [(U_2 -\theta _2)^2 \bigr ]&= \operatorname{Var}(U_2)$ Using these expressions the expectation of $r$ becomes: $\begin{split}\mathbb {E}[r] &= R\Biggl (1 - \frac{\operatorname{Cov}(U_2, U_1)}{\theta _2 \theta _1} + \frac{\operatorname{Var}(U_2)}{\theta _2^2} + o\left(\frac{1}{n}\right) \Biggr )\end{split}$ Using Equation (REF ), the linearity of covariance and with $\operatorname{Var}(aX)=a^2 \operatorname{Var}(X)$ we obtain: $\operatorname{Cov}(U_2, U_1), \operatorname{Var}(U_2) \in \mathcal {O}\left( \frac{1}{n} \right) \Rightarrow \mathbb {E}[r] = R\Biggl (1 + \mathcal {O}\left(\frac{1}{n}\right) \Biggr ) .$" ], [ "De-biasing of ratios of straight averages", "Let $X$ and $Y$ be random variables and let $\mu _X$ and $\mu _Y$ be the means of their distributions, respectively.", "Consider the problem of finding an unbiased estimator for the ratio of means: $R = \frac{\mu _Y}{\mu _X}.$ A first approach to estimate this ratio $R$ is to compute the ratio of the sample means: Let $(X_1, Y_1), ..., (X_n, Y_n)$ be pairs of i.i.d. random variables that are jointly distributed: $r = \hat{R} = \frac{\hat{\mu _Y}}{\hat{\mu _X}} =\frac{\frac{1}{n}\sum _{i=1}^n Y_i}{\frac{1}{n}\sum _{i=1}^nX_i}=\frac{\bar{Y}}{\bar{X}}.$ This, however, is a biased estimator, which can be seen as follows (we follow [50], [37] here): $r =\frac{\bar{Y}}{\bar{X}} = \frac{\mu _Y}{\mu _X} \left(\frac{\bar{Y}}{\mu _Y}\right)\left(\frac{\bar{X}}{\mu _X}\right)^{-1}=R\biggl (1 + \frac{\bar{Y}-\mu _Y}{\mu _Y} \biggr )\biggl (1 + \frac{\bar{X}-\mu _X}{\mu _X} \biggr )^{-1}.$ This now has the form of a convergent geometric series.", "Thus, if $\biggl | \frac{\bar{X}-\mu _X}{\mu _X}\biggr | < 1,$ we can expand $\biggl (1 + \frac{\bar{X}-\mu _X}{\mu _X} \biggr )^{-1}$ in a geometric series, which is defined as: $\sum _{k=0}^{\infty } a \, b^k = a + ab + a b^2 + ... 
= \frac{a}{1-b}.$ In our case we can identify $a=R\biggl (1 + \frac{\bar{Y}-\mu _Y}{\mu _Y} \biggr )$ and $b=-\frac{\bar{X}-\mu _X}{\mu _X}$ .", "Thus, using the geometric series expansion, we can write: $r&= R\biggl (1 + \frac{\bar{Y}-\mu _Y}{\mu _Y} \biggr )\biggl (1 - \frac{(\bar{X}-\mu _X)}{\mu _X} + \frac{(\bar{X}-\mu _X)^2}{\mu _X^2} - \frac{(\bar{X}-\mu _X)^3}{\mu _X^3} + \frac{(\bar{X}-\mu _X)^4}{\mu _X^4} - ...\biggr )\\\begin{split}&=R\biggl (1 + \frac{(\bar{Y}-\mu _Y)}{\mu _Y} - \frac{(\bar{X}-\mu _X)}{\mu _X} - \frac{(\bar{X}-\mu _X)(\bar{Y}-\mu _Y)}{\mu _Y \mu _X} + \frac{(\bar{X}-\mu _X)^2}{\mu _X^2} \\& \phantom{asdfa}+ \frac{(\bar{X}-\mu _X)^2 (\bar{Y}-\mu _Y)}{\mu _X^2 \mu _Y} - \frac{(\bar{X}-\mu _X)^3}{\mu _X^3} - \frac{(\bar{X}-\mu _X)^3 (\bar{Y}-\mu _Y)}{\mu _X^3 \mu _Y} + \frac{(\bar{X}-\mu _X)^4}{\mu _X^4} + ... \biggr )\end{split}$" ], [ "Neglecting higher order terms", "Since $\bar{X}$ and $\bar{Y}$ are U-statistics, we make use of the asymptotic behaviour of U-statistics.", "If $\zeta _1 > 0$ , a U-statistic $U_n$ of order $m$ obtained from a sample of $n$ observations behaves as $n \rightarrow \infty $ like [45]: $\sqrt{n}\left(U_n - \mathbb {E}[U_n]\right) \xrightarrow{} N(0, m^2 \zeta _1).$ As we seek an estimator that is unbiased up to order $n^{-2}$ and since $\mathbb {E}[\bar{X}] = \mu _X$ , we can neglect all terms of order 5 or higher since for $n \rightarrow \infty $ : $(\bar{X} - \mu _X)^5 &\in \mathcal {O}(n^{-2.5})\\(\bar{X} - \mu _X)^4 (\bar{Y} - \mu _Y) &\in \mathcal {O}(n^{-2.5})$ Therefore, we obtain: $\begin{split}r&\approx R\biggl (1 + \frac{(\bar{Y}-\mu _Y)}{\mu _Y} - \frac{(\bar{X}-\mu _X)}{\mu _X} - \frac{(\bar{X}-\mu _X)(\bar{Y}-\mu _Y)}{\mu _Y \mu _X} + \frac{(\bar{X}-\mu _X)^2}{\mu _X^2} \\& \phantom{asdfa}+ \frac{(\bar{X}-\mu _X)^2 (\bar{Y}-\mu _Y)}{\mu _X^2 \mu _Y} - \frac{(\bar{X}-\mu _X)^3}{\mu _X^3} - \frac{(\bar{X}-\mu _X)^3 (\bar{Y}-\mu _Y)}{\mu _X^3 \mu _Y} + \frac{(\bar{X}-\mu _X)^4}{\mu _X^4} \biggr )\end{split}$" ], [ "Bias", "Using these expressions we can compute the expectation value of $r=\hat{R}$ : $\begin{split}\mathbb {E}[r]&\approx R\Biggl (1+\frac{1}{n}\biggl (\frac{\operatorname{Var}(X)}{\mu _X^2} - \frac{\operatorname{Cov}(X, Y)}{\mu _X \mu _Y} \biggr ) + \frac{1}{n^2}\biggl (\frac{(\operatorname{Cov}(X^2, Y) -2\mu _X \operatorname{Cov}(X, Y))}{\mu _X^2 \mu _Y}\\&\phantom{asdfa}-\frac{(\operatorname{Cov}(X^2, X) - 2\mu _X \operatorname{Var}(X))}{\mu _X^3} - \frac{3\operatorname{Var}(X) \operatorname{Cov}(X, Y)}{\mu _X^3 \mu _Y}+\frac{3 \operatorname{Var}(X)^2}{\mu _X^4}\biggr )\Biggr )\end{split}$ The bias of $r=\widehat{R}$ is defined as: $\operatorname{Bias}(r) &= \mathbb {E}[r] - R\\&=R\Biggl (\frac{1}{n}\biggl (\frac{\operatorname{Var}(X)}{\mu _X^2} - \frac{\operatorname{Cov}(X, Y)}{\mu _X \mu _Y} \biggr ) + \frac{1}{n^2}\biggl (\frac{(\operatorname{Cov}(X^2, Y) -2\mu _X \operatorname{Cov}(X, Y))}{\mu _X^2 \mu _Y}\\&\phantom{asdfa}-\frac{(\operatorname{Cov}(X^2, X) - 2\mu _X \operatorname{Var}(X))}{\mu _X^3} - \frac{3\operatorname{Var}(X) \operatorname{Cov}(X, Y)}{\mu _X^3 \mu _Y}+\frac{3 \operatorname{Var}(X)^2}{\mu _X^4}\biggr )\Biggr )$ Therefore an unbiased version of $r$ is: $r_{\text{unbiased}} &= r - R\Biggl (\frac{1}{n}\biggl 
(\\frac{\\operatorname{Var}(X)}{\\mu _X^2} - \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y} \\biggr ) + \\frac{1}{n^2}\\biggl (\\frac{(\\operatorname{Cov}(X^2, Y) -2\\mu _X \\operatorname{Cov}(X, Y))}{\\mu _X^2 \\mu _Y}\\\\&\\phantom{asdfas}-\\frac{(\\operatorname{Cov}(X^2, X) - 2\\mu _X \\operatorname{Var}(X))}{\\mu _X^3} - \\frac{3\\operatorname{Var}(X) \\operatorname{Cov}(X, Y)}{\\mu _X^3 \\mu _Y}+\\frac{3 \\operatorname{Var}(X)^2}{\\mu _X^4}\\biggr )\\Biggr )$ A corrected version of the estimator $r=\\hat{R}$ is consequently given by: $\\begin{split}r_{corr} &:= r\\Biggl (1-\\frac{1}{n}\\biggl (\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}} - \\frac{\\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X \\mu _Y}} \\biggr ) - \\frac{1}{n^2}\\biggl (\\frac{\\widehat{(\\operatorname{Cov}(X^2, Y)} -2\\widehat{\\mu _X} \\widehat{\\operatorname{Cov}(X, Y)})}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y}}\\\\&\\phantom{asdfas}-\\frac{(\\widehat{\\operatorname{Cov}(X^2, X)} - 2\\widehat{\\mu _X} \\widehat{\\operatorname{Var}(X)})}{\\widehat{\\mu _X^3}} - \\frac{3\\widehat{\\operatorname{Var}(X)} \\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X^3} \\widehat{\\mu _Y}}+\\frac{3 \\widehat{\\operatorname{Var}(X)}^2}{\\widehat{\\mu _X^4}}\\biggr )\\Biggr )\\end{split}$ In the above equation we again encounter rations of estimators which again might be biased.", "Since we want to achieve a second order de-biasing we have to again recurse on the terms that have a $\\mathcal {O}\\left(\\frac{1}{n} \\right)$ dependency.", "However, we do not have to recurse on the terms that have a $\\mathcal {O}\\left(\\frac{1}{n^2} \\right)$ dependency, since any recursion would increase the power of the $n$ -dependency.", "Therefore a debiased estimator up to order $\\mathcal {O}(n^2)$ is: $\\begin{split}r_{corr} &:= r\\Biggl (1-\\frac{1}{n}\\biggl (r_{b}^{*} - r_{a}^{*} \\biggr ) - \\frac{1}{n^2}\\biggl (\\frac{\\widehat{(\\operatorname{Cov}(X^2, Y)} -2\\widehat{\\mu _X} \\widehat{\\operatorname{Cov}(X, Y)})}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y}}\\\\&\\phantom{asdfa}-\\frac{(\\widehat{\\operatorname{Cov}(X^2, X)} - 2\\widehat{\\mu _X} \\widehat{\\operatorname{Var}(X)})}{\\widehat{\\mu _X^3}} - \\frac{3\\widehat{\\operatorname{Var}(X)} \\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X^3} \\widehat{\\mu _Y}}+\\frac{3 \\widehat{\\operatorname{Var}(X)}^2}{\\widehat{\\mu _X^4}}\\biggr )\\Biggr )\\end{split}$ where $\\begin{split}r_{a}^{*} &= \\underbrace{\\frac{\\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X \\mu _Y}}}_{=r_a}\\Biggl (1+\\frac{1}{(n-1)}\\biggl (\\frac{\\widehat{\\mu _Y}\\widehat{\\operatorname{Cov}(X^2, Y)}+\\widehat{\\mu _X}\\widehat{\\operatorname{Cov}(Y^2, X)}}{\\widehat{\\operatorname{Cov}(X, Y)}\\widehat{\\mu _X}\\widehat{ \\mu _Y}}-4\\biggr )\\\\&\\phantom{asasasasddasdf}- \\frac{1}{(n-1)}\\biggl (\\frac{\\ \\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2} }+ \\frac{ \\widehat{\\operatorname{Var}(Y)}}{ \\widehat{\\mu _Y^2}} + 2\\frac{ \\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X} \\widehat{\\mu _Y}}\\biggr )\\Biggr )\\end{split}$ $r_{b}^{*} = \\underbrace{\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}}_{=r_b}\\Biggl (1+ \\frac{4}{(n-1)}\\biggl (\\frac{\\frac{1}{2}\\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X}\\widehat{ \\operatorname{Var}(X)}}-1\\biggr )-\\frac{4}{(n-1)}\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}\\Biggr ).$" ], [ "De-biasing of ratios of squared means", "Now consider the problem 
of finding an unbiased estimator for the ratio of the squared means of $X$ and $Y$ : $R = \\frac{\\mu _Y^2}{\\mu _X^2}.$ Both the numerator and the denominator of $R$ can separately be estimated by second-order U-statistics: $r = \\hat{R} = \\frac{\\widehat{\\mu _Y^2}}{\\widehat{\\mu _X^2}} =\\frac{\\frac{1}{n(n-1)}\\sum _{i=1}^n \\sum _{j=1 \\wedge j\\ne i}^n Y_i Y_j}{\\frac{1}{n(n-1)}\\sum _{i=1}^n \\sum _{j=1 \\wedge j\\ne i}^nX_i X_j}=:\\frac{\\bar{Y_2}}{\\bar{X_2}}.$ The subscript 2 in $\\bar{X}_2$ emphasizes that we are dealing with a second-order U-statistic here.", "Again, the ratio $\\frac{\\bar{Y}_2}{\\bar{X}_2}$ is a biased estimator.", "Using the approach with the converging geometric series and neglecting the higher-order terms, we obtain: $\\begin{split}r&\\approx R\\biggl (1 + \\frac{(\\bar{Y}_2-\\mu _Y^2)}{\\mu _Y^2} - \\frac{(\\bar{X}_2-\\mu _X^2)}{\\mu _X^2} - \\frac{(\\bar{X}_2-\\mu _X^2)(\\bar{Y}_2-\\mu _Y^2)}{\\mu _Y^2 \\mu _X^2} + \\frac{(\\bar{X}_2-\\mu _X^2)^2}{\\mu _X^4} \\\\& \\phantom{asdfa}+ \\frac{(\\bar{X}_2-\\mu _X^2)^2 (\\bar{Y}_2-\\mu _Y^2)}{\\mu _X^4 \\mu _Y^2} - \\frac{(\\bar{X}_2-\\mu _X^2)^3}{\\mu _X^6} - \\frac{(\\bar{X}_2-\\mu _X^2)^3 (\\bar{Y}_2-\\mu _Y^2)}{\\mu _X^6 \\mu _Y^2} + \\frac{(\\bar{X}_2-\\mu _X^2)^4}{\\mu _X^8}\\biggr )\\end{split}$" ], [ "Bias", "Computing $\\mathbb {E}[r]$ using the above identities: $\\begin{split}\\mathbb {E}[r]&\\approx R\\Biggl (1-\\frac{\\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^2 \\mu _Y^2}+\\frac{\\operatorname{Var}(\\bar{X}_2)}{\\mu _X^4}+\\biggl (\\frac{\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2) -2\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2}\\biggr )\\\\&\\phantom{asdfa}-\\biggl (\\frac{\\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)-2\\mu _X^2 \\operatorname{Var}(\\bar{X}_2)}{\\mu _X^6}\\biggr ) - \\biggl (\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2) - 3\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2) + 3 \\mu _X^4 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}\\biggr )\\\\&\\phantom{asdfa}+\\biggl (\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2) - 3\\mu _X^2 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2) + 3 \\mu _X^4 \\operatorname{Var}(\\bar{X}_2)}{\\mu _X^8}\\biggr )\\Biggr )\\end{split}\\\\\\begin{split}&= R\\Biggl (1-\\overbrace{\\frac{6 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^2 \\mu _Y^2}}^{\\text{Term (a)}} + \\overbrace{\\frac{6 \\operatorname{Var}(\\bar{X}_2)}{\\mu _X^4}}^{\\text{Term (b)}} + \\overbrace{\\frac{4 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2)}{\\mu _X^4\\mu _Y^2}}^{\\text{Term (c)}} -\\overbrace{\\frac{4 \\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)}{\\mu _X^6}}^{\\text{Term (d)}} \\\\& \\phantom{asdfa}- \\underbrace{\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}}_{\\text{Term (e)}} + \\underbrace{\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2)}{\\mu _X^8}}_{\\text{Term (f)}}\\Biggr )\\end{split}$ $\\begin{split}&=R\\Biggl \\lbrace 1-\\Biggl (\\frac{12}{n(n-1)}\\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2}+ \\frac{24}{n} R_a\\Biggr )\\\\&\\phantom{asdfasd}+\\Biggl (\\frac{12}{n(n-1)}\\frac{\\operatorname{Var}(X)^2}{\\mu _X^4}+\\frac{24}{n} R_b\\Biggr )\\\\&\\phantom{asdfasd}+\\Biggl (\\frac{32(n-2)}{n(n-1)^2}\\biggl (\\frac{ \\operatorname{Cov}(X^2, Y)}{\\mu _X^2 \\mu _Y} +\\frac{ 2\\operatorname{Cov}(X, Y)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _X^3 \\mu _Y}\\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl 
(\\frac{8}{n} R_a + \\frac{12}{n(n-1)} \\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2}\\biggr )\\Biggr )\\\\&\\phantom{asdfasd}- \\Biggl (\\frac{32(n-2)}{n(n-1)^2}\\biggl (\\frac{ \\operatorname{Cov}(X^2, X)}{\\mu _X^3} +\\frac{ 2\\operatorname{Var}(X)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _X^4}\\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl ( \\frac{8}{n} R_b + \\frac{12}{n(n-1)} \\frac{\\operatorname{Var}(X)^2}{\\mu _X^4}\\biggr )\\Biggr )\\\\&\\phantom{asdfasd}-\\Biggl (\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3}\\biggl (\\frac{ \\operatorname{Cov}(X^2, Y)}{\\mu _Y \\mu _X^2}+\\frac{4 \\operatorname{Cov}(X, Y)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _Y \\mu _X^3} \\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} R_a + \\frac{30}{n(n-1)} \\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2} \\biggr )\\Biggr )\\\\&\\phantom{asdfasd}+\\Biggl (\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3} \\biggl (\\frac{ \\operatorname{Cov}(X^2, X)}{\\mu _X^3}+\\frac{4 \\operatorname{Var}(X)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _X^4}\\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} R_b + \\frac{30}{n(n-1)} \\frac{\\operatorname{Var}(X)^2}{\\mu _X^4} \\biggr )\\Biggr )\\end{split}$ where $R_a = \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y}$ and $R_b = \\frac{\\operatorname{Var}(X)}{\\mu _X^2}$ and where we have used REF , REF , REF , REF , REF and REF for terms (a)-(f).", "Therefore, an estimator unbiased up to order two is given by: $\\begin{split}r_{\\text{corr}}&=\\frac{\\widehat{\\mu _Y^2}}{\\widehat{\\mu _X^2}}\\Biggl \\lbrace 1+\\Biggl (\\frac{12}{n(n-1)}\\frac{\\widehat{\\operatorname{Cov}(X, Y)^2}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}}+ \\frac{24}{n} r_{a}^{*}\\Biggr )\\\\&\\phantom{asdfasd}-\\Biggl (\\frac{12}{n(n-1)}\\frac{\\widehat{\\operatorname{Var}(X)^2}}{\\widehat{\\mu _X^4}}+\\frac{24}{n} r_{b}^{*}\\Biggr )\\\\&\\phantom{asdfasd}-\\Biggl (\\frac{32(n-2)}{n(n-1)^2}\\biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, Y)}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y}} +\\frac{ 2\\widehat{\\operatorname{Cov}(X, Y)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _X^3} \\widehat{\\mu _Y}}\\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl (\\frac{8}{n} r_{a}^{*} + \\frac{12}{n(n-1)} \\frac{\\widehat{\\operatorname{Cov}(X, Y)^2}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}}\\biggr )\\Biggr )\\\\&\\phantom{asdfasd}+\\Biggl (\\frac{32(n-2)}{n(n-1)^2}\\biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X^3}} +\\frac{ 2\\widehat{\\operatorname{Var}(X)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _X^4}}\\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl ( \\frac{8}{n} r_{b}^{*} + \\frac{12}{n(n-1)} \\frac{\\widehat{\\operatorname{Var}(X)^2}}{\\widehat{\\mu _X^4}}\\biggr )\\Biggr )\\\\&\\phantom{asdfasd}+\\Biggl (\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3}\\biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, Y)}}{\\widehat{\\mu _Y} \\widehat{\\mu _X^2}}+\\frac{4 \\widehat{\\operatorname{Cov}(X, Y)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _Y} \\widehat{\\mu _X^3}} \\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} r_{a}^{*} + \\frac{30}{n(n-1)} \\frac{\\widehat{\\operatorname{Cov}(X, Y)^2}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}}\\biggr )\\Biggr )\\\\&\\phantom{asdfasd}-\\Biggl (\\frac{24 
(n-2)(n-3)(n-4)}{n^2(n-1)^3} \\biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X^3}}+\\frac{4 \\widehat{\\operatorname{Var}(X)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _X^4}}\\biggr )\\\\&\\phantom{asdfasdasd}+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} r_{b}^{*} + \\frac{30}{n(n-1)} \\frac{\\widehat{\\operatorname{Var}(X)^2}}{\\widehat{\\mu _X^4}} \\biggr )\\Biggr ),\\end{split}$ where we used equations (REF ), (REF ): $\\begin{split}r_{a}^{*} &= \\underbrace{\\frac{\\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X \\mu _Y}}}_{=r_a}\\Biggl (1+\\frac{1}{(n-1)}\\biggl (\\frac{\\widehat{\\mu _Y}\\widehat{\\operatorname{Cov}(X^2, Y)}+\\widehat{\\mu _X}\\widehat{\\operatorname{Cov}(Y^2, X)}}{\\widehat{\\operatorname{Cov}(X, Y)}\\widehat{\\mu _X}\\widehat{ \\mu _Y}}-4\\biggr )\\\\&\\phantom{asasasasddasdf}- \\frac{1}{(n-1)}\\biggl (\\frac{\\ \\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2} }+ \\frac{ \\widehat{\\operatorname{Var}(Y)}}{ \\widehat{\\mu _Y^2}} + 2\\frac{ \\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X} \\widehat{\\mu _Y}}\\biggr )\\Biggr )\\end{split}$ $r_{b}^{*} = \\underbrace{\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}}_{=r_b}\\Biggl (1+ \\frac{4}{(n-1)}\\biggl (\\frac{\\frac{1}{2}\\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X}\\widehat{ \\operatorname{Var}(X)}}-1\\biggr )-\\frac{4}{(n-1)}\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}\\Biggr ).$" ], [ "Term (a)", "Let us first look at the first term: $\\frac{6 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^2 \\mu _Y^2}$ .", "Using the expression for the covariance between two second-order U-statistics, we get: $\\begin{split}\\frac{6 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu _X^2 \\mu _Y^2}&=\\frac{6}{\\mu _X^2 \\mu _Y^2}\\biggl (\\frac{4}{n}\\mu _X \\mu _Y \\operatorname{Cov}(X, Y) + \\frac{2}{n(n-1)} \\operatorname{Cov}(X, Y)^2\\biggr )\\\\&=\\underbrace{\\frac{24}{n}\\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y}}_{\\in \\mathcal {O}(n^{-1})}+\\underbrace{\\frac{12}{n(n-1)}\\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2}}_{\\in \\mathcal {O}(n^{-2})}\\end{split}$ Since we know that every recursion (i.e., geometric series expansion) will yield at least another factor of $\\frac{1}{n}$ , we do not have to recurse further on the term that is of order $\\mathcal {O}(n^{-2})$ .", "Consequently, we only expand the following term via a geometric series, $R_a = \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y},$ since the ratio of the respective unbiased estimators, $r_a = \\frac{\\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X \\mu _Y}},$ is biased.", "Using the same machinery as before, we obtain a corrected version of $r_a$ : $\\begin{split}r_{a}^{*} &= \\underbrace{\\frac{\\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X \\mu _Y}}}_{=r_a}\\Biggl (1+\\frac{1}{(n-1)}\\biggl (\\frac{\\widehat{\\mu _Y}\\widehat{\\operatorname{Cov}(X^2, Y)}+\\widehat{\\mu _X}\\widehat{\\operatorname{Cov}(Y^2, X)}}{\\widehat{\\operatorname{Cov}(X, Y)}\\widehat{\\mu _X}\\widehat{ \\mu _Y}}-4\\biggr )\\\\&\\phantom{asasasasddasdf}- \\frac{1}{(n-1)}\\biggl (\\frac{\\ \\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2} }+ \\frac{ \\widehat{\\operatorname{Var}(Y)}}{ \\widehat{\\mu _Y^2}} + 2\\frac{ \\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X} \\widehat{\\mu _Y}}\\biggr )\\Biggr )\\end{split}$ The complete correction of term (a), $\\frac{6 \\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}{\\mu 
_X^2 \\mu _Y^2}$ , therefore looks as follows: $\\begin{split}\\frac{6 \\widehat{\\operatorname{Cov}(\\bar{X}_2, \\bar{Y}_2)}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}}&=\\frac{12}{n(n-1)}\\frac{\\widehat{\\operatorname{Cov}(X, Y)^2}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}}\\\\&\\phantom{as}+ \\frac{24}{n}\\frac{\\widehat{\\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X \\mu _Y}}\\Biggl (1+\\frac{1}{(n-1)}\\biggl (\\frac{\\widehat{\\mu _Y}\\widehat{\\operatorname{Cov}(X^2, Y)}+\\widehat{\\mu _X}\\widehat{\\operatorname{Cov}(Y^2, X)}}{\\widehat{\\operatorname{Cov}(X, Y)}\\widehat{\\mu _X} \\widehat{\\mu _Y}}-4\\biggr )\\\\&\\phantom{asasasasdfasddasdf}- \\frac{1}{(n-1)}\\biggl (\\frac{\\ \\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2} }+ \\frac{\\widehat{ \\operatorname{Var}(Y)}}{\\widehat{ \\mu _Y^2}} + 2\\frac{\\widehat{ \\operatorname{Cov}(X, Y)}}{\\widehat{\\mu _X} \\widehat{\\mu _Y}}\\biggr )\\Biggr )\\end{split}$" ], [ "Term (b)", "The correction of term (b), $\\frac{6 \\operatorname{Var}(\\bar{X}_2)}{\\mu _X^4}$ , is analogous to that of term (a).", "Define $R_b =\\frac{\\operatorname{Var}(X)}{\\mu _X^2} \\quad \\text{and} \\quad r_b =\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}.$ Then, using the geometric series expansion, a corrected version of $r_b$ is given by $r_{b}^{*} = \\underbrace{\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}}_{=r_b}\\Biggl (1+ \\frac{4}{(n-1)}\\biggl (\\frac{\\frac{1}{2}\\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X}\\widehat{ \\operatorname{Var}(X)}}-1\\biggr )-\\frac{4}{(n-1)}\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}\\Biggr ).$ The full correction of term (b) is $\\frac{6\\widehat{\\operatorname{Var}(\\bar{X}_2)}}{\\widehat{\\mu _X^4}} = \\frac{12}{n(n-1)}\\frac{\\widehat{\\operatorname{Var}(X)^2}}{\\widehat{\\mu _X^4}} +\\frac{24}{n} \\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}\\Biggl (1+ \\frac{4}{(n-1)}\\biggl (\\frac{\\frac{1}{2}\\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X}\\widehat{ \\operatorname{Var}(X)}}-1-\\frac{\\widehat{\\operatorname{Var}(X)}}{\\widehat{\\mu _X^2}}\\biggr )\\Biggr )$" ], [ "Term (c)", "In this section, we want to find an expression for term (c): $\\frac{4\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2}.$ To this end, we first need a convenient representation of $\\bar{X}_2^2$ in terms of other U-statistics: $\\bar{X}_2^2=\\Biggl (\\frac{1}{n(n-1)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n X_i X_j \\Biggr )^2= \\frac{2}{n (n-1)} U_{\\alpha }+ \\frac{4(n-2)}{n(n-1)}U_{\\beta } +\\frac{(n-2)(n-3)}{n(n-1)}\\bar{X}_4,$ with the U-statistics: $\\begin{split}U_{\\beta }&=\\frac{1}{n(n-1)(n-2)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n X_i^2 X_j X_k\\\\U_{\\alpha }&=\\frac{1}{n(n-1)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n X_i^2 X_j^2\\\\\\bar{X}_4&=\\frac{1}{n(n-1)(n-2)(n-3)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n\\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i\\end{array}}^n X_i X_j X_k X_l.\\end{split}$ Hence, term (c) becomes: $\\frac{4\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2} = \\underbrace{\\frac{8}{n (n-1)} \\frac{\\operatorname{Cov}(U_{\\alpha }, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2}}_{\\text{First term}}+ 
\\underbrace{\\frac{16(n-2)}{n(n-1)}\\frac{\\operatorname{Cov}(U_{\\beta }, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2}}_{\\text{Second term}} +\\underbrace{\\frac{4(n-2)(n-3)}{n(n-1)}\\frac{\\operatorname{Cov}(\\bar{X}_4, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2}}_{\\text{Third term}}$ All the covariances in the above equation are covariances between U-statistics, which are $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ .", "Therefore, the first term, which already has an explicit $\\mathcal {O}\\left(\\frac{1}{n^2}\\right)$ dependence, can be neglected entirely.", "The second term has an explicit $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ factor; combined with the $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ from the covariance, this gives in total an $\\mathcal {O}\\left(\\frac{1}{n^2}\\right)$ dependency.", "Hence, we have to find an estimator for that term but do not have to recurse on it.", "On the last term we do have to recurse; however, we have already derived the recursion in equation (REF ).", "We can rewrite the above equation using the symmetrized U-statistic $U_{\\beta }=\\frac{1}{n(n-1)(n-2)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n \\frac{1}{3} \\Bigl (X_i^2 X_j X_k + X_i X_j^2 X_k + X_i X_j X_k^2\\Bigr ).$ $\\begin{split}\\frac{4\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2)}{\\mu _X^4 \\mu _Y^2}&\\approx \\underbrace{\\frac{32(n-2)}{n(n-1)^2}\\Biggl (\\frac{ \\operatorname{Cov}(X^2, Y)}{\\mu _X^2 \\mu _Y} +\\frac{ 2\\operatorname{Cov}(X, Y)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _X^3 \\mu _Y}\\Biggr )}_{\\text{Second term}}\\\\&\\phantom{asdfasdf}+\\underbrace{\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl (\\frac{8}{n} \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y} + \\frac{12}{n(n-1)} \\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2}\\biggr )}_{\\text{Third term}}\\end{split}$ Taking the recursion of the third term into account, the total correction of term (c) is: $\\begin{split}\\frac{4\\widehat{\\operatorname{Cov}(\\bar{X}_2^2, \\bar{Y}_2)}}{\\widehat{\\mu _X^4} \\widehat{\\mu _Y^2}}&\\approx \\frac{32(n-2)}{n(n-1)^2}\\Biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, Y)}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y}} +\\frac{ 2\\widehat{\\operatorname{Cov}(X, Y)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _X^3} \\widehat{\\mu _Y}}\\Biggr )\\\\&\\phantom{asdf}+\\frac{4(n-2)(n-3)}{n(n-1)}\\biggl (\\frac{8}{n} r_{a}^{*} + \\frac{12}{n(n-1)} \\frac{\\widehat{\\operatorname{Cov}(X, Y)^2}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}}\\biggr )\\end{split}$" ], [ "Term (d)", "The computation of the correction of term (d), $\\frac{4\\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)}{\\mu _X^6}$ , is similar to that of term (c).", "Hence, we only present the resulting correction: $\\begin{split}\\frac{4\\widehat{\\operatorname{Cov}(\\bar{X}_2^2, \\bar{X}_2)}}{\\widehat{\\mu _X^6}}&\\approx \\frac{32(n-2)}{n(n-1)^2}\\Biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X^3}} +\\frac{ 2\\widehat{\\operatorname{Var}(X)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _X^4}}\\Biggr )\\\\&\\phantom{asdfasdf}+\\frac{4(n-2)(n-3)}{n(n-1)}\\Biggl ( \\frac{8}{n} r_{b}^{*} + \\frac{12}{n(n-1)} \\frac{\\widehat{\\operatorname{Var}(X)^2}}{\\widehat{\\mu _X^4}}\\Biggr )\\end{split}$" ], [ "Term (e)", "Term (e) is: $\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}.$ To be able to compute that term, we re-express the numerator in 
terms of several U-statistics: $\\begin{split}\\bar{X}_2^3 &= \\frac{4}{n^2(n-1)^2}U_{I}+\\frac{24(n-2)}{n^2(n-1)^2}U_{II}+\\frac{8(n-2)}{n^2(n-1)^2}U_{III}+\\frac{8(n-2)(n-3)}{n^2(n-1)^2}U_{IV}\\\\&+\\frac{30 (n-2)(n-3)}{n^2(n-1)^2}U_{V}+\\frac{12 (n-2)(n-3)(n-4)}{n^2(n-1)^2}U_{VI}+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\bar{X}_6,\\end{split}$ where $U_{I}:=\\frac{1}{n(n-1)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n X_i^3 X_j^3,$ $U_{II}:=\\frac{1}{n(n-1)(n-2)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n X_i^3 X_j^2 X_k,$ $U_{III}:=\\frac{1}{n(n-1)(n-2)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n X_i^2 X_j^2 X_k^2,$ $U_{IV}:=\\frac{1}{n(n-1)(n-2)(n-3)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n\\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i\\end{array}}^n X_i^3 X_j X_k X_l,$ $U_{V}:=\\frac{1}{n(n-1)(n-2)(n-3)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i\\end{array}}^n X_i^2 X_j^2 X_k X_l,$ $U_{VI}:=\\frac{1}{n(n-1)(n-2)(n-3)(n-4)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n\\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i \\end{array}}^n \\sum _{\\begin{array}{c}p=1\\\\p\\ne l \\\\p \\ne k\\\\p \\ne j \\\\ p \\ne i \\end{array}}^n X_i^2 X_j X_k X_l X_p,$ $\\bar{X}_6:=\\frac{1}{n(n-1)(n-2)(n-3)(n-4)(n-5)}\\sum _{i=1}^n \\sum _{\\begin{array}{c}j=1\\\\j \\ne i\\end{array}}^n \\sum _{\\begin{array}{c}k=1\\\\k \\ne j\\\\k \\ne i\\end{array}}^n\\sum _{\\begin{array}{c}l=1\\\\l \\ne k\\\\l \\ne j \\\\ l \\ne i \\end{array}}^n \\sum _{\\begin{array}{c}p=1\\\\p\\ne l \\\\p \\ne k\\\\p \\ne j \\\\ p \\ne i \\end{array}}^n \\sum _{\\begin{array}{c}q=1\\\\q\\ne p \\\\ q\\ne l \\\\q \\ne k\\\\q \\ne j \\\\ q \\ne i \\end{array}}^n X_i X_j X_k X_l X_p X_q.$ Hence, term (e) can be written as: $\\begin{split}\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2} &= \\underbrace{\\frac{4}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^4}\\right)}\\frac{\\operatorname{Cov}(U_{I}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}+\\underbrace{\\frac{24(n-2)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^3}\\right)}\\frac{\\operatorname{Cov}(U_{II}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}+\\underbrace{\\frac{8(n-2)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^3}\\right)}\\frac{\\operatorname{Cov}(U_{III}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}\\\\&+\\underbrace{\\frac{8(n-2)(n-3)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^2}\\right)}\\frac{\\operatorname{Cov}(U_{IV}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}+\\underbrace{\\frac{30 (n-2)(n-3)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n^2}\\right)}\\frac{\\operatorname{Cov}(U_{V}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}\\\\&+\\underbrace{\\frac{12 (n-2)(n-3)(n-4)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(\\frac{1}{n}\\right)}\\frac{\\operatorname{Cov}(U_{VI}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}+\\underbrace{\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}}_{\\in \\mathcal {O}\\left(1\\right)}\\frac{\\operatorname{Cov}(\\bar{X}_6, \\bar{Y}_2)}{\\mu _X^6 \\mu 
_Y^2}.\\end{split}$ At this point, we can immediately discard the first three terms, as they are at least $ \\mathcal {O}\\left(\\frac{1}{n^3}\\right)$ and so can directly be neglected for a second-order correction.", "In addition, as we are dealing with covariances between U-statistics, they add another $ \\mathcal {O}\\left(\\frac{1}{n}\\right)$ factor.", "Therefore, the fourth and fifth terms are actually $\\mathcal {O}\\left(\\frac{1}{n}\\right) \\mathcal {O}\\left(\\frac{1}{n^2}\\right)=\\mathcal {O}\\left(\\frac{1}{n^3}\\right)$ , so they can be neglected as well.", "Only the last and the second-to-last terms remain: $\\begin{split}\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2} \\approx \\underbrace{\\frac{12 (n-2)(n-3)(n-4)}{n^2(n-1)^2}\\frac{\\operatorname{Cov}(U_{VI}, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}}_{\\text{Sixth term}}+\\underbrace{\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\frac{\\operatorname{Cov}(\\bar{X}_6, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}}_{\\text{Seventh term}}.\\end{split}$ Re-expressing the covariances between U-statistics as covariances between the random variables $X$ and $Y$ (and using the symmetrized version of $U_{VI}$ ), we obtain: $\\begin{split}\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}{\\mu _X^6 \\mu _Y^2}&\\approx \\underbrace{\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3}\\biggl (\\frac{ \\operatorname{Cov}(X^2, Y)}{\\mu _Y \\mu _X^2}+\\frac{4 \\operatorname{Cov}(X, Y)(\\operatorname{Var}(X)+\\mu _X^2)}{\\mu _Y \\mu _X^3} \\biggr )}_{\\text{Sixth term}}\\\\&+\\underbrace{\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y} + \\frac{30}{n(n-1)} \\frac{\\operatorname{Cov}(X, Y)^2}{\\mu _X^2 \\mu _Y^2} \\biggr )}_{\\text{Seventh term}}\\end{split}$ Since the term $\\frac{12}{n} \\frac{\\operatorname{Cov}(X, Y)}{\\mu _X \\mu _Y}$ is in $\\mathcal {O}\\left(\\frac{1}{n}\\right)$ , we have to recurse on it.", "However, we have already derived its correction in equation (REF ).", "Therefore, the total correction of term (e) comes down to: $\\begin{split}\\frac{\\widehat{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{Y}_2)}}{\\widehat{\\mu _X^6} \\widehat{\\mu _Y^2}}&=\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3}\\biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, Y)}}{\\widehat{\\mu _Y} \\widehat{\\mu _X^2}}+\\frac{4 \\widehat{\\operatorname{Cov}(X, Y)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _Y} \\widehat{\\mu _X^3}} \\biggr )\\\\&+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} r_{a}^{*} + \\frac{30}{n(n-1)} \\frac{\\widehat{\\operatorname{Cov}(X, Y)^2}}{\\widehat{\\mu _X^2} \\widehat{\\mu _Y^2}} \\biggr )\\end{split}$" ], [ "Term (f)", "Term (f) is: $\\frac{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2)}{\\mu _X^8}.$ The procedure to obtain its correction is analogous to that of term (e); hence, we only present the result: $\\begin{split}\\frac{\\widehat{\\operatorname{Cov}(\\bar{X}_2^3, \\bar{X}_2)}}{\\widehat{\\mu _X^8} }&=\\frac{24 (n-2)(n-3)(n-4)}{n^2(n-1)^3} \\biggl (\\frac{ \\widehat{\\operatorname{Cov}(X^2, X)}}{\\widehat{\\mu _X^3}}+\\frac{4 \\widehat{\\operatorname{Var}(X)}(\\widehat{\\operatorname{Var}(X)}+\\widehat{\\mu _X^2})}{\\widehat{\\mu _X^4}}\\biggr )\\\\&+\\frac{(n-2)(n-3)(n-4)(n-5)}{n^2(n-1)^2}\\biggl (\\frac{12}{n} r_{b}^{*} + \\frac{30}{n(n-1)} \\frac{\\widehat{\\operatorname{Var}(X)^2}}{\\widehat{\\mu _X^4}} \\biggr )\\end{split}$" ], [ "Top-label calibration", "Following standard practice in related work on calibration, we report the 
$L_1$ $ECE^{bin}$ for top-label (also called confidence) calibration on CIFAR-10/100.", "$ECE^{bin}$ was calculated using 15 bins and an adaptive-width binning scheme, which determines the bin sizes so that an equal number of samples fall into each bin [36], [31].", "A minimal implementation sketch of this adaptive-width estimator is given near the end of the document.", "The 95% confidence intervals for $ECE^{bin}$ are obtained using 100 bootstrap samples as in [25].", "In all experiments with calibration-regularized training, the biased version of $ECE^{KDE}$ was used.", "Table REF summarizes our evaluation of the efficacy of KDE-XE in lowering the calibration error over the baseline XE on CIFAR-10 and CIFAR-100.", "The best-performing $\\lambda $ coefficient for KDE-XE is shown in brackets.", "The results show that KDE-XE consistently reduces the calibration error, without dropping the accuracy.", "Figure REF depicts the $L_2$ $ECE^{bin}$ for several choices of the $\\lambda $ parameter for KDE-XE, using ResNet-110 (SD) on CIFAR-10/100.", "Figure REF shows reliability diagrams with 10 bins for top-label calibration on CIFAR-100 using ResNet and Wide-ResNet.", "Compared to XE, we notice that KDE-XE lowers the overconfident predictions, and obtains better calibration than MMCE ($\\lambda =2$ ) and FL-53 on average, as summarized by the ECE value in the gray box.", "Table: Top-label $L_1$ adaptive-width $ECE^{bin}$ and accuracy for XE and KDE-XE for various architectures on CIFAR-10/100.", "Best ECE values are marked in bold.", "The values in brackets represent the value of the $\\lambda $ parameter.", "Figure: $L_2$ $ECE^{bin}$ for top-label calibration using ResNet (SD).", "Figure: Reliability diagrams for top-label calibration on CIFAR-100 using ResNet (top row) and Wide-ResNet (bottom row) for each of the considered baselines." ], [ "Relationship between $ECE^{bin}$ and $ECE^{KDE}$", "In the following two sections, we investigate further the relationship between $ECE^{bin}$ , as the most widely used metric, and our $ECE^{KDE}$ estimator.", "For the three types of calibration, $ECE^{bin}$ is calculated with an equal-width binning scheme.", "The values for the bandwidth in $ECE^{KDE}$ and the number of bins per class for $ECE^{bin}$ are chosen with a leave-one-out maximum likelihood procedure and Doane's formula [9], respectively.", "Figure REF shows an example of $ECE^{bin}$ in a three-class setting on CIFAR-10.", "The points are mostly concentrated at the edges of the histogram, as can be seen from Figure REF .", "The surface of the corresponding Dirichlet KDE is given in REF .", "Figure REF shows the relationship between $ECE^{bin}$ and $ECE^{KDE}$ .", "The points represent a trained ResNet-56 model on a subset of three classes from CIFAR-10.", "In every row, a different number of points was used to estimate the $ECE^{KDE}$ .", "We notice that the $ECE^{KDE}$ estimates of the three types of calibration closely correspond to their histogram-based approximations.", "Figure: An example of a simplex binned estimator and kernel-density estimator for CIFAR-10.", "Figure: Relationship between $ECE^{bin}$ and $ECE^{KDE}$ for the three types of calibration: canonical (first column), marginal (second column) and top-label (third column).", "In every row, from top to bottom, a different number of points (100, 500, 1000, and all points, respectively) is used to approximate $ECE^{KDE}$.", "Each point represents a ResNet-56 model trained on a subset of three classes from CIFAR-10.", "The number of bins per class (13) is selected using Doane's formula, while 
the bandwidth is selected using a leave-one-out maximum likelihood procedure (typically chosen values are 0.001 for 100 points and 0.0001 otherwise)." ], [ "Bias and convergence rates", "Figure REF shows a comparison of $ECE^{KDE}$ and $ECE^{bin}$ estimated with a varying number of points.", "The ground truth is computed from 3000 test points with $ECE^{KDE}$ .", "The model used is a ResNet-56, trained on a subset of three classes from CIFAR-10.", "The figure shows that the two estimates are comparable and that both perform reasonably well in a three-class setting.", "Figure: $ECE^{KDE}$ estimates and their corresponding binned approximations for the three types of calibration, for a varying number of points used for the estimation.", "The ground truth is calculated from 3000 probability scores of the test set using $ECE^{KDE}$.", "The optimal number of bins and the bandwidth are chosen with Doane's formula and LOO MLE, respectively.", "The typical number of bins chosen is 6-11, and common values for the bandwidth are 0.0001 and 0.001.", "Figure REF shows the absolute difference between the ground truth and the estimated ECE using $ECE^{KDE}$ and $ECE^{bin}$ with a varying number of points.", "The results are averaged over 120 ResNet-56 models trained on a subset of three classes from CIFAR-10.", "Both estimators are biased and have some variance, and the plot shows that the combined effect of the two is of the same order of magnitude for both estimators.", "The empirical convergence rate (the slope of the log-log plot) is given in the legend and is close to the theoretically expected value of $-0.5$.", "We observe that $ECE^{KDE}$ has similar statistical properties in terms of bias and convergence as $ECE^{bin}$ .", "Figure: Absolute difference between ground truth and estimated ECE for a varying number of points used for the estimation.", "The ground truth is calculated using 3000 probability scores of the test set.", "Note that the axes are on a log scale." ] ]
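To make the evaluation protocol above concrete, the following is a minimal, illustrative sketch of the top-label $ECE^{bin}$ with adaptive-width (equal-mass) binning and a bootstrap confidence interval. It is not the authors' code: the function names, the quantile-based construction of the bin edges, and the default parameters are our own assumptions.

```python
import numpy as np

def ece_bin_toplabel(probs, labels, n_bins=15, p=1):
    """L_p binned ECE for top-label (confidence) calibration.

    probs: (N, K) softmax outputs; labels: (N,) integer class labels.
    Bins are adaptive: edges are confidence quantiles, so roughly an
    equal number of samples falls into each bin.
    """
    conf = probs.max(axis=1)                              # top-label confidence
    acc = (probs.argmax(axis=1) == labels).astype(float)  # per-sample correctness
    edges = np.unique(np.quantile(conf, np.linspace(0.0, 1.0, n_bins + 1)))
    idx = np.clip(np.searchsorted(edges, conf, side="right") - 1, 0, len(edges) - 2)
    total = 0.0
    for b in range(len(edges) - 1):
        m = idx == b
        if m.any():
            total += m.mean() * abs(acc[m].mean() - conf[m].mean()) ** p
    return total ** (1.0 / p)

def bootstrap_ci(probs, labels, n_boot=100, alpha=0.05, seed=0):
    """95% bootstrap confidence interval for the binned ECE (as in the text)."""
    rng = np.random.default_rng(seed)
    n = len(labels)
    stats = []
    for _ in range(n_boot):
        s = rng.integers(0, n, size=n)
        stats.append(ece_bin_toplabel(probs[s], labels[s]))
    return np.quantile(stats, [alpha / 2, 1 - alpha / 2])
```

With `p=1` this corresponds to the $L_1$ variant reported in the tables, and with `p=2` to the $L_2$ variant shown in the figures.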
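As a companion to the de-biasing derivations earlier in this document, the following Monte Carlo sketch numerically checks the first-order corrected estimator $r_b^{*}$ of $R_b = \operatorname{Var}(X)/\mu_X^2$ against the naive ratio of unbiased estimators, using the $r_b^{*}$ expression exactly as stated above. The distribution, sample size, and trial count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 3.0, 1.0, 20, 100_000
R_b = sigma**2 / mu**2                          # true value of Var(X)/mu_X^2

naive, corrected = [], []
for _ in range(trials):
    x = rng.normal(mu, sigma, n)
    var_hat = x.var(ddof=1)                     # unbiased estimator of Var(X)
    s, s2 = x.sum(), np.sum(x**2)
    mu2_hat = (s * s - s2) / (n * (n - 1))      # unbiased mu_X^2 (second-order U-statistic)
    mu_hat = x.mean()
    cov_x2x = np.cov(x**2, x)[0, 1]             # unbiased sample Cov(X^2, X)
    r_b = var_hat / mu2_hat                     # naive ratio of unbiased estimators
    # first-order correction r_b^* from the formula above
    r_b_star = r_b * (1 + 4.0 / (n - 1) * (0.5 * cov_x2x / (mu_hat * var_hat) - 1)
                        - 4.0 / (n - 1) * var_hat / mu2_hat)
    naive.append(r_b)
    corrected.append(r_b_star)

print(f"true R_b       : {R_b:.5f}")
print(f"naive mean     : {np.mean(naive):.5f}")      # typically biased for small n
print(f"corrected mean : {np.mean(corrected):.5f}")  # should lie closer to R_b
```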
2210.07810
[ [ "A Recommendation Approach based on Similarity-Popularity Models of\n Complex Networks" ], [ "Abstract Recommender systems have become an essential tool for providers and users of online services and goods, especially with the increased use of the Internet to access information and purchase products and services.", "This work proposes a novel recommendation method based on complex networks generated by a similarity-popularity model to predict ones.", "We first construct a model of a network having users and items as nodes from observed ratings and then use it to predict unseen ratings.", "The prospect of producing accurate rating predictions using a similarity-popularity model with hidden metric spaces and dot-product similarity is explored.", "The proposed approach is implemented and experimentally compared against baseline and state-of-the-art recommendation methods on 21 datasets from various domains.", "The experimental results demonstrate that the proposed method produces accurate predictions and outperforms existing methods.", "We also show that the proposed approach produces superior results in low dimensions, proving its effectiveness for data visualization and exploration." ], [ "Introduction", "A recommendation system (RS) is a software system that gathers users' actions and information as inputs to predict their future behavior by utilizing various techniques and then advises them with item recommendations that best fit their preferences [1].", "Items may refer to any product or service across various domains, including online dating, travel destinations, books, movies, and news.", "Recommendation systems aim to lessen the information overload problem and reduce the enormous number of choices a user faces in online systems by highlighting only the most pertinent items [2].", "Depending on the data used to generate recommendations, recommender systems can be broadly classified into two categories: collaborative filtering and content-based recommendation systems.", "In collaborative filtering (CF), the recommendations are based on the similarities between user preferences.", "The premise is that agreement in taste between users is sustained over extended periods, and past data can be used to predict future preferences [3].", "In content-based recommendation, on the other hand, the system recommends items with similar characteristics to items the user has positively rated.", "In other words, the recommendations are determined solely by items' features, properties, or descriptions.", "Networks or graphs are powerful data representations naturally occurring in various real-world settings.", "Many applications have adopted graphs for their usefulness in providing insights into the intrinsic structure of the relationships within data [4], [5].", "Network-based or graph-based RS models, where users and items are represented as nodes connected by weighted links that represent their similarity or relevancy, can achieve high accuracy while effectively handling data sparsity and limited coverage issues.", "Recently, many network-based approaches have been introduced into the field of RS, including networks based on physics theories such as complex networks [6], [7], [8].", "Using a complex network to guide the recommendation process is a promising direction of research within the field [9].", "However, most of the work in this direction uses ad-hoc networks with no underlying model or established properties.", "In this work, we propose a novel recommendation method that uses a complex 
network generated by a similarity-popularity model to predict ratings.", "Users and items are represented as nodes in a graph possessing a complex network structure with an underlying hidden metric space model controlling the connection between users and items.", "We estimate the positions of users and items in a multidimensional Euclidean space from the observed user-items ratings.", "The coordinates of a user represent the user's preferences, and similarly, for items, the coordinates of an item represent its characteristics.", "The predicted ratings are then determined using the connection probabilities computed based on the similarity between nodes and their popularity.", "Our main contributions are summarized as follows: We develop a novel similarity-popularity recommendation method that takes advantage of the latest developments in the field of complex networks.", "We adopt a complex network model, namely a similarity-popularity model, to direct the recommendation process.", "First, the model parameters are estimated from observed user-item ratings.", "The learned model is then used to predict unknown ratings while providing the ability to visualize and justify the results.", "We conduct extensive experiments on several public datasets from various domains to evaluate the proposed method's effectiveness.", "The experimental results show that our approach produces accurate predictions and outperforms state-of-the-art methods.", "The rest of this paper is organized as follows.", "Section presents background on complex networks and their models and reviews the related work on network-based RS methods.", "Section introduces the proposed method.", "In Section , we conduct a detailed experimental analysis to evaluate the performance of the proposed method.", "Finally, we conclude the paper and highlight future research directions in Section ." ], [ "Background and Related Work", "This section covers essential concepts from the field of complex networks, with emphasis on network models, in particular similarity-popularity models.", "We then give an overview of recent related research in the field of recommender systems." 
], [ "Complex networks", "The interest in studying complex networks arises from the fact that most real networks are naturally structured as complex networks.", "The nontrivial topological features of complex networks can be seen in virtually all types of networks, such as social, technological, and biological networks.", "For example, in real-life social networks, the interaction between people can be represented as a complex network.", "The nodes in this graph are the people, and the edges are the relationships among them [10].", "Another natural network that adheres to the structure of complex networks is that of academic citations among papers, where nodes are the papers, and a directed edge indicates a citation [11].", "Complex networks also include technological networks that serve or deliver resources, such as the electric grid network or the network of roads and airports.", "One of the most studied examples of this type of network is the Internet which exhibits the structure of a scale-free network [9], [12].", "Food webs are also one of the most studied types of complex networks.", "In a food web, species are nodes, and directed edges between species are created when one species preys on the other [13].", "Recent studies show that trust networks [14], [15], [16] in collaborative filtering systems also possess complex network characteristics, specifically those of scale-free networks [17], [18].", "Initially, complex networks have been studied as part of the mathematical graph theory field that traditionally focused on regular graphs.", "In 1959 Paul Erdős and Alfréd Rényi proposed a random network model [19] structured as a simple, straightforward design of a large-scale graph with no apparent design principles.", "Scientists were influenced afterward by their work and treated all real-life networks as if they were random [20], [21].", "However, later studies showed that most networks are, in fact, more complex than random.", "For this, the field of network science witnessed a rebirth when Watts and Strogatz proposed small-world networks in 1998 [22], followed by scale-free networks one year later by Barabási and Albert [23].", "Since then, complex networks have been considered graphs with nontrivial topological features that often occur in natural structures such as social or biological networks.", "Networks with characteristics that are not purely random or regular are considered complex.", "Figure REF shows some examples of the mentioned models.", "This section will discuss scale-free complex network models and their properties.", "Figure: Illustrations of various complex network models (a) A random model.", "(b) A small-world model with short paths.", "(c) A scale-free model with short paths and highly connected nodes or hubs.", "The node's size and color reflect its degree.Scale-free networks gained considerable interest within the network science community upon publishing the Barabási-Albert model [23], which generates networks with scale-free properties.", "It is now considered one of the models that most reflect real-world natural network properties.", "The Barabási-Albert model focuses on two main features that the previous model lacked, the preferential attachment property and dynamic growth of the network.", "It also shares certain characteristics with the Erdős-Rényi random network model [19] and Watts-Strogatz small-world model [22].", "These include the small-world property and the tendency to cluster, which do occur in the Watts-Strogatz model but not in simple 
networks such as the Erdős-Rényi model.", "Real-world networks contain highly connected nodes, or hubs, that have a much higher degree than the average degree of the network.", "The preferential attachment property states that new nodes are more likely to connect with these hubs, which describes the \"rich get richer\" phenomenon observed in the real world [20].", "Hub nodes do not appear in random or small-world models; their existence affects the degree distribution and changes it from a Poisson distribution in random and small-world networks to a power-law distribution with a long tail." ], [ "Similarity-popularity models", "The preferential attachment principle relies solely on nodes' popularity, represented by their degree, to construct the network.", "Similarity-popularity models are a class of complex network models extending this principle to include similarity between nodes as a factor contributing to link formation.", "The hidden metric space model [24] is an example of a similarity-popularity model in which similarity between nodes is encoded using a hidden metric space underlying the network.", "The one-dimensional ring or circle is the simplest suggested model to construct such a space [25].", "This technique generates a hidden metric space for any scale-free network by first positioning nodes uniformly randomly along the ring.", "Each node is then assigned its hidden degree $\\kappa $ following a power-law distribution $P(\\kappa )\\sim \\kappa ^{-\\gamma }$ , where $\\gamma > 2$ .", "Finally, each pair of nodes is connected with probability $r(d; \\kappa ,\\acute{\\kappa })$ : $r(d;\\kappa ,\\acute{\\kappa })=r(d/d_c)=(1+d/d_c )^{-\\alpha },$ where $\\alpha >1$ , $d$ is the distance between the two nodes, and $d_c\\sim \\kappa \\acute{\\kappa }$ is the characteristic distance of the two nodes, with $\\kappa ,\\acute{\\kappa }$ being their hidden degrees.", "According to this formula, the likelihood of a connection between any two nodes increases when the hidden degree $\\kappa $ grows and decreases as their distance $d$ increases.", "The generated network possesses three characteristics observed in complex real-life networks: A pair of nodes with large degrees is more likely to become connected even if the distance $d$ is large.", "A pair of nodes with only one node having a high degree is connected if the distance $d$ is moderate.", "Finally, a pair of unpopular nodes with low degrees is connected if and only if the distance $d$ between them is small.", "A minimal code sketch of this generation procedure is given at the end of this section.", "Closely related to the hidden metric space model is the popularity similarity optimization model (PSO) [26], which combines scale-free properties with hidden metric space properties to generate networks in a hyperbolic space where angular and radial coordinates represent nodes' positions.", "In PSO, the probability of connecting two nodes does not rely exclusively on popularity but on optimizing a balance between node popularity, abstracted by their radial coordinate, and similarity, abstracted by the distance between their angular coordinates.", "As a result, PSO generates networks that possess a scale-free structure with degrees following a power-law distribution and present many properties of real-world networks.", "Similarity-popularity models have been successfully applied to various tasks, including network modeling [27], information routing [25], and link prediction [28], [29], [30].", "Using networks to guide the recommendation process has attracted increasing interest within the research community.", "The earliest work 
done in this direction is presented in [31], where users are represented by nodes in a directed graph and connected according to their similarity.", "Later on, many recommendation approaches adopted graph representations at their core [32], either to model users and items as a bipartite graph, where users and items lie in separate groups with edges representing ratings, or as a single network where both users and items co-exist in the same space.", "More recently, the authors in [33] proposed a graph-based recommendation method by constructing a connected undirected graph to model their system.", "Their method (GraphRec) aims to find accurate yet novel items.", "In their work, only items are represented as nodes in the graph, and links between them indicate positive correlations.", "After constructing the graph, observed user ratings are used to filter the items.", "Positively correlated items ensure the relevancy of an item to the user preference, hence providing accurate recommendations.", "This idea is similar to our work.", "However, in our case, we position both items and users within the graph using a similarity-popularity model.", "The authors in [34] proposed a collaborative filtering method that embeds both users and items in a Euclidean space, where the distance between nodes is inversely proportional to their ratings.", "Thus, the distance between users and items is shorter for items rated highly.", "In their work, the training phase begins by constructing a model that finds the ideal location of each item and user in a high-dimensional Euclidean space based on observed ratings.", "Recommending items to a user is then done by searching for the nearest items in the space.", "The accuracy of their method was validated by experiments and showed improved results.", "Even though their method and the proposed method in our paper share similar fundamentals, the distances between items and users are not directly taken from the observed ratings in our case.", "Instead, we use an underlying model with additional factors to construct the graph.", "As a result, the generated network resembles real-world networks, including the existence of hubs and a power-law degree distribution.", "In another work [8], focusing on applying analytical techniques used in complex networks to implement graph-based CF, the authors construct a bipartite graph to represent items and users separately.", "The distances between nodes are calculated using cosine similarity.", "Once the graph is constructed, items are recommended to users following the preferential attachment principle, whereby popular items are assumed to be more likely to be chosen by a user.", "Even though the intuition behind recommending popular items can be accepted as a solution to overcome the cold-start issue when user preferences are unknown, this approach has the obvious limitation of recommending popular items at the expense of relevant ones.", "Other related works that focus on using networks include guiding the recommendation process by random graph modeling [6], which confirms the feasibility of modeling user-item interactions as random graphs.", "However, the characteristics of real-life networks do not match those of purely random graphs, since real-world user-item interactions are more complex and contain a higher tendency to form clusters.", "Hence, it would be more beneficial to use a more realistic underlying model to better reflect the connections between users and items.", "Several approaches represent trust relationships as a graph, where links between 
users indicate their trust instead of their similarity.", "In trust-based networks, users can trust each other either explicitly [35] or implicitly [36] through their ratings or other observed actions.", "Once the trust between users has been established, it can be used to guide the recommendation process as in similarity-based CF.", "In [35], the authors propose a recommendation system that takes advantage of trust between users in an online social network.", "They obtained favorable results by proposing a model that uses the predefined relationships between users in the network to reach and filter information and knowledge.", "Their model consists of a bipartite graph of users and object nodes representing recommended items, and the weights of the connections between users and objects are the ratings.", "In our case, instead of using a predefined neighborhood produced from the friendship network, the network is constructed based on previous ratings, similarities, and popularities of items without an additional source of information.", "The experimental results obtained by the authors in [14] confirm the viability of using a scale-free network for recommendation, as they experimentally show that trust networks possess the small-world property, in which the network diameter is small compared to the number of nodes.", "They use this property to estimate the trust between users based on the distance that separates them in the network.", "The computed trust value is then used to weight the recommendations obtained from different users.", "The trust network approach also overcomes the cold-start and data sparsity issues.", "The authors in [36] argue that most e-commerce RS users do not have explicitly trusted users, either because the functionality to establish trust between users explicitly is unavailable or due to cold-start and data sparsity issues.", "Thus, they proposed constructing a weighted directed trust network for each user using both explicit and implicit trust.", "The network consists of nodes representing users' neighbors, and only those are used to compute the recommendations.", "Using trust in recommender systems increases the resistance to malicious attacks.", "However, the trust network's primary purpose is to overcome the cold-start issue.", "New users can find trusted users even if the system cannot find similar users, as their preferences are still unknown.", "However, the main drawback of trust-based recommendation is the need for additional information that is neither always present nor easy to elicit from data.", "In this paper, we build a network from ratings only, which broadens the applicability of our proposed approach." 
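To make the similarity-popularity mechanism described above concrete, the following is a minimal sketch that generates a network on the one-dimensional ring with the connection probability $r(d;\kappa,\acute{\kappa}) = (1 + d/d_c)^{-\alpha}$ and $d_c \propto \kappa\acute{\kappa}$. It is not the construction of any cited paper: the sampling of hidden degrees via inverse-CDF, the unit proportionality constant for $d_c$, the rescaling of arc lengths, and all names are illustrative assumptions.

```python
import numpy as np

def hidden_metric_network(n=500, gamma=2.5, alpha=2.0, kappa0=1.0, seed=0):
    """Generate a similarity-popularity network on a one-dimensional ring.

    Nodes get uniform angular positions and power-law hidden degrees
    P(kappa) ~ kappa^(-gamma); each pair connects with probability
    (1 + d/d_c)^(-alpha), where d_c is taken proportional to kappa*kappa'.
    """
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)        # positions on the ring
    # inverse-CDF sampling of Pareto hidden degrees, kappa >= kappa0
    kappa = kappa0 * (1.0 - rng.uniform(size=n)) ** (-1.0 / (gamma - 1.0))
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dtheta = abs(theta[i] - theta[j])
            # arc-length distance, rescaled so node density is ~1 per unit
            d = min(dtheta, 2.0 * np.pi - dtheta) * n / (2.0 * np.pi)
            d_c = kappa[i] * kappa[j]               # characteristic distance
            if rng.uniform() < (1.0 + d / d_c) ** (-alpha):
                edges.append((i, j))
    return theta, kappa, edges
```

Small distances and large hidden degrees both raise the connection probability, which is exactly the trade-off the three characteristics listed above describe.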
], [ "The Proposed Method", "This paper proposes a recommendation approach based on a similarity-popularity model, where users and items are modeled as nodes in a graph having a scale-free network structure.", "Before delving into the details of the approach, however, we start by introducing notation.", "We consider a recommendation system consisting of $n$ users and $m$ items and denote by $r_{ij}$ the rating given by user $i$ to the item $j$ and by $R$ the set of all ratings.", "We denote by $\\bar{r}_i$ the average rating for user $i$ and by $\\bar{r}_j$ the average rating for item $j$ .", "The minimum and maximum ratings are denoted by $r_{min}$ and $r_{max}$ , respectively.", "We will often need to transform the ratings into probabilities.", "For this, we define the set $\\tilde{R}$ of scaled ratings $\\tilde{r}_{ij}$ obtained from raw ratings by applying the function $\\phi $ defined as: $\\tilde{r}_{ij} \\equiv \\phi (r_{ij}) = \\dfrac{r_{ij} - r_{min}}{r_{max} - r_{min}} \\left(p_{max}-p_{min} \\right) + p_{min},$ where $p_{min}$ and $p_{max}$ are the minimum and maximum probabilities, respectively.", "The function $\\phi $ maps the raw ratings to the closed interval $[p_{min}, p_{max}]$ .", "To avoid boundary issues, we exclude the two extremities 0 and 1 when selecting $p_{min}$ and $p_{max}$ so that we have $0< p_{min}, p_{max} <1$ .", "We also define the translation $\\varphi $ as follows: $\\varphi (r) = r - r_{min}+1.$ In the proposed approach, each user $i$ is assigned a point $x^i$ in a $D$ -dimensional Euclidean space: $x^i=\\left(x^i_1, x^i_2, \\ldots , x^i_D\\right)^T , i = 1,\\ldots , n.$ The users' coordinates reflect their preferences concerning the nature of the recommended items.", "Hence, each hidden dimension represents a hidden feature relevant to the recommendation or a combination of several such features.", "For example, in a movie recommendation system, relevant features may be the length of the movie the user usually watches or how much action, drama, or comedy the user prefers.", "Likewise, each item is assigned a position $y^j$ in the same $D$ -dimensional Euclidean space.", "$y^j=\\left(y^j_1, y^j_2, \\ldots , y^j_D\\right)^T , j = 1,\\ldots , m.$ Likewise, the position of an item represents its description or list of features.", "The rating $r_{ij}$ given by user $i$ to item $j$ is assumed to be proportional to the probability of connection between the two corresponding nodes, that is, $r_{ij} \\propto {p}_{ij}$ , where ${p}_{ij}$ is prescribed by a similarity-popularity model.", "The first model we propose is a similarity-popularity model based on a hidden metric space model inspired by [24] and will henceforth be referred to as SPHM1.", "The connection probability under SPHM1 is defined as: ${p}^{SPHM1}_{ij}={\\left(1+\\dfrac{d^2(x^i,y^j)}{\\sqrt{\\kappa _i^{SPHM1}\\kappa _j^{SPHM1}}}\\right)}^{-\\alpha },$ where $\\kappa _i^{SPHM1}$ and $\\kappa _j^{SPHM1}$ are the expected degrees of $i$ and $j$ respectively, and $d(x^i,y^j)$ is the hidden Euclidean distance between them.", "Hence, the connection probability decreases with the squared hidden distance $d^2(x^i,y^j)$ (dissimilarity) and increases with the popularity $\\kappa _i^{SPHM1}$ and $\\kappa _j^{SPHM1}$ .", "The rationale is that users tend to favor items that fit their preferences, that is, items that are close to them in the Euclidean space.", "However, items with a high degree can be liked by users even if they are considerably different from what they usually prefer.", "Also, some items 
having a small or average degree may be liked by faraway users having a large degree.", "The parameter $\\alpha $ controls the network clustering and the effect of the distance and popularity on the connection probability.", "As illustrated in Figure REF , larger values of $\\alpha $ cause a faster decay in probability as the distance grows and a slower increase in probability as popularity increases.", "Consequently, with everything else fixed, low values of $\\alpha $ result in denser graphs.", "In contrast, larger values of $\\alpha $ result in sparser, more clustered graphs with mostly short-range connections and long-range connections only occurring between highly popular nodes.", "Figure: Effect of the parameter $\\alpha $ on the network topology.", "The two line-plots show the evolution of the connection probability $p_{ij}$ as a function of the squared distance with fixed popularity (upper left) and popularity with fixed distance (bottom left) for different values of $\\alpha $ .", "The graph plots show one sample network resulting from each value of $\\alpha $ .The second similarity-popularity model (SPHM2) is deduced from the first model by defaulting the value of $\\alpha $ to 1, which results in a simpler model that requires less effort for parameter tuning and validation: $p^{SPHM2}_{ij}=\\left(1+{\\dfrac{d^2(x^i,y^j)}{\\sqrt{\\kappa _i^{SPHM2}\\kappa _j^{SPHM2}}}}\\right)^{-1}.$ The third model uses the dot product to encode the similarity between nodes instead of the Euclidean distance.", "This results in a gradient that is easier to compute and thus faster optimization: $p^{SPDP}_{ij}= \\sqrt{\\kappa _i^{SPDP}\\kappa _j^{SPDP}} \\exp \\left({x^i \\cdot y^j}\\right).$ Unlike the two previous models, $p^{SPDP}$ is not restricted to the interval $[0, 1]$ and hence does not qualify as a valid connection probability.", "We can instead think about it as a score assigned to node pairs, where higher scores indicate a higher likelihood of connection, which is a common practice in the link prediction literature [37].", "As illustrated in Figure REF , users and items in the proposed approach co-exist in a graph embedded in a $D$ -dimensional space.", "They are connected based on two factors: their similarity (or distance) and their popularity (or degree).", "For instance, item $j_1$ is popular, which increases its probability of connecting to distant dissimilar users, especially if they have large degrees, such as user $i_1$ .", "For the same reason, the latter is highly likely to connect to many items.", "On the other hand, low-degree items such as $j_2$ have a lower probability of connecting to users.", "Thus, it will not be able to connect to nearby users with a low degree such as $i_2$ .", "This phenomenon can be observed in real life, where some movies, for instance, have a broad audience and are enjoyed by people who do not usually appreciate that type of movie.", "Similarly, some moviegoers have a taste for a wide range of movie genres and styles.", "Figure: The proposed approach models the system as a complex network where users and items connect according to similarity and popularity.", "The dissimilarity between nodes is encoded by the distance separating them, whereas popularity is an intrinsic property of the node represented here by its size.", "Figure REF also hints at the possibility of using the model for visualization, which requires training the model in two or three dimensions.", "Such a low dimensionality, however, can negatively impact recommendation accuracy.", "One can 
overcome this issue by training the model in a high-dimensional space, then picking the desired neighborhood of users and items and their distances, and finally, rebuilding the space in two or three dimensions using any graph embedding technique.", "To learn the proposed models from data, we first set the users' and items' degrees.", "For the first two models, SPHM1 and SPHM2, we set the degrees as follows: $\\begin{split}\\kappa _i^{SPHM1} = \\kappa _i^{SPHM2} &= \\varphi (\\bar{r}_i), \\quad i =1,\\ldots , n\\\\\\kappa _j^{SPHM1} = \\kappa _j^{SPHM2} &= \\varphi (\\bar{r}_j), \\quad j =1,\\ldots , m.\\end{split}$ This step guarantees that all degrees are greater than or equal to one.", "For the SPDP model, we transform the average ratings using the function $\\phi $ instead: $\\begin{split}\\kappa _i^{SPDP} &= \\phi (\\bar{r}_i), \\quad i =1,\\ldots , n\\\\\\kappa _j^{SPDP} &= \\phi (\\bar{r}_j), \\quad j =1,\\ldots , m.\\end{split}$ Once the node degrees are fixed, computing the distances between nodes amounts to finding the coordinates of each user and each item so that the predicted ratings are as similar as possible to the actual ratings.", "This problem can be formulated as that of minimizing the following objective function: $J_{L_2}(x^1, \\ldots , x^n, y^1, \\ldots , y^m) = \\\\ \\sum _{\\tilde{r}_{ij}\\in \\tilde{R} } \\left({p}_{ij} - \\tilde{r}_{ij}\\right)^2 + \\lambda \\left( \\sum _{i=1}^{n} \\Vert {x^i}\\Vert ^2_2 + \\sum _{j=1}^{m} \\Vert {y^j}\\Vert ^2_2\\right),$ where ${p}_{ij}$ is the predicted probability or score computed by one of the proposed models (Eq.", "(REF ), (REF ) or (REF )), $\\lambda $ is the regularization coefficient, and $\\Vert \\cdot \\Vert _2$ stands for the $L_2$ norm.", "The hyper-parameter $\\lambda $ is chosen from a pre-defined set of values during the validation phase.", "Instead of using the squared error and $L_2$ regularization, it is possible to use the absolute error with $L_1$ regularization: $J_{L_1}(x^1, \\ldots , x^n, y^1, \\ldots , y^m) = \\\\ \\sum _{\\tilde{r}_{ij}\\in \\tilde{R} } \\left| {p}_{ij} - \\tilde{r}_{ij} \\right| + \\lambda \\left( \\sum _{i=1}^{n} \\Vert {x^i}\\Vert _1 + \\sum _{j=1}^{m} \\Vert {y^j}\\Vert _1\\right).$ The problem of minimizing the objective functions (REF ) and (REF ) falls into the category of nonlinear optimization problems.", "Since the objective function is not convex, only local minima can be computed in general.", "Several algorithms for local optimization exist that can be used to solve this minimization problem, such as simple gradient descent or conjugate gradient.", "For completeness, we include the gradients of all combinations of the two cost functions $J_{L_2}$ and $J_{L_1}$ with the three proposed models in Table REF .", "Table: Gradients of all combinations of cost functions and proposed models.", "The function $\\operatorname{sign}$ returns 1 if its argument is positive, -1 if negative, and 0 if zero.", "In summary, the proposed models' parameters fall into three categories.", "The dimension $D$ , the constant $\\alpha $ , and the regularization coefficient $\\lambda $ are hyper-parameters determined during the validation phase using a simple grid search.", "The users' and items' degrees $\\kappa _i$ , $\\kappa _j$ are computed directly from the observed ratings.", "Finally, the positions of users and items, $x^i$ , $y^j$ , are found by optimizing the objective functions $J_{L_2}$ and $J_{L_1}$ .", "The last step represents the bulk of the computational effort required to build the models.", "A minimal code sketch of the connection models and of a gradient-based training step is given at the end of this section." 
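The following is a minimal NumPy sketch of the three connection models and of a plain gradient-descent minimization of $J_{L_2}$ for SPHM2. It is not the paper's C++/CG_DESCENT implementation: the learning rate, initialization scale, epoch count, and function names are illustrative assumptions, and the gradient is obtained by differentiating $(p_{ij} - \tilde{r}_{ij})^2$ with respect to $x^i$.

```python
import numpy as np

def p_sphm1(x, y, ki, kj, alpha):
    """SPHM1 connection probability: (1 + d^2 / sqrt(ki*kj))^(-alpha)."""
    d2 = np.sum((x - y) ** 2)
    return (1.0 + d2 / np.sqrt(ki * kj)) ** (-alpha)

def p_sphm2(x, y, ki, kj):
    """SPHM2 is SPHM1 with alpha fixed to 1."""
    return p_sphm1(x, y, ki, kj, alpha=1.0)

def score_spdp(x, y, ki, kj):
    """SPDP similarity score (a score, not a probability)."""
    return np.sqrt(ki * kj) * np.exp(x @ y)

def train_sphm2(ratings, ku, kv, D=10, lam=0.01, lr=0.05, epochs=200, seed=0):
    """Minimize the L2 objective for SPHM2 with plain gradient descent.

    ratings : list of (i, j, r) with r already scaled to (0, 1) via phi
    ku, kv  : user and item degrees precomputed from average ratings
    """
    rng = np.random.default_rng(seed)
    X = 0.1 * rng.standard_normal((len(ku), D))   # user positions
    Y = 0.1 * rng.standard_normal((len(kv), D))   # item positions
    for _ in range(epochs):
        gX = 2.0 * lam * X                        # gradient of the L2 penalty
        gY = 2.0 * lam * Y
        for i, j, r in ratings:
            s = np.sqrt(ku[i] * kv[j])
            diff = X[i] - Y[j]
            p = 1.0 / (1.0 + diff @ diff / s)
            # d/dx (p - r)^2 = 2 (p - r) dp/dx, with dp/dx = -2 p^2 diff / s
            g = -4.0 * (p - r) * p * p * diff / s
            gX[i] += g
            gY[j] -= g                            # dp/dy has the opposite sign
        X -= lr * gX
        Y -= lr * gY
    return X, Y
```

A prediction for an unseen pair is then simply `p_sphm2(X[i], Y[j], ku[i], kv[j])`, mapped back to the rating scale by inverting $\phi$.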
], [ "Experimental Evaluation", "In this section, we report the results of an extensive experimental analysis of the proposed method.", "We start by presenting the data used for the evaluation, the performance criteria, the competing methods, and the experimental setup.", "We then present and discuss the obtained results." ], [ "Data", "The proposed method is compared to baseline and state-of-the-art methods found in the literature on 21 publicly available datasets.", "The datasets are of various domains and sizes and are widely used in the literature to evaluate recommender systems.", "Datasets were pre-processed to retain only users having at least five ratings.", "Table REF shows the datasets' names, description, and basic statistics.", "Table: Datasets description and statistics.The experiments use 5-fold cross-validation to maximize the results' accuracy.", "The data is randomly shuffled and split into five equal batches.", "Each batch is used once as a test set while the rest is used for training, resulting in 80%-20% splits for training and testing, respectively.", "The training set is further divided into a (proper) training set containing 90% of the initial training samples and a validation test containing the remaining 10% of the samples.", "The validation set is used to tune the model hyperparameters using grid search.", "Once the best parameters are found, the model is retrained on the whole training set before testing.", "Finally, the average performance over all five folds is reported." ], [ "Evaluation criteria", "Several metrics have been proposed for evaluating recommendation systems.", "These metrics measure the accuracy of the predicted ratings provided by the recommendation method by comparing them to ground truth ratings.", "The model is used to predict a previously unseen rating by generating an estimate of it denoted by $\\hat{r}_{ij}$ , whereas $r_{ij}$ denotes the actual rating from the ground truth data.", "The most used rating prediction accuracy metrics are the Root Mean Squared Error (RMSE) and the Mean Absolute Error (MAE) [47].", "These metrics allow the evaluation of the quality of numeric predictions.", "The MAE takes the sum of differences between the actual and predicted ratings and divides it by the number of ratings considered: $MAE = \\frac{1}{|R^{Test}|} \\sum _{r_{ij} \\in R^{Test}} | \\hat{r}_{ij}-r_{ij} |$ where $R^{Test}$ is the test set.", "Unlike MAE, the RMSE emphasizes more significant errors by first squaring each error value, averaging the squared errors, then taking the square root of the average: $RMSE = \\sqrt{\\frac{1}{|R^{Test}|} \\sum _{r_{ij} \\in R^{Test}} \\left( \\hat{r}_{ij}-r_{ij} \\right)^2}.$ To aggregate the performance over all datasets, we first rank the methods according to their performance on each dataset with the convention that lower ranks are better then report the mean rank over all datasets." 
], [ "Competing methods", "The proposed method is experimentally compared against classical baselines as well as state-of-the-art algorithms to assess its performance.", "The following methods have been used in this experiment: ItemKNN: An item-based CF method that computes the average rating of the $k$ most similar neighbors to the target item.", "The similarity between the items is based on Pearson correlation Coefficient.", "Since it serves as a baseline, this method was used un-tuned with a constant number of neighbors, $k = 25$ .", "SVD++: An efficient algorithm that extends the classical SVD model by including implicit feedback and mixing the strengths of latent factor models and neighborhood models [48].", "PMF: An efficient matrix factorization model that factorizes the explicit user-item rating matrix as a product of two lower-rank users and items matrices to be used for predicting ratings [49].", "BiasedMF: An efficient biased matrix factorization model that extends factorizing explicit user-item rating matrices by introducing users and item biases [50]." ], [ "Implementation", "All the variants of the proposed method are implemented using C++.", "Since an efficient and effective solver is essential for solving the optimization problems resulting from our models, we use CG_DESCENT [51], which is a stable and well-tested code that implements a nonlinear conjugate gradient with a guaranteed descent method for unconstrained optimization [52].", "To iteratively proceed towards the solution, the CG_DESCENT solver requires, at each iteration, the value of the objective function and its gradient.", "For each of the proposed models, we implemented and tested two variations of the objective function, the $L_1$ and $L_2$ versions, as described in Section .", "For the competing methods, a Java code on top of LibRec Java recommender system library [53] was used." ], [ "Parameter settings and tuning", "The experiment is conducted on 21 datasets.", "For each dataset, an extensive number of experiments were conducted to tune the hyper-parameters using grid search on a validation set.", "The dimensions $D$ for all proposed similarity-popularity models and factors $f$ in PMF, BiasedMF and SVD++ are selected from the set $\\lbrace 5, 10, 20\\rbrace $ .", "For the proposed similarity-popularity models, PMF, and BiasedMF, we tried the following combination of regularization $\\lambda \\in \\lbrace 0.1, 0.01\\rbrace $ .", "For PMF and BiasedMF we tried learning rate $\\gamma \\in \\lbrace 0.1, 0.01\\rbrace $ .", "For SPHM1, we included $\\alpha \\in \\lbrace 2, 3, 4, ..., 8, 9\\rbrace $ in the grid search.", "For simplicity, we fixed $k$ in ItemKNN method to $k = 25$ .", "All other parameters are kept to LibRec default values." 
], [ "Experimental results", "In the first experiment, we compare the performance of each variant of the proposed method.", "Namely, we compare the three proposed models, SPHM1, SPHM2, and SPDP, with the $L_1$ and $L_2$ objectives giving a total of six variants.", "Table REF reports the performance results in terms of RMSE and MAE of all six models on all datasets.", "The last row represents the mean rank of each model.", "The results show that SPHM2 with the $L_1$ objective function outperforms the other variants in terms of MAE in 17 of the 21 datasets, with a mean rank of 1.57.", "It is followed by SPHM1 with the $L_1$ objective, which comes in second with a mean rank of 2.52.", "Regarding RMSE, SPHM2 with the $L_2$ objective function and SPDP with the $L_1$ objective function perform better in 9 of the 21 datasets.", "However, SPHM2-L2 ranks better with an average of 1.62 compared to SPDP-L1 with a mean rank of 2.90.", "Table: Comparison of the different variants of the proposed method in terms of RMSE and MAE.", "The last row indicates the average rank over all datasets; the lower, the better.The top-performing methods in terms of MAE and RMSE, respectively SPHM2-L1 and SPHM2-L2, are picked for the second experiment, where we compare their performance against state-of-the-art recommendation methods.", "Table REF reports the RMSE and MAE results of SPHM2-L1 and SPHM2-L2 and the competing methods.", "The mean ranks over all datasets are reported in the last row and plotted for convenience in Figure REF .", "The results show that SPHM2-L1 outperforms the other methods in 18 of 21 datasets in terms of MAE with a mean rank of 1.38, followed by SPHM2-L2 with a mean rank of 2.29.", "SPHM2-L2 outperforms other methods in 20 of 21 datasets in terms of RMSE with a mean rank of 1.05, followed by SPHM2-L1 with a mean rank of 2.81.", "Table: Comparison of the proposed approach against competing methods in terms of RMSE and MAE.", "The last row indicates the average rank over all datasets; the lower, the better.Figure: Mean rank comparison over all datasets in terms of RMSE and MAE.We conduct an additional experiment to evaluate the data visualization prospect of the proposed approach against the competing methods.", "Data visualization requires embedding users and items in a two or three-dimensional space, allowing data exploration and interpretation.", "The experiment consists in training all methods on three dimensions except obviously for ItemKNN, which does not assign coordinates to items and users.", "The resulting mean ranks based on RMSE and MAE depicted in Figure REF show that the proposed method SPHM2-L2 performs better in terms of MAE and RMSE, followed closely by SPDP-L1 then BiasedMF.", "Figure: Mean rank comparison over all datasets in terms of RMSE and MAE in 3D." 
], [ "Conclusion and Future Work", "This work introduced a novel recommendation method that utilizes a similarity-popularity complex network model to generate recommendations.", "The network, which has users and items as nodes, is first constructed from observed ratings and then used to predict unseen ratings.", "We explored the effectiveness of dot-product and Euclidean distance-based similarity with $L_1$ and $L_2$ norms for error cost and regularization.", "We implemented the proposed models, conducted extensive experiments on multiple datasets from diverse domains, and demonstrated the superiority of the proposed approach against state-of-the-art recommendation methods.", "We also showed that the proposed approach outperforms existing methods in low dimensions, proving the method's effectiveness for data visualization and exploration.", "The proposed similarity-popularity model is a starting point in the direction of applying complex network latent spaces as models that guide the recommendations process.", "Additional similarity-popularity models other than the hidden metric space model and dot-product can be examined to assess their suitability for recommendation systems.", "Furthermore, other objective functions can be tested, especially one that combines both L1 and L2, such as an elastic net.", "Another direction is to apply the proposed method to other forms of the recommendation problem, such as item ranking, context-aware, and sequence-aware recommendation tasks.", "We also propose to investigate adding content-based features along with similarity and popularity to tackle further recommendation systems challenges, such as recommendation diversification and the cold start problem." ], [ "Acknowledgments", "This research work is supported by the Research Center, CCIS, King Saud University, Riyadh, Saudi Arabia.", "48" ] ]
2210.07816
[ [ "Intra-session Context-aware Feed Recommendation in Live Systems" ], [ "Abstract Feed recommendation allows users to constantly browse items until feel uninterested and leave the session, which differs from traditional recommendation scenarios.", "Within a session, user's decision to continue browsing or not substantially affects occurrences of later clicks.", "However, such type of exposure bias is generally ignored or not explicitly modeled in most feed recommendation studies.", "In this paper, we model this effect as part of intra-session context, and propose a novel intra-session Context-aware Feed Recommendation (INSCAFER) framework to maximize the total views and total clicks simultaneously.", "User click and browsing decisions are jointly learned by a multi-task setting, and the intra-session context is encoded by the session-wise exposed item sequence.", "We deploy our model on Alipay with all key business benchmarks improved.", "Our method sheds some lights on feed recommendation studies which aim to optimize session-level click and view metrics." ], [ "Introduction", "In recent years, feed recommendation (FR) has gained increasing popularity by providing never-ending and content-blended feeds in a waterfall form of item exhibitions.", "Generally, ranking in FR shares similar methodology with traditional learning-to-rank (LTR) methods, including traditional pointwise methods, as well as pairwise and listwise [3], [2] methods which consider surrounding effect around items.", "In the original configuration of modeling, it is assumed that user observes item candidates with equal probabilities.", "This assumption has been questioned by some eye-tracking studies and works have been done to reimburse the positional, exposure or selection bias [5].", "Recently there are also increasing efforts to make the model more consistent with the actual live environments, including session-based recommendation [13], [8], sequential recommendation [17] and context-aware recommendation [4].", "Within the scope of these works, both inter- and intra- context impacts are thoroughly studied and components such as RNN, GNN, transformer are widely adopted to capture the context-aware user preferences.", "However, these studies seldom consider the interactive scenario of user browsing decisions, even for algorithms designed especially for FR [18], [9].", "In a typical waterfall form of feeds, people first see the top item in default, then decide if click it or browse the next item.", "Obviously, this browsing decision is different from the click decision, and the next item could not be observed without that browsing operation In our definition, a user click itself is not considered as the end of session, but it is the end if the user never returned from the clicked page.. 
As a result, the likelihood of a user's later click is affected by previous browsing behavior, which results in substantial exposure bias.", "Furthermore, this time dependency across item positions suggests that global views or clicks might be better business metrics than click-through rate (CTR) for highlighting user stickiness.", "Given such an objective, optimizing the instant CTR could reach a local optimum that deviates from the global optimum (for example, putting the most favorable item at the top might not always be statistically best, since the user would probably click it and then leave the session immediately).", "Unfortunately, most FR models so far neither explicitly model the browsing behavior nor optimize the global metrics.", "While some optimization-based methods such as reinforcement learning (RL) could be theoretically suitable for this type of problem, they are usually subject to computational complexity or exploration costs, so their industrial application is limited.", "In this work, we build a supervised framework in which user sequential behaviors are explicitly modeled as a series of Markov events of clicks and scrolls, and total views and clicks are the ranking objectives.", "To the best of our knowledge, this is the first work to eliminate the session exposure bias of FR with such a methodology.", "In this paper, we propose a novel INtra-Session Context-Aware FEed Recommendation (INSCAFER) framework which considers the aforementioned mechanism in mobile-based applications.", "On mobile devices, browsing further items is achieved by the user scrolling the screen down.", "To model user scroll and click behaviors, we define the intra-session context as the user's browsing experience within the current session.", "We model the browsing and clicking events within a session as a Markov chain, with the intra-session context as a latent variable, and click and scroll decisions conditioned on it.", "We solve the problem by minimizing the negative log-likelihood loss of the session event sequence, which is converted into multi-task classification training with historical clicks and scrolls as ground-truth labels.", "The entire framework is similar to the Generative Pre-Training (GPT) framework [11], including a pre-training stage and a sequence generation stage.", "During the pre-training stage, an intra-session context encoder is co-trained with a long-term interest net and a Multi-gate Mixture-of-Experts (MMOE) [10] module.", "The pre-trained encoder is then deployed on the server, and a recommendation sequence generation task is conducted with the intra-session context dynamically calculated during the serving stage.", "The major contributions of this paper are as follows: We explicitly consider the user scroll decisions and the resulting time dependency of intra-session behaviors, which is closer to the real FR scenario.", "The model loss and architecture are designed from a theoretical starting point, with the objective of maximizing the expected total views and clicks.", "We design a clear and fast framework, similar to GPT, to train and launch the model in a large-scale industrial system.", "Figure: The logical flow of user events in a typical feed session.", "At each position $t$ , the user decides whether to click the current item and/or browse the next item, or quit the session.", "The session-level recommending objective is to maximize the total views and clicks.", "The basic idea of our work is motivated by the concept of session events shown in Figure REF , which distinguishes
the typical FR scenario from traditional recommendation.", "The user views the first item by default when entering the session.", "Once the $t$ -th item is $seen$ , the user makes two independent decisions: $click$ , indicating whether to click the current item, and $scroll$ , indicating whether to scroll the screen down to browse the next item.", "The session stops when a 'not scroll any more' decision is made.", "At a specific position, the probability of click and scroll can be expressed by the multiplication of two conditional probabilities $P(scroll, click \\vert seen)P(seen \\vert user, item, position)$ where 'item' is the item profile, and 'user' denotes the user perception affected by both the user's long-term interest as well as the short-term experience within the current session.", "By this definition, $user$ should be a latent variable recurrently affected by previous events.", "As a result, the expressions of Eq. REF at different positions are not independent and identically distributed (i.i.d.) (e.g., a false $scroll$ will stop the session such that later $seen$ is always false), which violates the basic assumption of pointwise LTR.", "Instead, Figure REF indicates a unidirectional time dependency of the event sequence, i.e., former decisions will impact later decisions, but not the reverse.", "A similar conception can be found in the 'user cascade model' of [6] and also in DIEN [19].", "In contrast, bidirectional time-dependent methods such as [14] might be more suitable for top-K recommendation than for FR.", "Here we introduce some abbreviated notations of events, in which the latent variable $h$ denotes $user$ , $x$ represents the item embedding, $c$ denotes the probability of $click$ and $s$ denotes the probability of $scroll$ .", "$T$ is the total length of the session and a specific position is $t \\in [0, T]$ .", "The bold version of a variable denotes a sequence of events in temporal order, e.g. $\\mathbf {c} := \\lbrace c_0, c_1, \\cdots , c_T \\rbrace $ .", "Given an ordered exhibition of items $\\mathbf {x}$ , the joint probability of feed session events is $P(\\mathbf {c}, \\mathbf {s}, \\mathbf {h} | \\mathbf {x}) &= \\prod _{t=0}^{T} P(c_t|h_t, x_t) P(s_{t}|h_t, x_t)P(h_{t+1}|h_t, x_t, s_{t-1}) $ considering the Markov dependency.", "The objective is then to maximize the expectation of the total views and clicks over all sessions $\\max V = \\mathbb {E}\\sum _{t=0}^T (c_t + s_t)$" ], [ "Model", "Similar to traditional LTR, optimizing the objective in Eq. (REF ) can be derived from maximizing the likelihood of Eq. (REF ), by parameterizing the conditional probabilities by trainable $\\theta $ $\\begin{split}\\min _{\\mathbf {h}, \\theta } L(\\mathbf {h}; \\theta ) = & -\\text{log} P_{\\theta }(\\mathbf {c}, \\mathbf {s} | \\mathbf {h}, \\mathbf {x}) = -\\text{log} \\frac{P_{\\theta }(\\mathbf {c}, \\mathbf {s}, \\mathbf {h} | \\mathbf {x})}{P_{\\theta }(\\mathbf {h} | \\mathbf {x})} \\\\= & -\\text{log} \\prod _{t=0}^{T} P_{\\theta }(c_t|h_t, x_t) P_{\\theta }(s_{t}|h_t, x_t) \\\\= & - \\sum _{t=0}^{T} \\text{log} P_{\\theta }(c_t|h_t, x_t) + \\text{log} P_{\\theta }(s_t|h_t, x_t) \\end{split}$ which indicates that the session-wise learning can be decomposed into consecutive position-wise multitask learning, jointly with estimation of the recurrent contextual state $h$ .", "At each position, learning is multitask with binary cross-entropy (BCE) losses of two classifications, $click$ and $scroll$ .", "Although the derivation of Eq. (REF ) seems trivial, there is no previous work that
explicitly models the scroll behavior alongside the click as a multitask problem, to the best of our knowledge.", "Eq. (REF ) implies our model structure, as shown in Figure REF , with three modules: 1) the interest net, with a multi-head attention block followed by a Factorization Machines (FM) [12] layer, which captures the user's long-term interest; 2) the intra-session context encoder, which models the user's short-term interest shift, mutual-item influence and positional impact, encoded by a GRU followed by a self-attention [15] block; and 3) an MMOE [10] module with $click$ and $scroll$ subtasks.", "We name our approach intra-session context-aware feed recommendation (INSCAFER)." ], [ "Learning and service framework", "INSCAFER has a learning paradigm similar to the Generative Pre-Training (GPT) framework [11], which is decoupled into two stages: offline pre-training and online sequence generation, as shown in Figure REF .", "During the offline training stage, the context encoder is first initialized with the user embedding, then recurrently updated with each exposed item embedding concatenated with the position lookup embedding as input, while the MMOE logits of click ($\\text{logit}^{\\text{c}}$ ) and scroll ($\\text{logit}^{\\text{s}}$ ) are supervised by their labels (more precisely, this stage is similar to the 'supervised fine-tuning' task of GPT).", "During the service stage, a greedy sequence generation task is executed with the pre-trained context encoder retrieved and used for inference.", "The position embedding is self-incremented and looked up during this stage.", "With $K$ items left, the next item is decided by maximizing the following softmax (a minimal sketch of this training and selection scheme is given below): $\\text{arg}\\max _k \\frac{\\exp {(\\text{logit}^{\\text{c}}_k + \\text{logit}^{\\text{s}}_k)}}{\\sum _{k^{\\prime }}^K \\exp {(\\text{logit}^{\\text{c}}_{k^{\\prime }} + \\text{logit}^{\\text{s}}_{k^{\\prime }})}}$ Figure: Framework of INSCAFER.", "Learn: Sequential context encoding.", "Service: Greedy sequence generation.", "We apply our approach to Alipay, a world-leading E-Payment platform which also provides a comprehensive recommendation application, including goods, restaurants and online services.", "We first perform substantial offline evaluations, including the classification metrics of click and scroll, to demonstrate the superiority of INSCAFER.", "An ablation study is also conducted to verify the necessity of the different components.", "Data https://tianchi.aliyun.com/dataset/dataDetail?dataId=109858&lang=en-us and code https://github.com/AaronJi/RecINSCAFER have been made public."
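The following is a minimal runnable sketch of the position-wise multitask loss in Eq. (REF) and the greedy selection rule above. The plain GRU-plus-linear-heads architecture, the dimensions, and the random toy data are our illustrative assumptions; the full model additionally uses multi-head attention, an FM layer and MMOE, and masks padded positions.

```python
import torch
import torch.nn as nn

class SessionModel(nn.Module):
    def __init__(self, item_dim=16, hidden=16, max_pos=15):
        super().__init__()
        self.pos_emb = nn.Embedding(max_pos, item_dim)      # position lookup embedding
        self.gru = nn.GRU(item_dim, hidden, batch_first=True)
        self.click_head = nn.Linear(hidden + item_dim, 1)   # produces logit^c
        self.scroll_head = nn.Linear(hidden + item_dim, 1)  # produces logit^s

    def forward(self, items, h0):
        # items: (B, T, item_dim); h0: (1, B, hidden), initialized from the user embedding
        pos = self.pos_emb(torch.arange(items.size(1)))
        ctx, _ = self.gru(items + pos, h0)                  # intra-session context h_t
        feats = torch.cat([ctx, items], dim=-1)
        return self.click_head(feats).squeeze(-1), self.scroll_head(feats).squeeze(-1)

model = SessionModel()
bce = nn.BCEWithLogitsLoss()                                # per-position BCE terms

items = torch.randn(4, 15, 16)                              # toy batch of exposed item sequences
h0 = torch.zeros(1, 4, 16)
click_y = torch.randint(0, 2, (4, 15)).float()              # historical click labels
scroll_y = torch.randint(0, 2, (4, 15)).float()             # historical scroll labels
logit_c, logit_s = model(items, h0)
loss = bce(logit_c, click_y) + bce(logit_s, scroll_y)       # negative log-likelihood form
loss.backward()

# Service stage: with K candidates scored at the current position, the softmax
# is monotone in the summed logits, so the argmax of the sum suffices.
cand_c, cand_s = torch.randn(8), torch.randn(8)             # logits for K = 8 candidates
next_item = torch.argmax(cand_c + cand_s)
```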
], [ "Configurations", "Figure REF shows the distribution of view occurrences and CTR according to the top 8 positions.", "One can see both of them decay quickly as position becomes larger Note CTR at the first position is lower than the second and the third ones.", "The reason is that it is by design user will observe the first item by default.", "Only active and interested user will scroll down the screen and see the next items.", "Our model is able to capture such effects by explicit modeling., resulting in tremendous exposure bias for long sessions.", "Significant deviation would be introduced with an i.i.d.", "assumption.", "System has about 200 million users and a billion number of views per day.", "Upon each query at most 15 items are selected and sorted for exhibition.", "Embedding dimension is set to 16.", "Two heads are encoded in the multi-head attention layer and the cosine form of similarity is used.", "The shape of GRU latent state is the same with item embedding.", "There are 8 experts, 4 tasks and 64 as the expert shape in the MMOE module.", "The tower nets in MMOE are MLP with [128, 32] hidden units and sigmoid as the last activation.", "The ADAM optimizer is used with learning rate of 0.0001.", "Training costs more than 180k steps and loss converges after about 60k steps.", "Figure: View occurrences and CTR curves w.r.t.", "positions.", "Data is slightly rescaled due to confidential requirement." ], [ "Offline Evaluation", "Classification metric such as Area Under the ROC Curve (AUC) is evaluated for both $click$ and $scroll$ tasks.", "Experiments are repeated 10 times to report the averaged metric.", "Considering that our main purpose is to validate the effectiveness of loss in Eq.", "(REF ), we choose offline baselines which are close to our online model, as well as some combinations of them and modules of INSCAFER: DIN: the Deep Interest Network [20], as a standardized solution of industrial pointwise CTR model.. 
DIN-ListNet: A listwise version of DIN with the loss replaced by the listwise loss defined in [3].", "MMOE: An MMOE model with $click$ and $scroll$ as subtasks, sharing the same bottom structure with INSCAFER [10].", "GRU4rec: A session-based method with a latent state calculated by a GRU [7].", "GRU4rec-MMOE: The GRU4rec method combined with the MMOE structure to include the $scroll$ consideration.", "Ptr-net: The pointer network method [16], which applies the seq2seq idea combined with attention blocks, and has a sequence generation mechanism similar to ours.", "We train it in the supervised mode on $click$ labels, like the work in [1].", "We summarize these results in Table REF , with the evaluation of the click-through rate abbreviated as ctr and the scroll rate abbreviated as scr.", "One can find that our INSCAFER has the best AUC for ctr and scr, and the third-best MRR for scr.", "As comparisons, DIN and Ptr-net have reasonable ctr performance, but cannot predict scr well, verifying that $scroll$ has a different distribution from $click$ .", "Simply switching the loss to the listwise form, as in DIN-ListNet, cannot solve the problem either.", "GRU4rec, with its context consideration, has improved ctr performance, but its scr prediction is still poor.", "On the other hand, MMOE can achieve good scr prediction because of its multitask setting, but at the cost of ctr performance degradation.", "Combining GRU4rec with MMOE yields similarly balanced ctr and scr performance.", "Table: Comparison of Offline Performance", "In Table REF , we also perform some ablation tests, each time excluding the GRU unit, the self-attention block, or the position embedding.", "Not surprisingly, INSCAFER still has the best performance, suggesting these key components are all crucial." ], [ "Live Experiments", "The live experiment starts on September 3rd, 2021, and lasts for about a week; the baseline is DIN ensembled with a conversion rate (CVR) model as well as a model optimizing user views, corresponding to comprehensive business considerations.", "Due to limited online resources and business performance requirements, we only launch INSCAFER and DIN to compare with the baseline.", "Gains in some important business metrics of INSCAFER and DIN are shown in Table REF .", "Not surprisingly, DIN can improve CTR performance further, but at the cost of other metrics.", "On the contrary, our method increases almost all key indicators, especially views per user ($1.16\\%$ ), scrolls per user ($1.56\\%$ ), and total conversions ($1.56\\%$ ).", "This solid live result indicates that our model formulation provides a better depiction of the FR scenario and a more reasonable solution for global metric optimization.", "Table: Gains of Live Metrics to Online Baseline" ], [ "Conclusion", "In this paper, we propose a novel INSCAFER method which solves the feed recommendation problem by considering the intra-session context.", "In feed recommendation, the user's previous operations, especially the decision to browse the next item or not, affect the distribution of later operations, resulting in substantial exposure bias.", "We derive our model formulation from maximizing the likelihood of the joint probability of session events, with explicit consideration of the time dependency of user intra-session behaviors.", "This intra-session context is learned during training and used at serving time by a GPT-like framework.", "Both offline and live experiments have verified our method's superiority over several popular baselines." ] ]
2210.07815
[ [ "A Formal-Methods Approach to Provide Evidence in Automated-Driving\n Safety Cases" ], [ "Abstract The safety of automated driving systems must be justified by convincing arguments and supported by compelling evidence to persuade certification agencies, regulatory entities, and the general public to allow the systems on public roads.", "This persuasion is typically facilitated by compiling the arguments and the compelling evidence into a safety case.", "Reviews and testing, two common approaches to ensure correctness of automotive systems cannot explore the typically infinite set of possible behaviours.", "In contrast, formal methods are exhaustive methods that can provide mathematical proofs of correctness of models, and they can be used to prove that formalizations of functional safety requirements are fulfilled by formal models of system components.", "This paper shows how formal methods can provide evidence for the correct break-down of the functional safety requirements onto the components that are part of feedback loops, and how this evidence fits into the argument of the safety case.", "If a proof is obtained, the formal models are used as requirements on the components.", "This structure of the safety argumentation can be used to alleviate the need for reviews and tests to ensure that the break-down is correct, thereby saving effort both in data collection and verification time." ], [ "Introduction", "Automated Driving Systems (ADS) relieve the human driver from the driving task and let the driver engage in other activities while travelling [1].", "Among several potential benefits of ADS, one in particular is to prevent accidents caused by driver errors and thereby increase traffic safety [2].", "An indication that such potential traffic safety benefit exists is provided by extrapolation from previous experience with driver assistance systems [3], and by analyses showing that ADS have the potential to prevent or reduce the severity of several accident scenarios involving human drivers [4].", "However, these studies only apply to situations that human drivers do not handle safely, and accidents involving human drivers; they do not provide insights into situations that human drivers already handle safely.", "ADS are designed to take over operation of the Dynamic Driving Task (DDT) in environments that are included in a specific Operational Design Domain (ODD) [1].", "To provide a net increase in traffic safety, an ADS must perform the DDT such that it overall is safer than a human driver in all the environments of the ODD.", "It must also limit operation to environments included in the ODD, or risk unsafe behavior.", "As human drivers in general have a very low failure rate [5] and as the ADS cannot expect human supervision or intervention in its ODD, the ADS becomes highly safety critical.", "Two pertinent problems arise when developing these safety-critical ADS: [label=()] the development methods and processes that are applied must ensure safety of ADS [6], and this fact must be supported by compelling evidence to persuade certification agencies, regulatory entities, and the general public of the safety of ADS.", "In essence, safety of the ADS must be demonstrated, rather than assumed until proven unsafe [7].", "ADS have a bidirectional interaction with their surrounding environment; that is, an ADS must adapt its behavior to the environment, and the environment behavior is affected by the ADS's actions.", "This paper considers ADS that can be represented by an architecture 
with three components as shown in Fig. REF .", "The sense component senses and perceives the environment, the plan component is responsible for decisions on when and how to act, and the act component executes the decisions using the respective actuators.", "These components interact with the environment, here represented by the ODD, in a feedback loop.", "Figure: A simplified architecture for an ADS.", "The feedback loop between an ADS and its environment means that compelling safety evidence must be gathered in closed-loop conditions, and the compiled evidence must be shown to be representative of all the environments in the ODD.", "One way to accomplish both of these points is to perform real-world driving that covers the entire ODD, and drive enough distance such that safety can be evaluated.", "However, testing in real-world driving conditions as the only means for aiding safety-critical development and producing compelling evidence for safety is infeasible for all but trivial ODDs [5], [7], [8].", "To overcome this obstacle, the process of evidence collection often follows a divide-and-conquer approach.", "This is done by first breaking down the system-level ADS requirements to its components, and then further to smaller elements until the effort to support each requirement with compelling evidence is acceptable [9], [10].", "A benefit of this approach is that the evidence can be collected by methods that are specialized for the specific type of requirement, but which may not be feasible for system-level requirements.", "A drawback, however, is that fulfillment of the component-level requirements must now imply the fulfillment of the system-level requirements in the entire ODD.", "To ensure that there are no gaps in the broken-down requirements, verification must still be performed for the complete ADS, but to a lesser extent.", "Reviews and testing are two often recommended methods to ensure correctness of electronic automotive features [10].", "Both of these methods are used to find faults or insufficiencies throughout the development process.", "They are complementary and often find different kinds of issues.", "However, neither is exhaustive, mainly because reviews are laborious, and because testing cannot explore the typically infinite set of possible behaviors in its entirety.", "These challenges are exacerbated for feedback systems, especially when discrete decisions are taken, and when errors take a long time to propagate into failures.", "For instance, to avoid collisions with pedestrians, an ADS must approach road-side pedestrians with a suitable speed so that it can guarantee to stop safely, should the pedestrian step out into the road (a simple sketch of such a precautionary check is given below).", "A bad decision by the ADS can cause a collision several seconds or minutes later; deciding to pass the pedestrian although it is too close to the road will erroneously remove the braking option, but this error will not become apparent until, and if, the pedestrian enters the road.", "Formal methods are a category of methods that can prove and ensure correctness of feedback-system models with respect to the requirements.", "In contrast to testing, these methods are exhaustive and provide evidence that no faults or insufficiencies are present in the component model, at least not w.r.t. the specified behaviour.", "They are also automatic or semi-automatic, so they typically require less labor than reviews.
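The pedestrian example above admits a simple closed-form precondition, sketched here under our own illustrative assumptions (constant-deceleration kinematics, a fixed reaction time, and a safety margin); it is not the formal model developed later in the paper.

```python
# Precautionary speed check for the road-side pedestrian example. All names
# and constants are illustrative assumptions.

def can_stop_before(v, dist, a_min, reaction_time=0.5, margin=1.0):
    """True if full braking at a_min (< 0, m/s^2) after `reaction_time` seconds
    stops the vehicle at least `margin` meters before the pedestrian."""
    assert a_min < 0
    stopping_distance = v * reaction_time + v * v / (2.0 * -a_min)
    return stopping_distance + margin <= dist

def plan_acceleration(v, dist, a_min, a_max):
    """Keep the braking option: brake while the check fails, otherwise proceed.
    A real controller would verify that the *next* state still satisfies it."""
    return a_max if can_stop_before(v, dist, a_min) else a_min

print(can_stop_before(v=14.0, dist=40.0, a_min=-6.0))  # ~50 km/h, 40 m ahead -> True
print(can_stop_before(v=19.5, dist=40.0, a_min=-6.0))  # ~70 km/h, 40 m ahead -> False
```

The point of the example is that the decision must be taken while braking is still an option; an exhaustive formal analysis checks exactly such conditions over all timings.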
"This paper shows how formal methods can be used to justify that components relying on feedback fulfill their requirements, and thereby give compelling evidence for the safety of the component.", "Furthermore, this paper demonstrates how formal methods can be used to close the gap between the ADS system requirements and the broken-down component requirements.", "This is done by showing how the formalization of the system-level safety requirements of the ADS puts assumptions and verification conditions on the different components, and on the ODD.", "The assumptions thus obtained also give a formal description of the conditions that must be fulfilled for the formal proof to be valid, and this paper puts forth an argument that those conditions in certain instances may be evaluated in open-loop settings, thereby considerably decreasing the verification effort.", "Both the above contributions are illustrated and argued based on a relevant example from the industry.", "Related Work ISO 26262 lists formal methods as techniques for ensuring dependability on the software architectural and unit design level [10].", "At these levels, the software is typically considered as open-loop input/output systems, and this is also the intended setting in the standard.", "In this context, formal methods can provide evidence that the software is dependable [11].", "Formal methods are not considered at other levels of the design in the standard, and hence there are no recommendations regarding formal methods applied to feedback systems.", "Formal methods have been used successfully in the automotive domain to prove that complex feedback systems are correct with respect to safety in a given environment [12], [13], [14], [15], [16], [17].", "However, these works do not demonstrate how the artifacts from the formal methods contribute to a convincing argument that safety is achieved.", "Previous research has established that there are opportunities for using evidence from formal methods to convincingly argue for safety in all levels of the design of safety-critical systems [18], [19].", "Moreover, it is argued that the assumptions on the environment are an important part of the model, and their inclusion allow more focus on the evidence and validation instead of ensuring that the break-down of requirements is correct [18], [19]; an argument which is supported by the contributions in Section  .", "There is also a challenge in how to treat probabilistic requirements when employing formal methods [19], which is also addressed in Section  .", "When using formal methods to argue that a system is safe, it is imperative that both the underlying formalism and the tools that are being used are correct.", "Obviously, there must be a convincing argument that this is indeed the case, but such argument is out of scope of this paper as it is typically available from literature associated with the respective methods.", "There are many aspects to consider with respect to the correctness of the formalism and any tool being used, and a generic argument that captures all these aspects is provided by [20] [20].", "Safety Case A safety-critical system, such as an ADS, must behave such that safe operation is ensured in its entire ODD, where, commonly, safe is taken to mean that there is an absence of an unreasonable risk of harm.", "When a safety-critical ADS is developed, it must be ensured that its behavior indeed is safe, but it must also be justified by compelling evidence that the risk of harm is low enough.", "This latter part is 
required to persuade certification agencies, regulatory entities, and the general public.", "The justification of an ADS's safety can be compiled into a safety case, which is a structured argument, supported by a body of evidence, that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given operating environment [21].", "There are three principal elements in a safety case: requirements, arguments, and evidence [22].", "The safety case approach has been used in many safety-critical industries to demonstrate safety, and is also recommended by automotive safety standards such as ISO 26262 [10] and ISO/PAS 21448 [9].", "Since a safety case is used to demonstrate that a product is safe, it is imperative that its structure is clear, comprehensible, and accurate.", "The Goal Structuring Notation (GSN) is a standardized graphical argument notation [22], [23], which can be used to structure a safety case.", "It explicitly documents the individual elements of a safety argument and their relationships to the gathered evidence.", "GSN defines core elements, two types of relationships between the core elements, and an undeveloped element decorator, as shown in Fig. REF .", "The two relationships SupportedBy and InContextOf declare a relationship between a source element and a target element.", "The elements are linked together in a logical structure known as the GSN goal structure, which is a directed acyclic graph.", "The top-level goal in a goal structure is gradually refined through a series of more detailed goals until a direct link to evidence is made [22].", "The undeveloped element decorator indicates that a line of argument has not been developed in the current context.", "Figure: The core elements of a GSN goal structure.", "The line with a solid arrowhead denotes a SupportedBy relationship and the line with the hollow arrowhead denotes an InContextOf relationship.", "The diamond indicates an undeveloped element." ], [ "Formal Methods", "Formal methods are a class of mathematically rigorous techniques and tools used to specify, design, verify, and synthesize components, mainly by mechanizing rigorous reasoning about correctness of these components [24].", "As such, the field is very broad, and a plethora of tools are available for many different parts of the development process and at different levels of abstraction.", "This paper is concerned with formal methods applied to the function layer of the ADS, i.e., details of the hardware and the software are abstracted away.", "Formal methods are based on languages with formal syntax and semantics that leave no room for ambiguity.", "Common to all classes of formal methods is that a requirement on the system, in this case an ADS, is formalized into a specification that details correct behavior of the system.", "In the case of input/output systems, the formal specification relates the required output to certain input, but in this paper the specification details the allowed behavior over time for feedback systems.", "Since the safety requirements for an ADS describe disallowed and mandated behaviors over time, logical formalisms that support modelling and reasoning about properties with respect to time, such as Linear Temporal Logic (LTL) [25] and differential dynamic logic (dL) [26], are typically used to formalize the requirements.", "Often safety requirements are characterized as nothing bad shall happen [25], which is easily formalized using the modal operators to describe necessity in LTL
and in dL.", "For instance, $\\Box \\phi $ in LTL asserts that the property $\\phi $ always holds and $[\\mathcal {M}]\\phi $ in dL asserts that after all behaviours of model $\\mathcal {M}$ , the property $\\phi $ holds.", "Given such a formal specification, formal verification and formal synthesis provide evidence of correctness of formal models of feedback systems in the form of a machine-checked formal proof of the fulfillment of the specification.", "For formal verification, all parts in Fig. REF , and their possible interactions, are modelled in a formal language, and then the formal verification tool attempts to prove that the formal model fulfills the formal specification.", "For formal synthesis, the goal is instead to automatically construct, typically, a model of plan such that the resulting feedback system is guaranteed to fulfill the formal specification.", "In the case of synthesis, the models of sense, act, and the ODD are commonly referred to as the assumptions, whereas the required behavior of the feedback system is referred to as the guarantees.", "This distinction is not as common in the case of formal verification, but in this paper the modelled parts will be referred to as the assumptions of the system regardless of method." ], [ "Proposed Safety Argument Approach", "For feedback systems like ADS, the break-down of requirements onto components can be difficult since requirements on the system level refer to behaviors of the closed-loop system in the context of the ODD, whereas the component requirements specify the behavior at the interfaces between the components.", "The fulfillment of the component requirements must imply the fulfillment of the system requirements; if this is not the case, there is either a behavior disallowed by the system requirements that is allowed by the component requirements, or there is a behavior mandated by the system requirements that is not mandated by the component requirements.", "When the involved requirements are safety critical, this potential discrepancy between the behaviors that are disallowed or mandated at different levels might lead to an unsafe ADS.", "The safety case must provide compelling evidence that this potential discrepancy does not decrease the safety to an unacceptable level [10], [27].", "The approach in this paper uses formal methods to prove that the break-down has no such discrepancy.", "The system requirement is formalized into a guarantee to be fulfilled, and the formal system model is composed of the formal models of the components and the ODD.", "The approach is to use these models as specifications for the components.", "The component specifications can be developed into formal contracts [28] that provide unambiguous requirements for the separate components.", "By this decoupling, specialized methods can be used to develop and verify the broken-down requirements.", "Hence, if the guarantee is proven to be fulfilled by the formal models, and if the verification of the components indicates that they fulfill their formal specifications, then the system requirement is fulfilled in the ODD, and there is compelling evidence for this fact in the form of a formal proof.", "For instance, the guarantee could be a formalization of there are no impacts with pedestrians.", "Likely, for the ADS to fulfill this guarantee, the component act must be able to provide some deceleration below some minimal level.", "More specifically, assume that the component plan requests an acceleration $a^{req} $ in a certain range; then
act must ensure that the true acceleration $a$ of the vehicle fulfills $a \\le a^{req} $ .", "Then the property $a \\le a^{req} $ is considered the specification for the act component.", "In the case that the system requirement is quantitative, specifying a probability or an occurrence rate of an event, the formal guarantee is specified such that the event must not occur.", "In the break-down process, the models of the components are assigned a probability or occurrence rate with which they may be violated, such that the cumulative violations of the assumptions in the formal model do not exceed what is allowed by the system requirement.", "The approach is illustrated in Section with an example where a safety requirement with respect to pedestrians is broken down to safety requirements on the closed-loop system.", "It is then shown how the artifacts of the formalization of these requirements can be used to structure an argument in a safety case.", "The argument is illustrated graphically by a GSN goal structure in Fig. REF to make the relations between requirements and arguments clear." ], [ "Precautionary Safety and Risk Norm", "The example in this paper is based on the concept of Precautionary Safety (PCS) [29].", "PCS attempts to ensure safety by adjusting an ADS's behavior based on its capabilities, external conditions, and expected exposure to incidents.", "The notion of safety in this context follows the concept of Quantitative Risk Norm (QRN) [30], where certain accident types with certain severity (in terms of human injury) are assigned a minimum allowed mean time between accidents (risk norm).", "Based on an assessment of Swedish national accident injury data for pedestrians and an estimation of total hours driven, an acceptable mean time between accidents can be established.", "The severity is highly correlated with impact speed [31], so the risk norms in Table REF are allocated based on impact speed.", "Table: Risk Norms for Pedestrians.", "Different Mean Time Between Failures for Different Accident Severities.", "The ADS is intended to operate up to a speed of 70 km/h on urban roads, and up to 100 km/h on highways, so these are the environments that make up the ODD.", "The allowed failure rate of the ADS depends both on the risk norm and on the incident exposure rate in the ODD, i.e., the mean time between the occurrence of pedestrians.", "This exposure rate is different for different road segments, as pedestrians are more likely to appear on the road in low-speed urban settings than on highways with free-flowing traffic.", "The assumed exposure rates for urban and highway driving are given in Table REF (cf. [29]).", "Table: Exposure Levels with Respect to Speed and Road Type", "The QRN argument that the ADS feature is sufficiently safe with respect to pedestrians is illustrated graphically by the goals G1, G2, and G3 in Fig. REF .", "G1 represents the top-level safety requirement that the ADS feature shall be safe with respect to pedestrians.", "G1 is then made more specific in G2 by relating safety to its definition of having a sufficiently high mean time between accidents.", "G2 is fulfilled via the strategy S1, which argues that the ADS is sufficiently safe whenever the risk norms in Table REF are fulfilled, as exemplified by G3.", "Note that the four goals corresponding to the other risk norms in parallel to G3 are not shown.", "G3 is broken down based on exposure level, corresponding to the different road types.", "Each combination of exposure level and risk
norm gives rise to an impact probability per event, where the maximum allowed impact probability in G4 is calculated as the ratio of exposure and risk norm.", "Not shown in parallel to G4 in the branch rooted in G3 are the six other exposure classes from Table REF .", "All other goals in parallel to G3 are supported by an analogous argument structure.", "This paper and the PCS paper [29] agree on the safety case so far, but differ below G4.", "Figure: GSN illustrating the argument made in this paper.", "A bigger version is available at https://doi.org/10.5281/zenodo.7142341.", "In [29], an already implemented reactive collision-avoidance module's capability to avoid collisions by braking is determined in simulations.", "The capability is presented as the probability of a certain impact speed given the speed before braking starts.", "The mean time between certain impact speeds for certain roads is then calculated as the ratio of the exposure on that road and the probability of that impact speed given the speed of the road.", "The mean time between impact speeds is then compared to the QRN to determine the maximum allowed speed that the ADS may drive on that road; if the mean time between accidents of a certain impact speed for a certain initial speed is lower than the QRN, then the ADS may not drive that fast on that road.", "For instance, the exposure on urban roads with speed 70 km/h is 10 000 h, and the probability of an impact speed of 10-20 km/h for a speed of 60 km/h is 5%.", "The ratio is 200 000 h, considerably less than the QRN of 1 000 000 h, so the ADS cannot be allowed to drive at 60 km/h on urban roads with a speed of 70 km/h (a small numerical sketch of this check is given at the end of the paper).", "Thus, safety is ensured by taking precaution; the maximum speed is adapted based on the expected capability of mitigating impacts and the exposure to incidents.", "However, this way of applying the PCS concept assumes an already implemented ADS.", "If it is desired to drive at 70 km/h on urban roads, PCS indicates that the probability of impact speeds of 10-20 km/h ought to be at most 1%.", "One way of reaching such performance would be to iteratively implement improvements and simulate and test until the capability is sufficiently safe for driving at 70 km/h.", "This paper shows how formal methods can instead be used to break down the requirement to components to achieve a systematic development process.", "It is also shown how the break-down through formal methods helps with structuring an argument supporting the fulfillment of the safety case.", "The main idea of PCS still applies; the ADS ought to take precaution by considering what might happen, not solely what is happening, to avoid unacceptable risk of harm.", "Formal methods explore the entire state space of the formal model in order to prove the guarantee, so they fit well as a tool to ensure that all possible events are considered by the ADS." ], [ "Formal Methods in the Safety Case", "To ease the development effort, the goal G4 in Fig. REF may be broken down to G6, G7, and G8, which are assigned to the respective components in Fig. REF , and in that way separate the concerns for design and verification of the different components.", "This is a typical way to combat complexity by allowing application of specialized tools and methods to the design and verification of the components [10], [28].", "As stated earlier, these component goals must imply the goal G4 in Fig. REF , lest the risk of harm be unacceptable.", "Formal methods can provide proofs as evidence that, in the context of the ODD, the component
models fulfill the guarantee.", "However, the guarantee is a qualitative logical formula, whereas G4 is a quantitative goal with a probability.", "To deal with this, the strategy for fulfilling G4 is split into one qualitative part and one quantitative part, as can be seen in Fig. REF , where G4 is supported by the two strategies S3 and S4.", "The idea here is to disregard the probability in G4 and formalize the remainder as the guarantee, and then find a formal model that satisfies that guarantee.", "To reintroduce the probability, it is finally argued that the goal G4 is fulfilled as long as the assumptions in the formal model are violated with at most the same probability as in G4.", "The entire argument hinges on the correctness of the formal model with respect to the guarantee, and this is captured by G5.", "It is made explicit with the context relation that the formal model is composed of the assumptions on the ODD and the models of the components sense, plan, and act, and the guarantee against which correctness is evaluated.", "The sole evidence that G5 is fulfilled is provided by the machine-checked formal proof.", "Now assume that the behaviors of the components sense, plan, and act fulfill their corresponding formal assumptions with probabilities such that the entire formal model is fulfilled with a probability of at least 0.99.", "Then G4 is fulfilled because the guarantee that “no impacts occur” is violated at most with a probability of 0.01, which in turn means that impacts in the range 0-10 km/h can occur at most with a probability of 0.01.", "Therefore, G4 is broken down into G6, G7, and G8, each detailing the probability with which the assumed behaviors of sense, plan, and act, respectively, may be violated.", "Here, the probabilities are assigned arbitrarily, but it is ensured that their sum does not exceed the probability of G4.", "It can be argued via strategy S3 that the break-down of G4 to G6, G7, and G8 is correct, and it can be argued via strategy S4 that G4 is fulfilled because G6, G7, and G8 are fulfilled.", "The component models can serve as formal specifications for the individual components, and relevant standards may be used to develop and verify the three components according to these specifications [9], [10].", "To be a bit more specific, it is now illustrated in more detail what the context of G5 may look like.", "Since the intention is to show how a formal-methods approach could be used in the safety case and not to discuss any specific formal method in detail, the approach in this paper is illustrated with abstract artifacts and simple models for the three components in Fig. REF .", "The first type of artifact, the guarantee $\\mathcal {R}$ 1, is the formal specification based on G4.", "$\\mathcal {R}$ 1: There are no impacts with pedestrians.", "$\\mathcal {R}$ 1 can be formalized in different ways depending on the formalism used.", "For example, $\\mathcal {R}$ 1 can be formalized in LTL using the formula $\\Box \\lnot \\mathit {collision} $ where $\\mathit {collision}$ is a predicate describing the undesirable property of the occurrence of an impact; and in dL using the formula $[\\mathcal {M}](\\lnot \\mathit {collision})$ , where $\\mathcal {M}$ is the formal model.", "Irrespective of the formalism used, formal models of the components are required in order to either synthesize a controller that guarantees $\\mathcal {R}$ 1 or verify that a given design fulfills $\\mathcal {R}$ 1.", "Thus, the second type of artifact is formal models of the ADS
components in Fig. REF .", "As for the formal specification, the ADS components can be modelled in different ways depending on the formalism of choice.", "To formalize the components, parameters about the ADS vehicle and pedestrians are considered, as shown in Table REF .", "Table: Parameters Considered in the ADS Model", "This paper considers a plan component with a safety controller such as the one in [16] to guarantee safety.", "For the sake of brevity, $\\mathcal {M}$ 1 presents a very abstract model of plan.", "The required acceleration $a^{req}$ is set to the minimum acceleration $a^{min}$ if the predicate $\\lnot \\mathit {safe} $ is satisfied.", "This predicate can be used to describe decision-making conditions, such as checking whether there is some choice of $a^{req} \\in [a^{min}, a^{max} ]$ such that, later on, stopping before the pedestrian is infeasible.", "$\\mathcal {M}$ 1 ($\\mathit {plan}$ ): $\\quad \\lnot \\mathit {safe} \\rightarrow a^{req} = a^{min} $ Of course, to prove that $\\mathcal {M}$ 1 fulfills $\\mathcal {R}$ 1, certain assumptions must be made on the other components.", "Typically, such assumptions are identified as a result of the formal modelling and analysis.", "In this context, consider $\\mathcal {M}$ 2, $\\mathcal {M}$ 3, and $\\mathcal {M}$ 4 as the assumptions on the ODD, sense, and act, respectively.", "The assumption on the ODD describes that the ADS vehicle velocity $v$ is non-negative, and it defines the limits on the pedestrian velocity $v_p $ .", "$\\mathcal {M}$ 2 ($\\mathit {ODD}$ ): $\\quad v \\ge 0 \\wedge 0 \\le v_p \\le 10$ The range of allowed values for $v_p $ defines what may happen in the environment, and the exhaustiveness of formal methods makes sure that all different combinations with all different timings are evaluated.", "A controller that fulfills the requirement in the presence of $v_p $ certainly takes precaution for what might happen, and does not only react to what is happening.", "$\\mathcal {M}$ 3 describes that, if the true position $x_p $ of the pedestrian is within the detection range of the sensor, then the error in the estimated position of the pedestrian $(\\hat{x}_p- x_p)$ is at most $\\epsilon $ .", "Furthermore, if $x_p $ is in front of the ADS vehicle, then $\\hat{x}_p $ is also estimated to be in front.", "$\\mathcal {M}$ 3 ($\\mathit {sense}$ ): $\\left(x_p \\le \\mathit {range} \\rightarrow \\hat{x}_p- x_p \\le \\epsilon \\right)\\, \\wedge \\, \\left(x_p \\ge x \\rightarrow \\hat{x}_p \\ge x\\right)$ The tolerances make it possible to assign probabilities to the fulfillment of the specifications.", "The exhaustiveness of formal methods evaluates all combinations, so the exact probability distribution does not need to be known.", "The assumptions on act described by $\\mathcal {M}$ 4 state that if the requested acceleration $a^{req}$ is within the bounds, then the tracking error between the actual and the requested acceleration is at most $\\delta $ .", "$\\mathcal {M}$ 4 ($\\mathit {act}$ ): $\\quad a^{min} \\le a^{req} \\le a^{max} \\rightarrow a \\le a^{req} + \\delta $ Assume that the formal model $\\mathcal {M}$ composed of $\\mathcal {M}$ 1-$\\mathcal {M}$ 4 is correct with respect to the guarantee $\\mathcal {R}$ 1, and that there exists a formal proof $\\mathcal {P}$ that this is indeed the case.", "The proof $\\mathcal {P}$ provides enough evidence that G5 in Fig. REF is fulfilled.", "For G7 to be fulfilled, the realized controller in the component plan must fulfill the
behavior specified by $\\mathcal {M}$REF .", "This may be assured, for instance, by following the recommendations in ISO 26262 [10] to achieve a fault tolerant realization.", "Since $\\mathcal {M}$REF is a simple condition relating inputs to outputs, much of its verification can be performed in open loop, which results in less effort to collect evidence that G7 is fulfilled.", "This paper does not consider strategies to validate the ODD, so in A1 in Fig.", "REF , $\\mathcal {M}$REF is considered an assumption in the argument, which means that there is no justification or evidence that $\\mathcal {M}$REF is fulfilled.", "Obviously, to ensure real-life safety, this assumption must be validated for the roads that the ADS vehicle is allowed to drive on.", "That endeavor may benefit greatly from having a formalized formulation of assumed properties of the ODD.", "The model $\\mathcal {M}$REF specifies the required behavior of sense for the guarantee to be fulfilled, and G6 details the probability with which this behavior may be violated.", "This requirement can, to a large extent, be verified in open loop on recorded data, provided that the behavior in the recordings adhere to the assumptions of $\\mathcal {M}$REF .", "This could provide a substantial benefit since the data may be collected before all components are realized, and because the recorded data is still relevant after implementation has changed in the components.", "This is not the case for the approach by [29] [29] when assessing the capability of the complete vehicle.", "Furthermore, the verification of sense does not need to ensure that there are enough outcomes with different impact speeds, as the requirement is independent of the closed-loop outcome.", "This last point can potentially save effort both in data collection and verification time.", "The last model, $\\mathcal {M}$REF , which specifies the acceleration tracking performance, must still be verified in closed-loop conditions.", "However, the break-down of requirement G4 into G8 makes the verification independent of the other components, which may simplify the verification method.", "To realize sense and act below the goals G6 and G8, the development process could, for instance, employ ISO/PAS 21448 and ISO 26262.", "Discussion The approach in Section   is one possible argument structure to include formal proofs of correctness as evidence in safety cases.", "Whether the approach is beneficial depends on the overhead of the formal methods in comparison to the verification effort, and as such, the approach may be beneficial for some systems, and not for others, and it might be beneficial for only a subset of the requirements in one system.", "The benefits are also dependent on the formal modelling and the system decomposition.", "Some system decompositions may not be possible to formally model in a given formalism, and the formal specifications of a component might fail to be verifiable, or exceedingly hard to verify.", "Such complications will trigger redesigns that could be costly.", "On the other hand, a notable benefit is that the presented approach in itself is agnostic to the chosen methods and technologies with which the components are realized.", "Admittedly, some might be more amenable to be used in conjunction with the formal specifications.", "Furthermore, if formal methods are employed at an early stage in the project, they may catch inconsistencies and design flaws that would be costly to find during system-level testing.", "The process of formalization of 
requirements can itself be beneficial, and the approach detailed in this paper allows more utilization of such work.", "The proposed approach uses a strategy to split a quantitative goal into one qualitative and one quantitative part.", "The qualitative part disregards the probabilities involved and employs formal methods to provide evidence to show that the qualitative part is fulfilled.", "However, there exists formal approaches that can prove correctness of stochastic systems such as probabilistic model checking [32] where quantitative extensions of temporal logic are used to specify quantitative properties.", "While it is beneficial to investigate the suitability of such approaches to provide evidence to quantitative goals in the safety argument, they are not considered in this paper.", "Conclusions This paper presents an approach to structure the safety case for employing formal methods in safety cases for automated driving systems.", "The artifacts from the formal analysis are used as formal specifications on the components of the system, thus providing a break-down of safety requirements to individual components.", "The formal proof provided by the formal method is used as evidence that the break-down is correct.", "If the safety requirement specify a quantitative target, then this is handled by assigning quantitative targets for the specifications on the components, in parallel to the formal model.", "This approach gives several potential benefits by limiting the effort of safety assurance, and by breaking down requirements on a complex system into well defined and unambiguous specifications on individual components.", "First of all, less effort is needed to verify that the break-down of the requirement is correct, as this is proven by the formal method.", "Second, the formal specifications on the components provide separation of the concerns, and may also allow for verification in open-loop settings instead of closed-loop settings for some components, which leads to more flexibility and less effort to verify the individual components.", "Case studies of complex systems are needed to validate this approach." 
], [ "Related Work", "ISO 26262 lists formal methods as techniques for ensuring dependability on the software architectural and unit design level [10].", "At these levels, the software is typically considered as open-loop input/output systems, and this is also the intended setting in the standard.", "In this context, formal methods can provide evidence that the software is dependable [11].", "Formal methods are not considered at other levels of the design in the standard, and hence there are no recommendations regarding formal methods applied to feedback systems.", "Formal methods have been used successfully in the automotive domain to prove that complex feedback systems are correct with respect to safety in a given environment [12], [13], [14], [15], [16], [17].", "However, these works do not demonstrate how the artifacts from the formal methods contribute to a convincing argument that safety is achieved.", "Previous research has established that there are opportunities for using evidence from formal methods to convincingly argue for safety in all levels of the design of safety-critical systems [18], [19].", "Moreover, it is argued that the assumptions on the environment are an important part of the model, and their inclusion allow more focus on the evidence and validation instead of ensuring that the break-down of requirements is correct [18], [19]; an argument which is supported by the contributions in Section  .", "There is also a challenge in how to treat probabilistic requirements when employing formal methods [19], which is also addressed in Section  .", "When using formal methods to argue that a system is safe, it is imperative that both the underlying formalism and the tools that are being used are correct.", "Obviously, there must be a convincing argument that this is indeed the case, but such argument is out of scope of this paper as it is typically available from literature associated with the respective methods.", "There are many aspects to consider with respect to the correctness of the formalism and any tool being used, and a generic argument that captures all these aspects is provided by [20] [20]." 
], [ "Safety Case", "A safety-critical system, such as an ADS, must behave such that safe operation is ensured in its entire ODD, where, commonly, safe is taken to mean that there is an absence of an unreasonable risk of harm.", "When a safety-critical ADS is developed, it must be ensured that its behavior indeed is safe, but it must also be justified by compelling evidence that the risk of harm is low enough.", "This latter part is required to persuade certification agencies, regulatory entities, and the general public.", "The justification of an ADS's safety can be compiled into a safety case, which is a structured argument, supported by a body of evidence that provides a compelling, comprehensible and valid case that a system is safe for a given application in a given operating environment [21].", "There are three principal elements in a safety case: requirements, arguments, and evidence [22].", "The safety case approach has been used in many safety critical industries to demonstrate safety, and is also recommended by automotive safety standards such as the ISO 26262 [10] and ISO/PAS 21448 [9].", "Since a safety case is used to demonstrate that a product is safe, it is imperative that its structure is clear, comprehensible, and accurate.", "The Goal Structuring Notation (GSN) is a standardized graphical argument notation [22], [23], which can be used to structure a safety case.", "It explicitly documents the individual elements of a safety argument and their relationships to the gathered evidence.", "GSN defines core elements, two types of relationships between the core elements, and an undeveloped element decorator, as shown in Fig.", "REF .", "The two relationships SupportedBy and InContextOf declare a relationship between a source element and a target element.", "The elements are linked together in a logical structure known as the GSN goal structure, which is a directed acyclic graph.", "The top-level goal in a goal structure is gradually refined through a series of more detailed goals until a direct link to evidence is made [22].", "The undeveloped element decorator indicates that a line of argument has not been developed in the current context.", "Figure: The core elements of a GSN goal structure.", "The line with a solid arrowhead denotes aSupportedByrelationship and the line with the hollow arrowhead denotes anInContextOfrelationship.", "The diamond indicates an undeveloped element." 
], [ "Formal Methods", "Formal methods are a class of mathematically rigorous techniques and tools used to specify, design, verify, and synthesize components, mainly by mechanizing rigorous reasoning about correctness of these components [24].", "As such, the field is very broad, and a wide plethora of tools are available for many different parts of the development process and at different levels of abstraction.", "This paper is concerned with formal methods applied to the function layer of the ADS, i.e., details of the hardware and the software are abstracted away.", "Formal methods are based on languages with formal syntax and semantics that leave no room for ambiguity.", "Common to all classes of formal methods is that a requirement on the system, in this case an ADS, is formalized into a specification that details correct behavior of the system.", "In the case of input/output systems, the formal specification relates the required output to certain input, but in this paper the specification details the allowed behavior over time for feedback systems.", "Since the safety requirements for an ADS describe disallowed and mandated behaviors over time, logical formalisms that support modelling and reasoning about properties with respect to time, such as Linear Temporal Logic (LTL) [25] and differential dynamic logic (dL) [26], are typically used to formalize the requirements.", "Often safety requirements are characterized as nothing bad shall happen [25], which is easily formalized using the modal operators to describe necessity in LTL and in dL.", "For instance, $\\Box \\phi $ in LTL asserts that the property $\\phi $ always holds and $[\\mathcal {M}]\\phi $ in dL asserts that after all behaviours of model $\\mathcal {M}$ , the property $\\phi $ holds.", "Given such a formal specification, formal verification and formal synthesis provide evidence of correctness of formal models of feedback systems in the form of a machine checked formal proof of the fulfillment of the specification.", "For formal verification, all parts in Fig.", "REF , and their possible interactions, are modelled in a formal language, and then the formal verification tool attempts to prove that the formal model fulfills the formal specification.", "For formal synthesis, the goal is instead to automatically construct, typically, a model of plan such that the resulting feedback system is guaranteed to fulfill the formal specification.", "In the case of synthesis, the models of sense, act, and the ODD are commonly referred to as the assumptions, whereas the required behavior of the feedback system is referred to as the guarantees.", "This distinction is not as common in the case of formal verification, but in this paper the modelled parts will be referred to as the assumptions of the system regardless of method." 
], [ "Proposed Safety Argument Approach", "For feedback systems like ADS, the break-down of requirements onto components can be difficult since requirements on the system-level refer to behaviors of the closed-loop system in the context of the ODD, whereas the component requirements specify the behavior at the interfaces between the components.", "The fulfillment of the component requirements must imply the fulfillment of the system requirements; if this is not the case, there is either a behavior disallowed by the system requirements that is allowed by the component requirements, or there is a behavior mandated by the system requirements that is not mandated by the component requirements.", "When the involved requirements are safety critical, this potential discrepancy between the behaviors that are disallowed or mandated at different levels might lead to an unsafe ADS.", "Compelling evidence that this potential discrepancy does not decrease the safety to an unacceptable level must be dealt with in the safety case [10], [27].", "The approach in this paper uses formal methods to prove that the break-down has no such discrepancy.", "The system requirement is formalized into a guarantee to be fulfilled, and the formal system model is composed of the formal models of the components and the ODD.", "The approach is to use these models as specifications for the components.", "The component specifications can be developed into formal contracts [28] that provide unambiguous requirements for the separate components.", "By this decoupling, specialized methods can be used to develop and verify the broken-down requirements.", "Hence, if the guarantee is proven to be fulfilled by the formal models, and if the verification of the components indicate that they fulfill their formal specifications, then the system requirement is fulfilled in the ODD, and there is compelling evidence for this fact in the form of a formal proof.", "For instance, the guarantee could be a formalization of there are no impacts with pedestrians.", "Likely, for the ADS to fulfill this guarantee, the component act must be able to provide some deceleration below some minimal level.", "More specifically, assume that the component plan requests an acceleration $a^{req} $ in a certain range, then act must ensure that the true acceleration $a$ of the vehicle fulfills $a \\le a^{req} $ .", "Then the property $a \\le a^{req} $ is considered the specification for the act component.", "In the case that the system requirement is quantitative, specifying a probability or an occurrence rate of an event, then the formal guarantee is specified such that the event must not occur.", "In the break-down process, the models of the components are assigned a probability or occurrence rate with which they may be violated, such that the cumulative violations of the assumptions in the formal model does not exceed what is allowed by the system requirement.", "The approach is illustrated in Section   with an example where a safety requirement with respect to pedestrians is broken down to safety requirements on the closed-loop system.", "It is then shown how the artifacts of the formalization of these requirements can be used to structure an argument in a safety case.", "The argument is illustrated graphically by a GSN goal structure in Fig.", "REF to make the relations between requirements and arguments clear." 
], [ "Precautionary Safety and Risk Norm", "The example in this paper is based on the concept of Precautionary Safety (PCS) [29].", "PCS attempts to ensure safety by adjusting an ADS's behavior based on its capabilities, external conditions, and expected exposure to incidents.", "The notion of safety in this context follows the concept of Quantitative Risk Norm (QRN) [30], where certain accident types with certain severity (in terms of human injury) are assigned a minimum allowed mean time between accidents (risk norm).", "Based on an assessment of Swedish national accident injury data for pedestrians and an estimation of total hours driven, an acceptable mean time between accidents can be established.", "The severity is highly correlated with impact speed [31], so the allocated risk norms are in Table REF given based on impact speed.", "Table: Risk Norms for Pedestrians.", "Different Mean Time Between Failures for Different Accident Severities.The ADS is intended to operate up to a speed of 70km/h on urban roads, and up to 100km/h on highways, so these are the environments that make up the ODD.", "The allowed failure rate of the ADS depends both on the risk norm and on the incident exposure rate in the ODD, i.e., the mean time between the occurrence of pedestrians.", "This exposure rate is different for different road segments, as pedestrians are more likely to appear on the road in low-speed urban settings than on highways with free-flowing traffic.", "The assumed exposure rates for urban and highway driving are given in Table REF (c.f.", "[29] [29]).", "Table: Exposure Levels with Respect to Speed and Road TypeThe QRN argument that the ADS feature is sufficiently safe with respect to pedestrians is illustrated graphically by the goals G1, G2, and G3 in Fig.", "REF .", "G1 represents the top-level safety requirement that the ADS feature shall be safe with respect to pedestrians.", "G1 is then made more specific in G2 by relating safety to its definition of having sufficiently high mean time between accidents.", "G2 is fulfilled via the strategy S1, which argues that the ADS is sufficiently safe whenever the risk norms in Table REF are fulfilled, as exemplified by G3.", "Note that the four goals corresponding to the other risk norms in parallel to G3 are not shown.", "G3 is broken down based on exposure level, corresponding to the different road types.", "Each combination of exposure level and risk norm gives rise to an impact probability per event, where the maximum allowed impact probability in G4 is calculated by the ratio of exposure and risk norm.", "Not shown in parallel to G4 in the branch rooted in G3 are the six other exposure classes from Table REF .", "All other goals in parallel to G3 are supported by an analogous argument structure.", "This paper and the PCS paper [29] agrees on the safety case so far, but differs below G4.", "Figure: GSN illustrating the argument made in this paper.", "A bigger version is available at https://doi.org/10.5281/zenodo.7142341.In [29] [29], an already implemented reactive collision-avoidance module's capability to avoid collisions by braking is determined in simulations.", "The capability is presented as the probability of a certain impact speed given the speed before braking starts.", "The mean time between certain impact speeds for certain roads is then calculated as the ratio of the exposure on that road and the probability of that impact speed given the speed of the road.", "The mean time between impact speeds is then compared to the QRN to 
determine the maximum allowed speed that the ADS may drive on that road; if the mean time between accidents of a certain impact speed for a certain initial speed is lower than the QRN, then the ADS may not drive that fast on that road.", "For instance, the exposure on urban roads with speed 70 km/h is 10 000 h, and the probability of an impact speed of 10–20 km/h for a speed of 60 km/h is 5%.", "The ratio is 200 000 h, considerably less than the QRN of 1 000 000 h, so the ADS cannot be allowed to drive at 60 km/h on urban roads with speed 70 km/h.", "Thus, safety is ensured by taking precaution; the maximum speed is adapted based on the expected capability of mitigating impacts and the exposure to incidents.", "However, this way of applying the PCS concept assumes an already implemented ADS.", "If it is desired to drive at 70 km/h on urban roads, PCS indicates that the probability of impact speeds of 10–20 km/h ought to be at most 1%.", "One way of reaching such performance would be to iteratively implement improvements and simulate and test until the capability is sufficiently safe for driving at 70 km/h.", "This paper shows how formal methods can be used to instead break down the requirement to components to achieve a systematic development process.", "It is also shown how the break-down through formal methods helps with structuring an argument supporting the fulfillment of the safety case.", "The main idea of PCS still applies; the ADS ought to take precaution by considering what might happen, not solely what is happening, to avoid unacceptable risk of harm.", "Formal methods explore the entire state space of the formal model in order to prove the guarantee, so they fit well as a tool to ensure that all possible events are considered by the ADS." ], [ "Formal Methods in the Safety Case", "To ease the development effort, the goal G4 in Fig.", "REF may be broken down into G6, G7, and G8, which are assigned to the respective components in Fig.", "REF , and in that way separate the concerns for design and verification of the different components.", "This is a typical way to combat complexity by allowing the application of specialized tools and methods to the design and verification of the components [10], [28].", "As stated earlier, these components' goals must imply the goal G4 in Fig.", "REF , lest the risk of harm be unacceptable.", "Formal methods can provide proofs as evidence that, in the context of the ODD, the component models fulfill the guarantee.", "However, the guarantee is a qualitative logical formula, whereas G4 is a quantitative goal with a probability.", "To deal with this, the strategy for fulfilling G4 is split into one qualitative part and one quantitative part, as can be seen in Fig.", "REF , where G4 is supported by the two strategies S3 and S4.", "The idea here is to disregard the probability in G4 and formalize the remainder as the guarantee, and then find a formal model that satisfies that guarantee.", "To reintroduce the probability, it is finally argued that the goal G4 is fulfilled as long as the assumptions in the formal model are violated with at most the same probability as in G4.", "The entire argument hinges on the correctness of the formal model with respect to the guarantee, and this is captured by G5.", "It is made explicit with the context relation that the formal model is composed of the assumptions on the ODD and the models of the components sense, plan, and act, as well as the guarantee against which correctness is evaluated.", "The sole evidence that G5 is fulfilled is provided
by the machine-checked formal proof.", "Now assume that the behaviors of the components sense, plan, and act fulfill their corresponding formal assumptions with probabilities such that the entire formal model is fulfilled with a probability of at least 0.99.", "Then G4 is fulfilled, because the guarantee that “no impacts occur” is violated with a probability of at most 0.01, which in turn means that impacts in the range 0–10 km/h can occur with a probability of at most 0.01.", "Therefore, G4 is broken down into G6, G7, and G8, each detailing the probability with which the assumed behavior of sense, plan, and act, respectively, may be violated.", "Here, the probabilities are assigned arbitrarily, but it is ensured that their sum does not exceed the probability of G4.", "It can be argued via strategy S3 that the break-down of G4 into G6, G7, and G8 is correct, and it can be argued via strategy S4 that G4 is fulfilled because G6, G7, and G8 are fulfilled.", "The component models can serve as formal specifications for the individual components, and relevant standards may be used to develop and verify the three components according to these specifications [9], [10].", "To be a bit more specific, it is now illustrated in more detail what the context of G5 may look like.", "Since the intention is to show how a formal-methods approach could be used in the safety case and not to discuss any specific formal method in detail, the approach in this paper is illustrated with abstract artifacts and simple models for the three components in Fig.", "REF .", "The first type of artifact, the guarantee $\mathcal {R}1$ , is the formal specification based on G4.", "$\mathcal {R}1$ : There are no impacts with pedestrians.", "$\mathcal {R}1$ can be formalized in different ways depending on the formalism used.", "For example, $\mathcal {R}1$ can be formalized in LTL using the formula $\Box \lnot \mathit {collision} $ , where $\mathit {collision}$ is a predicate describing the undesirable property of the occurrence of an impact; and in dL using the formula $[\mathcal {M}](\lnot \mathit {collision})$ , where $\mathcal {M}$ is the formal model.", "Irrespective of the formalism used, formal models of the components are required in order to either synthesize a controller that guarantees $\mathcal {R}1$ or verify that a given design fulfills $\mathcal {R}1$ .", "Thus, the second type of artifact is the formal models of the ADS components in Fig.", "REF .", "As for the formal specification, the ADS components can be modelled in different ways depending on the formalism of choice.", "To formalize the components, parameters about the ADS vehicle and pedestrians are considered, as shown in Table REF .", "Table: Parameters Considered in the ADS Model.", "This paper considers a plan component with a safety controller, such as the one in [16], to guarantee safety.", "For the sake of brevity, $\mathcal {M}1$ presents a very abstract model of plan.", "The required acceleration $a^{req}$ is set to the minimum acceleration $a^{min}$ if the predicate $\lnot \mathit {safe} $ is satisfied.", "This predicate can be used to describe decision-making conditions, such as checking whether there is some choice of $a^{req} \in [a^{min}, a^{max} ]$ such that, later on, stopping before the pedestrian is infeasible.", "$\mathcal {M}1$ ($\mathit {plan}$ ): $\quad \lnot \mathit {safe} \rightarrow a^{req} = a^{min} $", "Of course, to prove that $\mathcal {M}1$ fulfills $\mathcal {R}1$ , certain assumptions must be made on the other
components.", "Typically such assumptions are identified as a result of the formal modelling and analysis.", "In this context, consider $\\mathcal {M}$REF , $\\mathcal {M}$REF , and $\\mathcal {M}$REF as the assumptions on the ODD, sense, and act, respectively.", "The assumption on the ODD describes that the ADS vehicle velocity $v$ is non-negative, and it defines the limits on pedestrian velocity $v_p $ .", "$\\mathcal {M}$ 2 ($\\mathit {ODD}$ ) $\\quad v \\ge 0 \\wedge 0 \\le v_p \\le 10$ The range of allowed values for $v_p $ defines what may happen in the environment, and the exhaustiveness of formal methods make sure that all different combinations with all different timings are evaluated.", "A controller that fulfills the requirement in the presence of $v_p $ certainly takes precaution for what might happen, and not only reacts to what is happening.", "$\\mathcal {M}$REF describes that, if the true position $x_p $ of the pedestrian is within the detection range of the sensor, then the error in the estimated position of the pedestrian $(\\hat{x}_p- x_p)$ is at most $\\epsilon $ .", "Furthermore, if $x_p $ is in front of the ADS vehicle, then $\\hat{x}_p $ is also estimated to be in front.", "$\\mathcal {M}$ 3 ($\\mathit {sense}$ ) $\\left(x_p \\le \\mathit {range} \\rightarrow \\hat{x}_p- x_p \\le \\epsilon \\right)\\, \\wedge \\, \\left(x_p \\ge x \\rightarrow \\hat{x}_p \\ge x\\right)$ The tolerances make it possible to assign probabilities to the fulfillment of the specifications.", "The exhaustiveness of formal methods evaluates all combinations, so the exact probability distribution does not need to be known.", "The assumptions on the act described by $\\mathcal {M}$REF state that if the requested acceleration $a^{req}$ is within the bounds, then the tracking error tolerance between actual acceleration and requested acceleration is at most $\\delta $ .", "$\\mathcal {M}$ 4 ($\\mathit {act}$ ) $\\quad a^{min} \\le a^{req} \\le a^{max} \\rightarrow a \\le a^{req} + \\delta $ Assume that the formal model $\\mathcal {M}$ composed by $\\mathcal {M}$REF – $\\mathcal {M}$REF is correct with respect to the guarantee $\\mathcal {R}$REF , and that there exists a formal proof $\\mathcal {P}$ that this is indeed the case.", "The proof $\\mathcal {P}$ provides enough evidence that G5 in Fig.", "REF is fulfilled.", "For G7 to be fulfilled, the realized controller in the component plan must fulfill the behavior specified by $\\mathcal {M}$REF .", "This may be assured, for instance, by following the recommendations in ISO 26262 [10] to achieve a fault tolerant realization.", "Since $\\mathcal {M}$REF is a simple condition relating inputs to outputs, much of its verification can be performed in open loop, which results in less effort to collect evidence that G7 is fulfilled.", "This paper does not consider strategies to validate the ODD, so in A1 in Fig.", "REF , $\\mathcal {M}$REF is considered an assumption in the argument, which means that there is no justification or evidence that $\\mathcal {M}$REF is fulfilled.", "Obviously, to ensure real-life safety, this assumption must be validated for the roads that the ADS vehicle is allowed to drive on.", "That endeavor may benefit greatly from having a formalized formulation of assumed properties of the ODD.", "The model $\\mathcal {M}$REF specifies the required behavior of sense for the guarantee to be fulfilled, and G6 details the probability with which this behavior may be violated.", "This requirement can, to a large extent, be verified in open 
loop on recorded data, provided that the behavior in the recordings adheres to the assumptions of $\mathcal {M}2$ .", "This could provide a substantial benefit, since the data may be collected before all components are realized, and because the recorded data remains relevant after the implementation of the components has changed.", "This is not the case for the approach by [29] when assessing the capability of the complete vehicle.", "Furthermore, the verification of sense does not need to ensure that there are enough outcomes with different impact speeds, as the requirement is independent of the closed-loop outcome.", "This last point can potentially save effort both in data collection and verification time.", "The last model, $\mathcal {M}4$ , which specifies the acceleration tracking performance, must still be verified in closed-loop conditions.", "However, the break-down of requirement G4 into G8 makes the verification independent of the other components, which may simplify the verification method.", "To realize sense and act under the goals G6 and G8, the development process could, for instance, employ ISO/PAS 21448 and ISO 26262." ], [ "Discussion", "The approach in Section   is one possible argument structure to include formal proofs of correctness as evidence in safety cases.", "Whether the approach is beneficial depends on the overhead of the formal methods in comparison to the verification effort; as such, the approach may be beneficial for some systems and not for others, and it might be beneficial for only a subset of the requirements in one system.", "The benefits are also dependent on the formal modelling and the system decomposition.", "Some system decompositions may not be possible to formally model in a given formalism, and the formal specifications of a component might fail to be verifiable, or be exceedingly hard to verify.", "Such complications will trigger redesigns that could be costly.", "On the other hand, a notable benefit is that the presented approach in itself is agnostic to the chosen methods and technologies with which the components are realized.", "Admittedly, some might be more amenable to being used in conjunction with the formal specifications.", "Furthermore, if formal methods are employed at an early stage in the project, they may catch inconsistencies and design flaws that would be costly to find during system-level testing.", "The process of formalizing requirements can itself be beneficial, and the approach detailed in this paper allows more utilization of such work.", "The proposed approach uses a strategy to split a quantitative goal into one qualitative and one quantitative part.", "The qualitative part disregards the probabilities involved and employs formal methods to provide evidence that the qualitative part is fulfilled.", "However, there exist formal approaches that can prove the correctness of stochastic systems, such as probabilistic model checking [32], where quantitative extensions of temporal logic are used to specify quantitative properties.", "While it is beneficial to investigate the suitability of such approaches for providing evidence for quantitative goals in the safety argument, they are not considered in this paper."
], [ "Conclusions", "This paper presents an approach to structure the safety case for employing formal methods in safety cases for automated driving systems.", "The artifacts from the formal analysis are used as formal specifications on the components of the system, thus providing a break-down of safety requirements to individual components.", "The formal proof provided by the formal method is used as evidence that the break-down is correct.", "If the safety requirement specify a quantitative target, then this is handled by assigning quantitative targets for the specifications on the components, in parallel to the formal model.", "This approach gives several potential benefits by limiting the effort of safety assurance, and by breaking down requirements on a complex system into well defined and unambiguous specifications on individual components.", "First of all, less effort is needed to verify that the break-down of the requirement is correct, as this is proven by the formal method.", "Second, the formal specifications on the components provide separation of the concerns, and may also allow for verification in open-loop settings instead of closed-loop settings for some components, which leads to more flexibility and less effort to verify the individual components.", "Case studies of complex systems are needed to validate this approach." ] ]
2210.07798
[ [ "Optimizing optical potentials with physics-inspired learning algorithms" ], [ "Abstract We present our new experimental and theoretical framework which combines a broadband superluminescent diode (SLED/SLD) with fast learning algorithms to provide speed and accuracy improvements for the optimization of 1D optical dipole potentials, here generated with a Digital Micromirror Device (DMD).", "To characterize the setup and potential speckle patterns arising from coherence, we compare the superluminescent diode to a single-mode laser by investigating interference properties.", "We employ Machine Learning (ML) tools to train a physics-inspired model acting as a digital twin of the optical system predicting the behavior of the optical apparatus including all its imperfections.", "Implementing an iterative algorithm based on Iterative Learning Control (ILC) we optimize optical potentials an order of magnitude faster than heuristic optimization methods.", "We compare iterative model-based offline optimization and experimental feedback-based online optimization.", "Our methods provide a new route to fast optimization of optical potentials which is relevant for the dynamical manipulation of ultracold gases." ], [ "Introduction", "The precise control and manipulation of light fields are required for many diverse areas of research ranging from microscopy [1] to quantum simulators [2].", "In particular, optical beam shaping constitutes a common task, for which wavefront manipulating devices, such as Spatial Light Modulators (SLM) or Digital Micromirror Devices (DMD), are especially suited.", "The beam shaping is important for experiments with ultracold gases, where optical dipole potentials have proven to be a versatile tool to provide the demanded level of control.", "In combination with a DMD, almost arbitrary shaping of the optical potential is possible, both in 1D [3], [4], [5], [6] and 2D [7], [8], [9], [10], [11] settings.", "These potentials can, as an example, be used for generating homogeneous box potentials in ultracold gases experiments [9].", "In addition to static potentials, dynamical perturbations and time-averaged potentials can also be created, by projecting sequences of patterns onto the DMD [4], [12].", "There exist two main approaches to the optimization of optical potentials: precalculating DMD patterns based on physical assumptions [13], [14] and models [7] or iteratively updating DMD patterns based on experimental feedback [7], [6].", "The first approach avoids the need for feedback measurement but is limited by model precision and thus requires detailed system characterization, while the second gives the most accurate results but typically requires a large number of experimental iterations.", "Here we implement a purely data-driven approach that combines different learning algorithms to get the best of both worlds.", "Using a digital twin of the system makes it possible to shape different types of target potentials without the need for experimental feedback.", "Yet because of speckles caused by imperfections, a model featuring just a few known experimental parameters (parametric model) can only predict its behavior up to limited precision and might not always be suitable for precise potential shaping.", "In some cases, though, such as in “clean\" systems with pin-hole filtering, simulations combined with input beam characterization deliver very low errors on precalculated DMD patterns [7].", "We improve the prediction performance compared to parametric models by employing 
data-driven learning techniques.", "Learning methods are already used for estimating the transfer matrix of complex optical systems [15] and provide good results.", "As their main disadvantage, they generally require a large amount of data for sufficient training.", "In our approach, we develop a physics-inspired model that requires a smaller amount of training data and thus saves experimental time.", "Despite any improvement in model precision, the effect of residual error sources can only be mitigated by using experimental feedback [7], [6].", "Therefore, we introduce a feedback optimization method based on Iterative Learning Control (ILC) [16], [17].", "This method is directly applicable to various types of experiments with wavefront-manipulating devices.", "Since system knowledge is directly employed in the update law that adjusts the DMD setting based on feedback, the number of required experimental iterations is significantly reduced compared to heuristic methods [6].", "Because of the learning-based nature of both the ML model and the ILC method, they benefit from a high predictability of the system behavior, which can be achieved by using light sources with low coherence.", "White light was used for trapping Bose-Einstein condensates (BECs) in order to minimize the impact of speckles on density fluctuations [18], and the advantages of using a superluminescent diode (SLED) in comparison with a single-frequency laser were also shown in combination with a DMD [12].", "Indeed, we find that using the SLED improves the model prediction results, while the feedback-based methods give errors comparable to measurement errors for both light sources.", "In perspective, the ability to efficiently shape a large number of potentials using a DMD is a precious tool for the dynamical manipulation of quantum gases.", "The realization of non-harmonic optimal protocols [19] and quantum thermal machines [20] are but two examples among countless possible applications.", "The paper is organized as follows: in Sec.", "we describe our experimental setup and test the coherence properties of the light sources by measuring interferences of a SLED in comparison with a single-frequency laser; in Sec.", "we introduce the physics-inspired learning model used for representing the optical system including its imperfections; in Sec.", "we use iterative learning control algorithms for optimizing optical dipole potentials; in Sec.", "we summarize our results."
], [ "Experimental setup", "In this section, we describe our experimental setup and compare the optical coherence properties of a single-frequency laser and broadband SLED.", "The optical apparatus was designed and optimized for manipulating 1D optical dipole potentials in our atom chip experiment [21].", "The simplified optical setup is shown in Fig.REF (a).", "In the setup, the light source is easily interchangeable: we here use either a superluminescent 110 mW fiber-coupled diode with a spectral width of 12.7 nm and a central wavelength of 837 nm or a single-frequency 780 nm laser.", "The fiber is connected to a collimator and illuminates the DMD with collimated light.", "The DMD is placed in the focus of the first lens.", "The first two lenses together form a 4-f telescope.", "The focal point between the two lenses is a Fourier plane with respect to the DMD's image plane.", "The slit, which is adjustable in the transverse direction and is parallel to the observable 1D optical potential, is placed in the Fourier plane and closed to 0.625 mm.", "Being placed in the Fourier plane, the slit acts as a low-pass filter for the transverse spatial frequencies of the light field.", "This means it cuts off high-frequency $k$ -modes of the DMD pattern effectively leading to a lowering of the resolution.", "The other 4 lenses form an objective designed to correct chromatic aberrations for the broadband SLED.", "The system acts with a demagnification of 17.5 and resolution $\\sigma _\\parallel =3.3\\,\\mu $ m in the direction parallel to the slit (the resolution is given by our atom chip experiment for which the optical apparatus was designed and optimized) and $\\sigma _\\perp =25$ $\\mu m$ in the transverse direction (orthogonal to the slit).", "In the end, an image of the optical potential is acquired with a CCD camera sensor with 2.4 $\\mu m$ pixel size.", "We use a $10.8$ $\\mu m$ pitch near-infrared DMD, which is a 2D array of 1280x800 micromirrors.", "In the image plane (on the CCD camera) 5 DMD pixels correspond to $\\sigma _\\parallel $ in the longitudinal direction and 40 DMD pixels correspond to $\\sigma _\\perp $ in the transverse direction.", "Since many pixels in the transverse direction contribute to the local intensity (close to the center pixels contribute almost equally to the optical potential while far off-center pixels are contributing less), we can perform very smooth gray scaling, which is very important for high-precision optimization of 1D optical dipole potentials via DMD.", "Yet, while a narrower slit allows for better gray scaling and with that lower discretization error and higher accuracy, it leads to significantly reduced light intensity.", "The main advantage of using the SLED compared to the single-frequency laser as a light source is its reduced temporal coherence [22], [23] which is why random interferences (speckles) are reduced compared to the laser (see Fig.REF (b,c)).", "We anticipate the speckles influencing the predictability of the system.", "Therefore, we characterize coherence effects for both light sources.", "More specifically, we want to understand whether we can consider the individual pixels as coherent or incoherent sources.", "The imaged intensity is in general given by the squared sum of the electric fields of the individual pixels.", "In the incoherent case, it reduces to the sum of intensities from the individual pixels such that the system behaves linearly with respect to intensity.", "We prepare two series of patterns, each of them containing 
"We prepare two series of patterns, each of them containing two superpixels moving away from each other (by superpixel, we understand a square pattern of 3x3 micromirrors).", "In the image plane on the CCD camera, the size of the superpixel is below the resolution of the optical apparatus, and the superpixel can be considered as a point source, at the same time minimizing diffraction from individual DMD mirrors.", "We measure the intensity in the center between the two superpixels.", "First, we project both superpixels simultaneously, such that their optical fields can interfere.", "Second, we only project one of the superpixels onto the DMD at a time to infer the intensity of a single superpixel.", "The intensity measured with the first configuration is then compared to the sum of the intensities measured with the second configuration.", "In the first series of patterns (purple squares in Fig.REF ), the pair of superpixels moves orthogonally to the slit orientation (transverse direction), where the slit is closed to 0.625 mm and the resolution is lowered to $\sigma _\perp =25\,\mu $ m.", "For both the SLED and the laser in Fig.REF (b,c), the slit diffraction maxima are clearly distinguishable.", "For the laser, constructive interference can be observed in the 0th maximum and destructive interference in the 1st.", "For the SLED, interference is observable on a very small scale (below $\sigma _\perp $ ) and, in general, the behavior is linear.", "In the second series (gray squares in Fig.REF ), the superpixels move in the longitudinal direction, where the slit is fully open to 13 mm.", "For both the SLED and the laser, in Fig.REF (d,e), the behavior of the system can be considered linear.", "It is worth mentioning that the resolution $\sigma _\parallel =3.3\,\mu $ m is very close to the camera pixel size of 2.4 $\mu $ m, and small interference effects are hardly accessible on this scale." ], [ "Physics-inspired learning model", "In this section, we describe how we use Machine Learning (ML) tools and experimental data to obtain a digital twin of the system.", "It should represent the experimental system precisely, while at the same time having a small number of parameters to reduce the required number of experimental measurements.", "In the following, we outline our approach based on a physics-inspired model.", "To formulate our problem, we treat the DMD as a 2D array to which we associate a binary configuration matrix $u_{ij}$ .", "For each pixel, a value of 1 corresponds to the mirror position that reflects light into the optical system, while 0 corresponds to the mirror position that reflects light away from the system.", "The potential ${V}$ is a vector obtained by selecting one row of camera pixels in the imaged output.", "We refer to the coordinate along this row (and therefore, parallel to the slit) as $z$ .", "We then cast model training in the language of regression [24]: given a set of data points composed of $K$ couples $(u^{(k)}_{ij}, V^{(k)}_i)$ , and a family of functions $\mathcal {M}_{{\alpha }}$ , parametrized by ${\alpha } = [\alpha _1, \ldots , \alpha _N]$ ,
we find the values of ${\alpha }$ that minimize a loss function $L(u^{(k)}_{ij}, V^{(k)}_i, {\alpha })$ defined below.", "Regression problems are known to be affected by the bias-variance tradeoff [25].", "This implies that large models, having $\dim ({\alpha }) \gg K\dim ({V})$ , tend to approximate the training data well (low bias) but fail to accurately generalize the prediction to test data (high variance).", "This phenomenon is known as overfitting, and it puts a challenging limitation on our program.", "In order to alleviate this problem, our approach consists of developing a physics-inspired model (see Fig.REF (a)) based on the knowledge we have about optical systems.", "This way, we can reduce the number of its coefficients to the minimum necessary to represent the system precisely enough, while avoiding overfitting.", "More specifically, we employ a dimensional reduction technique which we refer to as the “virtual input” (cf. [17]).", "The virtual input measures the relative optical intensity induced by the pixels in the corresponding column.", "Since the narrow slit averages along the transverse direction, we can use the pixels on each column to create a set of gray scales at each point along the longitudinal direction $z$ .", "In the limit of a very narrow slit, all the pixels in a column contribute almost equally to the final intensity, so we can define the virtual input ${\nu }$ as the vector: $\nu _j = \frac{1}{N_{row}}\sum _i u_{ij}\,.$", "This quantity represents the fraction of pixels that are turned on in each column, with respect to the total number of rows inside the area of interest (AOI).", "The AOI has a size of $(N_{row},N_{col})$ , which is determined by the beam size and position on the DMD.", "In the case of non-zero slit widths, the mapping between a virtual input according to Eq.", "(REF ) and the actual relative optical intensity due to the column pattern is non-linear and depends on the beam and on the transverse shape of the point spread function (PSF).", "Therefore, we have to take this effect into account when we design our model.", "Once $u_{ij}$ is transformed into $\nu _j$ , some information is lost, since different binary matrices map onto the same virtual input.", "To invert this mapping, we have to choose a subset of binary matrices over which the mapping is one-to-one.", "We define the inverse map by turning on the pixels one by one on alternating rows above and below $i_c$ , according to the order $i_c, i_c+1, i_c-1, i_c+2, i_c-2,...$ .", "More compactly: $u_{ij} = \theta \left(\frac{N_{row} \nu _j}{2} - \vert i - i_c - \frac{1}{4}\vert \right)\,,$ where $\theta (x)$ is the Heaviside step function and $i_c$ the central row of the AOI.", "In practice, the subset of binary matrices defined by this mapping is wide enough so that it can be used for shaping arbitrary potentials.", "The model ${V} = \mathcal {M}_{{\alpha }}({\nu })$ that we propose for the task is depicted in Fig.REF (a), and reads as follows: $V_i = \left|\sum _{j=-M}^{M} g_j P(\nu _{i-j}, q_1, ..., q_{N_P})\,p_{i-j} \right|+ c_i$ where $P(x,q_1,...,q_{N_P}) = \sum _{n=1}^{N_P} q_n x^n$ is a polynomial of degree $N_P=5$ with no constant term, and the vector of parameters ${\alpha }$ is the concatenation of ${g}$ , ${q}$ , ${p}$ and ${c}$ with $\dim ({\alpha }) \sim 1400$ .", "The polynomial function $P$ represents in an abstract way the non-linear relation between virtual input and resulting local relative intensity, as discussed above."
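For concreteness, the following is a numpy sketch — ours, with placeholder parameter values rather than the trained coefficients — of the three ingredients just defined: the virtual input $\nu_j$, its inverse mapping via the Heaviside rule, and the forward model for $V_i$.

```python
import numpy as np

def virtual_input(u):
    """nu_j = (1/N_row) * sum_i u_ij for a binary (N_row, N_col) pattern u."""
    return u.mean(axis=0)

def inverse_map(nu, n_row, i_c):
    """Turn on pixels on alternating rows around i_c (the Heaviside rule)."""
    i = np.arange(n_row)[:, None]
    return (n_row * nu[None, :] / 2 - np.abs(i - i_c - 0.25) > 0).astype(int)

def forward(nu, g, q, p, c):
    """V_i = | sum_j g_j P(nu_{i-j}) p_{i-j} | + c_i with zero padding."""
    poly = sum(qn * nu ** (n + 1) for n, qn in enumerate(q))  # P has no constant term
    return np.abs(np.convolve(poly * p, g, mode="same")) + c

n_row, n_v = 800, 650
nu = np.clip(np.random.default_rng(0).normal(0.5, 0.1, n_v), 0, 1)
u = inverse_map(nu, n_row, i_c=n_row // 2)
assert np.abs(virtual_input(u) - nu).max() <= 2 / n_row     # round trip is consistent
g = np.exp(-np.linspace(-3, 3, 71) ** 2)                    # placeholder PSF kernel
V = forward(virtual_input(u), g, q=[1.0, -0.1, 0, 0, 0], p=np.ones(n_v), c=np.zeros(n_v))
```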
"The remaining parameters are chosen to mimic features of the physical system: the convolutional kernel ${g}$ of size $N_g = 2M+1 \sim 71$ plays the role of the longitudinal PSF, the position-dependent term ${p}$ , of size $N_V \sim 650$ , mimics the inhomogeneous light beam, and the offset term ${c}$ , also of size $N_V$ , gives the background.", "In the convolution sum we employ zero padding, which means that the summand is zero if $i-j$ falls outside the index range $[1, N_V]$ .", "For the training, we choose the loss function as: $L({\nu }^{(k)}, \mathbf {V}^{(k)}, {\alpha }) = \sum _{k=1}^K \frac{\sum _{i=1}^{N_V} | [\mathcal {M}_{{\alpha }}({\nu }^{(k)})]_i - V^{(k)}_i |}{\sum _{i=1}^{N_V} V^{(k)}_i}\,,$ which is the mean absolute error between prediction and measurement, normalized by the average potential itself.", "This way, potentials with different average values will contribute equally during the training.", "To test the performance of the ML model, we create a data set of 10,000 random virtual inputs by sampling a white-noise probability distribution.", "We filter the sample with Gaussian filters with 10 values of $\sigma _{data}$ ranging from $0.2$ to 100 DMD pixels (which corresponds to the range $[0.1,65]\,\mu $ m mapped to the image plane).", "This way, the dataset was subdivided into 10 subsets with varying upper spatial cutoff frequency.", "For each virtual input, the corresponding potential was measured and stored (along with the corresponding input).", "We tested the dependence of the test loss on the training dataset size (see Fig.REF (b)).", "Both the training and test datasets were assembled by mixing the subsets with $\sigma _{data} \ge 2.0\ \mu $ m.", "The model is then trained on data chunks of increasing size $K_{train}$ , while keeping the test dataset size fixed to $K_{test}=300$ .", "This sequence is repeated 4 times, choosing new data sets each time to compute the standard deviation of the test loss.", "Even with $K_{train}=8$ , the test loss is already below $3\%$ , and the improvement for $K_{train} \sim 10^2$ is around half a percent.", "Adding even more data points does not appreciably improve the prediction quality.", "This result is particularly important, since we aim for methods that are readily applicable to atomic density data measurement.", "In a typical experiment with trapped ultracold atomic gases, taking more than $K \sim \mathcal {O}(10^2)$ atom-density pictures (with averaging over a few shots each) for potential optimization is a time-costly procedure.", "The fact that the model we developed can be trained with fewer than 100 potentials means that it is in principle possible to employ a data set obtained by atom-density estimation of the potential [6].", "To further analyze the learning process, we tested the dependence of the test loss on the cutoff frequency (see Fig.REF (c,d)) of both the training and test data sets.", "The frequency subsets are not mixed.", "Each time, we choose the cutoff frequencies of the training ($\sigma _{train}$ ) and test ($\sigma _{test}$ ) datasets independently.", "The dataset sizes for training, $K_{train} = 160$ , and for testing, $K_{test} = 39$ , are fixed, and this sequence is repeated 5 times.", "Fig.REF (c) shows qualitatively the test loss as a function of $\sigma _{train}$ and $\sigma _{test}$ .", "The best results are obtained close to the diagonal, that is, where $\sigma _{train} \sim \sigma _{test}$ , so that training and test data look most similar."
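The frequency-filtered random inputs described above can be generated along these lines (a sketch of ours; sizes and $\sigma_{data}$ values are placeholders):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)

def random_virtual_inputs(n_samples, n_v, sigma_data):
    """White-noise virtual inputs smoothed to an upper spatial cutoff frequency."""
    white = rng.uniform(0.0, 1.0, size=(n_samples, n_v))
    smooth = gaussian_filter1d(white, sigma=sigma_data, axis=1)
    lo = smooth.min(axis=1, keepdims=True)
    hi = smooth.max(axis=1, keepdims=True)
    return (smooth - lo) / (hi - lo)        # rescale each sample to [0, 1]

# One subset per cutoff, mimicking the subdivision described above:
subsets = {s: random_virtual_inputs(1000, 650, s) for s in (0.2, 2.0, 20.0, 100.0)}
```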
"The test loss becomes worse when $\sigma _{train}$ or $\sigma _{test}$ are around or below the DMD pixel size (magenta line on the plot), indicating a possible mismatch between the behavior of the system and the ML model on the scale of the DMD pixel.", "For a more quantitative interpretation, we show two different regimes in Fig.REF (d), where a representative scenario of the training on low frequencies (dark blue) is compared to another curve representing the high-frequency case.", "In the first case, the test loss exhibits a low plateau above the DMD pixel size and an abrupt increase below, corresponding to a breakdown of the low-frequency-trained model in the high-frequency regime.", "The second curve instead shows how including the high frequencies during the training does not solve the problem, as now the prediction performance severely deteriorates along the whole frequency range." ], [ "Potential optimization", "In this section, we introduce an optimization algorithm based on iterative learning control (ILC) methods, inspired by [17].", "The algorithm was used to optimize different target potentials on the experimental setup, and we compared the results with the heuristic optimization method described in [6] (see Sec.", "for details)." ], [ "Online and Offline ILC", "ILC methods employ measurements of the considered output trajectory to iteratively solve a reference tracking problem, i.e., to find an input trajectory such that the output of a system follows a desired target trajectory as closely as possible, even in the presence of model errors and uncertainties.", "The price to pay for this property is the requirement of running in a kind of feedback loop using experimental data.", "Therefore, we show how to employ ILC algorithms using feedback either from the physical experiment, referred to as “online” ILC, or from its digital counterpart, the physics-inspired model presented in Sec. , referred to as “offline” ILC.", "See Fig.REF for a schematic of the ILC algorithm.", "The only difference between the algorithms is that in the first case the potential ${V}$ is measured, while in the second it is predicted by the model.", "This way, previous (training) data is structured by the ML model, while we can seamlessly improve beyond the predictive capability of the model through further online iterations.", "Let us call ${\nu }^n$ the virtual input at the $n$ -th iteration, and ${e}^n = {V}^{tar} - {V}^n$ the deviation from the target.", "Following standard ILC approaches, the correction of the virtual input is obtained by convolution, denoted by $*$ (see Sec.", "), with an update filter ${L}_{\nu }$ , and is then added to the old virtual input to update it, i.e., ${\nu }^{n+1} = {\nu }^{n} + {L}_{\nu } * {e}^n\,.$", "The process is repeated until either convergence or the desired error level is reached.", "In order to choose an appropriate update filter, we approximate the system in a linear and time-invariant form ${V} \approx {g}_z * {\nu }\,.$", "In practice, ${V}$ is recorded for a trial input and then fitted using a Gaussian guess for the longitudinal PSF, $g_z(z) = A \exp (-(z-z_0)^2/(2\sigma ^2))$ , to obtain estimates for the parameters $z_0,\sigma ,A$ .", "We then use a pseudo-inversion-based update filter [26] ${L}_{\nu } = \frac{\overline{{G}}}{\gamma _{\nu } + \overline{{G}}{G}}$ where ${G} = \mathcal {F}[{g}_z]$ is the discrete Fourier transform of the Gaussian PSF, $\overline{{G}}$ its complex conjugate, and all the operations are to be understood element-wise."
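Put together, one ILC iteration can be sketched as follows (our illustration; the Gaussian PSF parameters and the feedback source are placeholders — in the online variant the potential V would come from a camera measurement instead of the linear surrogate used here):

```python
import numpy as np

def ilc_step(nu, v_measured, v_target, g_z, gamma_scale=0.1):
    """nu^{n+1} = nu^n + L_nu * e^n with the regularized pseudo-inversion filter."""
    G = np.fft.fft(g_z)
    gamma = gamma_scale * np.max(np.abs(G)) ** 2
    L = np.conj(G) / (gamma + np.conj(G) * G)
    e = v_target - v_measured
    update = np.real(np.fft.ifft(L * np.fft.fft(e)))       # L_nu * e via FFT
    return np.clip(nu + update, 0.0, 1.0)                  # keep nu in [0, 1]

n_v = 650
z = np.arange(n_v)
g_z = np.exp(-((z - n_v // 2) ** 2) / (2 * 8.0 ** 2))
g_z /= g_z.sum()                                           # fitted Gaussian PSF guess
nu, v_target = np.zeros(n_v), 0.5 * np.ones(n_v)
for n in range(10):
    v = np.real(np.fft.ifft(np.fft.fft(g_z) * np.fft.fft(nu)))  # surrogate feedback
    nu = ilc_step(nu, v, v_target, g_z)
```

When the plant matches the linear model, each Fourier mode of the error contracts by a factor $\gamma_{\nu}/(\gamma_{\nu} + |G|^2)$ per iteration, so low spatial frequencies converge within a few steps while frequencies beyond the PSF cutoff are deliberately left untouched by the regularization.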
regularization parameter of the system inversion which effectively reduces the high-frequency content of the input updates and, therefore, of the explored virtual inputs ${\\nu }^n$ .", "In the presence of measurement noise, optimal choices for $\\gamma _{\\nu }$ are ultimately given by the experimental signal-to-noise ratio [27].", "The results shown in the main text are obtained using $\\gamma _{\\nu } = 0.1\\ \\max \\limits _i|G_i|^2$ ." ], [ "Experimental results", "We tested both the offline and online ILC algorithms by optimizing three target potentials, using the heuristic method as a reference.", "The results are shown in Fig.REF .", "In order to emulate an experiment with a trapped BEC, we represent the longitudinal magnetic trap with a harmonic potential $V_{mag}(z) = \\frac{1}{2}m \\omega ^2 z^2$ with $\\omega = 2\\pi \\times 10$  Hz and $m$ the mass of a $^{87}$ Rb atom.", "The effective potential $U(z)$ as experienced by the atoms is the sum of the magnetic trap $V_{mag}(z)$ and the optical dipole potential $V(z)$ , which is estimated from the measured intensity (see Sec.", "for details).", "We design targets so that the effective potential $U(z)$ will be either constant, sinusoidal, or linear.", "All methods start with the DMD completely turned off.", "Qualitatively, all the methods are successful, as the final results are barely distinguishable from the target potential.", "For a quantitative evaluation of the optimization performance, we compute the locally normalized root mean square error (RMS): $\\epsilon _{RMS} = \\sqrt{\\sum _{i=1}^{N_V} \\left(\\frac{V^{tar}_i - V^n_i}{V^{tar}_i}\\right)^2}\\,.$ We say that the normalization is local because it is computed point by point.", "As a measure of the minimum error that can be achieved due to shot-to-shot fluctuations, we acquire 100 pictures of the optimized potential and compute the average $\\epsilon _{RMS}$ , substituting $V^{tar}$ by the average potential over the sample.", "The measured optimization error of the offline ILC is shown with a constant blue line in Fig.REF .", "This error level is already comparable to what we obtain with $\\sim 100$ iterations of the heuristic algorithm, which would be equivalent to more than 1 hour of experimental time per potential shape for BEC experiments (and more than 5 hours with averaging over at least 5 images).", "For a quasi-1D BEC with an atomic density of 100 $\\mu m^{-1}$ , atomic shot noise is at the level of 10 % [28].", "Therefore, the offline ILC optimization alone already provides a measurement-limited error.", "Moreover, since any constant offset in the optimized potential does not have an effect on the atoms, the effective error is lower than $\\epsilon _{RMS}$ .", "Neglecting constant offsets, the energy scale of the remaining imperfections is around 10 Hz, which is 1$\\%$ of the typical chemical potential ($\\sim $ 1 kHz).", "This means that offline optimization is already potentially useful for practical applications in experiments with ultracold gases.", "The online ILC and heuristic algorithms both converge to the same error level.", "Yet online ILC reaches convergence in $\\sim 10$ iterations while the heuristic algorithm needs $\\sim 100$ iterations, which is a great advantage for schemes incorporating experimental feedback.", "Moreover, the ILC algorithm does not require the manual selection of an optimization schedule, resulting in increased flexibility and bypassing tedious parameter tuning.", "Convergence time can be decreased even more by using the result of
offline optimization as an initial guess for online ILC.", "In this case, we find that the online ILC reaches convergence in only a few iterations, resulting in an even larger speed-up compared to the heuristic method.", "In any case, the final error is far below the atomic shot noise, so it is hardly accessible in static BEC configurations.", "We ran the same tests using a single-frequency laser as a light source.", "The physics-inspired model performs worse in the prediction of laser-generated potentials; therefore, the output of offline ILC is noticeably worse than for the SLED.", "On the other hand, the online ILC algorithm optimizes the optical potentials created with the laser to the same level as with the SLED.", "Based on these findings, we can state that our method is perfectly suited for experimental setups employing SLEDs as well as single-frequency lasers." ], [ "Conclusions", "In this paper, we presented our experimental setup for generating and efficiently optimizing 1D optical potentials.", "We combined a digital micromirror device for potential shaping control with a SLED light source.", "We performed measurements estimating the quantitative difference between the SLED's and the laser's coherence properties, showing the advantages of using the SLED due to its generally linear behavior.", "We have implemented learning algorithms that enable efficient optimization of optical potentials.", "We have shown how to build a physics-inspired model, which acts as a digital twin of the experimental setup.", "The model is able to recreate the main features of the optical system based on a small set of experimental data without the need to use deep (neural) networks and large training data sets, with the advantage of saving experimental time.", "The application of the Iterative Learning Control optimization method provides a more than ten-fold speed-up compared to heuristic approaches.", "The ILC algorithms used offline with the trained models are able to optimize optical potentials with a precision acceptable for most experiments with trapped 1D ultracold gases.", "Using online ILC with experimental feedback allows us to optimize optical potentials to error levels comparable to the measurement error, giving a ten-fold speed-up compared to more straightforward heuristic algorithms.", "By combining both ILC strategies, namely using the result of an optimized digital twin configuration as an initial guess for online ILC, we obtain the optimized potential with just a few experimental iterations.", "Regarding optimization performance for the SLED and laser, we find that the SLED outperforms the laser using offline ILC.", "However, the online ILC performs equally well for laser and SLED, giving the same level of optimization error.", "The model we developed, combined with Iterative Learning Control, provides a very fast way to optimize optical potentials with a DMD, which might be used in a large variety of experimental setups.", "Our work offers a prospect for fast optimization of optical dipole potentials, which is extremely important for time-costly experiments or for very large sequences of patterns in dynamic situations."
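As a minimal illustration of the ILC loop described above (a sketch only, assuming numpy and periodic boundaries; the names ilc_optimize, measure and gamma_rel are ours and not from the paper), the update rule ${\nu }^{n+1} = {\nu }^{n} + {L}_{\nu } * {e}^n$ with the pseudo-inversion filter can be written as:

import numpy as np

def ilc_optimize(measure, nu0, g_z, target, gamma_rel=0.1, n_iter=10):
    # LTI approximation V ~ g_z * nu; G is the DFT of the fitted Gaussian PSF
    G = np.fft.fft(g_z)
    gamma = gamma_rel * np.max(np.abs(G)) ** 2         # regularization gamma_nu
    L = np.conj(G) / (gamma + np.abs(G) ** 2)          # element-wise pseudo-inverse filter
    nu = nu0.copy()
    for _ in range(n_iter):
        e = target - measure(nu)                       # e^n = V^tar - V^n
        nu = nu + np.real(np.fft.ifft(L * np.fft.fft(e)))  # nu^{n+1} = nu^n + L_nu * e^n
    return nu

Passing the experiment as measure corresponds to online ILC; passing the digital twin corresponds to offline ILC, with no other change to the loop.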
], [ "DMD mount", "Due to the specific construction of the Texas Instruments DLP650LNIR DMD's micromirror control mechanism [29], the DMD is mounted 45$^{\\circ }$ rotated so all the optical elements are placed in one plane.", "Any pattern getting rotated 45$^{\\circ }$ right before projecting on the DMD.", "We verified that during optimization the rotation only leads to distortions of the potential which are below the resolution and therefore optimization is not affected." ], [ "Intensity to optical dipole potential conversion", "The number of pixels in the output $V_i$ as obtained from the camera does not necessarily coincide with the number of pixels in the input $\\nu _i$ .", "In order to use the model, we first interpolate $V_i$ to the input grid size, using the interp1 function in Matlab.", "We assume the relation between light intensity and optical dipole potential to be linear $V = \\alpha _V I$ with $\\alpha _V$ as found in [30].", "Since we work with red-detuned light, $\\alpha _V$ is negative.", "We also suppose the relation between the CCD readout $R$ and intensity to be linear.", "We then compute $\\alpha _{CCD} = Ir_{pow}/R $ by measuring the light intensity with a power meter.", "To not saturate the CCD we use a reduced amount of light intensity.", "To calculate the finally expected dipole potentials we employ a factor $r_{pow} = I_{full}/I_{low}$ that accounts for the source operating at low power." ], [ "Mathematical details", "This paper heavily relies on the discretization of functions of a real variable $f(z)$ in order to obtain finite size vectors.", "If we define a coordinate grid $z_i = (i-1) \\Delta - \\bar{z}$ for $i=1,...,n$ , then we refer to any discretized function as $f_i = f(z_i)$ and we denote with ${f}$ the $\\mathbb {R}^n$ vector whose elements are $f_i$ .", "Let us call $\\mathcal {F}$ the discrete Fourier transform acting on a vector of size $n$ , and $\\mathcal {F}^{-1}$ its inverse: F[a]k = j=1n aj e-2 in(j-1)(k-1) F-1[b]k = 1n j=1n bj e2 in(j-1)(k-1) Then, we can define the discrete convolution of two vectors ${a} * {b}$ as ${a} * {b} = \\mathcal {F}^{-1}[\\mathcal {F}[{a}]\\mathcal {F}[{b}]]$ where the product on the right-hand side is element-wise." ], [ "Heuristic algorithm", "In order to assess the advantages of the ILC methods, we employed an adapted version of the heuristic algorithm described in [6] as a reference.", "It is an iterative algorithm that updates the state of each pixel based on the local differences between measured and target potentials.", "The optimization happens in two phases, the first fast but rough and the second slower but more precise.", "During the rough phase (see Fig.REF first $\\sim $ 20 iterations) in each column DMD pixels are turned on until the difference gets lower than the chosen threshold.", "During the precise phase pixels can be moved away or turned off." 
], [ "Acknowledgments", "We would like to thank Mohammadamin Tajik and João Sabino for the discussions and technical support.", "This project was funded by the DFG/FWF CRC 1225 'Isoquant', Project-ID 273811115, by the Austrian Science Fund (FWF) P 36236-N, by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy – Cluster of Excellence Matter and Light for Quantum Computing (ML4Q) EXC 2004/1 – 390534769, and from the German Federal Ministry of Education and Research through the funding program quantum technologies—from basic research to market under the project FermiQP, 13N15891.", "M.P.", "has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 101032523." ] ]
2210.07776
[ [ "A construction of Sbrana-Cartan hypersurfaces in the discrete class" ], [ "Abstract The classical classifications of the locally isometrically deformable Euclidean hypersurfaces obtained by U. Sbrana in 1909 and E. Cartan in 1916 includes four classes, among them the one formed by submanifolds that allow just a single deformation.", "The question of whether these Sbrana-Cartan hypersurfaces do, in fact, exist was not addressed by either of them.", "Positive answers to this question were given by Dajczer-Florit-Tojeiro in 1998 for the ones called of hyperbolic type and by Dajczer-Florit in 2004 when of elliptic type which is the other possibility.", "In both cases the examples constructed are rather special.", "The main result of this paper yields an abundance of examples of hypersurfaces of either type and seems to point in the direction of a classification although that goal remains elusive." ], [ "Preliminaries", "In this section, we first give a brief description of the so called Gauss parametrization for a class of Euclidean hypersurfaces.", "For plenty of additional information we refer to Chapter 7 in [7].", "After that, we characterize in terms of the Gauss parametrization the Sbrana-Cartan hypersurfaces in the continuous class as well as the infinitesimally bendable hypersurfaces that are neither surface-like nor ruled.", "In particular, it follows that the later class is much larger than the former.", "For additional details on these subjects, including several facts used throughout this paper, we refer to [5], [7] and [8]." ], [ "The Gauss parametrization", "Let $f\\colon M^n\\rightarrow \\mathbb {R}^{n+1}$ , $n\\ge 3$ , be an isometric immersion that at each point has precisely two nonzero principal curvatures.", "The relative nullity subspace $\\Delta (x)$ of $f$ at $x\\in M^n$ is the $(n-2)$ -dimensional kernel of its second form $A(x)$ .", "The relative nullity subspaces form a smooth integrable distribution and the totally geodesic leaves of the corresponding relative nullity foliation are mapped by $f$ into open subsets of $(n-2)$ -dimensional affine subspaces of $\\mathbb {R}^{n+1}$ .", "The Gauss parametrization of $f$ , given in terms of a pair $(g,\\gamma )$ formed by a surface $g\\colon L^2\\rightarrow \\mathbb {S}^n$ in the unit sphere and a function $\\gamma \\in C^\\infty (L)$ , goes as follows: Let $U\\subset M^n$ be an open saturated subset of leaves of relative nullity and $\\pi \\colon U\\rightarrow L^2$ the projection onto the quotient space.", "The Gauss map $N$ of $f$ induces an immersion $g\\colon L^2\\rightarrow \\mathbb {S}^n$ in the unit sphere given by $g\\circ \\pi =N$ and the support function ${\\langle }f,N{\\rangle }$ , which is constant along the relative nullity leaves, induces a function $\\gamma \\in C^\\infty (L)$ .", "Let $\\Lambda $ denote the normal bundle of $g$ and set $h=i\\circ g$ where $i\\colon \\mathbb {S}^n\\rightarrow \\mathbb {R}^{n+1}$ is the inclusion.", "Then the map $\\psi \\colon \\Lambda \\rightarrow \\mathbb {R}^{n+1}$ given by $\\psi (x,w)=\\gamma (x)h(x)+h_*\\mbox{grad\\,}\\gamma (x)+i_*w$ locally parametrizes (at regular points) the hypersurface $f$ in such a way that the fibers of $\\Lambda $ are identified with the leaves of the relative nullity foliation." 
], [ "The Sbrana-Cartan hypersurfaces", "We describe succinctly in local coordinate terms the Sbrana-Cartan hypersurfaces in the discrete and continuous class by means of the Gauss parametrization.", "For proofs and additional information we refer to [5] and Chapter 11 of [7].", "Let $g\\colon L^2\\rightarrow \\mathbb {S}^n$ be a surface in the unit sphere.", "A system of local coordinates $(u,v)$ on $L^2$ is called real-conjugate for $g$ if its second fundamental form $\\alpha _g\\colon TL\\times TL\\rightarrow N_gL$ satisfies $\\alpha _g(\\partial _u,\\partial _v)=0$ where $\\partial _u=\\partial /\\partial u$ and $\\partial _v=\\partial /\\partial v$ .", "The coordinates are called complex-conjugate if the condition $\\alpha _g(\\partial _z,\\bar{\\partial }_z)=0$ holds where $\\partial _z=(1/2)(\\partial _u-i\\partial _v)$ , that is, if we have $\\alpha _g(\\partial _u,\\partial _u) + \\alpha _g(\\partial _v,\\partial _v)=0.$ The surface $g\\colon L^2\\rightarrow \\mathbb {S}^n$ is called hyperbolic (respectively, elliptic) if $L^2$ is endowed with real-conjugate (respectively, complex-conjugate) coordinates.", "Set $F={\\langle }\\partial _u,\\partial _v{\\rangle }$ and let $\\Gamma ^1 ,\\Gamma ^2 $ be the Christoffel symbols given by $\\nabla _{\\partial _u}\\partial _v=\\Gamma ^1 \\partial _u+\\Gamma ^2 \\partial _v.$ That $g$ is hyperbolic means the surface $h=i\\circ g\\colon L^2\\rightarrow \\mathbb {R}^{n+1}$ satisfies $ h_{uv}-\\Gamma ^1 h_u-\\Gamma ^2 h_v + Fh=0$ where subscripts indicate partial derivatives.", "That the surface $g$ is elliptic means that $h=i\\circ g\\colon L^2\\rightarrow \\mathbb {R}^{n+1}$ satisfies $ h_{z\\bar{z}}-\\Gamma h_z-\\bar{\\Gamma }h_{\\bar{z}}+Fh=0,$ where, using the $-linear extensions of the metric of $ L2$and the corresponding connection, the Christoffel symbols aregiven by $ zz=z+z$and $ F=z,z$.\\vspace{8.5pt}$ A hyperbolic surface $g\\colon L^2\\rightarrow \\mathbb {S}^n$ called of first species of real type if $ \\Gamma ^1 _u=\\Gamma ^2 _v=2\\Gamma ^1 \\Gamma ^2 .$ The surface is called of second species of real type if it is not of first species and the function $\\tau =\\tau (u,v)$ given by $ \\tau =\\frac{\\Gamma ^1 _u-2\\Gamma ^1 \\Gamma ^2 }{\\Gamma ^2 _v-2\\Gamma ^1 \\Gamma ^2 }$ is positive but not identically one and the necessarily unique solution of the system of differential equations $ {\\left\\lbrace \\begin{array}{ll}\\tau _u=2\\Gamma ^2 \\tau (1-\\tau )\\vspace{4.25pt}\\\\\\tau _v=2\\Gamma ^1 (1-\\tau ).\\end{array}\\right.", "}$ An elliptic surface $g\\colon L^2\\rightarrow \\mathbb {S}^n$ is called of first species of complex type if $ \\Gamma _z=2\\Gamma \\bar{\\Gamma }(=\\bar{\\Gamma }_{\\bar{z}}).$ It is called of second species of complex type if it is not of first species and the function $\\rho (z,\\bar{z})$ with values in the unit sphere and determined by $ Im(\\rho (\\Gamma _z-2\\Gamma \\bar{\\Gamma }))=0$ is the necessarily unique non real solution of the differential equation $\\rho _{\\bar{z}}+\\Gamma (\\rho -\\bar{\\rho })=0.$ The pair $(g,\\gamma )$ is called a hyperbolic pair if $g\\colon L^2\\rightarrow \\mathbb {S}^n$ is an hyperbolic surface and $\\gamma \\in C^\\infty (L)$ satisfies (REF ), that is, the same differential equation than the coordinate functions of $h$ .", "Similarly, the pair $(g,\\gamma )$ is called a elliptic pair if $g\\colon L^2\\rightarrow \\mathbb {S}^n$ is an elliptic surface and $\\gamma \\in C^\\infty (L)$ satisfies (REF ).", "Theorem 2 ([5]) Let $f\\colon M^n\\rightarrow \\mathbb 
{R}^{n+1}$ , $n\\ge 3$ , be a Sbrana-Cartan hypersurface that is neither surface-like nor ruled on any open subset of $M^n$ .", "Then, $f$ is parametrized on each connected component of an open dense subset of $M^n$ in terms of the Gauss parametrization by either a hyperbolic or elliptic pair $(g,\\gamma )$ , where $g\\colon L^2\\rightarrow \\mathbb {S}^n$ is a surface of first or second species of real or complex type.", "Conversely, any simply connected hypersurface parametrized in terms of the Gauss parametrization by such a pair $(g,\\gamma )$ is a Sbrana-Cartan hypersurface either in the continuous class or in the discrete class, according to whether $g$ is, respectively, of first or second species." ], [ "Infinitesimally bendable hypersurfaces", "We first describe succinctly in local coordinate terms the infinitesimally bendable hypersurfaces by means of the Gauss parametrization.", "The equivalent description in terms of envelopes of hyperplanes is given in the sequel.", "For proofs and additional information we refer to [8] and Chapter 14 of [7].", "An infinitesimal bending ${\\cal T}$ of $f\\colon M^n\\rightarrow \\mathbb {R}^{n+1}$ free of flat points is called trivial if it is the variational vector field of a one-parameter variation by isometries of $\\mathbb {R}^{n+1}$ , namely, if $G(t,x)={\\cal C}(t)f(x)+v(t)$ where ${\\cal C}\\colon I\\rightarrow O(n+1)$ with $0\\in I\\subset \\mathbb {R}$ is a smooth family of orthogonal transformations of $\\mathbb {R}^{n+1}$ and $v\\colon I\\rightarrow \\mathbb {R}^{n+1}$ a smooth map.", "Then the variational vector field of $G$ at $0\\in I$ is ${\\cal T}(x)=(\\partial G/\\partial t)_{t=0}={\\cal D}f(x)+v^{\\prime }(0)$ where ${\\cal D}={\\cal C}^{\\prime }(0)$ is a skew-symmetric linear endomorphism of $\\mathbb {R}^{n+1}$ .", "If ${\\cal T}$ is an infinitesimal bending of $f$ then there is the associated smooth variation $F\\colon \\mathbb {R}\\times M^n\\rightarrow \\mathbb {R}^{n+1}$ of $f$ by immersions $f_t=F(t,\\cdot )$ with ${\\cal T}$ as the variational vector field that is given by $F(t,x)=f(x)+t{\\cal T}(x)$ .", "Since $\\Vert f_{t*}X\\Vert ^2=\\Vert f_*X\\Vert ^2+t^2\\Vert {\\cal T}_*X\\Vert ^2$ it is classically said that the immersions $f_t$ are isometric to $f$ up to the first order.", "The hypersurface $f\\colon M^n\\rightarrow \\mathbb {R}^{n+1}$ is called infinitesimally bendable if it admits an infinitesimal bending that is nontrivial when restricted to any open subset of $M^n$ .", "The Sbrana-Cartan hypersurfaces that are either surface-like, ruled or belong to the continuous class are infinitesimally bendable.", "In the latter case, there is the variational vector field of the associated one-parameter family of isometric deformations.", "A hyperbolic pair $(g,\\gamma )$ is called a special hyperbolic pair if $g\\colon L^2\\rightarrow \\mathbb {S}^n$ satisfies $ \\Gamma ^1 _u=\\Gamma ^2 _v.$ The elliptic pair $(g,\\gamma )$ is called a special elliptic pair if $g\\colon L^2\\rightarrow \\mathbb {S}^n$ satisfies $ \\Gamma _z=\\bar{\\Gamma }_{\\bar{z}},$ that is, if $\\Gamma _z\\;\\mbox{is real}$ .", "In the following result by the infinitesimal bending being unique we understand that this is the case up to multiplying the bending by a real constant and adding a trivial one.", "Theorem 3 ([8]) Let $f\\colon M^n\\rightarrow \\mathbb {R}^{n+1}$ , $n\\ge 3$ , be an infinitesimally bendable hypersurface that has two nonzero principal curvatures everywhere and is neither surface-like nor ruled on any open subset of $M^n$ .", "Then
$f$ is parametrized on each connected component of an open dense subset of $M^n$ in terms of the Gauss parametrization by a special hyperbolic or a special elliptic pair.", "Conversely, any hypersurface given in terms of the Gauss parametrization by a special hyperbolic or special elliptic pair admits locally a unique infinitesimal bending.", "Associated to an infinitesimal bending ${\\cal T}$ of the hypersurface $f\\colon M^n\\rightarrow \\mathbb {R}^{n+1}$ with second fundamental form $A$ there is the symmetric tensor $B\\in \\Gamma (\\mbox{End}(TM))$ defined by ${\\langle }BX,Y{\\rangle }={\\langle }(\\tilde{\\nabla }_X{\\cal T}_*)Y,N{\\rangle }={\\langle }\\tilde{\\nabla }_X{\\cal T}_*Y-{\\cal T}_*\\nabla _XY, N{\\rangle }$ which is a Codazzi tensor, that is, $(\\nabla _XB)Y=(\\nabla _YB)X$ and also satisfies the equation $ BX\\wedge AY-BY\\wedge AX=0$ for any $X,Y\\in \\mathfrak {X}(M)$ .", "That $B$ vanishes says that the bending is trivial.", "Conversely, any nonzero symmetric Codazzi tensor $B\\in \\Gamma (\\mbox{End}(TM))$ satisfying (REF ) determines a unique infinitesimal bending of $f$ ; see Section $2.5$ in [2] or Section 5 in [8] for details.", "We have from [8] that any infinitesimally bendable hypersurface $f\\colon M^n\\rightarrow \\mathbb {R}^{n+1}$ can be described as the envelope of a two-parameter family of affine hyperplanes.", "This goes as follows: Let $U\\subset \\mathbb {R}^2$ be an open subset endowed with coordinates $(u,v)$ and let $\\lbrace \\varphi _j\\rbrace _{0\\le j\\le n+1}$ be a set of solutions of the differential equation $\\varphi _{z_1z_2}+M\\varphi =0$ where $(z_1,z_2)$ can be either $(u,v)$ or $(1/2(u+iv),1/2(u-iv))$ and $M\\in C^\\infty (U)$ .", "Assume that the map $\\varphi =(\\varphi _1,\\ldots ,\\varphi _{n+1})\\colon U\\rightarrow \\mathbb {R}^{n+1}$ is an immersion and consider the two-parameter family of affine hyperplanes given by $G(u,v)=\\varphi _1x_1+\\cdots +\\varphi _{n+1}x_{n+1}-\\varphi _0=0$ where $(x_1,\\ldots ,x_{n+1})$ are the canonical coordinates of $\\mathbb {R}^{n+1}$ .", "Then $f$ is the solution of the system of equations $G=G_u=G_v=0$ .", "Moreover, the corresponding elliptic pair is given by $g=\\frac{1}{\\Vert \\varphi \\Vert }(\\varphi _1,\\ldots ,\\varphi _{n+1})\\;\\;\\mbox{and}\\;\\;\\gamma =\\frac{\\varphi _0}{\\Vert \\varphi \\Vert }.$ Notice that by this approach we do not require the strong condition that a set of solutions of a PDE be the coordinate functions of a surface in a sphere.", "We point out that the hypersurface is Sbrana-Cartan in the continuous class if the function $\\phi =\\Vert \\varphi \\Vert ^2$ verifies the strong additional condition $\\phi _{z_1z_2}=0$ ; see [8] for details.", "Proposition 4 The Sbrana-Cartan hypersurfaces that belong to the discrete class are not infinitesimally bendable.", "Proof: For a hyperbolic Sbrana-Cartan hypersurface in the discrete class the function $\\tau $ given by (REF ) is not identically one.", "In particular, we have that (REF ) does not hold, that is, we have $\\Gamma ^1 _u\\ne \\Gamma ^2 _v$ , and thus the hypersurface is not infinitesimally bendable.", "Similarly, for an elliptic Sbrana-Cartan hypersurface in the discrete class the function $\\rho $ given by (REF ) is not real.", "In particular, we have that (REF ) does not hold, that is, we have $\\Gamma _z\\ne \\bar{\\Gamma }_{\\bar{z}}$ , and again the hypersurface is not infinitesimally bendable.", "Remark 5 In view of the above result we have that an infinitesimally bendable hypersurface as in Theorem
REF is not a Sbrana-Cartan hypersurface if either (REF ) in the real case or (REF ) in the complex case fails to hold fully." ], [ "The main result", "Throughout this section $f\\colon M^n\\rightarrow \\mathbb {R}^{n+1}$ , $n\\ge 3$ , stands for an infinitesimally bendable hypersurface with infinitesimal bending ${\\cal T}$ that is free of flat points and is neither surface-like nor ruled when restricted to any open subset of $M^n$ .", "We recall that the splitting tensor $C\\colon \\Gamma (\\Delta )\\rightarrow \\Gamma (\\mbox{End}(\\Delta ^\\perp ))$ of the $(n-2)$ -dimensional relative nullity distribution $\\Delta $ of $f$ is defined by $C(T,X)=-(\\nabla _XT)_{\\Delta ^\\perp }$ and $C_T$ denotes the element of $\\Gamma (\\mbox{End}(\\Delta ^\\perp ))$ given by $C_TX=C(T,X)$ .", "The assumption on its geometry gives that $f$ restricted to each connected component of an open dense subset of $M^n$ satisfies that $C_T$ for some $T\\in \\Gamma (\\Delta )$ has either two nonzero distinct real or complex conjugate eigenvalues at any point.", "In fact, we have from Proposition $7.4$ in [7] that $f$ is surface-like with respect to the decomposition $TM=\\Delta \\oplus \\Delta ^\\perp $ if the splitting tensor satisfies $C_T\\in \\mbox{span}\\lbrace I\\rbrace $ for any $T\\in \\Gamma (\\Delta )$ .", "If there is $T\\in \\Gamma (\\Delta )$ such that $C_T$ has a single real eigenvalue of multiplicity 2 then the argument in pp.", "$372-373$ in [5] gives that $f$ is ruled.", "The hypersurface $f\\colon M^n\\rightarrow \\mathbb {R}^{n+1}$ is called hyperbolic (respectively, elliptic) if there exists a tensor $J\\in \\Gamma (\\mbox{Aut}(\\Delta ^\\perp ))$ that satisfies: (i) $J^2=I$ with $J\\ne I$ (respectively, $J^2=-I$ ), (ii) $\\nabla _TJ=0$ for all $T\\in \\Gamma (\\Delta )$ , (iii) $C_T\\in \\mbox{span}\\lbrace I,J\\rbrace $ for all $T\\in \\Gamma (\\Delta )$ but $C(\\Gamma (\\Delta ))\\not\\subset \\mbox{span}\\lbrace I\\rbrace $ .", "Under the assumptions required for $f$ it follows from Proposition 20 in [8] that its restriction to any connected component of an open dense subset of $M^n$ is either hyperbolic or elliptic.", "Lemma 6 The relative nullity distribution $\\Delta $ of $f$ is contained at any point of $M^n$ in the relative nullity $\\Delta _t$ of $f_t=f+t{\\cal T}$ for any $t\\in \\mathbb {R}$ .", "Proof: From the classical Beez-Killing rigidity theorem and either Theorem 1 in [6] or Proposition $14.3$ in [7] we have that the second fundamental form $A_t$ of $f_t$ has rank at most 2 for any $t\\in \\mathbb {R}$ .", "The unit normal vector field $N_t$ of $f_t$ decomposes as $N_t=f_*Z_t+b N$ where $Z_t\\in \\mathfrak {X}(M)$ and $b=b(t,x)={\\langle }N_t,N{\\rangle }$ .", "Let $\\mathcal {Y}\\in \\mathfrak {X}(M)$ be given by ${\\langle }\\mathcal {Y},X{\\rangle }+{\\langle }N,{\\cal T}_*X{\\rangle }=0$ for all $X\\in \\mathfrak {X}(M)$ .", "Then using (REF ) we have $0={\\langle }N_t,f_{t*}X{\\rangle }&={\\langle }f_*Z_t+bN,f_*X+t{\\cal T}_*X{\\rangle }\\\\&={\\langle }Z_t,X{\\rangle }+t{\\langle }f_*Z_t,{\\cal T}_*X{\\rangle }+tb{\\langle }N,{\\cal T}_*X{\\rangle }\\\\&={\\langle }f_*Z_t-t{\\cal T}_*Z_t-tbf_*\\mathcal {Y},f_*X{\\rangle },$ that is, $ (f_*Z_t-t{\\cal T}_*Z_t-tbf_*\\mathcal {Y})_{f_*TM}=0.$ Let $L_0\\in \\Gamma (\\mbox{End}(TM))$ be defined by $f_*L_0X=\\pi {\\cal T}_*X$ where $\\pi \\colon \\mathbb {R}^{n+1}\\rightarrow f_*TM$ is the orthogonal projection.", "Then (REF ) reads as $ (I-tL_0)Z_t=tb\\mathcal {Y}.$ Since $L_0$ is skew-symmetric by (REF ) we have $\\ker (I-tL_0)=0$ .",
"If $S_t=(I-tL_0)^{-1}$ then $ Z_t=tbS_t\\mathcal {Y}$ where $b\\ne 0$ since otherwise $S_t$ would have a nontrivial kernel.", "Given $T\\in \\Gamma (\\Delta )$ and being $\\pi $ parallel along the leaves of relative nullity, we have $(\\tilde{\\nabla }_T f_*L_0)X=\\tilde{\\nabla }_Tf_*L_0X-f_*L_0\\nabla _TX=\\pi (\\tilde{\\nabla }_T{\\cal T}_*)X.$ We obtain from (REF ) that $\\Delta \\subset \\ker B$ .", "On the other hand, we have by Proposition $2.2$ in [2] that $(\\tilde{\\nabla }_T{\\cal T}_*)X={\\langle }AT,X{\\rangle }f_*\\mathcal {Y}+{\\langle }BT,X{\\rangle }N=0.$ Therefore $ \\tilde{\\nabla }_T f_*L_0=0$ for any $T\\in \\Gamma (\\Delta )$ .", "From $(2.16)$ in [2] or $(13)$ in [3] we obtain $\\tilde{\\nabla }_X\\mathcal {Y}=-f_*BX-{\\cal T}_*AX$ and hence $ \\tilde{\\nabla }_T\\mathcal {Y}=0$ for any $T\\in \\Gamma (\\Delta )$ .", "Taking the covariant derivative of (REF ) with respect to $T$ and using (REF ) and (REF ) it follows that $\\tilde{\\nabla }_Tf_*Z_t=tT(b)S_t\\mathcal {Y}.$ Then $\\tilde{\\nabla }_TN_t=T(b)(tS_t\\mathcal {Y}+N).$ Since $b\\ne 0$ we obtain from (REF ) that $\\tilde{\\nabla }_TN_t$ is a multiple of $N_t$ which is of unit length.", "Hence $\\tilde{\\nabla }_TN_t=0$ for any $T\\in \\Gamma (\\Delta )$ , and this proves that $\\Delta \\subset \\Delta _t$ for any $t\\in \\mathbb {R}$ .", "In the sequel, we assume that $\\epsilon >0$ is small enough so that $\\mbox{ker}A(t)=\\Delta _t=\\Delta $ for any $t\\in (-\\epsilon ,\\epsilon )$ .", "Notice that the orthogonal complement $\\Delta _t^\\perp $ does not have to coincide with $\\Delta ^\\perp $ .", "Let $C^t$ denote the splitting tensor of $\\Delta $ with respect to the metric determined by $f_t$ .", "An argument of continuity applied to $C^t$ gives that $f_t$ for some $\\epsilon >0$ is neither surface-like nor ruled on an open subset of $M^n$ .", "Lemma 7 If $f$ is either hyperbolic or elliptic then the immersions $f_t$ , $t\\in [0,\\epsilon )$ for small enough $\\epsilon >0$ remain hyperbolic or elliptic with respect to a tensor $J_t\\in \\Gamma (\\mbox{End}(\\Delta _t^\\perp ))$ .", "Proof: Since the $f_t$ are Sbrana-Cartan hypersurfaces that for $t$ small enough are neither surface-like nor ruled, then from the proof of Theorem $11.16$ in[7] we have that they are either elliptic or hyperbolic with respect to $J_t\\in \\Gamma (\\mbox{End}(\\Delta _t^\\perp ))$ for each $t$ .", "From that proof we also have that the splitting tensor $C^t$ for each $t$ satisfies $C^t_T\\in \\mbox{span}\\lbrace I,J_t\\rbrace $ for all $T\\in \\Gamma (\\Delta )$ .", "Then, by continuity, the conditions $J_t^2=I$ or $J_t^2=-I$ remain for small values of $t$ .", "The Gauss map $N_t$ of $f_t$ determines an immersion $g_t\\colon L_t^2\\rightarrow \\mathbb {S}^n$ on the quotient space of leaves of relative nullity $L_t^2$ into the unit sphere.", "By Lemma REF we have that $L_t=L$ and hence the family of immersions $g_t\\colon L^2\\rightarrow \\mathbb {S}^n$ depends smoothly on the parameter $t$ .", "Since the tensors $J_t\\in \\Gamma (\\mbox{End}(\\Delta _t^\\perp ))$ have to satisfy the property $(iii)$ it follows that the dependence on the parameter $t$ is smooth.", "By Proposition $11.11$ in [7] we have that the tensor $J_t$ is the horizontal lifting of a tensor $\\bar{J}_t\\in \\Gamma (\\mbox{End}(TL))$ , i.e., $\\bar{J}_t\\circ \\pi _*=\\pi _*\\circ J_t$ , such that $\\bar{J}_t^2=I$ ($\\bar{J}_t\\ne I$ ) or $\\bar{J}_t^2=-I$ according to $J_t$ .", "Moreover, the immersion $g_t$ is hyperbolic or elliptic with respect to 
$\\bar{J}_t$ , accordingly, which in this case means that the second fundamental form of $g_t$ satisfies $\\alpha ^{g_t}(\\bar{J}_tX,Y)=\\alpha ^{g_t}(X,\\bar{J}_tY)$ for any $X,Y\\in \\mathfrak {X}(L)$ .", "Finally, from the proof of Theorem $11.16$ in [7] we have that a deformation of $f_t$ for $t\\ne 0$ is determined by a tensor $\\bar{D}_t\\in \\Gamma (\\mbox{End}(TL))$ satisfying: (i) $\\bar{D}_t\\in \\mbox{span}\\lbrace \\bar{I},\\bar{J}_t\\rbrace $ , $\\bar{D}_t\\ne \\pm \\bar{I}$ , (ii) $\\det \\bar{D}_t=1$ , (iii) $\\bar{D}_t$ is a Codazzi tensor with respect to the metric induced by $g_t$ ." ], [ "The hyperbolic case", "We focus on the case when $f$ is hyperbolic, that is, we have $\\bar{J}_t^2=\\bar{I}$ for $t\\in I=[0,\\epsilon )$ .", "Lemma 8 There is locally a smooth one-parameter family of tangent frames $\\lbrace U^t,V^t\\rbrace $ by vector fields $U^t,V^t\\in \\mathfrak {X}(L)$ of unit length with respect to the metric of $g_t$ that are eigenvectors of $\\bar{J}_t$ associated to the eigenvalues 1 and $-1$ , respectively.", "Proof: Let $U,V\\in \\mathfrak {X}(L)$ be a frame of eigenvectors of $\\bar{J}$ of unit length with respect to the metric induced by $g=g_0$ associated to the eigenvalues 1 and $-1$ , respectively.", "With respect to this frame we have $\\bar{J}_t=\\begin{bmatrix}a & c\\\\b & d\\end{bmatrix}.$ That $\\bar{J}_t^2=\\bar{I}$ is equivalent to $a+d=0$ and $1-a^2=bc$ .", "If $\\mu ,\\nu \\in C^\\infty (I\\times L)$ are given by $\\mu =b/(1-d)$ and $\\nu =-c/(1+a)$ then the vector fields $U+\\mu V$ and $V+\\nu U$ are eigenvectors of $\\bar{J}_t$ .", "We obtain the desired frames by normalizing them.", "Let $U^t,V^t$ be the frame given by Lemma REF .", "Since $\\det \\bar{D}_t=1$ there is $\\theta \\in C^\\infty (I\\times L)$ such that $\\bar{D}_t$ in terms of $\\lbrace U^t,V^t\\rbrace $ is of the form $\\bar{D}_t=\\begin{bmatrix}\\theta & 0\\\\0 & \\theta ^{-1}\\end{bmatrix}.$ The following calculations are all done for a fixed $t$ that is omitted in the sequel for simplicity of notation.", "Since $U,V$ are unit vector fields we have ${\\langle }\\nabla _UV,V{\\rangle }=0={\\langle }\\nabla _VU,U{\\rangle }$ .", "Thus $ \\nabla _UV=\\Lambda _1U-\\Lambda _1FV\\;\\;\\mbox{and}\\;\\;\\nabla _VU=-\\Lambda _2FU+\\Lambda _2V$ where $F={\\langle }U,V{\\rangle }$ .", "Then the Codazzi equation for $\\bar{D}$ is equivalent to the system of equations ${\\left\\lbrace \\begin{array}{ll}U(\\theta ^{-1})=\\Lambda _2(\\theta -\\theta ^{-1})\\vspace{4.25pt}\\\\V(\\theta )=\\Lambda _1(\\theta ^{-1}-\\theta ).\\end{array}\\right.", "}$ Multiplying the first equation by $2\\theta ^3$ , the second by $2\\theta $ and setting $\\tau =\\theta ^2$ yields ${\\left\\lbrace \\begin{array}{ll}U(\\tau )=2\\Lambda _2\\tau (1-\\tau )\\vspace{4.25pt}\\\\V(\\tau )=2\\Lambda _1(1-\\tau ).\\end{array}\\right.", "}$ Then any positive solution of this system other than $\\tau =1$ gives a Codazzi tensor $\\bar{D}$ with $\\det \\bar{D}=1$ .", "The integrability condition of the system is $U(\\Lambda _1)-\\Lambda _1\\Lambda _2+F(\\Lambda _1)^2-\\tau (V(\\Lambda _2)-\\Lambda _1\\Lambda _2+F(\\Lambda _2)^2)=0$ that is trivially satisfied if $ \\Lambda _1\\Lambda _2=U(\\Lambda _1)+F(\\Lambda _1)^2=V(\\Lambda _2)+F(\\Lambda _2)^2.$ Lemma 9 The hyperbolic surface $g_t\\colon L^2\\rightarrow \\mathbb {S}^n$ is of the first species of real type if and only if the condition (REF ) holds.", "Proof: Let $(u,v)$ be a coordinate system such that $\\partial _u=aU$ and $\\partial _v=bV$ for $a,b\\in
C^{\\infty }(L)$ .", "Then ${\\left\\lbrace \\begin{array}{ll}\\nabla _{\\partial _u}\\partial _v=b \\Lambda _1\\partial _u+a(U(\\log b)-F\\Lambda _1)\\partial _v\\vspace{4.25pt}\\\\\\nabla _{\\partial _v}\\partial _u=b(V(\\log a)-F\\Lambda _2)\\partial _u+a\\Lambda _2\\partial _v.\\end{array}\\right.", "}$ Since $[\\partial _u,\\partial _v]=0$ we have $ \\Lambda _1=V(\\log a)-F\\Lambda _2\\;\\;\\mbox{and}\\;\\; \\Lambda _2=U(\\log b)-F\\Lambda _1,$ and hence $\\nabla _{\\partial _u}\\partial _v=b\\Lambda _1\\partial _u+a\\Lambda _2\\partial _v.$ Hence the surface is of the first species of real type if $\\partial _u(b\\Lambda _1)=\\partial _v(a\\Lambda _2)=2ab\\Lambda _1\\Lambda _2$ which using (REF ) is verified to be equivalent to (REF ).", "Next we prove Theorem REF in the case when $f$ is hyperbolic.", "Proof: We assume that the hypersurface is parametrized by a special hyperbolic pair.", "Let $g_t\\colon L^2_t\\rightarrow \\mathbb {S}^n$ be the one-parameter family of surfaces determined by the Gauss maps of $f_t$ and $t\\in [0,\\epsilon )$ for $\\epsilon >0$ given by Lemmas REF and REF .", "For $U^t$ and $V^t$ as in Lemma REF we have from (REF ) that $\\Lambda _1$ and $\\Lambda _2$ are smooth on the parameter $t$ .", "Hence, if there is a sequence $\\lbrace t_n\\rbrace \\rightarrow 0$ in $[0,\\epsilon )$ such that for each $t_n$ the surface $g_{t_n}$ is of the first species of real type then (REF ) holds for each $t_n$ .", "By continuity (REF ) also holds for $t=0$ in contradiction with the assumption on $f$ not to be Sbrana-Cartan." ], [ "The elliptic case", "Next we treat the case when $f$ is elliptic, that is, when $\\bar{J}^2_t=-\\bar{I}$ for $t\\in I$ .", "Let $\\lbrace U^t,V^t=\\bar{J}_tU^t\\rbrace $ be a smooth one-parameter family of tangent frames.", "Since $\\bar{D}_t\\in \\mbox{span}\\lbrace \\bar{I},\\bar{J}\\rbrace $ and $\\det \\bar{D}_t=1$ then $\\bar{D}_t$ in that frame is of the form $\\bar{D}_t=\\begin{bmatrix}\\cos \\theta &-\\sin \\theta \\\\\\sin \\theta &\\cos \\theta \\end{bmatrix}$ where $\\theta \\in C^\\infty (L)$ .", "We consider the complexified tangent bundle $TL^ as well as the $ -linear extension of the tensors $\\bar{D}_t$ and $\\bar{J}_t$ denoted equally.", "Then the vector field $Z^t=1/2(U^t-iV^t)$ and $\\bar{Z^t}$ are eigenvectors of $\\bar{D}_t$ with respect to the eigenvalues $\\rho =\\cos \\theta +i\\sin \\theta $ and $\\bar{\\rho }$ , respectively.", "The following calculations are all done for a fixed $t$ that is omitted in the sequel for simplicity of notation.", "Now we consider the $-bilinear extensions of the metric andthe Levi-Civita connection.", "Then we write\\begin{equation} \\nabla _Z\\bar{Z}=\\Lambda _1Z+\\Lambda _2\\bar{Z}\\end{equation} and observe that $ ZZ=ZZ$.$ The Codazzi equation for $\\bar{D}$ evaluated in $Z$ and $\\bar{Z}$ is equivalent to $ Z(\\bar{\\rho })=\\bar{\\Lambda }_1(\\rho -\\bar{\\rho }).$ Since $\\rho \\bar{\\rho }=1$ , then that $Z(\\rho \\bar{\\rho })=0$ together with (REF ) yields $Z(\\rho )=\\rho ^2\\bar{\\Lambda }_1(\\bar{\\rho }-\\rho )=-\\rho ^2Z(\\bar{\\rho }).$ Now a straightforward computation of the integrability condition for (REF ) gives $Z(\\Lambda _1)-\\Lambda _1\\bar{\\Lambda }_1-\\Lambda _1\\Lambda _2=\\rho ^2(\\bar{Z}(\\bar{\\Lambda }_1)-\\Lambda _1\\bar{\\Lambda }_1-\\bar{\\Lambda }_1\\bar{\\Lambda }_2).$ Then this condition holds trivially if $ Z(\\Lambda _1)=\\Lambda _1\\bar{\\Lambda }_1+\\Lambda _1\\Lambda _2.$ Lemma 10 The elliptic surface $g_t\\colon L^2\\rightarrow \\mathbb {S}^n$ is of 
the first species of complex type if and only if the condition (REF ) holds.", "Proof: Let $(u,v)$ be complex conjugate coordinates such that $\\frac{1}{2}(\\partial _u-i\\partial _v)=\\partial _z=\\phi (z)Z$ and write $\\nabla _{\\partial _z}\\partial _{\\bar{z}}=\\Gamma \\partial _z+\\bar{\\Gamma }\\partial _{\\bar{z}}.$ A straightforward computation using () gives ${\\left\\lbrace \\begin{array}{ll}\\nabla _{\\partial _z}\\partial _{\\bar{z}}=\\phi \\bar{\\phi }\\Lambda _1Z+(\\phi Z(\\bar{\\phi })+\\phi \\bar{\\phi }\\Lambda _2)\\bar{Z}\\vspace{4.25pt}\\\\\\nabla _{\\partial _{\\bar{z}}}\\partial _z=(\\bar{\\phi }\\bar{Z}(\\phi )+\\phi \\bar{\\phi }\\bar{\\Lambda }_2)Z+\\phi \\bar{\\phi }\\bar{\\Lambda }_1\\bar{Z}.\\end{array}\\right.", "}$ Hence $\\nabla _{\\partial _z}\\partial _{\\bar{z}}=\\nabla _{\\partial _{\\bar{z}}}\\partial _z$ yields $ Z(\\bar{\\phi })+\\bar{\\phi }\\Lambda _2=\\bar{\\phi }\\bar{\\Lambda }_1$ and $ \\Gamma =\\bar{\\phi }\\Lambda _1.$ Recall that the surface is of the first species of complex type if $ \\Gamma _z=2\\Gamma \\bar{\\Gamma }.$ Since (REF ) gives $\\Gamma _z=\\phi Z(\\bar{\\phi })\\Lambda _1+\\phi \\bar{\\phi }Z(\\Lambda _1)$ we have by (REF ) that (REF ) is equivalent to (REF ).", "Lemma 11 Assume that $g_0=g$ is not a minimal surface.", "Then there is a local smooth one-parameter family of vector fields $U^t,V^t\\in \\mathfrak {X}(L)$ of unit norm with respect to the metric induced by $g_t$ such that $\\bar{J}^tU^t=V^t$ , $\\bar{J}^tV^t=-U^t$ for $t\\in (-\\epsilon ,\\epsilon )$ and small $\\epsilon >0$ .", "Proof: Take a local frame $U^t,V^t=\\bar{J}^tU^t\\in \\mathfrak {X}(L)$ .", "Again, for simplicity of notation, we omit the index $t$ .", "We search for functions $d,e\\in C^\\infty (I\\times L)$ such that $\\tilde{U}=dU+eV$ and $\\tilde{V}=\\bar{J}\\tilde{U} =dV-eU$ have unit length, that is, we look for solutions of the system ${\\left\\lbrace \\begin{array}{ll}d^2\\Vert U\\Vert ^2+2de{\\langle }U,V{\\rangle }+e^2\\Vert V\\Vert ^2=1\\vspace{4.25pt}\\\\e^2\\Vert U\\Vert ^2-2de{\\langle }U,V{\\rangle }+d^2\\Vert V\\Vert ^2=1.\\end{array}\\right.", "}$ Equivalently, we want solutions of the equations $(d^2+e^2)(\\Vert U\\Vert ^2+\\Vert V\\Vert ^2)=2$ and $(d^2-e^2)\\Vert U\\Vert ^2+4de{\\langle }U,V{\\rangle }-(d^2-e^2)\\Vert V\\Vert ^2=0.$ The latter is a second-degree equation in $d/e$ given by $\\frac{d^2}{e^2}(\\Vert U\\Vert ^2-\\Vert V\\Vert ^2)+4\\dfrac{d}{e}{\\langle }U,V{\\rangle }+\\Vert V\\Vert ^2-\\Vert U\\Vert ^2=0$ which is not trivial since the surfaces $g_t$ are not minimal for $t$ small.", "Finally, we prove Theorem REF in the case when $f$ is elliptic.", "Proof: We assume that the hypersurface is parametrized by a special elliptic pair and argue similarly to the hyperbolic case, but now we use Lemmas REF and (REF ).", "Notice that since $g$ is not of the first species of complex type, it is not a minimal surface.", "Example 12 Any simply connected isometric minimal immersion $f\\colon M^{2n}\\rightarrow \\mathbb {R}^{2n+1}$ , $n\\ge 2$ , of a nonflat Kaehler manifold is a Sbrana-Cartan hypersurface in the continuous class since there is an associated one-parameter family $f^{\\theta }\\colon M^{2n}\\rightarrow \\mathbb {R}^{2n+1}$ , $\\theta \\in [0,\\pi )$ of isometric submanifolds given by $f^{\\theta }=\\cos \\theta f+\\sin \\theta \\bar{f}$ all having the same Gauss map; see Theorem $15.8$ in [7].", "Moreover, the conjugate submanifold $\\bar{f}=f^{\\pi /2}$ satisfies ${\\langle }f_*X,\\bar{f}_*X{\\rangle }=0$ for any
$X\\in \\mathfrak {X}(M)$ , and hence is an infinitesimal bending of $f$ ; see Proposition $15.9$ in [7].", "Then the submanifold $f_t=f+t\\bar{f}$ for any $t\\in \\mathbb {R}$ is also a Sbrana-Cartan hypersurface in the continuous class since it is homothetic to an element in the family associated to $f$ ." ], [ "Acknowledgment", "The first author thanks the Mathematics Department of the University of Murcia, where most of this work was developed, for the kind hospitality during his visit.", "The second author is supported by CAPES-PNPD Grant 88887.469213/2019-00.", "This research is part of the grant PID2021-124157NB-I00, funded by MCIN/ AEI/10.13039/501100011033/ “ERDF A way of making Europe\".", "Marcos Dajczer IMPA – Estrada Dona Castorina, 110 22460–320, Rio de Janeiro – Brazil e-mail: [email protected] Miguel Ibieta Jimenez Universidade de São Paulo Instituto de Ciências Matemáticas e de Computação Av.", "Trabalhador São Carlense 400 13566–590, São Carlos – Brazil e-mail: [email protected]" ] ]
2210.07813
[ [ "On the asymptotic behaviour of Sudler products for badly approximable\n numbers" ], [ "Abstract Given a badly approximable number $\\alpha$, we study the asymptotic behaviour of the Sudler product defined by $P_N(\\alpha) = \\prod_{r=1}^N 2 | \\sin \\pi r \\alpha |$.", "We show that $\\liminf_{N \\to \\infty} P_N(\\alpha) = 0$ and $\\limsup_{N \\to \\infty} P_N(\\alpha)/N = \\infty$ whenever the sequence of partial quotients in the continued fraction expansion of $\\alpha$ exceeds $7$ infinitely often.", "This improves results obtained by Lubinsky for the general case, and by Grepstad, Neum\\\"uller and Zafeiropoulos for the special case of quadratic irrationals.", "Furthermore, we prove that this threshold value $7$ is optimal, even when restricting $\\alpha$ to be a quadratic irrational, which gives a negative answer to a question of the latter authors." ], [ "Introduction", "For $\\alpha \\in \\mathbb {R}$ and $N$ a natural number, the Sudler product is defined as $P_N(\\alpha ) := \\prod _{r=1}^{N} 2 \\left|\\sin \\pi r\\alpha \\right|.$ Note that by 1-periodicity of $P_N(\\alpha )$ and the fact that $P_N(\\alpha ) = 0$ for rational $\\alpha $ and $N$ sufficiently large, the asymptotic analysis of such products can be restricted to irrational numbers $\\alpha \\in [0,1]$ .", "Sudler products appear in many different areas of mathematics that include, among others, restricted partition functions [35], KAM theory [23] and Padé approximants [29], and were used in the context of almost Mathieu operators in the solution of the Ten Martini Problem by Avila and Jitomirskaya [7].", "Recently, Aistleitner and Borda [1], [2] established a connection between Sudler products and the work of Bettin and Drappeau [10], [11], [12] on the order of magnitude of the Kashaev invariant of certain hyperbolic knots, following the work of Zagier [37].", "Writing $\\Vert P_N \\Vert _{\\infty } = \\max _{0 < \\alpha < 1} P_N(\\alpha )$ , Erdős and Szekeres [16] claimed that the limit $\\lim _{N \\rightarrow \\infty } \\Vert P_N \\Vert _{\\infty }^{1/N}$ exists and equals a value between 1 and 2, without formally proving it.", "This was done by Sudler [35] and Wright [36] who showed that $\\lim _{N \\rightarrow \\infty } \\Vert P_N \\Vert _{\\infty }^{1/N} = C \\approx 1.22$ .", "Inspired by this, the order of growth of Sudler products was extensively examined from a metric point of view.", "For more results in this area, we refer the reader to [6], [8], [9], [13], [15], [17], [24], [25].", "In this paper, we consider the pointwise behaviour of Sudler products.", "First studied by Erdős and Szekeres [16], it was proven that $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0, \\quad \\limsup _{N \\rightarrow \\infty } P_N(\\alpha ) = \\infty $ holds for almost every $\\alpha $ .", "In contrast to the result on $\\lim _{N \\rightarrow \\infty } \\Vert P_N \\Vert _{\\infty }^{1/N}$ , Lubinsky and Saff [27] showed that for almost every $\\alpha $ , $\\lim _{N \\rightarrow \\infty } P_N(\\alpha )^{1/N} = 1$.", "For distributional results for varying $N$ and fixed $\\alpha $ , see e.g.", "[14], [21].", "Mestel and Verschueren [30] examined the behaviour of $P_N(\\phi )$ where $\\phi = \\frac{\\sqrt{5}-1}{2} = [0;1,1,1,\\ldots ]$ is the fractional part of the Golden Ratio.", "They showed that the limit along the Fibonacci sequence $\\lim \\limits _{n \\rightarrow \\infty } P_{F_n}(\\phi )$ exists.", "As the Fibonacci numbers are the denominators of the continued fraction convergents of $\\phi $ , this hints at 
a connection between the Sudler product of $\\alpha $ and its Diophantine approximation properties.", "Indeed, this was established in [1], [4], [20], especially for the case where $\\alpha $ is a quadratic irrational.", "Returning to (REF ), Lubinsky [28] showed that $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ holds for any $\\alpha \\in \\mathbb {R}$ that has unbounded continued fraction coefficients.", "In fact, he proved that there is a finite cutoff value $K$ (in [18] it was shown that the proof of Lubinsky actually gives a value of $K \\approx e^{800}$ ) such that whenever an irrational $\\alpha = [a_0;a_1,a_2,\\ldots ]$ fulfills $\\limsup _{n \\rightarrow \\infty } a_n \\geqslant K$ , then $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ .", "Lubinsky conjectured that $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ holds for all $\\alpha $ , which was disproven by Grepstad, Kaltenböck and Neumüller [19], as they proved that $\\liminf _{N \\rightarrow \\infty } P_N(\\phi ) > 0.$ Aistleitner, Technau and Zafeiropoulos [4] found a close connection between the behaviour of $\\liminf \\limits _{N \\rightarrow \\infty } P_N(\\alpha )$ and $\\limsup \\limits _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N}$ , which was generalized by Aistleitner and Borda in [1]: They showed that for badly approximable numbers $\\alpha $ , we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0 \\; \\Longleftrightarrow \\; \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} = \\infty .$ (Although (REF ) is only stated for quadratic irrationals, the result follows for any badly approximable number, as Aistleitner and Borda show that for $0 \\leqslant N \\leqslant q_k$ , we have $\\log P_N(\\alpha ) + \\log P_{q_k-N-1}(\\alpha ) = \\log q_k + \\mathcal {O}(\\log (1 + \\max \\limits _{k \\in \\mathbb {N}} a_k)).$ )", "It was shown in [4] that for $\\beta (b) := [0;\\overline{b}]$ , we have $\\liminf \\limits _{N \\rightarrow \\infty } P_N(\\beta ) > 0$ and $\\limsup \\limits _{N \\rightarrow \\infty } \\frac{P_N(\\beta )}{N} < \\infty $ , if and only if $b \\leqslant 5$ .", "Recently, the sufficient condition for $\\liminf \\limits _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ and $\\limsup \\limits _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} = \\infty $ was generalized by Grepstad, Neumüller and Zafeiropoulos [20] to arbitrary quadratic irrationals $\\alpha $ .", "They showed that if $\\alpha = [a_0;a_1,\\ldots ,a_{p},\\overline{a_{p+1},\\ldots ,a_{p + \\ell }}]$ and $a_K := \\limsup _{n \\rightarrow \\infty } a_n \\geqslant 23$ , then $\\liminf \\limits _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ and $\\limsup \\limits _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} = \\infty $ , also showing that this value can be decreased to 22 when the (shortest) period length $\\ell $ is even.", "In the following theorem, we prove that this value can be decreased to 7, even when considering arbitrary badly approximable numbers.", "Theorem 1 Let $\\alpha $ be an irrational number with continued fraction expansion $\\alpha = [a_0;a_1,a_2,\\ldots ]$ that satisfies $\\limsup _{n \\rightarrow \\infty } a_n \\geqslant 7.$ Then $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0, \\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} = \\infty .$ Based on numerical evidence, Grepstad, Neumüller and Zafeiropoulos [20] speculated that $\\liminf _{N \\rightarrow \\infty }P_N(\\alpha ) = 0$ holds for any quadratic irrational that fulfills $\\limsup _{n
\\rightarrow \\infty } a_n(\\alpha ) \\geqslant 6$ , which would be in accordance with the result obtained by Aistleitner, Technau and Zafeiropoulos [4] for the special case of period length $\\ell = 1$ .", "We show that this remains true when $\\ell =2$ .", "Theorem 2 Let $\\alpha $ be a quadratic irrational with continued fraction expansion $\\alpha = [a_0;a_1,\\ldots ,a_p,\\overline{a_{p+1},a_{p+2}}]$ and assume that $a_K = \\max \\lbrace a_{p+1},a_{p+2}\\rbrace \\geqslant 6$ .", "Then we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0, \\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} = \\infty .$ However, the following theorem proves that in general, the condition $\\limsup _{n \\rightarrow \\infty }a_n(\\alpha ) \\geqslant 6$ does not imply $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ , even when restricting to quadratic irrationals.", "Theorem 3 Let $\\alpha = [0;\\overline{6,5,5}]$ or $\\alpha = [0;\\overline{5,4}]$ .", "Then we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0, \\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} < \\infty .$ Clearly, this gives a negative answer to the question raised by Grepstad, Neumüller and Zafeiropoulos.", "Further, note that the combination of Theorems REF (respectively Theorem REF ) and REF leads to the result that the threshold value for $\\limsup _{n \\rightarrow \\infty } a_n(\\alpha )$ in Theorem REF (respectively Theorem REF ) is indeed optimal.", "This strongly improves the current best bounds, which were 23 for quadratic irrationals and $K \\approx e^{800}$ for arbitrary irrationals." ], [ "Directions for further research", " Fixing a period length $\\ell \\geqslant 1$ , what is the smallest value for $a_K = \\limsup _{n \\rightarrow \\infty }a_n$ such that any $\\alpha = [a_0;a_1,\\ldots ,a_{p},\\overline{a_{p+1},\\ldots ,a_{p + \\ell }}]$ with $\\ell $ chosen minimal, fulfills (REF )?", "Denoting this optimal value by $K_{\\ell }$ , the results from [4] and Theorems REF  – REF show that $K_1 = 6, K_2 = 6, K_3 = 7, K_{\\ell } \\leqslant 7$ for $\\ell \\geqslant 4$ .", "We believe that $K_{\\ell } = 7$ for any $\\ell \\geqslant 4$ , and that this can be proven by considering the irrationals $\\alpha _{\\ell } = [0;\\overline{6,\\underbrace{5,\\ldots ,5}_{\\ell -1 \\text{ times}}}]$ .", "In principle, this should be possible to prove with the methods applied in Theorem REF for any fixed $\\ell $ , but the combinatorial and computational effort both increase for larger $\\ell $ .", "Also, for $\\ell \\rightarrow \\infty $ , one needs to establish an additional argument.", "We are also interested in the dual problem in the following sense: what is the maximal value of $a_K$ (or which other property) that guarantees $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0, \\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} < \\infty $ ?", "For quadratic irrationals with $\\ell = 1$ , [4] proves that this is the case for $a_K = 5$ .", "However, for general quadratic irrationals (and thus badly approximable numbers), the optimal value cannot exceed 3, since we show as a byproduct of the proof of Theorem REF that $\\alpha = [0;\\overline{4,1}]$ still fulfills $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ .", "We believe that the value $a_K = 3$ is optimal; however, our tools are not strong enough to prove this: the criterion used to prove Theorem REF uses just a sufficient, but not necessary, condition for $\\alpha $ to fulfill $\\liminf _{N
\\rightarrow \\infty } P_N(\\alpha ) = 0$ , and the methods in Theorems REF and REF can only be applied to a fixed irrational $\\alpha $ (modulo preimages under Gauss map iterations), and not to infinitely many $\\alpha $ at the same time.", "Thus again, we could only prove such results for quadratic irrationals with fixed period length $\\ell $ .", "Lubinsky [28] showed that for any badly approximable $\\alpha $ , there exist $c_1(\\alpha ), c_2(\\alpha )$ such that $N^{-c_1} \\ll P_N(\\alpha ) \\ll N^{c_2}.$ Note that this is in contrast to the almost sure behaviour (with respect to Lebesgue measure).", "For almost every $\\alpha $ and every $\\varepsilon > 0$ , we have $\\log N \\log \\log N \\ll \\log P_N(\\alpha ) \\ll \\log N (\\log \\log N)^{1+ \\varepsilon },$ which can be deduced from [14] and [28].", "It was shown in [1] that if $C_1(\\alpha ),C_2(\\alpha )$ denote the infima of all $c_1,c_2$ fulfilling (REF ), we have $C_2(\\alpha ) = C_1(\\alpha ) +1$ .", "For more results in this direction, we refer the reader to [1], [3] where several statements about estimates on $C_1(\\alpha ), C_2(\\alpha )$ are proven.", "If $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0$ , then clearly we have $C_1(\\alpha ) = 0$ .", "To the best of our knowledge, the precise value of $C_1(\\alpha )$ is not known for any $\\alpha $ that fulfills $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ .", "In [22], we proved that for $\\beta = [0;\\overline{b}]$ , $b \\leqslant 5$ , and $q_{n-1} \\leqslant N \\leqslant q_n$ , we have that $P_{q_{n -1}}(\\beta ) \\leqslant P_N(\\beta ) \\leqslant P_{q_n-1}(\\beta )$ .", "We are interested in whether a similar behaviour holds for any badly approximable number $\\alpha $ (with the previous inequality adjusted to suitable subsequences $q_{n_k}$ ), provided that $n$ is chosen sufficiently large, and $\\alpha $ fulfills $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0$ ." ], [ "Structure of the paper", "The rest of the article is structured as follows: In Section , we recall basic properties of continued fractions that are used in this article and review already established results on perturbed Sudler products and, if $\\alpha $ is a quadratic irrational, their limiting behaviour along subsequences.", "In Section , we introduce a sufficient condition for badly approximable $\\alpha $ to deduce $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ , which can be applied to infinitely many irrationals at the same time, and using this tool, we prove Theorem REF .", "In Sections and , we prove Theorems REF and REF , respectively, by explicitly computing values of finitely many limit functions evaluated at certain perturbations.", "This article is structured in a way such that the proofs in Sections  –  are almost independent of each other, only relying on statements proven in Section ."
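The quantities studied above are easy to experiment with numerically. The following is a minimal Python sketch (illustrative only; numpy assumed, and the function names sudler, cf_value and denominators are ours, not from the paper) that evaluates $P_N(\alpha )$ directly from the definition together with the convergent denominators $q_n$ of a quadratic irrational such as $[0;\overline{6,5,5}]$, so that the behaviour of $P_{q_n}(\alpha )$ along subsequences can be observed.

import numpy as np

def sudler(alpha, N):
    # P_N(alpha) = prod_{r=1}^{N} 2 |sin(pi r alpha)|
    r = np.arange(1, N + 1)
    return np.prod(2.0 * np.abs(np.sin(np.pi * r * alpha)))

def cf_value(digits):
    # numerical value of [0; a_1, a_2, ...] from a finite truncation
    x = 0.0
    for a in reversed(digits):
        x = 1.0 / (a + x)
    return x

def denominators(digits):
    # q_1 = a_1 and q_{k+1} = a_{k+1} q_k + q_{k-1}, with q_0 = 1
    q_prev, q = 1, digits[0]
    qs = [q]
    for a in digits[1:]:
        q_prev, q = q, a * q + q_prev
        qs.append(q)
    return qs

alpha = cf_value([6, 5, 5] * 20)       # alpha = [0; 6,5,5,6,5,5,...]
for q in denominators([6, 5, 5] * 2):  # q_1, ..., q_6
    print(q, sudler(alpha, q))         # P_{q_n}(alpha) along the convergents

Double precision limits such direct evaluation to moderate $N$, since the accumulated error in $r\alpha $ grows with $r$; for the small convergents used here this is unproblematic.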
], [ "Notation", "Given two functions $f,g:(0,\\infty )\\rightarrow \\mathbb {R},$ we write $f(t) = \\mathcal {O}(g(t)), f \\ll g$ or $g \\gg f$ when $\\limsup _{t\\rightarrow \\infty } \\frac{|f(t)|}{|g(t)|} < \\infty $ .", "Any dependence of the value of the limsup above on potential parameters is denoted by the appropriate subscripts.", "Given a real number $x\\in \\mathbb {R},$ we write $\\lbrace x\\rbrace $ for the fractional part of $x$ and $\\Vert x\\Vert =\\min \\lbrace |x-k|: k\\in \\mathbb {Z}\\rbrace $ for the distance of $x$ from its nearest integer.", "We denote the characteristic function of a relation $R$ by ${1}_R$ and understand the value of empty sums and products as 0 respectively 1." ], [ "Continued fractions", "For convenience of the reader, we recall here some well-known facts on continued fractions that are used in this paper.", "For a more detailed background, see the classical literature e.g.", "[5], [33], [34].", "Furthermore, we define notations in context of continued fractions that are particularly useful here.", "Every irrational $\\alpha $ has a unique infinite continued fraction expansion $[a_0;a_1,...]$ with convergents $p_k/q_k = [a_0;a_1,...,a_k]$ fulfilling the recursions $p_{k+1} = p_{k+1}(\\alpha ) = a_{k+1}p_k + p_{k-1}, \\quad q_{k+1} = q_{k+1}(\\alpha ) = a_{k+1}q_k + q_{k-1}, \\quad k \\geqslant 1$ with initial values $p_0 = a_0,\\; p_1 = a_1a_0 +1,\\; q_0 = 1,\\; q_1 = a_1$ .", "One can deduce from these recursions that $q_k$ grows exponentially fast in $k$ ; in particular, we have for any $k,j \\in \\mathbb {N}$ $\\frac{q_{k}}{q_{k+j}} \\gg _{\\alpha } C^j$ where $0 < C(\\alpha ) < 1$ .", "Defining $\\delta _k := \\Vert q_k \\alpha \\Vert ,\\quad \\mathchoice{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\displaystyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\displaystyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\textstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\textstyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptscriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptscriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}_{k} = [0;a_{k},a_{k-1},\\ldots ,a_1], \\quad \\vec{\\alpha }_{k} = [0;a_k,a_{k+1},\\ldots ],\\quad k \\geqslant 1,$ we have $q_k\\alpha &\\equiv (-1)^k\\delta _k \\pmod {1},\\\\q_k\\delta _k &= \\frac{1}{a_{k+1} + \\vec{\\alpha }_{k+2} + \\mathchoice{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\displaystyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\displaystyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\textstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\textstyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptscriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptscriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}_{k}}.$ We call an irrational number $\\alpha $ badly approximable if $\\limsup _{n \\rightarrow \\infty } a_n(\\alpha ) < \\infty $ .", "For fixed $a \\in \\mathbb 
For fixed $a \in \mathbb {N}$, we define $\mathcal {B}(a,M) := \lbrace \alpha = [0;a_1,a_2,\ldots ] \in (0,1) : \max_{i \geqslant M+1} a_i = a \rbrace .$ If $\beta = [a_0;a_1,a_2,\ldots ]$ is badly approximable with $\limsup_{n \rightarrow \infty } a_n(\beta ) = a$, then there exists a $K_0 \in \mathbb {N}$ such that $T^{K_0}(\beta ) \in \mathcal {B}(a,M)$, where $T$ denotes the Gauss map defined by $T(x) = {\left\lbrace \begin{array}{ll} 0 & \text{ if } x = 0,\\ \left\lbrace \frac{1}{x}\right\rbrace & \text{ if } 0 < x \leqslant 1.\end{array}\right.}$ Fixing an irrational $\alpha = [a_0;a_1,\ldots ]$, the Ostrowski expansion of a non-negative integer $N$ is the unique representation $N = \sum_{\ell = 0}^{k} b_{\ell }q_{\ell }, \quad \text{where } b_{k} \ne 0, \;\; 0 \leqslant b_0 < a_1, \;\; 0 \leqslant b_{\ell } \leqslant a_{\ell +1} \text{ for } \ell \geqslant 1,$ with the additional rule that $b_{\ell -1} = 0$ whenever $b_{\ell } = a_{\ell +1}$. If $\alpha = \frac{p}{q}$ is a rational number (with $p,q$ coprime), then $\alpha = [a_0;a_1,\ldots ,a_{k}]$ for some $k \in \mathbb {N}$ with $a_k > 1$. For $N < q = q_k$, the Ostrowski expansion is defined as in the irrational setting.

We now turn our attention to the case where $\alpha$ is a quadratic irrational, which is of particular interest in Theorems REF and REF. We define $\mathcal {Q}(a) := \lbrace \alpha = [0;\overline{a_1,\ldots ,a_{\ell }}] \in (0,1) : \ell \in \mathbb {N}, \max_{1 \leqslant i \leqslant \ell } a_i = a\rbrace ,$ where $(\overline{a_1,\ldots ,a_{\ell }}) := (a_1,\ldots ,a_{\ell },a_1,\ldots ,a_{\ell },a_1,\ldots ,a_{\ell },\ldots )$. As before, each quadratic irrational $\beta = [a_0;a_1,\ldots ,a_p,\overline{a_{p+1},\ldots , a_{p+\ell }}]$ with $\max_{p+1 \leqslant k \leqslant \ell + p} a_k = a$ fulfills $T^p(\beta ) \in \mathcal {Q}(a)$. We will see later that the arguments we use are invariant under the Gauss map, so it suffices to examine quadratic irrationals $\alpha = [0;\overline{a_1,\ldots ,a_{\ell }}] \in \mathcal {Q}(a)$. For fixed $\ell$, we denote by $[k]$ the smallest non-negative residue of $k \bmod \ell$. Furthermore, we adopt the following notation from [20]: for ${\bf a} = (a_1,\ldots ,a_{\ell })$, we define the permutation operators $\tau _r, \sigma _r$ for $0 \leqslant r \leqslant \ell -1$ by $\tau _r({\bf a}) := (a_{r+1},\ldots ,a_{\ell },a_1,\ldots ,a_r)$ and $\sigma _r({\bf a}) := (a_{r-1},\ldots ,a_1,a_{\ell },\ldots ,a_r) \text{ if } r \geqslant 2, \qquad \sigma _0({\bf a}) := (a_{\ell -1},\ldots ,a_1,a_{\ell }), \quad \sigma _1({\bf a}) := (a_{\ell },\ldots ,a_1).$ The corresponding quadratic irrationals are denoted by $\alpha _{\tau _r} := [0;\overline{a_{r+1},\ldots ,a_{\ell },a_1,\ldots ,a_r}], \quad \alpha _{\sigma _r} := [0;\overline{a_{r-1},\ldots ,a_1,a_{\ell },\ldots ,a_r}].$ With this notation, we obtain for every $0 \leqslant r \leqslant \ell -1$ that $\lim_{m \rightarrow \infty } \frac{q_{m\ell + r -1}}{q_{m\ell +r}} = \alpha _{\sigma _r}, \quad \lim_{m \rightarrow \infty } q_{m\ell + r}\delta _{m\ell + r} = \frac{1}{a_{r+1}+ \alpha _{\tau _{[r+2]}}+ \alpha _{\sigma _r}} =: C(r).$
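The Ostrowski digits can be computed by the usual greedy algorithm. The sketch below (illustration only; helper names are ours) also verifies the digit constraints stated above, using exact integer arithmetic:

```python
a = [6, 5] * 10                      # a[i] = a_{i+1}
q = [1, a[0]]
for ak in a[1:]:
    q.append(ak * q[-1] + q[-2])

def ostrowski(N):
    # greedy digits b_0, ..., b_k with respect to the denominators q[0..k]
    b = [0] * len(q)
    for k in range(len(q) - 1, -1, -1):
        b[k], N = divmod(N, q[k])
    return b

for N in range(1, 5000):
    b = ostrowski(N)
    assert sum(bk * qk for bk, qk in zip(b, q)) == N
    assert 0 <= b[0] < a[0]                  # b_0 < a_1
    for l in range(1, len(b) - 1):
        assert b[l] <= a[l]                  # b_l <= a_{l+1}
        if b[l] == a[l]:
            assert b[l - 1] == 0             # b_{l-1} = 0 whenever b_l = a_{l+1}
print("all Ostrowski digit constraints verified")
```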
Perturbed Sudler products

All results in this paper rely on a decomposition approach that was used implicitly already in [19] to prove $\liminf_{N \rightarrow \infty }P_N(\phi ) > 0$, and more explicitly in later literature (see e.g. [1], [2], [3], [4], [20], [21], [22]). The Sudler product $P_N(\alpha )$ is decomposed into a finite product with factors of the form $P_{q_n}(\alpha ,\varepsilon ) := \prod_{r=1}^{q_n} 2 \left|\sin \Big (\pi \Big (r\alpha + (-1)^{n}\frac{\varepsilon }{q_n}\Big )\Big )\right|,$ a fact that is summarized by the following proposition:

Proposition 4. Let $N = \sum_{i=0}^{n} b_{i}q_i(\alpha )$ be the Ostrowski expansion of an arbitrary integer $q_n \leqslant N < q_{n+1}$. For $0 \leqslant i \leqslant n$ and $k \in \mathbb {N}$, we define $\varepsilon _{i,k}(N):= q_i\left(k\delta _i + \sum_{j=1}^{n-i} (-1)^jb_{i+j} \delta _{i+j}\right)$ and $K_i(N) := \prod_{c_i= 1}^{b_i(N)-1} P_{q_i}(\alpha ,\varepsilon _{i,c_i}(N))\cdot P_{q_{i-1}}(\alpha ,\varepsilon _{i-1,0}(N))^{{1}_{[b_{i-1}(N) \ne 0]}},$ where $b_{-1} := 0$. Then we have $P_N(\alpha ) = \prod_{i=0}^{n}\prod_{c_i= 0}^{b_i-1} P_{q_i}(\alpha ,\varepsilon _{i,c_i}(N)),$ as well as $P_N(\alpha ) = P_{q_n}(\alpha ) \cdot \prod_{i = 1}^n K_i(N).$

For $N = \sum_{i=0}^n b_{i}q_i(\alpha )$ fixed, we define $M_{i,k} = M_{i,k}(N) = \sum_{j=i+1}^{n} b_jq_j + kq_i, \quad i = 0,\ldots , n, \quad k = 0,\ldots ,b_i-1.$ Using the approach of [1], [4], [20], we obtain $P_N(\alpha ) = \prod_{i=0}^{n}\prod_{c_i= 0}^{b_i-1} P_{q_i}(\alpha ,\tilde{\varepsilon }_{i,c_i}),$ where, by periodicity of the sine, any $\tilde{\varepsilon }_{i,c_i} = \tilde{\varepsilon }_{i,c_i}(N)$ that fulfills the relation $\frac{(-1)^i\tilde{\varepsilon }_{i,c_i}}{q_i} \equiv M_{i,c_i}\alpha \pmod {1}$ can be chosen. By (REF), we have $M_{i,c_i}\alpha = c_iq_i\alpha + \sum_{j=1}^{n-i} b_{i+j}q_{i+j}\alpha \equiv (-1)^i\left(c_i\delta _i + \sum_{j=1}^{n-i} (-1)^jb_{i+j}\delta _{i+j}\right) \pmod {1},$ thus $\varepsilon _{i,c_i}(N)$ fulfills (REF). This proves (REF), whereas (REF) follows by regrouping the products.

Mestel and Verschueren [30] showed that $P_{F_n}(\phi ) = P_{F_n}(\phi ,0)$ converges to some positive constant $C_1$. Aistleitner, Technau and Zafeiropoulos showed that for quadratic irrationals with period length 1, the limit $G(\alpha ,\varepsilon ) := \lim_{n \rightarrow \infty }P_{q_n}(\alpha ,\varepsilon )$ exists and that the convergence is uniform on compact intervals. The authors also give a rather long closed-form expression for $G(\alpha ,\varepsilon )$, which enabled them to compute $G(\alpha ,\varepsilon )$ approximately and, in particular, $\lim_{n \rightarrow \infty } P_{q_n}(\alpha ) = G(\alpha ,0)$. This result was generalized in [20] to quadratic irrationals with arbitrary period length in the following way:

Theorem A (Grepstad, Neumüller, Zafeiropoulos [20]). Let $\alpha := [0;\overline{a_1,\ldots ,a_{\ell }}]$ and let $P_{q_n}(\alpha ,\varepsilon )$ be as in (REF). Then for each $1 \leqslant r \leqslant \ell$, $P_{q_{m\ell + r}}(\alpha ,\varepsilon )$ converges locally uniformly to a function $G_{r}(\alpha ,\varepsilon )$. The functions $G_r(\alpha ,\cdot )$ are continuous and $C^{\infty }$ on every interval where they are non-zero.

The limit functions play a crucial role in the sufficient condition used in [20] to obtain their results.
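A minimal numerical check of the decomposition in Proposition REF, shown here for $\alpha = [0;\overline{6,5}]$ and a single value of $N$ (illustration only; helper names are ours and float precision is assumed adequate):

```python
import math

a = [6, 5] * 10                      # a[i] = a_{i+1}
def tail(seq):
    x = 0.0
    for ai in reversed(seq):
        x = 1.0 / (ai + x)
    return x

alpha = tail(a)
p, q = [0, 1], [1, a[0]]
for ak in a[1:]:
    p.append(ak * p[-1] + p[-2])
    q.append(ak * q[-1] + q[-2])
delta = [abs(qk * alpha - pk) for pk, qk in zip(p, q)]

def P(alpha_, qn, n, eps):
    # P_{q_n}(alpha, eps), with perturbation sign (-1)^n
    out = 1.0
    for r in range(1, qn + 1):
        out *= 2.0 * abs(math.sin(math.pi * (r * alpha_ + (-1) ** n * eps / qn)))
    return out

def ostrowski(N):
    b = [0] * len(q)
    for k in range(len(q) - 1, -1, -1):
        b[k], N = divmod(N, q[k])
    return b

N = 777
b = ostrowski(N)
n = max(i for i, bi in enumerate(b) if bi > 0)
lhs = 1.0
for i in range(n + 1):
    for c in range(b[i]):
        eps = q[i] * (c * delta[i] + sum((-1) ** j * b[i + j] * delta[i + j]
                                         for j in range(1, n - i + 1)))
        lhs *= P(alpha, q[i], i, eps)
print(lhs, P(alpha, N, 0, 0.0))      # the two values agree up to float error
```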
The following result is a straightforward generalization of [4] to quadratic irrationals with arbitrary period length $\ell$:

Theorem B (Grepstad, Neumüller, Zafeiropoulos [20]). Let $\beta = [0;a_1,\ldots ,a_p,\overline{a_{p+1},\ldots ,a_{p +\ell }}]$ be an arbitrary quadratic irrational, let $\alpha = T^p(\beta ) = [0;\overline{a_1,\ldots ,a_{\ell }}]$, and let $G_{r}(\alpha ,\varepsilon )$ be as in Theorem A. If there exists some $1 \leqslant r \leqslant \ell$ such that $G_{r}(\alpha ,0) < 1$, then $\liminf_{N \rightarrow \infty } P_N(\beta ) = 0, \quad \limsup_{N \rightarrow \infty } \frac{P_N(\beta )}{N} = \infty .$

The following lemma is helpful for any asymptotic analysis of Sudler products using the decomposition into perturbed products. It implies that the behaviour of finitely many shifted products is negligible for the asymptotic order of magnitude.

Lemma 5. Let $\alpha \in \mathcal {Q}(a)$, $N \in \mathbb {N}$ and let $\varepsilon _{i,c_i}(N)$ be as before. Then we have $1 \ll_{\alpha } P_{q_k}(\alpha ,\varepsilon _{i,c_i}(N)) \ll_{\alpha } 1$ uniformly in $N$ and $k$.

The fact that $P_{q_k}(\alpha ,\varepsilon _{i,c_i}(N)) \gg_{\alpha } 1$ is proven in [1], whereas $P_{q_k}(\alpha ,\varepsilon _{i,c_i}(N)) \ll_{\alpha } 1$ follows immediately from the convergence to the uniformly bounded limit functions $G_r$.

Proof of Theorem 

As we want to prove a statement on arbitrary badly approximable numbers, the limit functions from Theorem A are not available. However, the following lemma can be seen as a generalization of Theorem B to arbitrary badly approximable irrationals $\alpha$.

Lemma 6. Let $\alpha$ be a badly approximable number and assume that there exist real numbers $\delta > 0$ and $0 < c < 1$ such that for infinitely many $k$ we have $\sup_{\varepsilon \in (-\delta ,\delta )} P_{q_k}(\alpha ,\varepsilon ) < c.$ Then $\liminf_{N \rightarrow \infty } P_N(\alpha ) = 0, \quad \limsup_{N \rightarrow \infty } \frac{P_N(\alpha )}{N} = \infty .$

The proof follows along the lines of [4] and [20]. As there are infinitely many $k$ fulfilling (REF), $\delta _k \ll_{\alpha } \frac{1}{q_k}$ and $q_k$ grows exponentially in $k$, we can extract an increasing subsequence $(k_j)_{j \in \mathbb {N}}$ such that the following holds for every $j \in \mathbb {N}$: (i) $\sup_{\varepsilon \in (-\delta ,\delta )} P_{q_{k_j}}(\alpha ,\varepsilon ) < c < 1$; (ii) $q_{k_j} \geqslant 2q_{k_{j-1}}$ for $j \geqslant 2$; (iii) $\delta _{k_j} < \frac{\delta }{4q_{k_{j-1}}}$. Now we define $N_{j} := \sum_{i =1}^j q_{k_i}$, $j \geqslant 1$. By Proposition REF, we obtain $P_{N_j}(\alpha ) = \prod_{i=1}^{j}P_{q_{k_i}}(\alpha ,\varepsilon _{k_i,0}(N_j)).$ Observe that $\vert \varepsilon _{k_i,0}(N_j) \vert \leqslant q_{k_i}\left(\sum_{\ell = 1}^{\infty } \delta _{k_{i+\ell }}\right) \leqslant \frac{\delta }{4}\left(\sum_{\ell = 0}^{\infty }\frac{1}{2^{\ell }}\right) = \frac{\delta }{2},$ so we obtain $P_{N_j}(\alpha ) = \prod_{i=1}^j P_{q_{k_i}}(\alpha ,\varepsilon _{k_i,0}(N_j)) < c^j,$ hence, letting $j \rightarrow \infty$, we obtain $\liminf_{N \rightarrow \infty } P_N(\alpha ) = 0$. The statement follows now immediately from (REF).
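To illustrate the flavour of this construction (and the all-coefficients-one Ostrowski expansion used later, which [4] employed for $[0;\overline{6}]$), the sketch below computes $P_{N_j}(\alpha )$ for $N_j = q_1 + \cdots + q_j$ and $\alpha = [0;\overline{6}]$; one expects the printed values to decay roughly geometrically. This is a numerical illustration only, not a substitute for the proof:

```python
import math

def cf_const(b, depth=80):
    x = 0.0
    for _ in range(depth):
        x = 1.0 / (b + x)
    return x

alpha = cf_const(6)                  # alpha = [0; 6, 6, 6, ...]
q = [1, 6]
while len(q) < 9:
    q.append(6 * q[-1] + q[-2])

N, P = 0, 1.0
for j in range(1, 8):
    Nj = sum(q[1:j + 1])             # N_j = q_1 + ... + q_j (all digits equal to 1)
    for r in range(N + 1, Nj + 1):   # extend the Sudler product incrementally
        P *= 2.0 * abs(math.sin(math.pi * r * alpha))
    N = Nj
    print(j, Nj, P)
```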
In order to apply Lemma REF, we need an analogue of the limit functions $G_r$ from Theorem A for badly approximable irrationals. The following proposition shows that for large $k$, we can substitute the complicated expression of $P_{q_k}$ by a more controllable function $H_k$, which is of similar shape to the representation of $G_{[k]}$ in [1].

Proposition 7. For $k \in \mathbb {N}$, we define $H_k(\alpha ,\varepsilon ) := 2\pi \vert \varepsilon + q_k\delta _k \vert \prod_{n=1}^{\lfloor q_k/2\rfloor } h_{n,k}(\varepsilon ),$ where $h_{n,k}(\varepsilon ) = h_{n,k}(\alpha ,\varepsilon ) := \Bigg \vert \Bigg (1 - q_k\delta _k\frac{\left\lbrace n\overleftarrow{\alpha }_k\right\rbrace - \frac{1}{2}}{n}\Bigg )^2 - \frac{\left(\varepsilon + \frac{q_k\delta _k}{2}\right)^2}{n^2}\Bigg \vert .$ For any badly approximable $\alpha$ and any compact interval $I$, we have $P_{q_k}(\alpha ,\varepsilon ) = H_k(\alpha ,\varepsilon )\left(1 + \mathcal {O}\left(q_k^{-2/3}\log ^{2/3}q_k\right)\right) + \mathcal {O}(q_k^{-2}), \quad \varepsilon \in I,$ with the implied constant depending only on $\alpha$ and $I$. In particular, we have $\lim_{k \rightarrow \infty } \vert P_{q_k}(\alpha ,x) - H_k(\alpha ,x) \vert = 0,$ with the convergence being locally uniform on $\mathbb {R}$.
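The approximation of Proposition REF can be observed numerically; the sketch below compares $P_{q_k}(\alpha ,\varepsilon )$ with $H_k(\alpha ,\varepsilon )$ for $\alpha = [0;\overline{6,5}]$ (illustration only; helper names are ours, and the discrepancy should shrink as $k$ grows):

```python
import math

a = [6, 5] * 12
def tail(seq):
    x = 0.0
    for ai in reversed(seq):
        x = 1.0 / (ai + x)
    return x

alpha = tail(a)
p, q = [0, 1], [1, a[0]]
for ak in a[1:]:
    p.append(ak * p[-1] + p[-2])
    q.append(ak * q[-1] + q[-2])

def P_pert(k, eps):
    # P_{q_k}(alpha, eps)
    qk = q[k]
    return math.prod(2.0 * abs(math.sin(math.pi * (r * alpha + (-1) ** k * eps / qk)))
                     for r in range(1, qk + 1))

def H(k, eps):
    qk, dk = q[k], abs(q[k] * alpha - p[k])
    back = tail(list(reversed(a[:k])))       # <-alpha_k
    out = 2.0 * math.pi * abs(eps + qk * dk)
    for n in range(1, qk // 2 + 1):
        frac = (n * back) % 1.0
        out *= abs((1.0 - qk * dk * (frac - 0.5) / n) ** 2
                   - (eps + qk * dk / 2.0) ** 2 / n ** 2)
    return out

for k in (4, 6):
    for eps in (-0.2, 0.0, 0.2):
        print(k, eps, P_pert(k, eps), H(k, eps))
```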
We follow the strategy of [1] and [22], which both treat the case where $\alpha$ is a quadratic irrational; thus, the details in the proof that are identical to the arguments applied there are omitted. We define $f(x) := \vert 2\sin (\pi x)\vert$ and $p_{n,k}(\varepsilon ) = p_{n,k}(\alpha ,\varepsilon ) := \left|\frac{f^2\Big (\frac{n}{q_k} - \big (\lbrace n\overleftarrow{\alpha }_k\rbrace - \frac{1}{2}\big )\delta _k\Big ) - f^2\big (\frac{2\varepsilon + q_k\delta _k}{2q_k}\big )}{f^2\big (\frac{n}{q_k}\big )}\right|.$ Using trigonometric identities and $\alpha = p_k/q_k + (-1)^k\delta _k/q_k$, we obtain $P_{q_k}(\alpha ,\varepsilon ) = f\Big (\delta _k + \tfrac{\varepsilon }{q_k}\Big )\,q_k \prod_{0 < n < q_k/2} p_{n,k}(\varepsilon ),$ with an additional factor $\frac{f\left(\frac{1}{2} - \frac{2\varepsilon + q_k\delta _k}{2q_k}\right)}{f(\frac{1}{2})} = 1 + \mathcal {O}(q_k^{-2})$ if $q_k$ is even. Let $\psi (t) := t^{2/3}\log ^{1/3} t$. Since $\overleftarrow{\alpha }_k$ has bounded partial quotients, discrepancy estimates on $\left\lbrace \lbrace n\overleftarrow{\alpha }_k\rbrace : 1 \leqslant n \leqslant N\right\rbrace$ for $N < q_k$, together with Koksma's inequality (for details, see e.g. [26]), lead to $\prod_{\psi (q_k) < n < \frac{q_k}{2}} p_{n,k}(\varepsilon ) = 1 + \mathcal {O}\Big (\frac{\log q_k}{\psi (q_k)}\Big ), \quad \prod_{n = \psi (q_k)+1}^{\lfloor q_k/2 \rfloor } h_{n,k}(\varepsilon ) = 1 + \mathcal {O}\Big (\frac{\log q_k}{\psi (q_k)}\Big ).$ Furthermore, we can prove (see [22]) that $\prod_{n=1}^{\psi (q_k)} p_{n,k}(\varepsilon ) = \Big (1 + \mathcal {O}\Big (\frac{\psi (q_k)^2}{q_k^2}\Big )\Big ) \prod_{n=1}^{\psi (q_k)} h_{n,k}(\varepsilon ) + \mathcal {O}(q_k^{-2}),$ and clearly, we have $f\Big (\delta _k + \frac{\varepsilon }{q_k}\Big )q_k = 2\pi \Big \vert \varepsilon + q_k\delta _k\Big \vert \Big (1 + \mathcal {O}\big (q_k^{-2}\big )\Big ) + \mathcal {O}(q_k^{-2}).$ Combining the previous estimates, we obtain the desired result.
Lemma 8. Let $\alpha = [0;a_1,a_2,\ldots ]$ be a badly approximable number, $a_K := \limsup_{n \rightarrow \infty } a_n \geqslant 2$ and $a_{\max } := \max_{n \in \mathbb {N}}a_n$. Let $0 \leqslant x < y \leqslant 1$, $T \in \mathbb {N}$, and let $g(\ell ,x,y) := \sum_{n =1}^{\ell } \frac{1}{2} - \left(\lbrace nx\rbrace \cdot {1}_{[\lfloor nx \rfloor = \lfloor ny \rfloor ]}\right), \qquad b(x) := \pi x\log x,$ $F(T,x,y) := \sum_{\ell = 1}^{T}\frac{g(\ell ,x,y)}{\ell (\ell +1)}, \qquad E(T,a) := \frac{1+\log T}{T}\left(\frac{a}{8 \log a} + 6\right) + \frac{\frac{a}{8} + \frac{23}{4}}{T}.$ Assume that there exist $\varepsilon ^{\prime } > 0$ and $T \in \mathbb {N}$ such that for infinitely many $k \in \mathbb {N}$ there are $0 \leqslant x_k < y_k \leqslant 1$ with $\overleftarrow{\alpha }_k \in [x_k,y_k]$ and $b\left(\frac{a_{k+1} + [0;\overline{a_K,1}] + x_k}{2\pi }\right) - F(T,x_k,y_k) - E(T,a_K) > \varepsilon ^{\prime }.$ Then we have $\liminf_{N \rightarrow \infty } P_N(\alpha ) = 0, \quad \limsup_{N \rightarrow \infty } \frac{P_N(\alpha )}{N} = \infty .$

In order to prove this result, we need a technical estimate on the Birkhoff sum $\sum_{n = 1}^{\ell } f(n\alpha )$, where $f(x) := 1/2 -\lbrace x\rbrace$.

Proposition 9. Let $\alpha = [0;a_1,\ldots ,a_k]$ be a rational number, $m \leqslant k$, $a := \max_{1 \leqslant i \leqslant m}a_i$ and let $\ell \in \mathbb {N}$ with $\ell < q_m$. Then we have $\left|\sum_{n=1}^{\ell }\frac{1}{2} -\left\lbrace n\alpha \right\rbrace \right| \leqslant \left(\frac{a}{8 \log a} + 6\right)\log \ell + \frac{a}{8} + \frac{23}{4}.$

This is essentially [20], which is an immediate corollary of [32]. Note that, in contrast to their setting, we deal with rational numbers; however, the proof remains the same as long as $\ell < q_m$.
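The quantities $g$, $F$, $E$ and $b$ are elementary to evaluate; the following sketch implements them directly and tests one grid cell of the verification carried out later in the proof. We use floating point arithmetic here, whereas a rigorous check would evaluate $F$ at exact rational endpoints; all helper names are ours:

```python
import math

def g(l, x, y):
    s = 0.0
    for n in range(1, l + 1):
        fl = math.floor(n * x)
        s += 0.5 - ((n * x - fl) if fl == math.floor(n * y) else 0.0)
    return s

def F(T, x, y):
    return sum(g(l, x, y) / (l * (l + 1)) for l in range(1, T + 1))

def E(T, a):
    return ((1 + math.log(T)) / T) * (a / (8 * math.log(a)) + 6) \
           + (a / 8 + 23 / 4) / T

def b(x):
    return math.pi * x * math.log(x)

def per_tail(pattern, depth=200):
    # evaluate [0; pattern, pattern, ...] by backward recursion
    digits = (pattern * (depth // len(pattern) + 1))[:depth]
    x = 0.0
    for ai in reversed(digits):
        x = 1.0 / (ai + x)
    return x

# one sample grid cell of the later check for 9 <= a_K <= 18:
T, xg = 200, 0.3
print(F(T, xg, xg + 1 / 2000) + E(T, 18)
      < b((9 + per_tail([18, 1]) + xg) / (2 * math.pi)))   # expected: True
```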
We adopt the notation introduced in Proposition REF. Observe that if $\vert \varepsilon \vert$ is sufficiently small, we can remove the absolute values in the definition of $h_{n,k}(\alpha , \varepsilon )$. Since $\alpha$ is badly approximable, we have $q_k\delta _k \gg_{\alpha } 1$, so we can also assume $\vert \varepsilon + q_k\delta _k \vert > 0$. This leads to $h_{n,k}(\alpha ,\varepsilon ) \leqslant 1 + 2q_k\delta _k \frac{\frac{1}{2}- \lbrace n \overleftarrow{\alpha }_k\rbrace }{n} + \frac{2\varepsilon }{n^2}$ for any $n,k \in \mathbb {N}$, provided that $\varepsilon$ is sufficiently small. By $\log (1+x) \leqslant x$ and using again $q_k\delta _k \gg_{\alpha } 1$, we obtain $H_k(\alpha ,\varepsilon ) \leqslant 2\pi (\varepsilon + q_k\delta _k)\cdot \exp \left(\sum_{n = 1}^{\lfloor q_k/2\rfloor }2q_k\delta _k \frac{\frac{1}{2}- \lbrace n \overleftarrow{\alpha }_k\rbrace }{n} + \frac{2\varepsilon }{n^2}\right) = \left(1 + \mathcal {O}_{\alpha }(\delta )\right)\cdot 2\pi q_k\delta _k\exp \left(2q_k\delta _k\sum_{n = 1}^{\lfloor q_k/2\rfloor } \frac{\frac{1}{2}- \lbrace n \overleftarrow{\alpha }_k\rbrace }{n}\right).$ Now let $T \in \mathbb {N}$ with $T < q_k/2$. Writing $S_{\ell }(\alpha ) := \sum_{n=1}^{\ell }\frac{1}{2} -\left\lbrace n\alpha \right\rbrace$, we use summation by parts to obtain
$\sum_{n=1}^{\lfloor q_k/2\rfloor }\frac{\frac{1}{2} -\left\lbrace n\overleftarrow{\alpha }_k\right\rbrace }{n} \leqslant \sum_{\ell =1}^{T}\frac{S_{\ell }(\overleftarrow{\alpha }_k)}{\ell (\ell +1)} + \frac{S_{\lfloor q_k/2\rfloor }(\overleftarrow{\alpha }_k)}{\lfloor q_k/2\rfloor } + \sum_{\ell =T+1}^{\lfloor \sqrt{q_k}\rfloor }\frac{\vert S_{\ell }(\overleftarrow{\alpha }_k)\vert }{\ell ^2} + \sum_{\ell =\lfloor \sqrt{q_k}\rfloor +1}^{\lfloor q_k/2\rfloor }\frac{\vert S_{\ell }(\overleftarrow{\alpha }_k)\vert }{\ell ^2}.$ We apply Proposition REF to the last three terms on the right-hand side of the previous inequality. Since $\limsup_{i \rightarrow \infty } a_i = a_K$, there exists some $K_0 \in \mathbb {N}$ such that $\max_{i \geqslant K_0} a_i = a_K$. If $k$ is sufficiently large, we have $\sqrt{q_k} < q_{k-K_0}(\overleftarrow{\alpha }_k)$ (this follows immediately from the fact that $q_k(\overleftarrow{\alpha }_k) = q_k(\alpha )$). Hence, we obtain $\sum_{\ell =T+1}^{\lfloor \sqrt{q_k}\rfloor }\frac{\vert S_{\ell }(\overleftarrow{\alpha }_k)\vert }{\ell ^2} \leqslant E(T,a_K),$ and another application of Proposition REF with $a = a_{\max }$ yields
$\frac{S_{\lfloor q_k/2\rfloor }(\overleftarrow{\alpha }_k)}{\lfloor q_k/2\rfloor } + \sum_{\ell =\lfloor \sqrt{q_k}\rfloor +1}^{\lfloor q_k/2\rfloor }\frac{\vert S_{\ell }(\overleftarrow{\alpha }_k)\vert }{\ell ^2} = \mathcal {O}_{\alpha }\left(\frac{\log q_k}{\sqrt{q_k}}\right).$ By a short case distinction argument, we see that $S_{\ell }(\overleftarrow{\alpha }_k) \leqslant g(\ell ,x_k,y_k)$ holds for all $\ell \in \mathbb {N}$, hence, combined with the previous estimates, we obtain $\sum_{n=1}^{\lfloor q_k/2 \rfloor }\frac{\frac{1}{2} -\left\lbrace n\overleftarrow{\alpha }_k\right\rbrace }{n} \leqslant F(T,x_k,y_k) + E(T,a_K) + \mathcal {O}_{\alpha }\left(\frac{\log q_k}{\sqrt{q_k}}\right), \quad k \rightarrow \infty .$ Taking logarithms in (REF) leads to $\frac{\log H_k(\alpha ,\varepsilon )}{2q_k\delta _k} \leqslant \mathcal {O}_{\alpha }(\delta ) + F(T,x_k,y_k) + E(T,a_K) + \mathcal {O}_{\alpha }\left(\frac{\log q_k}{\sqrt{q_k}}\right) - b\left(\frac{1}{2\pi q_k\delta _k}\right).$
Observe that for $k \geqslant K_0$, we have $\frac{1}{q_k\delta _k} = a_{k+1} + \overleftarrow{\alpha }_k + \vec{\alpha }_{k+2} \geqslant a_{k+1} + [0;\overline{a_K,1}] + x_k,$ where we used that $\vec{\alpha }_{k+2} \in \mathcal {Q}(a_K)$ and $\overleftarrow{\alpha }_k \in [x_k,y_k]$. Since $b$ is monotonically increasing on $[1/e,\infty )$ (and the arguments appearing here exceed 1), this implies $b\left(\frac{a_{k+1} + [0;\overline{a_K,1}] + x_k}{2\pi }\right) \leqslant b\left(\frac{1}{2\pi q_k\delta _k}\right).$ Using assumption (REF) and choosing $k$ sufficiently large and $\delta$ in (REF) sufficiently small, we deduce that for infinitely many $k$ and any $\varepsilon \in (-\delta ,\delta )$, we have $\log H_k(\alpha ,\varepsilon ) < -\varepsilon ^{\prime }/2$. Using Proposition REF, this shows that (REF) is fulfilled for $k$ large enough, and thus the statement follows from Lemma REF.

Now we are in a position to prove Theorem REF, and we start with a short outline of the proof. Looking closer at condition (REF) in Lemma REF, and assuming $T$ to be very large and $x_k,y_k$ very close together, it is morally equivalent to $F(T,x_k,y_k) < b\left(\frac{a_{k+1} + [0;\overline{a_K,1}] + x_k}{2\pi }\right) < \frac{a_{k+1}}{2} \log \left(\frac{a_{k+1}}{2\pi }\right)$ when $\overleftarrow{\alpha }_k \in [x_k,y_k]$. For large $a_K$, we can use the trivial estimate $g(\ell ,x,y) \leqslant \ell /2$ to show that (REF) holds for any $x \in [0,1]$, so we can concentrate on reasonably small $a_K$. We see in Figure REF that $F(T,x,y)$ is not too big when bounded away from 0 and from (other) rationals with small denominators.
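Figure REF can be reproduced with a few lines of code (illustration only; the display threshold below is arbitrary and chosen purely for readability):

```python
import math

def g(l, x, y):
    s = 0.0
    for n in range(1, l + 1):
        fl = math.floor(n * x)
        s += 0.5 - ((n * x - fl) if fl == math.floor(n * y) else 0.0)
    return s

vals = [(j / 100, sum(g(l, j / 100, j / 100 + 0.01) / (l * (l + 1))
                      for l in range(1, 101))) for j in range(100)]
x_max, v_max = max(vals, key=lambda t: t[1])
print("global maximum:", x_max, v_max)       # expected at x = 0
for x, v in vals:
    if v > 0.3:                              # display threshold, chosen arbitrarily
        print(round(x, 2), round(v, 3))      # peaks near rationals with small denominator
```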
If $\alpha$ is a badly approximable number, then $\overleftarrow{\alpha }_k \in \mathcal {B}(a,M)$ with $M$ large is bounded away from these rationals (see Proposition REF below). Covering $\mathcal {B}(a,M)$ with a finite number of small intervals $[x,y]$ that avoid being close to rationals with small denominators, we are left to prove that (REF) respectively (REF) holds on each such interval, which reduces the problem to checking finitely many cases. As we are bounded away from the values $x,y$ where $F(T,x,y)$ is large, we can morally think of $F(T,x,y) \leqslant 0$, so the question is essentially reduced to whether $a_{k+1} > 2\pi \approx 6.28$. A refinement of this argument is in fact enough to prove Theorem REF by considering those $k$ where $a_{k+1} = a_K \geqslant 8$, and it also works for most badly approximable numbers with $a_K = 7$. However, it might happen that $a_{k+1} = 7$ and $\overleftarrow{\alpha }_{k} \approx x_k$ is so small that $F(T,x,y)$ is significantly larger than 0, and our estimates are too coarse to show (REF) directly. Fortunately, this only happens when $a_{k} = 6$ and $\overleftarrow{\alpha }_{k-1}$ is so large that $F(T,x,y)$ for $x \approx \overleftarrow{\alpha }_{k-1}$ is significantly smaller than 0, enabling us to show (REF) for $k-1$ instead.
Figure: Values of $\sum_{\ell = 1}^{100}\frac{g(\ell ,x_j,y_j)}{\ell (\ell +1)}$ for $y_j = x_j + 1/100$. The global maximum appears at $x = 0$, with local peaks at rationals with small denominator.

Proposition 10. Let $a \geqslant 2$, $M \geqslant 100$, $1 \leqslant m \leqslant a+1$. Then $\mathcal {B}(a,M)$ is disjoint from the following sets: $\left[0,\frac{1}{a+1}\right], \quad \left[\frac{a+1}{a+2},1\right],$ $\left[\frac{1}{m} - \frac{1}{m^2(a+2)}, \frac{1}{m} + \frac{1}{m^2(a+2)}\right],$ $\left[\frac{2}{2m+1}- \frac{1}{(2m+1)^2(a+3)},\frac{2}{2m+1}+ \frac{1}{(2m+1)^2(a+3)}\right],$ $\left[\frac{3}{3m+1}- \frac{1}{(3m+1)^2(a+3)},\frac{3}{3m+1}+ \frac{1}{(3m+1)^2(a+3)}\right],$ $\left[\frac{3}{3m+2}- \frac{1}{(3m+2)^2(a+3)},\frac{3}{3m+2}+ \frac{1}{(3m+2)^2(a+3)}\right].$

Clearly, $\alpha \in \mathcal {B}(a,M)$ fulfills $\alpha > [0;a,1] = \frac{1}{a+1}$ and $\alpha < [0;1,a+1] = \frac{a+1}{a+2}$, hence $\mathcal {B}(a,M)$ is disjoint from the first two sets in (REF). Now let $\alpha \in \mathcal {B}(a,M) \cap [\frac{1}{m} - \frac{1}{m^2(a+2)},\frac{1}{m}]$; then $\alpha = [0;m,\alpha ^{\prime }]$ for some $\alpha ^{\prime } \in \mathcal {B}(a,M-1)$. By (REF) (which still holds for $M-1$), we have $\alpha ^{\prime } > \frac{1}{a+1}$, hence $\alpha < \frac{1}{m} - \frac{1}{m^2(a+1 + 1/m)} \leqslant \frac{1}{m} - \frac{1}{m^2(a+2)}$, a contradiction. Similarly, if $\alpha \in \mathcal {B}(a,M) \cap [\frac{1}{m},\frac{1}{m} + \frac{1}{m^2(a+2)}]$, then $\alpha = [0;m-1,\alpha ^{\prime }]$, and since again by (REF) $\alpha ^{\prime } < \frac{a+1}{a+2}$, we have $\alpha > \frac{1}{m} + \frac{1}{m^2(a+2)}$, again a contradiction. Now assume that $\alpha \in \mathcal {B}(a,M) \cap \left[\frac{2}{2m+1}- \frac{1}{(a+3)(2m+1)^2}, \frac{2}{2m+1} + \frac{1}{(a+3)(2m+1)^2}\right]$. Note that () is contained in $\left(\frac{1}{m+1},\frac{1}{m}\right)$, hence $\alpha = \frac{1}{m + \alpha ^{\prime }}$ with $\alpha ^{\prime } \in \mathcal {B}(a,M)$. If $\alpha ^{\prime } \geqslant \frac{1}{2}$, then using () with $m = 2$ implies $\alpha ^{\prime } > \frac{1}{2}+ \frac{1}{4(a+2)}$ and thus we have $\alpha < \frac{1}{m + \frac{1}{2} + \frac{1}{4(a+2)}} = \frac{2}{2m+1} - \frac{2}{(2(a+2)(2m+1)+1)(2m+1)} < \frac{2}{2m+1} - \frac{1}{(a+3)(2m+1)^2}.$ If $\alpha ^{\prime } < \frac{1}{2}$, then again by (), $\alpha ^{\prime } < \frac{1}{2} - \frac{1}{4(a+2)}$, and so $\alpha > \frac{1}{m + \frac{1}{2} - \frac{1}{4(a+2)}} = \frac{2}{2m+1} + \frac{2}{(2(a+2)(2m+1)-1)(2m+1)} > \frac{2}{2m+1} + \frac{1}{(a+3)(2m+1)^2}.$ The proofs of () and () work in precisely the same fashion as that of (), by applying () with $m = 3$ and $m = 2$, respectively.
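The exclusion intervals of Proposition REF can be sanity-checked numerically by sampling continued fractions whose partial quotients are bounded by $a$ with the supremum attained in the tail. The sampling scheme is ours and covers only the first two families of sets; it is an illustration, not part of the proof:

```python
import random

def cf(digits):
    x = 0.0
    for ai in reversed(digits):
        x = 1.0 / (ai + x)
    return x

a, M, trials = 6, 100, 5000
random.seed(0)
violations = 0
for _ in range(trials):
    digits = [random.randint(1, a) for _ in range(M + 60)]
    digits[M + random.randrange(60)] = a       # force the tail supremum to equal a
    x = cf(digits)
    if not (1 / (a + 1) < x < (a + 1) / (a + 2)):
        violations += 1
    for m in range(1, a + 2):
        if abs(x - 1 / m) <= 1 / (m * m * (a + 2)):
            violations += 1
        if abs(x - 2 / (2 * m + 1)) <= 1 / ((2 * m + 1) ** 2 * (a + 3)):
            violations += 1
print(violations)                              # expected output: 0
```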
Let $\alpha = [a_0;a_1,a_2,\ldots ]$ be a badly approximable irrational (the case where $\alpha$ has unbounded partial quotients was proven by Lubinsky in [28]) with $a_K = \limsup_{k \rightarrow \infty } a_k \geqslant 7$, and let $K_0 \in \mathbb {N}$ be such that $\alpha \in \mathcal {B}(a_K,K_0)$. If $\limsup_{k \rightarrow \infty }a_k = \liminf_{k \rightarrow \infty }a_k$, then there exists some $M \in \mathbb {N}$ such that $T^{M}(\alpha ) = [0;\overline{a_K}]$ (with $T$ denoting the Gauss map). It was shown in [4] that $\liminf_{N \rightarrow \infty } P_N([0;\overline{a_K}]) = 0$ and, arguing as in [20], this behaviour is invariant under the application of the Gauss map. Thus, we will assume from now on that $\liminf_{k \rightarrow \infty }a_k \leqslant a_K-1$. This implies that there exists an increasing sequence $(k_j)_{j \in \mathbb {N}}$ that fulfills the following for every $j \in \mathbb {N}$: (i) $k_j \geqslant K_0 +1000$; (ii) $a_{k_j+1} = a_K$; (iii) $a_{k_j} \leqslant a_K-1$. We distinguish cases for the value of $a_K$.

$a_K \geqslant 18$: In view of Lemma REF, it suffices to show (REF) for all $k_j$ and some $T \in \mathbb {N}$. Using the trivial bound $g(\ell ,x,y) \leqslant \ell /2$ and the monotonicity of $b$, we have $F(T,x,y) < \frac{\log (T+1)}{2}, \quad b\left(\frac{a_K + [0;\overline{a_K,1}] + x}{2\pi }\right) > \frac{a_{K}}{2} \log \left(\frac{a_{K}}{2\pi }\right),$ so we are left to prove that $\frac{\log (T+1)}{a_K} + \frac{2E(T,a_K)}{a_K} < \log \left(\frac{a_K}{2\pi }\right),$ with the strict margin then providing a suitable $\varepsilon ^{\prime } > 0$; this is fulfilled for $T = 50$ and $a_K = 18$. Since the left-hand side of (REF) is monotonically decreasing in $a_K$, whereas the right-hand side is increasing, the result follows immediately for all $a_K \geqslant 18$.

$8 \leqslant a_K \leqslant 18$: As in the case $a_K \geqslant 18$, we fix an arbitrary $k_j$, but instead of working with trivial bounds on $g(\ell ,x,y)$, we compute the sum explicitly on small grids for $[x,y]$. As $E(T,a_K)$ is monotonically increasing and $[0;\overline{a_K,1}]$ is monotonically decreasing in $a_K$, (REF) can be deduced from $F(T,x,y) + E(T,18) < b\left(\frac{9 + [0;\overline{18,1}]+x}{2\pi }\right)$ whenever $9 \leqslant a_K \leqslant 18$. We can prove with computational assistance that for $T = 200$ and $R = 2000$, we have $F\left(T,\frac{i}{R},\frac{i+1}{R}\right) + E(T,18) < b\left(\frac{9 + [0;\overline{18,1}]+\frac{i}{R}}{2\pi }\right)$ for all integers $i$ that satisfy $\left\lfloor \frac{R}{19} \right\rfloor \leqslant i \leqslant \left\lceil \frac{20 R}{21}\right\rceil$. By Proposition REF, $\overleftarrow{\alpha }_{k_j} \in \mathcal {B}(18,10) \subseteq [1/19,19/20]$. Hence, (REF) holds for any badly approximable $\alpha$ with $9 \leqslant a_K \leqslant 18$, and thus Lemma REF gives the result.
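The grid verification just described is straightforward to sketch in code. For illustration we run it on a much coarser grid than the values $T = 200$, $R = 2000$ used above; a rigorous check would also require exact rational arithmetic at the cell endpoints. The helpers $g$, $F$, $E$, $b$ and per_tail are as in the sketch after Lemma 8:

```python
import math

def g(l, x, y):
    s = 0.0
    for n in range(1, l + 1):
        fl = math.floor(n * x)
        s += 0.5 - ((n * x - fl) if fl == math.floor(n * y) else 0.0)
    return s

def F(T, x, y):
    return sum(g(l, x, y) / (l * (l + 1)) for l in range(1, T + 1))

def E(T, a):
    return ((1 + math.log(T)) / T) * (a / (8 * math.log(a)) + 6) \
           + (a / 8 + 23 / 4) / T

def b(x):
    return math.pi * x * math.log(x)

def per_tail(pattern, depth=200):
    digits = (pattern * (depth // len(pattern) + 1))[:depth]
    x = 0.0
    for ai in reversed(digits):
        x = 1.0 / (ai + x)
    return x

T, R = 200, 200                      # coarse stand-in for T = 200, R = 2000
c = per_tail([18, 1])
ok = all(F(T, i / R, (i + 1) / R) + E(T, 18)
         < b((9 + c + i / R) / (2 * math.pi))
         for i in range(R // 19, -(-20 * R // 21) + 1))
print(ok)                            # expected: True
```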
Analogously, we can prove for the same values of $T,R$ that $F\left(T,\frac{i}{R},\frac{i+1}{R}\right) + E(T,8) < b\left(\frac{8 + [0;\overline{8,1}]+\frac{i}{R}}{2\pi }\right)$ holds for all integers $i$ that satisfy $\left\lfloor \frac{R}{9} \right\rfloor \leqslant i \leqslant \left\lceil \frac{9R}{10}\right\rceil$, and we can conclude the case $a_K = 8$ exactly in the same way as for $9 \leqslant a_K \leqslant 18$.

$a_K = 7$: This case is by far the most intricate one. We cut out small windows close to rationals with small denominator, justified by Proposition REF, in the following way: we lay a grid of size $R$ over the unit interval and remove those intervals $\left[\frac{i}{R},\frac{i+1}{R}\right]$ that are completely contained inside one of the sets (REF)–() with $m = 1,\ldots ,6$. In that way, we can prove using $T = 4000$ and $R = 360000$ that (REF) holds whenever $a_{k_j+1} = 7, \quad \overleftarrow{\alpha }_{k_j} \geqslant [0;6,1,6,1,7,1,7] \approx 0.14549.$ This proves Theorem REF for badly approximable $\alpha$ with $a_K = 7$ such that $\overleftarrow{\alpha }_{k_j} \geqslant [0;6,1,6,1,7,1,7]$ holds infinitely often. If this is not the case, then using $a_{k_j} \leqslant 6$, this implies that $a_{k_j} = 6$, $a_{k_j-1} = 1$, $a_{k_j-2} = 7$ holds infinitely often. Therefore, we can deduce that $\overleftarrow{\alpha }_{k_j-1} \geqslant [0;1,7,7,1,7,1] > 0.876$.
For $T = 2000$ and $R = 50000$, we have that $F\left(T,\frac{i}{R},\frac{i+1}{R}\right) + E(T,7) < b\left(\frac{6 + [0;\overline{7,1}]+\frac{i}{R}}{2\pi }\right)$ holds for all integers $i$ that satisfy $\left\lfloor 0.87\cdot R \right\rfloor \leqslant i \leqslant \left\lceil \frac{9R}{10}\right\rceil$. Equation (REF) now follows by the same argumentation as above.

Proof of Theorem 

From now on, we will only deal with quadratic irrationals of the form $\alpha = [0;a_1,\ldots ,a_p,\overline{a_{p+1},\ldots ,a_{\ell }}]$ with $a_K = \max_{p+1 \leqslant i \leqslant p+\ell } a_i$. By invariance under application of the Gauss map (which will follow naturally from Lemma REF and the method of proof), we can assume that $\alpha \in \mathcal {Q}(a_K)$. As the statement of Theorem REF follows for $a_K \geqslant 7$ from Theorem REF, we can fix $a_K = 6$, and thus only have to check the five quadratic irrationals $\alpha = [0;\overline{6,a}]$ with $a \in \lbrace 1,\ldots ,5\rbrace$. For each fixed quadratic irrational, the limit functions $G_r(\alpha ,\cdot )$, $r \in \lbrace 0,1\rbrace$, from Theorem A can be computed explicitly, up to an arbitrarily small error (see Lemma REF below). If $\alpha = [0;\overline{6,a}]$ with $a \in \lbrace 1,\ldots ,4\rbrace$, we will show that $G_{0}(\alpha ,0) < 1$, and thus the statement follows immediately from Theorem B. If $\alpha = [0;\overline{6,5}]$, Theorem B fails since $\min \lbrace G_{0}(\alpha ,0),G_{1}(\alpha ,0)\rbrace > 1$. However, we adapt the proof strategy of Aistleitner and Technau [4] that was used to show that $\alpha = [0;\overline{6}]$ fulfills (REF): instead of choosing an Ostrowski expansion with only very few non-zero coefficients, we consider an Ostrowski expansion with all coefficients being 1. For $q_{k} \leqslant N < q_{k+1}$, this leads to almost all negative perturbations fulfilling $\varepsilon _i(N) \approx -0.025$, $1 \leqslant i \leqslant k$, and so $P_{q_{i+1}}(\alpha ,\varepsilon _{i+1}(N)) \cdot P_{q_{i}}(\alpha ,\varepsilon _i(N)) \lessapprox 0.999 < 1$ holds for most such $i$. Using the decomposition (REF), we will deduce the result.

To formalize this, we start with the ingredients needed to explicitly compute the value of $G_r(\alpha ,\varepsilon )$ for fixed $\alpha ,\varepsilon$. We use a representation of the limit functions from Theorem A that is due to Aistleitner and Borda [1], which is much easier to control than the one used in [20] (although, by the uniqueness of limits, the functions obviously coincide).

Lemma 11 (see [1], Theorem 4). Let $\alpha = [0;a_1,\ldots ,a_p,\overline{a_{p+1},\ldots ,a_{\ell }}]$ and let $\alpha _{\tau _r}$ and $C(r)$ be as in (REF) and (REF). Further, define $G_{r}(\alpha ,\varepsilon ) := 2\pi \vert \varepsilon + C(r)\vert \prod_{n=1}^{\infty }\left|g_{n,r}(\alpha _{\tau _r},\varepsilon ) \right|,$ where $g_{n,r}(\alpha _{\tau _r},\varepsilon ) := \left(1 - C(r)\frac{\left\lbrace n\alpha _{\tau _r}\right\rbrace - \frac{1}{2}}{n}\right)^2 - \frac{\left(\varepsilon + \frac{C(r)}{2}\right)^2}{n^2}.$ Then we have $\lim_{m \rightarrow \infty }P_{q_{m\ell + r+p}}(\alpha ,\varepsilon ) = G_{r}(\alpha ,\varepsilon ).$
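Since the convergence in Lemma REF is locally uniform, $G_r(\alpha ,0)$ can be approximated by computing $P_{q_k}(\alpha ,0)$ for moderately large $k$ of the corresponding residue class. The sketch below does this for $[0;\overline{6,4}]$ and $[0;\overline{6,5}]$ along even $k$; in line with the claims above, one expects values below respectively above 1 (numerical illustration only, with float precision assumed adequate):

```python
import math

def per(pattern, k):
    # first k partial quotients of [0; pattern periodic]
    return (pattern * (k // len(pattern) + 1))[:k]

def tail(pattern, depth=80):
    x = 0.0
    for ai in reversed(per(pattern, depth)):
        x = 1.0 / (ai + x)
    return x

def P(alpha, N):
    out = 1.0
    for r in range(1, N + 1):
        out *= 2.0 * abs(math.sin(math.pi * r * alpha))
    return out

for pattern in ([6, 4], [6, 5]):
    alpha = tail(pattern)
    a = per(pattern, 12)
    q = [1, a[0]]
    for ak in a[1:]:
        q.append(ak * q[-1] + q[-2])
    for k in (6, 8):                 # even k: P_{q_k}(alpha, 0) approximates G_0(alpha, 0)
        print(pattern, k, P(alpha, q[k]))
```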
Remark 1. In the case of quadratic irrationals, one can show that the functions $H_k$ defined in Proposition REF converge along the subsequences $(m \ell + r)_{m \in \mathbb {N}}$, $0 \leqslant r \leqslant \ell -1$, locally uniformly to $G_r$. The convergence rate in (REF) is also the same as the one obtained in Proposition REF.

Lemma 12. Let $[a,b] \subseteq I$, where $I$ is a zero-free interval of $G_r(\alpha ,\cdot )$. Then we have $\min \left\lbrace G_r(\alpha ,a),G_r(\alpha ,b)\right\rbrace \leqslant G_r(\alpha ,x) \quad \text{for all } x \in [a,b].$ If $[a,b]$ does not contain the maximizer (which is unique in $I$) of $G_r(\alpha ,\cdot )$, then we also have $G_r(\alpha ,x) \leqslant \max \left\lbrace G_r(\alpha ,a),G_r(\alpha ,b)\right\rbrace \quad \text{for all } x \in [a,b].$ Furthermore, let $T \in \mathbb {N}$, $0 \leqslant r \leqslant \ell -1$ and let $I \subseteq (-1,1)$ be such that $G_r(\alpha ,\varepsilon ) > 0$ for all $\varepsilon \in I$. Let $G_{r,T}(\alpha ,\varepsilon ) := 2\pi \vert \varepsilon + C(r)\vert \prod_{n=1}^{T}\left|g_{n,r}(\overleftarrow{\alpha }_r,\varepsilon )\right|.$ Then we have, for $T$ sufficiently large, $\left(1 - 2C(r)E(T,a_K) - \frac{5}{T}\right)G_{r,T}(\alpha ,\varepsilon ) \leqslant G_{r}(\alpha ,\varepsilon ) \leqslant G_{r,T}(\alpha ,\varepsilon )\left(1 - 3C(r)E(T,a_K)\right)^{-1}.$

The first part follows directly from the fact that $G_{r}(\alpha ,\varepsilon )$ is log-concave on zero-free intervals, which can be proven in precisely the same way as it was done in [4] for $\alpha$ being the golden ratio. For the second inequality in (REF), we follow the strategy of [22]. Clearly, it suffices to show that $\prod_{n=T+1}^{\infty }\left|g_{n,r}(\overleftarrow{\alpha }_r,\varepsilon )\right| \leqslant 1 + 3C(r)E(T,a_K).$
Observe that $g_{n,r}(\overleftarrow{\alpha }_r,\varepsilon ) \geqslant 1 - 2/n - 4/n^2$, hence we can remove the absolute values around $g_{n,r}$ in the definition of $G_{r}$ for $n \geqslant 4$. Furthermore, this implies $\prod_{n=T+1}^{\infty }g_{n,r}(\overleftarrow{\alpha }_r,\varepsilon ) \leqslant \prod_{n=T+1}^{\infty }\left(1 - C(r)\frac{\left\lbrace n\overleftarrow{\alpha }_r\right\rbrace - \frac{1}{2}}{n}\right)^2.$ By arguments as in Lemma REF, we obtain $\left|\sum_{n=T+1}^{\infty }C(r)\frac{\left\lbrace n\overleftarrow{\alpha }_r\right\rbrace - \frac{1}{2}}{n}\right| \leqslant C(r)E(T,a_K),$ so, using $1 - x \leqslant \exp (-x)$, we can deduce $\prod_{n=T+1}^{\infty }g_{n,r}(\overleftarrow{\alpha }_r,\varepsilon ) \leqslant \exp \left(\sum_{n=T+1}^{\infty } -2C(r)\frac{\left\lbrace n\overleftarrow{\alpha }_r\right\rbrace - \frac{1}{2}}{n}\right) \leqslant \exp \left(2C(r)E(T,a_K)\right).$
2C(r)\\frac{\\left\\lbrace n\\mathchoice{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\displaystyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\displaystyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\textstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\textstyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptscriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptscriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}_r\\right\\rbrace - \\frac{1}{2}}{n}\\right)\\leqslant \\exp \\left(\\sum _{n=T+1}^{\\infty }- 2C(r)\\frac{\\left\\lbrace n\\mathchoice{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\displaystyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\displaystyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\textstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\textstyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptscriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptscriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}_r\\right\\rbrace - \\frac{1}{2}}{n}\\right)\\\\&\\leqslant \\exp \\left(2C(r)E(T,a_K)\\right).\\end{split}$ For $T$ sufficiently large, $2C(r)\\cdot E(T,a_K)$ is bounded from above by $1/2$ , and for $x < \\frac{1}{2},$ we have the inequality $\\exp (x) \\leqslant 1 + \\frac{3}{2}x$ , which leads to (REF ).", "Concerning the first inequality in (REF ), we use that $\\prod _{n = N}^M \\left(1 + a_n\\right) \\geqslant 1 - \\left|\\sum _{n = N}^M a_n \\right|- \\frac{1}{N-1}$ holds for any $N, M \\in \\mathbb {N}$ , $\\vert a_n \\vert \\leqslant \\min \\lbrace \\frac{1}{2}, 1/n\\rbrace $ , a fact that can be proven by simple estimates on the Taylor series of $\\log $ and $\\exp $ (see [22]).", "This implies for $T$ sufficiently large that $\\begin{split}\\prod _{n = T+1}^{\\infty } g_{n,r}(\\mathchoice{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\displaystyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\displaystyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\textstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\textstyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptscriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptscriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}_r,\\varepsilon ) \\geqslant 1 &- 2C(r)\\left|\\sum _{n = T+1}^{\\infty } \\left(\\frac{\\lbrace n \\mathchoice{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\displaystyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\displaystyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\textstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\textstyle \\alpha 
\\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptscriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptscriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}_r\\rbrace - \\frac{1}{2}}{n}\\right)\\right|\\\\&- \\sum _{n = T+1}^{\\infty }\\frac{C(r)^2\\left(\\lbrace n \\mathchoice{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\displaystyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\displaystyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\textstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\textstyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptscriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptscriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}_r\\rbrace - \\frac{1}{2}\\right)^2 + \\left(\\varepsilon + C(r)/2\\right)^2}{n^2}- \\frac{1}{T}.\\end{split}$ Applying (REF ) and $\\vert C(r)\\vert , \\vert \\varepsilon \\vert < 1$ , we obtain $\\prod _{n = T+1}^{\\infty } g_{n,r}(\\mathchoice{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\displaystyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\displaystyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.66656pt}\\scalebox {-1}[1]{\\m@th \\textstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.66656pt}\\m@th \\textstyle \\alpha \\hspace{1.66656pt}}}}\\hspace{-1.66656pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}{\\hspace{1.111pt}\\scalebox {-1}[1]{\\m@th \\scriptscriptstyle \\vec{\\scalebox {-1}[1]{\\hspace{-1.111pt}\\m@th \\scriptscriptstyle \\alpha \\hspace{1.111pt}}}}\\hspace{-1.111pt}}_r,\\varepsilon )\\geqslant 1 - 2C(r)E(T,a_K) - \\frac{5}{T}.$ Remark 2 Using Lemma REF , we can now compute lower respectively upper bounds for $G_r(\\alpha ,\\varepsilon )$ that approximate the exact value arbitrarily well, by taking $T$ large and small intervals $[a,b] \\in \\varepsilon $ such that $a,b$ can be represented in a computer program symbolically (e.g.", "rational numbers).", "In the rest of the paper, whenever we compute a value $G_r(\\alpha ,\\varepsilon )$ , we will always implicitly apply this procedure, without stating the exact intervals and the size of $T$ .", "Note that with this approach, we can also prove the following: if $\\alpha = [0;\\overline{1,a_2}]$ , then $G_1(\\alpha ,0) < 1$ if and only if $a_2 \\geqslant 4$ , and if $\\alpha = [0;\\overline{2,a_2}]$ , then $G_1(\\alpha ,0) < 1$ if and only if $a_2 \\geqslant 5$ , statements which have been conjectured in [20].", "In order to prove $\\liminf _{N \\rightarrow \\infty } P_N\\left([0;\\overline{6,5}]\\right) = 0$ , we need to study the behaviour of $P_{q_k}(\\alpha ,\\varepsilon )$ when $\\varepsilon $ is bounded away from 0.", "To do so, we need to define some more notations.", "We call a tuple $(\\beta _1,\\ldots ,\\beta _j)$ admissible with respect to $r$ and $\\alpha $ (where $0 \\leqslant r \\leqslant \\ell -1$ ) if there exists some $N \\in \\mathbb {N}, i = [r]$ such that if $N = 
\\sum _{j =1}^n b_j(N)q_j(\\alpha )$ is the Ostrowski expansion of $N$ , we have $(b_{i-1},b_{i},\\ldots ,b_{i+j-1}) = (\\beta _1,\\beta _2,\\ldots ,\\beta _j)$ .", "Similarly, we define a sequence $(\\beta _j)_{j \\in \\mathbb {N}}$ to be admissible if for any $J \\in \\mathbb {N}$ , $(\\beta _j)_{1 \\leqslant j \\leqslant J}$ is admissible with respect to $r = 0$ and $\\alpha $ .", "The purpose of these definitions is that the admissibility of a sequence (respectively, a tuple) encodes the rules on the digits of the Ostrowski expansion.", "For example, we call a tuple $(a,b,c) \\in \\mathbb {N}^3$ admissible with respect to $\\alpha = [0;\\overline{6,5}]$ and 1 if $a \\le 6, b \\le 5, c \\le 6$ , if $b = 5$ then $a = 0$ , and if $c = 6$ then $b = 0$ .", "For convenience, we will drop the dependence of the admissibility of $(\\beta _1,\\ldots ,\\beta _j)$ on $r$ and $\\alpha $ , whenever $\\alpha ,r$ are implicitly defined by the context.", "For $(\\beta _j)_{j \\in \\mathbb {N}}$ being an admissible sequence, we define for $0 \\leqslant r \\leqslant \\ell -1, [i] = r$ and $1 \\leqslant k \\leqslant \\beta _i$ , $\\varepsilon _{r,k}^{\\prime }((\\beta _{i+j})_{j \\in \\mathbb {N}}):= kC(r)+ \\sum _{j =1}^{\\infty }(-1)^j \\beta _{r+j}C([r+j])\\cdot \\prod _{n = 1}^{j}\\alpha _{\\sigma _{[r+n]}},$ whenever the sum converges.", "The motivation behind this definition is the following: suppose that $N_n$ is the sequence of integers along which we hope to prove that $\\lim _{n \\rightarrow \\infty } P_{N_n}(\\alpha ) = 0$ , and assume that $N_n= \\sum _{m =1}^n \\beta _mq_m$ with $(\\beta _j)_{j \\in \\mathbb {N}}$ being an admissible sequence.", "If $(\\beta _j)_{j \\in \\mathbb {N}}$ has a specific periodic structure (say, for convenience, with the same period $\\ell $ as the continued fraction expansion of $\\alpha $ ), then we can explicitly compute the actual value of $\\varepsilon _{r,k}^{\\prime }((\\beta _{i+j})_{j \\in \\mathbb {N}})$ , which no longer depends on $i$ , but only on $[i]=r$ .", "The following proposition tells us that we can replace the perturbation value of $\\varepsilon _{i,k}(N_n)$ by $\\varepsilon _{[i],k}^{\\prime }$ and thus, $P_{q_i}(\\varepsilon _{i,k}(N_n)) \\approx P_{q_i}(\\varepsilon _{[i],k}^{\\prime })\\approx G_{[i]}(\\varepsilon _{[i],k}^{\\prime })$ , with the value of the latter term being explicitly computable, so we only need to compute $\\ell $ function evaluations, regardless of $n$ .", "Proposition 13 Let $(\\beta _j)_{j \\in \\mathbb {N}}$ be an admissible sequence and let $(N_n)_{n \\in \\mathbb {N}}$ with $N_n := \\sum _{m =1}^n \\beta _mq_m$ be the associated sequence.", "For every $\\delta > 0$ , there exist $I_0,J_0$ such that for all $n \\geqslant J_0$ , $\\sup _{I_0 < i < n - J_0} \\left|\\varepsilon _{i,k}(N_n) - \\varepsilon ^{\\prime }_{[i],k}\\left((\\beta _{i+j})_{j \\in \\mathbb {N}}\\right) \\right|< \\delta .$ If $i \\leqslant n - J_0$ , we have $\\begin{split}\\vert \\varepsilon _{i,k}(N_n)- \\varepsilon ^{\\prime }_{r,k}((\\beta _{i+j})_{j \\in \\mathbb {N}})\\vert &\\leqslant \\sum _{j = 1}^{J_0}\\beta _{i+j}\\left|(-1)^iq_i\\delta _{i+j} - (-1)^jC([r+j])\\prod _{n=1}^j \\alpha _{\\sigma _{[r+n]}}\\right|\\\\&+\\sum _{j = J_0 + 1}^{\\infty } \\beta _{i+j}\\left(q_i \\delta _{i+j} +C([r+j])\\prod _{n=1}^j \\alpha _{\\sigma _{[r+n]}}\\right).\\end{split}$ By (REF ), we have $\\frac{q_i}{q_{i+j}} \\ll c^j$ for some $c < 1$ and by (), $q_{i+j}\\delta _{i+j} \\leqslant 1$ .", "Similarly, we have $C([j+i])\\prod _{n=1}^j \\alpha 
_{\\sigma _{[r+n]}}\\ll \\tilde{c}^j$ holds for some $0 < \\tilde{c} < 1$ , and since $\\alpha $ is badly approximable, $b_{i+j} \\ll 1$ , where all implied constants only depend on $\\alpha $ .", "Thus, we can follow that $\\sum _{j = J_0 + 1}^{\\infty } \\beta _{i+j}\\left(q_i \\delta _{i+j} +C([r+j])\\prod _{n=1}^j \\alpha _{\\sigma _{[r+n]}}\\right)\\ll \\sum _{j = J_0+1}^{\\infty } \\left(c^j + \\tilde{c}^j\\right).$ Choosing $J_0(\\delta )$ sufficiently large, we see that the second sum in (REF ) is bounded from above by $\\delta /2$ .", "Applying (REF ), we have for every $j \\geqslant 0$ that $\\begin{split}\\lim _{m \\rightarrow \\infty } (-1)^{i}q_{m\\ell + i} \\delta _{m\\ell + i+j}= (-1)^j C([j+i])\\prod _{n=1}^j \\alpha _{\\sigma _{[r+n]}}.\\end{split}$ So if $i \\geqslant I_0$ with $I_0 = I_0(J_0)$ sufficiently large, also the first sum in (REF ) is bounded from above by $\\delta /2$ , which concludes the proof.", "By Theorem REF and (REF ), it suffices to show that $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ holds for those $\\alpha $ where $T^p(\\alpha ) = [0;\\overline{6,a}]$ for some $p \\in \\mathbb {N}$ and $a \\in \\lbrace 1,2,3,4,5\\rbrace $ .", "If $a \\leqslant 4$ , then a direct computation of $G_0(T^p(\\alpha ),\\cdot )$ shows $G_0(T^p(\\alpha ),0) < 1$ (see Figure REF ) and thus, the statement follows from Theorem B since this criterion is invariant under the application of the Gauss map.", "[subfigure]labelformat=empty Figure: Limit function G 0 (α,ε)G_0(\\alpha ,\\varepsilon ) for α=[0;6,a ¯]\\alpha = [0;\\overline{6,a}] with a∈{1,2,3,4,5}a \\in \\lbrace 1,2,3,4,5\\rbrace .", "It is visible that for a=2a = 2, we have G 0 (α,0)<1G_0(\\alpha ,0) < 1.", "This inequality is also valid for a=3,4a = 3,4, but the numerical value of G 0 (α,0)G_0(\\alpha ,0) gets closer to 1 as aa increases.", "For a=5a = 5, we obtain G 0 (α,0)>1G_0(\\alpha ,0) > 1.For the rest of the proof, we set $\\alpha = [0;\\overline{6,5}]$ .", "Now let $N_n = \\sum _{i=0}^{2n} q_i(\\alpha ),$ that is we consider only natural numbers with all Ostrowski coefficients being 1.", "Let $(\\beta _j)_{j \\in \\mathbb {N}}$ with $\\beta _j = 1$ for all $j \\in \\mathbb {N}$ (which is clearly an admissible sequence), and let $\\delta > 0$ arbitrary.", "By Proposition REF , we have that for any $\\delta $ , there exist $I_0, J_0$ such that $\\sup _{I_0 < i < 2n - J_0} \\left|\\varepsilon _{i,k}(N_n) - \\varepsilon ^{\\prime }_{[i],k}\\left((\\beta _{j})_{j \\geqslant 1}\\right) \\right|< \\delta ,$ provided that $n$ is sufficiently large.", "Using the notation $\\alpha _{\\pi } = {\\alpha }_{\\sigma _1}\\cdot {\\alpha }_{\\sigma _2}$ , we have $\\begin{split}\\varepsilon _{1,0}^{\\prime }\\left((\\beta _j)_{j \\in \\mathbb {N}}\\right) &= - C(0)\\frac{{\\alpha }_{\\sigma _2}}{1 - \\alpha _{\\pi }} + C(1)\\frac{\\alpha _{\\pi }}{1 - \\alpha _{\\pi }} \\approx -0.025499,\\\\\\varepsilon _{0,0}^{\\prime }\\left((\\beta _j)_{j \\in \\mathbb {N}}\\right) &= - C(1)\\frac{{\\alpha }_{\\sigma _1}}{1 - \\alpha _{\\pi }} + C(0)\\frac{\\alpha _{\\pi }}{1 - \\alpha _{\\pi }} \\approx -0.0266289,\\end{split}$ which follows from elementary geometric series identities.", "Using (REF ), we can therefore prove with computational assistance that $G_0\\left(\\varepsilon _{2i,0}(N_n)\\right)\\cdot G_1\\left(\\varepsilon _{2i-1,0}(N_n)\\right) < 0.997,$ provided that $J_0 \\leqslant 2i \\leqslant 2n - I_0$ .", "By Theorem A, we see that for $i \\geqslant I_0(\\delta )$ , $\\left|P_{q_i}\\left(\\alpha ,\\varepsilon 
_{i,0}(N_n)\\right) - G_{[i]}\\left(\\alpha ,\\varepsilon _{i,0}(N_n)\\right)\\right|< \\delta .$ Choosing $\\delta $ sufficiently small, we thus obtain for $I_0(\\delta ) < 2i < 2n- J_0(\\delta )$ (without loss of generality, $I_0,J_0$ are even) that $P_{q_{2i}}\\left(\\alpha ,\\varepsilon _{2i,0}(N_n)\\right)\\cdot P_{q_{2i-1}}\\left(\\alpha ,\\varepsilon _{2i+1,0}(N_n)\\right)< 0.999.$ Using the decomposition (REF ) from Proposition REF , this implies $\\begin{split}P_N(\\alpha ) < &\\prod _{i =1}^{I_0/2} P_{q_{2i}}\\left(\\alpha ,\\varepsilon _{i,0}(N_n)\\right)P_{q_{2i-1}}\\left(\\alpha ,\\varepsilon _{i+1,0}(N_n)\\right)\\times \\\\&\\prod _{i = n - J_0/2}^{n} P_{q_{2i}}\\left(\\alpha ,\\varepsilon _{i,0}(N_n)\\right)P_{q_{2i-1}}\\left(\\alpha ,\\varepsilon _{i+1,0}(N_n)\\right)\\cdot 0.999^{n-I_0/2-J_0/2}.\\end{split}$ Since $I_0,J_0$ are finite and by Lemma REF , all shifted products are uniformly bounded.", "letting $n \\rightarrow \\infty $ shows that $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) = 0$ .", "We are left to show that this argumentation also applies for any $\\beta \\in T^{-p}(\\alpha )$ , $p \\in \\mathbb {N}$ .", "By Lemma REF , we have that $\\lim _{i \\rightarrow \\infty } P_{q_{i}}(T^p(\\alpha ),\\varepsilon ) - P_{q_{i+p}}(\\alpha ,\\varepsilon ) = 0$ and a variation of Proposition REF with indices shifted by $p$ is still valid, so choosing $(\\beta _j)_{j \\in \\mathbb {N}} = (1,1,1,\\ldots )$ shows $\\liminf _{N \\rightarrow \\infty } P_N(\\beta ) = 0$ by the same argumentation as above.", "Proof of Theorem REF The proof of Theorem REF follows some ideas of [4], although we need to incorporate the additional difficulty of period lengths $\\ell > 1$ .", "We start with the easier case where $\\alpha = [0;\\overline{5,4}]$ first, and prove the statement for $\\alpha = [0;\\overline{6,5,5}]$ later with similar ideas, only stating the needed refinements.", "We say that a function $\\Pi _r: \\mathbb {N}^j \\rightarrow \\mathbb {R}, 0 \\leqslant r \\leqslant \\ell $ is a lower-approximation function for $\\alpha $ and $r$ if for any admissible sequence $(\\beta _{j})_{j \\in \\mathbb {N}}$ and any $i \\in \\mathbb {N}$ that fulfills $b_{i-1} = \\beta _1, b_i = \\beta _2, b_{i+j-1} = \\beta _j, [i] = r$ , we have $\\Pi _r(\\beta _1,\\ldots ,\\beta _j) \\leqslant \\prod _{k=1}^{b-1}G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}((b_{i+1+j})_{j \\in \\mathbb {N}})\\right)\\cdot \\left(G_{[r-1]}\\left(\\varepsilon ^{\\prime }_{r-1,k}((b_{i+j})_{j \\in \\mathbb {N}}))\\right)\\right)^{{1}_{[b_{i-1} \\ne 0]}},$ where $\\varepsilon ^{\\prime }_{r,k}((\\beta _{j})_{j \\in \\mathbb {N}})$ is defined as in (REF ).", "The following lemma shows us how we prove the theorem under the assumption of having good lower-approximation functions $\\Pi _{[i]}$ whose evaluation exceeds 1 for most $i$ .", "These functions will be explicitly constructed in Lemmas REF respectively REF by estimates on the perturbation value.", "Period length $\\ell =2$ Lemma 14 Let $\\alpha $ be a quadratic irrational with period $\\ell = 2$ and $\\Pi _0,\\Pi _1: \\mathbb {N}^4 \\rightarrow \\mathbb {R}$ be lower-approximation functions for $\\alpha $ that fulfill $\\Pi _1(a,b,c,d) \\cdot \\Pi _0(b,c,d,e) >1.01$ for any admissible $(a,b,c,d,e)$ (with respect to $\\alpha $ and 0) where $(a,b,c) \\ne (0,0,0)$ .", "Then we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0, \\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} < \\infty .$ Recall from Proposition REF that we can write 
$P_N(\\alpha ) = P_{q_n}(\\alpha ) \\cdot \\prod _{i = 1}^n K_i(N)$ with $K_i(N) = \\prod _{i=0}^{n}\\prod _{c_i= 1}^{b_i-1} P_{q_i}\\left(\\alpha ,\\varepsilon _{i,c_i}(N)\\right)\\cdot P_{q_{i-1}}\\left(\\alpha ,\\varepsilon _{i-1,0}(N)\\right)^{{1}_{[b_{i-1} \\ne 0]}}.$ Let $\\delta >0$ be a fixed small constant.", "Using the uniform continuity of $P_{q_i}$ from Theorem A and the assumption that $\\Pi _i$ is a lower approximation function, we have $K_i(N) \\geqslant \\Pi _{[i]}\\left(b_{i-1}(N),b_i(N),b_{i+1}(N),b_{i+2}(N)\\right) - \\delta $ for all $i \\geqslant I_0 = I_0(\\delta )$ .", "If $b_{i-1} = b_i = 0$ , all considered products are empty, hence $K_i(N) = \\Pi _{[i]}\\left(b_{i-1}(N),b_i(N),b_{i+1}(N),b_{i+2}(N)\\right) = 1.$ Now let $[i] = 1$ and set $a = b_{i-1}(N), b = b_i(N), c = b_{i+1}(N), d = b_{i+2}(N)$ .", "Combining (REF ) and (REF ), we have $K_{i}(N)\\cdot K_{i+1}(N)\\geqslant \\left(\\Pi _{[i]}(a,b,c,d) - {1}_{[(a,b) \\ne (0,0)]}\\delta \\right)\\cdot \\left(\\Pi _{[i+1]}(b,c,d,e) - {1}_{[(b,c)\\ne (0,0)]}\\delta \\right).$ If $(a,b,c) \\ne (0,0,0)$ , then combining (REF ) with the previous inequality yields $K_{i}(N)\\cdot K_{i+1}(N) \\geqslant 1,$ provided that $i \\geqslant I_0$ , and $\\delta $ is chosen sufficiently small.", "Clearly, (REF ) shows that the previous inequality also holds in case $(a,b,c) = (0,0,0)$ .", "Combining (REF ) and (REF ), we obtain $P_{N}(\\alpha ) \\geqslant P_{q_n}(\\alpha )\\cdot \\prod _{i = 1}^{I_0} K_{i}(N).$ By Lemma REF , each of those finitely many factors is uniformly bounded away from 0, so clearly, we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > C(I_0,\\alpha ) > 0$ and by (REF ), the result follows.", "So we are left to show that the assumption of Lemma REF is fulfilled when choosing $\\alpha = [0;\\overline{5,4}]$ , which will be shown in the following statements.", "Lemma 15 Let $\\alpha = [0;\\overline{5,4}]$ , $(a,b) \\ne (0,0)$ and let $(a,b,c,d)$ be admissible values with respect to 0 respectively 1.", "Then there exist lower-approximation functions $\\Pi _r$ with respect to $\\alpha $ and $r = 0,1$ with the following properties: $\\Pi _0(a,b,c,d) > 1.01$ for all $(a,b) \\ne (0,0)$ .", "If $a \\ne 0$ , then $\\Pi _0(a,b,c,d) > 1.22$ .", "If $a = 0, b \\ne 0$ , then $\\Pi _1(a,b,c,d) > 1.01$ .", "If $a \\ne 0$ and $b \\ne 1$ , we have $\\Pi _1(a,b,c,d) > 1.01$ .", "If $a \\ne 0, b =1$ , we have $\\Pi _1(a,b,c,d) > 0.84$ .", "If $(a,b) = (0,0)$ , then $\\Pi _0(a,b,c,d)= \\Pi _1(a,b,c,d) = 1$ .", "Assuming for the moment Lemma REF to hold, we can prove the following corollary.", "Corollary 16 Let $\\alpha = [0;\\overline{5,4}]$ .", "Then we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0,\\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} < \\infty .$ In view of Lemma REF , it suffices to prove that for all admissible $(a,b,c,d,e)$ where $(a,b,c) \\ne (0,0,0)$ , we have $\\Pi _1(a,b,c,d) \\cdot \\Pi _0(b,c,d,e) >1.01.$ If $a = 0$ , Lemma REF (ii) and (v) implies $\\Pi _1(a,b,c,d) \\geqslant 1$ .", "If $b \\ne 0$ we have by Lemma REF (i) that $\\Pi _0(b,c,d,e) > 1.22$ , so (REF ) clearly holds.", "If $(a,b) = (0,0)$ , then $c \\ne 0$ , hence by Lemma REF (i) and (v), we have (REF ) again.", "Since $\\Pi _0(b,c,d,e) \\geqslant 1$ in any case, Lemma REF (iii) treats all cases except $b = 1$ .", "But if $b = 1$ , Lemma REF (i) and (iv) shows that $\\Pi _1(a,b,c,d)\\cdot \\Pi _0(b,c,d,e) \\geqslant 1.22\\cdot 0.84 > 1$ .", "The main task in this proof is to construct good 
lower-approximation functions $\\Pi _r$ and check with computational assistance that the stated numerical inequalities are fulfilled.", "Similarly to Proposition REF , we define for $r \\in \\lbrace 0,1\\rbrace $ and $1 \\leqslant k\\leqslant a_{r+1}$ $\\varepsilon _{r,k}^{\\prime }(a,b):= kC(r) - aC([r+1]){\\alpha }_{\\sigma _[r+1]} + bC([r+2])\\alpha _{\\pi }.$ For shorter notation, we write $c_{r,0} = C(r), \\quad c_{r,1} = C([r+1]){\\alpha }_{\\sigma _[r+1]}, \\quad c_{r,2} = C([r])\\alpha _{\\pi }.$ Let $\\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}})$ be defined as in Proposition REF , $[i] = r$ with $b_{i+1} = a, b_{i+2} = b$ and $(b_{i+j})_{j \\in \\mathbb {N}}$ a sequence of admissible values.", "Observe that $\\begin{split}\\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}}) - \\varepsilon _{r,k}^{\\prime }(a,b)= \\; &c_{r,1}\\left( b_{i+3}\\alpha _{\\pi } + b_{i+5}\\alpha _{\\pi }^2 + b_{i+7}\\alpha _{\\pi }^3 + \\ldots \\right)\\\\\\;+ &c_{r,2}\\left(b_{i+4}\\alpha _{\\pi } + b_{i+6}\\alpha _{\\pi }^2 + b_{i+8}\\alpha _{\\pi }^3 + \\ldots \\right).\\end{split}$ Since $0 \\leqslant b_{i+j} \\leqslant a_{i+j+1} \\leqslant 5$ , we get $L_r := - 5c_{r,1} \\frac{\\alpha _{\\pi }}{1 - \\alpha _{\\pi }} \\leqslant \\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}}) - \\varepsilon _{r,k}^{\\prime }(a,b) \\leqslant 5c_{r,2} \\frac{\\alpha _{\\pi }}{1 - \\alpha _{\\pi }} := U_r.$ Now we define $\\tilde{G}_{r,k}(a,b) := \\min \\left\\lbrace G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}(a,b) + L_r\\right),G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}(a,b) + U_r\\right)\\right\\rbrace $ and $\\Pi _r(a,b,c,d) := \\prod _{k=1}^{b-1}\\tilde{G}_{r,k}(c,d)\\cdot \\left(\\tilde{G}_{[r-1],0}(b,c)\\right)^{{1}_{[a \\ne 0]}}.$ By Lemma REF (i), we have $\\Pi _r(a,b,c,d) \\geqslant \\prod _{k=1}^{b-1}G_{r}( \\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}}))\\cdot \\left(G_{[r-1]}( \\varepsilon ^{\\prime }_{r,k}((b_{i+j-1})_{j \\in \\mathbb {N}}))\\right)^{{1}_{[b_{i+j-1} \\ne 0]}}$ for any admissible sequence $(\\beta _{j})_{j \\in \\mathbb {N}}$ that fulfills $\\beta _{i-1} = a, \\beta _i = b, \\beta _{i+1} = c$ and $\\beta _{i+2} = d$ , hence $\\Pi _0, \\Pi _1$ are lower-approximation functions.", "Applying the procedure described in Remark REF , we can compute lower bounds for $\\Pi _r(a,b,c,d)$ in finitely many steps.", "In that way, we obtain the results (i) - (v) with computational assistance by distinguishing all admissible cases for $a,b,c,d,e$ .", "This concludes the proof of Lemma REF .", "Period length $\\ell =3$ Having established the result for $\\alpha = [0;\\overline{5,4}]$ , we head over to the case where $\\alpha = [0;\\overline{6,5,5}]$ .", "The argument is very similar to the case above, so we will only indicate the changes that are needed.", "We have the following statement, which is analogous to Lemma REF , followed by a corollary comparable to Corollary REF : Lemma 17 Let $\\alpha = [0;\\overline{6,5,5}]$ , and let $\\Pi _r: \\mathbb {N}^5 \\rightarrow \\mathbb {R}$ be lower-approximation functions for $\\alpha $ and $r = 0,1,2$ .", "Let $(a,b,c,d,e)$ be admissible values.", "Then we have the following: We have $\\Pi _0(a,b,c,d,e)> 1.001, \\Pi _1(a,b,c,d,e) > 0.81, \\Pi _2(a,b,c,d,e) > 1.001$ for all admissible $(a,b,c,d,e)$ with $(a,b) \\ne (0,0)$ .", "If $b \\ne 1$ and $a \\ne 0$ , then $\\Pi _1(a,b,c,d,e) \\geqslant 1.001$ .", "If $b \\ne 1$ and $a \\ne 0, \\Pi _2(a,b,c,d,e) \\geqslant 1.25$ , $\\Pi _0(a,b,c,d,e) \\geqslant 1.19$ 
If $a \\ne 0, c \\geqslant 1$ , then $\\Pi _1(a,b,c,d,e) \\geqslant 0.85$ .", "If $a \\ne 0$ and $e \\ne 0$ or $f \\leqslant 2$ , we have $\\Pi _1(a,1,1,1,e)\\cdot \\Pi _2(1,1,1,e,f)\\cdot \\Pi _0(1,1,e,f,g) > 1.007.$ If $a \\ne 0$ , we have $\\Pi _1(a,1,1,1,0)\\cdot \\Pi _2(1,1,1,0,f)\\cdot \\Pi _0(1,1,0,f,g) > 0.98.$ If $(a,b) = (0,0)$ , then $\\Pi _0(a,b,c,d,e) = \\Pi _1(a,b,c,d,e) = \\Pi _2(a,b,c,d,e) = 1$ .", "Corollary 18 Let $a,b,c,d,e,f,g$ be admissible values such that $(a,b,c,d) \\ne (0,0,0,0)$ and $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)< 1.001.$ Then we have $a \\ne 0, b = 1, c = 1, d = 1, e = 0, f \\geqslant 3$ and in that case, $\\Pi _1(a,1,1,1,0)\\cdot \\Pi _2(1,1,1,0,f)\\cdot \\Pi _0(1,1,0,f,g)> 0.98.$ Let $(a,b,c,d,e,f,g,h,i,j)$ be admissible values such that $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)< 1.$ Then we have $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)\\cdot \\Pi _1(d,e,f,g,h)\\cdot \\Pi _2(e,f,g,h,i) \\cdot \\Pi _0(f,g,h,i,j)> 1.$ Let $a,b,c,d,e,f,g$ be admissible values such that (REF ) holds.", "We see from Lemma REF (i) and (vi) immediately that $a \\ne 0, b = 1$ .", "If $c \\ne 1$ , then by (ii) $\\Pi _2(b,c,d,e,f) > 1.25$ , hence $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)> 0.81\\cdot 1.25 > 1.$ Thus $c = 1$ follows, which implies by (iii) that $\\Pi _1(a,b,c,d) \\geqslant 0.85$ .", "If $d \\ne 1$ , we have by (ii) that $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)\\geqslant 0.85 \\cdot 1.19>1,$ hence $d = 1$ .", "By (iv), this implies $e = 0$ and $f \\geqslant 3$ .", "Using (v), this concludes the first statement.", "For the second statement, we use part (i) of the corollary to see that $ e= 0, f \\geqslant 3$ .", "This implies by $(i)$ that $P_1(d,e,f,g) \\geqslant 1, P_0(f,g,h,i) \\geqslant 1$ and by (ii), $P_2(e,f,g,h) \\geqslant 1.25$ .", "Since $0.98\\cdot 1.25 > 1$ , the result follows.", "Lemma 19 Let $\\alpha = [0;\\overline{6,5,5}]$ .", "Then we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0,\\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} < \\infty .$ Let $N = \\sum _{i=0}^{n} b_{i}q_i(\\alpha )$ be the Ostrowski expansion of some arbitrary integer $N$ .", "Now let $M_i(N) := K_{3i+1}(N)\\cdot K_{3i+2}(N)\\cdot K_{3i+3}(N)$ with $K_i$ defined as in (REF ).", "Arguing as in Lemma REF , it suffices to show that for $i \\geqslant I_0$ we have $\\prod \\limits _{i = I_0}^{m}M_i(N) \\geqslant C > 0$ where $m := \\left\\lfloor \\frac{n}{3}\\right\\rfloor -1$ and $C = C(\\alpha )$ is independent of $N$ .", "We define $J := \\left\\lbrace j \\in \\lbrace I_0+1,\\ldots ,m\\rbrace : M_J(N) < 1\\right\\rbrace , \\quad J-1 := \\lbrace j -1: j \\in J\\rbrace .$ Choosing $I_0$ sufficiently large, we can apply Corollary REF to deduce $M_j(N)\\cdot M_{j+1}(N) > 1$ for all $j \\in J$ , $M_i(N) \\geqslant 1$ for all $i \\geqslant I_0+1$ and $M_{I_0}(N) > 0.98.$ Hence we obtain $\\prod _{i \\geqslant I_0}^{m}M_i(N) \\geqslant 0.99 \\cdot \\prod _{j \\in J}\\left(M_j(N)\\cdot M_{j+1}(N)\\right) \\cdot \\prod _{\\begin{array}{c}i = I_0\\\\ i, i+1 \\notin J\\end{array}}^{m}M_i(N) \\geqslant 0.98$ and thus, the statement follows.", "As in Lemma REF , we define for $r \\in \\lbrace 0,1\\rbrace $ and $1 \\leqslant k\\leqslant a_{r+1}$ $\\varepsilon _{r,k}^{\\prime }(a,b,c):= kc_{r,0} - ac_{r,1} + bc_{r,2} - cc_{r,3},$ where $\\begin{split}&\\alpha _{\\pi } := {\\alpha }_{\\sigma {[r+1]}}{\\alpha }_{\\sigma 
{[r+2]}}{\\alpha }_{\\sigma {[r+3]}}, \\quad c_{r,0} = C(r), \\quad c_{r,1} = C([r+1]){\\alpha }_{\\sigma {[r+1]}}, \\\\&c_{r,2} = C([r+2]){\\alpha }_{\\sigma {[r+1]}}{\\alpha }_{\\sigma {[r+2]}}, \\quad c_{r,3} = C(r)\\alpha _{\\pi }.\\end{split}$ Analogously to (REF ), we can prove that $\\begin{split}L_r := -\\frac{6\\alpha _{\\pi }}{1 - \\alpha _{\\pi }^2}\\left(\\alpha _{\\pi }c_{j,1} + c_{j,2} + \\alpha _{\\pi }c_{j,3}\\right) &\\leqslant \\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}}) - \\varepsilon _{r,k}^{\\prime }(a,b,c) \\\\&\\leqslant \\frac{6\\alpha _{\\pi }}{1 - \\alpha _{\\pi }^2}\\left(c_{j,1} + \\alpha _{\\pi }c_{j,2} + c_{j,3}\\right) := U_r\\end{split}$ and define $\\begin{split}\\tilde{G}_{r,k}(a,b,c) &:= \\min \\left\\lbrace G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}(a,b,c) + L_r\\right),G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}(a,b,c) + U_r\\right)\\right\\rbrace ,\\\\\\Pi _r(a,b,c,d,e) &:= \\prod _{k=1}^{b-1}\\tilde{G}_{r,k}(c,d,e)\\cdot \\left(\\tilde{G}_{[r-1],0}(b,c,d)\\right)^{{1}_{[a \\ne 0]}}.\\end{split}$ As before, we can deduce that $\\Pi _r, r \\in \\lbrace 0,1,2\\rbrace $ are lower-approximating functions, and we can finish the proof as in Lemma REF by using computational assistance.", "Acknowledgements The author is grateful to Christoph Aistleitner for various comments on an earlier version of this paper." ], [ "Proof of Theorem ", "The proof of Theorem REF follows some ideas of [4], although we need to incorporate the additional difficulty of period lengths $\\ell > 1$ .", "We start with the easier case where $\\alpha = [0;\\overline{5,4}]$ first, and prove the statement for $\\alpha = [0;\\overline{6,5,5}]$ later with similar ideas, only stating the needed refinements.", "We say that a function $\\Pi _r: \\mathbb {N}^j \\rightarrow \\mathbb {R}, 0 \\leqslant r \\leqslant \\ell $ is a lower-approximation function for $\\alpha $ and $r$ if for any admissible sequence $(\\beta _{j})_{j \\in \\mathbb {N}}$ and any $i \\in \\mathbb {N}$ that fulfills $b_{i-1} = \\beta _1, b_i = \\beta _2, b_{i+j-1} = \\beta _j, [i] = r$ , we have $\\Pi _r(\\beta _1,\\ldots ,\\beta _j) \\leqslant \\prod _{k=1}^{b-1}G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}((b_{i+1+j})_{j \\in \\mathbb {N}})\\right)\\cdot \\left(G_{[r-1]}\\left(\\varepsilon ^{\\prime }_{r-1,k}((b_{i+j})_{j \\in \\mathbb {N}}))\\right)\\right)^{{1}_{[b_{i-1} \\ne 0]}},$ where $\\varepsilon ^{\\prime }_{r,k}((\\beta _{j})_{j \\in \\mathbb {N}})$ is defined as in (REF ).", "The following lemma shows us how we prove the theorem under the assumption of having good lower-approximation functions $\\Pi _{[i]}$ whose evaluation exceeds 1 for most $i$ .", "These functions will be explicitly constructed in Lemmas REF respectively REF by estimates on the perturbation value." 
], [ "Period length $\\ell =2$", "Lemma 14 Let $\\alpha $ be a quadratic irrational with period $\\ell = 2$ and $\\Pi _0,\\Pi _1: \\mathbb {N}^4 \\rightarrow \\mathbb {R}$ be lower-approximation functions for $\\alpha $ that fulfill $\\Pi _1(a,b,c,d) \\cdot \\Pi _0(b,c,d,e) >1.01$ for any admissible $(a,b,c,d,e)$ (with respect to $\\alpha $ and 0) where $(a,b,c) \\ne (0,0,0)$ .", "Then we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0, \\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} < \\infty .$ Recall from Proposition REF that we can write $P_N(\\alpha ) = P_{q_n}(\\alpha ) \\cdot \\prod _{i = 1}^n K_i(N)$ with $K_i(N) = \\prod _{i=0}^{n}\\prod _{c_i= 1}^{b_i-1} P_{q_i}\\left(\\alpha ,\\varepsilon _{i,c_i}(N)\\right)\\cdot P_{q_{i-1}}\\left(\\alpha ,\\varepsilon _{i-1,0}(N)\\right)^{{1}_{[b_{i-1} \\ne 0]}}.$ Let $\\delta >0$ be a fixed small constant.", "Using the uniform continuity of $P_{q_i}$ from Theorem A and the assumption that $\\Pi _i$ is a lower approximation function, we have $K_i(N) \\geqslant \\Pi _{[i]}\\left(b_{i-1}(N),b_i(N),b_{i+1}(N),b_{i+2}(N)\\right) - \\delta $ for all $i \\geqslant I_0 = I_0(\\delta )$ .", "If $b_{i-1} = b_i = 0$ , all considered products are empty, hence $K_i(N) = \\Pi _{[i]}\\left(b_{i-1}(N),b_i(N),b_{i+1}(N),b_{i+2}(N)\\right) = 1.$ Now let $[i] = 1$ and set $a = b_{i-1}(N), b = b_i(N), c = b_{i+1}(N), d = b_{i+2}(N)$ .", "Combining (REF ) and (REF ), we have $K_{i}(N)\\cdot K_{i+1}(N)\\geqslant \\left(\\Pi _{[i]}(a,b,c,d) - {1}_{[(a,b) \\ne (0,0)]}\\delta \\right)\\cdot \\left(\\Pi _{[i+1]}(b,c,d,e) - {1}_{[(b,c)\\ne (0,0)]}\\delta \\right).$ If $(a,b,c) \\ne (0,0,0)$ , then combining (REF ) with the previous inequality yields $K_{i}(N)\\cdot K_{i+1}(N) \\geqslant 1,$ provided that $i \\geqslant I_0$ , and $\\delta $ is chosen sufficiently small.", "Clearly, (REF ) shows that the previous inequality also holds in case $(a,b,c) = (0,0,0)$ .", "Combining (REF ) and (REF ), we obtain $P_{N}(\\alpha ) \\geqslant P_{q_n}(\\alpha )\\cdot \\prod _{i = 1}^{I_0} K_{i}(N).$ By Lemma REF , each of those finitely many factors is uniformly bounded away from 0, so clearly, we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > C(I_0,\\alpha ) > 0$ and by (REF ), the result follows.", "So we are left to show that the assumption of Lemma REF is fulfilled when choosing $\\alpha = [0;\\overline{5,4}]$ , which will be shown in the following statements.", "Lemma 15 Let $\\alpha = [0;\\overline{5,4}]$ , $(a,b) \\ne (0,0)$ and let $(a,b,c,d)$ be admissible values with respect to 0 respectively 1.", "Then there exist lower-approximation functions $\\Pi _r$ with respect to $\\alpha $ and $r = 0,1$ with the following properties: $\\Pi _0(a,b,c,d) > 1.01$ for all $(a,b) \\ne (0,0)$ .", "If $a \\ne 0$ , then $\\Pi _0(a,b,c,d) > 1.22$ .", "If $a = 0, b \\ne 0$ , then $\\Pi _1(a,b,c,d) > 1.01$ .", "If $a \\ne 0$ and $b \\ne 1$ , we have $\\Pi _1(a,b,c,d) > 1.01$ .", "If $a \\ne 0, b =1$ , we have $\\Pi _1(a,b,c,d) > 0.84$ .", "If $(a,b) = (0,0)$ , then $\\Pi _0(a,b,c,d)= \\Pi _1(a,b,c,d) = 1$ .", "Assuming for the moment Lemma REF to hold, we can prove the following corollary.", "Corollary 16 Let $\\alpha = [0;\\overline{5,4}]$ .", "Then we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0,\\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} < \\infty .$ In view of Lemma REF , it suffices to prove that for all admissible $(a,b,c,d,e)$ where $(a,b,c) \\ne (0,0,0)$ , we have $\\Pi _1(a,b,c,d) \\cdot \\Pi 
_0(b,c,d,e) >1.01.$ If $a = 0$ , Lemma REF (ii) and (v) implies $\\Pi _1(a,b,c,d) \\geqslant 1$ .", "If $b \\ne 0$ we have by Lemma REF (i) that $\\Pi _0(b,c,d,e) > 1.22$ , so (REF ) clearly holds.", "If $(a,b) = (0,0)$ , then $c \\ne 0$ , hence by Lemma REF (i) and (v), we have (REF ) again.", "Since $\\Pi _0(b,c,d,e) \\geqslant 1$ in any case, Lemma REF (iii) treats all cases except $b = 1$ .", "But if $b = 1$ , Lemma REF (i) and (iv) shows that $\\Pi _1(a,b,c,d)\\cdot \\Pi _0(b,c,d,e) \\geqslant 1.22\\cdot 0.84 > 1$ .", "The main task in this proof is to construct good lower-approximation functions $\\Pi _r$ and check with computational assistance that the stated numerical inequalities are fulfilled.", "Similarly to Proposition REF , we define for $r \\in \\lbrace 0,1\\rbrace $ and $1 \\leqslant k\\leqslant a_{r+1}$ $\\varepsilon _{r,k}^{\\prime }(a,b):= kC(r) - aC([r+1]){\\alpha }_{\\sigma _[r+1]} + bC([r+2])\\alpha _{\\pi }.$ For shorter notation, we write $c_{r,0} = C(r), \\quad c_{r,1} = C([r+1]){\\alpha }_{\\sigma _[r+1]}, \\quad c_{r,2} = C([r])\\alpha _{\\pi }.$ Let $\\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}})$ be defined as in Proposition REF , $[i] = r$ with $b_{i+1} = a, b_{i+2} = b$ and $(b_{i+j})_{j \\in \\mathbb {N}}$ a sequence of admissible values.", "Observe that $\\begin{split}\\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}}) - \\varepsilon _{r,k}^{\\prime }(a,b)= \\; &c_{r,1}\\left( b_{i+3}\\alpha _{\\pi } + b_{i+5}\\alpha _{\\pi }^2 + b_{i+7}\\alpha _{\\pi }^3 + \\ldots \\right)\\\\\\;+ &c_{r,2}\\left(b_{i+4}\\alpha _{\\pi } + b_{i+6}\\alpha _{\\pi }^2 + b_{i+8}\\alpha _{\\pi }^3 + \\ldots \\right).\\end{split}$ Since $0 \\leqslant b_{i+j} \\leqslant a_{i+j+1} \\leqslant 5$ , we get $L_r := - 5c_{r,1} \\frac{\\alpha _{\\pi }}{1 - \\alpha _{\\pi }} \\leqslant \\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}}) - \\varepsilon _{r,k}^{\\prime }(a,b) \\leqslant 5c_{r,2} \\frac{\\alpha _{\\pi }}{1 - \\alpha _{\\pi }} := U_r.$ Now we define $\\tilde{G}_{r,k}(a,b) := \\min \\left\\lbrace G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}(a,b) + L_r\\right),G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}(a,b) + U_r\\right)\\right\\rbrace $ and $\\Pi _r(a,b,c,d) := \\prod _{k=1}^{b-1}\\tilde{G}_{r,k}(c,d)\\cdot \\left(\\tilde{G}_{[r-1],0}(b,c)\\right)^{{1}_{[a \\ne 0]}}.$ By Lemma REF (i), we have $\\Pi _r(a,b,c,d) \\geqslant \\prod _{k=1}^{b-1}G_{r}( \\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}}))\\cdot \\left(G_{[r-1]}( \\varepsilon ^{\\prime }_{r,k}((b_{i+j-1})_{j \\in \\mathbb {N}}))\\right)^{{1}_{[b_{i+j-1} \\ne 0]}}$ for any admissible sequence $(\\beta _{j})_{j \\in \\mathbb {N}}$ that fulfills $\\beta _{i-1} = a, \\beta _i = b, \\beta _{i+1} = c$ and $\\beta _{i+2} = d$ , hence $\\Pi _0, \\Pi _1$ are lower-approximation functions.", "Applying the procedure described in Remark REF , we can compute lower bounds for $\\Pi _r(a,b,c,d)$ in finitely many steps.", "In that way, we obtain the results (i) - (v) with computational assistance by distinguishing all admissible cases for $a,b,c,d,e$ .", "This concludes the proof of Lemma REF ." 
], [ "Period length $\\ell =3$", "Having established the result for $\\alpha = [0;\\overline{5,4}]$ , we head over to the case where $\\alpha = [0;\\overline{6,5,5}]$ .", "The argument is very similar to the case above, so we will only indicate the changes that are needed.", "We have the following statement, which is analogous to Lemma REF , followed by a corollary comparable to Corollary REF : Lemma 17 Let $\\alpha = [0;\\overline{6,5,5}]$ , and let $\\Pi _r: \\mathbb {N}^5 \\rightarrow \\mathbb {R}$ be lower-approximation functions for $\\alpha $ and $r = 0,1,2$ .", "Let $(a,b,c,d,e)$ be admissible values.", "Then we have the following: We have $\\Pi _0(a,b,c,d,e)> 1.001, \\Pi _1(a,b,c,d,e) > 0.81, \\Pi _2(a,b,c,d,e) > 1.001$ for all admissible $(a,b,c,d,e)$ with $(a,b) \\ne (0,0)$ .", "If $b \\ne 1$ and $a \\ne 0$ , then $\\Pi _1(a,b,c,d,e) \\geqslant 1.001$ .", "If $b \\ne 1$ and $a \\ne 0, \\Pi _2(a,b,c,d,e) \\geqslant 1.25$ , $\\Pi _0(a,b,c,d,e) \\geqslant 1.19$ If $a \\ne 0, c \\geqslant 1$ , then $\\Pi _1(a,b,c,d,e) \\geqslant 0.85$ .", "If $a \\ne 0$ and $e \\ne 0$ or $f \\leqslant 2$ , we have $\\Pi _1(a,1,1,1,e)\\cdot \\Pi _2(1,1,1,e,f)\\cdot \\Pi _0(1,1,e,f,g) > 1.007.$ If $a \\ne 0$ , we have $\\Pi _1(a,1,1,1,0)\\cdot \\Pi _2(1,1,1,0,f)\\cdot \\Pi _0(1,1,0,f,g) > 0.98.$ If $(a,b) = (0,0)$ , then $\\Pi _0(a,b,c,d,e) = \\Pi _1(a,b,c,d,e) = \\Pi _2(a,b,c,d,e) = 1$ .", "Corollary 18 Let $a,b,c,d,e,f,g$ be admissible values such that $(a,b,c,d) \\ne (0,0,0,0)$ and $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)< 1.001.$ Then we have $a \\ne 0, b = 1, c = 1, d = 1, e = 0, f \\geqslant 3$ and in that case, $\\Pi _1(a,1,1,1,0)\\cdot \\Pi _2(1,1,1,0,f)\\cdot \\Pi _0(1,1,0,f,g)> 0.98.$ Let $(a,b,c,d,e,f,g,h,i,j)$ be admissible values such that $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)< 1.$ Then we have $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)\\cdot \\Pi _1(d,e,f,g,h)\\cdot \\Pi _2(e,f,g,h,i) \\cdot \\Pi _0(f,g,h,i,j)> 1.$ Let $a,b,c,d,e,f,g$ be admissible values such that (REF ) holds.", "We see from Lemma REF (i) and (vi) immediately that $a \\ne 0, b = 1$ .", "If $c \\ne 1$ , then by (ii) $\\Pi _2(b,c,d,e,f) > 1.25$ , hence $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)> 0.81\\cdot 1.25 > 1.$ Thus $c = 1$ follows, which implies by (iii) that $\\Pi _1(a,b,c,d) \\geqslant 0.85$ .", "If $d \\ne 1$ , we have by (ii) that $\\Pi _1(a,b,c,d,e)\\cdot \\Pi _2(b,c,d,e,f)\\cdot \\Pi _0(c,d,e,f,g)\\geqslant 0.85 \\cdot 1.19>1,$ hence $d = 1$ .", "By (iv), this implies $e = 0$ and $f \\geqslant 3$ .", "Using (v), this concludes the first statement.", "For the second statement, we use part (i) of the corollary to see that $ e= 0, f \\geqslant 3$ .", "This implies by $(i)$ that $P_1(d,e,f,g) \\geqslant 1, P_0(f,g,h,i) \\geqslant 1$ and by (ii), $P_2(e,f,g,h) \\geqslant 1.25$ .", "Since $0.98\\cdot 1.25 > 1$ , the result follows.", "Lemma 19 Let $\\alpha = [0;\\overline{6,5,5}]$ .", "Then we have $\\liminf _{N \\rightarrow \\infty } P_N(\\alpha ) > 0,\\quad \\limsup _{N \\rightarrow \\infty } \\frac{P_N(\\alpha )}{N} < \\infty .$ Let $N = \\sum _{i=0}^{n} b_{i}q_i(\\alpha )$ be the Ostrowski expansion of some arbitrary integer $N$ .", "Now let $M_i(N) := K_{3i+1}(N)\\cdot K_{3i+2}(N)\\cdot K_{3i+3}(N)$ with $K_i$ defined as in (REF ).", "Arguing as in Lemma REF , it suffices to show that for $i \\geqslant I_0$ we have $\\prod \\limits _{i = I_0}^{m}M_i(N) \\geqslant C > 0$ where $m := \\left\\lfloor 
\\frac{n}{3}\\right\\rfloor -1$ and $C = C(\\alpha )$ is independent of $N$ .", "We define $J := \\left\\lbrace j \\in \\lbrace I_0+1,\\ldots ,m\\rbrace : M_J(N) < 1\\right\\rbrace , \\quad J-1 := \\lbrace j -1: j \\in J\\rbrace .$ Choosing $I_0$ sufficiently large, we can apply Corollary REF to deduce $M_j(N)\\cdot M_{j+1}(N) > 1$ for all $j \\in J$ , $M_i(N) \\geqslant 1$ for all $i \\geqslant I_0+1$ and $M_{I_0}(N) > 0.98.$ Hence we obtain $\\prod _{i \\geqslant I_0}^{m}M_i(N) \\geqslant 0.99 \\cdot \\prod _{j \\in J}\\left(M_j(N)\\cdot M_{j+1}(N)\\right) \\cdot \\prod _{\\begin{array}{c}i = I_0\\\\ i, i+1 \\notin J\\end{array}}^{m}M_i(N) \\geqslant 0.98$ and thus, the statement follows.", "As in Lemma REF , we define for $r \\in \\lbrace 0,1\\rbrace $ and $1 \\leqslant k\\leqslant a_{r+1}$ $\\varepsilon _{r,k}^{\\prime }(a,b,c):= kc_{r,0} - ac_{r,1} + bc_{r,2} - cc_{r,3},$ where $\\begin{split}&\\alpha _{\\pi } := {\\alpha }_{\\sigma {[r+1]}}{\\alpha }_{\\sigma {[r+2]}}{\\alpha }_{\\sigma {[r+3]}}, \\quad c_{r,0} = C(r), \\quad c_{r,1} = C([r+1]){\\alpha }_{\\sigma {[r+1]}}, \\\\&c_{r,2} = C([r+2]){\\alpha }_{\\sigma {[r+1]}}{\\alpha }_{\\sigma {[r+2]}}, \\quad c_{r,3} = C(r)\\alpha _{\\pi }.\\end{split}$ Analogously to (REF ), we can prove that $\\begin{split}L_r := -\\frac{6\\alpha _{\\pi }}{1 - \\alpha _{\\pi }^2}\\left(\\alpha _{\\pi }c_{j,1} + c_{j,2} + \\alpha _{\\pi }c_{j,3}\\right) &\\leqslant \\varepsilon ^{\\prime }_{r,k}((b_{i+j})_{j \\in \\mathbb {N}}) - \\varepsilon _{r,k}^{\\prime }(a,b,c) \\\\&\\leqslant \\frac{6\\alpha _{\\pi }}{1 - \\alpha _{\\pi }^2}\\left(c_{j,1} + \\alpha _{\\pi }c_{j,2} + c_{j,3}\\right) := U_r\\end{split}$ and define $\\begin{split}\\tilde{G}_{r,k}(a,b,c) &:= \\min \\left\\lbrace G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}(a,b,c) + L_r\\right),G_{r}\\left(\\varepsilon ^{\\prime }_{r,k}(a,b,c) + U_r\\right)\\right\\rbrace ,\\\\\\Pi _r(a,b,c,d,e) &:= \\prod _{k=1}^{b-1}\\tilde{G}_{r,k}(c,d,e)\\cdot \\left(\\tilde{G}_{[r-1],0}(b,c,d)\\right)^{{1}_{[a \\ne 0]}}.\\end{split}$ As before, we can deduce that $\\Pi _r, r \\in \\lbrace 0,1,2\\rbrace $ are lower-approximating functions, and we can finish the proof as in Lemma REF by using computational assistance." ], [ "Acknowledgements", "The author is grateful to Christoph Aistleitner for various comments on an earlier version of this paper." ] ]
2210.07743
[ [ "Dark matter in the Scotogenic model with spontaneous lepton number\n violation" ], [ "Abstract Scotogenic models constitute an appealing solution to the generation of neutrino masses and to the dark matter mystery.", "In this work we consider a version of the Scotogenic model that breaks lepton number spontaneously.", "At this scope, we extend the particle content of the Scotogenic model with an additional singlet scalar which acquires a non-zero vacuum expectation value and breaks a global lepton number symmetry.", "As a consequence, a massless Goldstone boson, the majoron, appears in the particle spectrum.", "We discuss how the presence of the majoron modifies the phenomenology, both in flavor and dark matter observables.", "We focus on the fermionic dark matter candidate and analyze its relic abundance and prospects for both direct and indirect detection." ], [ "Introduction", "The origin of neutrino masses and the nature of the dark matter (DM) component of the Universe are two of the most relevant open questions in current physics.", "Regarding the former, neutrino oscillation experiments have robustly established the existence of non-zero neutrino masses and lepton mixings.", "In fact, some of the oscillation parameters have been already determined with great accuracy [1].", "Since the Standard Model (SM) of particle physics does not include a mechanism for the generation of neutrino masses, an extension is called for.", "Similarly, the Planck collaboration has determined that about $27 \\%$ of the energy-matter content of the Universe is in the form of DM [2].", "It is often assumed that the DM is made of particles, but no state in the SM spectrum has the required properties to play such a role.", "Again, this motivates the exploration of scenarios beyond the SM.", "There are many neutrino mass models.", "Among them, radiative models [3], [4], [5], [6] (models that induce neutrino masses at the loop level) are particularly well motivated, since they naturally explain the smallness of neutrino masses due to the loop suppression (for a review, see [7]).", "Furthermore, tree-level contributions to neutrino masses are often forbidden by a conserved $\\mathbb {Z}_2$ symmetry which, in addition, stabilizes the lightest $\\mathbb {Z}_2$ -odd state.", "Provided it has the correct quantum numbers and can be produced in the early Universe in the correct amount, this state is a valid DM candidate.", "Therefore, radiative models offer a good solution to simultaneously address the origin of neutrino masses and the DM problem.", "A prime example of such class of models is the Scotogenic model [8].", "This model introduces an additional $\\rm SU(2)_L$ doublet, $\\eta $ , and three generations of fermion singlets, $N$ , all charged under a $\\mathbb {Z}_2$ parity.", "These ingredients suffice to generate neutrino masses at the 1-loop level and provide a viable DM candidate.", "In most neutrino mass models, neutrinos are Majorana fermions.", "This is precisely the case of the Scotogenic model.", "In this class of models, $\\rm U(1)_L$ — where $L$ stands for lepton number — is broken in two units.", "The breaking can be explicit, due to the presence of lepton number violating parameters in the Lagrangian, or spontaneous, if the minimum of the scalar potential of the model does not preserve the symmetry.", "In the standard Scotogenic model [8] the breaking is explicit.", "In contrast, in this paper we consider a version of the Scotogenic model that breaks lepton number spontaneously.", "This is 
achieved by extending the particle content of the model with an additional singlet scalar, denoted as $\\sigma $ , which acquires a non-zero vacuum expectation value (VEV) and breaks the global $\\rm U(1)_L$ symmetry.", "As a consequence, the spectrum of the theory contains a massless Goldstone boson, the majoron, $J$  [9], [10], [11], [12], [13].", "This state leads to novel phenomenological predictions, both in flavor observables (due to the existence of new channels such as $\\ell _\\alpha \\rightarrow \\ell _\\beta \\, J$ ) and in the DM sector (due to the existence of new processes in the early Universe).", "Several works combining spontaneous lepton number breaking with the Scotogenic generation of fermion masses can be found in the literature.", "We highlight [14], which also studies the DM phenomenology of the Scotogenic model with spontaneous lepton number violation.", "We build upon this previous work and go beyond it in several ways.", "First of all, our analysis takes into account a wide variety of lepton flavor violating (LFV) constraints, including processes that involve the majoron either virtually or as a particle in the final state.", "We confirm the results of [14], but also discuss in further detail some aspects of the DM phenomenology of the model.", "A high-energy extension of the Scotogenic model featuring a massless majoron was also introduced in [15], while Ref.", "[16] proposes a model with spontaneous lepton number violation that induces a small 1-loop mass for a dark Majorana fermion à la Scotogenic.", "The authors of [17] studied electroweak baryogenesis in an extended Scotogenic scenario including a majoron, whereas the possible Scotogenic origin of the small lepton number violation of the inverse seesaw was discussed in [18].", "Finally, the spontaneous breaking of a gauged version of lepton number in a Scotogenic scenario was considered in [19].", "The rest of the manuscript is organized as follows.", "We present the model in Sec.", ", where we define its basic ingredients, discuss its scalar sector and the generation of Majorana neutrino masses and briefly comment on the possible DM candidates.", "The most important experimental bounds that constrain our scenario are discussed in Sec.", ", while the results of our numerical study are presented in Sec. .", "Finally, we summarize and draw our conclusions in Sec.", "." 
], [ "The model", "We consider a variant of the original Scotogenic model.", "The SM particle content is extended by adding the $\\rm SU(2)_L$ scalar doublet $\\eta $ , the scalar singlet $\\sigma $ and three generations of fermion singlets $N$ .", "The scalar doublets of the model can be decomposed into $\\rm SU(2)_L$ components as $H = \\begin{pmatrix} H^+ \\\\ H^0 \\end{pmatrix} \\, , \\quad \\eta = \\begin{pmatrix} \\eta ^+ \\\\ \\eta ^0 \\end{pmatrix} \\, .$ Here $H$ is the usual SM Higgs doublet.", "We impose the conservation of a global $\\rm U(1)_L$ symmetry which can be identified with lepton number.", "Finally, we also introduce the usual $\\mathbb {Z}_2$ parity of the Scotogenic model, under which $N$ and $\\eta $ are odd while the rest of the fields are even.", "Alternatively, one can assign $\\rm U(1)_L$ charges in such a way that the $\\mathbb {Z}_2$ parity is obtained as a remnant symmetry after the spontaneous breaking of lepton number [15].", "This is more economical in terms of symmetries, since the usual Scotogenic parity is not imposed, but automatically obtained from lepton number.", "However, the generation of the Scotogenic $\\lambda _5$ coupling requires the introduction of a non-renormalizable operator.", "The particle content of the model and the representations under the gauge and global symmetries are summarized in Table REF .", "The most general Yukawa Lagrangian, involving the new particles compatible with all symmetries, can be written as $ \\mathcal {L}_Y = y \\, \\overline{\\ell _L} \\, \\eta \\, N + \\kappa \\, \\sigma \\, \\overline{N^c} \\, N + \\text{h.c.}\\, ,$ where $y$ and $\\kappa $ are $3 \\times 3$ matrices.", "In the following, we take $\\kappa $ to be diagonal without loss of generality.", "The most general scalar potential is given by $\\mathcal {V} & = m_H^2 \\, H^\\dagger H + m_\\eta ^2 \\, \\eta ^\\dagger \\eta + m_\\sigma ^2 \\, \\sigma ^* \\sigma + \\frac{\\lambda _1}{2} \\, \\left( H^\\dagger H \\right)^2 + \\frac{\\lambda _2}{2} \\, \\left( \\eta ^\\dagger \\eta \\right)^2 + \\frac{\\lambda _\\sigma }{2} \\left( \\sigma ^* \\sigma \\right)^2 \\nonumber \\\\& + \\lambda _3 \\left( H^\\dagger H \\right) \\left( \\eta ^\\dagger \\eta \\right) + \\lambda _3^{H \\sigma } \\left( H^\\dagger H \\right) \\left( \\sigma ^* \\sigma \\right) + \\lambda _3^{\\eta \\sigma } \\left( \\eta ^\\dagger \\eta \\right) \\left( \\sigma ^* \\sigma \\right) \\nonumber \\\\& + \\lambda _4 \\left( H^\\dagger \\eta \\right) \\left( \\eta ^\\dagger H \\right) + \\left[ \\frac{\\lambda _5}{2} \\left(H^\\dagger \\eta \\right)^2 + \\text{h.c.}\\right] \\, ,$ where $m_H^2$ , $m_\\eta ^2$ and $m_\\sigma ^2$ are parameters with dimension of mass$^2$ and the rest of the parameters are dimensionless." 
], [ "Symmetry breaking and scalar sector", "We will assume that the scalar potential parameters are such that a minimum is found for the configuration $\\langle H^0 \\rangle = \\frac{v}{\\sqrt{2}} \\, , \\quad \\langle \\eta ^0 \\rangle = 0 \\, , \\quad \\langle \\sigma \\rangle = \\frac{v_\\sigma }{\\sqrt{2}} \\, .$ Here $v \\approx 246$ GeV is the usual electroweak VEV.", "This vacuum preserves the $\\mathbb {Z}_2$ parity, which remains a conserved symmetry.", "In contrast, lepton number is spontaneously broken and a Majorana mass term for the $N$ singlets is induced, with $\\frac{M_N}{2} = \\kappa \\, \\frac{v_\\sigma }{\\sqrt{2}} \\, .$ The tadpole equations obtained by minimizing the scalar potential are given by $\\frac{\\partial \\mathcal {V}}{\\partial H^0} &= \\frac{v_H}{\\sqrt{2}} \\left( m_H^2 + \\frac{ \\lambda _1\\, v^2}{2} + \\frac{\\lambda _3^{H\\sigma }\\, v_\\sigma ^2}{2}\\right) = 0 \\, , \\\\\\frac{\\partial \\mathcal {V}}{\\partial \\sigma } &= \\frac{v_\\sigma }{\\sqrt{2}} \\left( m_\\sigma ^2 + \\frac{ \\lambda _\\sigma \\, v_\\sigma ^2}{2} + \\frac{\\lambda _3^{H\\sigma }\\, v^2}{2}\\right) = 0 \\, .$ Assuming the conservation of CP in the scalar sector, one can split the neutral scalar fields in terms of their real and imaginary components as $H^0 = \\frac{1}{\\sqrt{2}} \\left( S_H + i \\, P_H + v \\right) \\, , \\quad \\eta ^0 = \\frac{1}{\\sqrt{2}} \\left( \\eta _R + i \\, \\eta _I \\right) \\, , \\quad \\sigma = \\frac{1}{\\sqrt{2}} \\left( S_\\sigma + i \\, P_\\sigma + v_\\sigma \\right) \\, .$ The $\\eta _R$ and $\\eta _I$ fields do not mix with the rest of scalars due to the $\\mathbb {Z}_2$ parity.", "In this case, the scalar potential contains the piece $\\mathcal {V}_{\\rm mass}^N = \\frac{1}{2} \\, \\text{Re}(z_i) \\, \\left( \\mathcal {M}_R^2 \\right)_{ij} \\, \\text{Re}(z_j) + \\frac{1}{2} \\, \\text{Im}(z_i) \\, \\left( \\mathcal {M}_I^2 \\right)_{ij} \\, \\text{Im}(z_j) \\, ,$ where $z = \\lbrace H^0, \\sigma \\rbrace $ and $\\mathcal {M}_R^2$ and $\\mathcal {M}_I^2$ are the $2 \\times 2$ CP-even and CP-odd squared mass matrices, respectively.", "One finds $ \\hspace{-34.14322pt} \\mathcal {M}_R^2 = \\left( \\begin{array}{cc}m_H^2 + \\frac{3\\,\\lambda _1}{2} v^2 + \\frac{\\lambda _3^{H\\sigma }}{2}v_\\sigma ^{2} & \\lambda _3^{H\\sigma } \\, v \\, v_\\sigma \\\\\\lambda _3^{H\\sigma } \\, v \\, v_\\sigma & m_\\sigma ^2 + \\frac{3 \\lambda _\\sigma }{2} v_\\sigma ^{2}+ \\frac{\\lambda _3^{H\\sigma }}{2} v^{2}\\end{array} \\right) \\, ,$ and $ \\hspace{-34.14322pt} \\mathcal {M}_I^2 = \\left( \\begin{array}{cc}m_H^2 + \\frac{\\lambda _1}{2}v^2 + \\frac{\\lambda _3^{H\\sigma }}{2}v_\\sigma ^{2} &0 \\\\0 & m_\\sigma ^2 + \\frac{\\lambda _\\sigma }{2} v_\\sigma ^{2}+ \\frac{\\lambda _3^{H\\sigma }}{2}v^2\\end{array} \\right) \\, .$ One can now use the tadpole equations in Eqs.", "(REF )-() to evaluate these matrices at the minimum of the scalar potential.", "We obtain $ \\hspace{-34.14322pt} \\mathcal {M}_R^2 = \\left( \\begin{array}{cc}\\lambda _1 \\, v^2 & \\lambda _3^{H\\sigma } \\, v \\, v_\\sigma \\\\\\lambda _3^{H\\sigma } \\, v \\, v_\\sigma & \\lambda _\\sigma \\,v_\\sigma ^{2}\\end{array} \\right) \\, ,$ while the CP-odd mass matrix becomes identically zero as expected, since it has to provide two massless states: the unphysical Goldstone boson $z$ that becomes the longitudinal component of the $Z$ boson and a physical massless Goldstone boson associated to the spontaneous breaking of the lepton number, the majoron ($J$ ).", "Therefore, since 
$\\sigma $ is a gauge singlet field, one can make the identification $J = P_\\sigma \\, , \\quad z = P_H \\, .$ The CP-even states $\\left\\lbrace S_H, S_\\sigma \\right\\rbrace $ mix, leading to two massive states, $h_1$ and $h_2$ , as follows: $ \\left( \\begin{array}{c}h_1\\\\h_2\\end{array} \\right) = \\mathcal {O} \\left( \\begin{array}{c}S_H\\\\S_{\\sigma } \\end{array} \\right)= \\left( \\begin{array}{cc}\\cos \\alpha & \\sin \\alpha \\\\-\\sin \\alpha & \\cos \\alpha \\end{array} \\right) \\, \\left( \\begin{array}{c}S_H\\\\S_{\\sigma } \\end{array} \\right) ,$ where $\\mathcal {O}$ is the $2\\times 2$ orthogonal matrix which diagonalizes the CP-even mass matrix, such that $\\mathcal {O} \\mathcal {M}_R^2\\mathcal {O}^{T}=\\text{diag} (m_{h_1}^{2}, m_{h_2}^{2}) \\, ,$ and the mass eigenvalues are given by $m_{(h_1,h_2)}^2=\\frac{\\lambda _1}{2}v^2+\\frac{\\lambda _\\sigma }{2}v_\\sigma ^2 \\mp \\frac{1}{2}\\sqrt{(2\\lambda _3^{H\\sigma }\\, v \\,v_\\sigma )^{2}+(\\lambda _1\\,v^2-\\lambda _\\sigma \\, v_\\sigma ^2)^{2}} \\, .$ One of the two states has to be associated with the $\\sim 125$ GeV SM Higgs boson, while an additional CP-even state is present in the spectrum.", "The angle $\\alpha $ is the doublet-singlet mixing angle and is given by $ \\tan \\,\\alpha =\\frac{2\\lambda _3^{H\\sigma }\\, v \\,v_\\sigma }{\\lambda _1\\,v^2-\\lambda _\\sigma \\, v_\\sigma ^2-\\sqrt{(2\\lambda _3^{H\\sigma }\\, v \\,v_\\sigma )^{2}+(\\lambda _1\\,v^2-\\lambda _\\sigma \\, v_\\sigma ^2)^{2} }} \\,.$ We focus now on the $\\mathbb {Z}_2$ -odd scalars.", "The masses of the CP-even and CP-odd components of $\\eta ^{0}$ are given by $ m^2_{(\\eta _R \\, , \\eta _I)}=m_\\eta ^2+\\frac{\\lambda _3^{\\eta \\sigma }}{2}v_\\sigma ^{2}+\\frac{\\lambda _3+\\lambda _4\\pm \\lambda _5}{2}v^2 \\, ,$ thus, as in the usual Scotogenic model, the mass difference between $\\eta _R$ and $\\eta _I$ is controlled by the $\\lambda _5$ coupling.", "Finally, the mass of the charged scalar fields $\\eta ^{\\pm }$ turns out to be $m^2_{\\eta ^\\pm }=m_\\eta ^2+\\frac{\\lambda _3}{2}v^2+\\frac{\\lambda _3^{\\eta \\sigma }}{2}v_\\sigma ^{2}\\, .$" ], [ "Neutrino masses", "Neutrino masses are induced at the 1-loop level, in the same way as in the standard Scotogenic model, as shown in Fig.", "REF .", "One finds the $3 \\times 3$ neutrino mass matrix $ (m_\\nu )_{\\alpha \\beta } = \\sum _{b=1}^3 \\frac{y_{\\alpha b} \\, y_{\\beta b}}{32\\pi ^2} \\, m_{Nb}\\left[\\frac{m^2_{\\eta _R}}{m^2_{Nb}-m^2_{\\eta _R}}\\log \\frac{m^2_{\\eta _R}}{m^2_{Nb}}-\\frac{m^2_{\\eta _I}}{m^2_{Nb}-m^2_{\\eta _I}}\\log \\frac{m^2_{\\eta _I}}{m^2_{Nb}}\\right] \\, ,$ where $m_{\\eta _R}$ and $m_{\\eta _I}$ are the $\\eta _R$ and $\\eta _I$ masses, respectively, and $m_{Nb}$ are the diagonal elements of $M_N$ .", "We note that neutrino masses vanish for $m_{\\eta _R} =m_{\\eta _I}$ .", "This is consistent with the fact that $m^2_{\\eta _R} -m^2_{\\eta _I} \\propto \\lambda _5$ , and in the limit $\\lambda _5 \\rightarrow 0$ a conserved lepton number can be defined.", "This allows one to assume $\\lambda _5 \\ll 1$ in a natural way [20]."
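The 1-loop master formula above is simple to evaluate; the following NumPy sketch implements it directly (all numerical inputs are illustrative placeholders, with masses in GeV):

```python
import numpy as np

def Lambda_b(mN, mR, mI):
    # Bracketed loop function of the Scotogenic neutrino mass formula,
    # including the m_Nb / (32 pi^2) prefactor.
    fR = mR**2 / (mN**2 - mR**2) * np.log(mR**2 / mN**2)
    fI = mI**2 / (mN**2 - mI**2) * np.log(mI**2 / mN**2)
    return mN * (fR - fI) / (32 * np.pi**2)

def mnu(y, mN, mR, mI):
    # (m_nu)_{alpha beta} = sum_b y_{alpha b} y_{beta b} Lambda_b = y diag(Lambda) y^T
    Lam = np.array([Lambda_b(m, mR, mI) for m in mN])
    return y @ np.diag(Lam) @ y.T

# Illustrative inputs: heavy singlets and a small eta_R/eta_I mass splitting
y_toy = 1e-4 * np.ones((3, 3))
print(mnu(y_toy, mN=np.array([1000.0, 1200.0, 1500.0]), mR=800.0, mI=799.9))
```

As expected from the discussion of the small-lambda_5 limit, shrinking the eta_R/eta_I splitting drives the output toward zero.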
], [ "Dark matter", "The lightest $\\mathbb {Z}_2$ -odd state is completely stable and can, in principle, be a good DM candidate.", "In this model, as in the standard Scotogenic model, this role can be played either by the lightest $N$ state or by a neutral $\\eta $ field ($\\eta _R$ or $\\eta _I$ , depending on the sign of $\\lambda _5$ ).", "In this work we will concentrate on the fermion DM and thus consider $N_1$ , the lightest singlet fermion, to be our DM candidate." ], [ "Constraints", "Several experimental and theoretical constraints will be considered in our numerical analysis." ], [ "Boundedness from below", "We demand the scalar potential to be bounded from below, which implies the following set of conditions [21]: $\\lambda _1,\\lambda _2,\\lambda _\\sigma &\\ge 0 \\, , \\\\\\lambda _3 &\\ge -2 \\sqrt{\\lambda _1 \\, \\lambda _2} \\, , \\\\4 \\, \\lambda _1 \\, \\lambda _\\sigma &\\ge \\left(\\lambda _3^{H \\sigma }\\right)^2 \\, , \\\\4 \\, \\lambda _2 \\, \\lambda _\\sigma &\\ge \\left(\\lambda _3^{\\eta \\sigma }\\right)^2 \\, , \\\\\\lambda _3 + \\lambda _4 - |\\lambda _5| &\\ge -2 \\sqrt{\\lambda _1 \\, \\lambda _2} \\, .$ The CMS collaboration has searched for invisible Higgs boson decays at the LHC [22].", "In our model, the Higgs boson may decay invisibly to a pair of majorons or a pair of DM particles, $h \\rightarrow J \\,J$ and $h \\rightarrow N_1 \\, N_1$ .", "The former will always be kinematically available, since the majoron is massless, whereas the latter requires $m_{N_1} \\ge m_h/2$ .", "Reference [22] assumes a completely SM-like Higgs boson production at the LHC.", "However, in our case all Higgs boson production channels at the LHC are suppressed with respect to the SM by $c_\\alpha ^2$ , where $c_\\alpha = \\cos \\alpha $ and $\\alpha $ is the mixing angle in the CP-even scalar sector.", "Therefore the limit derived in [22] translates into $c_\\alpha ^2 \\, \\text{BR}(h \\rightarrow \\text{invisible}) < 0.19 \\quad \\text{at} \\, 95\\% \\, \\text{C.L.}", "\\, .$ All the parameter points considered in our analysis comply with the constraints from neutrino oscillation experiments.", "This is guaranteed by means of a modified Casas-Ibarra parametrization [23], properly adapted to the Scotogenic model [24], [25], [26], which allows us to express the $y$ Yukawa matrix as $ y = \\sqrt{\\Lambda }^{\\: -1} \\, R \\, \\sqrt{\\widehat{m}_\\nu } \\, U^\\dagger \\, .$ Here $\\Lambda $ is a matrix defined as $\\Lambda =\\text{diag}(\\Lambda _b)$ , with $\\Lambda _b = \\frac{m_{Nb}}{32\\pi ^2} \\, \\left[\\frac{m^2_{\\eta _R}}{m^2_{Nb}-m^2_{\\eta _R}}\\log \\frac{m^2_{\\eta _R}}{m^2_{Nb}}-\\frac{m^2_{\\eta _I}}{m^2_{Nb}-m^2_{\\eta _I}}\\log \\frac{m^2_{\\eta _I}}{m^2_{Nb}}\\right] \\, ,$ while $R$ is an orthogonal matrix ($R^T R = R R^T = \\mathbb {I}$ ), generally parametrized by three complex angles.", "Finally, $U$ is the unitary matrix that brings $m_\\nu $ to diagonal form as $U^T \\, m_\\nu \\, U = \\widehat{m}_\\nu = \\text{diag}(m_1,m_2,m_3)$ , with $m_i$ ($i=1,2,3$ ) the neutrino physical masses.", "The entries of the unitary matrix $U$ as well as the neutrino squared mass differences are measured in neutrino oscillation experiments.", "Our analysis will use the results of the global fit [1].", "The interaction Lagrangian of majorons with charged leptons can be written as [27] $ \\mathcal {L}_{\\ell \\ell J} = J \\, \\bar{\\ell }_\\beta \\left( S_L^{\\beta \\alpha } \\, P_L + S_R^{\\beta \\alpha } \\, P_R \\right)\\ell _{\\alpha } + \\text{h.c.}\\, ,$ where 
$\\ell _{\\alpha ,\\beta }$ are the standard light charged leptons and $P_{L,R}$ are the usual chiral projectors.", "The $S_{L,R}$ couplings are induced at the 1-loop level in our model, as shown in [15].", "The diagonal $S^{\\beta \\beta } =S_L^{\\beta \\beta } + S_R^{\\beta \\beta \\ast }$ couplings are purely imaginary, due to the fact that majorons are pseudoscalar states, and are strongly constrained due to their potential impact on astrophysical observations.", "Large couplings to electrons or muons are excluded since they would lead to an abundant production of majorons in dense astrophysical media and an efficient cooling mechanism.", "The authors of [28] used data from white dwarfs to set the bound $\\text{Im} \\, S^{e e} < 2.1 \\times 10^{-13} \\, ,$ while the supernova SN1987A was considered in [29] to establish the limit $\\text{Im} \\, S^{\\mu \\mu } < 2.1 \\times 10^{-9} \\, .$", "Two alternative bounds are given in [29]; we decided to consider the more conservative one.", "Finally, there are also laboratory bounds on the majoron diagonal couplings to charged leptons.", "In [27], the results of the OSQAR experiment [30], a light-shining-through-a-wall experiment, were used to find the approximate bounds $S^{ee} \\lesssim 10^{-7}$ and $S^{\\mu \\mu } \\lesssim 10^{-5}$ .", "We note that these are clearly less stringent than the bounds obtained from astrophysical observations." ], [ "Lepton flavor violation", "As in most neutrino mass models, LFV is a powerful constraint that strongly restricts the allowed parameter space of our model.", "Several processes will be considered in our analysis: The radiative decays $\\ell _\\alpha \\rightarrow \\ell _\\beta \\, \\gamma $ , which turn out to be the most constraining ones in most neutrino mass models.", "In particular, the MEG experiment restricts the $\\mu \\rightarrow e \\gamma $ branching ratio to be smaller than $4.2 \\times 10^{-13}$  [31].", "We also consider the analogous limits on $\\tau $ LFV decays [32], but they are less stringent.", "The 3-body decays $\\ell _\\alpha \\rightarrow \\ell _\\beta \\, \\ell _\\gamma \\, \\ell _\\gamma $ , with $\\beta = \\gamma $ and $\\beta \\ne \\gamma $ .", "In this case we follow [33] and include the usual photon penguin contributions as well as other usually less relevant contributions, such as box diagrams.", "Majoron-mediated contributions are also included, using the results derived in [27].", "The decays $\\ell _\\alpha \\rightarrow \\ell _\\beta \\, J$ with the majoron in the final state are also considered, as they constrain the off-diagonal $S_{L,R}$ couplings directly.", "For instance, the null results obtained in the search for the decay $\\mu \\rightarrow e J$ at TRIUMF [34] can be translated into the bound $|S^{e\\mu }| < 5.3 \\times 10^{-11}$  [35].", "We use the analytical expressions for the majoron off-diagonal couplings to charged leptons in the Scotogenic model found in [15].", "$\\mu -e$ conversion in nuclei, again following the analytical results in [33]."
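Conversely, the modified Casas-Ibarra parametrization quoted above can be used to generate Yukawa matrices that reproduce the oscillation data by construction. The following sketch assumes real rotation angles for $R$ (as in the numerical scans of the next section) and, purely for brevity, replaces the PMNS matrix by the identity; all numerical inputs are illustrative placeholders.

```python
import numpy as np

def rotation(i, j, theta):
    """Real 3x3 rotation in the (i, j) plane."""
    R = np.eye(3)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i] = R[j, j] = c
    R[i, j], R[j, i] = s, -s
    return R

def casas_ibarra_yukawa(m_nu_diag, U, Lam, angles):
    """y = sqrt(Lambda)^{-1} R sqrt(m_nu_hat) U^dagger; the complex square
    root guards against loop factors Lambda_b of negative sign."""
    R = rotation(1, 2, angles[0]) @ rotation(0, 2, angles[1]) @ rotation(0, 1, angles[2])
    return (np.diag(1 / np.sqrt(Lam + 0j)) @ R
            @ np.diag(np.sqrt(m_nu_diag)) @ U.conj().T)

# Illustrative inputs: neutrino masses in GeV (~0, 8.6 meV, 50 meV),
# identity in place of the measured PMNS matrix, placeholder loop factors.
m_nu_diag = np.array([0.0, 8.6e-12, 5.0e-11])
Lam = np.array([2.0e-3, 3.0e-3, 4.0e-3])  # Lambda_b, as defined above (GeV)
y = casas_ibarra_yukawa(m_nu_diag, np.eye(3), Lam, angles=(0.1, 0.2, 0.3))
print(np.round(np.abs(y), 8))
```

By construction, plugging the resulting $y$ back into the 1-loop mass formula reproduces the input neutrino masses, which is what guarantees compliance with the oscillation constraints in the scans.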
], [ "Numerical results", "We now proceed to discuss the results of our analysis.", "To perform the numerical scan, we have first implemented the model in SARAH (version 4.11.0) [36], a Mathematica package for the analytical evaluation of all the information about the model.", "See [37] for a pedagogical introduction to the use of SARAH.", "With this tool, we have created a source code for SPheno (version 4.0.2) [38], [39], thus allowing for an efficient numerical evaluation of all the analytical expressions derived with SARAH.", "We have also computed several observables of interest in our model, including the lepton flavor violating ones, both analytically and with the help of FlavorKit [40], for an in-depth cross-check of their expressions.", "Finally, we have used micrOmegas (version 5.0.9) [41] to obtain the main DM observables, namely the DM relic density and direct and indirect detection predictions.", "Table: Values of the main input parameters for the numerical scan.As already mentioned, while both the scalar $\\eta _{R,I}$ and the fermions $N_i$ are, in principle, viable DM candidates in this model, in our analysis we focus on the lightest Majorana fermion $N_1$ as the main component of the DM.", "We summarize our choice of parameters for the numerical scan in Tab.", "REF .", "Moreover, $\\lambda _1$ is fixed by the condition of requiring $m_{h_1} = 125$ GeV.", "In some numerical scans we have fixed the value of $m_{h_2} =500$ GeV or the value of $m_{\\eta ^2}$ such that $m_{\\eta _{R,I},\\eta ^\\pm } - m_{N_1} \\lesssim 20$ GeV, as we will discuss in more detail below.", "Since we want to focus on $N_1$ as the DM candidate, we further require that $m_{N_1} < m_{N_{2,3}}, m_{\\eta _{R,I}}$ .", "We have chosen normal hierarchy for the neutrino spectrum and considered the best-fit values for the neutrino oscillation parameters found by the global fit [1].", "Finally, the three angles in the orthogonal $R$ matrix are assumed to be real and taken randomly in our numerical scans.", "We first show in Fig.", "REF the relic abundance of $N_1$ , as a function of its mass.", "For this specific scan, we have fixed $m_{h_2} = 500$ GeV to highlight the $s$ -channel annihilation of $N_1$ via $h_2$ .", "In this figure, grey points denote solutions either leading to overabundant DM or excluded by any of the constraints listed in Sec.", ", or where the spin-independent $N_1$ -nucleon elastic scattering cross section is excluded by the most recent data from the LUX-ZEPLIN experiment [42].", "Cyan points denote solutions which can reproduce the observed cold DM relic density, as they fall within the 3$\\sigma $ range obtained by the Planck satellite data [43], $\\Omega _{N_1} h^2 =0.120 \\pm 0.0036$ (blue thin band).", "Solutions leading to underabundant DM (which would then require another DM candidate to explain the totality of the observed cold DM relic density) are depicted in blue.", "As can be seen from the plot, most of the solutions lead to overabundant DM, except for points falling in the following regions: (i) a resonant region where $m_{N_1} \\sim m_{h_1}/2 \\sim 60$ GeV, (ii) a second resonant region where $m_{N_1} \\sim m_{h_2}/2 \\sim 250$ GeV and (iii) a region of coannihilations at higher $m_{N_1}$ .", "Figure: Relic abundance of N 1 N_1 as a function of m N 1 m_{N_1}.", "Cyan points depict solutions in agreement with the cold DM measurement obtained from Planck data  (the blue thin band shows the 3σ\\sigma interval) while blue points depict solutions leading to underabundant DM.", 
"Gray points are excluded by any of the constraints listed in Sec.", "or due to an overabundant DM relic density.To explore in more detail the third, high-mass region, we performed a second numerical scan in which we have varied the mass difference $\\Delta = m_{\\eta _R} - m_{N_1}$ in the $\\left[ 0, 20\\right]$ GeV range.", "In such a way, we have enforced $N_1$ to be in the $\\sim [100 -3000]$ GeV region, where coannihilations with $\\eta _{R,I}$ and $\\eta ^\\pm $ are very relevant, thus reducing the relic abundance of $N_1$ .", "Figure REF shows this region in parameter space, in which the DM relic density is set by coannihilations.", "The color code is the same as in Fig.", "REF .", "Compared to the result of the previous scan (Fig.", "REF ), we can see that if coannihilations are relevant, more viable solutions can be found in the $m_{N_1} \\sim \\left[100, 3000 \\right]$ GeV region.", "Figure: Relic abundance of N 1 N_1 as a function of m N 1 m_{N_1} in the coannihilation region, where Δ∈0,20\\Delta \\in \\left[0, 20\\right] GeV.", "Same color code as in Fig.", ".Next we discuss the results for $N_1$ direct detection.", "In order to maximize the number of viable solutions, we focus again on the coannihilation region, and we show in Fig.", "REF the spin-independent $N_1$ -nucleon elastic scattering cross section, $\\sigma _{\\rm SI}$ , as a function of the DM mass, $m_{N_1}$ .", "The cross section shown in this figure is weighted by the relative abundance $\\xi $ , defined as $\\xi = \\frac{\\Omega _{N_1}}{\\Omega _{\\text{\\rm DM,Planck}}} \\, ,$ where $\\Omega _{\\rm DM,Planck} h^2 = 0.120$  [43].", "We apply the same color code as in Fig.", "REF , that is cyan points indicate solutions explaining the totality of observed DM, while blue points denote under-abundant DM.", "The plain green line and dashed area indicate the current most stringent limit from the LUX-ZEPLIN experiment (LZ-2022) [42], while the black dashed line denotes the constraint from XENON1T (XENON1T-2018) [44].", "Other (less stringent) constraints on $\\sigma _{\\rm SI}$ apply from the liquid xenon experiment PandaX-II [45] and from liquid argon experiments like DarkSide-50 [46] and DEAP-3600 [47], although they are not shown here.", "Future facilities including XENONnT [48], DarkSide-20k [49], ARGO [49] and DARWIN [50], [51] (see [52] for an overview) will be able to further inspect the parameter space of this model.", "As for general reference, we further illustrate the expected discovery limit corresponding to the so-called “$\\nu $ -floor\" from coherent elastic neutrino-nucleus scattering (CE$\\nu $ NS) for a Ge target [53] (dashed orange line).", "Notice, however, that this should not be taken as a hard limit, as it can be overcome with different techniques and it has strong dependences on both the target material and a series of uncertainties (see for example [54], [55], [56] for more details).", "Figure: Spin-independent N 1 N_1-nucleon elastic scattering crosssection – weighted by the relative abundance – as a function ofm N 1 m_{N_1}.", "The green area is already excluded by the LUX-ZEPLINexperiment (LZ-2022) , while the black dashedline denotes the constraint from XENON1T(XENON1T-2018) .", "The dashed orange curveindicates the expected discovery limit corresponding to theν\\nu -floor from CEν\\nu NS of solar and atmospheric neutrinos fora Ge target .Finally, we have explored the predictions for the velocity-averaged cross section of $N_1$ annihilation into gamma rays.", "These are among the most 
suitable messengers to probe DM via indirect detection.", "We focus once more on the coannihilation region, i.e. on the high-mass range $m_{N_1} \\sim 0.1 - 2$ TeV, where the annihilation channels $N_1N_1 \\rightarrow h_1 h_1, h_2 h_2, h_1 h_2, Z^0 Z^0, h_i J$ can be relevant.", "The hadronization of the final-state gauge bosons and Higgs bosons will produce neutral pions, which in turn can decay into photons, thus giving rise to a gamma-ray flux with a continuum spectrum which may be within reach of DM indirect detection experiments.", "A detailed calculation of the gamma-ray energy spectra produced by the annihilation of two $N_1$ particles in this specific model would be required in order to correctly compute exclusion bounds from existing gamma-ray data, but this is out of the scope of this work.", "However, we note that the main annihilation channels in this high-mass range include Higgs bosons in the final state.", "The gamma-ray energy spectrum from the DM DM $ \\rightarrow h_1 h_1$ annihilation channel is very similar to that from DM DM $\\rightarrow W^+ W^-$ at $m_{N_1} \\sim 1$ TeV (see for instance Fig. 15 of [57]).", "In the following, for the sake of simplicity, we will compare our predictions with bounds obtained assuming $W^+ W^-$ as the main annihilation channel, to get an overall idea of how current data can constrain the parameter space of this model.", "Charged cosmic rays can also be used to look for $N_1$ annihilations, even though their detection is more challenging due to uncertainties in the treatment of their propagation.", "For instance, AMS-02 data on the antiproton flux and the Boron to Carbon (B/C) ratio can be used to constrain the $N_1$ annihilation cross section [58], [59], [60].", "With some caveats concerning the astrophysical uncertainties on the $\\bar{p}$ production, propagation and on solar modulation (see e.g. [61], [62], [63]), these bounds turn out to be stronger than gamma-ray limits from dwarf spheroidal satellite galaxies in some mass ranges.", "Following the same considerations as before, i.e. that the antiproton energy spectrum from the DM DM $ \\rightarrow h_1 h_1$ annihilation channel is very similar to that from DM DM $\\rightarrow W^+ W^-$ at $m_{N_1} \\sim 1$ TeV, we will compare our predictions to current limits on the $N_1$ annihilation cross section set by the combination of $\\bar{p}$ and B/C data of AMS-02 [58], [59], assuming $W^+W^-$ as the dominant annihilation channel.", "We show in Fig. REF the $N_1$ total annihilation cross section — weighted by $\\xi ^2$ — versus its mass.", "The color code follows the same scheme as in Figs. REF , REF .", "We also depict the 95% C.L. upper limits currently set by Fermi-LAT with gamma-ray observations of Milky Way dSphs (6 years, Pass 8 event-level analysis) [64] (red solid curve and shaded area) and from a combination of $\\bar{p}$ and B/C data of AMS-02 [58], [59] (green), both assuming $N_1 N_1 \\rightarrow W^+ W^-$ as the main annihilation channel, following the considerations made before.", "We see that a few solutions already fall within the region currently excluded by AMS-02 data.", "As already highlighted, while a dedicated analysis should be performed for this specific model, we can conclude that current $\\bar{p}$ and B/C data may already be excluding a relevant part of the parameter space.", "Forthcoming data will allow us to further probe $N_1$ as a DM candidate via its multi-messenger signals.", "Figure: $N_1$ total annihilation cross section as a function of 
m_{N_1}.", "The red and green lines refer to the corresponding 95% C.L.", "upper limits currently set by Fermi-LAT gamma-ray data from dSphs  and from the antiproton and B/C data of AMS-02 , respectively.As in many scenarios for neutrino mass generation, LFV processes strongly restrict the available parameter space of the model.", "In addition to $\\mu \\rightarrow e \\gamma $ , very commonly considered in phenomenological studies, our model also leads to signatures with the majoron in the final state, like $\\mu \\rightarrow eJ$ .", "Figure REF shows BR($\\mu \\rightarrow e J$ ) as a function of BR($\\mu \\rightarrow e \\gamma $ ).", "Again, we have focused on the coannihilation region.", "We first notice that some parameter points are already excluded by the current experimental limits on these LFV branching ratios.", "However, one can also see that our numerical scan also finds many valid parameter points leading to very low values of both BR($\\mu \\rightarrow e \\gamma $ ) and BR($\\mu \\rightarrow e J$ ), clearly below the discovery reach of planned experiments.", "This is not surprising, since we take random $R$ matrices in our numerical scans, hence accidentally finding parameter points with suppressed $\\mu -e$ flavor violation.", "While a slight correlation among these two observables can be observed in Fig.", "REF , $\\mu \\rightarrow e \\gamma $ receives contributions from additional loop diagrams that do not involve the majoron.", "The two observables are hence independent.", "Interestingly, we find that BR($\\mu \\rightarrow e J$ ) is generally more constraining than BR($\\mu \\rightarrow e \\gamma $ ), although the difference is not very significant.", "Figure: BR(μ→eJ\\mu \\rightarrow e J) as a function of BR(μ→eγ\\mu \\rightarrow e \\gamma )in the coannihilation region, where Δ∈0,20\\Delta \\in \\left[0,20\\right] GeV.", "Same color code as inFig. .", "The horizontal and vertical linescorrespond to the current experimental limits, discussed inSec.", "." 
], [ "Summary and discussion", "Most SM extensions aiming at an explanation of neutrino oscillation data consider Majorana neutrinos.", "This option breaks the accidental $\\rm U(1)_L$ lepton number symmetry of the SM in two units.", "If the breaking of lepton number is spontaneous, a Goldstone boson appears in the particle spectrum of the theory, the majoron.", "In this work we have analyzed the dark matter phenomenology of this scenario in the context of the popular Scotogenic model.", "Focusing on the fermionic DM candidate $N_1$ , we have found that it can explain the observed DM abundance in three regions of parameter space: (i) a resonant region where it annihilates via $h_1$ , with $m_{N_1} \\sim 60$ GeV, (ii) a second resonant region where $s-$ channel annihilations via $h_2$ are relevant and (iii) a region of coannihilations at $m_{N_1} \\sim 1$ TeV.", "In particular, if coannihilations are relevant, more allowed solutions are found, either explaining the totality of DM or at least a sizeable part of it.", "While some of these solutions are already excluded by the recent LUX-ZEPLIN result, most of them are within the reach of near-future direct detection experiments.", "Interestingly, indirect detection searches seem to constitute another promising tool to further probe $N_1$ as a DM candidate via its multi-messenger signals, mainly gamma rays and antiprotons.", "All in all, the presence of the majoron and of a second Higgs open up the allowed parameter space of $N_1$ as DM, compared to the standard Scotogenic model.", "The majoron has an impact also on the phenomenology of LFV observables, as it leads to new interesting signatures, where it appears in the final state.", "Among these, we found that BR($\\mu \\rightarrow e J$ ) is generally more constraining than the most common BR($\\mu \\rightarrow e \\gamma $ ).", "Finally, let us comment that the presence of a massless majoron may have relevant implications on the early-Universe cosmology.", "In particular, it can affect cosmological and astrophysical environments, and can contribute to $\\Delta N_\\mathrm {eff}$ .", "The majoron can be produced from the Higgs decay, or the annihilation of $N_1$ or even via freeze-in through its linear coupling with the active neutrinos.", "In order to avoid constraints from $\\Delta N_\\mathrm {eff}$ , one might require the $N_1$ annihilation cross section into majorons to be small enough, at the freeze-out epoch.", "Another interesting scenario consists in $N_1$ having very tiny couplings, so that it does not reach thermal equilibrium in the early Universe and it is instead produced via freeze-in.", "Such a production mechanism, yet together with the presence of the majoron, should also lead to some interesting phenomenology.", "We leave such analysis for a follow-up of this paper." ], [ "Acknowledgements", "Work supported by the Spanish grants PID2020-113775GB-I00 (AEI/10.13039/501100011033), CIPROM/2021/054, SEJI/2018/033 and SEJI/2020/016 (Generalitat Valenciana).", "AV acknowledges financial support from MINECO through the Ramón y Cajal contract RYC2018-025795-I.", "VDR acknowledges financial support by the Universitat de València through the sub-programme “ATRACCIÓ DE TALENT 2019”, in the early stages of this work." ] ]
2210.07706
[ [ "Towards Trustworthy AI-Empowered Real-Time Bidding for Online\n Advertisement Auctioning" ], [ "Abstract Artificial intelligence-empowred Real-Time Bidding (AIRTB) is regarded as one of the most enabling technologies for online advertising.", "It has attracted significant research attention from diverse fields such as pattern recognition, game theory and mechanism design.", "Despite of its remarkable development and deployment, the AIRTB system can sometimes harm the interest of its participants (e.g., depleting the advertisers' budget with various kinds of fraud).", "As such, building trustworthy AIRTB auctioning systems has emerged as an important direction of research in this field in recent years.", "Due to the highly interdisciplinary nature of this field and a lack of a comprehensive survey, it is a challenge for researchers to enter this field and contribute towards building trustworthy AIRTB technologies.", "This paper bridges this important gap in trustworthy AIRTB literature.", "We start by analysing the key concerns of various AIRTB stakeholders and identify three main dimensions of trust building in AIRTB, namely security, robustness and fairness.", "For each of these dimensions, we propose a unique taxonomy of the state of the art, trace the root causes of possible breakdown of trust, and discuss the necessity of the given dimension.", "This is followed by a comprehensive review of existing strategies for fulfilling the requirements of each trust dimension.", "In addition, we discuss the promising future directions of research essential towards building trustworthy AIRTB systems to benefit the field of online advertising." ], [ "Introduction", "Recent years have witnessed widespread adoption of online advertising, which has become the dominant sector in the advertising industry.", "Compared with traditional television, radio, newspaper, magazines and billboards, online advertising not only provides advertisers with an alternative option to diversity their strategies to reach more potential customers via the Internet, but also allows them to personalize ads to viewers in a real-time and cost-effective manner [112].", "The key enabling technology for online advertising is Real-Time Bidding (RTB), which refers to the algorithmic trading of online advertising opportunities (a.k.a.", "ad impressions) through artificial intelligence (AI)-empowered real-time auctioning [112].", "In RTB, the entire auction process for each impression usually takes less than 100 milliseconds before the ad is positioned.", "By automating the auctioning process involving a large number of available inventories among ad publishers, RTB has significantly transformed the online advertising marketplace.", "Compared with other types of online advertising, RTB offers a more streamlined, efficient and targeted purchasing process for advertisers.", "It promotes user behavior targeting based on user data rather than contextual data, and focuses on the most relevant inventory which can result in high returns on investment for the advertisers.", "Currently, there have been two surveys on the topic of RTB [112], [68].", "They review RTB from the perspective of algorithm design with the aim of maximizing the revenue or other key performance indicators for different participants of the ad delivery process.", "Despite its development, RTB still faces challenges which threaten its trustworthiness.", "Firstly, RTB systems face many security threats.", "On the one hand, there have been diverse types of frauds which 
can deplete advertisers' budgets.", "On the other hand, ads can be injected into the publishers' pages by malicious participants, which brings no revenue to the publishers or even damages their reputation.", "Secondly, RTB systems face challenges that demand high levels of algorithmic robustness.", "As shown in [112], [124], RTB suffers from sample selection biases (SSBs), i.e., systematic differences between the data distributions in the training space and the inference space.", "Last but not least, RTB systems face users from diverse demographic backgrounds.", "Thus, fair treatment of the users is an important consideration [20].", "As trustworthy AI research starts to gain traction in recent years, works on enhancing the trustworthiness of RTB systems have also emerged.", "Nevertheless, there is currently no comprehensive survey on trustworthy AI-empowered RTB (AIRTB) techniques.", "In this paper, we attempt to bridge this gap.", "This paper contributes to the trustworthy AI literature in the following ways: We provide a detailed analysis of the RTB technology ecosystem, with focus on the diverse stakeholders involved and the trustworthy AI dimensions important to them.", "We propose a unique multi-tiered taxonomy of trustworthy AIRTB based on the major techniques supporting AIRTB, and summarize trustworthy AI-related challenges in each part.", "In lower tiers of this taxonomy, we summarize the key techniques supporting the security, robustness and fairness aspects of AIRTB.", "To the best of our knowledge, it is the first such taxonomy on this topic, and provides new perspectives to existing works in this field.", "We discuss the main metrics adopted in existing approaches to experimentally evaluate the performance of trustworthy AIRTB approaches, thereby providing readers with a useful guide on experiment design.", "We outline promising future research directions towards building trustworthy AI-empowered real-time auctioning systems for online advertising.", "For each direction, we analyse the limitations in the current literature and propose potential ways forward.", "Through this survey, we aim to provide researchers and practitioners with an informative overview of trustworthy AI-empowered real-time auctioning to help them enter this interdisciplinary field." ], [ "An Overview of Trustworthy AIRTB", "This section provides background on the topic of Trustworthy AIRTB [68].", "We start by introducing the main terminologies.", "Then, we illustrate the AIRTB ecosystem and the commonly adopted revenue models.", "Lastly, we summarize the trust-building requirements of various AIRTB stakeholders.", "Table: Stakeholders of a typical AIRTB ecosystem."
], [ "Terminologies", "Table REF lists the key stakeholders (a.k.a.", "participantsIn this paper, we use “participants” and “stakeholders” interchangeably.)", "in a typical AIRTB ecosystem and their corresponding descriptions according to the Interactive Advertising Bureau (IAB)https://wiki.iab.com.", "An AIRTB ecosystem includes three main types of stakeholders: 1) advertisers, 2) ad networks, and 3) publishers.", "Advertisers need to continually adjust the design of their ad campaigns based on analysis from various ad networks to increase their impact.", "To maximize revenues, publishers need to selectively subscribe to a number of ad networks based on cost-benefit analysis.", "Ad networks form Ad Exchanges to offer combined marketplaces for advertisers and publishers to join.", "To make it more efficient for advertisers to choose target users and publishers to place their ads, demand side platforms (DSPs) emerge.", "They are autonomous agents working on behalf of advertisers in an ad exchange.", "By combining demands, DSPs can enhance the effectiveness and selectivity for advertisers.", "Similarly, supply side platforms (SSPs) work as the agents to help publishers in an ad exchange sell impressions and optimally manage their inventories [28].", "Since ad networks, ad exchanges, DSPs and SSPs function together to enable AIRTB services, we refer to them collectively as “Ad Exchange Networks”." ], [ "Workflow", "The high-level overview of a typical AIRTB ecosystem is shown in Figure REF .", "The advertisers set up ad campaigns in the auction market, while the publishers register ad impressions with the auction market.", "The auction market execute auctions to trade impressions and ad campaigns in order to strike a balance between demand and supply.", "The typical workflow of each auction is triggered by the emergence of an ad request from a user (e.g., when a user opens a webpage).", "Upon receiving such ad request, the ad exchange packages it with user information (e.g., the URL of the webpage which the user is visiting, the IP address of the user, etc.)", "as well as the publisher information (e.g., the context), and transits the packaged bid request to DSPs which might be interested in displaying their advertisers' ads to this user to solicit bids.", "Based on this bid request, each DSP assesses the potential revenue and possible cost to determine a bid price according to its bidding function, and sends it back as the bid response.", "Afterwards, the ad exchange selects the winner and determines the market price according to the adopted auction mechanism (e.g., the second-price auction, first-price auction).", "It then charges the winner and informs the publisher to display the winning ad.", "To improve the efficiency of ad delivery, the advertisers and the ad exchange networks often adopt the Online Behavioural Advertising (OBA) technology, which collects users’ actions, constructs models to estimate user preferences, and thereafter shows ads tailored to the estimated preferences.", "To achieve this, they collect and share information about both the users and the publishers by utilizing browser cookies." 
], [ "Revenue Models", "Generally, publishers allow advertisers to post their ads on their websites in exchange for commissions from the activities users take.", "The quantity of impressions, clicks, and user actions constitute three typical basis for the common models that the publishers adopt to generate revenues.", "Cost Per Impression Mile (CPM): Under this revenue model, the fees paid by the advertisers are based on the expense per 1,000 views of the corresponding ad.", "CPM was designed for conventional advertising systems.", "It is preferred by the publishers because as long as they display the ads, they can receive revenues without the need to consider user actions (e.g., clicks, conversions).", "Cost Per Click (CPC): Under this revenue model, publishers are paid according to the number of clicks by the viewers on the ads shown in their webpages.", "Compared to CPM, CPC ensures a higher return on investment for the advertisers as clicks by users are strong indicators of potential interest.", "Cost Per Action (CPA): Under this revenue model, advertisers pay publishers based on the number of the predefined actions taken by the users following the clicks (e.g., purchasing the corresponding products, subscribing to the services).", "CPC can be regarded as a special case of CPA.", "Compared to clicks, the predefined actions following clicks are preferred by the advertisers.", "As such, CPA is more popular with advertisers.", "Nevertheless, CPA has its limitations.", "On the one hand, the publishers are less keen on adopting CPA due to the possibility that fraudulent advertisers may under-report the number of predefined actions performed by the users in order to reduce the commission payout.", "On the other hand, implementing this model is difficult, particularly when dealing with complicated actions." 
], [ "Desirable Trustworthy AI Dimensions", "Recent ethics guidelines for AI given by the European Union (EU) [103] state that trustworthy AI ecosystems are supposed to adhere to four main ethical principles: explainability, fairness, prevention of harm, and respect for human autonomy.", "A variety of trustworthy AI dimensions have been proposed by various organizations and researchers based on these four main principles [10], [106].", "In this paper, we follow [67] and focus on the following five key dimensions in our discussion about trustworthy AIRTB from the perspectives of the key stakeholders: 1) security, 2) robustness, 3) fairness, 4) explainability and 5) accountability.", "Table REF summarizes the dimensions of trustworthy AI concerned by each stakeholders in an AIRTB system.", "Specifically, to build trust with the advertisers, the AIRTB system needs to make an effort to improve on all five dimensions.", "To build trust with the publishers, AIRTB needs to fulfill their requirements from the perspectives of security, robustness, fairness and accountability.", "As far as the ad exchange networks are concerned, if their requirements of security and accountability are fulfilled, they can establish trust with an AIRTB system.", "Table: Detailed trustworthy AI dimensions required by different AIRTB stakeholders.", "(**) denotes that the corresponding requirement has been studied by existing research.Nevertheless, as the research on trustworthy AIRTB is still in a relatively early stage, the majority of existing studies are concentrated on dimensions of security, robustness and fairness (as shown in Table REF ).", "Hence, our survey mainly focuses on these three trustworthy AIRTB dimensions." ], [ "Security", "This section reviews threats facing AIRTB systems and methods to secure them.", "Specifically, we first briefly summarize the most common attacks targeting AIRTB, and the tools and techniques may used to carry out the attacks.", "Then, we discuss and analyse the countermeasures against these attacks." ], [ "Attackers and Threat Models", "The proposed taxonomy of attacks on AIRTB is shown in Fig.", "REF .", "They can be categorized into three major groups: 1) placement fraud, 2) traffic fraud, and 3) action fraud [131].", "Placement frauds involve changing or manipulating the information that appears on users' clients or the publisher's websites in order to generate more clicks or impressions.", "Traffic frauds refer to those that attempt to boost the volume of impressions or clicks from various locations by using fictitious traffics.", "For instance, malicious attacks can use a crowd or a botnet to artificially boost the number of clicks or impressions on the publishers' websites.", "Action frauds focus on users' actions to make money.", "For instance, malicious attackers can hire actors to artificially boost conversion rates with the help of web bots in order to earn additional commissions." ], [ "Tools and Methods used by Attackers", "Attackers adopt four main types of tools and methods with attacking AIRTB systems: 1) malvertising, 2) inflight modification of ad traffic, 3) click fraud, and 4) hacking campaign accounts." 
], [ "Malvertising", "Malvertising refers to the approach of spreading malwares to vulnerable devices through online advertising [30].", "Due to the complexity of AIRTB which involves numerous redirections among various stakeholders, attackers can insert malicious contents (e.g., malicious ads) into places where the ad exchange networks and publishers failed to anticipate.", "Malvertising threats could be launched by AIRTB advertisers as well as publishers.", "For example, advertisers can easily launch malvertises by inserting malicious ads into legal ad networks.", "Ad networks may place such ads on the websites of the corresponding publishers, where end users might be misled to click on them.", "In addition, the publishers can also include malicious contents on their websites, which could inadvertently lead the users to download malwares even without activation process.", "Flash-based ads constitute one of the widely known types of malvertising [32]." ], [ "Inflight modification of ad traffic (IMAT)", "In [55], the authors introduced an ad fraud method known as the Man-In-The-Middle (MITM) attack, which modifies ad traffics in-flight.", "The Bahama botnet is a well-known example of this attack.", "It enables malwares to force affected devices to provide users with modified ads.", "Specifically, in the Bahama botnet, the malicious actors manipulate the the Domain Name System (DNS) translations on the compromised devices, and then redirect user traffics to target websites.", "In this way, click-through payments are activated by user clicks on the fake ads, resulting in payouts from the advertisers without actual clicks on the intended ads.", "Alternatively, this attack can also be launched through compromised wireless router botnets.", "This configuration involves turning a malware-infected wireless router into a bot.", "Then, the botnet master can launch traffic modification attacks in-flight to re-route traffics through such routers.", "This attack is often performed by public hotspots, which provide free Wifi access to users while inserting ads to boost revenues." 
], [ "Click Fraud", "Click frauds (a.k.a.", "click spamming, malicious attacks) refer to the automatic or manual attacks that aim to elicit fake clicks on the ads in order to generate illegal revenues [63].", "As such attackers click on ads without actually being interested in the contents, they can defraud the advertisers' ad budgets and harm the well-being of the AIRTB ecosystem.", "Click frauds can be divided into two categories: 1) conventional click frauds, and 2) crowd-based click frauds.", "1) Conventional click fraud.", "Conventional click frauds can be carried out through three main approaches: 1 hit inflation, 2 hit shaving, and 3 badvertising.", "1 Hit inflation This type of click fraud refers to attacks that aim to increase the number of hits to generate revenues for publishers, or on competitors' ads with invalid clicks to deplete their ad budgets [4].", "It is often performed by publishers and advertisers.", "Publisher click inflation.", "This attack takes place when fraudulent publishers purposely inflate the click-through rate without genuine interest in the ads to increase revenue from the ad networks [88].", "According to the number of fraudsters, publisher click inflation attacks consist of two main types: 1) coalition attacks, and 2) non-coalition attacks [81].", "The coalition publisher click inflation attacks are carried out by a group of colluding publishers, while the non-coalition attacks involve only one publisher.", "Coalition attacks has two main advantages.", "On the one hand, it is more challenging for countermeasures to detect the relationships among malicious devices and malicious publishers.", "On the other hand, sharing resources instead of adding more physical resources among malicious devices and malicious publishers lowers the costs of launching attacks.", "Advertiser clicking inflation.", "This attack refers to the case that the malevolent advertisers aim to deplete their competitors' marketing budgets by launching hit inflation attacks [105].", "In this way, the attackers can enhance their likelihood to win future ad placement auctions, especially in situations where the daily marketing budgets of the advertisers are limited.", "2 Hit Shaving As mentioned in Sec.", ", in contrast to paying the publishers based on the number of clicks, the advertisers generally prefer CPA as they pay publishers based on the desired user actions.", "Nevertheless, CPA is susceptible to hit shaving attacks [23], which are also known as deflation frauds.", "Through hit shaving attacks, dishonest advertisers decrease the number of clicks from publishers in an imperceptible manner in order to defraud them of the payouts they deserve.", "3 Badvertising Badverting refers to covert click fraud attacks that automatically and silently create clicks on ads once the users access the websites so as to increase the attackers' revenue [33].", "Compared with traditional attacks based on malwares, badvertiments are more stealthy and generally take the form of phishing attacks and spams.", "Badvertising consists of two steps: (i) delivery, which transmits either corrupt data to users or users to corrupt data; and (ii) execution, which distributes advertisements to the targeted users secretly and automatically.", "It can be successfully implemented by manipulating the JavaScript codes that the clients' browsers download and run in order to publish advertisements.", "JavaScript snippet files are often inserted into the publishers' Web pages for online advertising systems to function.", 
"The JavaScript files will run each time a user accesses the pages and downloads ads from ad servers.", "When the ads are downloaded, the JavaScript files' frames are updated to include the HTML codes necessary to display the ads.", "The publisher counts how many times the users click on the links to the ad providers' servers using the click-through payment system.", "Consequently, the users are referred to the websites of the ad clients.", "In order to deploy clicks automatically, Badvertisements run extra malicious scripts.", "To put it simply, the malicious scripts parse the HTML codes and assemble all links after running and rewriting the frame.", "Then, they modify the webpages to include the HTML iframes.", "If the users choose to click the links, the iframes will be triggered in the background and load their contents to take advantage of the users.", "2) Crowd-based click fraud.", "Crowd-based click frauds leverage crowdsourcing [53] to recruit actual individuals to artificially boost ad traffic [107].", "As crowdsourcing systems are widely available, it is possible to hire a large number of workers to blindly click on a rival advertiser's ads to inflate its expenses.", "Compared with the conventional frauds mentioned above, crowd-based click frauds possess several characteristics: they often involve a sizable group of people, they generate limited traffic, and the click actions cannot be distinguished from normal click actions." ], [ "Hacking Campaign Accounts", "AIRTB has spawned the tools (e.g., AdWords) which help advertisers launch online campaigns quickly and effectively.", "However, such tools also make advertisers' accounts vulnerable.", "Malicious actors can take over the advertisers' accounts and leverage these tools to set up attacks with more significant impact.", "This attack is referred to as hacking campaign accounts [73].", "The campaign accounts may be blocked from legitimate access, or even entered without authorization once they are hacked." ], [ "Summary", "Table REF summarizes the attacks on AIRTB systems discussed in this section.", "Among all these attacks, only Inflight Modification of Ad traffic affects all three stakeholders, while hacking campaign accounts, hit shaving, and badvertising only affect one stakeholder.", "Hacking campaign accounts does not affect specific revenue models.", "Both hit inflation and crowd-based fraud can be applied under all revenue models adopted by AIRTB.", "Moreover, they both affect publishers and advertisers, while having no effect on ad networks.", "Among the AIRTB stakeholders, publishers and advertisers are the main targets of attacks, while the ad networks are only being targeted by two attacks.", "CPC is the revenue model that is affected by the most number of attacks, while CPM is affected by the least number of attacks.", "This is due to the difference in popularity of the revenue models." ], [ "Defending AIRTB Systems", "After gaining an overview of attacks on AIRTB, we now look into approaches to defending against such attacks.", "Fig.", "REF shows the proposed taxonomy of existing methods against attacks on AIRTB systems.", "Based on how and when they take effect, existing AIRTB defense methods can be classified into two main categories: 1) offline methods and 2) online methods.", "The former are designed to detect and mitigate attacks before and after the ads have been placed.", "The latter are activated during the ad placement process.", "In this sense, they complement each other." 
], [ "Offline Methods", "There are four main categories of offline defense methods for AIRTB: 1) Heuristic methods, 2) Data Analytics–based methods, 3) Executable Analysis-based methods, and 4) Theoretical Analysis methods.", "1) Heuristic Methods.", "In the early stage of development for AIRTB, most ad exchange networks clean up malicious ads based on heuristics [73], [107].", "For example, if an advertiser finds that it received many clicks from the same IP address without actual purchase, it can exclude future clicks from this IP address (or even the entire region in which the IP address is located).", "Such methods generally incur high labor costs and tend to become ineffective quickly due to the rapid evolution of attackers' strategies.", "2) Data Analytics–based Methods.", "As machine learnig-based data analytics techniques develop in recent years, researchers have started to leverage them for detecting ad frauds and malicious ads.", "Based on anonymized data produced for a data-mining competition in 2012, [80] reveals the following discoveries.", "Firstly, to accurately detect frauds, it is essential to analyse the potential features embedded in fine-grained time-series.", "Secondly, the most effective approaches to nonlinear classification tasks that are strongly unbalanced, combined with heterogeneous variables and noisy or missing patterns are those that combine numerous traditional data-mining techniques.", "However, it is worth noting that as the competition took place prior to the wide adoption of deep learning.", "Deep learning techniques can improve the ability to detect ad fraud, particularly in terms of feature engineering capacity [6].", "Based on the findings that publishers involved in click frauds tend to receive higher return on investment (ROI) than honest ones, the Viceroi defense method has been proposed [22].", "It consists of both the offline and the online components.", "The offline component exams click logs across a range of periods to filter out fraudulent clicks as well as geographic areas where the distribution of revenue per user are abnormal.", "The online component determines whether a specific click belongs to the abnormal region, based on the aforementioned considerations.", "In [21], the authors analysed a preventive method used to identify click frauds by ad exchange networks based on a variety of information from a unique publisher website (e.g., mouse movements), and added fake ads to the website to elicit malicious participants' clicking data.", "Similar to adopting the Bayesian method to handle false positive and false negative cases in data analytics, [85] proposed a Bayesian-based approach to detect fraud from click streams.", "In cases where prior ground truth information is unavailable, click fraud detection is more challenging [8].", "In [89], ads obtained by using both static and behavior analysis are used to analyse fraudulent behavhours.", "These ads are first divided into nine features, and then inputted to a Support Vector Machine (SVM) to identify malicious ones inserted by publishers.", "In addition, [1] proposed a malvertising detection system - MadTracer - which can be incorporated into ad exchange networks.", "MadTracer actively crawls information about the AIRTB process and utilizes a decision tree-based method to automatically create a series of fraud detection criteria.", "Existing data analytics-based AIRTB defense methods tend to be difficult to deploy.", "Moreover, as these methods are generally developed based on known 
attacks, they are unable to detect new attacks.", "3) Executable Analysis-based Methods.", "In AIRTB mobile advertising, third-party libraries are required.", "However, since ad libraries are granted the same level of authorization as their hosting applications, this can lead to serious security and privacy issues if these libraries are compromised.", "As shown by [11], security flaws can be found early by analysing the malicious executables utilized in malvertising, mobile advertising libraries, and the mobile applications.", "In [40], the authors developed a system which can detect various types of risks in AIRTB (e.g., gathering sensitive user data, retrieving code from the web).", "Their findings suggest that mobile application stores should impose stricter rules on applications that have embedded ad libraries.", "In [19], the authors developed an analysis tool - MAdFraud - to detect ad fraud by concurrently executing a number of applications in emulators.", "It performs detection in three steps: 1) constructing HTTP request trees, 2) recognizing ad request pages via machine learning, and 3) detecting clicks in the constructed trees based on the adopted heuristics.", "In [66], DECAF was proposed to automatically detect multiple types of ad placement frauds in Windows-based mobile platforms.", "It focuses on the user interface (UI) state transition graph and exploits automated application navigation and optimizations to scan a large number of visual elements in a short time, and determine whether ads within a certain application are in violation of predefined standards governing ad placement and presentation.", "In addition, to study the malware executables adopted in malvertising, [96] proposed an automated framework which can simulate suspicious browsing activities based on over 800 real-world malicious executables.", "The majority of the approaches in this group are susceptible to attacks such as click-farms and can be rendered ineffective by sophisticated ad frauds (e.g., botnet ad frauds).", "In addition, the performance of such methods often depends on the tuning of key parameters, which makes them difficult to deploy.", "4) Theoretical Analysis Methods.", "Theoretical analysis of the security of AIRTB can offer useful ideas for building practical solutions.", "Game theoretic approaches are widely adopted in this area to study the interactions between defenders and attackers as well as other stakeholders in an AIRTB auctioning ecosystem.", "Many believed that ad networks (e.g., Google) would lose money if they compensated advertisers for fraudulent clicks, which implies that there is no financial motivation for ad networks to combat fraud.", "However, analysis in [74] found that ad networks would be worse off in the long term if frauds were left unchallenged.", "In [111], a game theoretic model was proposed to study the botnet-driven ad fraud issue.", "It reveals that, in certain cases, ad networks are unable to resolve the issue of ad fraud on their own and need to incur additional costs to elicit help from trusted third parties.", "In [25], the authors examined a similar issue with a more sophisticated economic model - the Hotelling Competition-based Game-theoretic model - which is capable of taking into account a wider range of variables.", "[44], [45] leverage game theoretic modelling to cope with malvertising.", "In these two works, the malvertiser (i.e., the attacker) and the ad network (i.e., the defender) are players of a Bayesian game, since the ad network only has partial 
knowledge of whether it is dealing with a normal advertiser or an attacker.", "Although these works shed light on the motivations and trade-offs of attackers and defenders, they generally lack experimental evaluation results to investigate the realism of the findings in practice, making them difficult to apply." ], [ "Online Methods", "Online approaches to defending AIRTB can be divided into four categories: 1) Client-Centric Methods, 2) Client & Server Cooperative Methods, 3) Network-Centric Methods, and 4) Adblocking and Anti-Adblocking methods.", "1) Client-Centric Methods.", "As the name suggests, client-centric methods attempt to address security threats at the client side, either web browsers or mobile apps.", "Since the execution settings as well as integration techniques for these two types of clients are significantly different, researchers have proposed different methods for them.", "(1) For Web Browser Clients. Tripwire [91] is regarded as the milestone client-side defense solution for AIRTB in web browser environments, detecting changes made to web pages delivered over HTTP.", "It was considered a competitor to HTTPS in the advertising context, since it is less expensive.", "One drawback of Tripwire is that it cannot cope with a large number of attacks taking place simultaneously at the endpoint.", "Well-crafted malicious code as well as well-crafted browser extensions can still circumvent Tripwire to manipulate the target webpages.", "In addition, Tripwire suffers from the lack of a reliable channel for communication.", "Consequently, adversaries with enough access privileges can remove the code for integrity verification and fake legitimate responses.", "In [110], the authors proposed a defense mechanism based on authenticated hash-chains.", "The fundamental operation of this method is the computation of hash values of web pages.", "It performed well initially on static webpages.", "Nevertheless, as websites become increasingly interactive, its effectiveness drops.", "Today, to assess the integrity of webpages and web applications, we must calculate the hash values of the document object model (DOM) tree provided by the corresponding browser.", "In [123], the authors propose various test methods which can filter out bots.", "For example, in order to test mouse events, functionality and behavior of browsers, the authors develop JavaScript snippets.", "These tests are then used to examine the client ad requests.", "AdSentry [24], a browser-based defense mechanism, targets JavaScript-based ads to protect website users from attacks such as malvertising.", "It uses a shadow JavaScript engine to entirely mediate the ad script's access to the webpage (including its DOM) without affecting other ad-related functions.", "This enables flexible regulation of ad script behaviours.", "AdSentry has a policy enforcer that enables end users and the publishers to customize the access permissions for ads.", "For end users, AdSentry can employ adblockers to automatically detect advertisements and enclose them in a specialized JavaScript wrapper.", "For the publishers, if the advertisements are encapsulated by unique JavaScript variables, their execution is confined to the shadow JavaScript engine.", "(2) For Mobile App Clients. Lack of sufficient oversight might enable legitimate advertising service providers to exploit mobile ad libraries to launch attacks on AIRTB from the inside.", "To deal with this problem, [86] proposed AdDroid to separate out privileges related to ad libraries in Android 
applications.", "Specifically, AdDroid grants application developers an unique API for advertising.", "As a result, API requests related to advertising do not receive the same permission as the mobile app itself.", "In [62], a novel verifiable mobile ad framework - AdAttester - was developed based on the ARM TrustZone technique.", "There are two main types of security primitives included in AdAttester, unforgettable clicks and verifiable displays, both of which are implemented based on the ARM TrustZone hardware root of trust in order to collect proofs that are attached to ad requests for attestation made to ad providers.", "In [49], FCFraud was proposed to target click fraud at the operating system level.", "It consists of a Linux kernel component which builds HTTP request trees from domain names that are available to the public and associated with advertising.", "Based on this, it monitors hardware activities such as clicks.", "If an extrapolated click from the built trees is verified to be not set off by the hardware, the corresponding request is detected as click fraud.", "AdSherlock [13] is a similar approach based on the ad request tree model.", "In [101], the authors proposed a machine learning-based system to distinguish fake clicks from the legitimate ones.", "It employs a classifier, which is trained using the motion sensor signals from mobile devices.", "Most existing client-centric methods are faced with one main issue: clients are vulnerable to hacking.", "Once a client is hacked, the security countermeasures within the client are rendered ineffective.", "As such, defense techniques that utilize both the client and the server have been proposed to get around this limitation.", "2) Client & Sever Cooperative Methods.", "In [42], one specific type of baiting ads named bluff ads are developed to be recognizable and clicked by bots or inadequately skilled click farm actors.", "The clicks on bluff ads are regarded as fraudulent by the server-side component.", "To make HTTPS more effective in scenarios like caching of Content Delivery Network (CDN), [102] proposed a new form of HTTP protocol with integrity, named HTTPi.", "In order to accomplish this, new modules must be added to web servers as well as web browsers.", "This approach can reduce ad concerns related to hacking.", "To cope with attacks on the endpoints, [104] proposed a framework for client-server transaction fingerprinting.", "To verify true clicks (i.e., only those that can be verified as legitimate), [52] put forth a new scheme based on requests being verified by client-based cryptography attestations.", "In [12], a similar approach was proposed with the goal of improving transparency from the viewpoint of the advertisers.", "Specifically, it gathers the pertinent data related to each impression (i.e., the URL in which the impression is delivered, the User-Agent which accepts such impression, and interactions between the user and the ad impression like clicks on the ad and mouse movements), and then transmits them to a central server.", "In [51], the authors envisaged gathering and inputting data related to user mouse movements into machine learning frameworks to aid ad fraud detection.", "To some extent, this is a useful technique for telling bots and people apart.", "Nevertheless, its applicability is still limited as currently: 1) the size of user profile data which cover the mouse movements is significantly larger than that of those which do not contain mouse movements; and 2) the data are generally not 
well-organized.", "In addition, by adding random noise to the automated mouse movements, bots can still trick machine learning algorithms into classifying them as humans.", "In [2], the authors developed a novel framework to identify mobile click bots.", "It consists of a client-side component and a server-side component.", "The former is used to gather and screen all events created by mobile clients, while the latter is used to analyse incoming data and filter out bots.", "Current methods in this category tend to focus on the security of only one participant, leaving the other ones still vulnerable to attacks.", "3) Network-Centric Methods.", "Through investigation of network edge traffic, network monitoring tools (e.g., intrusion detection systems (IDSs)) can detect fraudulent and malvertising traffic.", "As such, there have been works taking advantage of IDSs to combat malvertising in AIRTB.", "In malvertising, attackers try to avoid having their web-based exploit kit services banned by hiding them.", "As a result, more sophisticated malvertising traffic detection methods are required.", "In order to contaminate client browsers, a web-based attack needs to redirect a web browser to the corresponding landing page, retrieve exploit files, and download infected code.", "Based on the tree-like structure of web requests and the classification of data using structural information, [1] proposed a method to identify malicious ad traffic.", "In particular, information retrieval methods are first used to create an index of harmful tree samples.", "Then, a subtree similarity search algorithm is adopted to identify HTTP flows associated with the malicious exploit kits.", "It is worth noting that these approaches can be expanded and used by large organizations like Internet Service Providers (ISPs) to identify further advertising-related problems, such as ad fraud.", "However, there are several open challenges to be solved.", "On the one hand, these approaches need to be made more efficient to handle considerably higher levels of traffic.", "On the other hand, ISPs might require financial incentives to participate in these approaches.", "4) Adblocking and Anti-Adblocking.", "Adblockers can stop multiple types of malvertising.", "Nevertheless, adblocking can be disadvantageous for publishers whose revenues depend on advertising.", "To maintain a steady stream of income, publishers can establish rules which restrict access to services or contents to people who are prepared to view the ads.", "Such decisions are based on economic considerations and have been analysed in [90] using game theoretic modelling.", "In essence, an adblocker functions as a web browser extension and has greater access rights than anti-adblocking scripts, which typically execute within the JavaScript engine sandbox.", "Thus, adblockers ultimately hold a technical advantage in the rivalry between adblocking and anti-adblocking.", "Certainly, the rising popularity of adblocking can be perceived as a sign of trouble for the AIRTB ecosystem.", "However, from a technical standpoint, adblocking and anti-adblocking techniques can help enhance the security of AIRTB systems.", "As reported in [78], [46], plenty of publishers have implemented methods and techniques to identify and disable adblockers.", "The fundamental approach is through filtering.", "Firstly, a list is constructed based on feedback from the general public.", "Then, adblockers block web traffic based on this
blacklist (a minimal code sketch of this filter-list mechanism is given at the end of this section).", "Some websites on this blacklist might be equipped with anti-adblockers.", "To combat these anti-adblockers, some approaches have been developed to improve existing adblocking methods.", "In [82], JavaScript source code undergoes static program analysis in order to identify scripts that display ads.", "According to [129], more than 30% of the Alexa top 10,000 websites have been equipped with anti-adblocking scripts.", "The authors of [50] proposed to aggregate information from various sources (including JavaScript, HTML, and HTTP) into a machine learning framework, and adopt AI algorithms to construct a filtering list to detect anti-adblockers.", "In [50], the authors developed a scheme to prevent ads from being served via mobile apps.", "It consists of a machine learning classifier that rejects ads using information from packets acquired by intercepting the network interface of the device." ], [ "Evaluation Metrics", "Establishing a collection of performance evaluation measures is crucial for the long-term development of security research for AIRTB as it allows for the unbiased comparison of the proposed methods.", "In this section, we discuss the benchmark evaluation metrics used in state-of-the-art works.", "Accuracy: This is the most widely used evaluation metric.", "In current works, accuracy takes various forms, such as the number of recognised malicious attackers [62], precision [51], recall [49], the false positive rate [66], and the false negative rate [123].", "Efficiency: Efficiency is another widely adopted evaluation metric.", "Some works measure the overhead caused by the deployed mitigation strategies, such as processing time [40], the overhead on memory and CPU [100], and the overhead on event throughput (e.g., clicks) [100].", "It is worth noting that efficiency is not used to directly evaluate the effectiveness of combating malicious attacks, but to evaluate deployment complexity." ], [ "Summary", "Attack detection in AIRTB is an active area of research because malicious actors are constantly coming up with novel attacks.", "Thus, the most important factor in making the entire AIRTB ecosystem secure is the real-time detection of attacks.", "Due to the rapid expansion of AIRTB to meet the demands of publishers and advertisers, the attack surface of AIRTB is also becoming larger.", "Although numerous efforts are being made to mitigate attacks against AIRTB, the security requirements of all stakeholders still cannot be met by any existing approach in a well-integrated manner.", "Nevertheless, several areas of research are gaining traction: Data analytics: Approaches for identifying ad fraud and bots and mitigating malvertising can take advantage of the vast amount of data created in the ad placement process.", "Security improvements: Limiting the permissions of mobile apps and employing adblocking are effective solutions for improving the security of the AIRTB ad delivery process.", "Theoretical analysis: Game theoretic approaches are frequently adopted to examine issues such as participants' motivations, and resource trade-offs in attacks and defenses."
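To make the filter-list mechanism discussed under Adblocking and Anti-Adblocking above concrete, the following minimal sketch shows how an adblocker might match requests against a community-built blacklist. The rule syntax (plain substrings and "||domain" anchors) is a simplified assumption and does not reproduce the full grammar of real lists such as EasyList.

```python
# Minimal sketch of blacklist-based request filtering, as used by adblockers.
# The rule format here (plain substrings and "||domain" anchors) is a
# simplified assumption, not the full syntax of real filter lists.
from urllib.parse import urlparse

class FilterList:
    def __init__(self, rules):
        self.domain_rules = set()   # rules of the form "||ads.example.com"
        self.substring_rules = []   # plain substring rules
        for rule in rules:
            rule = rule.strip()
            if not rule or rule.startswith("!"):  # "!" lines are comments
                continue
            if rule.startswith("||"):
                self.domain_rules.add(rule[2:].lower())
            else:
                self.substring_rules.append(rule.lower())

    def should_block(self, url):
        url_l = url.lower()
        host = urlparse(url_l).hostname or ""
        # Block if the host or any parent domain is blacklisted.
        parts = host.split(".")
        for i in range(len(parts) - 1):
            if ".".join(parts[i:]) in self.domain_rules:
                return True
        # Block if any substring rule matches the full URL.
        return any(s in url_l for s in self.substring_rules)

rules = ["! community-contributed rules", "||tracker.example.net", "/ad_banner/"]
flt = FilterList(rules)
print(flt.should_block("http://cdn.tracker.example.net/pixel.gif"))  # True
print(flt.should_block("http://news.example.com/article"))           # False
```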
], [ "Robustness", "Fig.", "REF provides an overview of the roadmap of research related to robustness in AIRTB.", "Most such works focus on helping AIRTB approaches to be robust against various types of biases.", "According to [67], biases related to AI research can be divided into three categories: 1) discriminatory bias, 2) productive bias, and 3) erroneous bias.", "Discriminatory biases manifest as algorithmic unfairness towards groups or individuals (e.g., the creation of discriminating content or subpar performance for some users) [99].", "Almost all machine learning algorithms are faced with the productive bias problem [43].", "According to the “no free lunch theory” [119], only predictive models that are biased towards specific distributions can outperform them during modeling.", "In general, productive bias is introduced through the presumptions about the learning tasks (via loss function design), the distribution assumption, and the optimization methods.", "Erroneous biases are a specific type of systematic errors caused by unrealistic assumptions.", "For instance, it is commonly assumed in AI that there is no difference between the distribution of the real data and that of the training data.", "Nevertheless, due to reasons like sample selection bias [71], [72], the training data used might not accurately reflect the distribution of the real data.", "Therefore, if the assumption is incorrect, the learnt model may perform poorly on the test data.", "In the field of AIRTB, addressing erroneous biases has been the main focus of robustness research.", "Thus, in the following parts, we use the teams “bias” and “erroneous bias” interchangeably.", "Current robust AIRTB research focuses on addressing the following types of erroneous biases: 1) winning price estimation biases, 2) sample selection bias (SSB), and 3) bid shading biases in closed (i.e., censored) first price auctions." 
], [ "Robustness against Winning Probability Estimation Biases", "In second price auction-based AIRTB, when a bid request is received from the ad exchange, DSPs need to calculate a bid price as the bid response to join the corresponding auction.", "It is important for DSPs to estimate the winning prices in order to optimize their prices [112].", "However, winning price estimation is complicated with the data censorship issue.", "On the one hand, with second price auction, a DSP only knows the winning price if it wins in the auction.", "Thus, the winning price observed by each DSP is right-censored: the DSP only knows the winning prices are higher than its bids when it loses the auctions, but not the exact values [112].", "Moreover, in second price auctions, if the reserve price of a specific bid request set by the corresponding SSP is higher than the second highest bid price, but lower than the highest one, the winning DSP needs to pay the reserve price for the bid request instead of the market price.", "In this sense, the winning price observed by each DSP is left-censored: it only knows the upper bound of the market price.", "Thus, to get an accurate estimation model for winning probability, these biases caused by the data censorship issue must be addressed.", "Existing methods for mitigating winning probability estimation biases can be divided into three main categories: 1) Tobit-based methods, 2) Kaplan–Meier methods, and 3) auxiliary task-based methods.", "This type of methods explicitly divides the loss functions into two parts, one for the winning records with observable winning prices and the other for the losing records with censored winning prices.", "These two sub-loss functions are then optimized jointly.", "Wu et al.", "[121] first proposed to incorporate the censored regression model [41] into AIRTB to estimate the winning rate for the target DSP on a given bid price.", "In this work, linear regression is leveraged for bid prices with observable winning prices, and censored regression is leveraged for bid prices with censored winning prices.", "The maximum likelihood procedure is used to predict the winning probability of a given bid price.", "Nevertheless, [121] assumes that the winning prices follow a normal distribution, which might not always be true in practice.", "To address this problem, [130] replaced the normal distribution assumption with the gamma distribution assumption, and developed a gamma-based and regularization-aware censored linear regression model.", "However, this approach led to further discussions that gamma distributions might also not be realistic.", "As such, [120] explored multiple distributions and combined them with the deep leaning models to investigate the corresponding prediction qualities.", "Ghosh et al.", "[38] relaxed the assumptions on the distribution of winning probability by adopting a mixture density censored network to learn smooth winning price distributions.", "Apart from the disadvantage caused by the strong assumptions, the aforementioned methods are also unable to estimate the possible ad cost for each specific bid price as they can only perform point estimations of market prices.", "To address this problem, [38] leverages the improved Censored Regression.", "To address the same problem, [92] proposed a deep landscape forecasting model taking advantage of recurrent neural networks (RNNs) to model the conditional winning probability with respect to any given bid price flexibly without imposing any assumptions on the underlying 
distribution." ], [ "Kaplan–Meier Methods", "Methods falling into this category leverage the Kaplan–Meier estimation approach to address the censored data problem in winning probability estimation.", "Kaplan–Meier estimation is a well-known survival analysis approach used for forecasting a patient's chance of survival after a certain treatment [31], [39], which is similar to the winning probability estimation task.", "In [127], the authors analysed the striking similarities between survival analysis and AIRTB winning probability estimation, and leveraged the non-parametric Kaplan–Meier Product-Limit approach [54] to fit the market price distribution.", "However, this method relies on the counting-based statistics of given sample clusters, making it unable to accurately predict the winning probability for individual bid requests.", "To address this problem, [113] proposed to learn the Kaplan–Meier estimation for individual bid requests by predicting two probabilities: 1) the probability of losing a given auction at a certain bid price, and 2) the probability of winning a given auction at a specific market price.", "In most cases, the sequential patterns shown in the features over the price space turn out to be important for winning probability estimation.", "The deep landscape forecasting model in [92] combines survival analysis for dealing with censorship with RNNs to model the sequential patterns for probability distribution forecasting." ], [ "Auxiliary Task-based Methods", "Yang et al.", "[124] address the winning probability estimation problem by combining it with the utility estimation problem into a multi-task learning problem.", "The resulting approach is capable of solving the AIRTB winning probability estimation problem as a multi-category classification task.", "Despite the performance improvement, this model is faced with the problem of tuning the hyperparameters which are used to weight the importance of the winning probability estimation task versus the utility estimation task."
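The Kaplan–Meier product-limit idea described above can be illustrated with a short sketch: each lost auction only reveals that the market price exceeds the bid (a right-censored observation), and the estimator combines won and lost auctions into a win-probability curve. The data layout and helper names below are illustrative assumptions, not the implementation of the cited works.

```python
# A minimal sketch of the Kaplan-Meier product-limit estimator applied to
# censored winning-price data. The toy data below are illustrative.
from collections import Counter

def km_win_probability(observations):
    """observations: list of (price, won). If won, `price` is the observed
    market price (an event); if lost, we only know the market price exceeds
    the DSP's bid `price` (right-censored).
    Returns a function w(b) ~ P(market price < b), the win probability at bid b."""
    events = Counter(p for p, won in observations if won)
    censored = Counter(p for p, won in observations if not won)
    times = sorted(set(events) | set(censored))
    n_at_risk = len(observations)
    survival = []  # pairs (price, S(price)) with S = P(market price > price)
    s = 1.0
    for t in times:
        d = events[t]
        s *= (1.0 - d / n_at_risk)      # product-limit update at each event
        survival.append((t, s))
        n_at_risk -= d + censored[t]    # remove events and censored bids

    def win_prob(b):
        s_b = 1.0
        for t, s_t in survival:
            if t < b:
                s_b = s_t
            else:
                break
        return 1.0 - s_b
    return win_prob

# Toy data: (price, won). Lost auctions only reveal a lower bound on price.
data = [(1.0, True), (1.5, True), (2.0, False), (2.5, True), (3.0, False)]
w = km_win_probability(data)
print(round(w(2.6), 3))  # estimated probability of winning with a bid of 2.6
```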
], [ "Evaluation Metrics", "The evaluation metrics commonly adopted to measure the robustness of AIRTB winning probability estimation methods are summarized as follows: Mean Squared Error (MSE) : In [121], [130], [113], MSE between the ground truth winning prices and the estimated ones is adopted to evaluate the effectiveness of the proposed methods: $\\begin{aligned}MSE = \\frac{1}{N}\\sum _{i=1}^N (y_i - \\hat{y}_i)^2,\\end{aligned}$ where $N$ denotes the number of the data samples.", "$y_i$ and $\\hat{y}_i$ are the ground truth value and the estimated value of the $i$ -th sample, respectively.", "The smaller the MSE, the better the performance.", "Mean Absolute Error (MAE) : In [120], MAE is adopted as one of the evaluation metrics: $\\begin{aligned}MAE = \\frac{1}{N}\\sum _{i=1}^N ||y_i - \\hat{y}_i||.\\end{aligned}$ The smaller the MAE, the better the performance.", "Log-likelihood (LL): It is the log of the density function (or the probability): $\\begin{aligned}LL = \\sum _{i=1}^N \\log (f_{x_i}(y_i|\\theta )),\\end{aligned}$ where $x_i$ is the observed data.", "$f$ denotes the estimation model with parameters $\\theta $ .", "It assesses how likely it is to observe the data using the model.", "The likelihood that the data are generated by the model increases with the log-likelihood.", "However, compared with MSE or MAE, log-likelihood performs worse when the values are small.", "Average Negative Log Probability (ANLP): It measures the probabilities of the bid requests used for testing occur alongside the corresponding market prices: $\\begin{aligned}ANLP = -\\frac{1}{N}\\sum _{i=1}^N \\log f_{x_i}(y_i|x_i).\\end{aligned}$ Concordance index (C-index): C-index assesses how accurately a model can order samples based on their market prices.", "Pearson correlation: It is adopted to measure the correlation between the estimated results and the ground truth values, and is defined as: $\\begin{aligned}COR(Y, \\hat{Y}) = \\frac{\\sum _{i=1}^N(y_i-\\bar{y})(\\hat{y}_i-\\bar{\\hat{y}})}{\\sqrt{\\sum _{i=1}^N(y_i-\\bar{y})^2 \\sum _{i=1}^N(\\hat{y}_i-\\bar{\\hat{y}})^2}}.\\end{aligned}$ The larger the correlation, the better the performance.", "KL-divergence (KL): KL between the ground truth distribution $q_Y$ and the estimated distribution $q_{\\hat{Y}}$ is formulated as: $\\begin{aligned}KL(q_Y||q_{\\hat{Y}}) = - \\sum _x q_Y \\log \\frac{q_{\\hat{Y}}}{q_Y}.\\end{aligned}$ The smaller the distance, the better the performance.", "1-Wasserstein distance (WD): WD between the ground truth distribution $q_Y$ and the estimated distribution $q_{\\hat{Y}}$ is formulated as: $\\begin{aligned}WD(q_Y, q_{\\hat{Y}}) = \\sum _x |q_Y - q_{\\hat{Y}}|.\\end{aligned}$ The smaller the distance, the better the performance." 
], [ "Summary", "Generally, most AIRTB debiasing methods based on Tobit models tend to be parametric ones, and need to make assumptions on the distributions of the winning probability.", "Non-parametric methods based on Kaplan–Meier estimation learn the winning probability distribution without making any assumptions.", "Nevertheless, the majority of the approaches falling into this category need to cluster the data in order to leverage heuristics to boost model accuracy.", "The clustering operation, to some extent, limits the applicability of these methods to dynamic real world AIRTB data.", "In addition, methods based on Kaplan–Meier are only able to make coarse-grained predictions using data segments.", "The latest trend is to frame the winning probability from the perspective of multi-tasking learning by incorporating other auxiliary tasks.", "However, the performance of such approaches is still sensitive to hyperparameters, and requires tedious parameter tuning.", "Figure: Illustration of the sample selection bias (SSB) in RTB auction.", "The training space only contains winning impression data, whereas the inference space includes all full-volume bid request data ." ], [ "Robustness against Utility Estimation Biases", "AIRTB is faced with the problem of SSB [125], which refers to the systematic mismatch in data distributions between the training space and the inference space [118], [124].", "Existing models for AIRTB utility estimation mostly need labeled data for supervised learning.", "In practice, the labels on whether the users responded or not (e.g., clicks and conversions) and the market prices for bid requests are only available if the advertiser wins in the corresponding auctions [127], [124].", "As shown in Figure REF , most existing utility estimation approaches are learned only on clicked samples (including click-through rate, i.e., CTR and conversion rate, i.e., CVR), while drawing conclusions about the entire space with all impression samples.", "In addition, the clicked samples and the converted samples only account for a small fraction of the impression samples.", "They are biased due to user actions.", "Thus, the SSB problem greatly reduces the effectiveness of utility estimation models.", "Compared with CTR, CVR reflects user preferences more strongly (through subscription of service, registration, installation of software, etc.", "), and is more relevant to advertisers.", "Thus, in this part, we focus on AIRTB approaches for addressing SSB in CVR prediction.", "Existing works for enhancing the robustness of AIRTB against the SSB issue in CVR prediction can be divided into three main categories: 1) Sampling-based Methods, 2) Sequential Pattern-based Methods, and 3) Auxiliary Task-based Methods." ], [ "Sampling-based Methods", "This type of methods attempt to solve the SSB problem in CVR prediction by leveraging sampling approaches.", "Through the introduction of sampling, the models trained are pulled to fit the true distribution of the entire space instead of just the training space.", "In particular, [127] addresses the SSB issue in CVR prediction by using rejection sampling to fit the true underlying distribution from observations.", "However, this method is susceptible to numerical instability when dividing the rejection probability to weight the samples." 
], [ "Sequential Pattern-based Methods", "When being shown an ad, the potential responses by the target user form a sequence, which is generally in the form of impression $\\rightarrow $ click $\\rightarrow $ conversion [70].", "Methods used to mitigate SSB in CVR prediction under this category attempt to fully exploit such sequential patterns of user responses.", "The first such work is [70].", "It models CVR with all samples via training two auxiliary tasks (i.e., post-view click-through rate and post-view click-through conversion rate).", "It is extended in [118] by modeling two auxiliary actions (i.e., disjoint purchase-related Deterministic Action (DAction) and Other Action (OAction)), which are injected between click and purchase (i.e., conversion).", "Wang et al.", "[115] further considered the problem of delayed feedback and proposed ESDF based on neural networks to model the CVR prediction problem from the entire space perspective through combing the benefit of the time delay factor as well as the user sequential behavior pattern.", "The CTR prediction task is involved in all these studies, which is the task preceding CVR prediction.", "Intuitively, representations learned from one task may be helpful for the other [79].", "Therefore, [116] takes advantage of the interplay of representation learning across multiple tasks to perform neural architecture search to learn the best connections between task-specific layer-wise representations.", "The proposed AutoHERI frames the CVR prediction in the entire space with automated hierarchical representation integration.", "However, all these methods only focus on macro-level behaviors, which help understand subsequent purchase patterns at the granularity of the item level.", "They overlook the more frequently occuring fine-grained behaviors (e.g., clicks) on detailed elements of items (e.g., user comments, pictures, videos), which are referred to as micro-level behaviors.", "To some extend, insight into micro-level behaviors can enhance understanding of future macro-level behaviors.", "As such, [7] applied Purchase-related Micro-behavior Graph (PMG) to describe the users' micro-level behaviors to transform the CVR prediction problem into a graph classification problem.", "Work [117] extended [70] and [118] by constructing the complete user sequential behavior graph, where the micro-level and macro-leve behaviors are hierarchically encapsulated as the one-hop and two-hop post-click nodes in a unified framework, respectively.", "Inspired by studies used to eliminate bias in recommender systems, [126] frames the SSB problem in CVR prediction from a causal perspective, and accounts for the causes of missing not at random [65].", "They proposed two causal estimators for CVR prediction, which adapt the missing not at random mechanism to be trained on a perfect dataset where all exposed items are clicked by users." ], [ "Auxiliary Task-based Methods", "In [124], all unlabeled data generated by the losing bids are leveraged to estimate CTR to address the SSB issue.", "They proposed the Multi-task Advertising Estimator (MTAE), a multi-task learning framework for CTR prediction and market price modeling.", "MTAE takes advantage of the sufficient bid prices of the full-volume bid requests and incorporates the auxiliary task of estimating the winning probability into the model for unbiased learning." 
], [ "Evaluation Metrics", " Area Under Curve (AUC): This is one of the most widely used metrics for utility estimation model evaluation.", "It reflects the ranking ability and is formulated as: $\\begin{aligned}AUC = \\frac{1}{|S_{+}||S_{-}|} \\sum _{x^+ \\in S_+} \\sum _{x^{-} \\in S_{-}} I(\\phi (x^+)>\\phi (x^{-}),\\end{aligned}$ where $S_{+}$ and $S_{-}$ are the positive samples and negative samples, respectively.", "And $|\\cdot |$ denotes the number of, $\\phi (\\cdot )$ and $I(\\cdot )$ are the estimation function and the indicator function, respectively.", "Group Area Under Curve (GAUC): This metric is computed as follows.", "Firstly, partition the test data samples into several groups based on the unique user ID.", "Secondly, calculate the AUC in each group.", "Then, average the weighted AUC.", "This process could be formulated as: $\\begin{aligned}GAUC = \\frac{\\sum _u w_u AUC_u}{\\sum _u w_u},\\end{aligned}$ where $w_u$ is the weight for user $u$ and is set as 1 in utility estimation, and $AUC_u$ is user $u$ 's AUC.", "F1 Score (F1): It is formulated as: $\\begin{aligned}F1 = \\frac{TP}{TP+\\frac{1}{2}(FP +FN)},\\end{aligned}$ where $FN$ , $FP$ , and $TP$ are the number of false negative, false positive and true positive estimations.", "Mean Squared Error (MSE): MSE is adoped in [7] to evaluate the effectiveness of their utility estimation function, which is defined in Eq.", "(REF ).", "Negative Log-likelihood: Its formulation is defined as adding a negative symbol to that of Log-likelihood defined in Eq.", "(REF )." ], [ "Summary", "Sampling-based methods are faced with the numerical instability problem when introducing rejection sampling.", "For those based on sequential patterns of user response actions are concerned, despite performance improvement, the majority of them are still restricted to impression-level inference spaces.", "In addition, there is a lack of theoretical support about them being unbiased estimators.", "Similar to Sec.", "REF , auxiliary task-based methods face the problem of tuning hyperparameters, which is tedious." 
], [ "Robustness against Bid Shading Biases", "Before 2017, the second-price auction mechanism, in which the winner pays the second highest bid price to the supply side platforms, was the dominant type of AIRTB auction mechanisms.", "However, almost all the mainstream SSPs and ad exchange networks (e.g., OpenX, Rubicon Project, Pubmatic, Index Exchange, AppNexus) are starting to implement first-price auctions [97], [128].", "Two major reasons for moving away from second-price auction are: 1) as the bidders always pay exactly what they bid, first-price auction increases the transparency and accountability for the bidders [16], [37], [93], [94]; and 2) the widely used and favored techniques of Header Bidding are incompatible with the unaltered second-price auction mechanism [5].", "DSPs can estimate their potential competitors' bid prices, and take their behaviors into consideration in order to precisely design the optimal bidding strategies in the first-price auction mechanism.", "If the DSP knows the competitors' bid prices in advance, providing the bid price slightly higher than the highest bid price from all competitors would be the optimal bidding strategy (i.e., winning the corresponding auction with the lowest bid price).", "However, this is impossible in practice, where the DSP needs to predict the minimum winning price precisely, and attempts to lower their original bid prices which they had set for the second-price auction mechanism, that means shading the true value of the inventory in accordance.", "Such a process is named as bid shading, which has been utilized in auctions across various industries but is new to online advertising, especially to AIRTB.", "In first-price auction, the minimum winning price is either the highest bid price submitted by competing advertisers or the floor price.", "However, whether or not to announce the minimum winning price after the auction is determined by the SSP.", "If the minimum winning price is announced, the first-price auction mechanism is referred to as open (or non-censored); otherwise, it is referred to as closed (or censored).", "If the auction is closed, DSPs need to estimate the winning prices.", "This is similar to winning probability estimation in second-price auction, and is also biased.", "To the best of our knowledge, only one study [128] deals with the censored problem in first-price auction-based AIRTB.", "Specifically, they first utilize the formulation in [83] to formulate the cumulative distribution function of the minimum winning price, enhancing the flexibility to select distributions.", "Then, they obtain the parameters of the cumulative distribution function through minimizing the loss between the ground truth and the predicted likelihood of winning." 
], [ "Fairness", "Fairness, which is defined as “the absence of any prejudice or favoritism towards an individual or a group based on their intrinsic or acquired traits in the context of decision making” [98], [72], [67] is a crucial issue in AIRTB systems.", "Ad creatives expect the AIRTB system offers fair exposure opportunities to them, avoiding the Matthew effect [64].", "Being fair might also attract advertisers of niche ads or items, which can increase the variety and originality of the ads or products in an AIRTB system.", "From the perspective of the AIRTB ecosystem, fairness is advantageous over the long run.", "For instance, the unfair trading ecosystem might give more exposure opportunities to advertisers with short-lived popular items, leading to loss of users over time.", "Similarly, it might provide some niche providers with few impressions.", "Niche advertisers may have a propensity to leave unfair AIRTB systems as a result of the negative feedbacks, reducing the diversity of ads offered to users.", "Fairness can help improve users' loyalty to the AIRTB systems.", "In a nutshell, improving fairness is of vital importance for AIRTB systems.", "Depending on the focus, fairness can be separated into two main categories: 1) process fairness and 2) outcome fairness [114].", "Process fairness (a.k.a.", "procedural justice [61]) requires fair allocation in the process [61], [76], while outcome fairness (a.k.a.", "distributive justice [61]) requires fair outcomes as a result of fair allocation [29], [61].", "In the following part of this section, we pay more attention to outcome fairness as most of the state-of-the-art studies in AIRTB are anchored in this category.", "Outcome fairness can be achieved in two main ways: 1) grouped by the goal, and 2) grouped by the concept.", "Depending on the fairness level of the result, outcome fairness can be divided into individual fairness and group fairness.", "According to the fairness concept, outcome fairness includes various sub-categories, from those that have received much attention (e.g., consistent fairness, calibrated fairness) to those only explored by a small number of works (e.g., maximin-shared fairness, Rawlsian maximin fairness, envy-free fairness, counterfactual fairness).", "In order to ensure group fairness, two groups of individuals which have distinct sensitive attributes (e.g., gender, age, ethnicity) must statistically experience similar treatments and receive similar outcomes.", "Fair outcomes for a group of people can be ensured by group fairness.", "However, at the individual level, discrimination can still occur [26].", "Individual fairness requires that outcomes must be equitable for each individual.", "In some contexts, individual fairness means that similar people should be treated equally [26], [9].", "However, there are plenty of alternative ways to define individual-level fairness.", "In order to be clear, following work [114], we refer to a broader meaning of fairness (i.e., the individual level fairness) as individual fairness.", "It is worth noting that compared to individual fairness, group fairness is more complicated since there can be multiple divisions and these divisions may evolve over time, allowing one person to be a member of multiple groups at once [35].", "In addition, individual fairness in scenarios in which each person belongs to a distinct group can be conceptually viewed as a specific example of group fairness.", "Unfair practices can ocurr in AIRTB systems.", "In the following, we first 
discuss unfairness in AIRTB.", "Then, we analyse the reasons behind the unfairness in AIRTB.", "Finally, we discuss existing studies proposed to mitigate unfairness in AIRTB." ], [ "Unfairness in AIRTB Systems", "Unfairness can occur in AIRTB systems both unintentionally and intentionally.", "For example, [20] shows that men were shown more high-paying job ads than women with comparable profiles.", "In addition, [59] experimentally demonstrated that STEM (science, technology, engineering and math) job ads, which are designed to be gender neutral, are displayed to fewer women than men across almost all major platforms.", "For instance, on Facebook, a platform where women make up 52% of the user population [109] and are more likely to click on ads, women are far less likely to be shown such ads than men.", "The study found that women are a prized demographic, making them more expensive to advertise to.", "This implies that ads that are meant to be gender-neutral can be delivered in a way that appears discriminatory by AIRTB algorithms that focus on optimizing cost-effectiveness.", "Ali et al.", "[3] explained that this is neither solely an indication of ingrained cultural bias nor a result of the user profiles fed into ad algorithms, but rather the product of competitive spillovers among advertisers.", "Apart from users, advertisers can also be unfairly treated during the AIRTB auction process [108].", "Recently, several research works have devoted considerable effort to investigating the causes of unfairness in AIRTB.", "A number of possible explanations have been found.", "On the one hand, the goal of AIRTB systems is to deliver the right ads to the right users at the right time.", "To achieve this goal, the system creates detailed user profiles and monitors ad performance to learn how users respond to various ads.", "Based on historical data, subsequent ads can be targeted to users more precisely.", "However, during this process, the AIRTB system can unintentionally deliver ads primarily to certain groups of users.", "This is especially troubling when it comes to employment, housing, and credit-related ads because unfairness in these categories can violate certain legislation.", "On the other hand, market forces and financial optimization strongly influence how ads are delivered, as some user groups are bound to be more valuable than others [27], [60], [69], [95].", "Therefore, advertisers on a tight budget are more likely to lose bids for the “valued” customers.", "In this case, if the sensitive attributes of these “valued” customers belong to the protected classes, this can result in unfair ad delivery, even if the advertisers may not intend to exclude these customers." ], [ "Enhancing Fairness in AIRTB Systems", "Fig.", "REF summarizes the methods used to mitigate unfairness in AIRTB systems.", "Existing studies can be divided into two categories: 1) for advertisers, and 2) for users.", "The first category focuses on mitigating ad exchange networks' unfair favoring of certain groups of advertisers.", "The second category focuses on fulfilling restrictions on the percentage of ad impressions that must reach particular groups of users belonging to certain demographics to enhance the fair treatment of users."
], [ "Fairness for Advertisers", "As there can be many advertisers connected to a specific ad exchange network, it is critical for the network to decide on which groups of advertisers to send bid requests to and how to choose winners for bid requests fairly in order to maximize its profit and build trust with advertisers.", "In [108], it has been suggested that this goal can be achieved through basic modifications to the ad auction mechanism that the ad exchange network uses.", "However, this work focuses on providing theoretical models for regulating online ad auctions instead of mitigating unfairness.", "Enhancing fairness is just one of the aspects studied.", "As such, whether such intervention works in existing AIRTB systems still needs to be validated." ], [ "Fairness for Users", "Works falling into this category can be further divided into two groups: 1) methods designed by the entire system, and 2) methods designed by the advertisers.", "Methods designed by the entire system.", "To prevent discriminatory advertisements with respect to sensitive attributes, [14] proposed an optimization-based ad auction framework based on Myerson auction [75], which maximizes the revenue of the ecosystem conditioned on constraints that preclude the development of unintentional discrimination.", "The constraints can be any notion of group fairness described in [15].", "Works [56], [87] extended [14] by adapting it into different auction frameworks and capturing requirements of fairness from data, respectively.", "Celis et al.", "[14] focuses on solving the optimization problem formulated based on algorithmic fairness, while [17] focuses on comparing its performance with the unfair optimum and studying the cost of achieving fairness.", "It is an individual fairness approach (i.e., any two individuals must obtain additively similar allocations from each advertiser if they have been given multiplicatively similar values by all the advertisers).", "They follow the idea of [47] and exam how auction design affects outcome fairness with the assumption that bids of each advertiser are accepted without discrimination.", "Afterwards, they adopt the Inverse Proportional Allocation algorithm to balance social welfare and fairness for a wide-range of value stability conditions.", "Apart from these methods which focus on algorithms design, [48] proposed to adopt auditing to cope with group unfairness based on user features.", "However, this method depends greatly on human intervention and incurs high costs.", "Methods designed by advertisers.", "Works under this category attempt to design bidding and targeting strategies that rectify the unfairness introduced by the AIRTB system from the perspective of the advertisers.", "In [77], the authors designed bidding strategies for advertisers to achieve impression parity across various demographic groups.", "Similarly, in [36], the authors designed targeting strategies for achieving parity in outcomes or conversions across different user groups.", "In both approaches, the requirement of group fairness is formulated as the constraint on the main objective of maximizing advertisers' revenues." ], [ "Evaluation Metrics", "Statistical significance has been adopted to measure AIRTB fairness [48].", "Specifically, the percentages of people belong to different gender groups seeing two different ads are calculated.", "Then, the Z-test with a 95% confidence level is performed to measure the statistical significance of the difference between these two groups." 
], [ "Summary", "It can be observed from Fig.", "REF that research on fair AIRTB techniques is still in its infancy.", "Existing works mostly focus on group fairness, while only two focusing on individual fairness [17], [108].", "Most of them formulate the fairness requirements as the constraints on the main objective function, with only one taking fairness as the goal [48]." ], [ "Promising Future Research Directions", "There are many areas where exciting new research can be carried out towards building a trustworthy AIRTB ecosystem.", "As shown in Table REF , existing studies on trustworthy AIRTB focus on the security, robustness and fairness dimensions of trustworthy AI, leaving other dimensions such as accountability and explainability less well explored.", "In addition, there is still plenty of room for improvement for research on the security, robustness and fairness dimensions.", "In this section, we discuss promising future directions for this emerging interdisciplinary field." ], [ "Demand for Transparency", "One problem of AIRTB systems is that it is difficult for the advertisers to acquire information about the sources of the traffics, leaving room for ad fraud and ad injection.", "Typically, most participants may be hesitant to disclose the specifics of the approaches and techniques used for combating ad frauds.", "Since the advertisers are the sources of revenue and the foundation of the value chain for AIRTB systems, they have great incentive to help maintain the sustainability of the ecosystem.", "Therefore, techniques to help them more transparently assess the effectiveness of their ad campaigns and advertising strategies while guarding against ad frauds are desirable." ], [ "Trade-offs between Stakeholders", "Table REF lists the security requirements of various stakeholders of AIRTB.", "It can be observed that security requirements by different stakeholders may conflict.", "For instance, driven by profit, publishers may be tempted to generate fraudulent ad traffics, which contradicts with advertisers' requirements for a fraud free AIRTB system.", "To build the trustworthy AIRTB system, it is crucial to satisfy security the requirements by different stakeholders simultaneously.", "The research problem of striking the right balance among multiple AIRTB stakeholders' security requirements in a cost-effective manner remains open." ], [ "Standardizing Evaluation Metrics", "To evaluate the effectiveness of methods enhancing the robustness of AIRTB systems, most existing works adopt MSE, ANLP and other conventional metrics which are designed to assess ranking performance.", "However, as shown in [18], these metrics may not be well-suited for evaluating robustness.", "Furthermore, since different works perform evaluation following different metrics, the results are reported inconsistently.", "In order to objectively compare the performance of different approaches in this area of research, more suitable and standardized evaluation metrics need to be proposed." ], [ "Balancing Requirements from Various Stakeholders", "As shown in Table REF , existing studies focus on satisfying the robustness requirements from the advertisers.", "As far as the publishers are concerned, in order for them to build trust with the AIRTB system, they also need to build unbiased models to optimally calculate the reserve price based on the historical auction records.", "It would be useful for future research to look into this area in order to serve the robustness needs of the publishers well." 
], [ "Leveraging Auxiliary Information", "Taking advantage of the wealth of auxiliary information in AIRTB systems can enhance its robustness.", "There have been a few works in recent years showing that biases in recommender systems can be corrected by using user or item attributes.", "Since the ads, advertisers, publishers and users all possess auxiliary information, investigations on how to leverage such information to enhance the robustness of AIRTB systems can be worthwhile." ], [ "Reasoning", "Causal graphs among the stakeholders can be useful for enhancing the robustness of AIRTB systems.", "Reasoning about the occurrence and effect of bias are keys for debiasing.", "Causal graphs can provide a new source of information for this purpose.", "As such, building new reasoning techniques to leverage causal graphs in AIRTB to enhance its robustness against biases can be a promising future research direction." ], [ "New Evaluation Metrics", "As shown in Section , there is only one fairness metric used by existing studies in AIRTB.", "Compared to the diverse notions of fairness studied in this field, this is inadequate.", "As such, new research on designing proper evaluation metrics to compare the fairness of AIRTB approaches is required." ], [ "New Benchmarking Datasets", "Existing methods tend to evaluate their methods using datasets collected by themselves from various advertising platforms, making it impossible to compare the performance in a standardized manner.", "There is a lack of public benchmarking datasets to study the fairness related approaches for AIRTB systems.", "This gap needs to be bridged in order for this field to be further advanced in a sustainable way." ], [ "Tradeoff between Fairness and Performance", "In most cases, improving fairness means the loss in performance in machine learning.", "For example, in the field of recommender systems, the tradeoff between recommendation accuracy and fairness has been well studied [114].", "However, fairness is not necessarily at odds with performance in well-designed systems.", "Particularly, in studies related to classification tasks [58], it is reported that increasing fairness may result in accuracy improvement.", "Therefore, investigating how to enhance fairness while maintaining performance for various stakeholders is crucial for successfully implementing fairness strategies in AIRTB systems." ], [ "Multi-Faceted Fairness", "Most of existing works focus on achieving just one notion of fairness and corresponding goals in AIRTB.", "However, as there are multiple fairness notions [34], [84] and multiple stakeholders involved in AIRTB systems, it is useful to study how multi-faceted fairness can be achieved." ], [ "Causal Inference for Fairness", "An emerging area of research in machine learning is on reducing unfairness through causal inference , [122].", "However, to our best knowledge, there is no study on fairness in AIRTB.", "Bridging this gap can lead to innovative new capabilities in the AIRTB system, enhancing its fairness in an interpretable manner." 
], [ "Accountability", "It is well-known that AIRTB systems have various drawbacks.", "Among them, the most noticeable one is that it can be challenging for the advertisers, publishers, DSPs and SSPs to assess their benefits derived from joining the auctions.", "For example, it is difficult for the advertisers to acquire information about the sources of ad traffics, leaving room for frauds and backroom deals.", "As such, it is necessary to design accountable and verifiable auction mechanisms to build trust with various stakeholders." ], [ "Conclusions", "This paper provides a comprehensive review of the trustworthy AIRTB literature.", "To our best knowledge, this is the only survey on this emerging interdisciplinary area.", "We proposed a multi-tiered taxonomy to analyse trustworthy AIRTB works focusing on security, robustness and fairness.", "Under each topic, we summarize the challenges faced, the key approaches taken as well as the main evaluation metrics adopted to experimentally measure their performance in order to support long-term sustainability of this field.", "Finally, we suggest some promising future research directions that can help enhance trustworthiness of AIRTB systems.", "For this field to move forward, collaboration among researchers and industry practitioners is key.", "We hope that this survey can serve as a useful roadmap towards building trustworthy AIRTB systems of the future." ], [ "Acknowledgments", "This research is supported by the National Research Foundation, Singapore under its AI Singapore Programme (AISG Award No: AISG2-RP-2020-019); the Joint NTU-WeBank Research Centre on Fintech (Award No: NWJ-2020-008); the Nanyang Assistant Professorship (NAP); the RIE 2020 Advanced Manufacturing and Engineering (AME) Programmatic Fund (No.", "A20G8b0102), Singapore; and Future Communications Research & Development Programme (FCP-NTU-RG-2021-014).", "Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of National Research Foundation, Singapore." ] ]
2210.07770
[ [ "Chaos bound and its violation in the torus-like black hole" ], [ "Abstract In this paper, we have studied the variation of the chaos bound in two regions of the torus-like black hole, i.e., the region close to the black hole horizon and the region at a certain distance from the black hole horizon.", "The angular momentum of the particle affects the effective potential and influences the magnitude of the chaotic behavior of the particle.", "Therefore, the angular momentum of particle is important in the study.", "The angular momentum of a particle not only affects the particle equilibrium orbital position, but also affects the Lyapunov exponent.", "As the angular momentum of the particle increases, the particle equilibrium position gradually moves away from the black hole horizon.", "In the near black hole horizon region, the chaos bound is not violated, however, at the far black hole horizon region, the chaos bound is violated.", "In addition unlike the charged AdS black hole which has a spherical topology of the horizon, the torus-like black hole has a toroidal topology of the horizon." ], [ "Introduction", "Chaos is a seemingly random, chance and irregular motion that occurs only in non-linear and non-accumulable dynamical systems, which are sensitive to initial conditions [1], [2], [3].", "Since the tiny errors in chaotic motion grow rapidly with time, the motion at this point can be quite different from what it would be without these errors.", "This means that long-term prediction of chaotic motion in general is very difficult.", "It also means that chaotic motion has many new properties that the usual dynamical systems do not have.", "This has led to widespread interest in the study of chaos in various areas of physics.", "In general relativity, the geodesic motion of particles in a generic Kerr-Newman black hole spacetime [4] is integrable and there is no chaos in this system.", "In order to study chaotic motion in general relativity and to ensure that the dynamical system describing the motion of the mass is integrable, it is necessary to resort to some spacetime with a complex geometry or to introduce some additional interactions.", "Along this spirit, the chaotic motions of particles have been studied in the multi-black hole spacetimes [5], [6], the perturbed Schwarzschild spacetime [7], [8], [9], and in the non-standard Kerr black hole spacetime described by MankoNovikov metric [10], [11], [12].", "The study of chaotic behavior of the geodesic motion of particles has now involved several spacetime contexts, and the main interest of researchers in these systems is to use and further develop coordinate invariant descriptions and metrics of chaotic behavior to make them applicable to general relativity where space and time are not absolute.", "It has been recently shown that using the Melnikov method to identify chaotic behavior in geodesic motion perturbed by the minimum length effect around a Schwarzschild black hole, there is Smale horseshoes chaotic structure in the phase space [13].", "Based on the Melnikov method, the existence of a critical amplitude affecting temporal chaos is demonstrated by studying the thermodynamic chaos of RN-AdS black holes immersed in Perfect Fluid Dark Matter, while spatial chaos is always present regardless of the perturbation intensity [14].", "It has also been tentatively proved that the chaotic behavior of particles near the black hole has quantum gravitational effects [15].", "In addition, chaotic phenomenon was also investigated for the 
spinning particles in Kerr spacetime [16].", "More interestingly, chaos in loop string dynamics has been found in the asymptotically flat Schwarzschild black hole spacetime [17], in the AdS-Schwarzschild black hole [18] and in the AdS-Gauss-Bonnet black hole spacetime [19] after the introduction of loop strings instead of point particles.", "In recent studies, a number of violations of the chaos bound have been discovered [20], [21], [22], [23], [24].", "The static equilibrium of charged probe particles around a black hole can be provided by the Lorentz force.", "In Ref.", "[20], Zhao et al.", "considered the contribution of the sub-leading terms in the expansion of the near-horizon region and investigated the chaos bound in the near-horizon region using the effective potential.", "It is found that the Reissner-Nordstrom and Reissner-Nordstrom-AdS black holes satisfy this bound, while it is violated in a large number of other charged black holes.", "In their study, they only considered the radial contribution.", "In fact, since angular momentum affects the effective potential and increases the magnitude of the chaotic behaviour of the particles, angular momentum also has an effect on the Lyapunov exponent.", "In consideration of the above, Lei et al.", "again studied the chaos bound in the near-horizon region of Reissner-Nordstrom and Reissner-Nordstrom-AdS black holes [24].", "It is found that the bound is violated in the near-horizon region when the charge of the black hole and the angular momentum of the particles are large.", "In rotating charged black holes, the existence of a violation of the bound was also found through calculations of the effective potential [21], [22].", "In this paper, we study the effect of the angular momenta of charged particles on the chaos bound through the circular motion of particles around a torus-like black hole.", "The concept of a torus black hole was first introduced in the literature [25].", "Unlike other black holes, the topology of this black hole spacetime is $S^{1}\times S^{1}\times M^{2}$ .", "This has inspired many studies of torus-like black holes [26], [27], [28], [29], [30], [31], [32], [33].", "The rest of this paper is organized as follows.", "In Sec.", ", the Lyapunov exponent is obtained via an exact calculation using the Jacobian matrix.", "In Sec.", ", the variation of the chaos bound of the torus-like black hole is analyzed by numerical calculations, focusing on the regions close to and at a certain distance from the black hole horizon.", "In Sec.", ", we present our conclusions."
], [ "Lyapunov exponent in the torus-like black hole", "In this subsection, taking into account the motion of charged particles on the equatorial plane of the torus-like black hole, we focus on the calculation of the eigenvalues of the Jacobi matrix, which leads to a general expression for the Lyapunov exponent.", "The metric of the black hole is $ds^{2}=-F(r)dt^{2}+N^{-1}(r)dr^{2}+C(r)d\\theta ^{2}+D(r)d\\phi ^{2},$ where the electromagnetic potential $A_{\\mu }=A_{t}dt$ .", "When a charged particle moves around a black hole, its Lagrangian can be expressed as $\\mathcal {L=\\frac{\\mathrm {1}}{\\mathrm {2}}}(-F\\dot{t}^{2}+\\frac{\\dot{r}^{2}}{N}+D\\dot{\\phi }^{2})-qA_{t}\\dot{t},$ here $\\dot{x}^{\\mu }=\\frac{dx^{\\mu }}{d\\tau }$ , where $\\tau $ is the proper time.", "With the aid of the relevant definition of generalized momentum ($\\pi _{\\mu }=\\frac{\\alpha \\mathcal {L}}{\\alpha \\dot{x}}$ ), thus the energy ($E$ ) and angular momentum ($L$ ) of the particle can be obtained $\\begin{aligned}&E=-\\pi _{t}=F\\dot{t}+qA_{t},\\\\&L=\\pi _{\\phi }=D\\dot{\\phi },\\\\\\end{aligned}$ and $\\pi _{r}=\\frac{\\dot{r}}{N}.$ Then the Hamiltonian quantity of the particle is $H=\\frac{-(\\pi _{t}+qA_{t})^{2}+\\pi _{r}^{2}FN+\\pi _{\\phi }^{2}D^{-1}F}{2F},$ using the Hamiltonian quantities of the particles, the radial coordinates and radial momentum versus time can be obtained separately $\\frac{dr}{dt}=\\frac{\\dot{r}}{\\dot{t}}=-\\frac{\\pi _{r}FN}{\\pi _{t}+qA_{t}},$ $\\frac{d\\pi _{r}}{dt}=\\frac{\\dot{\\pi _{r}}}{\\dot{t}}=-qA_{t}^{\\prime }+\\frac{1}{2}[\\frac{\\pi _{r}^{2}FN^{\\prime }}{\\pi _{t}+qA_{t}}+\\frac{(\\pi _{t}+qA_{t})F^{\\prime }}{F}-\\frac{\\pi _{\\phi }^{2}D^{-2}D^{\\prime }F}{\\pi _{t}+qA_{t}}],$ where $\\begin{aligned}&\\dot{t}=\\frac{\\partial H}{\\partial \\pi _{t}}=-\\frac{-(\\pi _{t}+qA_{t})}{F},\\\\&\\dot{\\pi _{t}}=-\\frac{\\partial H}{\\partial t}=0,\\\\&\\dot{r}=\\frac{\\partial H}{\\partial \\pi _{r}}=\\pi _{r}N,\\\\&\\dot{\\phi }=\\frac{\\partial H}{\\partial \\pi _{\\phi }}=\\frac{\\pi _{\\phi }}{D},\\\\&\\dot{\\pi _{\\phi }}=-\\frac{\\partial H}{\\partial \\phi }=0,\\\\&\\dot{\\pi _{r}}=-\\frac{\\partial H}{\\partial r}=-\\frac{1}{2}[\\pi _{r}^{2}N^{\\prime }-\\frac{2qA_{t}^{\\prime }(\\pi _{t}+qA_{t})}{F}+\\frac{(\\pi _{t}+qA_{t})^{2}F^{\\prime }}{F^{2}}-\\pi _{\\phi }^{2}D^{-2}D^{\\prime }],\\\\\\end{aligned}$ hereby, $\"\\prime \"$ denotes the derivative with respect to $t$ .", "The equation $g_{\\mu \\nu }\\dot{x}^{\\mu }\\dot{x}^{\\nu }=\\eta $ defines the normalization of the four velocities of a particle, when $\\eta =0$ , corresponds to the case of a photon, and when $\\eta =-1$ , represents the case of a massive particle.", "In this paper, the particle is charged.", "Next, the constraints can be obtained from the normalization and metric as follows $\\pi _{t}+qA_{t}=-\\sqrt{F(1+\\pi _{r}^{2}N+\\pi _{\\phi }^{2}D^{-1})},$ using this constraint, we can obtain $\\frac{dr}{dt}=\\frac{\\pi _{r}FN}{\\sqrt{F(1+\\pi _{r}^{2}N+\\pi _{\\phi }^{2}D^{-1})}},$ $\\frac{d\\pi _{r}}{dt}=-qA_{t}^{\\prime }-\\frac{\\pi _{r}^{2}(FN)^{\\prime }+F^{\\prime }}{2\\sqrt{F(1+\\pi _{r}^{2}N+\\pi _{\\phi }^{2}D^{-1})}}-\\frac{\\pi _{\\phi }^{2}(D^{-1}F){}^{\\prime }}{2\\sqrt{F(1+\\pi _{r}^{2}N+\\pi _{\\phi }^{2}D^{-1})}}.$ Next, consider the Lyapunov exponent, in whose acquisition the effective potential of the particle plays an important role [20], [21], [22].", "The Liapunov exponent can be obtained from the eigenvalue of a Jacobi matrix in the phase space.", "Considering 
Considering the motion of the particle in an equilibrium orbit, a condition, namely $\pi _{r}=\frac{d\pi _{r}}{dt}=0$ , is needed to constrain the trajectory of the particle.", "According to Ref. [34], using the eigenvalues and the constraints mentioned above, the specific expression for the Lyapunov exponent can be obtained as follows: $\lambda ^{2}=\frac{1}{4}\frac{N(F^{\prime }+\pi _{\phi }^{2}(D^{-1}F)^{\prime })^{2}}{F(1+\pi _{\phi }^{2}D^{-1})^{2}}-\frac{1}{2}N\frac{F^{\prime \prime }+\pi _{\phi }^{2}(D^{-1}F)^{\prime \prime }}{1+\pi _{\phi }^{2}D^{-1}}-\frac{qA_{t}^{\prime \prime }FN}{\sqrt{F(1+\pi _{\phi }^{2}D^{-1})}}.$ From the above equation, it can be observed that the contribution of the angular momentum to the Lyapunov exponent is not negligible.", "The surface gravity at the horizon is $\kappa =\frac{F^{\prime }(r_{+})}{2}$ ." ], [ "Chaos bound and its violation in the torus-like black hole", "The metric functions of the torus-like black hole in four-dimensional spacetime are given as follows [25], [29]: $F(r)=-\frac{2M}{\pi r}+\frac{4Q^{2}}{\pi r^{2}}-\frac{\Lambda r^{2}}{3},$ $C(r)=r^{2},$ $D(r)=r^{2}\sin ^{2}\theta ,$ where $M$ and $Q$ are the mass and electric charge of the black hole, respectively.", "$\Lambda $ is the cosmological constant, which is connected with a thermodynamic variable ($P=-\frac{\Lambda }{8\pi }=\frac{3}{8\pi l^{2}}$ ).", "In the case $\Lambda <0$ , the coordinate singularities appear at the horizon radii ($r_{\pm }$ ), where $F(r_{\pm })=0$ .", "The nonvanishing component of the electromagnetic vector potential for the torus-like black hole in four-dimensional spacetime is [29] $A_{t}=-\frac{4Q}{r}.$ Following Ref. [34], we here take the electromagnetic vector potential to be $A_{t}=\frac{4Q}{r}$ .", "With the help of the above, we then focus on finding the positions of the equilibrium orbits.", "In the equatorial plane, $\theta $ is set to $\frac{\pi }{2}$ .", "Imposing $\pi _{r}=\frac{d\pi _{r}}{dt}=0$ , the equilibrium position of the orbit can be obtained.", "The expression for $\frac{d\pi _{r}}{dt}$ is $\frac{d\pi _{r}}{dt}=\frac{4qQ}{r^{2}}+\frac{3L^{2}(8Q^{2}-3Mr)+r^{2}(12Q^{2}-3Mr-8P\pi ^{2}r^{4})}{\sqrt{6\pi }r^{3}\sqrt{(L^{2}+r^{2})(6Q^{2}-3Mr+4P\pi ^{2}r^{4})}}.$ In the numerical analysis, we set $P=1$ , $M=1$ , $q=15$ , and observe the change of the equilibrium orbit position $r_{0}$ by varying the values of $Q$ and $L$ .", "The specific numerical results are summarized in Table REF .", "It can be clearly observed from the table that when the charge $Q$ is fixed, the value of $r_{0}$ increases gradually with the increase of the angular momentum.", "In order to compare the orbital equilibrium position with the position of the black hole horizon, Table REF gives the position of the black hole horizon for different charges.", "By comparison, it can be seen that the positions of the orbits gradually move away from the black hole horizon.", "When only the charge is considered, it is observed that as the charge increases, the value of the equilibrium position of the orbit gradually decreases.", "We further explore the variation of the chaos bound both in the near-horizon region and at a certain distance from the horizon.", "To this end, we perform numerical calculations of the Lyapunov exponent at the equilibrium orbits and of the surface gravity at the black hole horizon.", "The results of the numerical calculations are used to plot Fig. REF ."
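As a rough illustration of this numerical procedure, the following Python sketch (our own, not the authors' code) locates equilibrium orbits from the quoted $\frac{d\pi _{r}}{dt}$ and compares $\lambda ^{2}$ with $\kappa ^{2}$ . It assumes $N(r)=F(r)$ for this metric, and the values $Q=0.1$ , $L=1$ are example inputs rather than entries from the paper's table:

```python
# Locate the equilibrium orbit r0 from d(pi_r)/dt = 0 and compare the
# Lyapunov exponent with the surface gravity kappa = F'(r_+)/2.
# Assumptions: N(r) = F(r); Q and L below are example values only.
import numpy as np
import sympy as sp
from scipy.optimize import brentq

P, M, q = 1.0, 1.0, 15.0                      # parameters fixed in the scan
Q, L = 0.1, 1.0                               # example charge and angular momentum

def F_num(r):                                 # metric function, Lambda = -8*pi*P
    return -2*M/(np.pi*r) + 4*Q**2/(np.pi*r**2) + 8*np.pi*P*r**2/3

def dpr_dt(r):                                # the quoted d(pi_r)/dt expression
    num = 3*L**2*(8*Q**2 - 3*M*r) + r**2*(12*Q**2 - 3*M*r - 8*P*np.pi**2*r**4)
    den = (np.sqrt(6*np.pi) * r**3
           * np.sqrt((L**2 + r**2)*(6*Q**2 - 3*M*r + 4*P*np.pi**2*r**4)))
    return 4*q*Q/r**2 + num/den

def roots(f, lo, hi, n=20000):                # bracket-and-refine root finder
    xs = np.linspace(lo, hi, n)
    ys = np.array([f(x) for x in xs])
    ok = np.isfinite(ys[:-1]) & np.isfinite(ys[1:]) & (ys[:-1]*ys[1:] < 0)
    return [brentq(f, xs[i], xs[i + 1]) for i in np.where(ok)[0]]

rp = max(roots(F_num, 1e-3, 5.0))             # outer horizon radius r_+

# Symbolic lambda^2 on the equatorial plane: D = r^2, A_t = 4Q/r, N = F.
rs = sp.symbols('r', positive=True)
Fs = -2*M/(sp.pi*rs) + 4*Q**2/(sp.pi*rs**2) + 8*sp.pi*P*rs**2/3
Gs = Fs/rs**2                                 # D^{-1} F
u = 1 + L**2/rs**2                            # 1 + pi_phi^2 D^{-1}
lam2 = (sp.Rational(1, 4)*Fs*(Fs.diff(rs) + L**2*Gs.diff(rs))**2/(Fs*u**2)
        - sp.Rational(1, 2)*Fs*(Fs.diff(rs, 2) + L**2*Gs.diff(rs, 2))/u
        - q*(4*Q/rs).diff(rs, 2)*Fs**2/sp.sqrt(Fs*u))
lam2_f = sp.lambdify(rs, lam2, 'numpy')
kappa = float(Fs.diff(rs).subs(rs, rp))/2     # surface gravity at r_+

for r0 in roots(dpr_dt, 1.001*rp, 30.0):      # equilibrium orbits outside r_+
    print(f"r0 = {r0:.4f}, lambda^2 - kappa^2 = {lam2_f(r0) - kappa**2:+.4f}")
```

A positive printed difference would signal a violation of the bound at that orbit.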
As can be observed from Fig. REF , violations of the bound do occur.", "When the charge value is fixed, the variation of the angular momentum has a significant effect on the value of the Lyapunov exponent, which can lead to violation of the bound.", "There is no violation when the angular momentum is relatively small, i.e., when the position of the equilibrium orbit is close to the black hole horizon.", "As the angular momentum increases, the bound is violated at a certain distance from the black hole horizon.", "There exists a critical value of the angular momentum: when the angular momentum is larger than this value, the violation appears.", "It is observed that this critical value increases with the charge." ], [ "Conclusions", "In this paper, the variations of the chaos bound of the torus-like black hole in the near-horizon region and in the region at a certain distance from the black hole horizon were studied.", "The effects of different charges and different angular momenta on the equilibrium position of the orbit were first investigated.", "It is found that when the charge value is fixed, the orbital equilibrium position gradually moves away from the black hole horizon with the increase of the angular momentum.", "When only the charge parameter is considered, the orbital equilibrium position decreases with the increase of the charge value, which is the same as the effect of the charge on the black hole horizon.", "Subsequently, the Lyapunov exponent was calculated from the Jacobi matrix, and further study reveals that the angular momentum has a great influence on the value of the Lyapunov exponent.", "The plot shows that the chaos bound is not violated in the near-horizon region, while with the increase of the angular momentum, the bound is violated at a certain distance from the black hole horizon.", "This conclusion is the same as the one reached in Ref. [34].", "We are grateful to Deyou Chen, Peng Wang, Haitang Yang, Jun Tao and Xiaobo Guo for useful discussions.", "The authors contributed equally to this work.", "This work is supported in part by NSFC (Grant No. 11747171) and the Xinglin Scholars Project of Chengdu University of Traditional Chinese Medicine (Grant No. QNXZ2018050)." ] ]
2210.07799
[ [ "Towards Immersive Collaborative Sensemaking" ], [ "Abstract When collaborating face-to-face, people commonly use the surfaces and spaces around them to perform sensemaking tasks, such as spatially organising documents, notes or images.", "However, when people collaborate remotely using desktop interfaces they no longer feel like they are sharing the same space.", "This limitation may be overcome through collaboration in immersive environments, which simulate the physical in-person experience.", "In this paper, we report on a between-groups study comparing collaboration on image organisation tasks in an immersive Virtual Reality (VR) environment to more conventional desktop conferencing.", "Collecting data from 40 subjects in groups of four, we measured task performance, user behaviours, collaboration engagement and awareness.", "Overall, the VR and desktop interfaces resulted in similar speed, accuracy and social presence ratings, but we observed more conversations and interaction with objects, and more equal contributions to the interaction from participants within groups in VR.", "We also identified differences in coordination and collaborative awareness behaviours between VR and desktop platforms.", "We report on a set of systematic measures for assessing VR collaborative experience and a new analysis tool that we have developed to capture user behaviours in a collaborative setting.", "Finally, we provide design considerations and directions for future work." ], [ "Introduction", "Remote collaboration continues to grow in popularity, particularly since the COVID-19 pandemic.", "Using a wide range of technologies, people are engaging in a greater variety of remote collaboration tasks than ever before.", "A key example of a task that needs better support for remote collaboration is collaborative sensemaking, i.e., working together to make sense of information [53].", "Standard practice for collaborative sensemaking is video conferencing (e.g., Zoom, https://zoom.us/) and online tools like shared whiteboards (e.g., Miro, https://miro.com/), but these do not provide a strong sense of spatial awareness or embodied presence.", "Previous research has claimed that Virtual Reality (VR) can overcome these limitations [77].", "The emerging field of Collaborative Immersive Analytics (CIA) [7] explores how VR can be used for analysing data in a group setting.", "For example, [43] describe early work in using VR for collaborative visualisation, and [20] provide a summary of collaborative data visualisation using VR.", "[49] reviewed several Immersive Analytics projects by groups using a large cylindrical projection environment, where people reported getting more work done in two days than in six months of desktop collaboration tools.", "VR potentially offers an embodied experience closer to in-person collaboration than is afforded by remote collaboration using a desktop interface.", "However, although CIA systems are promising, there has been little research that compares collaborative sensemaking in VR to a desktop interface and—to the best of our knowledge—none with groups of more than three people at a time.", "Studying collaboration among groups larger than three is important because it enables different dynamics than with smaller groups, e.g., a group of four may subdivide into two teams of two in order to discuss separately and work in parallel.", "Our research explores whether shared immersive environments can better support group sensemaking tasks compared to a desktop environment.", "Our
study asked groups of four people to arrange and cluster images using remote collaboration via either a VR or desktop interface.", "Compared to a desktop condition, we found that teams in VR had similar task effectiveness but that VR provided several benefits, with the teams on each platform engaging in significantly different behaviours in terms of interaction, conversation, coordination and awareness.", "From our observation of these different behaviours we derive suggestions for the design of future collaborative VR applications in various aspects, such as communication, notification, navigation, environment and virtual elements, and provide directions for future research.", "Our study focused on spatial sensemaking tasks, which reflect a fundamental sensemaking behaviour that uses space to make sense of information and manage data, and is a way to externalise thoughts and reduce cognitive load [60], [1].", "Our results showed that, when collaborating with others in VR, people employ various space-use strategies, rather than simply preferring the space around them as they were found to do when performing tasks individually in previous research [4], [63], [44].", "Interestingly, we still identified common patterns from these different space usages.", "These findings can inform future researchers of how the space could be used in collaborative sensemaking, especially when designing and providing automatic spatial layouts for organising information and data.", "We also present a new tool to capture and analyse group behaviour in a collaborative sensemaking task.", "Past remote collaboration research has found that behavioural measures are often a better measure of technology impact than performance measures [51], [6].", "However, the behaviour of groups of more than two people can be complex and difficult to analyse.", "We have created new techniques for analysing multiway communication and interactions between group members and objects in the environment.", "In summary, our paper makes the following novel contributions: (1) the first comparative study of a four-person collaborative sensemaking task between a VR condition and a desktop interface condition; (2) results suggesting benefits to collaborative sensemaking in immersive environments over desktop environments; (3) novel systematic measures for understanding and analysing the behaviours of larger groups (four people or more) in an immersive collaborative sensemaking task; and (4) design considerations for immersive collaborative sensemaking applications and directions for future work.", "In the next section, we survey past work informing our research.", "In Section , we describe our group sensemaking user study and the measures.", "In Sections , and , we present the study outcomes and discuss the results.", "Finally, we summarise and reflect on findings, limitations and directions for future work.", "Commercial immersive collaborative techniques and applications are emerging to support sensemaking, such as Microsoft's Mesh (https://www.microsoft.com/en-us/mesh) or Meta's Workrooms (https://www.oculus.com/workrooms/), which allow multiple people to join and perform tasks such as presenting data or brainstorming.", "While there is momentum in the immersive industry to provide collaboration platforms, it is unclear, however, how those tools can support sensemaking.", "Spatial sensemaking uses spatial organisation to manage data to support understandability and memorability, and has been researched on various platforms [3], [60], [16], [1], [22], [75].", "Immersive techniques extend traditional desktop interfaces into immersive environments, and enable the use of embodied representations and intuitive interaction to improve cognition and facilitate sensemaking [12], [50].", "Recent research has studied how individuals and groups perform spatial sensemaking on data and documents in immersive spaces.", "For example, [4] found that for immersive data analytics, data scientists tended to use the space in front of them to explore different visualisations while they used the space around them to present their findings to an audience.", "Similarly, other research also showed that people tended to lay out maps and documents spherically around themselves [63], [44].", "[46] found that semi-circular layouts in 3D space improve visual search compared to a flat layout.", "However, all of this research only examined single-user scenarios.", "Fewer researchers have explored collaborative sensemaking in immersive analytics.", "For example, Lee et al. [42] evaluated a CIA prototype with three simultaneous users, and found that people still organised visualisations in curved layouts around them, but also placed them in flat arrangements to facilitate group discussion and collaboration.", "Most recently, [48], [47] conducted an empirical study to investigate the impact of physical surroundings on spatial arrangement with paired users in Augmented Reality (AR).", "They found that some users produced cylindrical layouts around them while others used the walls and physical furniture as separation for different clusters.", "The main results regarding the use of space for sensemaking in immersive environments are that people tend to arrange information around them in a semi-circular shape or a flat manner.", "More work is needed to understand how users collaborate and interact with data items, and how this compares to traditional collaborative 2D desktop systems."
], [ "Collaboration Proximity and Territoriality", "The collaborative sensemaking process is affected by two spatial factors: the (physical) proximity of collaborators to one another, and the use of territories to manage resources in the workspace.", "Hawkey et al. [33] found that collaboration is more effective when groups physically stand close to each other while using a large vertical display.", "In contrast, Prouzeau et al. [56] found that participants using separate desktops performed a collaborative path-finding task faster than when using a shared large vertical display.", "However, they note that the latter produces consistently higher quality results, due to there being more verbal communication in the shared environment.", "How groups manage resources in the common workspace has been shown to influence collaboration styles [65], [37], [14].", "Most notable are the notions of personal, shared and storage territories [64].", "Bradel et al. [9] found that groups who predominantly used shared territories on a large vertical display communicated more than those who mainly used personal territories.", "The partitioning of the shared workspace into territories is natural and rarely requires verbal communication [71].", "Personal territories are commonly directly in front of users [69], [40], [45], and territories can easily shift between personal and shared as users physically move around the workspace [68], [38].", "Previous research shows that personal and shared territories affect collaboration and communication.", "It is important to explore how such collaboration proximity and territory patterns differ between immersive 3D and flat 2D shared spaces." ], [ "Awareness and Presence in Collaborative Workspaces", "It is common to perform sensemaking tasks with flat screens, mouse and keyboard setups.", "For example, using a combination of collaborative production and communication tools such as Miro or Zoom.", "These provide a shared 2D virtual space for the joint manipulation of virtual objects (e.g., documents and images) [21], while use of audio and video enables communication, awareness and social presence [30], [62], [5].", "Workspace awareness [28] is key to recognising others' behaviours in a cooperative environment.", "While collaborators' activities are not difficult to perceive in a co-located situation, it is a challenge to maintain awareness via remote collaboration [30].", "Social presence, or the sense of “being with another” [8], is desirable as it can enable collaborators to freely converse and seek help [74].", "Early research investigated how to increase awareness and presence by showing others' cursors and viewports on a desktop interface, such as with Telepointers [27].", "Hauber et al. [32] compared physical face-to-face meetings, 2D desktop, and non-stereoscopic 3D desktop setups for a ranking task.", "They found that while the physical meeting was superior to the computer-mediated collaborative tools, the non-stereoscopic 3D desktop improved awareness and presence over the 2D desktop setup.", "Later, embodied virtual representations were employed in distributed tabletop and big display applications [67], [72].", "Zillner et al. [79] presented 3D whole-body telepresence of remote collaborators on a digital whiteboard, and compared this technique to a 2D representation and co-located collaboration.", "Their findings showed that 3D embodied representation greatly improved collaboration effectiveness.", "Similarly, Higuchi et al. [36]
employed 3D-processed immersive telepresence over a digital whiteboard and found that, compared to traditional video conferencing, 3D telepresence provided better co-presence and a more enjoyable environment.", "Pejsa et al. [54] developed an AR technique that projected a remote collaborator's virtual replica onto the physical environment.", "They found that this life-sized telepresence provided a stronger sense of being together than Skype video conferencing.", "In stereoscopic immersive environments, collaboration and presence are usually enhanced by the use of embodied avatars [26], [55], [24].", "While realistic full body avatars are favoured for collaboration, using non-realistic avatars still produces a good sense of presence and awareness [66], [34], [78], [42].", "Notably, Cordeil et al. [17], in a collaborative visualisation task, found no difference in communication and perceived sense of presence in a CAVE environment (where participants see each other physically) compared to a pair of connected VR headsets.", "However, direct comparison of collaboration in immersive VR to collaboration using a desktop interface remains to be addressed, and research especially needs to examine whether users collaborate, and perceive awareness and presence, differently in these two scenarios." ], [ "Collaboration Measures", "While previous research has studied and measured how people use online collaborative tools for group work and educational purposes (e.g., [59], [35], [10]), and has focused on modelling collaboration behaviours [29], [2], [39], [52], [73], [13], little research has investigated the comparison between non-immersive and immersive platforms for sensemaking.", "Some research has investigated collaboration in a hybrid system with one person in VR and two on desktop interfaces [70], and a hybrid of VR, AR, desktop and wall display with three users [11], but none of these made a comparison between different interfaces or quantified user behaviours and collaboration.", "Radu and Schneider [57] proposed an AR system for physics learning in pairs and compared different levels of virtual involvement, from non-AR to including all AR elements.", "However, their measures mainly focused on learning outcomes, user attitudes and oral communication.", "Billinghurst et al. [6] conducted a series of experiments to compare paired users in face-to-face, projection screen and AR settings, and proposed a set of measures covering oral and gestural communication, but the quantification of user interactions with virtual content during collaboration was still missing.", "This previous research shows that user experience and behaviours have yet to be systematically measured in immersive environment collaborations, and how different these behaviours could be in VR and with a desktop interface is still unknown." ], [ "Summary", "Taken together, to the best of our knowledge, there has been no comparative study using non-immersive and immersive environments for sensemaking in groups of four that systematically measures the similarities and differences of user interactions, collaborations, communications and space usage in these environments.", "Thus our research is novel, and addresses an important research gap.", "In the next section we describe the user study developed to explore this space."
], [ "User Study", "Our study aimed to explore how people use a 3D immersive space for collaborative sensemaking when organising images.", "We wanted to investigate the advantages and disadvantages of collaborating and communicating in a 3D immersive environment compared with collaborating via a 2D desktop tool and communicating using Zoom." ], [ "Study Design", "We set up two conditions: (1) a remote VR collaboration system which allows participants to jointly arrange images (VR), and (2) a remote desktop collaboration system that allows participants to arrange images together while on Zoom (Desktop).", "We used a between-subjects design for the study, where each group of four completed the study under one of the conditions.", "In the VR condition, participants were each in their own room with at least $4\times 4$ metres of space, sound isolated from one another but in the same building.", "In the Desktop condition (conducted during a strict COVID-19 lockdown), participants joined from home using their own computer." ], [ "Tasks and Data Set", "To investigate sensemaking for image organisation, we designed tasks requiring participants to arrange a set of images into groups around a set of affective labels, i.e., emotion terms such as “glad”, “distressed”, etc.", "In both Desktop and VR, images and labels were on moveable “cards” that could be freely placed within the environment.", "Participants were required to discuss their feelings about the images and reach a consensus to place the images near the labels with which they felt the images had the strongest correspondence.", "We used the Open Affective Standardized Image Set (OASIS) [41] for the tasks, which contains 900 images.", "Each image in the data set has two dimensions: valence (the degree of pleasantness or unpleasantness) and arousal (the degree of excitement or calm), where the images' values in these dimensions have been rated by 822 participants.", "These two dimensions of emotion served as the ground truth for our image grouping task to measure performance accuracy.", "We designed four tasks and varied the number of image groups and images for each task.", "The first task, which had the fewest groups and fewest images, was considered as a trial task to allow participants to get used to collaborating in the shared environment used for their condition.", "The number of groups in the other tasks ranged from 4 to 7, and the number of images in each group ranged from 3 to 8.", "We plotted the OASIS image distribution in a valence-arousal matrix to select the images for each task, and tried to maximise the distance between the groups and minimise the distance between the images within a group (a greedy sketch of this selection heuristic is given below).", "The emotion terms used as group labels were selected from the affect model from [61], which organises the terms by valence-arousal.", "We matched the distribution of the images in the valence-arousal matrix to the emotion word coordinates to select the terms for representing each image group.", "To avoid a learning effect between tasks, we selected different terms and images for each task.", "Figure: Setup of VR and Desktop conditions.", "Participants were presented as embodied avatars in VR.", "On the desktop they see others' mouse cursors in the platform while communicating via conferencing video calls (omitted in the figure).", "Name tags were visible to participants but are hidden in the figure for anonymity.", "Individual images show sample layouts created by participants.", "For Desktop: (a) Rows, (b) Columns, (c) Clusters in corners, and (d) Clusters around canvas border.", "For VR: (e) Rows, (f) Columns, (g) Clusters on a 2D “panel”, (h) Clusters on multiple 2D “panels”, (i) spherical Clusters, and (j) Convex hull."
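The image selection described above was done by inspecting the plotted valence-arousal distribution. As a rough illustration, a greedy version of the same heuristic might look as follows; the data are random stand-ins for the OASIS ratings, not the real values:

```python
# Greedy sketch: give each group centre its nearest unused images in
# valence-arousal space, so groups are compact and well separated when
# the centres themselves are far apart. Our own illustration.
import numpy as np

def pick_task_images(images, centres, per_group):
    """Assign each centre its nearest unused images; returns index groups."""
    images = np.asarray(images, dtype=float)
    used = np.zeros(len(images), dtype=bool)
    groups = []
    for c in np.asarray(centres, dtype=float):
        d = np.linalg.norm(images - c, axis=1)
        d[used] = np.inf                  # an image joins at most one group
        idx = np.argsort(d)[:per_group]
        used[idx] = True
        groups.append(idx)
    return groups

rng = np.random.default_rng(0)
oasis_like = rng.uniform(1, 7, size=(900, 2))      # stand-in for OASIS ratings
centres = [(2, 2), (2, 6), (6, 2), (6, 6), (4, 4)]  # emotion-term coordinates
print([g.tolist() for g in pick_task_images(oasis_like, centres, per_group=5)])
```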
], [ "VR Condition", "Our VR condition allows participants to place the image and label cards hanging freely within a 3D space.", "The VR system was implemented with the Unity3D engine.", "We used the standalone Oculus Quest 2 VR headsets and controllers in the study.", "The headsets are self-contained and require no cord connection to a PC, allowing participants to freely move and rotate their bodies in the space.", "The controllers have a trigger that allows participants to naturally grasp and manipulate objects with either hand.", "The locations used in the VR condition were empty rooms with a cleared area of $4\times 4$ metres within which each participant had complete freedom of movement.", "Initially, the image cards were distributed evenly in the room at waist height facing up, with randomly generated $x$ - and $z$ -axis positions and random rotations around their $y$ -axis.", "The same approach was used to distribute the label cards.", "Participants were randomly distributed in the space when they started the tasks (by walking to press a “START” button displayed at a random location within the space).", "To find the most suitable size for images and labels we conducted a pilot study with four participants with varying image and text sizes.", "Based on participant feedback, our final study design used image and label cards measuring $20\times 20$ cm.", "The label font size was chosen such that the longest label width filled the card.", "We used “LiberationSans SDF” in the TextMeshPro package as the font style and used the most common yellow colour (R: 255, G: 237, B: 45) as the background for labels (such that they resembled `post-it notes').", "The controllers were represented as virtual hands in the environment to give a sense of self-presence to participants.", "To allow participants to focus on the spatial arrangement, we kept interaction as simple as possible.", "Participants could freely move or rotate objects with a simple grip interaction activated with the controller trigger.", "When the trigger was pressed, participants would see their virtual hands perform a pinch gesture, and visual feedback of a thick blue outline was provided for objects under manipulation.", "Since we were most interested in natural “embodied” interaction, we did not support ranged interactions.", "Participants could see their collaborators as embodied avatars in the system.", "We used the Oculus Avatar SDK to implement the avatars, which show head, upper body and hands (as seen in Figure REF ).", "To distinguish different collaborators, the head and hands were given unique colours, and name tags were displayed above the avatar's head and hands, always facing the viewer's camera.", "To support the awareness of collaborators' activities, we visualised head gaze by casting a pointing ray from the centre of the avatar's two eyes.", "Whenever a pointing ray hit an image or label card, a reticle with a name tag was shown at the hit point.", "To achieve remote collaboration and communication in the VR system, we used the Photon Unity Networking framework to synchronise the movement of avatars and objects.", "The Photon Voice package was used to transmit participants' voices through the network.", "We implemented 3D spatial sound for the audio of the VR system, so participants could hear the voices of nearby collaborators as louder than those further away in the shared virtual space.", "We used a customised logarithmic roll-off curve for the spatial sound, with a maximum audible distance of 4 metres."
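The exact roll-off curve is not specified beyond being logarithmic with a 4-metre cut-off; the following sketch shows one plausible shape (the 0.5 m full-volume distance is our assumption):

```python
# One plausible logarithmic roll-off: full volume inside MIN_DIST,
# silent beyond MAX_DIST, logarithmic fall-off in between.
import numpy as np

MIN_DIST, MAX_DIST = 0.5, 4.0   # metres; MAX_DIST matches the stated cut-off

def rolloff(d):
    """Volume in [0, 1] as a function of listener-speaker distance d (metres)."""
    d = np.clip(d, MIN_DIST, MAX_DIST)
    # normalised to 1 at MIN_DIST and 0 at MAX_DIST
    return 1 - np.log(d / MIN_DIST) / np.log(MAX_DIST / MIN_DIST)

print([round(float(rolloff(d)), 2) for d in (0.3, 1.0, 2.0, 4.0, 6.0)])
# -> [1.0, 0.67, 0.33, 0.0, 0.0]
```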
], [ "Desktop Condition", "For the Desktop condition, we developed a system with the functionality of the VR system, inspired by the Miro online whiteboard, but with the minimal functionality required for the study tasks.", "The desktop system was implemented using Unity3D and ran on Windows or macOS.", "The prototype used a windowed mode set to a $2560\times 1600$ pixel resolution as the default window size.", "The application window was resizable.", "We asked participants to maximise the application to fit their screen.", "A square 2D canvas was displayed in the centre of the application and used the full window length.", "Like in a Miro board, participants in the Desktop condition could see the positions of their collaborators' mouse cursors (see Figure REF ).", "Visualising users' mouse cursors in a shared space is a conventional approach to support group awareness [23].", "This cursor representation is the same as the mouse cursor used in typical desktop systems, familiar to computer users.", "Also, it is simple but effective enough to share collaborators' locations and movements on the limited desktop screen [27].", "Each mouse cursor had a unique colour and a name tag above it.", "The movement of mouse cursors and of the image and label cards was synchronised using the Photon Unity Networking framework.", "We used the same approach as in the VR condition to initially distribute the image and label cards on the canvas.", "The $x$ - and $z$ -axis positions of the objects in the VR condition were projected to the $x$ - and $y$ -axis positions on the desktop canvas.", "Since people look at the screen from a fixed orientation, the rotation of the objects in this condition was fixed to the orientation of the screen.", "We conducted a pilot study and found that the most suitable scale of the (square) image and label cards as a fraction of canvas width was 0.067.", "To avoid extra navigation on the canvas, and since the cards could be clearly seen on a normal-sized 23-inch monitor, we did not provide zoom in/out functions in the application.", "We used the same font style, font size relative to card size, and label note background colour as in the VR condition.", "Participants used a mouse to interact with the desktop application.", "The only interaction required in the study was moving objects.", "They could move image or label cards by left-clicking on them and dragging with the mouse.", "As with the VR condition, during interaction the image or label card was highlighted with a blue outline.", "During the study, participants were connected via Zoom video calls, and their voices were transmitted via Zoom, so they could easily speak to one another.", "As we wanted to compare embodied VR with a traditional desktop environment, the loudness of users' voices was, as is conventional, naturally controlled by the distance from a user to their camera and microphone."
], [ "Participants", "We recruited 40 participants (13 female, 27 male), resulting in 20 people (5 groups) for each condition.", "We balanced the age and gender for the VR condition (6 female and 14 male, aged 18-34, M = 26.5, SD = 4.2) and the Desktop condition (7 female and 13 male, aged 18-44, M = 25.8, SD = 5.3).", "In the VR condition, 20% of the participants self-reported no experience in VR/AR, 55% reported having 10 hours or less of experience, and 25% reported having more than 10 hours of experience.", "Only three participants reported having used social VR/AR systems before.", "The participants in the Desktop condition were asked about previous experience with video conferencing and desktop collaborative tools (e.g., Miro, Google Docs).", "Over 90% of participants reported frequent usage of video conferencing, and 55% reported having used desktop collaborative tools.", "We offered the participants gift cards for their participation.", "The experiment lasted 45 minutes." ], [ "Procedure", "Each condition followed the same general procedure.", "In the VR condition, to prevent participants from hearing each other directly, they were placed in separate rooms, each of which had an empty space of at least $4\times 4$ metres.", "The participants communicated via the VR system.", "They were supervised physically by two experimenters and also monitored via Zoom video calls set up in each room.", "For the Desktop condition, due to COVID-19, the study was run completely online.", "The participants communicated via Zoom calls, supervised by the experimenter.", "The participants needed to enter their names to log into the VR/Desktop system.", "After login, the participants completed a training protocol individually before meeting their collaborators.", "The training guided the participants to practise grabbing, moving and placing objects in the system.", "In the VR condition, the objects were placed at particular positions to force participants to walk around and become familiar with the environment.", "After training, the participants were shown task instructions.", "When all four participants finished reading the instructions, they joined the meeting room together—the VR/Desktop environment where they would complete the tasks.", "In the meeting room, the participants were asked to introduce themselves and do an ice-breaker activity: sharing which animal they would want to be if they could.", "When the participants finished the ice-breaker activity and were ready for the tasks, they asked the experimenter to show them the first task.", "The experimental tasks were fully controlled by the experimenter, who used a server application to control the progress of the study and communicate with the participants.", "When the participants finished each task, they needed to ask the experimenter to change the task for them.", "In the first task of one group, the participants finished the task without any discussion or collaboration, and so the experimenter prompted them to review the arrangement of the images.", "All other groups naturally discussed and collaborated on all tasks.", "After the study, all participants were asked to complete a questionnaire regarding demographic information and their previous experience with VR, video conferencing systems and collaboration tools.", "They were also asked to complete a post-experiment questionnaire, answering short-answer questions about their strategies for image arrangement and collaboration during the study and overall feedback, and rating their perception of Social Presence on a 7-point
Likert scale from strongly disagree to strongly agree [31]." ], [ "Data Collection", "The server application used by the experimenter also collected the study log data: time stamps of actions, positions of image and label cards, and participants' head and hands/mice positions.", "In addition, we developed an audio recording system to record participants' voices.", "Four audio recording systems were run during the study, each following one participant to record their audio.", "The Desktop/VR systems run by the participants were connected to a Photon server used by the experimenter on a desktop PC, with high-speed Ethernet to minimise network latency.", "The server application includes the ability to replay the visuals and synced audio from experimental sessions to facilitate data analysis." ], [ "Measures and Research Questions", "Our experimental design is exploratory rather than based on formal hypothesis testing.", "Therefore, we collected as much data as possible to support a variety of post-hoc analyses.", "In the following, we describe the research questions we addressed and the measures that we analyse to do so.", "For the definitions of measure terms, refer to Appendix REF .", "We also make note, where applicable, of some of our prior expectations.", "Regarding task performance, we measured accuracy and completion time.", "We explored activities in terms of interaction, conversation and coordination, and analysed engagement with respect to awareness and social presence.", "We also investigated spatial usage and territories.", "In both conditions, the sources of our analyses came from the system logs, playback of log files, audio recordings and post-experiment questionnaires.", "Firstly, we computed the occurrence of a number of objective measures from the playback of log files and audio recordings to identify conversation contents (e.g., planning or greeting), conversation types (task and social), conversation participants (initial participant and responding participants) and conversation targets (label and image).", "As these measures were objective rather than subjective, one author went through the playback manually to identify behaviour types of interest.", "To increase the rigour of these results, the author took a second pass to double-check them.", "Then we went through the playback and identified criteria for each behaviour, and developed a program to count the occurrences of these behaviours from the logs (a minimal sketch of this kind of counting is given below).", "Some example behaviours are: for interaction (grab and regroup), for coordination (monitoring, protection, etc.), and for awareness (no response, shaking, etc.).", "We checked the results captured by the program against a sample of the playback to make sure the results matched the behaviours present in the playback.", "For the samples of results we checked, the program accurately counted all occurrences of the behaviours.", "The definitions and criteria used to identify these behaviours are discussed below." ], [ "Task", "RQ1: Is there a difference in sensemaking task accuracy and completion time across VR and Desktop?", "We did not expect that the conditions would affect the task accuracy.", "We believed the participants in VR would take more time than those in the Desktop condition to complete the tasks, as they needed to walk physically in the environment."
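To give a sense of what such a counting program does, here is a minimal sketch for one behaviour; the record format and the 5-second response window are our own assumptions, not the authors' exact criteria:

```python
# Count "no response" utterances from a toy conversation log: an
# utterance counts as unanswered if no other participant starts
# speaking within `window` seconds of it ending.
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str
    start: float   # seconds from task start
    end: float

def count_no_response(utterances, window=5.0):
    """Count utterances that no other participant answers within `window` s."""
    utts = sorted(utterances, key=lambda u: u.start)
    misses = 0
    for i, u in enumerate(utts):
        answered = any(v.speaker != u.speaker and
                       u.end <= v.start <= u.end + window
                       for v in utts[i + 1:])
        misses += not answered
    return misses

log = [Utterance("P1", 0, 2), Utterance("P2", 3, 4), Utterance("P3", 10, 12)]
print(count_no_response(log))   # -> 2 (P2's and P3's utterances go unanswered)
```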
], [ "Activity", "Interaction.", "RQ2: Are there differences in frequency, duration and type of people's interactions with objects across VR and Desktop?", "From our logs we can identify precisely which participants interacted with which objects, where they grabbed and moved those objects to, how many times, and how much time they spent moving them.", "We regard the most significant movements as regrouping, where those movements change the grouping (the label to which an image is closest).", "We did not expect any major difference in the number of interaction operations, but one might expect differences in how participants interact with objects during communication or perhaps through embodied decision making (e.g., externalising their thought process by different usage of space for objects being discussed or left for later discussion).", "An important measure of the effectiveness of collaborative activities is whether all team members are able to contribute equally to the task.", "We wondered RQ3: Do people collaborate more equally in VR or Desktop in terms of interaction?", "We used the Gini coefficient [18], [19] for this calculation, which ranges from 0 to 1, with 0 meaning all participants of a group contributed to the interaction equally, and 1 indicating they did not contribute equally—typically one of the participants led the whole group during a task (a minimal sketch of this computation is given at the end of this subsection).", "Conversation.", "RQ4: Do people spend a different amount of time on discussion in VR than Desktop?", "We analysed the conversation proportion to explore this question, using the procedure from Cordeil et al. [17].", "As with interaction, we also explored the equality of conversation among participants.", "RQ5: Do people collaborate more equally in VR or Desktop in terms of discussion?", "RQ6: Does conversation differ in VR from Desktop?", "We categorised the conversation into task-related and social conversations.", "We expected roughly equal task-related conversation between the two conditions, but for the participants in the VR condition to have more social conversations due to being more aware of the people around them.", "Coordination.", "RQ7: What are the differences in coordination between participants across Desktop and VR in terms of communication and action activities?", "From previous studies [29], [39] and our observations, we identified six behaviours of coordination from the playback and audio recordings: Planning, Assistance (question and exchange), Monitoring, Protection, Respect and Accommodation.", "For detailed definitions of these terms, refer to Appendix REF ."
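The Gini computation referenced above is standard; a minimal sketch over hypothetical per-participant contribution counts (our own implementation, following the cited definition):

```python
# Gini coefficient of per-participant contributions (0 = perfectly equal).
# For a group of n, maximal inequality gives (n - 1)/n, approaching 1.
import numpy as np

def gini(contributions):
    x = np.sort(np.asarray(contributions, dtype=float))  # ascending
    n = len(x)
    if x.sum() == 0:
        return 0.0
    cum = np.cumsum(x)
    # mean-absolute-difference formulation via cumulative shares
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

print(gini([10, 10, 10, 10]))   # 0.0  -> all four contributed equally
print(gini([40, 0, 0, 0]))      # 0.75 -> one participant did everything
```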
], [ "Engagement", "Awareness.", "As mentioned, we analysed the study audio and visual playback to get a sense of the participants' collaborative awareness, to address: RQ8: Is there a difference in the type and amount of communication between users based on the distance between them?", "To investigate how participants acknowledged where the others were, we measured the relationship between the discussion and the distance between participants.", "We retrieved three behaviours related to awareness from the visual playback and audio recordings: No response, Think aloud and Conversation conflict.", "We expected that in the VR condition the participants would communicate more with those close to them, and that there would be no such correlation between mouse cursor separation and communication in the Desktop condition.", "Another analysis related to awareness concerns gestures to get others' attention.", "RQ9: Is there a difference in the amount of gestures between the two conditions of VR and Desktop?", "The program retrieved gestures from the visual playback and audio recordings for the analysis.", "We expected there would be more gestures in the VR condition than the Desktop condition, as the VR condition allowed more embodied presence of others in the environment.", "Social Presence Rating.", "RQ10: Do people experience Social Presence differently for VR and Desktop?", "We gathered the subjective rating of Social Presence to evaluate participants' perception of how aware they were of others during collaboration, using Harms and Biocca's questionnaire [31].", "The questionnaire measures the factors of Social Presence in four sub-categories: Co-presence, Attentional Allocation, Perceived Understanding (Message & Affective) and Perceived Interdependence (Emotional & Behavioral), covering the aspects of communication, coordination and awareness.", "We expected that the VR condition would be more similar to face-to-face circumstances and that the participants would be less aware of each other during tasks in the Desktop condition." ], [ "Spatial Usage", "One further question concerns spatial usage, based on our observations.", "RQ11: Do people use the space in VR and Desktop differently?", "Do they have different strategies for using this space?", "We analysed the visual replay and plotted participant and object positions over time to address this question.", "The following is a detailed analysis of the various measures in terms of the first ten research questions (RQs).", "We use the assumption-free non-parametric Wilcoxon rank-sum test for this analysis [25] (a short example of running this test is given at the end of this subsection).", "All the statistical results of the measures are presented in Table REF .", "A concise summary of the significant results and their relationships to the research questions is provided in Section .", "Some figures are combined regardless of their order to minimise space.", "Figure: Table of study results." ], [ "Task", "We did not find significant differences in task accuracy or completion time between the two conditions.", "Regarding RQ1, we cannot conclude that either the VR or Desktop condition is faster or more accurate.", "However, Figure REF (left, middle) shows a weak trend for the VR condition to be slightly faster and more accurate across most tasks."
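The rank-sum tests reported in this and the following subsections can be run with SciPy; a minimal sketch with made-up per-group scores (not the study data):

```python
# Two-sided Wilcoxon rank-sum test comparing hypothetical per-group
# scores between the VR and Desktop conditions.
from scipy.stats import ranksums

vr_scores      = [0.62, 0.71, 0.58, 0.75, 0.66]   # e.g., one value per VR group
desktop_scores = [0.55, 0.49, 0.60, 0.52, 0.57]   # one value per Desktop group

stat, p = ranksums(vr_scores, desktop_scores)
print(f"W = {stat:.3f}, p = {p:.4f}")             # two-sided by default
```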
], [ "Activity", "Interaction.", "Figure REF (left) shows the proportion of time that participants spent on grabbing (and moving) objects and performing regrouping actions in both conditions (RQ2).", "Participants spent a significantly greater percentage of their time on both grabbing and regrouping in VR than in the Desktop condition.", "Figure: Left: Proportion of time that participants grab and regroup objects (data points represent individual participants).", "Middle: The average number of participants grabbing each object, and the average number of participants changing the grouping of each image.", "The data points represent objects/images.", "Right: Equality of interactions across tasks (each data point represents one group in one task).", "To compare participants' interactions with objects across groups and conditions, we computed networks showing which participants grabbed objects (labels and images) and which regrouped images (a sketch of how these graphs can be built from the logs is given at the end of this subsection).", "A sample of the graphs computed for each of the four tasks across one VR group and one Desktop group is shown in Figure REF .", "We saw a common pattern across all groups: in VR, participants interacted with more objects than in Desktop, and all four members of each VR group tended to interact more uniformly.", "This is evident from the more highly connected and symmetric graphs for the rows of VR graphs.", "An analysis of the degrees of object and image nodes (Figure REF middle) also shows the greater degree of interaction in VR.", "In VR, objects are interacted with by significantly more participants than in Desktop, for both grabbing and regrouping.", "Figure: Indicative graphs of participants' interactions with objects in one VR and one Desktop group.", "In the “Grabs” graphs there is a link between a participant and an object (label/image) if the object was grabbed and moved by that user.", "In the “Regroup Actions” graphs a link shows the image was moved to a different group by the user.", "The link thickness indicates the number of interactions.", "In VR, we see objects are both manipulated and regrouped by more participants than in Desktop.", "Figure REF (right) shows the equality of participants' interactions with the objects in each task for both conditions in terms of duration (time spent grabbing and regrouping), count of grabs and regroup operations (RQ3).", "The only statistically significant difference is in grab duration.", "The lower Gini coefficient in the VR condition shows participants in VR interacted with the objects more equally in terms of duration, and significantly more equally than those in the Desktop condition.", "Conversation.", "Figure REF (left) shows the proportion of time that participants had conversations (someone was speaking) in both conditions (RQ4).", "Participants spent significantly more time speaking in VR than in Desktop.", "Figure REF (middle) shows the equality of contributions by participants to the conversation as a proportion of task time in both conditions (RQ5).", "There is no significant difference between the two conditions.", "Where there is conversation, Gini coefficient scores between 0.3 and 0.6 indicate that there is an imbalance between participants in both conditions, although the condition does not affect this significantly.", "Figure: Left, Middle: Oral collaboration performance.", "Each data point represents one group in one task.", "Right: Coordination activities boxplots.", "For Plan, Question, and Exchange activities, each data point represents one group in one task, and the percentage is the duration of each activity
against the task completion time.", "For Monitoring, Protection, Respect and Contention, data points represent individual participants.", "The Monitoring percentage relates to duration, while for Protection, Respect, and Contention the percentage indicates the number of times that these activities happened to a participant relative to the total conversation/interaction times across all tasks.", "We identified various topics of social-related conversations in VR: Greeting (2 times), Emotion (4 times), Fun (5 times), Interaction (2 times), Distance (6 times) and Height (2 times) across all groups.", "The social conversations rarely happened in Desktop, once for Greeting and once for Fun (RQ6).", "Some examples for these topics are as follows.", "For Fun, participants played with the hand animation and explored it with others.", "For Interaction, participants tried to “steal” objects from others' hands.", "For Distance, as the participants were remotely located, some participants were surprised that they could pass through others and discussed the feeling of this.", "For Height, some participants expressed that objects placed by other participants were out of their reach.", "The durations of these conversations were mostly between 10 and 45 seconds.", "We identified three topics for the task-related conversations, as shown in Figure REF (right): Planning, Question and Exchange, which also relate to Coordination (discussed below).", "Coordination.", "Figure REF (right) shows there are significant differences in Monitoring, Protection, Respect and Contention (RQ7).", "The participants in Desktop spent much more time on monitoring than in VR.", "They also tended to protect their work more than in VR.", "Participants respected others' work more in VR than in Desktop.", "For Accommodation, we found participants exhibited the Voting activity once in VR and twice in Desktop, and they tried to interact with the same objects significantly more in Desktop than in VR.", "There is no significant difference in Planning and Assistance (Question and Exchange) activities between the two conditions (RQ7)."
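The interaction graphs discussed above can be reconstructed directly from the grab log; a minimal networkx sketch, where the log record format and names are our assumptions:

```python
# Build the bipartite participant-object "Grabs" graph and read off the
# degree of each object node (= number of distinct participants who
# grabbed it), as in the degree analysis above.
import networkx as nx

grab_log = [                      # (participant, object) pairs from playback
    ("P1", "img_03"), ("P2", "img_03"), ("P1", "label_sad"),
    ("P3", "img_07"), ("P4", "img_07"), ("P2", "img_07"),
]

G = nx.Graph()
for who, obj in grab_log:
    if G.has_edge(who, obj):
        G[who][obj]["weight"] += 1      # link thickness = number of grabs
    else:
        G.add_edge(who, obj, weight=1)

obj_degrees = {n: d for n, d in G.degree() if n.startswith(("img", "label"))}
print(obj_degrees)   # {'img_03': 2, 'label_sad': 1, 'img_07': 3}
```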
], [ "Engagement", "Awareness.", "Figure REF (left) shows there are significant differences in the No response, Think aloud and Conversation conflict behaviours.", "There are more occurrences of No response in Desktop than in VR, but fewer occurrences of Think aloud and Conversation conflict in Desktop than in VR.", "We produced heatmaps of participants' $x$ - and $z$ -axis positions to explore the relationship between distance and communication between participants (RQ8; a sketch of this heatmap computation is given below).", "In VR, we observed that parallel conversations happened wherever the participants were.", "For example, Figure REF (right (a)) is the heatmap of one group in one task.", "It shows that most of the time the participants worked in pairs (blue and magenta, red and cyan) and stood on opposite sides of the “object wall” they built in the centre of the room (see Figure REF (e)).", "Besides talking to the person on the same side, participants also talked to the others across the “object wall”.", "The conversations with no responses happened when the other participants stood far away from the speaker, even when some participants were close to them; as shown in Figure REF (right (b)), the participants freely walked around the room during the task.", "The heatmaps for the Desktop condition did not reveal any relationship between the distance between participants (in terms of mouse cursor position) and the communication pattern.", "Figure: Left, Middle: Awareness behaviours boxplots.", "The data points for No response, Think aloud and Conversation conflict behaviours represent tasks.", "The percentage values indicate the relative number of times these behaviours occurred.", "For the Shake behaviour each data point represents an individual, and the count value indicates the number of times that participants shook their controller/mouse in all tasks.", "Right: Sample heatmaps of participants' $x$ - and $z$ -axis positions over time.", "Each heatmap is generated for one group in one task, and each colour represents a participant.", "Two gestures were identified that participants performed to attract others' attention: Pointing and Shaking.", "Since Pointing happened in every conversation in the Desktop condition (i.e., mouse hovering over objects) as the only awareness indication, we did not include it in the discussion.", "The Shaking gesture was counted during speaking.", "In VR, participants shook their virtual hands or objects, and in Desktop participants quickly moved their mouse cursor around on or with objects.", "As can be seen from Figure REF (middle), there is no significant difference across the two conditions (RQ9).", "Social Presence Rating.", "Except for the sub-category Attentional Allocation, the Cronbach's Alpha of the sub-categories ranges from 0.90 to 0.96 in VR and from 0.78 to 0.95 in Desktop, showing high internal consistency.", "The Cronbach's Alpha of Attentional Allocation is 0.65 and 0.64 for VR and Desktop, which are in an acceptable range.", "Figure REF (right) shows there is no significant difference in social presence in any sub-category across the two conditions (RQ10).", "Figure: Left, Middle: Task accuracy and completion time across all groups.", "Right: Boxplots for Social Presence results of the overall rating and each sub-category: Co-presence (CP), Attentional Allocation (AA), Perceived Message Understanding (PMU), Perceived Affective Understanding (PAU), Perceived Emotional Interdependence (PEI) and Perceived Behavioral Interdependence (PBI).", "The data points represent individual participants."
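The position heatmaps above can be reproduced from the logged positions; a minimal sketch with synthetic data (the per-frame log format is our assumption):

```python
# 2D occupancy heatmap of one participant's logged floor positions in
# the 4 x 4 m room (synthetic positions stand in for the real log).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.clip(rng.normal(1.5, 0.5, 5000), 0, 4)   # logged x positions (m)
z = np.clip(rng.normal(2.5, 0.7, 5000), 0, 4)   # logged z positions (m)

H, xe, ze = np.histogram2d(x, z, bins=40, range=[[0, 4], [0, 4]])
plt.imshow(H.T, origin="lower", extent=[0, 4, 0, 4], cmap="magma")
plt.xlabel("x (m)"); plt.ylabel("z (m)")
plt.title("Time spent at each floor position (one participant)")
plt.savefig("heatmap.png")
```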
], [ "Observation Results", "To explore RQ11, we analysed the study visual replay in detail in terms of Strategies, Organisations and Territories, which were widely used to analyse collaboration behaviours in previous research [9], [15], [42], [47].", "Strategies.", "Participants' strategies for space use were quite similar in both conditions, where the participants divided the space into regions: a grouped region, an unsure region and an ungrouped region.", "The empty space between the regions tended to be used temporarily by the participants for placing objects for discussion.", "Collaboration strategies were also quite similar across conditions.", "We identified two main strategies:", "Strategy 1: Participants worked individually to organise images first (Phase One).", "Some groups put images they were unsure of in an empty space, then subsequently discussed the images in the unsure pile (Phase Two).", "When they finished an initial grouping of all images, they started reviewing the groups (Phase Three).", "This basic strategy was used by four groups in both VR and Desktop.", "Strategy 2: Participants gathered all the images in the centre of the space, and discussed and grouped them together.", "One group in VR and one group in Desktop used this strategy.", "Organisation.", "There were two types of organisation in the VR condition: (1) 2D space (two groups) and (2) 3D space (three groups).", "The images in the organisations using 2D space were basically formed into a kind of “panel”, and these panels were either placed in the centre of the room (see Figures REF (e) and (f)) or on one side of the room (see Figure REF (g)).", "There were three patterns of layout of the objects on these 2D panels: in Figure REF (e) the images followed the labels and formed Rows, in Figure REF (f) the images followed the labels and formed Columns, and in Figure REF (g) the images were placed around the labels to form Clusters.", "We identified three patterns from the organisations using 3D space.", "In the first, depicted in Figure REF (h), the participants organised the labels around multiple sides of the room and put the images around these labels to form Clusters.", "Each cluster was organised as a small 2D “panel”.", "In the second, the participants fully used the 3D space and organised the labels as far away from one another as possible.", "For example, in Figure REF (i), four labels were put into four corners and one label was placed in the centre of the room.", "The images placed around them formed a spherical layout in each Cluster.", "In the third, one group created a Convex hull and built a kind of “console” in the centre of the room (see Figure REF (j)).", "The labels were placed around a circle in the centre of the console and the images followed the labels and extended from the centre to the outside of the console.", "Spatial organisations in the Desktop condition were more uniform than in VR.", "We observed three patterns of layouts in Desktop: (1) Rows (one group), (2) Columns (two groups), and (3) Clusters (two groups).", "The row and column layouts were similar to the 2D “panel” layouts in the VR condition, as shown in Figures REF (a) and (b).", "For the cluster layouts, each cluster occupied a separate space on the 2D canvas, either in the corners (when the number of labels was 4; see Figure REF (c)) or around the border of the canvas (see Figure REF (d)).", "In both conditions, participants either planned the arrangements (two groups in VR, three groups in Desktop) or instinctively organised the objects without
discussion (three groups in VR and two groups in Desktop).", "Organisation planning normally happened at the beginning of the tasks, but some groups refined the plan mid-task when they found that the initial organisation did not work.", "The groups that had spatial organisation plans usually created more organised layouts in both conditions, such as the Row and Column layouts.", "When there was no spatial organisation planning, the participants simply followed others' strategies.", "This normally happened in the groups using Cluster layouts in both conditions.", "Interestingly, the VR group that produced the Convex hull layout (see Figure REF (j)) did not plan their organisation.", "They collected the labels in the centre of the room at the beginning of Task 1, and stood in a circle around the labels, facing the others to discuss the word meanings.", "When one participant rotated a label towards themselves to see the text, others adopted the same behaviour.", "As a result, they naturally extended the grouping of images outwards from the labels in the centre and built a “console”, and used the same layout throughout the subsequent tasks.", "Territories.", "We observed territory patterns in Figure REF , which shows the positions of objects when they were picked and placed.", "Each sub-figure is generated for one group, and each column in the sub-figures represents one task.", "A dot is drawn where a participant picked (first row in each sub-figure) or placed (second row in each sub-figure) an object.", "The dots are colour-coded by participant.", "Figures REF (a)–(e) are the heatmaps for the groups in VR, and Figures REF (f)–(h) show three patterns in Desktop.", "These territories match the object organisations discussed previously.", "Figure REF (a) reveals how participants in VR built a 2D “panel” in the centre of the room, and Figure REF (b) shows how they built a panel along the sides of the room.", "Figures REF (c)–(e) show use of territories in 3D space.", "Figures REF (c) and (d) depict building a spherical layout, and Figure REF (e) is for the “console” built in the room centre.", "The territory heatmaps for the Desktop condition are more uniform: Figure REF (f) shows how the participants built the row layout, Figure REF (g) is the heatmap for the column layout, and Figure REF (h) is the heatmap for the cluster layouts.", "Interestingly, columns 3 and 4 of Figure REF (h) also reveal that in Tasks 3 and 4 the participants first collected all the objects into the canvas centre to make space for organising the grouped images.", "Figure: Sample participant interaction heatmaps that indicate the territories of participants.", "Two groups of the Desktop condition are omitted as their heatmap patterns are similar to (h).", "The dots represent the positions where an object was picked/placed by a participant, coloured the same as the participant.", "Sub-figures (a)–(e) are x- and z-axis positions of objects in VR and (f)–(h) are x- and y-axis positions of objects in Desktop.", "Feedback.", "The questionnaire prompted participants for their open-ended thoughts and suggestions.", "For both Desktop and VR conditions, feedback overwhelmingly described the systems as being fun to use and easy to collaborate in.", "For the Desktop condition, some comments were made by single participants: auto-alignment tools for the cards; a highlight button to attract the attention of group members; and the ability to see each other within the application itself, akin to online services such as Gather
(https://www.gather.town/).", "For the VR condition, three participants pointed out that the headsets felt uncomfortable after some time.", "Two participants requested more communication methods such as symbols or emoji-like responses, particularly to show agreement in group decisions.", "The remaining comments were made by single participants: poor provenance, as there was no way to identify who moved the cards without asking; difficulties caused by height differences between group members; a feature to avoid collisions with each other; and a way to select and move multiple cards at once." ], [ "Discussion", "There are several notable findings from the results of the user study.", "Firstly, VR prompted more interactions with virtual objects and more communication between participants than Desktop, including markedly more social conversation.", "In the Desktop condition, participants spent more time monitoring others' behaviours, but interestingly they had more contention while interacting with objects than in VR.", "VR participants initiated more discussions when they tried to modify others' work, while Desktop participants were involved in more discussions concerning their own work.", "There was more parallel talking in VR than in Desktop, and more no-response conversation in Desktop than in VR.", "Detailed discussions of the measures follow." ], [ "Task", "The accuracy results match our expectation discussed in Section REF (RQ1): the condition had no effect on accuracy.", "Unexpectedly, participants in the Desktop condition did not complete the tasks faster.", "From observation, we found that in VR participants often used two hands to grab two images at the same time, which potentially saved time when moving images.", "Another explanation for the similar completion times is that the space in VR was not large, so participants could easily walk around the virtual room.", "As the study was designed to examine natural embodied experience, we did not include extra navigation techniques (e.g., teleportation for VR, panning and zooming for Desktop).", "It would be interesting to investigate in the future whether navigation techniques affect task completion time in a large virtual space."
], [ "Activity", "Interaction.", "Participants grabbed and regrouped objects more frequently and spent significantly longer time on these in VR than in Desktop (RQ2), and VR participants contributed more equally on grabbing objects (RQ3).", "From observation, the reason for this could be that aligning objects in 3D space requires more effort than in 2D space, so participants adjusted the placement of objects more in VR than on the Desktop to make the organisation neater.", "This indicates that there is a desire to have auto-alignment mechanisms for objects placed in an immersive environment.", "Also, we observed that when there was not enough space to add more images into individual groups, participants spent longer time in VR than Desktop on moving the images one-by-one to make more space for the existing image groups.", "Providing functions to allow grouping and moving multiple images together will facilitate this object arrangement process.", "Another reason for more interaction and longer interaction time in VR could be that the animated hand representation in VR gave strong self-presence and embodiment sense to participants, which may increase the enjoyment of interactions and stimulate participants to interact more with objects.", "Conversation.", "Participants spent significantly more time speaking in VR than Desktop (RQ4).", "This is surprising but could be attributed to VR increasing the sense of physically being in the space and working with other people.", "However, there is no significant difference in conversation equality of participants between the two conditions (RQ5).", "As expected, we observed that the task-related conversations were similar in the two conditions, and there were much more social conversations in VR than Desktop (RQ6).", "This might suggest that people in VR feel more like speaking one-to-one rather than one-to-many in a Zoom like interface.", "Also, VR might provide a more natural and less professional shared space feeling than video conferencing, as the latter has already been widely adopted in various professional occasions.", "One possibility to shape a more professional workplace in VR could be providing professional scenes, avatars and objects to simulate an office or conference venue.", "Another interesting finding is about the conversation on height.", "In VR when there was a great disparity in the heights of the participants, sometimes the short participants commented there were some images outside their accessible range, and then others helped them to reach or place the images.", "While providing distance interaction can solve this issue, another possibility is employing navigation techniques to “scroll” the whole virtual space, which not only supports interaction but also give everyone the power to glimpse the whole space.", "Coordination.", "Regarding coordination activities (RQ7), participants in the Desktop condition monitored others' behaviours and subsequently protected their own works more than in VR.", "It is harder to get an overview of the space in VR than Desktop, especially to notice what is happening outside your own field of view in an immersive environment.", "One possibility is to provide notifications of changes made by others to inform users.", "Participants in VR respect others' work more than in Desktop.", "Although there is no difference in the social presence perception rated by participants across the two conditions, we suspect that the greater presence and awareness of working with others made the participants respect 
others more in VR than in the Desktop condition.", "Also, the lack of presence led to Desktop participants having more contention over objects than in VR." ], [ "Engagement", "Awareness.", "As shown in Figure REF , there were more Think aloud and Conversation conflict events in VR than in Desktop, as separate parallel conversations can occur in the VR environment but are impossible in Zoom calls.", "However, we did not observe any relationship between distance and communication for awareness (RQ8).", "As discussed in Section REF , the parallel conversations in VR happened regardless of the collaborators' positions in the virtual space.", "This is an interesting phenomenon, as the spatial sound did not work as in the physical world, where people tend to talk only to others standing close to them.", "To better support parallel conversations, VR systems could provide sound isolation mechanisms that allow users to choose whom they want to talk with.", "There were more occurrences of No response in Desktop than in VR, which could be due to people in Desktop being uncertain who a question was directed towards, or perhaps because of lower engagement.", "Also, there were slightly more Shaking behaviours in Desktop than in VR (RQ9), which is contrary to our expectation.", "We suspect that it was more difficult for participants in Desktop to get others' attention than in VR.", "Social Presence Rating.", "Surprisingly, participants did not give a significantly lower rating on Social Presence in Desktop than in VR (RQ10), as we had expected.", "Perhaps people are now so familiar with remote Desktop collaboration that they feel relatively comfortable communicating, despite the impediments we observed above." ], [ "Observations", "Spatial Organisation.", "Regarding RQ11, although the spatial organisations in Desktop were neater, we found common layout patterns across the two conditions, namely Rows, Columns and Clusters.", "This suggests that even in immersive 3D space, users tend to organise objects as conventionally as they do in 2D space.", "Future research could design and provide automatic layout options in immersive sensemaking systems to facilitate object organisation.", "We also found diverse space usage across different groups in VR, which differs from previous studies [4], [63], [44], [42].", "This suggests that the nature of tasks, devices, and individual differences need to be considered when designing an automatic organisational layout system.", "We did not observe which spatial usages and organisations facilitated or impaired task performance and collaboration, and we note that this could be due to the limited number of appearances of each organisation pattern.", "Territories.", "There was no clear boundary of individual working spaces in either condition.", "Although there were individual and collaborative working phases, participants freely walked within the room in VR or moved their mice on the canvas on the Desktop to interact with objects or communicate with others throughout the whole task.", "This leads to an interesting question for future research: should there be functions to claim individual territories in the shared space, and would this positively or negatively impact collaboration?", "Other Concerns.", "Although we did not ask participants to rate physical workload, they reported discomfort with the VR headsets.", "However, this physical discomfort did not seem to impede participants' engagement and activity in the VR condition."
], [ "Design Considerations for Immersive Collaboration", "Here we summarise from the detailed analysis above the key considerations that may inform design of and research into future immersive collaboration systems: Communication (based on the discussion on Awareness in Section REF ): active (manual or automatic) control over participant volume levels is essential to support parallel conversations.", "In our VR system we used spatial audio with a custom logarithmic roll-off (see REF ).", "Participants indicated they wanted greater control, so either better automatic sound attenuation needs to be developed or systems need to consider providing functions to mute certain collaborators.", "Notification (based on the discussion on Coordination in Section REF ): consider providing notifications of collaborators' activity, since participants wanted to know of changes made by others in order to protect against destructive changes.", "This is both a workaround for the limited field of view in current VR headsets, but also the ability to be aware of changes made behind one's back is a potential visualisation “superpower” [76].", "Navigation (based on the discussion on Conversation in Section REF ): consider providing navigation mechanisms, to allow users to “scroll” the whole immersive space in all the three dimensions (x, y, z).", "Participants wanted the ability to move the surrounding space to access out-of-reach objects without modifying layout and to get an overview of the environment.", "We can also consider providing a miniature space to give an overview and inform users about current field of view.", "Environment (based on the discussion on Conversation in Section REF ): consider providing options for users to choose different types of environments, decorative elements and avatars, to support different collaborative scenarios and atmosphere, or to support management of territories (as per the discussion on Territories in Section REF ).", "The ability to adapt working environment for various requirements of collaborative relationships and feelings can improve efficiency, effectiveness and engagement in collaboration [58].", "For example, participants who spent time playing with avatars or gestures in VR, might have done so less in a more formal VR environment like a virtual classroom or conference room.", "Automatic placement (based on the discussions on Interaction in Section REF and Spatial Organisation in Section REF ): consider providing auto-alignment mechanisms to facilitate placing virtual objects, and providing automatic layout options to organise the objects.", "While layout and alignment tools are frequently found in desktop diagramming tools, in immersive environments they still reduce physical work required to place objects but are arguably more important because this work is increased due to the third dimension.", "For example, allowing snapping to 2D panels to facilitate alignment to planes and spherical surfaces, which were tasks we observed users doing manually in our study (Fig.", "REF )." 
], [ "Limitations, Conclusions and Future Work", "Our results suggest many positive outcomes and potential advantages for performing distributed collaborative sensemaking tasks that involve spatial organisation in VR over more standard Desktop environment.", "Our main conjecture is that many of the advantages stem from the natural embodied interaction and communication that is possible in VR, and this would lead to increased engagement in the task.", "However, we note that there could also be other sources of increased engagement in VR, such as the isolation effect of VR headsets reducing environmental distractions, or the novelty effect of using VR compared to more familiar desktop communication and collaboration tools.", "Our study employed basic and natural techniques that are suitable for novice users.", "In the future, these techniques could be used as the basis for a suite of studies varying the parameters of the shared immersive environment, such as the introduction of table and wall surfaces, ranged interactions with objects, and navigation techniques such as teleport and zoom to enable larger shared virtual workspaces.", "Collaboration in immersive sensemaking is an area that has not been well studied.", "This research involved groups of four subjects.", "In the future, the measures that we have developed to investigate group user behaviours could be extended to fit larger group settings.", "Also, we used a variety of objective measures for behavioural analysis, but only one subjective measure, Social Presence.", "In the future, these measures could be supplemented with more subjective surveys around group dynamics (such as who the perceived leader was), and communication behaviours (such as subject perception of how easy it was to communicate with each other).", "It would also be good to use interviews to capture more overall feedback about the experience.", "In summary, we believe that group-based immersive sensemaking is a rich area for future research, and hope that the measures we have developed could be of use to broader research community.", "This research was supported under the Australian Research Council’s Discovery Projects funding scheme (DP180100755)." ], [ "Task", "Accuracy.", "The proportion of the number of images that were categorised into the correct emotion terms to the total number of images.", "Completion Time.", "Between the time that the experimenter started showing the task content to the participants and the time that the experimenter changed the content to the next task." ], [ "Interaction", "Grab.", "The interaction that moves an object but does not change the grouping of an image (change the label to which an image is closest).", "Regroup.", "The interaction that moves an object and changes the grouping of an image.", "Interaction Proportion.", "The proportion of interaction duration in each task to the task completion time.", "Interaction Count.", "The number of times that interactions are applied to objects.", "Interaction Equality.", "The balance between the duration of interaction that each participant contributed (equality of duration) and the balance between the interaction count of each participant (equality of count).", "The Gini coefficient [18] that is a commonly used measure of inequality was used to do the calculation.", "As the sample size was not large, a refined bias-corrected method of the Gini coefficient [19] was used." 
], [ "Conversation", "Conversation Proportion.", "The proportion of conversation duration in each task to the task completion time.", "Conversation Equality.", "The balance between the duration of conversation that each participant contributed.", "Again the bias-corrected Gini coefficient was used to do the calculation." ], [ "Coordination", "Planning.", "Planning refers to the high-level decisions about the tasks and collaborations.", "The discussions would relate to the strategies for organising the image and label cards, and dividing the responsibilities among participants.", "Assistance.", "Participants help each other during collaboration.", "It is further divided into two categories: Question (directly requesting help from others, asking/answering questions), and Exchange (exchanging information).", "Monitoring.", "Gutwin and Greenberg [29] defined monitoring as the awareness of others in the shared workspace, such as the information of who are the others, where are the others and what the others are doing.", "Monitoring here is considered as explicit observation of others and focus on others' activities during collaboration.", "The Monitoring activity was retrieved automatically from audio and visual session playback when participants did not speak or interact with any objects longer than 5 seconds.", "Protection.", "Protection refers to that participants defended their own work.", "This activity was retrieved automatically from visual playback and audio recordings, when participants made a response to the discussion of the images that they had grouped.", "Respect.", "Respect refers to a participant only modified other's work after asking for permission.", "This activity was captured automatically from visual playback and audio recordings.", "The criteria for retrieving this was when participants wanted to changed the grouping of an image, they asked others' opinions.", "Accommodation.", "Accommodation relates to dealing with conflicts during collaboration, such as resolving different opinions (e.g.", "Voting) and interacting to the same object at the same time (e.g.", "Contention)." ], [ "Awareness", "No Response.", "No response when a participant initialised a question.", "Think Aloud.", "A participant talked to self.", "Conversation Conflict.", "More than one conversation happened concurrently.", "Shaking.", "When speaking, a participant shook the virtual hand (VR) / mouse cursor (desktop) or objects." ] ]
2210.07784
[ [ "Compressed Sensing of Compton Profiles for Fermi Surface Reconstruction:\n Concept and Implementation" ], [ "Abstract Compton scattering is a well-established technique that can provide detailed information about electronic states in solids.", "Making use of the principle of tomography, it is possible to determine the Fermi surface from sets of Compton-scattering data with different scattering axes.", "Practical applications, however, are limited due to long acquisition time required for measuring along enough number of scattering directions.", "Here, we propose to overcome this difficulty using compressed sensing.", "Taking advantage of a hidden sparsity in the momentum distribution, we are able to reconstruct the three-dimensional momentum distribution of bcc-Li, and identify the Fermi surface with as little as 14 directions of scattering data with unprecedented accuracy.", "This compressed-sensing approach will permit further wider applications of the Compton scattering experiments." ], [ "Introduction", "The Compton scattering comprises the collision events in which photons (usually X-rays) are inelastically scattered by electrons in materials.", "Since these electrons are in motion, the scattered radiation is Doppler-broadened and its measurement provides information on the electron momentum density (EMD) projected along the scattering direction [1], [2].", "Compton scattering measurements play an important role in investigations of the finite-temperature electronic structure, and supplies complementary information to other experiments such as the angle-resolved photoemission spectroscopy (ARPES) and the de Haas–van Alphen measurement.", "Experimental Compton scattering studies on elemental Li and Al revealed marked influences of electronic correlations in particular in Li [3], [4], [5].", "Recent applications to strongly correlated superconductors unveiled the Fermi surface in cuprates La$_{2-x}$ Sr$_x$ CuO$_4$  [6], cobalt oxides Na$_x$ CoO$_2$  [7], and doped iron-arsenides [8].", "Other applications to topical compounds include the study of the EMD around Dirac cones in graphene [9], the Fermi surface change across the metal-insulator transition in Ba$_{1-x}$ K$_x$ BiO$_3$  [10], and the temperature evolution between small and large Fermi surfaces in heavy-fermion compound YbRh$_2$ Si$_2$  [11].", "Furthermore, magnetic Compton scattering using circularly polarized X-rays clarified the spin-dependent EMD of ferromagnetic iron and nickel [12], [13], [14], [15], [16] and the orbital resolved occupations in Mn compounds [17].", "On the theoretical side, recent developments take account of electronic correlations by $GW$ approximation [18] and by the dynamical mean-field theory, which is applied to iron, nickel, and their alloy [19], [20], [21].", "More recent proposals include an unexpected universal scaling predicted for the Compton profiles of alkali metals [22], and the detection of magnetoelectric multipoles through the Compton scattering [23].", "The Compton scattering experiment measures the double differential cross section ${{\\mathrm {d}}^2\\sigma /{\\mathrm {d}}\\omega {\\mathrm {d}}\\Omega }$ .", "Within the so-called impulse approximation [24], [25], the cross section yields the Compton profile $J_{\\zeta }(p_z)$ , which is related to EMD, $\\rho (\\mathbf {p})$ , by a double integral [1], [2] $J_{\\zeta }(p_z) = \\iint \\rho (\\mathbf {p})\\, {\\mathrm {d}}p_x {\\mathrm {d}}p_y.$ Here, $p_z$ is chosen to be parallel to the scattering direction denoted by $\\zeta $  
[Fig.", "REF (b)].", "Although the Compton profile $J_{\\zeta }(p_z)$ possesses the information of $\\rho (\\mathbf {p})$ , the double integral obscures characteristics in $\\rho (\\mathbf {p})$ .", "In particular, discontinuities in $\\rho (\\mathbf {p})$ show up only as cusps in $J_{\\zeta }(p_z)$ , and hence the Fermi-surface features are difficult to identify from the experimental $J_{\\zeta }(p_z)$ data.", "Therefore, there is a need to improve the reconstruction of $\\rho (\\mathbf {p})$ to enhance the capability of Compton scattering experiments to address open questions for fermiology.", "The inverse problem of Eq.", "(REF ) can be regarded as a three-dimenensional extension of the computed tomography (CT).", "Let $f(x, y)$ be a function in $x$ -$y$ plane, and its one-dimensional projection $g_{\\theta }(x^{\\prime }) = \\int {\\mathrm {d}}y^{\\prime }\\, f(x, y)$ is given, where ${(x^{\\prime }, y^{\\prime })}$ are the coordinate rotated from ${(x, y)}$ by angle $\\theta $  [Fig.", "REF (a)].", "Then, the original function $f(x, y)$ can be reconstructed, provided that a set of $g_{\\theta }(x^{\\prime })$ is available for a sufficiently dense distribution of $\\theta $ .", "This principle has been applied to the Compton scattering to reconstruct the three-dimensional function $\\rho (\\mathbf {p})$ from the Compton profile [Fig.", "REF (b)] [26], [27], [4].", "Figure: Schematics of (a) the CT for reconstruction of a two-dimensional function f(x,y)f(x, y) and (b) the Compton profile represented as a double integral of three-dimensional function ρ(𝐩)\\rho (\\mathbf {p}).Compared with the original CT, the reconstruction of $\\rho (\\mathbf {p})$ involves practical difficulties for following two reasons.", "Firstly, the reconstruction of $\\rho (\\mathbf {p})$ requires recovering of two axes eliminated by the double integral in Eq.", "(REF ), while the original CT recovers only one axis.", "Secondly, the number of the measurement axes $\\zeta $ is limited (about 10) for experimental reasons, while the angle $\\theta $ in the CT is practically continuous.", "Because of these difficulties, recent applications recover only one axis and employ the two-dimensional EMD projected onto a plane (e.g., $p_x$ -$p_y$ plane) [28], [29], [30].", "This works for investigation of two-dimensional materials, in which the projected EMD still capture the feature of the Fermi surface.", "However, in order to investigate materials having a three-dimensional Fermi surface, full reconstruction of the three-dimensional EMD, $\\rho (\\mathbf {p})$ , is indispensable.", "Following recent development in data-science techniques, we are now able to improve the inversion process.", "In this paper, we propose a method using compressed sensing to the reconstruction of the Fermi surface.", "Compressed sensing, first applied to MRI, is a data-processing technique that reduces required measurement data for obtaining a certain given precision of the density map [31], [32], [33], [34], [35].", "The key idea is that the final image of the density map is compressible, and the information, hence the number of measured Fourier signals, can be less than the number of pixels in the final image.", "The success of the compressed sensing indicates that using characteristics of the EMD, there is a chance to carry out the Fermi-surface reconstruction with a much fewer number of scattering axes than it was required so far.", "This paper is organized as follows.", "We first review the concept of the compressed sensing in Section .", 
"Our reconstruction method and technical details in practical calculations are presented in Section  and , respectively.", "Demonstrative results are presented in Section  focusing on the noise of the input data.", "The paper is summarized in Section ." ], [ "Compressed sensing", "In this section, we review the fundamentals of the compressed sensing [36], [37], [38], [39], [40] as a preliminary to its application to Compton profiles.", "We consider a situation where a set of experimental data $\\mathbf {y}$ is related to a physical quantity of interest, $\\mathbf {x}$ , by a linear equation ${\\mathbf {y}=A \\mathbf {x}}$ .", "Here, the sizes of vectors $\\mathbf {y}$ and $\\mathbf {x}$ are $M$ and $N$ , respectively, and $A$ is an ${(M \\times N)}$ matrix.", "If $M<N$ , namely, if the number of equations is less than the number of unknown variables, a solution for $\\mathbf {x}$ is not uniquely determined (underdetermined systems).", "Experimental errors further expand the set of possible solutions that satisfy ${\\mathbf {y}=A \\mathbf {x}}$ within error bars.", "Finding physical solutions for $\\mathbf {x}$ thus involves practical difficulties in realistic applications.", "Compressed sensing solves ${\\mathbf {y}=A \\mathbf {x}}$ for $\\mathbf {x}$ , assuming sparsity in the solution $\\mathbf {x}^{\\ast }$ .", "This can be carried out by solving the optimization problem called generalized least absolute shrinkage and selection operator (LASSO) [41], [42].", "In this case, the function $\\mathcal {F}(\\mathbf {x})$ to minimize is given by $\\mathcal {F}(\\mathbf {x}) = \\frac{1}{2} \\Vert \\mathbf {y}- A \\mathbf {x}\\Vert _2^2 + \\lambda \\Vert B \\mathbf {x}\\Vert _1^{\\vphantom{\\dagger }},$ where $B$ is a non-square matrix, and $\\Vert \\cdot \\Vert _{\\gamma }^{\\vphantom{\\dagger }}$ represents the $L_{\\gamma }$ norm defined by $\\Vert \\mathbf {x}\\Vert _{\\gamma }^{\\vphantom{\\dagger }}&= \\left( \\sum _i |x_i|^{\\gamma } \\right)^{1/\\gamma }.$ The first term in Eq.", "(REF ) yields a least-square fitting, whereas the second term imposes a penalty for the absolute value of each component of $B\\mathbf {x}$ .", "This penalty imposes solutions to have more zeros in $B\\mathbf {x}$ .", "A selected solution $\\mathbf {x}^{\\ast }$ , thus, acquires sparsity in its linear combination $B\\mathbf {x}^{\\ast }$ .", "Clearly, the choice of the matrix $B$ is essential in the generalized LASSO.", "Applications to MRI take advantage of the sparsity in the spatial variations of an expected image [31], [32], [33], [34], [35].", "The matrix $B$ in this case describes differences of intensities between neighboring pixels, which is called total variation [43].", "With the aid of LASSO, measurement time required to obtain a certain resolution in the final result has shown to be reduced considerably.", "The compressed sensing based on the $L_1$ -norm regularization has been widely applied to various measurements [44], [45], [46], [47], [48], [49], [50], [51], [52] and even to theoretical calculations [53], [54], [55], [56], [57], [58], [59].", "The regularization parameter $\\lambda $ plays a crucial role in LASSO.", "How to determine the optimal value of $\\lambda $ will be demonstrated using explicit data in Section .", "Finally, a comment on the matrix $A$ is in order.", "For successful applications of compressed sensing, $A$ should be a dense matrix as discussed below.", "If $A$ is not dense, the matrix $A$ connects an element of $\\mathbf {x}$ to only a few elements of input $\\mathbf {y}$ 
.", "Hence, if some of these elements of $\\mathbf {y}$ are missing, the corresponding element of $\\mathbf {x}$ cannot be reproduced with accuracy.", "This might lead to a complete failure of the procedure, resulting in entirely wrong solution of $\\mathbf {x}$ .", "If $A$ is a dense matrix, on the other hand, the lack of knowledge of some elements of $\\mathbf {y}$ , has only a diffuse effect over all elements of $\\mathbf {x}$ and might lead to only minor errors in the outcome.", "Moreover, a dense $A$ will cause a large number of degeneracies of possible solutions of $\\mathbf {x}$ , and the $L_1$ -norm regularization will effectively work in choosing a sparse solution." ], [ "Overview of reconstruction methods", "The Radon transform and the equivalent inverse formula found by Cormack in early $^{\\prime }60$  [60], [61] are the seminal works which allowed the development of current CT. Mijnarends applied the method of Cormack to the problem in the angular correlation of positron annihilation radiation [26], [27], which involves the same inversion problem as Eq.", "(REF ).", "He represented $J_{\\zeta }(p_z)$ and $\\rho (\\mathbf {p})$ in terms of the spherical harmonics, $J_{lm}(p)$ and $\\rho _{lm}(p)$ , respectively.", "Equation (REF ) then forms an integral equation consisting of $J_{lm}(p)$ , which is represented around the scattering axis, and $\\rho _{lm}(p)$ , which is represented in the crystal coordinate.", "This complicated equation has been solved analytically.", "Therefore, once $J_{lm}(p)$ are obtained from experimental data, they are immediately converted into $\\rho _{lm}(p)$ , and thus $\\rho (\\mathbf {p})$ , using the analytical solution.", "This method has also been applied to the reconstruction of two-dimensional projected EMD [28], [29], [30].", "An alternative approach uses the Fourier transform as is now common in practical appliations of CT. Tanaka et al.", "applied the direct Fourier transform method to the Compton profiles with elaborate consideration of the error propagation [4].", "They demonstrated reconstruction of the three-dimensional EMD from experimentally measured Compton profiles of a lithium metal.", "There is a room for improvement in the fact that the truncation of the Fourier series results in artificial oscillations in the final result of $\\rho (\\mathbf {p})$ , which make it difficult to identify the discontinuity in $\\rho (\\mathbf {p})$ (Fermi surface).", "From the point of view of the compressed-sensing technique, the direct Fourier transform method is more suitable than the Cormack's method for the following reasons.", "As described in Sec.", ", successful applications of the compressed sensing require the transformation matrix $A$ to be dense.", "The Cormack's method is represented in the polar coordinate, in which different radial coordinates are decoupled.", "Therefore, the matrix $A$ is sparse.", "In the direct Fourier transform method, on the other hand, the matrix $A$ corresponds to the Fourier basis $e^{i\\mathbf {p}\\cdot \\mathbf {r}}$ , in which each real-space component is represented with the whole Fourier components.", "Therefore, the matrix $A$ is dense and satisfies the requirement of the compressed sensing." ], [ "Direct Fourier transform method", "We review the direct Fourier transform method by Tanaka et al.", "in Ref. 
[4].", "We define the Fourier transform of the momentum density $\\rho (\\mathbf {p})$ by $B(\\mathbf {r})$ : $B(\\mathbf {r}) = \\iiint \\rho (\\mathbf {p}) e^{i\\mathbf {p}\\cdot \\mathbf {r}}\\, {\\mathrm {d}}\\mathbf {p}.$ Substituting ${\\mathbf {r}=(0,0,z)}$ in a coordinate system with $z$ axis being parallel to the scattering direction $\\zeta $ , we obtain $B_{\\zeta }(0, 0, z) = \\int J_{\\zeta }(p_z) e^{ip_z z} \\, {\\mathrm {d}}p_z.$ Here, we used Eq.", "(REF ) to replace $\\rho (\\mathbf {p})$ with $J_{\\zeta }(p_z)$ .", "The subscript $\\zeta $ for $B$ is to indicate the direction of the $z$ axis.", "Compton profiles $J_{\\zeta }(p_z)$ measured on several scattering directions $\\zeta $ yield $B(\\mathbf {r})$ on the corresponding lines in the real space as shown in Fig.", "REF .", "The inverse transformation of Eq.", "(REF ) is given by $\\rho (\\mathbf {p}) = (2\\pi )^{-3}\\iiint B(\\mathbf {r}) e^{-i\\mathbf {p}\\cdot \\mathbf {r}} \\, {\\mathrm {d}}\\mathbf {r}.$ In order to perform this integral, we need $B(\\mathbf {r})$ in the whole $\\mathbf {r}$ space.", "In Ref.", "[4], $B(\\mathbf {r})$ obtained on several lines (Fig.", "REF ) is interpolated for arbitrary $\\mathbf {r}$ , and then the inverse transformation is carried out to reconstruct $\\rho (\\mathbf {p})$ ." ], [ "Application of compressed sensing", "In the inverse Fourier transform in Eq.", "(REF ), missing information in $B(\\mathbf {r})$ was filled by interpolation, which could result in a reduction of accuracy.", "In the following, we directly solve Eq.", "(REF ) for $\\rho (\\mathbf {p})$ without an interpolation by applying the compressed-sensing technique.", "We represent the integral in Eq.", "(REF ) with a discrete sum over $\\mathbf {p}_j$ on a uniformly spaced grid, $\\Delta p_\\mathrm {calc}$ , within a cube of volume $(2P_\\mathrm {max})^3$ .", "The cube is taken to be sufficiently large so that the whole region where $\\rho (\\mathbf {p}_j)$ is finite is covered.", "Then, Eq.", "(REF ) is represented as $B_i = \\sum _j A_{ij} \\rho _j,$ where ${B_i \\equiv B(\\mathbf {r}_i)}$ , ${\\rho _j \\equiv \\rho (\\mathbf {p}_j) \\Delta p^3}$ , and ${A_{ij} \\equiv e^{i\\mathbf {p}_j \\cdot \\mathbf {r}_i}}$ .", "$\\rho _j$ is defined on a dense grid that covers the whole region, whereas $B_i$ is given only on lines that are computed from several Compton profiles in Eq.", "(REF ).", "Therefore, this linear equation forms an underdetermined system that has a fewer number of equations than the number of unknown variables.", "Filling interpolated values in $B(\\mathbf {r})$ is a way to supply additional equations to make the system of linear equations solvable.", "Instead of increasing the number of equations, we reduce the number of variables that need to be determined.", "To this end, we suppose that $\\rho (\\mathbf {p})$ is constant, i.e., ${\\nabla \\rho (\\mathbf {p})=0}$ , in an extensive region.", "This is true away from the Fermi surface, where the energy bands are either fully occupied or empty.", "Such a solution can be obtained by minimizing the following function of the form of the generalized LASSO (see Section ): $\\begin{split}\\mathcal {F}(\\lbrace \\rho _j \\rbrace ) &=\\frac{1}{2} \\sum _{i \\in \\text{measured}} \\left[ B_i - \\sum _j A_{ij} \\rho _j \\right]^2\\\\&+ \\lambda \\sum _j \\sum _{\\xi =p_x, p_y, p_z} \\left| \\sum _{j^{\\prime }} (D_{\\xi })_{jj^{\\prime }} \\rho _{j^{\\prime }} \\right|.\\end{split}$ Here, the summation in the first term is taken over $B_i$ computed from the 
measured Compton profiles.", "$D_{\\xi }$ is a matrix that represents the derivative ${\\partial /\\partial \\xi }$ .", "With the first-order forward difference, its explicit expression is given by ${(D_{\\xi } \\rho )_j = \\rho _{j(+\\xi )}-\\rho _{j}}$ , where the index ${j(+\\xi )}$ denotes the coordinates one point ahead of $\\mathbf {p}_j$ in the direction $\\xi $ (we omit the factor ${1/\\Delta p}$ because it only changes the scale of $\\lambda $ ).", "The second term in Eq.", "(REF ) forces the solution to have ${\\partial \\rho (\\mathbf {p}) / \\partial \\xi =0}$ , keeping the first term within a certain range.", "To what extent the second term affects the solution is controlled by the regularization parameter $\\lambda $ , which will be discussed in detail in Section REF .", "There are two additional relations that $\\rho (\\mathbf {p})$ should fulfill.", "One is non-negativity $\\rho (\\mathbf {p})\\ge 0,$ and the other is the sum rule $\\iiint \\rho (\\mathbf {p}) \\, {\\mathrm {d}}\\mathbf {p}= \\int J_{\\zeta } (p_z)\\, {\\mathrm {d}}p_z \\equiv n,$ which is obtained by integrating Eq.", "(REF ) over $p_z$ .", "Here, the value $n$ represents the total number of electrons in a unit cell.", "In the discrete representation, the above two relations are written as $\\rho _j \\ge 0\\,, \\qquad \\sum _j \\rho _j = n.$ Our goal is to minimize the function $\\mathcal {F}$ in Eq.", "(REF ) with respect to $\\rho _j$ under the constraints, Eq.", "(REF )." ], [ "Compton profile data", "To demonstrate the performance of our method described in the previous section, we apply it to bcc-Li, which has been addressed by the direct Fourier transform method [4].", "We prepare both the Compton profiles $J_{\\zeta }(p_z)$ and the EMD $\\rho (\\mathbf {p})$ by first-principles calculations.", "The reconstructed $\\rho (\\mathbf {p})$ will be verified by comparing with $\\rho (\\mathbf {p})$ directly computed without the reconstruction.", "The electronic structure of alkali metals is calculated within density functional theory (DFT) [63], [64] using the spin-polarized relativistic Korringa-Kohn-Rostoker (SPR-KKR) method [65].", "With the local spin-density approximation (LSDA) for the exchange correlation potential [66], the spin-resolved EMD are computed from the corresponding LSDA Green functions.", "The self-consistent LSDA calculations are performed with a ${62\\times 62\\times 62}$ mesh in the Brillouin zone [65].", "$J_{\\zeta }(p_z)$ and $\\rho (\\mathbf {p})$ are obtained by the energy integral in the complex plane on a semi-circular contour with 32 points and a rectangular grid in the momentum space with a cutoff ${|\\mathbf {p}|_\\mathrm {max}=10}$  a.u. [67].", "The step size of the momentum is $0.01$  a.u.", "for $J_{\\zeta }(p_z)$ and $0.002$  a.u.", "for $\\rho (\\mathbf {p})$ .", "We normalize $J_{\\zeta }(p_z)$ to satisfy the sum rule in Eq.", "(REF ).", "Figure REF shows $J_{\\zeta }(p_z)$ computed for 14 directions.", "The results for the principal directions, $[001]$ , $[110]$ , and $[111]$ , have been published in Ref. [22].", "The resolution of $J_{\\zeta }(p_z)$ is ${\\Delta p_\\mathrm {exp} = 0.01}$  a.u., which is comparable to the experimental resolution ${\\Delta p_\\mathrm {exp} = 0.02}$ in Ref.
[4].", "To simulate experiments, we add Gaussian noise on $J_{\\zeta }(p_z)$ .", "The width of the Gaussian distribution $\\sigma $ is ${\\sigma =10^{-1}}$ , $10^{-2}$ , or $10^{-3}$ .", "Specific features of $J_{\\zeta }(p_z)$ are the parabola-like shape for ${p_z < p^{\\vphantom{\\dagger }}_\\mathrm {F}}$ , first cusp at ${p_z = p^{\\vphantom{\\dagger }}_\\mathrm {F}}$ , and the tails for ${p_z > p^{\\vphantom{\\dagger }}_\\mathrm {F}}$ .", "The value of ${p^{\\vphantom{\\dagger }}_\\mathrm {F}= 0.58}$  a.u.", "has been found from a precise computation using the enhanced momentum cutoff.", "Higher momentum contributions to $\\rho (p)$ are frequently discussed (see Ref.", "[2] and references therein) and constitute a clear evidence for Umklapp processes.", "Note also that the observed anisotropy of the Compton profiles is a consequence of the directional anisotropy of the bcc lattice." ], [ "Fourier transform", "We first perform the Fourier transform of the Compton profiles $J_{\\zeta }(p_z)$ in Eq.", "(REF ) to obtain its real-space representation $B_{\\zeta }(0, 0, z)$ .", "Since $J_{\\zeta }(p_z)$ is an even function, i.e., ${J_{\\zeta }(p_z)=J_{\\zeta }(-p_z)}$ , the transform is represented as a discrete cosine transformation and $B(\\mathbf {r})$ is real.", "The explicit expression for the discrete cosine transformation is presented in Appendix .", "Figure: B(𝐫)B(\\mathbf {r}) on the axis ζ=[001]{\\zeta =[001]}.", "The blue and red circles show results computed from J ζ (p z )J_{\\zeta }(p_z) without noise and with noise of σ=10 -1 {\\sigma =10^{-1}}, respectively.", "The inset shows an enlarged view around r=0{r=0}.Figure REF shows $B(\\mathbf {r})$ along the axis ${\\zeta =[001]}$ .", "The upper boundary of $r$ is given by ${r_\\mathrm {max}=\\pi /\\Delta p_\\mathrm {exp}} {\\approx 314.2}$  a.u.", "The real-space resolution $\\Delta r$ is ${\\Delta r=\\pi /p_\\mathrm {max}} {\\approx 0.785}$  a.u.", "Figure REF compares $B(\\mathbf {r})$ computed from $J_{\\zeta }(p_z)$ with and without noise.", "It is clear that the influence of noise is relatively large in the large-$r$ region, because $B(\\mathbf {r})$ decays with increasing $r$ .", "Although $B(\\mathbf {r})$ is obtained up to sufficiently large-$r$ region, we truncate these data before going to the next step for the reasons mentioned below.", "As will be described in Section REF , the momentum resolution $\\Delta p_\\mathrm {calc}$ in the calculation of $\\rho (\\mathbf {p})$ is limited because of the computer memory and the computation time for solving the LASSO optimization problem.", "Hence, in the ordinary situation, $\\Delta p_\\mathrm {calc} > \\Delta p_\\mathrm {exp}$ , where $\\Delta p_\\mathrm {exp}$ is the resolution of the Compton profiles.", "This results in a periodicity in $B(\\mathbf {r})$ evaluated from $\\rho (\\mathbf {r})$ with the period ${\\pi /\\Delta p_\\mathrm {calc}}$ , which is smaller than the upper boundary $r_\\mathrm {max}$ of the input data $B(\\mathbf {r})$ .", "Therefore, $B(\\mathbf {r})$ has to be truncated at ${\\pi /\\Delta p_\\mathrm {calc} \\equiv r_\\mathrm {cutoff}}$ ." 
], [ "Solving LASSO", "The momentum points $\\mathbf {p}_j$ for representing $\\rho (\\mathbf {p})$ is constructed with a linear mesh between $-P_\\mathrm {max}$ and $P_\\mathrm {max}$ for each axis.", "We set ${P_\\mathrm {max}=3}$  a.u.", "in this paper.", "The number of points, $L$ , for each axis is fixed at ${L=121}$ .", "The momentum resolution is thus ${\\Delta p_\\mathrm {calc} = 0.05}$  a.u.", "We apply the symmetry operations against a set of ${N=L^3}$ momenta, $\\lbrace \\mathbf {p}_j \\rbrace $ , to reduce the number of points.", "The crystals of elemental alkali metals have $O_h$ point-group symmetry.", "There are 48 symmetry operations and therefore only ${N_\\mathrm {irr} \\simeq N/48}$ momenta are inequivalent.", "With this property, we can reduce the number of grid points.", "For details of how to find equivalent points and how to integrate the symmetry features into the optimization problem in Eq.", "(REF ), see Appendix .", "For the irreducible set of momenta $\\lbrace \\tilde{\\mathbf {p}}_j \\rbrace $ , we solve the optimization problem in Eq.", "(REF ) under constraints, Eq.", "(REF ).", "We use the alternating direction method of multipliers (ADMM) [68], which is presented in details in Appendix .", "The calculation time grows as $O(N_\\mathrm {irr}^3)$ , which limits the feasible maximum system size, and hence the resolution.", "We thus chose $N=121^3$ , which corresponds to $N_\\mathrm {irr}=4.0 \\times 10^3$ .", "Figure: The reconstructed EMD ρ(𝐩)\\rho (\\mathbf {p}) along 𝐩∥[100]{\\mathbf {p}\\parallel [100]} (Left panel) and 𝐩∥[110]{\\mathbf {p}\\parallel [110]} (Right panel) compared with the exact one.", "The regularization parameter λ\\lambda is varied among (a) λ=10 1 {\\lambda =10^{1}}, (b) λ=10 -1 {\\lambda =10^{-1}} [the optimal value λ opt {\\lambda _\\mathrm {opt}} determined by the cross validation method (see Section )], and (c) λ=10 -4 {\\lambda =10^{-4}}.The noise is σ=10 -3 {\\sigma =10^{-3}}.Figure REF shows representative results for reconstructed $\\rho (\\mathbf {p})$ along two symmetry axes, ${\\mathbf {p}\\parallel [100]}$ and $[110]$ .", "Results are shown for three values of the regularization parameter $\\lambda $ .", "When $\\lambda $ is large [Fig.", "REF (a)], $\\rho (\\mathbf {p})$ tends to be flat except the region near the Fermi momentum ${p^{\\vphantom{\\dagger }}_\\mathrm {F}\\approx 0.6}$  a.u..", "Furthermore, the discontinuity at ${p=p^{\\vphantom{\\dagger }}_\\mathrm {F}}$ is broadened.", "In the opposite case with small $\\lambda $ [Fig.", "REF (c)], we can identify the discontinuity at ${p=p^{\\vphantom{\\dagger }}_\\mathrm {F}}$ as well as some small features between ${p=1.3}$  a.u.", "and ${p=2.5}$  a.u.", "for ${\\mathbf {p}\\parallel [100]}$ and a structure around ${p=0.8}$  a.u.", "for ${\\mathbf {p}\\parallel [110]}$ .", "But, there are some artificial features too, e.g., a hump at ${p=0}$ for ${\\mathbf {p}\\parallel [110]}$ .", "Between the two limits, we can obtain a reasonable result that shows the physical structure well and exhibits less unphysical features [Fig.", "REF (b)].", "We remark that our result does not show artificial oscillations as observed in the original direct Fourier transform method [13].", "This is due to the regularization term, which makes $\\rho (\\mathbf {p})$ as flat as possible.", "Consequently, we achieve a clear discontinuity at the Fermi momentum without artificial oscillations." 
], [ "Determination of the regularization parameter", "For determining an optimal value of $\\lambda $ in an unbiased manner, we use the cross validation (CV) method.", "An application to LASSO is presented, for example, in Ref. [40].", "The input data $\\mathbf {y}$ is split into $K$ groups randomly.", "$K$ is fixed at ${K=5}$ in this paper.", "${(K-1)}$ -groups of data is denoted by $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {T}$ , and the rest one group of data is denoted by $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {V}$ .", "Here, the subscripts T and V stand for training and validation, respectively.", "The LASSO is solved with $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {T}$ as an input.", "More precisely, the summation of the index $i$ in Eq.", "(REF ) is taken over the subset $\\mathbf {y}_\\mathrm {T}$ .", "The converged solution $\\mathbf {x}^{\\ast }$ is validated with $\\mathbf {y}_\\mathrm {V}$ .", "This optimization-validation process is done for $K$ combinations of $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {T}$ and $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {V}$ .", "There are two mean-squared errors (MSEs) that quantify the solution.", "One is the training error defined by $\\mathrm {MSE}_\\mathrm {T} =\\frac{1}{M_\\mathrm {T}} \\Vert \\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {T} - \\mathcal {P}^{\\vphantom{\\dagger }}_\\mathrm {T} A \\mathbf {x}^{\\ast } \\Vert _2^2,$ where ${M_\\mathrm {T}=M(K-1)/K}$ is the dimension of vector $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {T}$ , and $\\mathcal {P}^{\\vphantom{\\dagger }}_\\mathrm {T}$ is a projection operator onto the subspace that $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {T}$ belongs to.", "$\\mathrm {MSE}_\\mathrm {T}$ exhibits a monotonic growth with increasing $\\lambda $ as shown in Fig.", "REF (a), because $\\lambda $ directly controls the ratio between $\\mathrm {MSE}_\\mathrm {T}$ to the $L_1$ -norm regularization term [see Eq.", "(REF )].", "The second quantity is called the validation error or the CV error, which is defined by $\\mathrm {MSE}_\\mathrm {V} =\\frac{1}{M_\\mathrm {V}} \\Vert \\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {V} - \\mathcal {P}^{\\vphantom{\\dagger }}_\\mathrm {V} A \\mathbf {x}^{\\ast } \\Vert _2^2,$ where ${M_\\mathrm {V}=M/K}$ and ${\\mathcal {P}_\\mathrm {V}=1-\\mathcal {P}_\\mathrm {T}}$ .", "$\\mathrm {MSE}_\\mathrm {V}$ represents to what extent the fitting result is general.", "Here, “general” means the ability that the results infer different dataset.", "If $\\mathbf {x}^{\\ast }$ is designed to fit minute structure due to noise in $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {T}$ , $\\mathbf {x}^{\\ast }$ would not match other dataset, namely, $\\mathbf {y}^{\\vphantom{\\dagger }}_\\mathrm {V}$ .", "The validation error thus gets worse as $\\lambda $ decreases beyond a reasonable region.", "We determined an optimal value of $\\lambda $ by the minimum of MSE$_\\mathrm {V}$ , which yields ${\\lambda =1.0 \\times 10^{-1}} {\\equiv \\lambda _\\mathrm {opt}}$ as indicated by the dashed vertical line in Fig.", "REF (a).", "The obtained optimal value is better understood by analyzing the effect of the $L_1$ -norm regularization.", "Figure REF (b) shows the number of non-zero components $N_{>0}$ of the $L_1$ term [the second term in Eq.", "(REF )].", "Below the optimal $\\lambda $ , $N_{>0}$ rapidly increases as $\\lambda $ decreases.", "Such components that appear only for small-$\\lambda $ are used to fit minute structure of the input data, that is, 
noise, and thus increase validation errors.", "At ${\\lambda =\\lambda _\\mathrm {opt}}$ , $2{,}103$ components are finite out of ${3N_\\mathrm {irr}=119{,}133}$ in $D_{\\xi } \\mathbf {\\rho }$ , meaning that only $1.8\\%$ ($98.2\\%$ ) of $\\nabla \\rho (\\mathbf {p})$ are finite (zero) in the final result." ], [ "Noise-Level Dependence", "This section focuses on the influence of noise on the reconstructed EMD results.", "Fig.", "REF compares the reconstructed EMD for different noise levels, ${\\sigma =10^{-3}}$ , $10^{-2}$ , and $10^{-1}$ .", "The regularization parameter $\\lambda $ was optimized separately for each case using the CV method.", "The left two panels in Fig.", "REF (a) are replots of REF (b) in a different range.", "The third panel shows the intensity map of $\\rho (\\mathbf {p})$ on the ${p_z=0}$ plane.", "It is clear that the occupied states are almost isotropic and resemble those of a free-electron gas.", "The Fermi surface can be emphasized by taking the gradient of $\\rho (\\mathbf {p})$ .", "The intensity map of $|\\nabla \\rho (\\mathbf {p})|$ is presented in the right-most panel.", "The high-intensity circle indicates the Fermi surface, which could be compared with the ARPES spectrum at zero frequency.", "As the noise level increases from ${\\sigma =10^{-3}}$ to ${\\sigma =10^{-1}}$ [Fig.", "REF (a) to Fig.", "REF (c)], the discontinuity in $\\rho (\\mathbf {p})$ gets blurred.", "Correspondingly, the peak in $|\\nabla \\rho (\\mathbf {p})|$ becomes broader.", "These results demonstrate that the noise level affects the momentum resolution in the reconstructed $\\rho (\\mathbf {p})$ .", "Nevertheless, we can still determine the Fermi surface by tracking the ridge in $|\\nabla \\rho (\\mathbf {p})|$ ." ], [ "Summary", "The inverse problem for reconstruction of the three-dimensional EMD, $\\rho (\\mathbf {p})$ , is underdetermined in nature, because the number of experimentally measured scattering directions is limited.", "We employed compressed sensing, which can deal precisely with underdetermined systems.", "Compressed sensing uses the limited information for the determination of the EMD in a specific region, that is, around the Fermi surface.", "This is accomplished by the sparsity condition for $\\nabla \\rho (\\mathbf {p})$ implemented as an optimization problem called the generalized LASSO.", "We tested this new technique on the reconstruction of $\\rho (\\mathbf {p})$ of bcc-Li from the Compton profiles computed by DFT.", "The compressed-sensing technique allows us to reconstruct $\\rho (\\mathbf {p})$ from 14 projections and to characterize the shape of the Fermi surface.", "We also investigated the noise dependence in the reconstruction problem.", "We showed that even if the Compton profiles are perturbed by noise (simulating the errors of experimental measurements), our method stably captures the features around the Fermi surface.", "The demonstration with bcc-Li will lead to further applications to more complicated materials.", "We believe that our method based on compressed sensing will contribute to accelerating research into fermiology and stimulate development on the experimental side as well.", "This work was supported by JSPS KAKENHI grants No.", "17K12749, No.", "19K03649, No.", "20K20522, No.", "21H01003, and No.", "21H01041.", "MM was supported by JST CREST (JPMJCR1861).", "LC acknowledges the financial support by the Deutsche Forschungsgemeinschaft through TRR80 (project E2) Project number 107745057."
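Before turning to the appendices, we note that the ridge-tracking post-processing described in the noise analysis above is straightforward to implement. A minimal sketch follows, using a mock smeared-step density standing in for the reconstructed EMD; the grid size, spacing, and smearing width are illustrative.

```python
import numpy as np

# Mock reconstructed EMD on a uniform (L, L, L) grid with spacing dp_calc (illustrative).
L, dp_calc = 121, 0.05
p = (np.arange(L) - L // 2) * dp_calc
px, py, pz = np.meshgrid(p, p, p, indexing="ij")
r = np.sqrt(px**2 + py**2 + pz**2)
rho = 1.0 / (1.0 + np.exp((r - 0.58) / 0.02))   # smeared step at p_F = 0.58 a.u.

# |grad rho| highlights the discontinuity; its ridge traces the Fermi surface.
gx, gy, gz = np.gradient(rho, dp_calc)
grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)

# Fermi momentum estimate along [100]: position of the gradient peak on the p_x axis.
axis_profile = grad_mag[:, L // 2, L // 2]
p_F_estimate = abs(p[np.argmax(axis_profile)])
print(f"estimated p_F along [100]: {p_F_estimate:.2f} a.u.")
```

The same gradient-magnitude map, evaluated on a plane such as p_z = 0, produces the high-intensity Fermi-surface circle shown in the right-most panels of the figure discussed above.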
], [ "Discrete cosine transformation", "The Fourier transform in Eq.", "(REF ) is computed as follows.", "Let us assume that we have $N$ data of $J(p_z)$ on a uniform grid in the range ${p_z=[0:p_\\mathrm {max}]}$ with the interval ${\\Delta p = p_\\mathrm {max}/(N-1)}$ .", "The data set is represented by $p_k$ and $J_k$ with $k=0, 1, \\cdots , N-1$ .", "Discretizing the integral in Eq.", "(REF ) between $-p_\\mathrm {max}$ and $p_\\mathrm {max}$ and using the relation ${J(p_z)=J(-p_z)}$ , we obtain $B_n = \\Delta p \\left[ J_0 + (-1)^n J_{N-1} + 2 \\sum _{k=1}^{N-2} J_k \\cos \\left( \\frac{\\pi kn}{N-1} \\right) \\right],$ where $B_n$ is defined by ${B_n \\equiv B(0, 0, z_n)}$ with $z_n$ being ${z_n \\equiv n\\pi / p_\\mathrm {max}}$ ($n=0, 1, \\cdots , N-1$ ).", "This discrete cosine transformation is classified as Type I in SciPy python package." ], [ "Symmetry", "Symmetry of $\\rho (\\mathbf {p})$ in the momentum space plays crucial roles in reducing the number of $\\mathbf {p}$ -points to save memory and improving the accuracy of the reconstruction.", "We first make a grid in the whole three-dimensional space.", "In the Cartesian coordinate, the momenta $p_x$ , $p_y$ , and $p_z$ are discretized into $L$ points severally in the range ${[-P_\\mathrm {max}, P_\\mathrm {max}]}$ .", "We thus obtain ${N=L^3}$ grid points, which are represented by $\\mathbf {p}_j$ .", "We transform the vector $\\mathbf {p}_j$ into ${\\mathbf {p}_j^{\\prime }=\\mathcal {R} \\mathbf {p}_j}$ by symmetry operations $\\mathcal {R}$ that are invariant in the crystal.", "In the case of O$_h$ point-group symmetry, there are 48 operations.", "If $\\mathbf {p}^{\\prime }_j$ corresponds to a grid point, say $\\mathbf {p}_k$ , we regard that two vectors $\\mathbf {p}^{\\prime }_j$ and $\\mathbf {p}_k$ are equivalent.", "Applying all symmetry operations $\\lbrace \\mathcal {R} \\rbrace $ to all grid points $\\lbrace \\mathbf {p}_j \\rbrace $ , we construct an inequivalent set of vectors, which we represent by $\\lbrace \\tilde{\\mathbf {p}}_j \\rbrace $ .", "Typical choices of $L$ are summarized in Table REF together with the corresponding values of ${N=L^3}$ and the number of inequivalent vectors, $N_\\mathrm {irr}$ .", "As expected, we obtain ${N_\\mathrm {irr}/N \\sim 1/48}$ .", "In the case with the ADMM algorithm, whose memory and computation cost scales $O(N^3)$ , we can deal with up to ${N_\\mathrm {irr}\\sim 10^4}$ with desktop computers and $10^5$ with cluster computers, namely, ${L \\simeq 81}$ and 161, respectively.", "Table: The number NN of 𝐩\\mathbf {p}-grid points for representing ρ(𝐩)\\rho (\\mathbf {p}) and the number N irr N_\\mathrm {irr} in the irreducible region.The symmetry property is integrated into computations to have only $N_\\mathrm {irr}$ instead of $N$ as follows.", "We introduce notations $\\tilde{\\mathbf {\\rho }}$ for the set of the momentum density at the inequivalent points, and $\\mathbf {\\rho }$ for the full set of the momentum density at the whole points.", "By definition, $\\mathbf {\\rho }$ is obtained by upfolding $\\tilde{\\mathbf {\\rho }}$ by $\\mathbf {\\rho }= F \\tilde{\\mathbf {\\rho }},$ where $F$ is ${(N\\times N_\\mathrm {irr})}$ matrix which has one 1 in each row and 0 otherwise.", "The matrix-vector product $A \\mathbf {\\rho }$ is then evaluated as $A \\mathbf {\\rho }= A F \\tilde{\\mathbf {\\rho }} \\equiv \\tilde{A} \\tilde{\\mathbf {\\rho }},$ where $\\tilde{A}$ is a matrix that is downfolded from $A$ by ${\\tilde{A} \\equiv A F}$ .", "The size of the original 
matrix $A$ is ${(M \times N)}$, while that of $\tilde{A}$ is ${(M \times N_\mathrm {irr})}$.", "Using $\tilde{A}$, an actual evaluation of the $L_2$ term ${\Vert \mathbf {y}- A \mathbf {\rho }\Vert _2^2}$ is done with ${\Vert \mathbf {y}- \tilde{A} \tilde{\mathbf {\rho }} \Vert _2^2}$, and the solution for $\tilde{\mathbf {\rho }}$ is evaluated.", "Finally, $\tilde{\mathbf {\rho }}$ is upfolded into $\mathbf {\rho }$ using Eq. (REF ).", "The evaluation of the $L_1$ term, $\Vert D \mathbf {\rho }\Vert _1^{\vphantom{\dagger }}$, requires a more elaborate treatment, because the above downfolding reduces the ${(3N \times N)}$ matrix $D$ to the ${(3N \times N_\mathrm {irr})}$ matrix $(DF)$, which still has the scale $N$.", "In order to eliminate the $N$-scale in $\Vert D \mathbf {\rho }\Vert _1^{\vphantom{\dagger }}$, we remark that $D \mathbf {\rho }\equiv \mathbf {\rho }^{\prime }$ has the same symmetry property as $\mathbf {\rho }$, since $D$ represents the derivative, which preserves the symmetry.", "We can therefore sum up over equivalent elements in $\mathbf {\rho }^{\prime }$ before its $L_1$ norm $\Vert \mathbf {\rho }^{\prime } \Vert _1^{\vphantom{\dagger }}$ is evaluated.", "This leads to the equality ${\Vert D\mathbf {\rho }\Vert _1^{\vphantom{\dagger }}= \Vert F^\mathrm {T} (D\mathbf {\rho }) \Vert _1^{\vphantom{\dagger }}}$.", "Substituting $\mathbf {\rho }$ with $\tilde{\mathbf {\rho }}$ using Eq. (REF ), we obtain $\Vert D\mathbf {\rho }\Vert _1^{\vphantom{\dagger }}= \Vert F^\mathrm {T} D F \tilde{\mathbf {\rho }} \Vert _1^{\vphantom{\dagger }}\equiv \Vert \tilde{D} \tilde{\mathbf {\rho }} \Vert _1^{\vphantom{\dagger }},$ where the matrix $\tilde{D}$ is defined by ${\tilde{D} \equiv F^\mathrm {T} D F}$.", "The size of $\tilde{D}$ is ${(3N_\mathrm {irr} \times N_\mathrm {irr})}$, and thus the scale $N$ has been eliminated." ], [ "ADMM for generalized LASSO with constraints", "We consider a generalized LASSO problem with additional constraints.", "The function to minimize is $\mathcal {F}(\mathbf {x})$ in Eq. (REF ).", "Two constraints, non-negativity and a sum-rule, are generalized into $P\mathbf {x}\ge 0,\quad \langle S \mathbf {x}\rangle = s,$ where the bracket stands for $\langle S\mathbf {x}\rangle \equiv \sum _j (S\mathbf {x})_j$, and $s$ is a constant.", "The matrices $A$, $B$, $P$, and $S$ have the same column size $N$, but their row sizes are, in general, all different.", "We solve this optimization problem using the ADMM of Boyd et al.
[68].", "A situation similar to the present case with constraints is considered in Refs.", "[58], [40].", "Here, we generalize them to includes general four matrices $A$ , $B$ , $P$ , and $S$ .", "Introducing auxiliary vectors $\\mathbf {z}$ and $\\mathbf {z}^{\\prime }$ , we rewrite the function $\\mathcal {F}$ in Eq.", "(REF ) as $\\begin{split}\\widetilde{\\mathcal {F}}(\\mathbf {x}, \\mathbf {z}, \\mathbf {z}^{\\prime })&= \\frac{1}{2} \\Vert \\mathbf {y}-A\\mathbf {x}\\Vert _2^2-\\nu (\\langle S\\mathbf {x}\\rangle -s)\\\\&+ \\lambda \\Vert \\mathbf {z}\\Vert _1^{\\vphantom{\\dagger }}+ \\lim _{\\gamma \\rightarrow \\infty } \\gamma \\sum _j \\Theta (-z^{\\prime }_j),\\end{split}$ where $\\nu $ is a Lagrange multiplier that enforces the sum-rule constraint.", "With the conditions $\\mathbf {z}= B \\mathbf {x}, \\quad \\mathbf {z}^{\\prime } = P \\mathbf {x},$ Eq.", "(REF ) is reduced to Eq.", "(REF ) plus the constraints in Eq.", "(REF ).", "The advantage of the latter form, $\\widetilde{\\mathcal {F}}(\\mathbf {x}, \\mathbf {z}, \\mathbf {z}^{\\prime })$ , is that the minimization with respect to $\\mathbf {x}$ , $\\mathbf {z}$ , and $\\mathbf {z}^{\\prime }$ can be done using analytical formulas.", "Therefore, our task is to make $\\mathbf {x}$ , $\\mathbf {z}$ , and $\\mathbf {z}^{\\prime }$ satisfy the condition, Eq.", "(REF ), keeping minimizing $\\widetilde{\\mathcal {F}}$ .", "In the ADMM approach, the constraints, Eq.", "(REF ), is imposed by the augmented Lagrange multiplier method.", "We here quote the update formulas from Ref.", "[40] with generalization to the four-matrices representation: $\\mathbf {x}&\\leftarrow \\left( A^\\mathrm {T} A + \\mu B^\\mathrm {T} B + \\mu ^{\\prime } P^\\mathrm {T} P \\right)^{-1}\\nonumber \\\\&\\quad \\times \\left( A^\\mathrm {T} \\mathbf {y}+ \\mu B^\\mathrm {T} (\\mathbf {z}- \\mathbf {u}) + \\mu ^{\\prime } P^\\mathrm {T} (\\mathbf {z}^{\\prime } - \\mathbf {u}^{\\prime }) + \\nu S^\\mathrm {T} \\mathbf {d} \\right)\\nonumber \\\\&\\quad \\equiv \\mathbf {\\xi }_1 + \\nu \\mathbf {\\xi }_2,\\\\\\mathbf {z}&\\leftarrow \\mathcal {S}_{\\lambda /\\mu } (B\\mathbf {x}+ \\mathbf {u}),\\\\\\mathbf {u}&\\leftarrow \\mathbf {u}+ B\\mathbf {x}- \\mathbf {z},\\\\\\mathbf {z}^{\\prime } &\\leftarrow \\mathcal {P}_{+} (P\\mathbf {x}+ \\mathbf {u}^{\\prime }),\\\\\\mathbf {u}^{\\prime } &\\leftarrow \\mathbf {u}^{\\prime } + P\\mathbf {x}- \\mathbf {z}^{\\prime },$ where $\\mathbf {d}$ is a vector with all elements being 1, $\\mathcal {P}_+$ is define by ${\\mathcal {P}_+(x)=\\max (x, 0)}$ , which truncates negative values to zero, and $\\mathcal {S}$ is the element-wise soft-thresholding function, which is defined for each element by $\\mathcal {S}_{\\lambda } (x) ={\\left\\lbrace \\begin{array}{ll}0 & (|x| \\le \\lambda ) \\\\x - \\mathrm {sgn}(x) \\lambda & (|x| > \\lambda )\\end{array}\\right.", "}.$ The Lagrange multiplier $\\nu $ is determined by $\\nu = \\frac{s-\\langle S \\mathbf {\\xi }_1 \\rangle }{\\langle S \\mathbf {\\xi }_2 \\rangle }.$ The parameter $\\mu $ and $\\mu ^{\\prime }$ are penalty parameters, which will be explained later.", "As an initial condition, all vectors $\\mathbf {x}$ , $\\mathbf {z}$ , $\\mathbf {u}$ , $\\mathbf {z}^{\\prime }$ , and $\\mathbf {u}^{\\prime }$ are set to zero.", "The most expensive computation in this calculation is the matrix inversion in Eq.", "(REF ).", "We compute the LU decomposition of the matrix $M \\equiv A^\\mathrm {T} A + \\mu B^\\mathrm {T} B + \\mu ^{\\prime } P^\\mathrm {T} P$ before 
starting the iteration.", "(The Cholesky decomposition could be applied instead of the LU decomposition, because the matrix $M$ is real symmetric; however, we confirmed that the LU decomposition was faster in our implementation using SciPy.)", "Using this result, linear equations are solved in each iteration to update $\mathbf {x}$.", "The cost of the LU decomposition is $O(N^3)$, while the cost of the updates is $O(N^2)$, where $N$ is the dimension of $\mathbf {x}$ ($N$ should be replaced with $N_\mathrm {irr}$ when the symmetry is applied as presented in Appendix ).", "Therefore, the computational cost and the memory storage required for the LU decomposition determine the upper limit of the system size.", "Convergence of the iteration should be checked from two perspectives.", "One is the residual error of the constraint (REF ), namely, ${r\equiv \Vert \mathbf {z}- B \mathbf {x}\Vert _2^{\vphantom{\dagger }}}$.", "The other is the convergence of the variables, e.g., ${s\equiv \Vert \mathbf {z}_{k+1} - \mathbf {z}_k \Vert _2^{\vphantom{\dagger }}}$, where $k$ indicates the quantity at the $k$-th iteration.", "A fast convergence is achieved when $r$ and $s$ are of the same order.", "The relative magnitude of $r$ and $s$ depends on $\mu $: larger values of $\mu $ reduce $r$, since $\mu $ is the penalty against the constraints (REF ).", "Therefore, if the convergence of $r$ is slower than that of $s$, one should increase $\mu $, and vice versa.", "See Ref. [68] for more details.", "A compact Python sketch of the update loop is also given below." ] ]
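The Type-I correspondence can be checked numerically. The following Python sketch (the grid size and the Gaussian profile are illustrative assumptions, not data from this work) evaluates $B_n$ with scipy.fft.dct and cross-checks one coefficient against the explicit sum in the equation above.

```python
import numpy as np
from scipy.fft import dct

# Illustrative grid and profile (placeholders, not data from the paper).
N, p_max = 101, 10.0
p = np.linspace(0.0, p_max, N)           # uniform grid, Delta_p = p_max/(N-1)
J = np.exp(-p**2)                        # even profile, J(p_z) = J(-p_z)
dp = p_max / (N - 1)

B = dp * dct(J, type=1)                  # SciPy Type-I DCT reproduces the bracket
z = np.arange(N) * np.pi / p_max         # z_n = n*pi/p_max

# Cross-check one coefficient against the explicit sum above.
n, k = 3, np.arange(1, N - 1)
B_ref = dp * (J[0] + (-1)**n * J[-1]
              + 2.0 * np.sum(J[k] * np.cos(np.pi * k * n / (N - 1))))
assert np.isclose(B[n], B_ref)
```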
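The ADMM update formulas above translate directly into code. The sketch below is a minimal implementation under simplifying assumptions: dense NumPy matrices, a fixed number of iterations instead of the $r$/$s$ convergence checks, and illustrative default values for the penalty parameters $\mu $ and $\mu ^{\prime }$.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def soft_threshold(v, lam):
    # Element-wise soft-thresholding S_lambda(v) defined above
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def admm_generalized_lasso(A, B, P, S, y, s, lam, mu=1.0, mu_p=1.0, iters=500):
    # LU-factorize M = A^T A + mu B^T B + mu' P^T P once (O(N^3));
    # each iteration then only solves linear systems (O(N^2)).
    M = A.T @ A + mu * B.T @ B + mu_p * P.T @ P
    lu = lu_factor(M)
    xi2 = lu_solve(lu, S.T @ np.ones(S.shape[0]))     # nu-independent part of x
    z, u = np.zeros(B.shape[0]), np.zeros(B.shape[0])
    zp, up = np.zeros(P.shape[0]), np.zeros(P.shape[0])
    for _ in range(iters):
        xi1 = lu_solve(lu, A.T @ y + mu * B.T @ (z - u) + mu_p * P.T @ (zp - up))
        nu = (s - np.sum(S @ xi1)) / np.sum(S @ xi2)  # sum-rule multiplier
        x = xi1 + nu * xi2
        z = soft_threshold(B @ x + u, lam / mu)       # L1 auxiliary update
        u += B @ x - z
        zp = np.maximum(P @ x + up, 0.0)              # projection P_+
        up += P @ x - zp
    return x
```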
2210.07701
[ [ "Mechanical features based object recognition" ], [ "Abstract Current robotic haptic object recognition relies on statistical measures derived from movement dependent interaction signals such as force, vibration or position.", "Mechanical properties that can be identified from these signals are intrinsic object properties that may yield a more robust object representation.", "Therefore, this paper proposes an object recognition framework using multiple representative mechanical properties: the coefficient of restitution, stiffness, viscosity and friction coefficient.", "These mechanical properties are identified in real-time using a dual Kalman filter, then used to classify objects.", "The proposed framework was tested with a robot identifying 20 objects through haptic exploration.", "The results demonstrate the technique's effectiveness and efficiency, and that all four mechanical properties are required for best recognition yielding a rate of 98.18 $\\pm$ 0.424 %.", "Clustering with Gaussian mixture models further shows that using these mechanical properties results in superior recognition as compared to using statistical parameters of the interaction signals." ], [ "Introduction", "As robots are increasingly used in various fields such as agriculture, they have to manipulate objects of different mechanical properties skillfully.", "For instance, to harvest tomatoes or potatoes with similar shape, it is necessary to know their respective mechanical properties to handle them without dropping or crushing them.", "To recognize the objects a robot is interacting with, it is necessary to extract the unique features that characterize them .", "While geometric features can be used to identify solid objects , , , the shape of compliant objects changes with interaction such that shape alone is not sufficient to identify them.", "It is therefore necessary to characterize objects through mechanical parameters extracted during interaction.", "Compliant objects can be recognised by using tactile information obtained during haptic interaction such as force and vibrations.", "Empirical measures of these signals, such as the maximum, minimum and variance have been used for classification , , .", "While these interaction features can be used for object recognition, their value depends on specific actions, and their use can be highly redundant leading to high computational cost.", "The intrinsic mechanical features of objects may yield a more specific representation and thus lead to a more efficient recognition.", "These material properties describe an object's behavior in response to a load, for example the energy loss during impact can be characterized by the coefficient of restitution .", "The deformation and restoration of the surface in response to a force exerted perpendicular is characterised by the material's viscoelasticity.", "Similarly when applying a tangential force, the resistance to sliding can be characterized through the roughness.", "These parameters have been previously considered to estimate mechanical properties from tactile information.", "The coefficient of restitution is an important property in characterising how a body reacts during impact.", "While this property has rarely been used for object recognition, related features have been extracted from acoustic and acceleration data by investigating signal magnitude in the frequency domain , or applying unsupervised learning methods or statistical tools , .", "The consideration of acceleration peak has also been used as a similar 
impact-related measure for object recognition, proving able to recognise five different materials.", "Compliance-related features characterise deformation in response to continuous forces.", "Empirically, these features can be estimated by analyzing the normal force signal during interaction.", "Such approaches have been used to estimate stiffness, and to infer how full a bottle is when grasped.", "Stiffness, however, only characterises the static response.", "To estimate both stiffness and viscosity, a recursive least-squares algorithm has been used, as well as a Gaussian process.", "This estimation has then been applied to object recognition in simulation.", "To characterise the response to sliding, roughness-related features are typically used.", "These features are typically estimated through the force or vibration occurring in the tangential direction during sliding.", "In addition, a constant Coulomb model is commonly used to identify a surface's friction.", "Using the surface's friction along with geometrical information, a robot could recognise 18 household objects with different shapes and materials.", "By considering dynamic friction parameters, the robot-environment interaction can be modelled with a quasi-static LuGre model.", "Using such dynamic friction parameters can benefit the classification of objects with different surface materials.", "These previous works show how a single mechanical property can be used to recognise objects.", "However, multiple objects may exhibit the same value of a specific mechanical property and thus cannot be distinguished by it.", "Integrating the collection of mechanical property estimations into a haptic exploration framework may improve object recognition.", "However, there is currently no method to estimate multiple mechanical properties simultaneously and use them together.", "This prompted us to develop a framework for the identification of mechanical properties and their use to recognise specific objects, which is presented in this paper.", "In this new approach, the coefficient of restitution, stiffness, viscosity and friction coefficient are estimated from the interaction force during haptic exploration.", "Our work builds upon a previous approach that adapted viscoelastic parameters to maintain a stable interaction.", "To address issues with parameter oscillation, we used a dual Kalman filter to account for sensory noise.", "We further added the estimation of the coefficients of friction and restitution.", "The resulting method is first validated in simulation.", "The role of each mechanical parameter in object recognition is then investigated, before our method is compared to representative statistical and empirical methods from the literature.", "Fig. REF shows the overall recognition framework with its three components: identification, control, and object recognition.", "These components work together to identify and classify unknown objects based on their mechanical properties.", "A robot, driven by the controller, interacts mechanically with objects to retrieve interaction force data as well as its position.", "The robot's estimator first estimates the coefficient of restitution when touching the object's surface.", "A dual extended Kalman filter (DEKF) is then used to identify the object's stiffness, viscosity and friction coefficient online from the signals of haptic sensors.", "These mechanical features are also used to adapt the
controller's parameters so as to interact with each object properly.", "Additionally, the features based on the estimated mechanical parameters are combined to form the dataset that feeds the object recognition algorithms, in order to identify and cluster objects.", "This process is performed offline after the haptic exploration.", "Figure: Diagram of the object recognition process.", "The end-effector force and position measured during interaction are utilized to identify mechanical properties using an estimator.", "The estimated mechanical features are then used to recognize objects and adjust the motor command with the controller." ], [ "Online estimation and control", "This section describes the online estimation and control.", "First, a discrete impact model and a continuous interaction model are introduced to capture the robot-environment interaction at different stages.", "Using these models, the estimation of the impact property (coefficient of restitution) and of the continuous interaction properties (stiffness, viscosity and friction coefficient) is presented.", "Finally, the interaction controller used to drive the robot smoothly during its interaction with the environment is explained." ], [ "Interaction model", "Let the dynamics of an $n$-DOF robot interacting with its environment be described by $M(q)\ddot{x} + C(q,\dot{q})\dot{x} + G(q) \,=\, u + F + \omega \,,$ where $x$ is the coordinate of the end effector in operational space and $q$ is the vector of joint angles.", "$M(q)$ and $C(q,\dot{q})$ represent the inertia and Coriolis matrices and $G(q)$ the gravitational vector; $u$ is the control input and $\omega $ motor noise.", "The interaction force $F$ can be modelled with a mass-spring-damper system in the normal direction and Coulomb friction in the tangential direction: $\begin{split}F = \begin{bmatrix}F_\perp \\F_\parallel \end{bmatrix}& = \begin{bmatrix}F_0 + \kappa \,x + d\,\dot{x}\\\mu F_\perp \end{bmatrix},\end{split}$ where $F_0 = - \kappa \,x_0$ is the force corresponding to the surface rest length $x_0$ (without interaction), $\kappa $ is the surface stiffness, $d$ its viscosity, and $\mu $ its friction coefficient." ], [ "Impact estimation", "The initial contact of a robot with an object occurs in two phases: deformation and restoration.", "The deformation phase begins at the initial point of contact and continues until maximum deformation.", "It is followed by the restoration phase, from the time of maximum deformation until separation occurs.", "By investigating the impulses of these two phases, the coefficient of restitution is defined as the ratio of the normal impulse of restoration to the normal impulse of deformation: $\widehat{\psi } = \, \frac{R}{D} \,=\, \left|\frac{\int _{t^0}^{t^+} F_\perp \,dt}{m_\perp [ \dot{x}_{\perp }(t^0) + \dot{x}_{\perp }(t^-)]}\right| \,.$", "Here, $D$ is the momentum change from 0.01 s before the collision, $t^-$, to the time of maximum deformation, $t^0$.", "$R$ integrates the normal force from $t^0$ to 0.01 s after the collision, $t^+$."
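As a concrete illustration, the following Python sketch evaluates $\widehat{\psi }$ from sampled force and velocity signals; the function signature, the trapezoidal integration and the interpolation of the velocities are implementation assumptions, not part of the original method.

```python
import numpy as np
from scipy.integrate import trapezoid

def restitution(t, f_n, v_n, t_minus, t0, t_plus, m_n):
    """psi_hat = |R/D| from the equation above: R integrates the normal
    force over the restoration window [t0, t_plus]; D is the deformation
    momentum m_n * (v(t0) + v(t_minus)). t_minus and t_plus are 0.01 s
    before and after the collision, t0 the time of maximum deformation."""
    mask = (t >= t0) & (t <= t_plus)
    R = trapezoid(f_n[mask], t[mask])                 # restoration impulse
    D = m_n * (np.interp(t0, t, v_n) + np.interp(t_minus, t, v_n))
    return abs(R / D)
```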
], [ "Continuous properties estimation", "We assume that the robot can measure the end-effector position (e.g.", "from joint encoders) as well as the force normal to the surface subjected to a large noise $\\nu $ .", "For the estimation, the system dynamics become nonlinear due to the coupling of the robot's states and mechanical parameters in the interaction force model (REF ).", "In discrete state-space form, the dynamics of the robot interacting with the environment is: $\\xi _{k+1} \\!\\!\\!\\!&=&\\!\\!\\!\\!", "f(\\xi _k,u_k,\\theta _k) + \\omega _k \\nonumber \\\\\\eta _k \\!\\!\\!\\!&=&\\!\\!\\!\\!", "h(\\xi _k) + \\nu _k \\\\\\xi \\!\\!\\!\\!& \\equiv &\\!\\!\\!\\!", "\\left[\\!\\!", "\\begin{array}{c} x_\\perp \\\\ \\dot{x}_\\perp \\\\ x_\\parallel \\\\ \\dot{x}_\\parallel \\\\ \\mu \\end{array} \\!\\!\\right], \\,\\,\\,\\eta \\equiv \\left[ \\!\\!", "\\begin{array}{c} x_\\perp \\\\ x_\\parallel \\end{array} \\!\\!", "\\right], \\,\\,\\,u \\equiv \\left[ \\!\\!", "\\begin{array}{c} u_\\perp \\\\ u_\\parallel \\end{array} \\!\\!", "\\right], \\,\\,\\,\\theta \\equiv \\left[ \\!\\!", "\\begin{array}{c} F_0 \\\\ k \\\\ d \\end{array} \\!\\!", "\\right] \\nonumber $ where $f$ is a nonlinear mapping obtained from (REF ) and $h$ is a nonlinear mapping between the states and observation.", "The augmented state $\\xi $ consists of the robot's states and the friction parameters, $u$ are motor commands, where $\\eta $ is the measured robot's positions and $\\theta $ is the viscoelasticity vector.", "Due to the system's nonlinearity, noise and the coupling between the states and parameters, we employ the dual extended Kalman filter method to estimate the robot's state and interaction mechanics' parameters simultaneously.", "The dual Kalman filter is a recursive estimation process, which uses partial measurements to estimate the parameters in the model before integrating the updated model and measurements to estimate the hidden states.", "Fig.", "REF depicts the two designed estimators.", "Estimator 1 estimates the state $\\xi $ , which includes the robot's states and friction parameter.", "Estimator 2 then estimates the viscoelasticity parameters $\\theta $ from the measured normal force.", "In principle, the prediction error cost will be minimized when the estimated parameters $\\hat{\\theta }, \\hat{\\mu }$ converge on the real values $\\theta , \\mu $ while the estimated states $\\hat{\\xi }$ converge to the real states $\\xi $ .", "Figure: Diagram of dual extended Kalman filter combining two estimators.", "At every time step, Estimator 1 identifies the robot states and friction coefficient based on the measured position and estimated interaction force.", "Estimator 2 identifies the object elastic-viscosity parameters and interaction force based on the measured force and estimated robot state.Figure: Estimator in simulation.", "(a) Filtering of the robot position in the normal (top) and tangential (bottom) directions.", "(b) Identified mechanical properties of the two environments.", "From top to bottom: feedforward force, stiffness, viscosity and friction coefficient." 
], [ "Robot's states estimation", "The robot's states $\\xi $ and friction parameter $\\mu $ will be estimated together by using the nonlinear stochastic state-space model (REF ), with the linearization $\\begin{split}\\xi _{k+1} &= \\, A_k\\, \\xi _{k} + B_k \\, u_{k} + \\omega _k \\\\\\eta _{k} &= \\, C_{k} \\, \\xi _{k} + \\nu _k\\end{split}$ $\\begin{aligned}A_k & = \\left.\\frac{\\partial f(\\xi ,u,\\theta )}{\\partial \\xi }\\right|_{(\\hat{\\xi }_k, \\hat{\\theta }_k,u_k)} =\\begin{bmatrix}1 & \\triangle & 0 & 0 & 0 \\\\0 & 1 & 0 & 0 & 0 \\\\0 & 0 & 1 & \\triangle & 0 \\\\0 & 0 & 0 & 1 & \\hat{F}_{\\perp k}/m_\\parallel \\\\0 & 0 & 0 & 0 & 1 \\\\\\end{bmatrix},\\\\B_k & =\\left.\\frac{\\partial f(\\xi ,u,\\theta )}{\\partial u} \\right|_{(\\hat{\\xi }_k,\\hat{\\theta }_k,u_k)} =\\begin{bmatrix}0 & 0 \\\\0 & \\triangle /m_\\perp \\\\0 & 0 \\\\\\triangle /m_\\parallel & 0 \\\\0 & 0 \\\\\\end{bmatrix},\\\\C_k & = \\left.\\frac{\\partial h(\\xi )}{\\partial \\xi }\\right|_{(\\hat{\\xi }_k,\\hat{\\theta }_k,u_k)} = \\begin{bmatrix}1 & 0 & 0 & 0 & 0 \\\\0 & 0 & 1 & 0 & 0\\end{bmatrix},\\end{aligned}$ where $m_\\perp $ and $m_\\parallel $ are the mass in the normal and tangential directions, and $\\triangle $ is the integration time step.", "$\\hat{F}_\\perp $ is the estimated normal force from the environment model (REF ).", "The Kalman Filter to estimate $\\xi $ is then designed as $\\hat{\\xi }_{k+1} = \\, \\hat{\\xi }_{k+1}^-+ K_{\\xi , k+1}(\\eta _{k}-C \\hat{\\xi }_{k+1}^-) \\\\$ where $\\hat{\\xi }_{k+1}^- = f(\\hat{\\xi }_{k},u_{k},\\hat{\\theta }_{k})$ is the predicted states obtained by using the last estimated states and $K_{\\xi , k+1}$ is the filter gain for state estimation." ], [ "Viscoelasticity parameters estimation", "The stiffness parameter $\\theta $ can be estimated by using the measured normal force and interaction force model (REF ), and the estimated robot's state $\\hat{\\xi }$ .", "The EKF is used to estimate viscoelasticity parameters by considering the following state-space model: $\\begin{split}\\theta _{k+1} &= \\theta _k + \\omega _k \\\\\\eta _{\\theta ,k} &= h_\\theta \\left(\\xi _{k},\\theta _{k}\\right) + \\nu _k \\,.\\end{split}$ The observer for the estimation of viscoelasticity parameters is given by: $\\hat{\\theta }_{k+1} =\\hat{\\theta }_{k}+K_{\\theta ,k+1}(\\eta _{\\theta ,k}-C_{\\theta ,k}\\hat{\\theta }_{k})$ where $K_{\\theta ,k+1}$ is the Kalman filter gain for parameter estimation, $\\eta _{\\theta ,k}$ is the measured normal force, and the output matrix is $C_{\\theta ,k} = \\left.\\frac{\\partial h_\\theta ^T \\!\\left(\\xi _{k},\\theta \\right)}{\\partial \\theta } \\right|_{(\\hat{\\xi }_k,\\hat{\\theta }_k)} =\\begin{bmatrix}1 & \\hat{x}_{\\perp k} & \\dot{\\hat{x}}_{\\perp k} \\\\\\end{bmatrix}.$" ], [ "Interaction control", "To enable the robot to smoothly track a predefined trajectory $r = [x_{\\perp r},x_{\\parallel r}]^T$ during the interaction with the environment, an interaction controller using the estimated mechanical parameters is defined through $u_k = \\iota _k + \\phi _k \\, .$ The feedforward component $\\iota $ compensates for the interaction force using the predictive model (REF ).", "It is updated recursively with the estimated mechanical properties according to $\\iota _k = -\\left[\\!\\!\\begin{array}{c} \\hat{F}_{\\perp k} \\\\ \\hat{F}_{\\parallel k} \\end{array} \\!\\!\\right].$ The feedback component to track the target trajectory is defined as $\\phi _k = - K_P \\, e_k - K_D \\, \\dot{e}_k$ with the error $e_k = 
x_k - r_k$ and control gains $K_P$ and $K_D$.", "To avoid overloading while in contact with a stiff surface, the control input is saturated: $\tilde{u}_{k} = sat_M(u_{k})$ with $\begin{aligned}sat_M(s) = \left\lbrace \begin{matrix}& s \quad & |s|\le M \\& -M \,\,\,\quad & \,\, s<-M<0 \\& M \quad & s>M>0 \, .\end{matrix} \right.\end{aligned}$" ], [ "Simulation", "We first test the designed estimator by simulating a robot interacting with different environments.", "A desired trajectory was designed as $r =\begin{bmatrix}x_{\perp r} \\x_{\parallel r}\end{bmatrix}=\begin{bmatrix}0.012\sin (15t) \\0.01t\end{bmatrix} m\,, \quad t \in [0,20]\,s.$", "A sinusoidal movement was used in the normal direction that satisfies the persistent excitation condition, thus ensuring that the estimator has suitable information to identify the viscoelastic parameters.", "The amplitude and frequency were adjusted according to the allowed surface deformation.", "In the tangential direction, sliding at constant speed was used to yield a homogeneous lateral contact.", "Two objects with different mechanical properties were considered: a stiff-and-smooth surface with {$F_0$ = 1 N, k = 1000 N/m, d = 10 Ns/m, $\mu $ = 0.5} and a soft-and-rough surface with {$F_0$ = 1 N, k = 500 N/m, d = 30 Ns/m, $\mu $ = 1.25}.", "The control and estimation parameters used in the simulations were {$K_P$ = 1000 kg/s$^2$, $K_D$ = 200 kg/s, $P_{\xi ,0}$ = 10$\,I_5$, $P_{\theta ,0}$ = 5 $I_3$, $\triangle $ = 0.001 s}, where $I_5$, $I_3$ are identity matrices, with sensory noise covariance R = 4$\times $10$^{-4}$ and process noise covariance Q = 2.5$\times $10$^{-3}\, I_5$.", "Fig. REF a shows that the estimator identifies the position and velocity to values close to the real ones even with large measurement noise.", "The estimated kinematic values were then fed back to the controller.", "As a result, the robot could track the target positions during the interaction in both environments.", "The estimated mechanical properties of the objects are shown in Fig. REF b.", "The estimation stabilized, after an approximately 2 s settling period, to values close to the ground truth (shown in Table REF ) for all mechanical properties in both the stiff and compliant environments.", "This shows that the mechanical properties can be estimated together with the robot's states for different objects.", "Note that the coefficient of restitution was not estimated here, as it is not involved in the continuous interaction and is computed directly from (REF ).", "Table: Average value of the mechanical parameters in the interval [10,20] s obtained by the estimator in simulation.", "Figure: Experimental setup.", "(a) HMan robot with a sensorized finger and an object to examine.", "A wooden platform frame is used to attach various objects for the robot to explore.", "The finger is driven by two motors in the normal and tangential directions to the object's surface, with the force sensor facing it.", "(b) Diagram of the robot's finger interacting with an object's surface.", "(c) Objects used in the experiment."
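One step of Estimator 2 described above can be sketched as follows: the measurement model $F_\perp = F_0 + \kappa x + d\dot{x}$ gives the output matrix $C_{\theta } = [1, \hat{x}_{\perp }, \dot{\hat{x}}_{\perp }]$. The value of $R$ follows the sensory-noise covariance quoted for the simulation, while the random-walk covariance $Q$ for the parameters is an assumed value.

```python
import numpy as np

def update_theta(theta_hat, P, x_n_hat, v_n_hat, f_meas, R=4e-4, Q=2.5e-3):
    """One EKF step for theta = [F0, kappa, d] given the estimated state
    from Estimator 1 and the measured normal force f_meas."""
    C = np.array([1.0, x_n_hat, v_n_hat])        # output matrix C_theta
    P = P + Q * np.eye(3)                        # random-walk prediction
    K = P @ C / (C @ P @ C + R)                  # Kalman gain K_theta
    theta_hat = theta_hat + K * (f_meas - C @ theta_hat)
    P = (np.eye(3) - np.outer(K, C)) @ P
    return theta_hat, P
```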
], [ "Experimental validation", "To validate the designed estimator with real objects and collect data for object's recognition, the HMan robot shown in Fig.", "REF a used its finger to interact with objects while estimating their mechanical properties.", "A six-axis force sensor (SI-25-0.25; ATI Industrial Automation) was mounted between the tip and base of the robot's finger shown in Fig.REF b to measure the interaction forces.", "Note that no tangential force was required for real-time mechanical parameter estimation.", "The robot interacted with the 20 objects shown in Fig.", "REF c. The estimator was implemented on the robot to identify the mechanical properties of tested objects.", "Three actions were implemented to haptically explore the objects: Tapping: The robot made a first contact with the objects in the normal direction to estimate surface's elasticity.", "Indentation: The robot moved its finger on the objects over the normal direction with the desired trajectory $x_{\\perp r}=0.01\\sin (8\\tau )+0.01\\,m$ , $\\tau \\in (0,20]\\,s$ to estimate the surface impedance.", "Sliding: The robot slid its finger in the tangential direction along with object's surface at 0.04 m/s while applying a 4 N large force in the normal direction.", "The estimation was validated through 25 trials for each pair of action and object.", "The resulting estimated coefficient of restitution are listed in Table REF for four representative objects, while Fig.", "REF shows the estimated stiffness and friction coefficient estimations for some example trials.", "These results show that the estimations converged and that they resulted in unique values for different objects.", "Table: Estimated coefficient of restitutionFigure: Example of estimated mechanical properties values as a function of time.", "(a) Stiffness values obtained from three soft objects.", "(b) Friction coefficient values obtained from three hard objects." 
], [ "Objects' recognition", "Object recognition was performed using the experimental data of Section REF .", "The estimated coefficient of restitution was directly used as a feature.", "The mean values of the estimated stiffness, viscosity and friction coefficient from the last $2\\,s$ of interaction were used to extract the steady-state values for additional features.", "To compare the object recognition enabled by the mechanical property features with that of previous methods used in the literature, 35 such statistical features were extracted from the raw force data.", "Considered were the mean and maximum values, and the standard deviation from the interaction force in each direction as well as from their magnitude were extracted.", "In addition, a value of the normal interaction force from the first contacts was used as another feature (referred to as the “tap peak“).", "The frequency spectrum of the force in both directions was obtained using a fast-Fourier transformation (FFT), which was averaged into four frequency bands: [0,35], [36,65], [66,100] and [101,500] Hz, where these intervals were identified in a preliminary data examination to characterise the interaction.", "The mean values of vibration amplitude for the frequency bands were also used as features.", "To avoid overfitting, these statistical features were ranked using a feature selection method.", "The Chi-square test was applied since it is commonly used to evaluate features by testing their independence with class label.", "Finally, five feature sets were also formed to recognize objects, based on the mechanical properties features, statistical features and empirical mechanical properties features as shown in Table REF .", "Table: Features' sets used to recognize objects.These feature's sets were used to evaluate their performances in object recognition using supervised and unsupervised learning methods as described in Table REF .", "The Naive Bayes classifier was selected as it exhibited superior performance than other classifiers for supervised learning.", "Gaussian mixture models (GMMs) were used to investigate clustering with unknown labels.", "These clustering results were evaluated by comparing them with the known labels using normalised mutual information defined as: $\\text{NMI} =\\frac{2\\,MI(C;L)}{H(C)+H(L)} \\\\$ where $MI(C;L)$ is the mutual information between a set of clustering results $C=\\lbrace c_1, c_2,...c_N\\rbrace $ and known labels $L=\\lbrace l_1, l_2,...l_N\\rbrace $ : $MI(C;L) =\\sum _{i}\\, \\sum _{j}\\, p(c_i \\cap l_j)\\,\\log \\,\\frac{p(c_i \\cap l_j)}{p(c_i)\\cdot p(l_j)}\\ \\ \\\\$ and $H(\\cdot )$ is an entropy: $H(X)= -\\sum _{\\begin{array}{c}x \\in X\\end{array}}\\,p(x)\\, \\log \\,p(x) \\,.$ NMI evaluates how random the generated clusters are with respect to the known labels in a range of [0,1], where 1 means the clusters are perfectly generated according to the known labels and 0 that they are generated randomly.", "Table: Learning method, feature selection algorithms and dataset used for classification.", "The abbreviations are defined in Table .Figure: Confusion matrix obtained by using (a) friction coefficient (b) stiffness and viscosity (c) coefficient of restitution (d) all mechanical properties as features.", "A colour blue corresponds to correct classified objects while a colour red corresponds to an incorrect classified objects." 
], [ "Classification with mechanical properties", "To understand how each of the estimated mechanical properties impact objects' classification, an estimation using a single feature was first performed, then gradually other features were added to feed the classifier.", "The classification was evaluated through a four-fold cross-validation using a 3:1 train:test ratio with 100 repetitions.", "Fig.", "REF shows the object recognition confusion matrix results using the friction coefficients (a), stiffness and viscosity (b), coefficient of restitution (c) and all estimated mechanical properties (d).", "It can be seen that by using only friction or the coefficient of restitution, the classifier cannot recognise all the objects, resulting in a recognition rate lower than 50%.", "These features only recognise a group of hard objects (classes 1-10) better than soft objects (classes 11-20) as shown in Figs.", "REF a,c.", "Using stiffness and viscosity increased the recognition rate to 74.45%, but could not differentiate hard objects (Fig.", "REF b).", "By using all four mechanical properties in the classifier, the recognition rate increased to 98.18% (Fig.", "REF d).", "The resulting confusion matrix exhibits almost perfect recognition with a rate over 90% for each object.", "There still is some confusion between pairs of object class, but for each object the misclassification rate is lower than 0.05 %, which can be considered as negligible.", "These results demonstrate the advantages provided by using the combination of different mechanical properties in order to classify various objects." ], [ "Objects' classification with mechanical properties vs. statistical features", "To examine the role of using the estimated mechanical properties for object classification compared to other sets of features, the classifier was used to find the recognition rates from the five sets of features described in Table III: mechanical property features (MP), statistical features (SF and CSSF) and empirical mechanical properties features (EMP1 and EMP2).", "These object classifications were evaluated by a four-fold cross-validation using 100 repetitions.", "Fig.", "REF a shows that using mechanical properties as features resulted in a recognition rate of 98.18 $\\pm $  0.424 %.", "On the other hand, the statistical features with and without feature selection resulted in a recognition rate of 92.2 $\\pm $  0.60 % and 89.7 $\\pm $  3.20 % respectively.", "Lastly, features used in reflecting on EMP1 provides 77.5 $\\pm $  5.07 % and features used in and reflecting on EMP2 yields 82.9 $\\pm $  0.91 %.", "These results show that mechanical properties provided the highest recognition rate while using a lower number of features and without needing tangential force sensing." 
], [ "Objects' clustering using mechanical properties or statistical features", "To study the benefit of using the coefficient of restitution stiffness, viscosity and friction coefficient together in an unsupervised learning method, GMMs clustering was used to perform a clustering task with the same five sets of feature as in section REF .", "We assumed that each cluster had its own diagonal covariance matrix and the number of clusters is set to 20, i.e.", "the number of tested objects.", "This clustering task was done and evaluated by NMI for 40 repetitions for each set of features.", "The evaluation of clustering results using NMI is shown in Fig.", "REF b.", "Using mechanical features as input data in a clustering task gave NMI values of 0.851 $\\pm $  0.03 which is similar to what the SF and CSSF provided at 0.863 $\\pm $  0.016 and 0.856 $\\pm $  0.018 respectively (p$>$ 0.05).", "However, the NMI results obtained using the MP were found to be significantly higher than the NMI results obtained using EMP1 and EMP2 (p$<$ 0.05).", "These results suggest that using four mechanical properties could provide the same results as with 35 statistical features.", "In addition, it also could outperform the other features representing empirical mechanical properties used in , and .", "Figure: Comparison of classification (a) and clustering using normalized mutual information (NMI) (b) with different feature sets described in Table III." ], [ "Discussion", "This paper introduced an object recognition framework based on the estimation of mechanical properties with a dual extended Kalman filter.", "This online identification extends and stably estimates the coefficient of restitution, stiffness, viscosity and friction parameters.", "The viability of this method was demonstrated in simulations and experiments.", "The classification performance was evaluated with 20 real-world objects.", "Using the four representative mechanical parameters, a recognition rate of 98.18 $\\pm $  0.424 % could be achieved using supervised learning, and clustering exhibited a normalized mutual information of 0.851 $\\pm $  0.03.", "Using only four mechanical properties resulted in a better classification and similar clustering as with 35 statistical features, suggesting that mechanical features entail a more compact and accurate representation than statistical features.", "Note that all of the coefficient of restitution, viscoelastic and friction parameters were required to distinguish objects.", "In particular, including the coefficient of restitution largely improved the recognition rate in comparison to when using only viscoelastic parameters.", "For example, using stiffness could not distinguish steel from wood as both are hard materials, but they had different impact properties as measured by the coefficient of restitution.", "The intrinsic mechanical properties identified in our scheme provided better and more consistent results than the empirical mechanical properties used in previous works , , .", "This illustrates the limitations of using empirical features to recognize objects, which may depend on the specific action used.", "For instance, the surface texture measure of was defined as a variance of the interaction force in the normal direction during a robot's finger sliding on object's surfaces, which may depend on the object's pose and robot's interaction and lead to inconsistent estimation results.", "In summary, this work emphasized the role of mechanical properties in haptic exploration, and how they can be 
used to reliably recognise different objects.", "The results demonstrated the superiority of mechanical property-based object recognition, yielding more reliable recognition than empirical properties and requiring far fewer features than approaches based on statistical features.", "Moreover, using this intrinsic object representation makes the framework flexible with respect to the choice of classification algorithm.", "While the presented system could successfully recognize objects during haptic exploration, considering the weight and inertial parameters would enable extending our framework from haptic exploration to full object manipulation with transport." ], [ "Full results for classification with mechanical properties", "Fig. REF shows the classification results for all combinations of mechanical properties used as features in the Naive Bayes classifier.", "From a single feature up to four features, the recognition rate increased with the number of mechanical properties.", "The highest value is achieved by using all four estimated mechanical properties.", "Figure: Classification results with different combinations of the mechanical features.", "$\hat{\psi }$ is the coefficient of restitution, $\hat{\kappa }$ the stiffness, $\hat{d}$ the viscosity and $\hat{\mu }$ the friction coefficient." ] ]
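For completeness, the GMM clustering evaluation referenced in the clustering section can be sketched as follows; the data are again synthetic stand-ins for the estimated features, and the random seed is an arbitrary choice.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import normalized_mutual_info_score

# Hypothetical feature matrix, as in the classification sketch above.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = rng.integers(0, 20, 500)

# 20 clusters with per-cluster diagonal covariances, as described in the text.
gmm = GaussianMixture(n_components=20, covariance_type="diag", random_state=0)
pred = gmm.fit_predict(X)
print("NMI:", normalized_mutual_info_score(y, pred))
```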
2210.07721
[ [ "Accelerating RNN-based Speech Enhancement on a Multi-Core MCU with Mixed\n FP16-INT8 Post-Training Quantization" ], [ "Abstract This paper presents an optimized methodology to design and deploy Speech Enhancement (SE) algorithms based on Recurrent Neural Networks (RNNs) on a state-of-the-art MicroController Unit (MCU), with 1+8 general-purpose RISC-V cores.", "To achieve low-latency execution, we propose an optimized software pipeline interleaving parallel computation of LSTM or GRU recurrent blocks, featuring vectorized 8-bit integer (INT8) and 16-bit floating-point (FP16) compute units, with manually-managed memory transfers of model parameters.", "To ensure minimal accuracy degradation with respect to the full-precision models, we propose a novel FP16-INT8 Mixed-Precision Post-Training Quantization (PTQ) scheme that compresses the recurrent layers to 8-bit while the bit precision of remaining layers is kept to FP16.", "Experiments are conducted on multiple LSTM and GRU based SE models trained on the Valentini dataset, featuring up to 1.24M parameters.", "Thanks to the proposed approaches, we speed-up the computation by up to 4x with respect to the lossless FP16 baselines.", "Differently from a uniform 8-bit quantization that degrades the PESQ score by 0.3 on average, the Mixed-Precision PTQ scheme leads to a low-degradation of only 0.06, while achieving a 1.4-1.7x memory saving.", "Thanks to this compression, we cut the power cost of the external memory by fitting the large models on the limited on-chip non-volatile memory and we gain a MCU power saving of up to 2.5x by reducing the supply voltage from 0.8V to 0.65V while still matching the real-time constraints.", "Our design results 10x more energy efficient than state-of-the-art SE solutions deployed on single-core MCUs that make use of smaller models and quantization-aware training." 
], [ "Introduction", "Novel speech-centric devices, e.g.", "miniaturized Hearing Aids, make use of AI-based methods to process audio data in real-time for improving the signal intelligibility.", "Given the small sizes, these devices present a limited energy budget: a lifetime of up to 20h can be achieved with a small 60mAh battery if the average power consumption is 10mW, considering sensing, computation and actuation costs.", "Because of the severe energy constraints, low-power Micro-Controller Units (MCUs) are typically chosen as Digital Processing Units to handle control and processing tasks.", "These processing units feature a limited computational power (single core CPU) and up to few MB of on-chip memory, making the integration process of complex AI speech processing pipelines extremely challenging.", "Speech Enhancement (SE), the ability of removing background noises from a (noisy) audio signal, is getting popular among the AI capabilities of speech sensors.", "While in the past SE methods relied on digital signal processing filters [1], [2], recent approaches integrate Deep Learning (DL) strategies, which have demonstrated a superior effectiveness to deal with non-stationary noises [10].", "To cancel out noise components, DL based approaches learn in a supervised fashion to estimate spectral suppression masks from a set of features extracted from the noisy speech.", "Among the causal models tailored for real-time computation, Recurrent Neural Networks have shown promising results [16], [6], [15], [8].", "These approaches encode the input signal, typically in the frequency domain (e.g.", "STFT or Mel spectrograms), into an embedding vector that feeds one or multiple recurrent layers, i.e.", "GRU or LSTM, acting also as memory components of the RNN based SE filter.", "The cleaned audio signal is reconstructed by decoding the outputs of the recurrent layers in a frame-by-frame streaming fashion.", "Unfortunately, current DL methods target real time execution on high-end devices [11] and are not fully-optimized for MCUs.", "Only [9] and [4] described design methodologies of RNN based SE models, with less than 0.5M parameters, for single-core MCUs.", "More in details, the NNoM framework was used to deploy the RNNoise model [14] on a single-core ARM Cortex-M MCU [9].", "The RNNoise algorithm includes small GRU layers with constrained activation ranges, leading to an effective 8-bit quantization.", "On the other side, TinyLSTM [4] made use of Quantization-Aware Training (QAT) [7] to compress an LSTM based model to 8 bit without accuracy degradation.", "Despite its effectiveness, the QAT technique is not always applicable because of the additional compute and data resources needed to simulate the non-linear quantization error at (re-)training time [5].", "Hardware-specific fine-tuning such as Block Pruning has been also developed to efficiently map SE RNNs on MicroNPU accelerators [12] Differently from these solutions, (i) we aim at a lossless and low-cost Post-Training Quantization methodology for scalable RNN-based SE algorithms and (ii) we investigate an optimized deployment flow for general-purpose multi-core MCUs, to achieve a final design more energy-efficient than state-of-the-art solutions.", "To this aim, we combine multiple strategies.", "Firstly, we target a multi-core compute platform with 1+8 RISC-V CPUs, featuring 8-bit integer (INT8) and 16-bit floating-point (FP16) MAC vector units.", "Secondly, we design an optimized software pipeline, in charge of scheduling at runtime 
parallel compute calls with manually-managed memory transfers, also from external L3 memories.", "To gain an almost lossless compression, we also propose a novel Mixed-Precision FP16-INT8 (MixFP16-INT8) Post-Training Quantization scheme, which quantizes only the RNN parameters and activations to INT8 while keeping the bit precision of the other tensors at FP16.", "This paper makes the following contributions: We present an optimized HW/SW design for LSTM and GRU based SE models for multi-core MCU systems with limited memory space.", "We propose an almost lossless Mixed-Precision FP16-INT8 Post-Training Quantization scheme to accelerate RNN-based SE on MCUs.", "We provide a detailed analysis of latency and HW/SW efficiency on a 22-nm RISC-V 1+8-core MCU.", "Our work demonstrates, for the first time, an optimized design for RNN-based SE models relying only on PTQ, without any need for expensive QAT, thanks to Mixed-Precision FP16-INT8.", "When benchmarked on the Valentini dataset, the trained RNN models show an average reduction of the PESQ and STOI scores of only 0.06 and 0.007.", "The proposed HW/SW design is $>$ 10$\times $ more energy efficient than state-of-the-art solutions deployed on single-core MCUs.", "Figure: TinyDenoiser models for Speech Enhancement on MCUs.", "Table: Characteristics of the RNN-based TinyDenoiser variants." ], [ "RNN based Speech Enhancement on Multi-Core MCUs", "This Section firstly describes the scalable RNN-based SE model family, denoted as TinyDenoiser, that we consider for this study.", "Second, we detail the target HW platform and the mapping of the proposed software pipeline.", "Lastly, we present our novel Mixed-Precision FP16-INT8 PTQ method." ], [ "TinyDenoiser models", "Fig. REF shows the TinyDenoiser pipeline.", "The model takes as input the STFT frequency map of a noisy speech signal and predicts a spectral gain mask, whose values are in the range $[0,1]$.", "In more detail, the audio input is sampled at 16kHz and the STFT frequency features are computed over a 25 msec audio frame, after Hanning windowing.", "For every audio frame, a total of 257 STFT magnitude values are fed into the model and 257 gain values are returned as output.", "The hop size is 6.25 msec (25% of the window length), which determines the real-time constraint of the inference task.", "The filtered frequency spectrum of the audio frame, computed by masking the noisy spectrum, is converted back to the time domain using an inverse STFT transform.", "In a real-time streaming processing scenario, the denoised speech signal is obtained by overlap-and-add operations on the cleaned audio frames.", "Drawing inspiration from TinyLSTM [4], the TinyDenoiser includes two RNN layers with a parametric output size of length $k$, a Fully-Connected (FC) input layer producing 257 features and two final FC layers, both producing 257 features.", "With the exception of the last layer, which features a Sigmoid activation to estimate the frequency gains, the FC layers are followed by a batchnorm and a ReLU activation.", "Concerning the RNN layers, we experiment with both LSTM and GRU layers with an output size of $k = \lbrace 128,256\rbrace $, giving rise to multiple variants of the TinyDenoiser denoted as LSTM256, GRU256, LSTM128 and GRU128.", "As reported in Table REF, these variants feature a number of parameters ranging from 0.4M to 1.24M.", "Note that the majority of the parameters (and of the operations) are due to the RNN layers (up to 84% for LSTM256).", "Figure: Micro-Architecture of the
target platform.", "The cluster on the right includes 1+8 cores.", "An external memory or an on-chip non-volatile memory can be used to permanently store the model parameters." ], [ "Memory Management for RNN deployment on the target HW", "Fig.", "REF depicts the architecture of the MCU platform targeted for the deployment of the RNN-based SE model.", "Internally, the system includes a cluster with 8 RISC-V CPUs tailored for computation and 1 core for control operation, denoted as the Cluster Controller (CC).", "The 1+8 cores can load data from a 128kB Tightly Coupled Data Memory, namely the L1 memory, in a single clock-cycle.", "Note that the L1 memory is not a data cache.", "Every core has a 8-bit MAC vector unit, capable of computing a dot-product between two $4\\times $ 8-bit vectors and accumulation in a single clock-cycle (i.e.", "4 MAC/clk), while 4 floating point units are shared among the 8 compute cores, implementing single-cycle $2\\times $ FP16 vector MAC (2 MAC/clk).", "Outside the cluster, the platform includes a 1.5 MB L2 memory; the cluster cores can access data from the L2 memory $\\sim 10 \\mathrm {x}$ slower than accessing the L1 memory.", "To reduce this overhead, the cluster DMA can be programmed by the CC core to copy data between L2 and L1 memories with a peak bandwidth of $8 \\mathrm {Byte/clk}$ .", "In the background of the DMA operations, the Control Core dispatches and synchronizes parallel tasks on the 8 compute cores.", "To deploy the RNN-based TinyDenoiser on the HW target platform, the layer parameters are permanently stored into a non-volatile memory.", "Because the storage requirements can grow up to several MBs, we use an off-chip FLASH memory (ExtFLASH), also denoted as L3 memory, with a capacity of 8MB and connected to the MCU via OctoSPI.", "Data can be copied from L3 to L2 memories in the background of other operations by programming the MicroDMA module.", "Note that the IO memory interface reaches a max bandwidth of $1 \\mathrm {Byte/clk}$ , 8$\\times $ slower than the L2 peak bandwidth.", "Alternatively, the on-chip eMRAM non-volatile memory can be used for permanent storage, gaining a lower power consumption and a higher bandwidth but the total capacity reduces to 2MB.", "At runtime, but before entering the infinite inference loop, layer-wise network parameters can be copied from L3 (either ExtFLASH or eMRAM) to L2, based on the available space.", "Thanks to this process, named tensor promotion, the time to copy parameters to the L1 memory during inference decreases linearly with respect to the amount of promoted tensors.", "If a parameter tensor does not fit the available L2 parameter buffer space, it is partitioned in sub-tensors that are sequentially loaded from L3 to L2.", "Besides storing the promoted parameters, the L2 memory must reserve space to store an activation buffer, for temporarily keeping the activation feature maps, and a parameter buffer, serving the dynamic load of not-promoted parameters from L3 to L2.", "During the inference task, the L1 memory inside the cluster acts as the working memory because of the fast access time from the compute cores: parameters and activation features are copied to this memory and then fetched concurrently by the cores.", "Because of the small size, the L1 memory is kept free from static tensor allocation.", "Activation or parameter tensors or sub-tensors are rather loaded from L2 to L1 at inference time using the Cluster DMA, as depicted in Fig.", "REF .", "Figure: Pseudo C code of layer-wise RNN 
processing.", "The CC core runs the code on the left.", "The parallel GRU and LSTM basic kernels dispatched on the compute cores are on the right.", "Biases are omitted for simplicity." ], [ "SW Computation Model", "The CC core runs the RNN-based SE inference SW code, which includes a sequence of layer-wise processing calls.", "Fig.", "REF shows the pseudo C-code for a RNN layer processing task; the same software architecture applies for FC layer processing.", "The input and output activation tensor arguments, including the RNN states, are L2 memory arrays.", "On the contrary, the RNN parameter array (Weights) can be stored in L2 or L3 memory, depending if any promotion occurred as discussed before.", "Every layer-wise function interleaves data copies from L3 and L2 memories to the L1 memory and calls to the compute tasks.", "These latter are dispatched and parallelized over the 8 compute cores of the cluster.", "To be more specific, the CC core programs the MicroDMA and Cluster DMA modules to operate, respectively, asynchronous data copies from L3 to L2 and from L2 to L1.", "Note that L3 transfers occurs only if layer parameters are not promoted to L2; in this case the MicroDMA is not programmed.", "Typically, input, weight and output tensors of a layer cannot entirely fit the L1 memory (limited to 128kB).", "For this reason, large tensors are sliced in sub-tensors, also referred as tiles, during the code generation process.", "The size of the tiles are computed such as to maximize the memory occupation of the available L1 memory.", "Therefore the layer-wise software routine implements a for loop to sequentially loads (with the DMAs) the tensor slices in the background of the computation that applies on the previously copied data (Fig.", "REF on the left).", "To realize this mechanism, we double the memory requirement of the tensor slices to account both the L1 memory needed by the compute cores and the memory buffer used by the DMA.", "Based on the proposed execution model, the minimal theoretical latency $\\tilde{t}_{layer}$ to process a layer can be estimated as: $ \\tilde{t}_{layer} = N_{tiles} \\cdot \\mathrm {max}(t_{dma}^{L3-L2}, t_{dma}^{L2-L1}, t_{core})$ where $N_{tiles}$ is the number of tiles, $t_{dma}^{L2-L1}$ and $t_{dma}^{L3-L2}$ are the latencies required by, respectively, the Cluster DMA and the MicroDMA to copy a single data tile from L2 to L1 and L3 to L2 and $t_{core}$ is the compute time due to the parallel task.", "Based on HW architecture described in Section REF , $ t_{dma}^{L3-L2} \\approx 8 \\times t_{dma}^{L2-L1} $ if considering an external SPI flash.", "$t_{dma}^{L3-L2}$ decreases when using instead the on-chip non-volatile memory (up to 2.6$\\times $ for eMRAM).", "Fig.", "REF shows on the right more in details the parallel SW kernels for LSTM and GRU computation.", "Both kernels consists of a 2 nested loops.", "The outer loop, which is parallelized over the available compute cores, iterates over the size of the output feature tile.", "The inner loop computes the MAC between the combination of input features and the previous state and the GRU or LSTM weight tensors.", "More specifically, we target INT8 or FP16 computation depending on the used quantization type.", "To speed-up the computation of the dot-product, we exploit vectorized INT8 and FP16 MAC instructions, which can perform respectively 4 or 2 MAC/cyc per core.", "Concerning the FP16 tanh and sigmoid functions applied on the accumulators, we use a fast approximation exploiting vectorized FP16 
instructions, while the INT8 version makes use of LUTs.", "Because of the high number of iterations of the inner loop, the total latency of the kernel is typically dominated by the computation of this loop.", "For INT8 LSTM and GRU, we estimate a minimal theoretical per-core latency of 9 (5 vect LD + 4 vect MAC) and 7 (4 vect LD + 3 vect MAC) clock cycles to compute 4$\times $ 4 and 3$\times $ 4 MAC operations, respectively.", "In the case of FP16, the software kernel computes half of the MAC operations during the same period, without considering the stalls that occur when concurrently accessing the shared floating-point units.", "Note that the peak computation power scales linearly with the number of compute cores, up to the memory bottleneck (see Eq. REF ).", "In fact, as we increase the number of compute cores, the total bandwidth requirement for the RNN computation exceeds the capacity of the target platform for both the L3 and L2 memories.", "For instance, an FP16 LSTM layer processed on 8 cores demands 8 (cores) $\times $ 5 (LD) $\times $ 2 (FP16 datatype) bytes every 9 cycles, which is much higher than the bandwidth from the ExtFlash memory (1 byte/clk).", "In this case, using a lower datatype, e.g. 8-bit, results in faster computation for multiple reasons.", "Firstly, the memory bandwidth requirement of INT8 kernels is 2$\times $ lower than that of FP16 ones.", "Secondly, the 2$\times $ memory saving can allow the model parameters to fit entirely in the on-chip non-volatile eMRAM memory.", "Lastly, smaller parameter tensors can be permanently promoted to the L2 memory.", "On the other hand, INT8 leads to a higher quantization error with respect to a full-precision model than FP16, potentially affecting the prediction quality of the full model.", "Our solution to this problem is discussed in the next section."
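The bandwidth argument above can be made concrete with a toy evaluation of Eq. (REF ); the tile sizes in the example are illustrative, while the bandwidth and throughput figures follow the text (1 B/clk for the external flash, 8 B/clk for the Cluster DMA, 4 MAC/clk per core for INT8).

```python
def layer_latency_cycles(n_tiles, tile_bytes, tile_macs, from_l3=True,
                         n_cores=8, macs_per_cyc_core=4):
    """Toy model of the minimal layer latency: the per-tile time is the
    maximum of the L3-L2 DMA, L2-L1 DMA and parallel compute times."""
    t_l3 = tile_bytes / 1.0 if from_l3 else 0.0   # MicroDMA, OctoSPI flash
    t_l2 = tile_bytes / 8.0                       # Cluster DMA, L2 -> L1
    t_core = tile_macs / (n_cores * macs_per_cyc_core)
    return n_tiles * max(t_l3, t_l2, t_core)

# Example: an INT8 weight tile of 64 kB (~1 MAC per weight byte) streamed
# from the external flash is heavily memory-bound.
print(layer_latency_cycles(n_tiles=4, tile_bytes=65536, tile_macs=65536))
```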
], [ "Mixed FP16-INT8 Post-Training Quantization", "TinyDenoiser models are quantized with Post-Training Quantization.", "We refer to the IEEE 754 standard for the FP16 format and quantization, i.e.", "a casting.", "On the other side, we follow [7] for INT8 symmetric quantization.", "According to this, every full-precision tensor $x$ is approximated as an integer tensor $X$ as: $X = \\biggl \\lfloor \\frac{clamp(x, q_{min}, q_{max} )}{S} \\biggl \\rceil ,\\qquad S = \\frac{q_{max} - q_{min} }{2^n-1}$ where $n$ is the number of bits, $S$ is the scale factor, which impacts the conversion resolution, and $[q_{min}, q_{max}]$ is the quantization range of an individual tensor.", "In particular, the PTQ routine estimates the quantization range of activation tensors (Eq.", "REF ) by collecting the intermediate tensor statistics after feeding a trained model with a set of calibration samples.", "For the parameters, we refer to the min/max values.", "For RNN-based TinyDenoiser models, we observe a degraded quality, measured using objective metrics (see Sec.", "), if using a uniform 8-bit quantization on the whole model.", "On the contrary, the FP16 quantization works lossless.", "We hypothesize the INT8 accuracy drop to originate from unbounded tensor ranges, e.g.", "STFT input or ReLU output, which are clamped after quantization (Eq.", "REF ) or causing a large scale factor $S$ .", "Quantization error propagates also over time on RNN-based models.", "However, we noticed that both LSTM layers and GRU layers use constrained activation functions (tanh and sigmoid), i.e.", "output features maps features a numerical range limited by design, with the exception of the LSTM C_state.", "This motivates us to quantize only the RNN layers, which demands the highest memory requirement of the whole model, to INT8 while leaving the rest to FP16.", "We named this quantization as Mixed-Precision FP16-INT8, also referred in short as MixFP16-INT8.", "To this aim, we restrict the tensor statistic collection during PTQ to the input, states and output values of the RNN layers.", "In addition, two extra-layers, computationally inexpensive, are inserted in the inference graph for data type conversion purpose between FP16 and INT8 nodes and viceversa, according to Eq.", "REF ." ], [ "Experimental Results", "Before showing the effectiveness, in terms of memory, latency and energy gains, of our optimized design, we report the accuracy of the trained SE models after quantization.", "Lastly, we compare our approach with state-of-the-art solutions.", "Table: STOI and PESQ scores and memory footprint of the quantized TinyDenoiser models after Post-Training Quantization using FP16, INT8 or MixFP16-INT8 options." 
], [ "Accuracy after Mixed-Precision PTQ", "We train the TinyDenoiser models on the Valentini dataset [13], which consists of clean and noisy speech audio clips from 28 speakers sampled at 16kHz.", "The training environment is taken from [3]: the loss functions is a weighted combination of the L1 loss and the STFT loss and an ADAM optimizer is used with a learning rate of 3e-4.", "We use a batch size of 64 and set 200 epochs of training.", "At the end of the training procedure, we select the trained model with the highest score on a validation dataset composed by audio clips of speakers p286 and p287, opportunely removed from the train set.", "For evaluation purpose, we refer to the PESQ and STOI objective metrics.", "We implement the Post-Training Quantization procedure as a module of the GAPflow toolsethttps://greenwaves-technologies.com/tools-and-software/.", "The script imports a trained full-precision graph and quantizes it to FP16, INT8 or MixFP16-INT8, before generating the application code for deployment purpose.", "We use 4 randomly-chosen samples of the validation set (p286_035, p286_166, p287_151, p287_269) for the calibration of the quantization ranges.", "In particular, we consider either the maximum absolute values of the activation parameters $x$ or $q_{max} = \\mathrm {mean}(x) + 3 \\cdot \\mathrm {std\\_dev}(x)$ , that we denote as max and std3 calibration settings, respectively.", "Additionally, we make use of a moving average filter in the estimation of the quantization ranges when feeding the models with multiple calibration samples as done in [7].", "Table REF reports the PESQ and STOI scores of the TinyDenoiser models on the Valentini test dataset after PTQ, together with the memory occupation (in MB) of the whole quantized parameters.", "The FP16 models are lossless with respect to FP32 trained models but gain $2\\times $ memory saving.", "On the contrary, despite the additional $2\\times $ memory compression factor, a uniform 8-bit quantization leads to a score degradation of, on average, 0.3 and 0.015 concerning the PESQ and the STOI metrics, respectively.", "We applied multiple combinations of max and std3 quantization ranges to the RNN layers activations (Clamp RNN in the table) or the FC layers, including the input of the SE model.", "For INT8, we observed max quantization ranges to bring benefits to the RNN layer quantization, therefore we applied this setting also for MixFP16-INT8 quantization.", "On the contrary, we have not found any experimental evidence to select between std3 or max on other layers.", "Overall, our proposed Mixed Precision FP16-INT8 PTQ recovers the accuracy degradation of INT8: on average, PESQ and STOI scores result to degrade of only 0.06 and 0.007, respectively.", "The effectiveness of the approach is also assessed by the 1.4–1.7$\\times $ less memory to store the model parameters.", "Figure: (Top) Latency, measured in terms of MAC/cyc, and (Bottom) Power Consumption, in mW, of the TinyDenoiser models running on the target HW." 
], [ "RNN-based SE inference performance on a multi-core MCU", "We analyze the effectiveness of the proposed software pipeline (Sec.", "REF ) and the novel quantization strategy by measuring the TinyDenoiser inference latency and energy consumption on a 22nm prototype of the target architecture (Sec.", "REF ).", "The chip prototype can be powered at 0.8V or 0.65V with a maximum clock frequency of 370MHz and 240MHz, respectively.", "More in details, we deploy LSTM256, GRU256, LSTM128, GRU128 models after FP16 and MixFP16-INT8 quantization.", "If exceeding 2MB of storage requirement for model parameters, we make use of an external FLASH memory while, on the contrary, the on-chip eMRAM memory can be used.", "This latter features a peak BW of $640 MB/sec$ , independently of the voltage supply.", "Figure REF reports on the top the measured inference latencies, expressed in terms MAC/cyc, and the MCU power consumption (in mW) on the bottom.", "In case of FP16 LSTM256 and GRU256 models, the ratio of parameters stored in the L3 memory over the total amount of parameters, denoted as $\\rho ^{L3}$ , achieves 0.84 and 0.79, thanks to the tensor promotion mechanism.", "However, the execution is L3 memory-bounded in this scenario.", "In accordance to the model of Eq.", "REF , the read time of a FP16 parameter from the ExtFlash takes 2 clock cycles that explains a latency close to 0.5MAC/cyc (every MAC requires one parameter to be loaded).", "Because of the activity of the external memory, an extra average power cost of 40-45 mW is measured, corresponding to $\\sim 50\\%$ of the total power.", "While FP16 LSTM256 cannot fit the on-chip non-volatile memory, the FP16 GRU256 can cut the extra power cost by storing the FP16 parameters into the eMRAM.", "The MCU power consumption increases because of the on-chip memory utilization, which was OFF before, and the higher density of operations (higher MAC/cyc) due to the higher eMRAM memory BW than the ExtFlash.", "If leveraging MixFP16-INT8 for LSTM256 and GRU256, the ratio $\\rho ^{L3}$ decreases to 0.45 and 0.33, meaning more tensors are promoted to L2 in contrast to FP16 quantization.", "Thanks to this and the faster INT8 kernels, the computation efficiency increases up to 1.9 and 2.2 MAC/cyc (one of the two RNN layer is still L3 memory-bound).", "At the same time, the power cost of the MCU increases because of the higher operation density.", "Lastly, we obtain a power saving of $\\sim 2\\times $ by reducing the power supply to 0.65V.", "Also note the MAC/cyc improves by up to 8% because the eMRAM bandwidth is independent from the system clock frequency, bringing benefits to the memory-bounded layers.", "On the other side, FP16 LSTM128 and GRU128 fits the eMRAM memory capacity and show a $\\rho ^{L3}$ ratio as low as 0.13 and 0.0, meaning that the majority or all the memory parameters are promoted to the L2 memory before the inference.", "This explains the high FP16 latency efficiency, reaching up to 2.2 MAC/cyc.", "The MixFP16-INT8 quantization further decreases latency by $1.8\\times $ and $1.3\\times $ .", "In case of $LSTM128$ the power consumption of MixFP16-INT8 slightly decreases with respect to FP16 because eMRAM is not used, while $GRU128$ presents a $1.8\\times $ higher power, in line with other settings.", "Scaling down the supply voltage do not contribute to a higher MAC/cyc metric because of the low (or null) L3 memory utilization, while the power consumption is reduced by $2.5\\times $ .", "Fig.", "REF also reports on the bottom the latency and 
the energy measures for the inference tasks in the most energy-efficient configuration.", "Even when reducing the clock frequency, the real-time constraints (6.25 msec) are met.", "When considering a duty-cycled operation with a sleep power much lower than the active power, the average power reduces to as low as 3mW for the smallest model.", "Table: Comparison with other SE solutions for MCUs." ], [ "Comparison with other works", "Table REF compares our solution with state-of-the-art SE solutions on MCUs: TinyLSTM [4], which is benchmarked on a STM32F7 MCU, and RNNoise [14], deployed on a low-power STM32L4 using the NNoM software with the CMSIS-NN backend [9].", "Both solutions leverage single-core devices and 8-bit quantization, which proves effective thanks to QAT and the model design constraint of using intermediate activation features with limited numerical ranges.", "Despite our solution being more subject to memory bottleneck issues because of $2.6-6\times $ more parameters and the higher bit precision, we achieve a top latency efficiency, up to $5.3\times $ and $6.7\times $ higher MAC/cyc than RNNoise and TinyLSTM, respectively.", "This acceleration is obtained thanks to the optimized software pipeline that efficiently exploits the underlying hardware.", "Additionally, the energy efficiency results up to $9.7\times $ and $123\times $ higher than previous solutions.", "We also remark that our solution achieves low degradation with respect to the full-precision model without relying on any expensive QAT training procedures." ], [ "Conclusion", "This work proposed a novel design approach to efficiently bring RNN-based SE models to low-power multi-core MCUs.", "On the one side, we proposed a novel quantization scheme that mixes FP16 and INT8 PTQ to obtain low accuracy degradation without relying on expensive QAT.", "On the other side, we designed an optimized software pipeline to efficiently exploit the compute performance of a low-power 8-core MCU.", "Our design demonstrated the fastest RNN-based SE solution for MCUs, featuring $>10\times $ higher energy efficiency than previous solutions." ] ]
2210.07692
[ [ "SAILOR: Scaling Anchors via Insights into Latent Object Representation" ], [ "Abstract LiDAR 3D object detection models are inevitably biased towards their training dataset.", "The detector clearly exhibits this bias when employed on a target dataset, particularly towards object sizes.", "However, object sizes vary heavily between domains due to, for instance, different labeling policies or geographical locations.", "State-of-the-art unsupervised domain adaptation approaches outsource methods to overcome the object size bias.", "Mainstream size adaptation approaches exploit target domain statistics, contradicting the original unsupervised assumption.", "Our novel unsupervised anchor calibration method addresses this limitation.", "Given a model trained on the source data, we estimate the optimal target anchors in a completely unsupervised manner.", "The main idea stems from an intuitive observation: by varying the anchor sizes for the target domain, we inevitably introduce noise or even remove valuable object cues.", "The latent object representation, perturbed by the anchor size, is closest to the learned source features only under the optimal target anchors.", "We leverage this observation for anchor size optimization.", "Our experimental results show that, without any retraining, we achieve competitive results even compared to state-of-the-art weakly-supervised size adaptation approaches.", "In addition, our anchor calibration can be combined with such existing methods, making them completely unsupervised." ], [ "Introduction", "Acquiring and labeling data for the training of 3D object detectors requires considerable effort.", "The sheer size and the underlying unorganized structure of point clouds make this process cumbersome.", "Detecting objects in a point cloud can be difficult even for humans since an object may contain just a few points.", "Moreover, during labeling, an expert will usually examine a single LiDAR frame from multiple viewpoints to account for unavoidable occlusions and truncations, usually switching the context between an image and a point cloud.", "These interruptions substantially increase labeling time and, consecutively, labeling cost.", "Figure: Box sizes, showing the domain gap across datasets.Boxes are generated by a fully-supervised oracle model trained on KITTI , Waymo , nuScenes  and Lyft , respectively.To mitigate the high labeling cost, our research community is making immense advances in unsupervised 3D domain adaptation [15], [23], [31].", "State-of-the-art approaches typically rely on some variant of self-training [30], [31], input transformation [21], [24], [27], feature alignment [35] and/or tracking [7], [34].", "These approaches share a common problem, initially reported by Wang  [23]: the discrepancy between object sizes, as depicted in Figure REF , introduces a tremendous domain gap.", "In 2D object detection, object sizes vary depending on the distance of an object from a sensor.", "This naturally occurring augmentation increases a detector's robustness towards size variation.", "In 3D object detection, however, the size of an object does not correlate with the distance from the sensor.", "Instead, it depends on where the dataset has been acquired, on average, vehicles in the USA are larger than the vehicles in Europe [23], or on the labeling policy, Waymo Open Dataset [20] includes side mirrors of cars in its annotations, whereas KITTI [8] does not.", "Object detectors are commonly evaluated using the Mean Average Precision (mAP) metric, 
which further utilizes Intersection over Union (IoU) for ground truth matching.", "In 2D detection, IoU is calculated given the area of the predicted and ground truth bounding box, whereas in 3D, it is derived using the volumes of the predicted and the ground truth cuboid.", "Given the additional dimension, the influence of an incorrect size prediction leads to an exponential accuracy decrease.", "This correlation makes 3D detectors extremely sensitive to changes in the object sizes.", "Statistical Normalization (SN) [23] has become the standard approach for bridging the size gap.", "It attempts to shift the source data statistics to the target statistics through deliberate scaling of the source annotations as a training augmentation.", "Random Object Scaling (ROS) [31] strives to overcome this size bias without directed scaling.", "Instead, it substantially augments the ground truth boxes via a wider range of scales.", "Nevertheless, both approaches exploit key target domain insights, which are usually not available in an unsupervised setting.", "First, they employ anchor sizes manually optimized for the target domain.", "Second, they manually refine the magnitude of the augmentation scales.", "Lastly, the stochastic nature of these approaches does not allow a deterministic checkpoint selection; instead, the best-performing checkpoint must be picked.", "We propose SAILOR, a novel unsupervised anchor calibration pipeline which estimates optimal target anchors given a pretrained source model.", "Our objective is to identify target anchors under which the feature similarity between the source and the target domain is maximized.", "We first establish a reference feature database by gathering instance features from the source domain.", "In the next phase, we iteratively perturb the anchors by a small amount, and analogously compute a target feature database.", "The fitness of the target feature database to the reference feature database provides a stochastic gradient, which we employ in a stochastic optimization method.", "Our method achieves similar improvements to SN or ROS, but does not introduce additional model parameters, does not require retraining, and does not require any knowledge of the target domain at all.", "Experimental results on the autonomous driving datasets KITTI [8], Waymo [20], nuScenes [1] and Lyft [11] indicate large performance gains with minimal effort.", "We demonstrate that a simple exchange of the source anchors with our optimized ones leads to large precision gains on the target domain.", "Our method is competitive even with the popular weakly-supervised approaches for bridging the object size domain gap, but costs only a fraction of the computation time, since we do not require retraining.", "Additionally, our optimized anchors can also be used as an unsupervised prior to extend weakly-supervised methods, turning them into fully unsupervised approaches."
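The exponential effect of size errors on 3D IoU noted above can be made concrete with a toy computation: for a prediction sharing the ground-truth center and orientation but with every side scaled by a factor s, the IoU is s^n in n dimensions. The Python sketch below, with assumed car dimensions, illustrates this (it is a simplification; real 3D IoU also handles rotation and position offsets).

```python
import numpy as np

# Toy comparison of 2D vs 3D IoU under a uniform size under-estimate.
def iou_same_center(gt_dims: np.ndarray, scale: float, ndim: int) -> float:
    pred = gt_dims[:ndim] * scale
    inter = np.prod(np.minimum(gt_dims[:ndim], pred))  # overlap volume
    union = np.prod(gt_dims[:ndim]) + np.prod(pred) - inter
    return float(inter / union)

dims = np.array([4.5, 1.9, 1.6])            # assumed car l, w, h in meters
print(iou_same_center(dims, 0.85, ndim=2))  # 2D IoU = 0.85^2 ~ 0.72
print(iou_same_center(dims, 0.85, ndim=3))  # 3D IoU = 0.85^3 ~ 0.61
```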
], [ "3D Object Detection", "State-of-the-art 3D detectors consist of a Region Proposal Network (RPN) followed by a detection head.", "RPN first abstracts an input point cloud with a feature extractor, which is commonly point-based [17], voxel-based [18], [28] or a hybrid between these two [16].", "Afterwards, at each discrete location of the abstracted point cloud, RPN predicts an objectness probability and regresses coarse bounding boxes.", "The magnitude of this regression is either absolute [33] or relative to anchor values [5], [12], [16], [18], [28].", "The anchors reduce the network's regression space in such a way that the predictions become residuals from a matched anchor.", "The anchor values are commonly derived from the training data and have to be manually selected with care.", "The following detection head refines such coarse predictions into the final predictions." ], [ "Unsupervised 3D Domain Adaptation", "A 3D detection model trained on one dataset usually does not generalize well to new, unseen data.", "The source and target dataset difference is commonly referred to as the domain gap.", "Unsupervised domain adaptation aims to reduce the gap without any additional annotations.", "Style-transfer approaches lessen the gap in the input space by transforming the target data into source-like data.", "Wei  [24] demonstrate the generalization capabilities of a model trained on dense data augmented by pseudo-sparse point clouds derived from the original dense data.", "Dropping LiDAR beams scales up with the size of a point cloud since it is not a costly operation.", "However, it is ambiguous to what extent the background transformation helps and if it justifies a more complex conversion.", "Modifying just the foreground objects by adding new points [21], [27] or reorganizing the existing ones [13] saves computational resources with a similar effect.", "The sequential acquisition of autonomous driving datasets allows exploiting temporal consistency for the adaptation.", "Some self-training approaches [15], [29], [34] refine per-frame detection via Multi Object Tracking (MOT) [25] and are thus able to improve naïve retraining by reducing the false positive and negative rate.", "Even without explicit tracking, aggregating several LiDAR frames into a common reference frame already leads to denser static objects.", "Detecting such objects is a trivial task for state-of-the-art 3D detectors [3], [16], [28].", "By propagating high-quality detections to all aggregated frames, [22] generates pseudo labels for even the more complex (sparse and static) samples.", "FAST3D [7] further extends this idea to moving objects, by leveraging scene flow [26] for aggregation.", "Besides tracking and flow pitfalls, association, domain gap, , temporal information may not always be available.", "Generating pseudo labels on a per-frame basis has also been beneficial for self-training [2].", "Leveraging the gradual improvement on the presented pseudo labels, memory ensembles [30], [31] enhance the robustness by cataloging the predictions during training.", "An auxiliary loss [32], [35] can further reduce the inevitable noise induced by the pseudo labels.", "Still, the self-training is going to amplify the noise.", "This is a well-known side effect called inductive bias, which is generally resolved with the student-teacher paradigm [10], [14].", "The self-training approaches are not able to independently overcome the object size bias induced by the source data.", "The extent of the object size gap immensely 
influences the performance of the final model.", "Therefore, state-of-the-art self-training approaches explicitly utilize mechanisms to mitigate the cross-domain size mismatch." ], [ "Overcoming Object Size Bias", "Mainstream methods exploit target domain knowledge to overcome the object size bias.", "Wang et al. [23] propose employing the target domain statistics in two ways.", "Output Transformation (OT) directly modifies a predicted bounding box size by adding a residual.", "The difference between an average source and a target sample defines this residual.", "Statistical Normalization (SN) scales source ground truth boxes and the points inside them during training.", "Analogously to OT, the scaling intensity is derived from the source and target dataset statistics.", "To avoid relying on precise target domain statistics, which are usually unavailable, Random Object Scaling (ROS) [31] implements heavy size augmentation on the source ground truth objects.", "However, increasing the network's search space leads to a frequent size prediction mismatch.", "Contrary to the weakly-supervised approaches, we propose a completely unsupervised method, which specifically optimizes a model's anchors for the target domain.", "Moreover, our approach does not require retraining and thus costs just a fraction of the computation time.", "We achieve a similar effect to these weakly-supervised methods, since more suitable target anchors alleviate the model's extrapolation efforts and implicitly provide improved detections." ], [ "Unsupervised Anchor Calibration", "We consider a 3D detection model $H( \, \cdot \, | \, \Theta ) = D \circ RPN$ , with a Region Proposal Network $RPN$ and a detection head $D$ .", "Its parameters $\Theta $ are trained on a labeled source dataset $\mathcal {S} = \lbrace (P^s_i, Y^s_i)\rbrace _{i = 1}^{N_s}$ , which contains LiDAR point clouds $P^s$ and a respective set of labeled instances $Y^s$ .", "The target dataset $\mathcal {T} = \lbrace P^t_i\rbrace _{i = 1}^{N_t}$ , however, contains only unlabeled point clouds.", "Besides the trainable parameters, the model is additionally defined via its hyperparameters.", "The anchors $\Psi = \lbrace (\psi ^{(x)}, \psi ^{(y)}, \psi ^{(z)}, \psi ^{(w)}, \psi ^{(l)}, \psi ^{(h)}, \psi ^{(\theta )}) \rbrace \subset \Theta $ provide a starting point for the regression head, where the network predicts residuals relative to an anchor, instead of regressing the absolute values.", "The anchors' width, length and height are usually handpicked to match the average size of annotated objects from $\mathcal {S}$ .", "The size discrepancy between objects in $\mathcal {S}$ and $\mathcal {T}$ induces an apparent domain gap, as illustrated in Figure REF .", "With our unsupervised anchor calibration, we adapt the anchor sizes to the target dataset $\mathcal {T}$ without any supervision.", "As depicted in Figure REF , using a model $H(\,\cdot \,|\,\Theta )$ pretrained on the source data, we first construct a reference feature database (Section REF ) by accumulating the proposal feature vectors from the source domain $\mathcal {S}$ .", "Then, we iteratively perturb the model's anchor sizes by a small amount $\varepsilon $ , while leaving all other parameters unchanged $(\psi ^{(x)}, \psi ^{(y)}, \psi ^{(z)}, \psi ^{(w)} + \varepsilon ^{(w)}, \psi ^{(l)} + \varepsilon ^{(l)}, \psi ^{(h)} + \varepsilon ^{(h)}, \psi ^{(\theta )})$ and, analogous to the reference feature database, compute a target feature database from $\mathcal {T}$ .", "The
fitness of such a target feature database to the source feature database (Section REF ) provides a stochastic gradient, which we utilize to adapt the model's anchor sizes (Section REF ).", "Our approach yields anchors which are specifically tailored to the given model for the target domain, without any retraining." ], [ "Reference Feature Database", "Using the model $H(\cdot \,|\,\Theta )$ , pretrained on the source data $\mathcal {S}$ , we first generate a reference feature database $\mathcal {F}^s = \left\lbrace RPN(P^s \,|\, \Theta )\delta _{ [ \hat{c} > \tau ]}\;|\; \forall P^s \in \mathcal {S}\right\rbrace \text{.", "}$ We select the $RPN$ features of the frame $P^s$ where the respective prediction score $\hat{c}$ , obtained from the classification head, exceeds a certain threshold $\tau $ .", "Ultimately, $\mathcal {F}^s$ is a latent feature database of the source dataset samples.", "Depending on the size of the source dataset, $\mathcal {F}^s$ can have a large memory footprint.", "Therefore, to compress a potentially abundant reference feature database, we fit a Gaussian Mixture Model (GMM) to the data.", "A GMM consists of $K$ weighted Gaussian distributions, where the probability of observing a sample $f$ is defined as $p(f\,|\,\Omega ) = \sum _{i=1}^K \omega _i \cdot \mathcal {N} (f\,|\,\mu _i, \Sigma _i),\quad \text{where}$ $\mathcal {N}(f\,|\,\mu _i, \Sigma _i) =\frac{\exp \left( -\frac{1}{2} (f - \mu _i)^T \Sigma _i^{-1}(f - \mu _i) \right)}{\sqrt{(2 \pi )^{d_f} \vert \Sigma _i \vert }}\text{.", "}$ Here, $\omega _i$ weighs the $i^{\text{th}}$ Gaussian distribution, which is defined by its mean vector $\mu _i$ and covariance matrix $\Sigma _i$ , and $d_f$ denotes the dimensionality of the feature vector $f$ .", "We fit the GMM parameters $\Omega _{\mathcal {F}^s} = \lbrace (\omega _i, \mu _i, \Sigma _i) \rbrace _{i = 1}^K$ to a source feature database $\mathcal {F}^s$ using Expectation-Maximization [4]."
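A minimal sketch of the two steps above, assuming a hypothetical model interface that returns proposal features and classification scores per frame (the names `rpn_features`, `scores` and the threshold default are ours):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Build the reference feature database: keep RPN features of
# proposals whose classification score exceeds tau.
def build_feature_db(frames, model, tau: float = 0.5) -> np.ndarray:
    feats = []
    for frame in frames:
        rpn_features, scores = model(frame)       # hypothetical interface
        feats.append(rpn_features[scores > tau])  # confident proposals only
    return np.concatenate(feats, axis=0)

# Compress the (potentially large) database: fit K weighted
# Gaussians to the source features with Expectation-Maximization.
def fit_reference_gmm(source_feats: np.ndarray, k: int = 4):
    return GaussianMixture(n_components=k,
                           covariance_type="full").fit(source_feats)
```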
], [ "Target Fitness Quantification", "The reference probability model $\\Omega _{\\mathcal {F}^s}$ describes ideal latent instances, high-dimensional feature vectors that are abstractions of a complete object without background noise.", "The inference on the target data, using the source model, will inevitably lead to predictions which include background noise or do not contain the object completely.", "However, we can reduce such noisy predictions by selecting adequate anchor sizes.", "In this section, we show how we quantify the quality of the selected anchors.", "Using the source model $H( \\, \\cdot \\, | \\, \\Theta )$ , we compute the target database $\\mathcal {F}^t$ from the target dataset $\\mathcal {T}$ , analogous to Equation (REF ).", "However, during this inference, we ignore the network's size residual predictions $(\\Delta ^{(w)}, \\Delta ^{(l)}, \\Delta ^{(h)})$ and use only matched anchor sizes, $(\\psi ^{(x)} + \\Delta ^{(x)}, \\psi ^{(y)} + \\Delta ^{(y)}, \\psi ^{(z)} + \\Delta ^{(z)}, \\psi ^{(w)}, \\psi ^{(l)}, \\psi ^{(h)}, \\psi ^{(\\theta )} + \\Delta ^{(\\theta )})$ .", "This isolates the influence of the selected anchors, because otherwise, the consecutive regression stage of the detector would obfuscate the changes.", "To quantify the fitness of the anchors, we leverage the joint probability density $p( \\mathcal {F}^t\\,|\\,\\Omega _{\\mathcal {F}^s} ) = \\prod _{f_i^t \\in \\mathcal {F}^t} p ( f_i^t\\,|\\,\\Omega _{\\mathcal {F}^s} )\\text{,}$ where $\\Omega _{\\mathcal {F}^s}$ are the parameters of the reference probability model and the features $f_i^t$ are independent and identically distributed.", "To avoid numerical instabilities we do not explicitly employ the joint probability but instead use the per-sample average log-likelihood $\\mathcal {L} ( \\mathcal {F}^t \\, | \\, \\Omega _{\\mathcal {F}^s}) =\\frac{1}{ \\vert \\mathcal {F}^t \\vert }\\sum _{ f_i^t \\in \\mathcal {F}^t } \\log p ( f_i^t\\,|\\,\\Omega _{\\mathcal {F}^s} )\\text{.", "}$ The sum of the logarithms is numerically stable and the cardinality normalization ensures independence of the number of target features.", "Smaller values of $\\mathcal {L}$ indicate unfit predictions, if target features include more background clutter or less object cues compared to the source features, and is maximized for anchors which are optimal for the target domain.", "We present an experiment which demonstrates this behavior in Figure REF ." 
], [ "Anchor Calibration", "Using the reference feature database $\\mathcal {F}^s$ and fitness quantification $\\mathcal {L}$ , we can now compute the optimal anchors $\\Psi ^*$ for a model $H (\\, \\cdot \\, | \\, \\Theta )$ by optimizing $\\Psi ^* =\\operatornamewithlimits{arg\\,min}_{ \\Psi }-\\mathcal {L}( \\mathcal {F}^t_{\\Psi } \\, | \\, \\Omega _{\\mathcal {F}^s} )\\text{,}$ where $\\Psi ^* \\subset \\Theta $ represents the optimal anchors for the target dataset $\\mathcal {T}$ .", "We outline our approach in Algorithm REF .", "Pseudocode of our SAILOR method 3D detection model $H( \\, \\cdot \\, | \\, \\Theta )$ and anchors $\\Psi $ optimized on a source dataset $\\mathcal {S}$ ; Labeled source $\\mathcal {S}$ and unlabeled target $\\mathcal {T}$ dataset Optimized anchors $\\Psi ^*$ for $\\mathcal {T}$ Generate $\\mathcal {F}^s$ Section REF Fit GMM parameters $\\Omega _{\\mathcal {F}^s}$ to $\\mathcal {F}^s$ with EM $\\Psi ^* = \\Psi $ Termination criteria is not reached Randomly sample small $\\varepsilon $ $\\Psi ^{\\prime } = \\Psi ^* + \\varepsilon $ Anchor size perturbation Generate $\\mathcal {F}^t_{\\Psi ^{\\prime }}$ Section REF Update $\\Psi ^*$ with $\\nabla \\mathcal {L} ( \\mathcal {F}^t_{\\Psi ^{\\prime }} \\, | \\, \\Omega _{\\mathcal {F}^s} )$ Section  REF Since Equation (REF ) is not differentiable the model hyperparameters, we do not use standard gradient methods.", "Instead, we utilize Differential Evolution (DE) [19], a stochastic optimization technique.", "At each iteration, given a population vector, DE constructs a mutation vector.", "A trial candidate is constructed in a crossover phase by mixing the mutation vector with a candidate solution.", "If the fitness of the trial candidate exceeds the current candidate solution, it becomes the next candidate solution.", "This is repeated until a convergence criterion is met.", "To accelerate the optimization, we first perform a linear sweep for each parameter separately.", "Figure REF depicts an example of this search.", "We freeze other anchor sizes, and while varying a single parameter, assess the fitness at each step.", "We use the parameter with the highest overall fitness as the initial candidate solution for the following joint optimization phase.", "The reduced search space expedites the final joint fine-tuning.", "Figure: Comparing anchor fitness (Equation ()) and Average Precision.The model is trained on Waymo , whereas the fitness and AP are assessed on KITTI .Here, we fix width and height of the anchors and vary only the length.The fitness is computed without any annotation in the target domain and strongly correlates with the AP 3D _{3D}.We denote ground truth Waymo and KITTI anchors in red and green, respectively.Starting from this initial candidate solution $\\Psi ^*$ , we jointly optimize the length, width and height.", "For this, we generate the initial population vector by uniformly sampling $N_p$ anchor values $\\Psi = \\lbrace \\Psi _i \\rbrace _{i=1}^{N_p}$ , where the sampling range is a percentage of the source anchor value.", "A mutation vector is constructed as $\\Psi ^{\\prime } = \\Psi _{0} + \\eta \\cdot ( \\Psi _{r_1} - \\Psi _{r_2} )\\text{,}$ where $\\Psi _{0}$ is the population vector member with the best fitness score, $r_1$ and $r_2$ are two randomly selected indices, and $\\eta $ is the mutation amplitude constant.", "We terminate the optimization when the loss converges or when we reach a maximum number of iterations." 
], [ "Experiments", "We focus our evaluation on KITTI [8], Waymo [20] and nuScenes [1], and include Lyft [11] for further generalization evaluations.", "Inter-domain anchor calibration demonstrates the capability, whereas intra-domain calibration verifies the correctness of our approach.", "The variety of the data, sparse to dense (nuScenes $\\leftrightarrow $  Waymo) or a large size gap (KITTI $\\leftrightarrow $  Waymo), faithfully represents an arbitrary use-case.", "The source and target data are the training and validation splits of the datasets mentioned above, respectively.", "We always perform the evaluation following the evaluation protocol of the target dataset." ], [ "Comparison with the State-of-the-Art", "To the best of our knowledge, there is no existing work in the unsupervised domain to compare to.", "Therefore, we compare SAILORhttps://github.com/malicd/sailor with the widely adapted weakly-supervised approaches that address the object size bias, Statistical Normalization (SN) [23], Output Transformation (OT) [23] and Random Object Scaling (ROS) [31].", "For SN, we follow the original publication and set the model anchors to the average of the source and target domain and scale the labeled source bounding boxes and points inside using the difference between the statistics.", "Similarly, ROS exploits this knowledge, but in a more coarse way.", "Knowing that KITTI objects are smaller than Waymo's, we can apply the appropriate scaling for ROS during the adaptation.", "We retrain both models according to the source configuration.", "OT does not require training but instead uses the source model and directly adds the difference to the predictions.", "We refer the reader to the supplementary material, where we list the exact anchor configuration for reproducibility.", "During the evaluation, we directly employ the anchors optimized by SAILOR.", "We do not perform any retraining whatsoever.", "Table REF demonstrates the potential of our method, given a Part-$A^2$  source model.", "In the case of Waymo $\\rightarrow $  KITTI, we observe an improvement of 34 AP$_{3D}$ @R11 for the Car class, beating even the weakly-supervised approaches by a large margin.", "We depict this improvement in Figure REF .", "Pedestrian and Cyclists are mostly unchanged, except for a slight improvement, mainly due to having similar anchors.", "We report similar behavior in the nuScenes $\\rightarrow $ KITTI experiment, where we observe an improvement of almost 28 AP$_{3D}$ @R11.", "We also note that the slight precision increase for the Cyclists class stems from the significant height difference (due to different labeling policies) between the two datasets.", "In situations where a source model shows inferior performance and the size domain gap is large, KITTI $\\rightarrow $  Waymo and KITTI $\\rightarrow $  nuScenes, we still accomplish substantial relative gain.", "Calibrating KITTI anchors on Waymo, we achieve a relative improvement over 200%.", "This tremendous increase is significantly better than the weakly-supervised SN and ROS, which even degrade the performance slightly.", "Note that the KITTI $\\rightarrow $  Waymo scenario is well-known to be extremely challenging and has thus been often neglected from evaluations, with very few exceptions,  [7].", "The overall low performance on KITTI $\\rightarrow $  nuScenes stems from a small source dataset and the sparsity of the nuScenes point clouds, where the detector struggles.", "Thus, there are only very few objects with insufficient latent 
semantics, to apply our method.", "The datasets that share similar anchors, Waymo $\leftrightarrow $  nuScenes, do not exhibit a substantial change in the overall evaluation score.", "Since SAILOR introduces no additional hyperparameters, the weakly-supervised approaches perform favorably in this scenario, as they can exploit the target domain statistics.", "However, note that similar improvements can also be achieved by fiercer augmentation, as shown in [9].", "Similarly, when our source and target data come from the same dataset, the diagonal in Table REF , we report an insignificant change in the results.", "We performed this additional experiment mainly as a sanity check.", "Table: Results of the adaptation tasks to Lyft.", "We report AP$_{3D}$ @R11 for the classes Car / Pedestrian / Bicycle.", "To further demonstrate the generalization capabilities of our method, we perform additional experiments on the Lyft [11] dataset.", "Our findings, which we summarize in Table REF , confirm our initial experiments.", "In cases where the source and target anchors are similar, Waymo $\rightarrow $ Lyft, we report slight performance gains.", "Additionally, when the object size gap is large, KITTI $\rightarrow $  Lyft for the class Car, we report an increase of around 6 AP points.", "Similarly to KITTI $\rightarrow $  nuScenes, we found that further improvement is limited by an ineffective source model caused by a small source dataset.", "However, due to the sufficient density of the Lyft point clouds (LiDAR with 64 beams), we do not observe any precision drop.", "Moreover, we demonstrate that combining our method with existing weakly-supervised approaches leads to competitive results in a completely unsupervised manner.", "For this, we replace the target statistics required by SN and ROS with the anchor sizes calibrated by SAILOR.", "The results in Table REF show that this is well suited to make ROS fully unsupervised, whereas for SN it works well for vehicles, but is not suitable for pedestrians and cyclists.", "This issue is due to SN, which is tailored explicitly to knowing the target statistics and requires additional manual fine-tuning (which we omitted due to the unsupervised setting of this experiment) to achieve the best results.", "Note that we are not interested in estimating the actual target domain object sizes, but in estimating the optimal target domain anchors for the given source model.", "Using these calibrated anchors to guide ROS, our unsupervised ROS variant performs on par with the weakly-supervised one, Waymo $\rightarrow $ KITTI, or even better, nuScenes $\rightarrow $ KITTI.", "Table: SN and ROS become entirely unsupervised in combination with SAILOR; otherwise, they are weakly-supervised and denoted with $^\dag $ .", "The results are the moderate case for the Car / Pedestrian / Cyclist classes.", "Figure: Qualitative comparison of source only, Statistical Normalization (SN), Random Object Scaling (ROS) and our method on the Waymo $\rightarrow $ KITTI case.", "We indicate the ground truth box in green and the predicted boxes in blue.", "The object points, according to the ground truth annotation, are shown in orange.", "Best viewed on screen." ], [ "Ablation Studies", "We conduct our ablation experiments to investigate the interaction between the components of our system, as well as to study the vital points of our pipeline.", "We also indicate the known caveats and demonstrate how we overcome them."
], [ "Joint Optimization", "We demonstrate the benefits of our joint optimization in Table REF , which shows that linear search already provides a decent performance.", "However, it optimizes each component individually, disregarding their entanglement.", "We leverage this even further to boost the performance using joint optimization from Section REF .", "In some instances, nuScenes $\\rightarrow $ KITTI for the class Pedestrian, DE degrades the result.", "Our observation shows that this happens for two reasons.", "When the number of instances in the source domain is too low for the proper estimation of the GMM parameters, KITTI $\\rightarrow $  Waymo for Pedestrians and Cyclists, our objective function becomes harder to optimize due to the appearing local minima.", "A similar effect can be observed by inadequate latent representations of the source model, caused by the point cloud sparsity, nuScenes $\\rightarrow $ KITTI for Pedestrians.", "Table: Detection performance the optimization strategy for the classes Car / Pedestrian / Cyclist.LS stands for Linear Search and DE is Differential Evolution." ], [ "Smoothness of the Optimization Objective", "As previously hinted, the number of samples used for the GMM fitting plays an important role in our method.", "Our objective function from Equation (REF ) gets smoother with the number of instances we use.", "If we build the source feature database from underrepresented samples, it becomes hard to optimize many appearing local minima.", "This implicitly affects the performance on the target dataset, as depicted in Figure REF .", "On the other hand, SAILOR is not sensitive to the number of components in the GMM at all.", "We found that any reasonable choice of $K \\ge 4$ works similarly well.", "Figure: Applying SAILOR on Waymo →\\rightarrow KITTI while varying the number of instances used for GMM fitting.We report Average Precision for the moderate case of cars." ], [ "Conclusion", "We presented SAILOR, an unsupervised approach for anchor calibration on the target domain.", "We estimate an optimal anchor configuration under the source model without prior knowledge.", "We compare our approach with weakly-supervised methods, which are widely used for unsupervised 3D domain adaptation.", "Moreover, SAILOR can be used as a stand-alone method or can make these weakly-supervised approaches completely unsupervised.", "In this work, we focus explicitly on optimizing anchor sizes due to the tremendous domain gap across datasets, particularly KITTI $\\leftrightarrow $ Waymo.", "Note, however, that any model hyperparameter is optimizable with our SAILOR schema, even the checkpoint selection procedure, which we want to investigate in future experiments." ], [ "Acknowledgments", "We gratefully acknowledge the financial support by the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association.", "The presented experiments have been achieved (in part) using the Vienna Scientific Cluster." ] ]
2210.07811
[ [ "LEATHER: A Framework for Learning to Generate Human-like Text in\n Dialogue" ], [ "Abstract Algorithms for text-generation in dialogue can be misguided.", "For example, in task-oriented settings, reinforcement learning that optimizes only task-success can lead to abysmal lexical diversity.", "We hypothesize this is due to poor theoretical understanding of the objectives in text-generation and their relation to the learning process (i.e., model training).", "To this end, we propose a new theoretical framework for learning to generate text in dialogue.", "Compared to existing theories of learning, our framework allows for analysis of the multi-faceted goals inherent to text-generation.", "We use our framework to develop theoretical guarantees for learners that adapt to unseen data.", "As an example, we apply our theory to study data-shift within a cooperative learning algorithm proposed for the GuessWhat?!", "visual dialogue game.", "From this insight, we propose a new algorithm, and empirically, we demonstrate our proposal improves both task-success and human-likeness of the generated text.", "Finally, we show statistics from our theory are empirically predictive of multiple qualities of the generated dialogue, suggesting our theory is useful for model-selection when human evaluations are not available." ], [ "Introduction", "Generating coherent, human-like text for dialogue remains a challenge.", "Yet, it is an inseparable component of open domain and task oriented dialogue systems like Alexa and Siri.", "Undoubtedly, it is also a complex process to learn.", "Generation based on classification (e.g., next-word prediction) over-emphasizes the likelihood of text, leading to bland qualities, which are not human-like [10].", "Meanwhile, framing dialogue generation as a Markov decision process is highly data-inefficient when compared to classification [13].", "Further, without careful design of rewards, models can suffer from mode-collapse in dialogue, producing repetitive behaviors that are not human-like [25].", "Even carefully designed rule-based systems are brittle in the presence of unforeseen data-shift.", "Theoretical analyses of learning are imperative as they provide solutions to these issues.", "For example, traditional (PAC) learning theory [33] studies similar issues arising from computational algorithms for learning to classify.", "Progress in our understanding has been impressive, ranging from comprehensive guarantees on data-efficiency [24] to insights for algorithm-design when the learner is faced with data-shift [37], [36], [32].", "While traditional theory may be applicable to simple generation objectives like next-word prediction, it is unfortunately unable to model more diverse goals.", "That is to say, it is insufficient to study replication of the diverse qualities inherent to human dialogue.", "The goal of this paper is to provide a new theory for analyzing the multi-faceted objectives in computational learning of dialogue generation.", "In particular, we propose LEATHERLEArning Theory for Human-like dialogue genERation based on existing theories of computational learning.", "We demonstrate the utility of LEATHER with a focus on understanding data-shift in learning algorithms.", "We also show empirical results for a task-oriented visual dialogue game.", "In detail, we contribute as follows: In Section , we propose LEATHER, our novel theory for computational learning of dialogue generation.", "We use the GuessWhat?!", "visual dialogue game [8] as an example to ground 
abstract terminology in practice.", "We conclude Section  by applying our theory to analyze a cooperative learning algorithm for GuessWhat?!.", "Our theory unveils harmful shifts in data-distribution that occur during training.", "In Section , we use LEATHER to study the general problem of data-shift in text-generation.", "We provide a new theoretical study that characterizes statistical energy as an effective empirical tool for quantifying the impact of data-shift.", "Aptly, to conclude Section , we use energy to motivate an improved learning algorithm for our running example – the GuessWhat?!", "game.", "In Section , empirically, we demonstrate the benefits of our LEATHER-inspired algorithm compared to common baselines.", "Importantly, we also show our proposed statistic (energy) is predictive of the quality of generated dialogue; i.e., we exhibit a linear relationship.", "This suggests LEATHER is useful, not only as a theoretical tool for algorithm design, but also as an empirical tool for model-selection.", "Our framework is publicly available through experimental code and a Python package (github.com/anthonysicilia/LEATHER-AACL2022)." ], [ "Theories of Learning to Generate Text", "Most widely, text-generation is framed as a classification problem, in which a model predicts the next word provided existing context (e.g., previous words).", "While common PAC learning analyses do apply to classification, this theory only describes the learner's ability at the next-word prediction task.", "In some specific cases, instead, PAC analysis has also been used to analyze high-level objectives and motivate conversational strategies [28], but this analysis is problem-dependent.", "In contrast, our work offers a general problem-independent formalism for studying high-level qualities of generated text.", "Another frequent formalism comes from partially observable Markov decision processes (POMDPs) used to motivate reinforcement learning.", "For example, see [29].", "While POMDPs remedy the issues of typical PAC analysis by supporting implementation of high-level objectives, as far as we are aware, there are no empirically verified theoretical studies of learning under data-shift in POMDPs.", "In contrast, we demonstrate LEATHER admits such a theory of learning, using it to predict the human-likeness of generated text under data-shift (where POMDPs fall short)." ], [ "Theories of Learning with Data-Shift", "Early learning theoretic models of data-shift in classification and regression are due to [2], [3] and [17].", "While modern approaches are generally similar in spirit, new statistics incorporate increasing information about the learning algorithm [16], [14], [9], [27].", "Ultimately, such techniques tend to improve the predictive capabilities of a theory in practical application [20], [1].", "Diverse additional approaches to describing the impact of data-shift have also been proposed, for example, using integral probability metrics [21], [22], [26], [12].", "Unfortunately, existing works focus on classification and regression which, as discussed, are not directly applicable to dialogue generation.", "Further, this theory does not easily extend to generation (see Section REF ).", "Ultimately, using LEATHER, our work derives a new statistic (energy) for predicting changes in model performance, which is applicable to dialogue generation."
], [ "Evaluation of Generated Text", "There are many automated metrics for evaluation of generated text including metrics based on $n$ -grams such as BLEU [18], ROUGE [15], and CIDEr [34].", "Automated metrics based on neural models are also becoming more prevalent including BLEURT [23], BertScore [35], and COSMic [11].", "[5] propose use of an adversary to discriminate between human and generated text, evaluating based on the generator's ability to fool the adversary.", "Human annotation and evaluation, of course, remains the gold-standard.", "Notably, our proposed framework encapsulates these techniques, since it is suitable for analyzing the impact of the learning process on all of these evaluation strategies and more (see Section  for examples)." ], [ "Theory with Examples", "In this section, we develop our new theoretical framework.", "To assist our exposition, we use the GuessWhat?!", "visual dialogue game – a variant of the child's game I Spy – as a running example.", "We first describe the game along with our modeling interests within the game.", "We continue with a description of our theory and then apply this theory to analyze an algorithm that learns to play the game." ], [ "An image and goal-object within the image are both randomly chosen.", "A question-player with access to the image asks yes/no questions to an answer-player who has access to both the image and goal-object.", "The question-player's goal is to identify the goal-object.", "The answer-player's goal is to reveal the goal-object to the question-player by answering the yes/no questions appropriately.", "The question- and answer-player converse until the question-player is ready to make a guess or at most $m$ questions have been asked.By default, $m=8$ following [25].", "The question-player then guesses which object was the secret goal." ], [ "Notation for Human Games", "To discuss this game within our theoretical framework next, we provide some notation.", "We assume the possible questions, answers, and objects are respectively confined to the sets $\\mathcal {Q}$ , $\\mathcal {A}$ , and $\\mathcal {O}$ .", "We also assume a set of possible images $\\mathcal {I}$ .", "A game between two human players can be represented by a series of random variables.", "The image-object pair is represented by the random tuple $(I, O)$ .", "The dialogue between the question- and answer-player is represented by the random-tuple $D = (Q_1, A_1, \\ldots , Q_P, A_P)$ with some random length $P \\le m$ .", "Each $Q_i$ is a random question taking value from the set $\\mathcal {Q}$ and each $A_i$ is a random answer from the set $\\mathcal {A}$ ." 
], [ "Notation for Modeled Games", "From a modeling perspective, in this paper, we focus on the question-player and assume a human answer-player.", "We consider learning a model that generates the random dialogue $\\hat{D} = (\\hat{Q}_1, \\tilde{A}_1, \\ldots \\hat{Q}_m, \\tilde{A}_m)$ along with a predicted goal object $\\hat{O}$ .Notice, although the answer-player is still human, the answers may follow a distinct distribution due to dependence on the questions, so we demarcate this difference by $\\tilde{\\square }$ .", "For example, consider the model of [25] we study later.", "It generates dialogue/predicted goal as below: $\\small \\begin{split}& \\hat{O} = \\mathtt {Gues}_\\alpha (\\mathtt {Enc}_\\beta (I, \\hat{D})) \\\\& \\hat{Q}_{i+1} = \\mathtt {QGen}_\\theta (\\mathtt {Enc}_\\beta (I, \\hat{Q}_1, \\tilde{A}_1, \\ldots \\hat{Q}_i, \\tilde{A}_i)\\end{split}$ where, aptly, the neural-model $\\mathtt {QGen}_\\theta : \\mathbb {R}^d \\rightarrow \\mathcal {Q}$ is called the question-generator and the neural-model $\\mathtt {Gues}_\\alpha : \\mathbb {R}^d \\rightarrow \\mathcal {O}$ is called the object-guesser.", "The final neural-model $\\mathtt {Enc}_\\beta : \\mathcal {I} \\times (\\mathcal {Q} \\times \\mathcal {A})^* \\rightarrow \\mathbb {R}^d$ is called the encoder and captures pertinent features for the former models to share.", "Subscripts denote the parameters of each model (to be learned)." ], [ "Modeling Goals", "There are two main objectives we consider.", "The first is task-oriented: $\\small \\min \\nolimits _{\\alpha , \\beta } \\ \\mathbf {E}[1\\lbrace \\hat{O} \\ne O\\rbrace ]$ which requires the predicted goal-object align with the true goal.", "The second objective is more elusive from a mathematical perspective: the generated dialogue $\\hat{D}$ should be human-like.", "That is, it should be similar to the human dialogue $D$ .", "As we see next, our theory is aimed at formalizing this objective." ], [ "Theoretical Framework (", "Now, we present our proposed theory with examples from the GuessWhat?!", "game just discussed." ], [ "Sets", "Assume a space $\\mathcal {C}$ , which encompasses the set of dialogue contexts, and a space $\\mathcal {D}$ , which encompasses the set of possible dialogues.", "In general, the structure of these sets and representation of elements therein are arbitrary to allow wide applicability to any dialogue system.", "For particular examples, consider the Guess What?!", "game: $c \\in \\mathcal {C}$ is an image-goal pair and $d \\in \\mathcal {D}$ is a list of question-answer pairs.", "Note, we also allow an additional, arbitrary space $\\mathcal {U}$ to account for any unobserved effects on the test outputs (discussed next)." ], [ "Test Functions", "To evaluate generated text, we assume a group of fixed test functions $\\lbrace h_1\\ldots h_L\\rbrace $ where for each $\\ell \\in [L]$ the function $h_\\ell : \\mathcal {D} \\times \\mathcal {U} \\rightarrow [0,1]$ assigns a $[0,1]$ -valued score that characterizes some high-level property of the dialogue.", "For example, a test function might be a binary value indicating presence of particular question-type, a continuous value indicating the proportion of clarification questions, a sentiment score, or some other user-evaluation.", "A test function can also be an automated metric like lexical diversity, for example." 
], [ "Random Outputs", "As noted, the space $\\mathcal {U}$ primarily allows the test $h_\\ell $ to exhibit randomness due to unobserved effects.", "For example, this is the case when our test function is a human evaluation and randomness arises from the human annotator.", "To model this, we assume an unknown distribution $\\mathbb {U}$ over $\\mathcal {U}$ , so that for $U \\sim \\mathbb {U}$ and dialogue $d \\in \\mathcal {D}$ , the score $h_\\ell (d, U)$ is a random variable.", "In general, we do not assume too much access to this randomness, since sampling from $\\mathbb {U}$ can be costly; e.g., it can require recruiting new annotators or collecting new annotations.", "Note, $U$ can also be used to encapsulate additional (observable) information needed to conduct the test $h_\\ell $ (e.g., a reference dialogue)." ], [ "Goal Distribution", "Next, we assume a goal distribution $\\mathbb {G}$ over the set of contextualized dialogues; i.e., context-dialogue pairs in $\\mathcal {C} \\times \\mathcal {D}$ .", "Typically, $\\mathbb {G}$ is the distribution of contextualized dialogues between human interlocutors.", "In the GuessWhat?!", "example, $\\mathbb {G}$ is the distribution of the random, iterated tuple $((I, O), D)$ .", "Recall, $I$ is the random image and $O$ is the random goal-object, which together form the context.", "$D = (Q_1, A_1\\ldots Q_P, A_P)$ is the variable-length tuple of question-answer pairs produced by humans discussing the context $(I, O)$ ." ], [ "Dialogue Learner and Environment", "We also assume some dialogue learner parameterized by $\\theta \\in \\mathbb {R}^d$ .", "The learner may only partially control each dialogue – e.g., the learner might only control a subset of the turns in each dialogue – and the mechanism through which this occurs is actually unimportant in the general setting; i.e., it will not be assumed in our theoretical results.", "Ultimately, we need only assume existence of some function $(\\theta , c) \\overset{\\mathsf {E}}{\\longrightarrow } \\mathbb {P}_\\theta (c)$ where $\\theta $ are the learned parameters, $c \\in \\mathcal {C}$ is the context, and $\\mathbb {P}_\\theta (c)$ is a distribution over dialogues $\\mathcal {D}$ .", "In the GuessWhat?!", "example discussed previously, the dialogue learner is $\\mathtt {QGen}_\\theta $ and the function $\\mathsf {E}$ is implicitly defined by Eq.", "(REF ).", "In particular, we have $\\hat{D} \\sim \\mathbb {P}_\\theta (I, O)$ where image $I$ and object $O$ are sampled from the goal-distribution of contextualized dialogues $((I,O), D) \\sim \\mathbb {G}$ .", "We call $\\mathsf {E}$ the environment of the learner and use sans serif in notation.", "In the GuessWhat?!", "example, the environment can change for a myriad of reasons: the answer-player could change strategies (inducing a new answer-distribution), the distribution of image $I$ could change, or the distribution of the object $O$ could change.", "All of which, can impact the function $(\\theta , c) \\overset{\\mathsf {E}}{\\longrightarrow } \\mathbb {P}_\\theta (c)$ .", "One implicit factor we encounter later is the dependence of the environment $\\mathsf {E}$ on the encoder parameters $\\beta $ in Eq.", "(REF ).", "In discussion, we may explicitly write $\\mathsf {E}_\\beta $ to denote this dependence." 
], [ "Formal Objective of Learner", "As discussed before, the conceptual task of the dialogue learner is to produce human-like text.", "To rephrase more formally: the task of the learner is to induce a contextualized dialogue distribution that is indistinguishable from the the goal distribution.", "Unfortunately, this objective is made difficult by the complexity of dialogue.", "In particular, it is unclear what features of the dialogue are important to measure: should we focus on the atomic structure of a dialogue, the overall semantics, or maybe just the fluency?", "Surely, the answer to this question is dependent on the application.", "For this reason, we suggest the general notion of a test function.", "Each test $\\lbrace h_1\\ldots h_L\\rbrace $ can be hand selected prior to learning to emphasize a particular goal for the dialogue learner; e.g., as in Figure REF , $h_1$ can represent a user evaluation of question relevance, $h_2$ can capture lexical diversity, etc.", "Then, the quality of the contextualized dialogue distribution induced by the dialogue learner is measured by preservation of the output of the test functions.", "That is, the output of test functions should be similar when applied to human dialogue about the same context.", "We capture this idea through the test divergence: $\\small \\begin{split}& \\mathbf {TD}_\\mathsf {E}(\\theta ) = \\sum \\nolimits _{\\ell =1}^L \\mathbf {TD}_\\mathsf {E}^\\ell (\\theta ) \\\\& \\text{where} \\quad \\mathbf {TD}_\\mathsf {E}^\\ell (\\theta ) = \\mathbf {E}[\\vert h_\\ell (D, U) - h_\\ell (\\hat{D}, U) \\vert ], \\\\& \\hspace{33.00003pt} (C, D) \\sim \\mathbb {G}, \\ \\hat{D} \\sim \\mathbb {P}_\\theta (C), \\ U \\sim \\mathbb {U}.\\end{split}$ Notice, the test divergence is not only dependent on the parameters of the dialogue learner, but also the environment $\\mathsf {E}$ which governs the distribution $\\mathbb {P}_\\theta (C)$ .", "Recall, this function is induced by the learner's environment and its role in eliciting generated dialogue.", "Finally, with all terms defined, the formal objective of the dialogue learner is typically to minimize the test divergence: $\\small \\min \\nolimits _\\theta \\ \\mathbf {TD}_\\mathsf {E}(\\theta ).$" ], [ "Example (BLEU/ROUGE)", "Useful examples of test divergence are traditional evaluation metrics, using a human reference – metrics like BLEU, ROUGE, or accuracy at next-word prediction.", "To see the connection, in Eq.", "(REF ), let $L = 1$ , let $h_1$ be one of the metrics, and set $U = D$ .", "Then, $h_1(D, U)$ computes some form of $n$ -gram overlap between the human reference and itself, so it evaluates to 1 (full overlap).", "On the other hand, $h_1(\\hat{D}, U)$ is the traditional notion of the metric (e.g., BLEU or ROUGE).", "So, the test divergence simply becomes 1 minus the average of the metric.", "Notice, this example shows how $U$ can be used to encapsulate observable (random) information as well." 
], [ "Example (", "We can also consider a more complicated example in the GuessWhat?!", "game.", "Here, [25] evaluate the human-likeness of dialogue with respect to the question strategies.", "Specifically, the authors consider a group of strategy classifiers $s_i : \\mathcal {Q} \\rightarrow \\lbrace 0,1\\rbrace , i \\in [L]$ which each indicate presence of a particular strategy in the input question.", "For example, $s_1$ might identify if its input is a color question “Is it blue?” and $s_2$ might identify if its input is a spatial question “Is it in the corner?”.", "Then, one intuitive mathematical description of the question-strategy dissimilarity may be written $\\small \\mathbf {E}\\Bigg [\\sum _{i=1}^\\ell \\Big \\vert \\frac{1}{P}\\sum _{j=1}^P s_i(Q_j) - \\frac{1}{m}\\sum _{k=1}^m s_i(\\hat{Q}_k) \\Big \\vert \\Bigg ]$ Above captures expected deviation in proportion of color/spatial questions from the human- to the generated-text.", "It also coincides with the definition of test divergence.", "To see this, note the above is Eq.", "(REF ) precisely when $h_i$ returns the proportion of questions in a dialogue with type identified by $s_i$ ." ], [ "Example (Human Annotation)", "Human annotation is also an example, in which, human subjects are presented with two dialogue examples: one machine generated and one from a goal corpus with both dialogues pertaining to the same context.", "The human then annotates both examples with a score pertaining to the quality of the dialogue (e.g., the relevance of questions as in Figure REF ).", "So, $h_i$ is represented by the annotation process, using $U$ to encapsulate any unobserved random effects.", "Then, the test divergence simply reports average absolute difference between annotations." ], [ "Application to a ", "In this next part, we apply the theory just discussed to analyze a cooperative learning algorithm ($\\mathtt {CL}$ ) proposed by [25].", "Recall Eq.", "(REF ), $\\mathtt {CL}$ generates dialogue/predicted goal as below: $\\small \\begin{split}& \\hat{O} = \\mathtt {Gues}_\\alpha (\\mathtt {Enc}_\\beta (I, \\hat{D})) \\\\& \\hat{Q}_{i+1} = \\mathtt {QGen}_\\theta (\\mathtt {Enc}_\\beta (I, \\hat{Q}_1, \\tilde{A}_1, \\ldots \\hat{Q}_i, \\tilde{A}_i)\\end{split}$ where $\\mathtt {QGen}_\\theta $ is the question-generator, $\\mathtt {Gues}_\\alpha $ is the object-guesser, and $\\mathtt {Enc}_\\beta $ is the encoder." 
], [ "$\\mathtt {CL}$ Algorithm", "Conceptually, cooperative learning encompasses a broad class of algorithms in which two or more independent model components coordinate during training to improve each other’s performance.", "For example, this can involve a shared learning objective [7].", "In the algorithm we consider, [25] coordinate training of a shared encoder using two distinct learning phases.", "Written in the context of our theory, they are: Task-Oriented Learning: Solve Eq.", "(REF ).", "Update $\\alpha $ and $\\beta $ to minimize $\\mathbf {E}[1\\lbrace \\hat{O} \\ne O\\rbrace ]$ .", "Language Learning: Solve Eq.", "(REF ).", "Update $\\theta $ and $\\beta $ to minimize $\\mathbf {TD}_{\\mathsf {E}_\\beta }(\\theta )$ where the test measures accuracy at next-word prediction.", "The two phases repeat, alternating until training is finished.", "As is typical when training neural-networks, the parameter weights are updated using batch SGD with a differentiable surrogate loss.", "To do so in the task-oriented learning phase, $\\mathtt {Gues}_\\alpha $ is designed to output probability estimates for each object and the negative log-liklihood of this output distribution is minimized.", "In the language learning phase, $\\mathtt {QGen}_\\theta $ is designed to output probabilities for the individual utterances that compose each question.", "Then, the surrogate optimization is: $\\small \\begin{split}& \\min \\nolimits _{\\theta , \\beta } \\mathbf {E}\\Big [\\sum _{i + 1 \\le P } \\ \\mathcal {L}(\\hat{Q}_{i+1}, Q_{i+1}) \\Big ] \\quad \\text{where} \\\\& \\hat{Q}_{i+1} = \\mathtt {QGen}_\\theta (\\mathtt {Enc}_\\beta (I, Q_1, A_1 \\ldots Q_i, A_i)\\end{split}$ and $\\mathcal {L}$ sums the negative logliklihood of the individual utterances.", "Notice, a form of teacher-forcing is used in this objective, so that the encoder and question-generator are conditioned on only human dialogue during the language learning phase.", "This fact will become important in the next part." ], [ "Problem", "Importantly, the encoder parameters $\\beta $ are updated in both the task-oriented and language learning phases.", "So, in the language learning phase, the dialogue learner selects $\\theta $ to minimize the test divergence in cooperation with a particular choice of the encoder parameters – let us call these $\\beta ^s$ .", "Then, in the task-oriented learning phase, the learned encoder parameters may change to a new setting $\\beta ^t$ .", "Importantly, by changing the parameters in Eq.", "(REF ), we induce a new environment $\\mathsf {E}_{\\beta ^t} \\ne \\mathsf {E}_{\\beta ^s}$ , which governs a new generation process.", "For brevity, we set $\\mathsf {T} = \\mathsf {E}_{\\beta ^t}$ and $\\mathsf {S} = \\mathsf {E}_{\\beta ^s}$ .", "This change brings us to our primary issue: the shift in learning environment does not necessarily preserve the quality of the generated dialogue.", "In terms of our formal theory, we rephrase: $\\small \\mathbf {TD}_{\\mathsf {S}}(\\theta ) \\overset{?", "}{=} \\mathbf {TD}_{\\mathsf {T}}(\\theta ).$ Without controlling the change in test divergence across these two environments, it is possible the two learning phases are not “cooperating” at all." 
], [ "In general, it is clear equality will not hold, but we can still ask how different these quantities will be.", "If they are very different, the quality of the dialogue generation learned in the language learning phase may degrade substantially during the task-oriented learning phase.", "More generally, the problem we see here is a problem of data-shift.", "In learning theory, the study of data-shift is often referred to as domain adaptation.", "The test divergence on the environment $\\mathsf {S}$ – in which we learn $\\theta $ – is referred to as the source error, while the test divergence on the environment $\\mathsf {T}$ – in which we evaluate $\\theta $ – is referred to as the target error.", "The tool we use to quantify the change between the source error and the target error is an adaptation bound, in which we find a statistic $\\Delta $ for which the following is true:The inequality is approximate because there are often other statistics in the bound, but through reasonable assumptions, one statistic $\\Delta $ is identified as the key quantity of interest.", "These assumptions should be carefully made to avoid undesirable results [3], [37].", "$\\small \\mathbf {TD}_{\\mathsf {T}}(\\theta ) \\lesssim \\mathbf {TD}_{\\mathsf {S}}(\\theta ) + \\Delta .$ Then, we can be sure the error in the new environment has not increased much more than $\\Delta $ .", "In this sense, we say $\\Delta $ is a predictive statistic because it predicts the magnitude of the target error $\\mathbf {TD}_{\\mathsf {T}}$ from the magnitude of the source error $\\mathbf {TD}_{\\mathsf {S}}$ .", "To put it more concisely, it predicts the change in error from source to target.", "When $\\Delta $ is small, the change should be small too or the target error should be even lower than the source error.", "When $\\Delta $ is large, we cannot necessarily come to this conclusion.", "Importantly, for $\\Delta $ to be useful in practice it should not rely on too much information.", "In dialogue generation, it is important for $\\Delta $ to avoid reliance on the test functions, since these can often encompass costly sampling processes like human-evaluation.", "As alluded in Section , many adaptation bounds exist, but as it turns out, none of them are directly applicable to dialogue generation contexts.", "This is because, as we are aware, computation of all previous bounds relies on efficient access to the test functions $\\lbrace h_1\\ldots h_L\\rbrace $ and samples $U \\sim \\mathbb {U}$ , which is not always possible in dialogue.", "In particular, these functions, along with the sampling process $U \\sim \\mathbb {U}$ , might represent a time-consuming, real-world processes like human-evaluation.", "For this reason, in the next section, we prove a new adaptation bound with new statistic $\\Delta $ , which does not require access to the test functions." ], [ "Text-Generation under Data-Shift", "Motivated by the GuessWhat?!", "example and algorithm $\\mathtt {CL}$ , we continue in this section with a general study of domain adaptation for dialogue generation.", "We begin by proposing a new (general) adaptation bound for LEATHER.", "We then apply this general bound to the GuessWhat?!", "algorithm $\\mathtt {CL}$ , motivating fruitful modifications through our analysis." 
], [ "The Energy Statistic and Computation", "Definition 4.1 For any independent random variables $A$ and $B$ , the discrete energy distance is defined $\\varepsilon _{01}(A, B)$ equal to $\\small 2\\mathbf {E}[1\\lbrace A\\ne B\\rbrace ] - \\mathbf {E}[1\\lbrace A \\ne A^{\\prime }\\rbrace ] - \\mathbf {E}[1\\lbrace B \\ne B^{\\prime }\\rbrace ]$ where $A^{\\prime }$ is an i.i.d copy of $A$ , $B^{\\prime }$ is an i.i.d.", "copy of $B$ , and $1\\lbrace \\cdot \\rbrace $ is the indicator function; i.e., it returns 1 for true arguments and 0 otherwise.", "The discrete energy distance is a modification of the energy distance sometimes called the statistical energy.", "It was first proposed by [30] and was studied extensively by [31] in the case where $A$ and $B$ are continuous variables admitting a probability density function.", "In general, and especially in dialogue, this is not the case.", "Aptly, our newly suggested form of the energy distance is more widely applicable to any variables $A$ and $B$ for which equality is defined.", "While general, this distance can be insensitive, especially when $A$ and $B$ take on many values.", "To remedy this, we introduce the following.", "Definition 4.2 Let $\\mathcal {D}$ be any set.", "A coarsening function is a map $c : \\mathcal {D} \\rightarrow \\mathcal {D}$ such that $c(\\mathcal {D}) = \\lbrace c(d) \\mid d \\in \\mathcal {D}\\rbrace $ is finite, and further, $\\vert c(\\mathcal {D}) \\vert < \\vert \\mathcal {D} \\vert $ .", "Since $\\mathcal {D}$ is likely an immensely large set, this can make the signal $1\\lbrace a \\ne b\\rbrace $ for $a,b \\in \\mathcal {D}$ overwhelming compared to the signal $1\\lbrace a = b\\rbrace $ , and therefore, weaken the sensitivity of the discrete energy distance, overall.", "Coarsening functions allow us to alleviate this problem by effectively “shrinking” the set $\\mathcal {D}$ to a smaller set.", "To do this, the role of the coarsening function is to exploit additional context to arrive at an appropriate clustering of the dialogues, which assigns conceptually “near” dialogues to the same cluster.", "So, the choice of $c(d)$ should be a “good” representation of $d$ , in the sense that too much valuable information is not lost.", "As a general shorthand, for a coarsening function $c$ and variables $A, B$ , we write $\\small \\varepsilon _c(A, B) = \\varepsilon _{01}(c(A), c(B)).$ In this paper, we implement $c$ using the results of a $k$ -means clustering with details in Appendix ." 
], [ "Adaptation Bound", "With these defined, we give the novel bound.", "Proof of a more general version of this bound – applicable beyond dialogue contexts (e.g., classification) – is provided in Appendix  Thm.", "REF .", "Notably, our proof requires some technical results on the relationship between discrete energy and the characteristic functions of discrete probability distributions.", "These may also be of independent interest, outside the scope of this paper.", "Theorem 4.1 For any $\\theta \\in \\mathbb {R}^d$ , any coarsening function $c : \\mathcal {D} \\rightarrow \\mathcal {D}$ , and all $\\ell \\in [L]$ $\\small \\mathbf {TD}_\\mathsf {T}^\\ell (\\theta ) \\le \\gamma + \\varphi + \\mathbf {TD}_\\mathsf {S}^\\ell (\\theta ) + \\sqrt{\\varepsilon _c(\\tilde{D}_1, \\tilde{D}_2) \\times \\delta }$ where $\\tilde{D}_1 \\sim \\mathbb {P}_\\theta (C) = \\mathsf {T}(\\theta , C), \\ \\tilde{D}_2 \\sim \\mathbb {Q}_\\theta (C) = \\mathsf {S}(\\theta , C), \\ (C,D) \\sim \\mathbb {G}, \\ U \\sim \\mathbb {U}$ ,For simplicity, let $\\tilde{D}_1, \\tilde{D}_2, U$ be pairwise-independent.", "$\\small \\begin{split}& \\gamma = \\sum \\nolimits _{i \\in \\lbrace 1,2\\rbrace } \\mathbf {E}[\\vert h_\\ell (c(\\tilde{D}_i), U) - h_\\ell (\\tilde{D}_i, U)\\vert ] \\\\& g \\in \\operatornamewithlimits{arg\\,min}_{f \\in [0,1]^{\\mathcal {D} \\times \\mathcal {U}}} \\sum \\nolimits _i \\mathbf {E}[\\vert f( c(\\tilde{D}_i), U) - h_\\ell (D, U)\\vert ] \\\\& \\quad \\text{where}\\quad [0,1]^{\\mathcal {D} \\times \\mathcal {U}} = \\lbrace f \\mid f : \\mathcal {D} \\times \\mathcal {U} \\rightarrow [0,1]\\rbrace .", "\\\\& \\varphi = \\sum \\nolimits _{i \\in \\lbrace 1,2\\rbrace } \\mathbf {E}[\\vert g(c(\\tilde{D}_i), U) - h_\\ell (D, U)\\vert ] \\\\& \\delta = \\mathbf {E} \\Big [ \\sum \\nolimits _{x \\in c(\\mathcal {D})} \\vert g(x, U) - h_\\ell (x, U) \\vert \\Big ].\\end{split}$" ], [ "Unobserved Terms in Dialogue", "As noted, an important benefit of our theory is that we need not assume computationally efficient access to the test functions $\\lbrace h_1\\ldots h_L\\rbrace $ or samples $U \\sim \\mathbb {U}$ .", "Yet, the reader likely notices a number of terms in Eq.", "(REF ) dependent on both of these.", "Similar to the traditional case, we argue that our theory is still predictive because it is often appropriate to assume these unobserved terms are small, or otherwise irrelevant.", "We address each of them in the following: The term $\\gamma $ captures average change in test output as a function of the coarsening function $c$ .", "Whenever $c(\\tilde{D}_i)$ is a good representative of $\\tilde{D}_i$ (i.e., it maintains information to which $h_\\ell $ is sensitive) $\\gamma $ should be small.", "The next term $\\varphi $ is the smallest sum of expected differences that any function of the coarsened dialogues $c(\\tilde{D}_i)$ and the arbitrary randomness $U$ can achieve in mimicking the true test scores $h_\\ell (D, U)$ .", "Since the set of all functions from $\\mathcal {D} \\times \\mathcal {U}$ to $[0,1]$ should be very expressive, this can be seen as another requirement on our coarsened dialogues $c(\\tilde{D}_i)$ .", "For example, when $c(\\tilde{D}_i) = \\tilde{D}_i \\approx D$ this term can be close to zero.", "When instead $\\vert c(\\mathcal {D}) \\vert $ is much smaller than $\\vert \\mathcal {D} \\vert $ (e.g., a singleton set), we expect $\\varphi $ to grow.", "The last term $\\delta $ can actually be large.", "Fortunately, since $\\delta $ is multiplied by the energy 
distance, this issue is mitigated when the statistical energy is small enough.", "Ultimately, the energy is paramount in controlling the impact of this term on the bound's overall magnitude." ], [ "A Predictive Theory", "Granted the background above, our discussion reduces the predictive aspect of the bound to a single key quantity: the discrete energy distance $\varepsilon _c(\tilde{D}_1, \tilde{D}_2)$ .", "In particular, besides the test divergence $\mathbf {TD}_\mathsf {S}$ , all other terms can be assumed reasonably small by proper choice of the coarsening function, or otherwise controlled by the statistical energy through multiplication.", "Note, the first issue is discussed in Appendix .", "Ultimately, the main takeaway is that statistical energy plays the role of $\Delta $ as discussed in Section REF ." ], [ "A New Cooperative Learning Algorithm", "With all theoretical tools in play, we return to the algorithm $\mathtt {CL}$ and the problem raised in Section REF ." ], [ "Recall, we are interested in quantifying and controlling the change in error from source $\mathbf {TD}_{\mathsf {S}}(\theta )$ to target $\mathbf {TD}_{\mathsf {T}}(\theta )$ across the training phases.", "Based on our theory, we know we should decrease the statistical energy between dialogues to reduce this change.", "That is, we should reduce the distance between the generated dialogue distributions across learning phases.", "We hypothesize this may be done by incorporating human dialogue in the task-oriented learning phase.", "The encoder in $\mathtt {CL}$ sees no human dialogue when forming the prediction $\hat{O}$ that is compared to $O$ during task-oriented learning – as seen in Eq.", "(REF ), only the generated dialogue $\hat{D}$ is used.", "In contrast, the encoder sees only the human dialogue $D$ in the alternate language learning phase – i.e., as seen in the surrogate objective in Eq.", "(REF ).", "We hypothesize this stark contrast produces large shifts in the parameters $\beta ^s \rightarrow \beta ^t$ between phases.", "Instead, we propose to regularize the task-oriented learning phase with human dialogue as below: $\small \begin{split}& \min _{\alpha , \beta } \mathbf {E}[1\lbrace \hat{O} \ne O\rbrace ] + \mathbf {E}[1\lbrace \hat{O}^{\prime } \ne O\rbrace ] \ \quad \text{where} \\& \hat{O}^{\prime } = \mathtt {Gues}_\alpha (\mathtt {Enc}_\beta (I, D)), \quad ((I,O), D) \sim \mathbb {G}\end{split}$ and $\hat{O}$ is still as described in Eq.", "(REF ).", "Intuitively, this should constrain the parameter shift $\beta ^s \rightarrow \beta ^t$ , thereby constraining the change in outputs of the encoder, and ultimately constraining the change in outputs of the question-generator, which is conditioned on the encoder outputs.", "As the generated dialogue distributions from distinct learning phases will be more similar by this constraint, we hypothesize the penultimate effect will be decreased statistical energy (i.e., since energy measures distance of distributions).", "Based on our theory, reduced energy provides resolution to our problem: test divergence should be preserved from source to target." ], [ "Setup", "In general, we use the experimental settings of [25] (e.g., hyperparameters, validation, etc.)", "with full details available in the code.", "CL denotes the original algorithm proposed by [25] (Section REF ).", "LEATHER denotes our LEATHER-inspired modification (Section REF )."
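Returning to the regularized objective above, the following minimal sketch shows the modified task-oriented step, reusing the illustrative stand-ins (`enc`, `gues`, `xent`, `opt_task`, `d`, `n_obj`) from the training-loop sketch given earlier; as before, the tensors are placeholders, not the data pipeline of [25].

```python
# Reusing the stand-ins (enc, gues, xent, opt_task, d, n_obj) from the sketch above.
import torch

feats_gen = torch.randn(32, d)   # Enc input built from (I, D_hat): generated dialogue
feats_hum = torch.randn(32, d)   # Enc input built from (I, D): human dialogue
goal = torch.randint(n_obj, (32,))

# LEATHER-style task-oriented step: the usual guesser loss on generated
# dialogue plus a regularizing guesser loss on human dialogue (O' above).
loss = xent(gues(enc(feats_gen)), goal) + xent(gues(enc(feats_hum)), goal)
opt_task.zero_grad(); loss.backward(); opt_task.step()
```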
], [ "Automated Metrics", "We report average accuracy $\\mathbf {acc}$ of the guesser module in identifying the true goal-object across three random seeds as well as average lexical diversity ($\\mathbf {lex div}$ ; type/token ratio over all dialogues), average question diversity ($\\mathbf {q div}$ ; % unique questions over all dialogues), and average percent of dialogues with verbatim repeated questions ($\\mathbf {rep q}$ ).", "$\\mathbf {acc}$ quantifies task-success, while subsequent metrics are designed to quantify human-likeness of the generated dialogue.", "These metrics were all previously computed by [25] with details in their code." ], [ "Human Evaluation", "We asked two annotators to help us further evaluate the results.", "Throughout the process, human subject guidelines from the authors' institution were followed and the task was approved by our institution human subject board.", "The annotators examined contextualized human dialogues and generated dialogues from a CL model and LEATHER model.", "All dialogues used the same image/goal context and annotators observed all dialogues for a specific context in random order without knowing how each dialogue was created.", "Across 50+ dialogues, average percentage of irrelevant questions per dialogue ($\\mathbf {irr q}$ ) was determined.An irrelevant question ignores the image or current dialogue context.", "For example, in Figure REF , CL asks about the man's “face” (Q5) after learning the goal-object is a car, which ignores dialogue-context.", "CL also hallucinates an object “cut off” on the right side (Q4), which ignores image context.", "Average percentage of specific questions ($\\mathbf {spc q}$ ) was also determined.A specific question contains two or more modifiers of one or more nouns.", "For example, LEATHER modifies “car” with “behind” and “man” with “the white shirt” in Figure REF Q7.", "We report $\\mathbf {TD}$ , which gives the average difference in percentages from the corresponding human dialogue.", "Sans scaling, these $\\mathbf {TD}$ metrics are examples of the test divergence in Eq.", "(REF ) using a human-evaluation test function.", "Qualitative analysis of errors was also conducted based on annotator remarks (provided later in this section)." ], [ "Impact of ", "In Table REF , we compare the cooperative learning algorithms CL and LEATHER.", "The former uses only the generated dialogue during task-oriented learning, while the latter incorporates human data to regularize the change in parameters underlying the environmental shift.", "As predicted by our theory, regularization is very beneficial, improving task-success and human-likeness.", "For example, LEATHER decreases % of irrelevant questions by 4.8% compared to CL, which is more similar to human dialogue according to the test divergence ($\\mathbf {TD}$ ).", "Interestingly, LEATHER also decreased % of specific questions by 1.7%.", "Based on the $\\mathbf {TD}$ , this is also more similar to human dialogue, indicating humans ask fewer specific questions too.", "The design of the $\\mathbf {TD}$ allows us to capture these non-intuitive results.", "Notably, regularization inspired by LEATHER allows us to train longer without degrading task-success or suffering from mode collapse (i.e., repeated questions).", "Automated human-likeness metrics for the last epoch (in parentheses) show substantial improvements over CL in this case." ], [ "Cooperative vs. 
Reinforcement Learning", "In Table REF , we compare the two cooperative learning algorithms CL and LEATHER to the reinforcement learning algorithm (RL).", "We use the results reported by [25] for RL, since we share an experimental setup.", "Compared to RL, both cooperative learning approaches improve task success and human-likeness.", "As noted in Section , the theoretical framework for RL (i.e., POMDPs) is not equipped to study interaction of the distinct learning phases within this algorithm (i.e., with respect to data-shift).", "Better theoretical understanding could explain poor performance and offer improvement as demonstrated with LEATHER, which improves human-likeness of CL." ], [ "Qualitative Analysis", "In dialogue generated by CL, questions with poor relevance ignored the image context (e.g., model hallucination).", "In dialogue generated by the LEATHER model, irrelevant questions ignored current dialogue context (e.g., a question which should already be inferred from existing answers).", "We hypothesize this may be due to poor faith in the automated answer-player used for training, which also has problems with model hallucination (e.g., Figure REF ).", "Both models had issues with repeated questions.", "In human dialogue, issues were grammatical with few irrelevant questions." ], [ "Here, we show statistical energy predicts test divergence, empirically.", "Computation of energy can be automated, so predictive ability is useful for model-selection when human evaluation is not available.", "We consider test divergence ($\\mathbf {TD}$ ) with 4 groups of tests: (A) the 9 fine-grained strategy classifiers of [25] used as in Eq.", "(REF ), (B) lexical diversity computed as type/token ratio per dialogue, (C) question repetition computed as a binary indicator for each dialogue, and (D) the discussed human-evaluations of question relevance/specificity.", "Figure REF plots change in $\\mathbf {TD}$ for (A-C) as a function of energy.", "Specifically, change in $\\mathbf {TD}$ is the difference $\\mathbf {TD}_{\\mathsf {T}}(\\theta ) - \\mathbf {TD}_{\\mathsf {S}}(\\theta )$ where ${\\mathsf {S}}$ and ${\\mathsf {T}}$ are defined by the transition from language learning to task-oriented learning discussed in Section .", "We plot this change at the transitions after epochs 65, 75, 85, and 95 (out of 100 total).", "Notably, energy is predictive and, specifically, is linearly related to change in test divergence.", "For (D), in Table REF , we show average energy across all transitions compared to test divergence.", "Energy is also predictive for these human-evaluation tests." ], [ "Conclusion", "This work presents LEATHER, a theoretically motivated framework for learning to generate human-like dialogue.", "The energy statistic, which is derived from this theory, is used to analyze and improve an algorithm for task-oriented dialogue generation.", "Further, energy is empirically predictive of improvements in dialogue quality, measured by both automated and human evaluation.", "Future work may involve more experiments to test the utility of LEATHER in other dialogue settings.", "Theoretically, we hope to study sample-complexity in LEATHER, which is a hallmark of common PAC theories." ], [ "Acknowledgments", "We thank the anonymous reviewers for helpful feedback and Jennifer C. Gates, CCC-SLP, for input on qualitative evaluations of dialogue in experiments." 
], [ "Novel Adaptation Bound and Computation of Energy Statistic", "In this section, we give our novel adaptation bound and details for the accompanying energy statistic.", "There is some redundancy between this section and Section , but in general, this section is more detailed.", "Recall, source error is denoted $\\mathbf {TD}_\\mathsf {S}$ and is observed on the environment $\\mathbb {Q}_\\theta (c) = \\mathsf {S}(\\theta , c)$ .", "The target error is denoted $\\mathbf {TD}_\\mathsf {T}$ and is observed on the environment $\\mathbb {P}_\\theta (c) = \\mathsf {T}(\\theta , c)$ .", "For the algorithm CL discussed in the main text, the target is induced by the task-oriented learning phase and the source is induced by the language learning phase." ], [ "Predictive Adaptation Theories", "An important quality of traditional domain adaptation bounds, proposed for classification and regression problems, is that they offer a predictive theory.", "Namely, without observing the target error $\\mathbf {TD}_\\mathsf {T}$ , we can infer this quantity from $\\Delta $ and the source error $\\mathbf {TD}_\\mathsf {S}$ .", "The utility of this is two-fold: first, it allows us to design algorithms that prepare a learner for data-shift by controlling $\\Delta $ ; second, it allows a practitioner to select an appropriate model to deploy in the presence of data-shift by comparing the different values of $\\Delta $ for each model.", "In general, these use-cases would not be possible without $\\Delta $ because the target error $\\mathbf {TD}_\\mathsf {T}$ is not observable until it is too late.", "In contrast, the quantity $\\Delta $ should be observable.", "While this is not always true of $\\Delta $ , authors typically reduce the main effect of $\\Delta $ to one key statistic, which is observable.", "For example, [1] reduce $\\Delta $ to one key statistic called the $h$ -discrepancy by suggesting the other components making up $\\Delta $ are small.", "This is why we use an “approximate” inequality in the main text, since other (small) terms may contribute to the bound." ], [ "Traditional Theories Are Not Predictive", "Traditional theories of adaptation are not predictive for dialogue generation.", "Namely, computation of $\\Delta $ and its key components generally relies on computationally efficient access to the tests $\\lbrace h_1 \\ldots h_L\\rbrace $ and requires sampling from the unknown distribution $U \\sim \\mathbb {U}$ .", "While we can always observe the outputs of $\\lbrace h_1\\ldots h_L\\rbrace $ with randomness $U \\sim \\mathbb {U}$ through the source error $\\mathbf {TD}_\\mathsf {S}(\\theta )$ , it is not always the case that we have computational efficiently access to these tests or the randomness.", "For example, as noted in Section REF , the group of tests $\\lbrace h_1\\ldots h_L\\rbrace $ along with samples $U$ from the unknown distribution $\\mathbb {U}$ may represent complex real-world processes such as human-evaluation.", "Even for simpler evaluation metrics based on text-classifiers (e.g., like $\\lbrace s_1\\ldots s_L\\rbrace $ in Eq.", "(REF )) algorithms for computing $\\Delta $ turn out to be non-trivial, and must be handled on a case-by-case basis.", "Thus, in generation contexts, we typically have no way of computing $\\Delta $ algorithmically, and when we do, it can be difficult to implement.", "If we require an easily implemented, predictive theory, then the classical theory is ruled out.", "As a solution, we propose a novel adaptation bound." 
], [ "A Novel Adaptation Bound", "First, we define some terms." ], [ "The Energy Statistic and Computation", "Definition A.1 For any independent random variables $A$ and $B$ , the discrete energy distance is defined: $\\small \\varepsilon _{01}(A, B) = 2\\mathbf {E}[1\\lbrace A\\ne B\\rbrace ] - \\mathbf {E}[1\\lbrace A \\ne A^{\\prime }\\rbrace ] - \\mathbf {E}[1\\lbrace B \\ne B^{\\prime }\\rbrace ]$ where $A^{\\prime }$ is an i.i.d copy of $A$ , $B^{\\prime }$ is an i.i.d.", "copy of $B$ , and $1\\lbrace \\cdot \\rbrace $ is the indicator function; i.e., it returns 1 for true arguments and 0 otherwise.", "The discrete energy distance is a modification of the energy distance sometimes called the statistical energy.", "It was first proposed by [30] and was studied extensively by [31] in the case where $A$ and $B$ are continuous variables admitting a probability density function.", "In general, and especially in dialogue, this is not the case.", "Aptly, we suggest the above form of the energy distance, which is widely applicable to any variables $A$ and $B$ for which equality is defined.", "While general, this energy distance can be strict and insensitive, especially when $A$ and $B$ take on many possible values.", "To remedy this, we propose the following addendum.", "Definition A.2 Let $\\mathcal {D}$ be any set.", "A coarsening function is a map $c : \\mathcal {D} \\rightarrow \\mathcal {D}$ such that $c(\\mathcal {D}) = \\lbrace c(d) \\mid d \\in \\mathcal {D}\\rbrace $ is finite, and further, $\\vert c(\\mathcal {D}) \\vert < \\vert \\mathcal {D} \\vert $ .", "Since $\\mathcal {D}$ is likely an immensely large set, this can make the signal $1\\lbrace a \\ne b\\rbrace $ for $a,b \\in \\mathcal {D}$ overwhelming compared to the signal $1\\lbrace a = b\\rbrace $ , and therefore, weaken the sensitivity of the discrete energy distance, overall.", "Coarsening functions allow us to alleviate this problem by effectively “shrinking” the set $\\mathcal {D}$ to a smaller set.", "To do this, the role of the coarsening function is to exploit additional context to arrive at an appropriate clustering of the dialogues, which assigns conceptually “near” dialogues to the same cluster.", "So, the choice of $c(d)$ should be a “good” representation of $d$ , in the sense that too much valuable information is not lost.", "As a general shorthand, for a coarsening function $c$ and variables $A, B$ , we write $\\small \\varepsilon _c(A, B) = \\varepsilon _{01}(c(A), c(B)).$" ], [ "Example", "One example of a coarsening function for dialogues is $k$ -means clustering.", "In fact, this is the coarsening function we use to compute energy in Section , selecting $k = 100$ .", "Real-valued vector representations of dialogues (e.g., from model latent space) can capture semantic information about the dialogue [4], so we use latent space representations (i.e., the output of the encoder) to represent each dialogue and conduct a $k$ -means clustering on these representations.", "For a dialogue $d$ the output $c(d)$ is then defined by the cluster of $d$ ; i.e., we select an arbitrary dialogue to represent the whole of each cluster and assign this dialogue as the output $c(d)$ .", "In practical implementations, it is typically easier to just compute the energy distance on the cluster labels themselves; this statistic is always equivalent to the energy on the coarsened dialogues, since the map between cluster representatives and cluster labels is bijective.", "Later, within Lemma REF , we prove this equivalence for any 
bijective map.", "Of course, regardless of implementation, this clustering is dependent on the choice of $k$ .", "Figure REF shows that the results in Section  are robust to different choices of $k$ .", "In all cases, there is a linear relationship between the energy and the change in the test divergence.", "Figure: Comparison of energy statistics and automated test functions as in Section .", "Here, we vary the parameter $k$ in the $k$ -means clustering used to determine the coarsening function when computing energy.", "Trends reported in the main text are robust to variation in $k$ ." ], [ "Adaptation Bound", "With these defined, we give the novel bound.", "Proof of a more general version of this bound – applicable beyond dialogue contexts – is provided in Appendix (Thm.", "REF ).", "In particular, the general version is “backwards compatible” in the sense that it also applies to traditional learning theoretic settings like classification and regression.", "Arguably, in these settings, it also remains more computationally efficient than existing theories.", "Notably, our proof requires some technical results on the relationship between discrete energy and the characteristic functions of discrete probability distributions.", "These may also be of independent interest, outside the scope of this paper.", "Theorem A.1 For any $\theta \in \mathbb {R}^d$ , any coarsening function $c : \mathcal {D} \rightarrow \mathcal {D}$ , and all $\ell \in [L]$ $\small \mathbf {TD}_\mathsf {T}^\ell (\theta ) \le \gamma + \varphi + \mathbf {TD}_\mathsf {S}^\ell (\theta ) + \sqrt{\varepsilon _c(\tilde{D}_1, \tilde{D}_2) \times \delta }$ where $\tilde{D}_1 \sim \mathbb {P}_\theta (C) = \mathsf {T}(\theta , C), \ \tilde{D}_2 \sim \mathbb {Q}_\theta (C) = \mathsf {S}(\theta , C), \ (C,D) \sim \mathbb {G}, \ U \sim \mathbb {U}$ .", "For simplicity, let $\tilde{D}_1, \tilde{D}_2, U$ be pairwise-independent.", "When independence does not hold, similar results can be derived under assumption of context-conditional independence.", "$\small \begin{split}& \gamma = \mathbf {E}[\vert h_\ell (c(\tilde{D}_1), U) - h_\ell (\tilde{D}_1, U)\vert ] + \mathbf {E}[\vert h_\ell (c(\tilde{D}_2), U) - h_\ell (\tilde{D}_2, U)\vert ] \\& g \in \operatornamewithlimits{arg\,min}_{f \in [0,1]^{\mathcal {D} \times \mathcal {U}}} \sum \nolimits _{i \in \lbrace 1,2\rbrace } \mathbf {E}[\vert f( c(\tilde{D}_i), U) - h_\ell (D, U)\vert ] \quad \text{where}\quad [0,1]^{\mathcal {D} \times \mathcal {U}} = \lbrace f \mid f : \mathcal {D} \times \mathcal {U} \rightarrow [0,1]\rbrace . \\& \varphi = \mathbf {E}[\vert g(c(\tilde{D}_1), U) - h_\ell (D, U)\vert ] + \mathbf {E}[\vert g(c(\tilde{D}_2), U) - h_\ell (D, U)\vert ]\\& \delta = \mathbf {E} \Big [ \sum \nolimits _{x \in c(\mathcal {D})} \vert g(x, U) - h_\ell (x, U) \vert \Big ].\end{split}$" ], [ "Unobserved Terms in Dialogue", "As noted, an important benefit of our theory is that we need not assume computationally efficient access to the test functions $\lbrace h_1\ldots h_L\rbrace $ or samples $U \sim \mathbb {U}$ .", "Yet, the reader likely notices a number of terms in Eq.", "(REF ) dependent on both of these.", "Similar to the traditional case, we argue that our theory is still predictive because it is typically appropriate to assume these unobserved terms are small, or otherwise irrelevant.", "We address each of them in the following: The term $\gamma $ captures average change in test output
as a function of the coarsening function $c$ .", "Whenever $c(\\tilde{D}_i)$ is a good representative of $\\tilde{D}_i$ (i.e., it maintains information to which $h_\\ell $ is sensitive) $\\gamma $ should be small.", "Since we choose the coarsening function, the former premise is not a strong requirement.", "In practice, if choice of $c$ is unclear, we recommend studying many choices as in Figure REF .", "The next term $\\varphi $ is the smallest sum of expected differences that any function of the coarsened dialogues $c(\\tilde{D}_i)$ and the arbitrary randomness $U$ can achieve in mimicking the true test scores $h_\\ell (D, U)$ .", "In general, the set of all functions from $\\mathcal {D} \\times \\mathcal {U}$ to $[0,1]$ should be very expressive; e.g., it contains $h_\\ell $ itself and any other function which might mimic $h_\\ell (D, U)$ better when applied to $c(\\tilde{D}_i)$ and $U$ .", "So, it is not unreasonable to expect some good minimizer to exist, and therefore, $\\varphi $ to be small.", "Using this logic, one additional constraint is that $c(\\tilde{D}_i)$ has appropriate variance.", "For instance, if $c(\\tilde{D}_i)$ is constant and $D$ is not, $\\varphi $ can easily be large.", "Instead, when $c(\\tilde{D}_i)$ does have variance, the expressiveness of the function class $[0,1]^{\\mathcal {D} \\times \\mathcal {U}}$ can be well exploited.", "For reasonable dialogue learners and a well-chosen $c$ , the variance of $c(\\tilde{D}_i)$ is a non-issue.", "The last term $\\delta $ may actually be large, but we argue this is also a non-issue for interpretation purposes.", "In general, because $\\delta $ is an unnormalized sum, its magnitude grows with the size of $c(\\mathcal {D})$ , even if the individual summands may be small.", "Fortunately, since $\\delta $ is multiplied by the energy distance, this issue is mitigated when the statistical energy is small enough.", "Ultimately, the energy is paramount in controlling the impact of this term on the bound's overall magnitude." ], [ "A Predictive Theory", "Granted the background above, our discussion reduces the predictive aspect of the bound to a single key quantity: the discrete energy distance $\\varepsilon _c(\\tilde{D}_1, \\tilde{D}_2)$ .", "In particular, besides the test divergence $\\mathbf {TD}_\\mathsf {S}$ (known prior to the environmental change), all other terms can be assumed reasonably small, or otherwise controlled by the statistical energy through multiplication.", "Therefore, if the statistical energy between environments is small, it can be reasonable to assume the dialogue quality has been maintained or improved.", "Otherwise, it is possible the quality of the generated dialogue has substantially degraded.", "In this way, the statistical energy is an easily observable quantity that assists us in determining if the source error $\\mathbf {TD}_\\mathsf {S}$ known before the environmental change is a good representative of the unknown target error $\\mathbf {TD}_\\mathsf {T}$ , which is observed after the environmental change." 
], [ "Use Cases", "In general, controlling the statistical energy between dialogues ensures we preserve dialogue quality when the evaluation metrics we care about are not available.", "As demonstrated in the main text, this makes it useful in algorithm design; i.e., to inform decisions in model training.", "Energy can also be useful for model selection.", "Namely, the generation model whose dialogues have the smallest energy compared to goal dialogue should produce the highest quality dialogue.", "To see this, simply set $\\tilde{D}_2 = D$ in the bound.", "Similar logical reduction shows the energy is the dominating term in this case as well." ], [ "Proofs", "In this section we prove the claimed theoretical results.", "So that the results may be more broadly applicable, we prove them in a more general context and then specify to the context of dialogue generation (in the main text and Appendix )." ], [ "An Adaptation Bound Based on a Discrete Energy Statistic", "In this section, we propose an adaptation bound based on the energy statistic.", "As we are aware, ours are the first theoretical results relating the statistical energy between distributions to the change in function outputs across said distributions.", "Given the use of the discrete energy distance (Def.", "REF ) and the accompanying coarsening function (Def.", "REF ), we appropriately choose to prove our theoretical results for discrete random variables (i.e., those which take on only a countable number of values and exhibit a probability mass function).", "The effect of this choice is that we also contribute a number of new theoretical results relating the probability mass function of a real-valued, discrete random variable to its characteristic function (i.e., in similar style to the Parseval-Plancherel Theorem).", "Furthermore, we expand on the relationship between the statistical energy of distributions and their characteristic functions.", "While this has been well studied in the continuous setting [31] where the distributions of random variables admit probability densities (i.e., absolutely continuous with respect to the Lesbesgue measure), it has not been studied in the case of discrete random variables.", "We start our results using only real-valued discrete variables, but prove our main results for all discrete random variables using Lemma  REF" ], [ "Setup", "Suppose $A$ and $B$ are discrete random variables taking on values in $\\mathbb {R}^d$ for some $d$ .", "Respectively, the distribution of $A$ is $\\alpha $ and the distribution of $B$ is $\\beta $ .", "The space $\\Omega \\subset \\mathbb {R}^d$ is the countable subset of $\\mathbb {R}^d$ for which $\\alpha $ or $\\beta $ assigns non-zero probability; i.e., $\\Omega = \\mathrm {supp}(\\alpha ) \\cup \\mathrm {supp}(\\beta )$ .", "Then, the expectation of any function $f : \\mathbb {R}^d \\rightarrow \\mathbb {R}$ of $A$ is defined: $\\mathbf {E}[f(A)] = \\int _{\\mathbb {R}^d} f\\mathrm {d}\\alpha = \\sum _{a \\in \\Omega } f(a)p_\\alpha (a)$ where $p_\\alpha $ is the probability mass function for $A$ (i.e., $\\alpha $ ).", "Expectations of functions of $B$ are similarly defined.", "The characteristic function of $A$ is defined as the complex-conjugate of the Fourier-Stieltjes transform of the probability mass function $p_\\alpha $ .", "More explicitly, it is the function $\\hat{p}_\\alpha : \\mathbb {R}^d \\rightarrow \\mathbb {R}$ defined $\\hat{p}_\\alpha (\\tau ) = \\mathbf {E}[\\mathrm {exp}\\lbrace i \\tau ^\\mathrm {T}A\\rbrace ] = \\sum _{a \\in \\Omega } 
p_\\alpha (a)\\mathrm {exp}\\lbrace i \\tau ^\\mathrm {T}a\\rbrace $ where $i$ is the imaginary unit (i.e., $i^2 = -1$ ) and $\\tau ^\\mathrm {T}a$ is the (inner) product between column vectors $\\tau $ and $a$ .", "Note, the characteristic function always exists and is finite for each $\\tau $ ." ], [ "Parseval-Plancherel Theorem (Reprise)", "One notable use for the characteristic function is the following inversion formula.", "In the discrete context we consider, [6] proves the following $p_\\alpha (a) = \\lim _{\\tau _1 \\rightarrow \\infty } \\lim _{\\tau _2 \\rightarrow \\infty } \\ldots \\lim _{\\tau _d \\rightarrow \\infty } \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\hat{p}_\\alpha (t)\\mathrm {exp}\\lbrace -i t^\\mathrm {T}a\\rbrace \\lambda (\\mathrm {d}t)$ where $\\tau = (\\tau _1, \\tau _2, \\ldots , \\tau _d)^\\mathrm {T}$ , $B(\\tau ) = \\lbrace x \\in \\mathbb {R}^d \\mid -\\tau _i \\le x_i \\le \\tau _i\\rbrace $ , and $\\lambda $ is the Lebesgue measure.", "This inversion formula highlights the connection between the characteristic function and the general Fourier transform as alluded to just before Eq.", "(REF ), since Fourier transforms are well known for their own inversion formulas.", "Another commonly used result in Fourier Analysis (related to inversion) is the Parseval-Plancherel Theorem.", "We prove a variation on this result below.", "As we are aware, it is the first which uses the transform given in Eq.", "(REF ) (i.e., specific to discrete, real-valued random variables).", "Lemma B.1 For any discrete random variables $A$ and $B$ as described, taking values in $\\mathbb {R}^d$ , $\\sum _{x \\in \\Omega } \\vert p_\\alpha (x) - p_\\beta (x)\\vert ^2 = \\lim _{\\tau _1 \\rightarrow \\infty } \\lim _{\\tau _2 \\rightarrow \\infty } \\ldots \\lim _{\\tau _d \\rightarrow \\infty } \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\vert \\hat{p}_\\alpha (t) - \\hat{p}_\\beta (t) \\vert ^2\\lambda (\\mathrm {d}t).$ For any function $f : \\mathbb {R}^d \\rightarrow \\mathbb {R}^+$ such that $\\sum _{x \\in \\Omega } f(x) < \\infty $ for all $t \\in \\mathbb {R}^d$ , we prove the following more general result $\\sum _{x \\in \\Omega } f^2(x) = \\lim _{\\tau _1 \\rightarrow \\infty } \\lim _{\\tau _2 \\rightarrow \\infty } \\ldots \\lim _{\\tau _d \\rightarrow \\infty } \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\hat{f}(x)\\hat{f}^*(x)\\lambda (\\mathrm {d}t)$ where as before a “hat” denotes the Fourier-Stieltjes transform given in Eq.", "(REF ) and the new notation $\\hat{f}^*$ denotes the complex-conjugate of $\\hat{f}$ .", "Observe, this proves the desired results because setting $f(x) = p_\\alpha (x) - q_\\alpha (x) $ we have $f^2(x) = (p_\\alpha (x) - q_\\alpha (x))^2 = \\vert p_\\alpha (x) - q_\\alpha (x)\\vert ^2$ and $\\begin{split}\\hat{f}(x)\\hat{f}^*(x) & = \\widehat{(p_\\alpha (x) - p_\\alpha (x))}\\widehat{(p_\\alpha (x) - p_\\alpha (x))}^* \\\\& = (\\hat{p}_\\alpha (x) - \\hat{p}_\\alpha (x))(\\hat{p}_\\alpha (x) - \\hat{p}_\\alpha (x))^* = \\vert \\hat{p}_\\alpha (x) - \\hat{p}_\\alpha (x) \\vert ^2.\\end{split}$ Proceeding with the proof of Eq.", "(REF ) we have $\\begin{split}& \\lim _{\\tau _1 \\rightarrow \\infty } \\lim _{\\tau _2 \\rightarrow \\infty } \\ldots \\lim _{\\tau _d \\rightarrow \\infty } \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\hat{f}(x)\\hat{f}^*(x)\\lambda (\\mathrm {d}t) \\\\& = \\lim _{\\tau _i \\rightarrow \\infty } 
\\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\Bigg (\\sum _{x \\in \\Omega } f(x)\\mathrm {exp}\\lbrace i t^\\mathrm {T}x\\rbrace \\Bigg ) \\Bigg ( \\sum _{x \\in \\Omega } f(x)\\mathrm {exp}\\lbrace -i t^\\mathrm {T}x\\rbrace \\Bigg ) \\lambda (\\mathrm {d}t) \\\\& = \\lim _{\\tau _i \\rightarrow \\infty } \\Bigg ( \\prod _{i} 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\sum _{x \\in \\Omega } \\sum _{x^{\\prime } \\in \\Omega } f(x)f(x^{\\prime }) \\mathrm {exp} \\lbrace i(t^\\mathrm {T}x - t^\\mathrm {T}x^{\\prime })\\rbrace \\lambda (\\mathrm {d}t) \\qquad \\text{(Fubini-Tonelli)} \\\\& = \\lim _{\\tau _i \\rightarrow \\infty } \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\ \\sum _{x \\in \\Omega } \\sum _{x^{\\prime } \\in \\Omega } f(x)f(x^{\\prime }) \\ \\int _{B(\\tau )} \\mathrm {exp} \\lbrace i(t^\\mathrm {T}x - t^\\mathrm {T}x^{\\prime })\\rbrace \\lambda (\\mathrm {d}t) \\qquad \\text{(Fubini-Tonelli)} \\\\& = \\lim _{\\tau _i \\rightarrow \\infty } \\sum _{x \\in \\Omega } \\sum _{x^{\\prime } \\in \\Omega } f(x)f(x^{\\prime }) \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\Bigg [\\ \\int _{B(\\tau )} \\mathrm {exp} \\lbrace i(t^\\mathrm {T}x - t^\\mathrm {T}x^{\\prime })\\rbrace \\lambda (\\mathrm {d}t) \\Bigg ] \\\\& = \\lim _{\\tau _i \\rightarrow \\infty } \\sum _{x \\in \\Omega } \\sum _{x^{\\prime } \\in \\Omega } f(x)f(x^{\\prime }) \\Bigg ( \\prod _{i=1}^d \\Bigg [ 1 / (2\\tau _i) \\ \\int _{-\\tau _i}^{\\tau _i} \\mathrm {exp} \\lbrace i(t_i(x_i - x^{\\prime }_i)\\rbrace \\mathrm {d}t_i \\Bigg ] \\Bigg ) \\quad \\text{(Fubini-Tonelli)} \\\\& = \\lim _{\\tau _i \\rightarrow \\infty } \\sum _{x \\in \\Omega } \\sum _{x^{\\prime } \\in \\Omega } f(x)f(x^{\\prime }) \\Bigg ( \\prod _{i=1}^d \\chi (x_i, x^{\\prime }_i , \\tau _i) \\Bigg ) \\quad \\text{where}\\quad \\chi = {\\left\\lbrace \\begin{array}{ll}\\frac{\\sin \\tau _i(x_i - x^{\\prime }_i)}{\\tau _i (x_i - x^{\\prime }_i)} & \\text{if} \\ x_i \\ne x^{\\prime }_i, \\\\1 & \\text{else}\\end{array}\\right.}", "\\\\& = \\sum _{x \\in \\Omega } \\sum _{x^{\\prime } \\in \\Omega } f(x)f(x^{\\prime }) \\Bigg ( \\lim _{\\tau _i \\rightarrow \\infty } \\prod _{i=1}^d \\chi (x_i, x^{\\prime }_i , \\tau _i) \\Bigg ) \\qquad \\text{(DCT)} \\\\& = \\sum _{x \\in \\Omega } \\sum _{x^{\\prime } \\in \\Omega } f(x)f(x^{\\prime }) 1[x = x^{\\prime }] \\quad \\text{where}\\quad 1[\\mathrm {arg}] = {\\left\\lbrace \\begin{array}{ll}1 & \\text{if} \\ \\mathrm {arg} \\ \\text{holds}, \\\\0 & \\text{else}\\end{array}\\right.}", "\\\\& = \\sum _{x \\in \\Omega } f^2(x).\\end{split}$ In details: the first equality follows by definition; the second and third by Fubini-Tonelli Theorem;The primary assumption of Fubini-Tonelli Theorem requires the absolute value of the integrand have finite double or iterated integral/sum.", "In the first case, with the iterated sum, it is clear for each fixed $t$ since $\\sum _x f(x)$ is bounded and so is $\\exp \\lbrace -iz\\rbrace $ for all $z$ .", "In the second and third cases, we simply cite the boundedness of $B(\\tau )$ for each fixed $\\tau $ .", "the fourth by simple rules of arithmetic; the fifth again by Fubini-Tonelli Theorem to decompose the volume calculation into a product; the sixth by evaluating the integral; seventh by the dominated convergence theorem;The primary assumption of the DCT is that the sequence of functions being integrated (or summed in our case) is dominated by some function $g$ with finite integral (i.e., in the sense that the 
absolute value of every function in the sequence is less than or equal to $g$ on all inputs).", "Again, this is easy to see using properties assumed on $f$ and the fact that $|\\chi | \\le 1$ for all inputs.", "the eighth by evaluating the limit; and the last by simple arithmetic." ], [ "The Energy of Discrete Distributions as Described by their Characteristic Functions", "Lemma B.2 For any independent, discrete random variables $A$ and $B$ as described, taking values in $\\mathbb {R}^d$ , $\\varepsilon _{\\mathrm {01}}(A, B) = \\lim _{\\tau _1 \\rightarrow \\infty } \\lim _{\\tau _2 \\rightarrow \\infty } \\ldots \\lim _{\\tau _d \\rightarrow \\infty } \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\vert \\hat{p}_\\alpha (t) - \\hat{p}_\\beta (t) \\vert ^2\\lambda (\\mathrm {d}t).$ According to [31], for independent $A$ and $B$ , we have $\\begin{split}& \\vert \\hat{p}_\\alpha (t) - \\hat{p}_\\beta (t) \\vert ^2 = \\mathbf {E}[\\cos \\lbrace t^\\mathrm {T}(A - A^{\\prime }) \\rbrace + \\cos \\lbrace t^\\mathrm {T}(B - B^{\\prime }) \\rbrace - \\cos \\lbrace t^\\mathrm {T}(A - B) \\rbrace ] \\\\& = \\mathbf {E}\\lbrace 2[1 - \\cos \\lbrace t^\\mathrm {T}(A - B) \\rbrace ] - [1 - \\cos \\lbrace t^\\mathrm {T}(A - A^{\\prime }) \\rbrace ] - [1 - \\cos \\lbrace t^\\mathrm {T}(B - B^{\\prime }) \\rbrace ]\\rbrace \\end{split}$ where $A^{\\prime }$ and $B^{\\prime }$ are i.i.d.", "copies of $A$ and $B$ , respectively.", "With the equivalence above, by Fubini's Theorem, we may interchange the expectation and integral in Eq.", "(REF ).", "We may also change the order of integration to arrive at $\\begin{split}& \\lim _{\\tau _1 \\rightarrow \\infty } \\lim _{\\tau _2 \\rightarrow \\infty } \\ldots \\lim _{\\tau _d \\rightarrow \\infty } \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\vert \\hat{p}_\\alpha (t) - \\hat{p}_\\beta (t) \\vert ^2\\lambda (\\mathrm {d}t) \\\\& = \\lim _{\\tau _i \\rightarrow \\infty } \\mathbf {E} \\Bigg [ \\Bigg ( \\prod _{i=1}^d \\frac{1}{(2\\tau _i)} \\Bigg ) \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _d}^{\\tau _d} \\Big \\lbrace 2 \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(A_i - B_i) \\Big ) \\\\& \\hspace{70.0pt} - \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(A_i - A_i^{\\prime }) \\Big ) - \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(B_i - B_i^{\\prime }) \\Big ) \\Big \\rbrace \\mathrm {d}\\tau _d\\ldots \\mathrm {d}\\tau _1 \\Bigg ].\\end{split}$ To evaluate the integral we first observe, for any $x \\in \\mathbb {R}^d$ , $\\begin{split}\\int _{-\\tau _d}^{\\tau _d} 1 - \\cos \\sum _{i=1}^d \\tau _ix_i \\mathrm {d}\\tau _d & = 2\\tau _d - \\frac{\\sin \\Big ( \\tau _dx_d + \\sum _{i=1}^{d-1} \\tau _i x_i\\Big ) - \\sin \\Big ( -\\tau _dx_d + \\sum _{i=1}^{d-1} \\tau _i x_i\\Big )}{x_d} \\\\& = 2\\tau _d - \\frac{2\\cos \\Big (\\sum _{i=1}^{d-1} \\tau _i x_i\\Big )\\sin (\\tau _dx_d )}{x_d}.\\end{split}$ Notice, the above equation implies an iterative pattern which can be used to solve the multiple integral.", "Keeping in mind which terms are constants with respect to the differential, we have $\\begin{split}& \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _{d-1}}^{\\tau _{d-1}} \\Big ( \\int _{-\\tau _d}^{\\tau _d} 1 - \\cos \\sum _{i=1}^d \\tau _ix_i \\mathrm {d}\\tau _d \\Big ) \\mathrm {d}\\tau _{d-1}\\ldots \\mathrm {d}\\tau _1 \\\\& = \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _{d-2}}^{\\tau _{d-2}} \\Bigg ( \\int _{-\\tau _{d-1}}^{\\tau _{d-1}} 2\\tau _d - \\frac{2\\cos \\Big (\\sum 
_{i=1}^{d-1} \\tau _i x_i\\Big )\\sin (\\tau _dx_d )}{x_d} \\mathrm {d}\\tau _{d-1} \\Bigg )\\mathrm {d}\\tau _{d-2}\\ldots \\mathrm {d}\\tau _1 \\\\& = \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _{d-2}}^{\\tau _{d-2}} \\Bigg ( (2\\tau _d)(2{\\tau _{d-1}}) - \\frac{4\\cos \\Big (\\sum _{i=1}^{d-2} \\tau _i x_i\\Big )\\sin (\\tau _d x_d )\\sin (\\tau _{d-1}x_{d-1} )}{x_d x_{d-1}} \\Bigg )\\mathrm {d}\\tau _{d-2}\\ldots \\mathrm {d}\\tau _1 \\\\& = \\ldots \\\\& = \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _{d-j}}^{\\tau _{d-j}} \\Bigg (\\prod _{i=1}^j(2\\tau _{d - i + 1}) - \\frac{\\cos \\Big (\\sum _{i=1}^{d-j} \\tau _i x_i\\Big )\\prod _{i=1}^j2\\sin (\\tau _{d - i + 1} x_{d - i + 1} )}{\\prod _{i=1}^jx_{d - i + 1}} \\Bigg )\\mathrm {d}\\tau _{d-j}\\ldots \\mathrm {d}\\tau _1 \\\\& \\ldots \\\\& = \\prod _{i=1}^{d}(2\\tau _{d - i + 1}) - \\frac{\\prod _{i=1}^{d}2\\sin (\\tau _{d - i + 1} x_{d - i + 1} )}{\\prod _{i=1}^{d}x_{d - i + 1} } \\\\& = \\prod _{i=1}^{d}(2\\tau _{i}) - \\frac{\\prod _{i=1}^{d}2\\sin (\\tau _{i} x_{i} )}{\\prod _{i=1}^{d}x_{i} } .\\end{split}$ Now, returning to the RHS of Eq.", "(REF ), linearity of the integral implies $\\begin{split}& \\Bigg ( \\prod _{i=1}^d \\frac{1}{(2\\tau _i)} \\Bigg ) \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _d}^{\\tau _d} \\Big \\lbrace 2 \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(A_i - B_i) \\Big ) \\\\& \\hspace{70.0pt} - \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(A_i - A_i^{\\prime }) \\Big ) - \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(B_i - B_i^{\\prime }) \\Big ) \\Big \\rbrace \\mathrm {d}\\tau _d\\ldots \\mathrm {d}\\tau _1 \\\\& = \\Bigg ( \\prod _{i=1}^d \\frac{1}{(2\\tau _i)} \\Bigg ) \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _d}^{\\tau _d} \\Big \\lbrace 2 \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(A_i - B_i) \\Big ) \\rbrace \\mathrm {d}\\tau _d\\ldots \\mathrm {d}\\tau _1 \\\\& \\hspace{30.0pt} - \\Bigg ( \\prod _{i=1}^d \\frac{1}{(2\\tau _i)} \\Bigg ) \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _d}^{\\tau _d} \\Big \\lbrace \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(A_i - A_i^{\\prime }) \\Big ) \\rbrace \\mathrm {d}\\tau _d\\ldots \\mathrm {d}\\tau _1 \\\\& \\hspace{30.0pt} - \\Bigg ( \\prod _{i=1}^d \\frac{1}{(2\\tau _i)} \\Bigg ) \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _d}^{\\tau _d} \\Big \\lbrace \\Big ( 1 - \\cos \\sum _{i=1}^d \\tau _i(B_i - B_i^{\\prime }) \\Big ) \\Big \\rbrace \\mathrm {d}\\tau _d\\ldots \\mathrm {d}\\tau _1.\\end{split}$ Thus, we can apply the solution in Eq.", "(REF ) to solve the integral in Eq.", "(REF ).", "Taking $x_i = (A_i - B_i)$ in Eq.", "(REF ), we consider the first integral of Eq.", "(REF ) above along with its multiplicative constant: $\\begin{split}& \\Bigg ( \\prod _{i=1}^d \\frac{1}{(2\\tau _i)} \\Bigg ) \\int _{-\\tau _1}^{\\tau _1} \\ldots \\int _{-\\tau _d}^{\\tau _d} ( 1 - \\cos \\sum _{i=1}^d \\tau _i(A_i - B_i) \\Big ) \\\\& = \\Bigg ( \\prod _{i=1}^d \\frac{1}{(2\\tau _i)} \\Bigg ) \\Bigg (\\prod _{i=1}^{d}(2\\tau _{i}) - \\frac{\\prod _{i=1}^{d}2\\sin \\Big \\lbrace \\tau _{i}(A_{i} - B_i) \\Big \\rbrace }{\\prod _{i=1}^{d} (A_{i} - B_i) } \\Bigg ) \\\\& = 1 - \\prod _{i=1}^{d} \\frac{\\sin \\Big \\lbrace \\tau _{i}(A_{i} - B_i) \\Big \\rbrace }{\\tau _i (A_{i} - B_i) } = 1 - \\prod _{i=1}^d \\chi (A_i, B_i, \\tau _i)\\end{split}$ where $\\chi $ is defined in the proof of Eq.", "(REF ) (Lemma REF ).", "Taking $x_i = (A_i - A_i^{\\prime })$ and $x_i = (B_i - B_i^{\\prime })$ and proceeding as above allows us to 
resolve the entire integral.", "In particular, we have $\\begin{split}& \\lim _{\\tau _1 \\rightarrow \\infty } \\lim _{\\tau _2 \\rightarrow \\infty } \\ldots \\lim _{\\tau _d \\rightarrow \\infty } \\Bigg ( \\prod _{i=1}^d 1 / (2\\tau _i) \\Bigg ) \\int _{B(\\tau )} \\ \\vert \\hat{p}_\\alpha (t) - \\hat{p}_\\beta (t) \\vert ^2\\lambda (\\mathrm {d}t) \\\\& = \\lim _{\\tau _i} \\mathbf {E} \\Bigg [ 2 \\Big (1 - \\prod _{i=1}^d \\chi (A_i, B_i, \\tau _i) \\Big ) - \\Big (1 - \\prod _{i=1}^d \\chi (A_i, A_i^{\\prime }, \\tau _i) \\Big ) - \\Big (1 - \\prod _{i=1}^d \\chi (B_i, B_i^{\\prime }, \\tau _i) \\Big ) \\Bigg ] \\\\& = \\mathbf {E} \\Bigg [ \\lim _{\\tau _i} \\Bigg \\lbrace 2 \\Big (1 - \\prod _{i=1}^d \\chi (A_i, B_i, \\tau _i) \\Big ) - \\Big (1 - \\prod _{i=1}^d \\chi (A_i, A_i^{\\prime }, \\tau _i) \\Big ) - \\Big (1 - \\prod _{i=1}^d \\chi (B_i, B_i^{\\prime }, \\tau _i) \\Big ) \\Bigg \\rbrace \\Bigg ] \\\\& = \\mathbf {E}\\big [2 \\times 1[A \\ne B] - 1[A \\ne A^{\\prime }] - 1[B \\ne B^{\\prime }] \\big ].\\end{split}$ Here, the second equality follows from the dominated convergence theorem (the integrand is bounded by a constant since $\\vert \\chi \\vert \\le 1$ ) and $1[\\mathrm {arg}]$ is defined as in the proof of Eq.", "(REF ) (Lemma REF )." ], [ "Moving from Real-Valued Discrete Variables to Any Discrete Variables", "Lemma B.3 Let $\\tilde{A}$ and $\\tilde{B}$ be any independent, discrete random variables over a countable set $\\Omega $ (i.e., not necessarily contained in $\\mathbb {R}^d$ ).", "Then, $\\sum _{x \\in \\Omega } \\vert \\tilde{p}_\\alpha (x) - \\tilde{p}_\\beta (x) \\vert ^2 = \\varepsilon _{01}(\\tilde{A}, \\tilde{B}),$ where $\\tilde{p}_\\alpha $ and $\\tilde{p}_\\beta $ are the mass functions of $\\tilde{A}$ and $\\tilde{B}$ , respectively.", "Let $\\Pi \\subset \\mathbb {R}^d$ with $\\vert \\Pi \\vert = \\vert \\Omega \\vert $ .", "Note, $\\Pi $ exists because $\\Omega $ is countable and $\\mathbb {R}^d$ is not.", "Next, let $f : \\Omega \\rightarrow \\Pi $ be any bijective map.", "Then, supposing $p_\\alpha $ and $p_\\beta $ are the mass functions of $f(\\tilde{A})$ and $f(\\tilde{B})$ respectively, by definition of the pushforward measure, for any $y \\in \\Pi $ such that $y = f(x)$ for $x \\in \\Omega $ , $p_\\alpha (y) = \\tilde{p}_\\alpha (\\lbrace a \\in \\Omega \\mid f(a) = y\\rbrace ) = \\tilde{p}_\\alpha (x).$ Notice, bijectivity of $f$ ensures the last step, because each $y \\in \\Pi $ has a unique inverse $x \\in \\Omega $ .", "From bijectivity of $f$ , we also have injectivity, which implies $1[a\\ne b] = 1[f(a) \\ne f(b)]$ for all $a,b \\in \\Omega $ .", "By simple substitution, the previous two facts tell us $\\begin{split}& 2\\sum _{a,b \\in \\Omega } 1[a \\ne b] \\tilde{p}_\\alpha (a)\\tilde{p}_\\beta (b) - \\sum _{a,a^{\\prime } \\in \\Omega } 1[a \\ne a^{\\prime }] \\tilde{p}_\\alpha (a)\\tilde{p}_\\alpha (a^{\\prime }) - \\sum _{b,b^{\\prime } \\in \\Omega } 1[b \\ne b^{\\prime }] \\tilde{p}_\\beta (b)\\tilde{p}_\\beta (b^{\\prime }) \\\\& = 2\\sum _{a,b \\in \\Omega } 1[f(a) \\ne f(b)] p_\\alpha (f(a))p_\\beta (f(b)) - \\sum _{a,a^{\\prime } \\in \\Omega } 1[f(a) \\ne f(a^{\\prime })] p_\\alpha (f(a))p_\\alpha (f(a^{\\prime })) \\\\&\\hspace{30.0pt} - \\sum _{b,b^{\\prime } \\in \\Omega } 1[f(b) \\ne f(b^{\\prime })] p_\\beta (f(b))p_\\beta (f(b^{\\prime }))\\end{split}$ Since $f$ is surjective too (i.e., along with injective), summation of any function $g(f(a), f(b))$ over $a,b \\in \\Omega $ and summation of $g(c,d)$ over $c,d \\in \\Pi $ are equivalent.", "In particular, because $f$ is surjective, we 
know all pairs $(c,d) \\in \\Pi ^2$ have some pair $(a,b) \\in \\Omega ^2$ for which $(f(a),f(b)) = (c,d)$ ; i.e., we do not “miss” a term in this sum.", "Because $f$ is injective, we know all pairs $(c,d) \\in \\Pi ^2$ have only one pair $(a,b) \\in \\Omega ^2$ for which $(f(a),f(b)) = (c,d)$ ; i.e., we do not “repeat” a term in this sum.", "So, we can continue as follows: $\\begin{split}& 2\\sum _{a,b \\in \\Omega } 1[f(a) \\ne f(b)] p_\\alpha (f(a))p_\\beta (f(b)) - \\sum _{a,a^{\\prime } \\in \\Omega } 1[f(a) \\ne f(a^{\\prime })] p_\\alpha (f(a))p_\\alpha (f(a^{\\prime })) \\\\&\\hspace{30.0pt} - \\sum _{b,b^{\\prime } \\in \\Omega } 1[f(b) \\ne f(b^{\\prime })] p_\\beta (f(b))p_\\beta (f(b^{\\prime })) \\\\& = 2\\sum _{c,d \\in \\Pi } 1[c \\ne d] p_\\alpha (c)p_\\beta (d) - \\sum _{c,c^{\\prime } \\in \\Pi } 1[c \\ne c^{\\prime }] p_\\alpha (c)p_\\alpha (c^{\\prime }) - \\sum _{d,d^{\\prime } \\in \\Pi } 1[d \\ne d^{\\prime }] p_\\beta (d)p_\\beta (d^{\\prime }).\\end{split}$ In other words, the previous two equations tell us $\\varepsilon _{01}(\\tilde{A}, \\tilde{B}) = \\varepsilon _{01}(f(\\tilde{A}), f(\\tilde{B}))$ .", "Applying equivalence of the mass functions, then Lemmas REF and REF , then equivalence of the energies: $\\sum _{x \\in \\Omega } \\vert \\tilde{p}_\\alpha (x) - \\tilde{p}_\\beta (x) \\vert ^2 = \\sum _{y \\in \\Pi } \\vert p_\\alpha (y) - p_\\beta (y) \\vert ^2 = \\varepsilon _{01}(f(\\tilde{A}), f(\\tilde{B})) = \\varepsilon _{01}(\\tilde{A}, \\tilde{B}).$ Note, this uses the fact that functions of independent random variables are also independent." ], [ "The Main Bound", "Theorem B.1 Let $A$ and $B$ be any independent random variables over any space $\\mathcal {X}$ and let $S$ , $S^{\\prime }$ be random variables over $[0,1]$ .", "Let $U$ be a random variable, independent from $A$ and $B$ , over any set $\\mathcal {U}$ .", "Suppose $c : \\mathcal {X} \\rightarrow \\Omega $ is a coarsening function (so, $\\Omega \\subset \\mathcal {X}$ ) and let $f \\in [0,1]^{\\mathcal {X} \\times \\mathcal {U}}$ .", "Then, $\\mathbf {E}[\\vert S - f(A, U)\\vert ] \\le \\gamma + \\varphi + \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] + \\sqrt{\\varepsilon _c(A, B) \\times \\delta }$ where $\\begin{split}\\gamma & = \\mathbf {E}[\\vert f(c(B), U) - f(B, U)\\vert ] + \\mathbf {E}[\\vert f(c(A), U) - f(A, U)\\vert ], \\\\g & \\in \\operatornamewithlimits{arg\\,min}_{h \\in [0,1]^{\\mathcal {X} \\times \\mathcal {U}} }\\mathbf {E}[\\vert S - h(c(A), U) \\vert ] + \\mathbf {E}[\\vert h(c(B), U) - S^{\\prime } \\vert ], \\\\\\varphi & = \\mathbf {E}[\\vert S - g(c(A), U) \\vert ] + \\mathbf {E}[\\vert g(c(B), U) - S^{\\prime } \\vert ], \\\\\\delta & = \\Bigg ( \\mathbf {E} \\Bigg [ \\Bigg ( \\sum _{x \\in \\Omega } \\vert g(x, U) - f(x, U) \\vert ^2 \\Bigg )^{1/2} \\Bigg ] \\Bigg )^2\\end{split}$ For any $g \\in {[0,1]}^{\\mathcal {X} \\times \\mathcal {U}}$ , by way of the triangle inequality and monotonicity of the expectation, $\\begin{split}& \\mathbf {E}[\\vert S - f(A, U)\\vert ] = \\mathbf {E}[\\vert S - f(A, U)\\vert ] + \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] - \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] \\\\& = \\mathbf {E}[\\vert S - g(c(A), U) + g(c(A), U) - f(A, U)\\vert ] + \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] - \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] \\\\& \\le \\mathbf {E}[\\vert S - g(c(A), U) \\vert ] + \\mathbf {E}[\\vert g(c(A), U) - f(A, U)\\vert ] + \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] \\\\& \\hspace{30.0pt}- \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] \\\\& \\le \\mathbf 
{E}[\\vert S - g(c(A), U) \\vert ] + \\mathbf {E}[\\vert g(c(A), U) - f(A, U)\\vert ] + \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] \\\\& \\hspace{30.0pt} - \\mathbf {E}[\\vert g(c(B), U) - f(B, U)\\vert ] + \\mathbf {E}[\\vert g(c(B), U) - S^{\\prime } \\vert ] \\\\& \\le \\mathbf {E}[\\vert S - g(c(A), U) \\vert ] + \\mathbf {E}[\\vert g(c(A), U) - f(c(A), U) \\vert ] + \\mathbf {E}[\\vert f(c(A), U) - f(A, U)\\vert ] \\\\& \\hspace{30.0pt} + \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] - \\mathbf {E}[\\vert g(c(B), U) - f(B, U)\\vert ] + \\mathbf {E}[\\vert g(c(B), U) - S^{\\prime } \\vert ] \\\\& \\le \\mathbf {E}[\\vert S - g(c(A), U) \\vert ] + \\mathbf {E}[\\vert g(c(A), U) - f(c(A), U)\\vert ] + \\mathbf {E}[\\vert f(c(A), U) - f(A, U)\\vert ] \\\\& \\hspace{30.0pt}+ \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] - \\mathbf {E}[\\vert g(c(B, U) - f(c(B), U) \\vert ] \\\\& \\hspace{30.0pt} + \\mathbf {E}[\\vert f(c(B), U) - f(B, U)\\vert ] + \\mathbf {E}[\\vert g(c(B), U) - S^{\\prime } \\vert ].", "\\\\\\end{split}$ Set $\\tilde{B} = c(B)$ , $\\tilde{A} = c(A)$ and set $\\begin{split}\\gamma & = \\mathbf {E}[\\vert f(\\tilde{B}, U) - f(B, U)\\vert ] + \\mathbf {E}[\\vert f(\\tilde{A}, U) - f(A, U)\\vert ], \\\\g & \\in \\operatornamewithlimits{arg\\,min}_{h \\in [0,1]^{\\mathcal {X}\\times \\mathcal {U}} }\\mathbf {E}[\\vert S - h(\\tilde{A}, U) \\vert ] + \\mathbf {E}[\\vert h(\\tilde{B}, U) - S^{\\prime } \\vert ], \\\\\\varphi & = \\mathbf {E}[\\vert S - g(\\tilde{A}, U) \\vert ] + \\mathbf {E}[\\vert g(\\tilde{B}, U) - S^{\\prime } \\vert ].\\end{split}$ Then, Eq.", "(REF ) implies $\\mathbf {E}[\\vert S - f(A, U)\\vert ] \\le \\gamma + \\varphi + \\mathbf {E}[\\vert S^{\\prime } - f(B, U)\\vert ] + \\mathbf {E}[\\vert g(\\tilde{A}, U) - f(\\tilde{A}, U)\\vert ] - \\mathbf {E}[\\vert g(\\tilde{B}, U) - f(\\tilde{B}, U)\\vert ].$ Now, suppose $\\tilde{p}_\\alpha $ and $\\tilde{p}_\\beta $ are probability mass functions for $\\tilde{A}$ and $\\tilde{B}$ , respectively.", "Then, using basic properties of the expectation along with other noted facts, $\\begin{split}& \\mathbf {E}[\\vert g(\\tilde{A}, U) - f(\\tilde{A}, U)\\vert ] - \\mathbf {E}[\\vert g(\\tilde{B}, U) - f(\\tilde{B}, U)\\vert ] \\\\& = \\mathbf {E}\\Big [ \\sum _{a \\in \\Omega } \\vert g(a, U) - f(a, U)\\vert \\tilde{p}_\\alpha (a) - \\sum _{b \\in \\Omega } \\vert g(b, U) - f(b, U)\\vert \\tilde{p}_\\beta (b) \\Big ] \\quad \\text{(Fubini)}\\\\& = \\mathbf {E}\\Big [ \\sum _{x \\in \\Omega } \\vert g(x, U) - f(x, U)\\vert (\\tilde{p}_\\alpha (x) - \\tilde{p}_\\beta (x)) \\Big ] \\le \\mathbf {E}\\Big [ \\sum _{x \\in \\Omega } \\vert g(x, U) - f(x, U)\\vert \\vert \\tilde{p}_\\alpha (x) - \\tilde{p}_\\beta (x) \\vert \\Big ]\\\\& \\le \\mathbf {E} \\Bigg [\\Bigg ( \\sum _{x \\in \\Omega } \\vert g(x, U) - f(x, U) \\vert ^2 \\Bigg )^{1/2} \\Bigg ( \\sum _{x\\in \\Omega } \\vert \\tilde{p}_\\alpha (x) - \\tilde{p}_\\beta (x) \\vert ^2 \\Bigg )^{1/2} \\ \\Bigg ]\\qquad \\text{(Cauchy-Schwarz)} \\\\& \\le \\sqrt{\\varepsilon _{01}(\\tilde{A}, \\tilde{B})} \\times \\mathbf {E} \\Bigg [ \\Bigg ( \\sum _{x \\in \\Omega } \\vert g(x, U) - f(x, U) \\vert ^2 \\Bigg )^{1/2} \\ \\Bigg ] \\qquad \\text{(Lemma~\\ref {lem:bijective_for_all})}\\end{split}$ In the last step, we may apply Lemma REF because $\\tilde{A}$ and $\\tilde{B}$ are still independent (i.e., they are functions of independent random variables) and are now discrete too.", "Defining $\\delta $ appropriately yields the result." ], [ "Thm. 
", "Thm.", "REF is simply a specification of Thm.", "REF above.", "In fact, it is better stated as a corollary of Thm.", "REF .", "We set $\\mathcal {X} = \\mathcal {D}$ , leave $\\mathcal {U}$ and its variable $U$ unchanged, and set $S = S^{\\prime } = h_\\ell (D, U)$ .", "Then, $A = \\tilde{D}_1$ and $B = \\tilde{D}_2$ .", "Taking $f = h_\\ell $ yields the result." ], [ "Classification and Regression", "In adaptation for classification and regression, we consider a source distribution $\\mathbb {S}$ governing random variables $(X_S, Y_S)$ and a target distribution $\\mathbb {T}$ governing random variables $(X_T, Y_T)$ .", "In general, the goal is to predict $Y_\\square $ from $X_\\square $ .", "We can set $S = Y_T$ and $S^{\\prime } = Y_S$ .", "We may also set $A = X_T$ and $B = X_S$ .", "Then, we learn $f$ from a pre-specified hypothesis class $\\mathcal {H} \\subseteq [0,1]^{\\mathcal {X} \\times \\mathcal {U}}$ .", "Typically, $U$ is ignored in these settings, but it seems possible to employ this term to model stochastic (Gibbs) predictors; i.e., in PAC-Bayesian Frameworks [9], [27].", "Notice, for regression, our framework only considers a normalized response variable and the mean absolute error." ], [ "Sample Complexity", "As alluded in Section , a key shortcoming of our framework compared to existing frameworks is the absence of any terms measuring sample-complexity.", "That is, we do not explicitly quantify the difference between our empirical observation of the energy and the true energy (i.e., the population version of the statistic) using the number of samples in our observation.", "This is a big part of computational learning theory, as the act of choosing a function $f$ using data – or, in dialogue contexts, choosing the parameter $\\theta $ using data – can have significant impact on the difference between our observations of a statistical processes and reality.", "In fact, this impact is the basis of overfitting and, besides computational efficiency, is the main pillar of study in traditional PAC learningProbably Approximately Correct learning [33], [24].", "In more recent studies of domain adaptation, like our work, the population-only bound can be just as important for purpose of understanding and interpretation.", "Furthermore, if we only care about the empirical samples in-hand, these population-only bounds are directly applicable,The empirical sample becomes the whole population about which we are concerned.", "which partly explains the empirical effectiveness of our theory in Section .", "Nonetheless, the role of sample-complexity can be very informative and useful in practice [19] and would be important for model-selection applications as described at the end of Appendix .", "We leave investigation of sample-complexity as future work.", "As we are aware, there is currently no appropriate description of sample-complexity for dialogue generation contexts." ] ]
2210.07777
[ [ "A few fixed point theorems in linear n-normed space" ], [ "Abstract Banach's fixed point theorem in linear n-normed space is being developed.", "Also, we present several theorems on fixed points in linear n-normed space." ], [ "Introduction", "     The Banach fixed point theorem concerns certain mappings of a complete metric space into itself.", "It states sufficient conditions for the existence and uniqueness of a fixed point.", "Also, the theorem gives an iterative process by which we can obtain approximations to the fixed point and error bounds.", "This theorem has important applications to finding the unique solution of linear algebraic equations, differential equations, integral equations and as well as to implicit function theorem.", "The idea of linear 2-normed space was first introduced by S. Gahler [4] and thereafter the geometric structure of linear 2-normed spaces was developed by A.", "White, Y. J. Cho, R. W. Freese [8], [9].In recent times, some important results in classical normed spaces have been proved into 2-norm setting by many researchers.", "The concept of  2-Banach space is briefly discussed in [8].H.", "Gunawan and Mashadi [5] developed the generalization of a linear 2-normed space for  $n \\,\\ge \\, 2$ .", "Some fundamental results of classical normed space with respect to  $b$ -linear functional in linear$n$ -normed space have been studied by P. Ghosh and T. K. Samanta [1].", "Also they have studied the reflexivity of linear  $n$ -normed space with respect to  $b$ -linear functional in [2] and slow convergence of sequence of  $b$ -linear functionals in linear  $n$ -normed space in [3].", "In this paper, we first define the contraction mapping in linear  $n$ -normed space and then present the Banach fixed point theorem in linear  $n$ -normed space.", "Next, we discuss some further theorems on fixed points of operators in linear  $n$ -normed space." 
], [ "Preliminaries", "     In this section, we give some necessary definitions.", "Definition 2.1 [5] Let  $X$   be a linear space over the field  $ \\mathbb {K}$ , where  $ \\mathbb {K} $   is the real or complex numbers field with  $\\text{dim}\\,X \\,\\ge \\, n$ , where  $n$   is a positive integer.A real valued function  $\\left\\Vert \\,\\cdot \\,,\\, \\cdots \\,,\\, \\cdot \\,\\right\\Vert \\,:\\, X^{\\,n} \\,\\rightarrow \\, \\mathbb {R}$   is called an n-norm on  $X$   if (N1) $\\left\\Vert \\,x_{\\,1} \\,,\\, x_{\\,2} \\,,\\, \\cdots \\,,\\, x_{\\,n}\\,\\right\\Vert \\,=\\,0$   if and only if  $x_{\\,1},\\, \\cdots ,\\, x_{\\,n}$   are linearly dependent, (N2) $\\left\\Vert \\,x_{\\,1} \\,,\\, x_{\\,2} \\,,\\, \\cdots \\,,\\, x_{\\,n}\\,\\right\\Vert $ is invariant under permutations of  $x_{\\,1},\\, x_{\\,2},\\, \\cdots ,\\, x_{\\,n}$ , (N3) $\\left\\Vert \\,\\alpha \\,x_{\\,1} \\,,\\, x_{\\,2} \\,,\\, \\cdots \\,,\\, x_{\\,n}\\,\\right\\Vert \\,=\\, |\\,\\alpha \\,|\\, \\left\\Vert \\,x_{\\,1} \\,,\\, x_{\\,2} \\,,\\, \\cdots \\,,\\, x_{\\,n}\\,\\right\\Vert \\; \\;\\;\\forall \\;\\; \\alpha \\,\\in \\, \\mathbb {K}$ , (N4) $\\left\\Vert \\,x \\,+\\, y \\,,\\, x_{\\,2} \\,,\\, \\cdots \\,,\\, x_{\\,n}\\,\\right\\Vert \\,\\le \\, \\left\\Vert \\,x \\,,\\, x_{\\,2} \\,,\\, \\cdots \\,,\\, x_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,y \\,,\\, x_{\\,2} \\,,\\, \\cdots \\,,\\, x_{\\,n}\\,\\right\\Vert $ hold for all  $x,\\, y,\\, x_{\\,1},\\, x_{\\,2},\\, \\cdots ,\\, x_{\\,n} \\,\\in \\, X$ .The pair  $\\left(\\,X,\\, \\left\\Vert \\,\\cdot ,\\, \\cdots ,\\, \\cdot \\,\\right\\Vert \\,\\right)$ is then called a linear n-normed space.", "Throughout this paper,  $X$   will denote linear$n$ -normed space over the field  $\\mathbb {K}$   of complex or real numbers, associated with the $n$ -norm  $\\Vert \\,\\cdot \\,,\\, \\cdots \\,,\\, \\cdot \\,\\Vert $ .", "Definition 2.2 [5] A sequence  $\\lbrace \\,x_{\\,k}\\,\\rbrace \\,\\subseteq \\, X$   is said to converge to  $x \\,\\in \\, X$ if $\\lim \\limits _{k \\rightarrow \\infty }\\,\\left\\Vert \\,x_{\\,k} \\,-\\, x \\,,\\, e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n} \\,\\right\\Vert \\,=\\, 0$ for every  $ e_{\\,2},\\, \\cdots ,\\, e_{\\,n} \\,\\in \\, X$   and it is called a Cauchy sequence if $\\lim \\limits _{l \\,,\\, k \\rightarrow \\infty }\\,\\left\\Vert \\,x_{\\,l} \\,-\\, x_{\\,k} \\,,\\, e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\right\\Vert \\,=\\, 0$ for every  $ e_{\\,2},\\, \\cdots ,\\, e_{\\,n} \\,\\in \\, X$ .The space  $X$   is said to be complete or n-Banach space if every Cauchy sequence in this space is convergent in  $X$ .", "Definition 2.3 [7] We define the following open and closed ball in  $X$ : $B_{\\,\\lbrace \\,e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\rbrace }\\,(\\,a \\,,\\, \\delta \\,) \\,=\\, \\left\\lbrace \\,x \\,\\in \\, X \\,:\\, \\left\\Vert \\,x \\,-\\, a \\,,\\, e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\right\\Vert \\,<\\, \\delta \\,\\right\\rbrace \\;\\text{and}$ $B_{\\,\\lbrace \\,e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\rbrace }\\,[\\,a \\,,\\, \\delta \\,] \\,=\\, \\left\\lbrace \\,x \\,\\in \\, X \\,:\\, \\left\\Vert \\,x \\,-\\, a \\,,\\, e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\right\\Vert \\,\\le \\, \\delta \\,\\right\\rbrace ,\\hspace{14.22636pt}$ where  $a,\\, e_{\\,2},\\, \\cdots ,\\, e_{\\,n} \\,\\in \\, X$   and  $\\delta $   be a positive number.", "Definition 2.4 [7] A subset  $G$   of  $X$   is said to be open in  $X$   if for all  $a \\,\\in \\, G $ , 
there exist  $e_{\\,2},\\, \\cdots ,\\, e_{\\,n} \\,\\in \\, X $   and   $\\delta \\,>\\, 0 $ such that  $B_{\\,\\lbrace \\,e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\rbrace }\\,(\\,a \\,,\\, \\delta \\,) \\,\\subseteq \\, G$ .", "Definition 2.5 [7] Let  $ A \\,\\subseteq \\, X$ .Then the closure of  $A$   is defined as $\\overline{A} \\,=\\, \\left\\lbrace \\, x \\,\\in \\, X \\;|\\; \\,\\exists \\, \\;\\lbrace \\,x_{\\,k}\\,\\rbrace \\,\\in \\, A \\;\\;\\textit {with}\\; \\lim \\limits _{k \\,\\rightarrow \\, \\infty } x_{\\,k} \\,=\\, x \\,\\right\\rbrace .$ The set  $ A $   is said to be closed if $ A \\,=\\, \\overline{A}$ ." ], [ "Fixed point theorems in linear $n$ -normed space", "   Definition 3.1 A sequence  $\\lbrace \\,x_{\\,k}\\,\\rbrace \\,\\subseteq \\, X$   is said to be a b-Cauchy sequence if for every  $\\epsilon \\,>\\, 0$   there exists  $N \\,>\\, 0$   such that $\\text{for every \\,$k,\\, l \\,\\ge \\, N$},\\; \\left\\Vert \\,x_{\\,l} \\,-\\, x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,<\\, \\epsilon .$ The space  $X$   is said to be b-complete if every b-Cauchy sequence is convergent in the semi-normed space  $\\left(\\,X,\\, \\left\\Vert \\,\\cdot ,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right)$ .", "Definition 3.2 Let  $X$   be a linear  $n$ -normed space and  $T \\,:\\, X \\,\\rightarrow \\, X$   be an operator.", "Then the operator  $T$   is called b-bounded if there exists some positive constant  $M$   such that $\\left\\Vert \\,T\\,x,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, M\\,\\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x \\,\\in \\, X.$ Let  $T$   be a  $b$ -bounded linear operator on  $X$ .", "Then the norm of  $T$   is denoted by  $\\Vert \\,T\\,\\Vert $   and is defined as $&\\Vert \\,T\\,\\Vert \\,=\\, \\inf \\,\\left\\lbrace \\,M \\,:\\, \\left\\Vert \\,T\\,x,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, M\\,\\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x \\,\\in \\, X\\,\\right\\rbrace .$    Remark 3.3 If  $T$   be a  $b$ -bounded on  $X$ , norm of  $T$   can be expressed by any one of the following equivalent formula: (I)  $\\Vert \\,T\\,\\Vert \\,=\\, \\sup \\,\\left\\lbrace \\,\\left\\Vert \\,T\\,x,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\;:\\; \\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, 1\\,\\right\\rbrace $ .", "(II)  $\\Vert \\,T\\,\\Vert \\,=\\, \\sup \\,\\left\\lbrace \\,\\left\\Vert \\,T\\,x,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\;:\\; \\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,=\\, 1\\,\\right\\rbrace $ .", "(III)  $ \\Vert \\,T\\,\\Vert \\,=\\, \\sup \\,\\left\\lbrace \\,\\dfrac{\\left\\Vert \\,T\\,x,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert }{\\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert } \\;:\\; \\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\ne \\, 0\\,\\right\\rbrace $ .", "Also, we have $\\left\\Vert \\,T\\,x,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\Vert \\,T\\,\\Vert \\, \\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\, \\;\\forall \\; x \\,\\in \\, X.$ It is easy to see that  $X_{b}^{\\,\\ast }$ , collection of all  $b$ -bounded linear operators, forms a Banach space, on  $X$ .", "Definition 3.4 A linear operator  $T \\,:\\, X 
\\,\\rightarrow \\, X$   is said to be continuous at  $x_{\\,0} \\,\\in \\, X$   if for any open ball  $B_{\\,\\lbrace \\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\rbrace }\\,(\\,T\\,(\\,x_{\\,0}\\,),\\, \\epsilon \\,)$   with  $\\epsilon \\,>\\, 0$ , there exists a  $\\delta \\,>\\, 0$   such that $T\\,\\left(\\,B_{\\,\\lbrace \\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\rbrace }\\,(\\,x_{\\,0},\\, \\delta \\,)\\,\\right) \\,\\subset \\, B_{\\,\\lbrace \\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\rbrace }\\,(\\,T\\,(\\,x_{\\,0}\\,),\\, \\epsilon \\,).$ Equivalently, for given  $\\epsilon \\,>\\, 0$ , there exist some  $e_{\\,2},\\, \\cdots ,\\, e_{\\,n} \\,\\in \\, X$   and  $\\delta \\,>\\, 0$   such that for  $x \\,\\in \\, X$ $\\left\\Vert \\,x \\,-\\, x_{\\,0},\\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\right\\Vert \\,<\\, \\delta \\,\\Rightarrow \\, \\left\\Vert \\,T\\,(\\,x\\,) \\,-\\, T\\,(\\,x_{\\,0}\\,),\\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\right\\Vert \\,<\\, \\epsilon .$ For the particular case  $e_{\\,2} \\,=\\, b_{\\,2},\\, \\cdots ,\\, e_{\\,n} \\,=\\, b_{\\,n}$ , it is said to be b-continuous.", "Definition 3.5 A linear operator  $T \\,:\\, X \\,\\rightarrow \\, X$   is said to be b-sequentially continuous at  $x \\,\\in \\, X$   if for every sequence  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   b-converging to  $x$   in  $X$ , the sequence  $\\left\\lbrace \\,T\\,(\\,x_{\\,k}\\,)\\,\\right\\rbrace $   b-converges to  $T\\,(\\,x\\,)$   in  $X$ .", "Theorem 3.6 Let  $T \\,:\\, X \\,\\rightarrow \\, X$   be a linear operator, where  $X$   is a linear n-normed space.", "Then  $T$   is b-continuous at  $x \\,\\in \\, X$   if and only if  $T$   is b-bounded.", "First we suppose that  $T$   is  $b$ -continuous.", "If possible, suppose that  $T$   is not  $b$ -bounded.", "Then there exists a sequence  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   in  $X$   such that $\\left\\Vert \\,T\\,(\\,x_{\\,k}\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,>\\, k\\,\\left\\Vert \\,x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\text{for}\\; k \\,=\\, 1,\\,2,\\,\\cdots .$ Clearly,  $x_{\\,k} \\,\\ne \\, \\theta $ , for any  $k$ .", "Let  $x^{\\,\\prime }_{\\,k} \\,=\\, \\dfrac{x_{\\,k}}{\\,k\\,\\left\\Vert \\,x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert }$ .", "Then  $\\left\\Vert \\,x^{\\,\\prime }_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,=\\, \\dfrac{1}{k} \\,\\rightarrow \\, 0$   as  $k \\,\\rightarrow \\, \\infty $ , so  $\\left\\lbrace \\,x^{\\,\\prime }_{\\,k}\\,\\right\\rbrace $    $b$ -converges to  $\\theta $ .", "Since  $T$   is  $b$ -continuous at  $x \\,=\\, \\theta $ ,  $\\left\\lbrace \\,T\\,(\\,x^{\\,\\prime }_{\\,k}\\,)\\,\\right\\rbrace $    $b$ -converges to  $T\\,(\\,\\theta \\,) \\,=\\, \\theta $ , i.e.,  $\\left\\Vert \\,T\\,(\\,x^{\\,\\prime }_{\\,k}\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\rightarrow \\, 0$   as  $k \\,\\rightarrow \\, \\infty $ .", "But on the other hand $\\left\\Vert \\,T\\,(\\,x^{\\,\\prime }_{\\,k}\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,=\\, \\dfrac{1}{\\,k\\,\\left\\Vert \\,x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert }\\,\\left\\Vert \\,T\\,(\\,x_{\\,k}\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,>\\, 1,$ for  $k \\,=\\, 1,\\, 2,\\, \\cdots $ , which is a contradiction.", "Therefore  $T$   must be  $b$ -bounded.", "Conversely, suppose that  $T$   is  $b$ -bounded.", "So, there exists a constant  $M \\,>\\, 0$   such that $\\left\\Vert \\,T\\,x,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, M\\,\\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x \\,\\in \\, X.$ Let  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $    $b$ -converge to  $x$ , i.e.,  $\\left\\Vert \\,x_{\\,k} \\,-\\, x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\rightarrow \\, 0$   as  $k \\,\\rightarrow \\, \\infty $ .", "Then $\\left\\Vert \\,T\\,(\\,x_{\\,k}\\,) \\,-\\, T\\,(\\,x\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,=\\, \\left\\Vert \\,T\\,\\left(\\,x_{\\,k} \\,-\\, x\\,\\right),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert $ $\\le \\, M\\,\\left\\Vert \\,x_{\\,k} \\,-\\, x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\rightarrow \\, 0\\; \\;\\text{as \\,$k \\,\\rightarrow \\, \\infty $}.$ Therefore  $\\left\\lbrace \\,T\\,(\\,x_{\\,k}\\,)\\,\\right\\rbrace $    $b$ -converges to  $T\\,(\\,x\\,)$   and so  $T$   is  $b$ -continuous at  $x \\,\\in \\, X$ .", "Because  $x \\,\\in \\, X$   is arbitrary,  $T$   is  $b$ -continuous on  $X$ .", "This completes the proof.", "Theorem 3.7 Let  $X$   be a linear n-normed space and  $T$   be a linear operator on  $X$ .", "Then  $T$   is  $b$ -bounded if and only if  $T$   maps bounded sets in  $X$   into bounded sets in  $X$ .", "Suppose  $T$   is  $b$ -bounded and  $S$   is any bounded subset of  $X$ .", "Then there exists  $M_{\\,1} \\,>\\, 0$   such that $\\left\\Vert \\,T\\,x\\,,\\, b_{\\,2}\\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, M_{\\,1}\\, \\left\\Vert \\,x \\,,\\, b_{\\,2} \\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x \\,\\in \\, X$ and in particular, for all  $x \\,\\in \\, S$ .", "The set  $S$   being bounded, there is a real number  $M \\,>\\, 0$   with $\\left\\Vert \\,x \\,,\\, b_{\\,2} \\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, M\\; \\;\\forall \\; x \\,\\in \\, S,\\; \\;\\text{so that}\\; \\left\\Vert \\,T\\,x\\,,\\, b_{\\,2}\\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, M_{\\,1}\\,M\\; \\;\\forall \\; x \\,\\in \\, S.$ Hence the set  $\\left\\lbrace \\,T\\,x \\,:\\, x \\,\\in \\, S \\,\\right\\rbrace $   is bounded in  $X$ , and so  $T$   maps bounded sets in  $X$   into bounded sets in  $X$ .", "Conversely, for the closed unit ball $B_{\\,\\lbrace \\,e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\rbrace }\\,[\\,0 \\,,\\, 1\\,] \\,=\\, \\left\\lbrace \\,x \\,\\in \\, X \\,:\\, \\left\\Vert \\,x \\,,\\, e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\right\\Vert \\,\\le \\, 1\\,\\right\\rbrace \\;,$ the set  $\\left\\lbrace \\,T\\,x \\,:\\, x \\,\\in \\, B_{\\,\\lbrace \\,e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\rbrace }\\,[\\,0 \\,,\\, 1\\,]\\,\\right\\rbrace $   is a bounded set in  $X$ .", "Therefore, there exists  $K \\,>\\, 0$   such that $\\left\\Vert \\,T\\,x\\,,\\, b_{\\,2}\\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, K\\; \\;\\;\\forall \\; x \\,\\in \\, B_{\\,\\lbrace \\,e_{\\,2} \\,,\\, \\cdots \\,,\\, e_{\\,n}\\,\\rbrace }\\,[\\,0 \\,,\\, 1\\,].$ If  $x \\,=\\, 0$ , then  $T\\,x \\,=\\, 0$   and the assertion $\\left\\Vert \\,T\\,x\\,,\\, b_{\\,2}\\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, K\\, \\left\\Vert \\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\text{is obviously true.\\;If}\\; x \\,\\ne \\, 0, \\,\\text{then}$  $\\dfrac{x}{\\Vert \\,x,\\, e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\Vert } \\,\\in \\, B_{\\lbrace \\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\rbrace }\\,[\\,0 \\,,\\, 1\\,]$ , and for the particular choice  $e_{\\,2} \\,=\\, b_{\\,2},\\, \\cdots ,\\, e_{\\,n} \\,=\\, b_{\\,n}$ 
$&\\left\\Vert \\,T\\,\\left(\\, \\dfrac{x}{\\Vert \\,x \\,,\\, b_{\\,2} \\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\Vert }\\,\\right)\\,\\right\\Vert \\,\\le \\, K\\\\&\\Rightarrow \\, \\left\\Vert \\,T\\,x \\,,\\, b_{\\,2}\\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, K\\, \\left\\Vert \\,x \\,,\\, b_{\\,2} \\,,\\, \\cdots \\,,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x \\,\\in \\, X.$ Hence,  $T$   is a  $b$ -bounded linear operator.", "Definition 3.8 Let  $X$   be a linear  $n$ -normed space and  $T \\,:\\, X \\,\\rightarrow \\, X$   be an operator.", "The operator  $T$   is called b-contraction operator if $\\left\\Vert \\,T\\,x \\,-\\, T\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x,\\,y \\,\\in \\, X,$ where  $0 \\,<\\, \\alpha \\,<\\, 1$ .", "Theorem 3.9 Any b-contraction operator is b-continuous.", "Let  $T \\,:\\, X \\,\\rightarrow \\, X$   be a  $b$ -contraction operator.", "Choose  $\\epsilon \\,>\\, 0$   arbitrary and take  $0 \\,<\\, \\delta \\,<\\, \\dfrac{\\epsilon }{\\alpha }$ .", "Then for  $x,\\, y \\,\\in \\, X$ , we have $&\\left\\Vert \\,T\\,x \\,-\\, T\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\& \\,<\\, \\alpha \\,\\delta \\;\\; \\;\\text{if}\\;\\;\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,<\\, \\delta .$ So,  $\\left\\Vert \\,T\\,x \\,-\\, T\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,<\\, \\epsilon $   whenever  $\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,<\\, \\delta ,\\; x,\\,y \\,\\in \\, X$ .", "Therefore,  $T$   is  $b$ -continuous.", "Theorem 3.10 Let  $X$   be a  $b$ -complete linear  $n$ -normed space and  $T \\,:\\, X \\,\\rightarrow \\, X$   be an  $b$ -contraction mapping.", "Then there exists a unique fixed point  $x_{\\,0}$   of the operator  $T$   in  $X$   provided the set  $\\left\\lbrace \\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\rbrace $   is linearly independent.", "Let  $x \\,\\in \\, X$   be an arbitrary element.Starting from  $x$ , we form the iterations $x_{\\,1} \\,=\\, T\\,x,\\, x_{\\,2} \\,=\\, T\\,x_{\\,1},\\, x_{\\,3} \\,=\\, T\\,x_{\\,2},\\, \\cdots ,\\, ,\\, x_{\\,k} \\,=\\, T\\,x_{\\,k \\,-\\, 1},\\, \\cdots .", "$ We verify that  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   is a  $b$ -Cauchy sequence.", "We have $\\left\\Vert \\,x_{\\,1} \\,-\\, x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert &\\,=\\, \\left\\Vert \\,T\\,x \\,-\\, T\\,x_{\\,1},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\alpha \\,\\left\\Vert \\,x \\,-\\, x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\& =\\, \\alpha \\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ $\\left\\Vert \\,x_{\\,2} \\,-\\, x_{\\,3},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert & \\,=\\, \\left\\Vert \\,T\\,x_{\\,1} \\,-\\, T\\,x_{\\,2},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\alpha \\,\\left\\Vert \\,x_{\\,1} \\,-\\, x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&=\\, \\alpha ^{\\,2}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ $\\left\\Vert \\,x_{\\,3} \\,-\\, x_{\\,4},\\, 
b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert &\\,=\\, \\left\\Vert \\,T\\,x_{\\,2} \\,-\\, T\\,x_{\\,3},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\alpha \\,\\left\\Vert \\,x_{\\,2} \\,-\\, x_{\\,3},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&=\\, \\alpha ^{\\,3}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ In general, for any positive integer  $k$ , $\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,k \\,+\\, 1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha ^{\\,k}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ Also, for any positive integer  $p$ , we have $&\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,k \\,+\\, p},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\nonumber \\\\&\\le \\,\\left\\Vert \\,x_{\\,k} - x_{\\,k \\,+\\, 1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,k \\,+\\, 1} - x_{\\,k \\,+\\, 2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\,\\cdots \\nonumber \\\\&\\hspace{56.9055pt}\\cdots \\,+\\,\\left\\Vert \\,x_{\\,k\\,+\\,p\\,-1} - x_{\\,k \\,+\\, p},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\nonumber \\\\&\\le \\, \\alpha ^{\\,k}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\alpha ^{\\,k\\,+\\,1}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\,\\cdots \\nonumber \\\\&\\hspace{56.9055pt}\\cdots \\,+\\,\\alpha ^{\\,k\\,+\\,p\\,-\\,1}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\nonumber \\\\&=\\, \\left(\\,\\alpha ^{\\,k} \\,+\\, \\alpha ^{\\,k\\,+\\,1} \\,+\\, \\cdots \\,+\\, \\alpha ^{\\,k\\,+\\,p\\,-\\,1}\\,\\right)\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\nonumber \\\\&=\\,\\dfrac{\\alpha ^{\\,k} \\,-\\, \\alpha ^{\\,k\\,+\\,p}}{1 \\,-\\, \\alpha }\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&<\\,\\dfrac{\\alpha ^{\\,k}}{1 \\,-\\, \\alpha }\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert ,\\;[\\;\\text{since}\\; \\,0 \\,<\\, \\alpha \\,<\\, 1\\;].$ Since  $\\alpha \\,<\\, 1$ , the relation () shows that  $\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,k \\,+\\, p},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\rightarrow \\, 0$   as  $k \\,\\rightarrow \\, \\infty $ .Therefore the sequence  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   is a  $b$ -Cauchy sequence.", "Since  $X$   is  $b$ -complete linear  $n$ -normed space, the sequence  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   is convergent in the semi-normed space  $\\left(\\,X,\\, \\left\\Vert \\,\\cdot ,\\,b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right)$ .So, let  $\\lim \\limits _{k \\,\\rightarrow \\, \\infty }\\,x_{\\,k} \\,=\\, x_{\\,0}$   with the property that the set  $\\left\\lbrace \\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\rbrace $   is linearly independent.", "We now show that  $x_{\\,0}$   is a fixed point of the operator  $T$ , i .", "e.,  $T\\,x_{\\,0} \\,=\\, x_{\\,0}$ .We have $&\\left\\Vert \\,x_{\\,0} \\,-\\, T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,k} \\,-\\, 
T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&=\\, \\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,T\\,x_{\\,k\\,-\\,1} \\,-\\, T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&\\le \\, \\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\alpha \\,\\left\\Vert \\,x_{\\,k\\,-\\,1} \\,-\\, x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; [\\;\\text{by (\\ref {eq1})}\\;]\\\\&\\rightarrow \\, 0\\; \\;\\text{as}\\; \\,k \\,\\rightarrow \\, \\infty \\; \\;\\left[\\;\\text{since}\\;\\lim \\limits _{k \\,\\rightarrow \\, \\infty }\\,x_{\\,k} \\,=\\, x_{\\,0}\\,\\right].$ So,  $T\\,x_{\\,0} \\,=\\, x_{\\,0}$ .", "Therefore  $x_{\\,0}$   is a fixed point of  $T$ .", "We now verify that there exists only one fixed point of  $T$ .", "Let  $y_{\\,0} \\,\\in \\, X$ , with  $\\left\\lbrace \\,y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\rbrace $   linearly independent, be such that  $T\\,y_{\\,0} \\,=\\, y_{\\,0}$ .", "Then using (REF ), we have $\\left\\Vert \\,x_{\\,0} \\,-\\, y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert &\\,=\\, \\left\\Vert \\,T\\,x_{\\,0} \\,-\\, T\\,y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\alpha \\,\\left\\Vert \\,x_{\\,0} \\,-\\, y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ If  $\\left\\Vert \\,x_{\\,0} \\,-\\, y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,>\\, 0$   then from the above inequality, we obtain  $\\alpha \\,\\ge \\, 1$ , which is a contradiction.", "Hence,  $\\left\\Vert \\,x_{\\,0} \\,-\\, y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,=\\, 0$ , i.e.,  $x_{\\,0} \\,=\\, y_{\\,0}$ , and so  $T$   has a unique fixed point in  $X$ .", "This proves the theorem.", "Note 3.11 We consider the inequality (REF ) $\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,k \\,+\\, p},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\dfrac{\\alpha ^{\\,k} \\,-\\, \\alpha ^{\\,k\\,+\\,p}}{1 \\,-\\, \\alpha }\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ As  $p \\,\\rightarrow \\, \\infty $ , since  $\\alpha \\,<\\, 1$ , the right hand side tends to  $\\dfrac{\\alpha ^{\\,k}}{1 \\,-\\, \\alpha }\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert $   and the left hand side tends to  $\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert $   because  $x_{\\,k \\,+\\, p} \\,\\rightarrow \\, x_{\\,0}$ .", "So, $\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\dfrac{\\alpha ^{\\,k}}{1 \\,-\\, \\alpha }\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ The relation (REF ) gives an estimate of the error of the  $k\\text{th}$   approximation.", "Sometimes it may happen that although the condition (REF ) is not satisfied on the entire space  $X$ , it is satisfied on a certain subset of  $X$ .", "In such a situation, we prove the following theorem.", "Theorem 3.12 Let  $X$   be a  $b$ -complete linear  $n$ -normed space and  $T \\,:\\, X \\,\\rightarrow \\, X$   be an operator such that $\\left\\Vert \\,T\\,x \\,-\\, T\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,\\left\\Vert \\,x \\,-\\, y,\\, 
b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert ,$ for  $x,\\, y \\,\\in \\, \\overline{\\,B} \\,=\\, B_{\\,\\lbrace \\,b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\rbrace }\\,[\\,x_{\\,0},\\,r\\,]$ .", "Moreover, assume that $\\left\\Vert \\,x_{\\,0} \\,-\\, T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,<\\, (\\,1 \\,-\\, \\alpha \\,)\\,r.$ Then the iterative sequence starting from  $x_{\\,0}$   converges to an  $x \\,\\in \\, \\overline{\\,B}$   which is a unique fixed point of  $T$   in  $\\overline{\\,B}$ We verify by induction that  $x_{\\,1} \\,=\\, T\\,x_{\\,0},\\, x_{\\,2} \\,=\\, T\\,x_{\\,1},\\, x_{\\,3} \\,=\\, T\\,x_{\\,2},\\, \\cdots ,\\, x_{\\,k} \\,=\\, T\\,x_{\\,k \\,-\\, 1},\\, \\cdots $   are in  $\\overline{\\,B}$ .", "Clearly,  $x_{\\,0} \\,\\in \\, \\overline{\\,B}$ .", "Assume that  $x_{\\,1},\\, x_{\\,2},\\, \\cdots ,\\, x_{\\,k \\,-\\, 1}$   are in  $\\overline{\\,B}$ .", "We show that  $x_{\\,k}$   also lies in  $\\overline{\\,B}$ .", "We have $&\\left\\Vert \\,x_{\\,2} \\,-\\, x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,\\left\\Vert \\,x_{\\,1} \\,-\\, x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,(\\,1 \\,-\\, \\alpha \\,)\\,r\\\\&\\left\\Vert \\,x_{\\,3} \\,-\\, x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,\\left\\Vert \\,x_{\\,2} \\,-\\, x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha ^{\\,2}\\,(\\,1 \\,-\\, \\alpha \\,)\\,r\\\\&\\hspace{8.5359pt}\\cdots \\hspace{184.9429pt}\\cdots \\hspace{113.81102pt}\\cdots \\\\&\\left\\Vert \\,x_{\\,k\\,-\\,1} \\,-\\, x_{\\,k\\,-\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,\\left\\Vert \\,x_{\\,k\\,-\\,2} \\,-\\, x_{\\,k\\,-\\,3},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha ^{\\,k\\,-\\,2}\\,(\\,1 \\,-\\, \\alpha \\,)\\,r\\\\&\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,k\\,-\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,\\left\\Vert \\,x_{\\,k\\,-\\,1} \\,-\\, x_{\\,k\\,-\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha ^{\\,k\\,-\\,1}\\,(\\,1 \\,-\\, \\alpha \\,)\\,r.$ Therefore, $&\\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&\\le \\, \\left\\Vert \\,x_{\\,0} - x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,1} - x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\, + \\cdots + \\left\\Vert \\,x_{\\,k\\,-\\,1} - x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&\\le \\,(\\,1 \\,-\\, \\alpha \\,)\\,r \\,+\\, \\alpha \\,(\\,1 \\,-\\, \\alpha \\,)\\,r \\,+\\, \\alpha ^{\\,2}\\,(\\,1 \\,-\\, \\alpha \\,)\\,r \\,+\\, \\cdots \\,+\\,\\alpha ^{\\,k\\,-\\,1}\\,(\\,1 \\,-\\, \\alpha \\,)\\,r\\\\&=\\,(\\,1 \\,-\\, \\alpha \\,)\\,r\\left(\\,1 \\,+\\, \\alpha \\,+\\, \\alpha ^{\\,2} \\,+\\, \\cdots \\,+\\, \\alpha ^{\\,k\\,-\\,1}\\,\\right) \\,=\\, \\left(\\,1 \\,-\\, \\alpha ^{\\,k}\\,\\right)\\,r \\,<\\, r.$ So,  $x_{\\,k}$   lies in  $\\overline{\\,B}$ .", "Therefore, every member of the sequence  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   is contained in  $\\overline{\\,B}$ .", "According to the Theorem REF , we now see that the sequence  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   converges to an element  $x^{\\,\\prime }_{\\,0}$ , say.", "The element  $x^{\\,\\prime 
}_{\\,0}$   belong to  $\\overline{\\,B}$   because  $\\overline{\\,B}$   is closed.", "The method of proof of Theorem REF , shows that  $x^{\\,\\prime }_{\\,0}$   is the unique fixed point of  $T$   in  $\\overline{\\,B}$ .", "This completes the proof of the theorem.", "The Cartesian product  $X \\,\\times \\, X$   of linear$n$ -normed space  $X$   is a linear$n$ -normed space with respect to the  $n$ -norm given by $ \\left\\Vert \\,(\\,x_{\\,1},\\, y_{\\,1}\\,),\\, (\\,x_{\\,2},\\, y_{\\,2}\\,),\\, \\cdots ,\\, (\\,x_{\\,n},\\, y_{\\,n}\\,)\\,\\right\\Vert _{1} \\,=\\, \\,\\Vert \\,x_{\\,1},\\, x_{\\,2},\\, \\cdots ,\\, x_{\\,n}\\,\\Vert \\,+\\, \\Vert \\,y_{\\,1},\\, y_{\\,2},\\, \\cdots ,\\, y_{\\,n}\\,\\Vert ,$ for all  $(\\,x_{\\,1},\\, y_{\\,1}\\,),\\, (\\,x_{\\,2},\\, y_{\\,2}\\,),\\, \\cdots ,\\, (\\,x_{\\,n},\\, y_{\\,n}\\,) \\,\\in \\, X \\,\\times \\, X$ .", "Lemma 3.13 Let  $X$   be a linear  $n$ -normed space and  $X \\,\\times \\, X$   be the product linear  $n$ -normed space with respect to the n-norm  $\\left\\Vert \\,\\cdot ,\\, \\cdots ,\\, \\cdot \\,\\right\\Vert _{1}$ .", "Consider the open ball  $B_{\\,\\lbrace \\,a_{\\,2},\\, \\cdots ,\\, a_{\\,n}\\,\\rbrace }\\,\\left(\\,(\\,x_{\\,0},\\, y_{\\,0}\\,),\\, r_{\\,1}\\,\\right)$   in  $X \\,\\times \\, X$ , where  $a_{\\,2} \\,=\\, \\left(\\,e_{\\,2},\\,e^{\\,\\prime }_{\\,2}\\,\\right),\\, \\cdots ,\\, a_{\\,n} \\,=\\, \\left(\\,e_{\\,n},\\,e^{\\,\\prime }_{\\,n}\\,\\right)$ .", "Then there exist two open balls  $B_{\\,\\lbrace \\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\rbrace }\\,(\\,x_{\\,0},\\, r\\,)$   and  $B_{\\,\\lbrace \\,e^{\\,\\prime }_{\\,2},\\, \\cdots ,\\, e^{\\,\\prime }_{\\,n}\\,\\rbrace }\\,(\\,y_{\\,0},\\, r^{\\,\\prime }\\,)$   in  $X$   such that $B_{\\,\\lbrace \\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\rbrace }\\,(\\,x_{\\,0},\\, r\\,) \\,\\times \\, B_{\\,\\lbrace \\,e^{\\,\\prime }_{\\,2},\\, \\cdots ,\\, e^{\\,\\prime }_{\\,n}\\,\\rbrace }\\,(\\,y_{\\,0},\\, r^{\\,\\prime }\\,) \\,\\subseteq \\, B_{\\,\\lbrace \\,a_{\\,2},\\, \\cdots ,\\, a_{\\,n}\\,\\rbrace }\\,\\left(\\,(\\,x_{\\,0},\\, y_{\\,0}\\,),\\, r_{\\,1}\\,\\right)$ Select  $r$   and  $r^{\\,\\prime }$   positive numbers both less than  $r_{\\,1}$ .", "Let $x \\,\\in \\, B_{\\,\\lbrace \\,e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\rbrace }\\,(\\,x_{\\,0},\\, r\\,)\\; \\;\\text{and}\\; \\,y \\,\\in \\, B_{\\,\\lbrace \\,e^{\\,\\prime }_{\\,2},\\, \\cdots ,\\, e^{\\,\\prime }_{\\,n}\\,\\rbrace }\\,(\\,y_{\\,0},\\, r^{\\,\\prime }\\,).$ Then $\\left\\Vert \\,x \\,-\\, x_{\\,0},\\, e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\right\\Vert \\,<\\, r\\; \\;\\text{and}\\; \\;\\left\\Vert \\,y \\,-\\, y_{\\,0},\\, e^{\\,\\prime }_{\\,2},\\, \\cdots ,\\, e^{\\,\\prime }_{\\,n}\\,\\right\\Vert \\,<\\, r^{\\,\\prime }.$ Now $&\\left\\Vert \\,(\\,x,\\,y\\,) \\,-\\, (\\,x_{\\,0},\\,y_{\\,0}\\,),\\, (\\,e_{\\,2},\\,e^{\\,\\prime }_{\\,2}\\,),\\, \\cdots ,\\, (\\,e_{\\,n},\\,e^{\\,\\prime }_{\\,n}\\,)\\,\\right\\Vert \\\\& \\,=\\, \\left\\Vert \\,(\\,x \\,-\\, x_{\\,0}\\,),\\, (\\,y \\,-\\, y_{\\,0}\\,),\\, (\\,e_{\\,2},\\,e^{\\,\\prime }_{\\,2}\\,),\\, \\cdots ,\\, (\\,e_{\\,n},\\,e^{\\,\\prime }_{\\,n}\\,)\\,\\right\\Vert \\\\&=\\, \\left\\Vert \\,x \\,-\\, x_{\\,0},\\, e_{\\,2},\\, \\cdots ,\\, e_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,y \\,-\\, y_{\\,0},\\, e^{\\,\\prime }_{\\,2},\\, \\cdots ,\\, e^{\\,\\prime }_{\\,n}\\,\\right\\Vert \\,<\\, r \\,+\\, r^{\\,\\prime } \\,<\\, r_{\\,1}.$ So,  $(\\,x,\\,y\\,) \\,\\in \\, B_{\\,\\lbrace \\,a_{\\,2},\\, 
\\cdots ,\\, a_{\\,n}\\,\\rbrace }\\,\\left(\\,(\\,x_{\\,0},\\, y_{\\,0}\\,),\\, r_{\\,1}\\,\\right)$ .", "This proves the lemma.", "Theorem 3.14 Let  $T$   be a linear operator on  $X$   such that $\\left\\Vert \\,T\\,x \\,-\\, T\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,<\\, \\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x,\\,y \\,\\in \\, X, \\,x \\,\\ne \\, y.$ Suppose that there exists a point  $x \\,\\in \\, X$   such that the sequence of iterates  $\\left\\lbrace \\,T^{\\,k}\\,(\\,x\\,)\\,\\right\\rbrace $   has a subsequence  $b$ -converging to  $\\xi \\,\\in \\, X$ .", "Then  $\\xi $   is the unique fixed point of  $T$ .", "Let  $\\left\\lbrace \\,T^{\\,k_{\\,i}}\\,(\\,x\\,)\\,\\right\\rbrace $   be that subsequence of  $\\left\\lbrace \\,T^{\\,k}\\,(\\,x\\,)\\,\\right\\rbrace $   which is  $b$ -convergent.", "So, take $\\lim \\limits _{i \\,\\rightarrow \\, \\infty }\\,T^{\\,k_{\\,i}}\\,(\\,x\\,) \\,=\\, \\xi \\; \\;\\text{(\\,say\\,)}.$ Suppose, if possible, that  $\\xi $   is not a fixed point of  $T$ , i.e.,  $T\\,(\\,\\xi \\,) \\,\\ne \\, \\xi $ .", "Let  $Y$   be the subset of  $X \\,\\times \\, X$   defined by $Y \\,=\\, X \\,\\times \\, X \\,-\\, \\Delta ,\\; \\;\\text{where}\\; \\;\\Delta \\,=\\, \\left\\lbrace \\,(\\,x,\\,y\\,) \\,\\in \\, X \\,\\times \\, X \\,:\\, x \\,=\\, y\\,\\right\\rbrace .$ We now define a real-valued function of two variables on  $Y$   by $f\\,(\\,p,\\,q\\,) \\,=\\, \\dfrac{\\left\\Vert \\,T\\,(\\,p\\,) \\,-\\, T\\,(\\,q\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert }{\\left\\Vert \\,p \\,-\\, q,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert },\\; \\;(\\,p,\\,q\\,) \\,\\in \\, Y.$ If  $\\left\\lbrace \\,p_{\\,k}\\,\\right\\rbrace $    $b$ -converges to  $p$   and  $\\left\\lbrace \\,q_{\\,k}\\,\\right\\rbrace $    $b$ -converges to  $q$ , then because  $T$   is  $b$ -continuous, we get that  $\\left\\lbrace \\,T\\,(\\,p_{\\,k}\\,)\\,\\right\\rbrace $    $b$ -converges to  $T\\,(\\,p\\,)$   and  $\\left\\lbrace \\,T\\,(\\,q_{\\,k}\\,)\\,\\right\\rbrace $    $b$ -converges to  $T\\,(\\,q\\,)$   and so we have $f\\,(\\,p_{\\,k},\\,q_{\\,k}\\,) \\,=\\, \\dfrac{\\left\\Vert \\,T\\,(\\,p_{\\,k}\\,) \\,-\\, T\\,(\\,q_{\\,k}\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert }{\\left\\Vert \\,p_{\\,k} \\,-\\, q_{\\,k},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert } \\,\\rightarrow \\, \\dfrac{\\left\\Vert \\,T\\,(\\,p\\,) \\,-\\, T\\,(\\,q\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert }{\\left\\Vert \\,p \\,-\\, q,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert }$ for  $(\\,p_{\\,k},\\,q_{\\,k}\\,),\\, (\\,p,\\,q\\,) \\,\\in \\, Y$ .", "This implies that  $f\\,(\\,p,\\,q\\,)$   is  $b$ -continuous on  $Y$ .", "By (REF ),  $f\\,(\\,p,\\,q\\,) \\,<\\, 1$   for  $(\\,p,\\,q\\,) \\,\\in \\, Y$   and so  $f\\left(\\,\\xi ,\\,T\\,(\\,\\xi \\,)\\,\\right) \\,<\\, 1$ .", "Since  $f\\,(\\,p,\\,q\\,)$   is  $b$ -continuous at  $(\\,\\xi ,\\,T\\,(\\,\\xi \\,)\\,)$ , if  $f\\left(\\,\\xi ,\\,T\\,(\\,\\xi \\,)\\,\\right) \\,<\\, R \\,<\\, 1$ , there exists an open ball  $U$ , say, around  $(\\,\\xi ,\\,T\\,(\\,\\xi \\,)\\,)$   such that $\\text{for}\\; \\;(\\,p,\\,q\\,) \\,\\in \\, U \\,\\cap \\, Y, \\;0 \\,\\le \\, f\\,(\\,p,\\,q\\,) \\,\\le \\, R \\,<\\, 1.$ By Lemma REF , there exist two open balls  $B_{\\,1} \\,=\\, B_{\\,\\lbrace \\,b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\rbrace }\\,(\\,\\xi ,\\, \\alpha \\,)$   and  $B_{\\,2} 
\\,=\\, B_{\\,\\lbrace \\,b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\rbrace }\\,(\\,T\\,\\xi ,\\, \\alpha \\,)$   such that  $B_{\\,1} \\,\\times \\, B_{\\,2} \\,\\subset \\, U$ .", "The positive number  $\\alpha $   may be choosen small enough to ensure also  $\\alpha \\,<\\, \\dfrac{1}{3}\\,\\left\\Vert \\,\\xi \\,-\\, T\\,\\xi ,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert $   and this only shows that the open balls  $B_{1}$   and  $B_{2}$   are disjoint.", "By (REF ), there exists a positive integer  $N$   such that  $T^{\\,k_{\\,i}}\\,(\\,x\\,) \\,\\in \\, B_{1}$   for  $i \\,>\\, N$ .", "By (REF ) $\\left\\Vert \\,T^{\\,k_{\\,i} \\,+\\, 1}\\,(\\,x\\,) \\,-\\, T\\,\\xi ,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,<\\, \\left\\Vert \\,T^{\\,k_{\\,i}}\\,(\\,x\\,) \\,-\\, \\xi ,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,<\\, \\alpha $ so that  $T^{\\,k_{\\,i} \\,+\\, 1}\\,(\\,x\\,) \\,\\in \\, B_{2}$   for  $i \\,>\\, N$ .", "Since  $B_{1} \\,\\cap \\, B_{2} \\,=\\, \\phi $ , it follows that $\\left\\Vert \\,T^{\\,k_{\\,i}}\\,(\\,x\\,) \\,-\\, T^{\\,k_{\\,i} \\,+\\, 1}\\,(\\,x\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,>\\, \\alpha \\; \\;\\text{for}\\; \\;i \\,>\\, N.$ Now, for  $i \\,>\\, N$ ,  $T^{\\,k_{\\,i}}\\,(\\,x\\,) \\,\\in \\, B_{1}$   and  $T^{\\,k_{\\,i} \\,+\\, 1}\\,(\\,x\\,) \\,\\in \\, B_{2}$   and so from (REF ) $f\\,\\left(\\,T^{\\,k_{\\,i}}\\,(\\,x\\,),\\,T^{\\,k_{\\,i} \\,+\\, 1}\\,(\\,x\\,)\\,\\right) \\,<\\, R$ , i .", "e., $&\\left\\Vert \\,T^{\\,k_{\\,i} \\,+\\, 1}\\,(\\,x\\,) \\,-\\, T^{\\,k_{\\,i} \\,+\\, 2}\\,(\\,x\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\nonumber \\\\& \\,<\\, R\\,\\left\\Vert \\,T^{\\,k_{\\,i}}\\,(\\,x\\,) \\,-\\, T^{\\,k_{\\,i} \\,+\\, 1}\\,(\\,x\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert .$ Let  $l \\,>\\, j \\,>\\, N$ .", "A repeated use of (REF ) and (REF ) gives $&\\left\\Vert \\,T^{\\,k_{\\,l}}\\,(\\,x\\,) \\,-\\, T^{\\,k_{\\,l} \\,+\\, 1}\\,(\\,x\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\left\\Vert \\,T^{\\,k_{\\,l \\,-\\,1} \\,+\\, 1}\\,(\\,x\\,) \\,-\\, T^{\\,k_{\\,l \\,-\\,l} \\,+\\, 2}\\,(\\,x\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\&<\\, R\\,\\left\\Vert \\,T^{\\,k_{\\,l \\,-\\,1}}\\,(\\,x\\,) \\,-\\, T^{\\,k_{\\,l \\,-\\,l} \\,+\\, 1}\\,(\\,x\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\&\\le \\, \\cdots \\,\\hspace{42.67912pt}\\,\\cdots \\,\\hspace{28.45274pt}\\,\\cdots \\hspace{42.67912pt},\\,\\cdots \\\\&\\le \\, R^{\\,l \\,-\\, j}\\,\\left\\Vert \\,T^{\\,k_{\\,j}}\\,(\\,x\\,) \\,-\\, T^{\\,k_{\\,j} \\,+\\, 1}\\,(\\,x\\,),\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\rightarrow \\, 0\\; \\;\\text{as}\\; \\;k \\,\\rightarrow \\, \\infty \\; \\;[\\;\\text{since}\\;R \\,<\\, 1\\;].$ But this last relation contradicts (REF ).", "Therefore we arrive at a contradiction.", "Hence,  $T\\,(\\,\\xi \\,) \\,=\\, \\xi $   and  $\\xi $   is a fixed point of  $T$ .", "If  $\\eta \\,\\ne \\, \\xi $   is another fixed point i .", "e.,  $T\\,(\\,\\eta \\,) \\,=\\, \\eta $ , then $\\left\\Vert \\,T\\,\\xi \\,-\\, T\\,\\eta ,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\left\\Vert \\,\\xi \\,-\\, \\eta ,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert $ against (REF ).", "Hence,  $\\xi $   is the unique fixed point  $T$ .", "This proves the theorem.", "Next, we prove some further theorems on fixed points of operators in linear  $n$ -normed space.", "Theorem 
3.15 Let  $X$   be a  $b$ -complete linear  $n$ -normed space and  $T$   be a mapping of  $X$   into itself.", "Suppose that for each positive integer  $k$ , $\\left\\Vert \\,T^{\\,k}\\,x \\,-\\, T^{\\,k}\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, a_{\\,k}\\,\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x,\\,y \\,\\in \\, X,$ where  $a_{\\,k} \\,>\\, 0$   is independent of  $x,\\,y$ .", "If the series  $\\sum \\limits _{k \\,=\\, 1}^{\\,\\infty }\\,a_{\\,k}$   is convergent, then  $T$   has a unique fixed point in  $X$ .", "Let  $x_{\\,0}$   be an arbitrary element in  $X$   and consider the sequence  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   of iterates,  $x_{\\,k} \\,=\\, T^{\\,k}\\,x_{\\,0},\\; \\;k \\,=\\, 1,\\,2,\\,3,\\cdots $ .", "We note that  $x_{k \\,+\\, 1} \\,=\\, T^{\\,k \\,+\\, 1}\\,x_{\\,0} \\,=\\, T^{\\,k}\\,\\left(\\,T\\,x_{\\,0}\\,\\right) \\,=\\, T^{\\,k}\\,x_{\\,1}$   and also,  $x_{k \\,+\\, 1} \\,=\\, T\\,\\left(\\,T^{\\,k}\\,x_{\\,0}\\,\\right) \\,=\\, T\\,x_{\\,k}$ .", "If  $p$   and  $q\\;(\\,p \\,>\\, q\\,)$   be positive integers, then we obtain $\\left\\Vert \\,x_{\\,p} \\,-\\, x_{\\,q},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert &\\,\\le \\, \\sum \\limits _{v \\,=\\, q}^{\\,p \\,-\\, 1}\\,\\left\\Vert \\,x_{\\,v} \\,-\\, x_{\\,v \\,+\\, 1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&=\\, \\sum \\limits _{v \\,=\\, p}^{\\,p \\,-\\, 1}\\,\\left\\Vert \\,T^{\\,v}\\,x_{\\,0} \\,-\\, T^{\\,v}\\,x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\sum \\limits _{v \\,=\\, q}^{\\,p \\,-\\, 1}\\,a_{\\,v}\\,\\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;[\\;\\text{by}\\;(\\,\\ref {eq1.8}\\,)\\;]$ If  $x_{\\,0} \\,=\\, x_{\\,1}$ , then a fixed point is obtained.", "Let  $x_{\\,0} \\,\\ne \\, x_{\\,1}$   and  $r$   be a positive integer with  $r \\,>\\, \\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert $ .", "As the series  $\\sum \\limits _{k \\,=\\, 1}^{\\,\\infty }\\,a_{\\,k}$   is convergent, for  $\\epsilon \\,>\\, 0$   arbitrary there exists a positive integer  $k_{\\,0}$   such that  $\\sum \\limits _{v \\,=\\, q}^{\\,p \\,-\\, 1}\\,a_{\\,v} \\,<\\, \\dfrac{\\epsilon }{r}$   if  $p \\,>\\, q \\,\\ge \\, k_{\\,0}$ .", "Then for  $p \\,>\\, q \\,\\ge \\, k_{\\,0}$ , we have $\\left\\Vert \\,x_{\\,p} \\,-\\, x_{\\,q},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,<\\, \\dfrac{\\epsilon }{r}\\,\\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,<\\, \\epsilon .$ Therefore  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   is a  $b$ -Cauchy sequence and  $b$ -completeness of  $X$   implies the existence of  $\\xi \\,\\in \\, X$   such that  $\\lim \\limits _{k \\,\\rightarrow \\, \\infty }\\,x_{\\,k} \\,=\\, \\xi $ .", "If  $k$   be a positive integer then $&\\left\\Vert \\,\\xi \\,-\\, T\\,\\xi ,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\left\\Vert \\,\\xi \\,-\\, x_{\\,k \\,+\\, 1},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{k \\,+\\, 1} \\,-\\, T\\,\\xi ,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\&=\\, \\left\\Vert \\,\\xi \\,-\\, x_{\\,k \\,+\\, 1},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,T\\,x_{\\,k} \\,-\\, T\\,\\xi ,\\, b_{\\,2},\\,\\cdots 
,\\,b_{\\,n}\\,\\right\\Vert \\\\&\\le \\, \\left\\Vert \\,\\xi \\,-\\, x_{\\,k \\,+\\, 1},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,+\\, a_{\\,1}\\left\\Vert \\,x_{\\,k} \\,-\\, \\xi ,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\; \\;[\\;\\text{by (\\ref {eq1.8})}\\;]\\\\&\\,\\rightarrow \\, 0\\, \\;\\text{as}\\, \\,k \\,\\rightarrow \\, \\infty .$ So,  $T\\,\\xi \\,=\\, \\xi $   and  $\\xi $   is a fixed point of  $T$ .", "We now prove the uniqueness of  $\\xi $ .", "If  $\\eta $   is a fixed point of  $T$ , then for any positive integer  $k$ ,  $\\eta \\,=\\, T^{\\,k}\\,\\eta $   and  $\\xi \\,=\\, T^{\\,k}\\,\\xi $ .", "So, $\\left\\Vert \\,\\xi \\,-\\, \\eta ,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,=\\, \\left\\Vert \\,T^{\\,k}\\,\\xi \\,-\\, T^{\\,k}\\,\\eta ,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, a_{\\,k}\\,\\left\\Vert \\,\\xi \\,-\\, \\eta ,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ If  $\\left\\Vert \\,\\xi \\,-\\, \\eta ,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,>\\, 0$   then  $a_{\\,k \\,} \\,\\ge \\, 1$   for all  $k$ .", "So,  $a_{\\,k}$   cannot tend to zero, and this contradiction shows that  $\\xi \\,=\\, \\eta $   and the theorem is proved.", "Remark 3.16 We now deduce Theorem REF  from the above Theorem REF .", "Since  $T$   is a  $b$ -contraction mapping, there exists  $0 \\,<\\, \\alpha \\,<\\, 1$   such that $\\left\\Vert \\,T\\,x \\,-\\, T\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha \\,\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\; \\;\\forall \\; x,\\,y \\,\\in \\, X.$ For  $x,\\,y \\,\\in \\, X$ , we get from (REF ) $\\left\\Vert \\,T^{\\,2}\\,x \\,-\\, T^{\\,2}\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert &\\,\\le \\, \\alpha \\,\\left\\Vert \\,T\\,x \\,-\\, T\\,y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\alpha ^{\\,2}\\,\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ $\\left\\Vert \\,T^{\\,3}\\,x \\,-\\, T^{\\,3}\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert &\\,\\le \\, \\alpha \\,\\left\\Vert \\,T^{\\,2}\\,x \\,-\\, T^{\\,2}\\,y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\alpha ^{\\,3}\\,\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert $ and in general $\\left\\Vert \\,T^{\\,k}\\,x \\,-\\, T^{\\,k}\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\alpha ^{\\,k}\\,\\left\\Vert \\,x \\,-\\, y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ Since the series  $\\sum \\limits _{k \\,=\\, 1}^{\\,\\infty }\\,\\alpha ^{\\,k}$   is convergent, by the previous theorem  $T$   has a fixed point in  $X$ .", "Theorem 3.17 Let  $T \\,:\\, X \\,\\rightarrow \\, X$   be an operator, where  $X$   is a  $b$ -complete linear  $n$ -normed space and  $T$   satisfies the condition $&\\left\\Vert \\,T\\,x \\,-\\, T\\,y,\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\nonumber \\\\& \\,\\le \\, \\beta \\,\\left\\lbrace \\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,y \\,-\\, T\\,y,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right\\rbrace $ where  $0 \\,<\\, \\beta \\,<\\, \\dfrac{1}{2}$   and  $x,\\,y \\,\\in \\, X$ .", "Then there exists a unique fixed point  $x_{\\,0}$   of the operator  $T$   in  $X$   provided the set  $\\left\\lbrace 
\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\rbrace $   is linearly independent.", "Let  $x \\,\\in \\, X$   and  $x_{\\,1} \\,=\\, T\\,x,\\, x_{\\,2} \\,=\\, T\\,x_{\\,1},\\, x_{\\,3} \\,=\\, T\\,x_{\\,2},\\, \\cdots ,\\, x_{\\,k} \\,=\\, T\\,x_{\\,k \\,-\\, 1},\\, \\cdots $ .", "Then $&\\left\\Vert \\,x_{\\,1} \\,-\\, x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,=\\, \\left\\Vert \\,T\\,x \\,-\\, T\\,x_{\\,1},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\&\\le \\, \\beta \\left\\lbrace \\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,1} \\,-\\, T\\,x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right\\rbrace \\\\&=\\, \\beta \\left\\lbrace \\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,1} \\,-\\, x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right\\rbrace .\\\\&\\Rightarrow \\left\\Vert \\,x_{\\,1} \\,-\\, x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\dfrac{\\beta }{1 \\,-\\, \\beta }\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ And $&\\left\\Vert \\,x_{\\,2} \\,-\\, x_{\\,3},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,=\\, \\left\\Vert \\,T\\,x_{\\,1} \\,-\\, T\\,x_{\\,2},\\, b_{\\,2},\\,\\cdots ,\\,b_{\\,n}\\,\\right\\Vert \\\\&\\le \\, \\beta \\left\\lbrace \\,\\left\\Vert \\,x_{\\,1} \\,-\\, T\\,x_{\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,2} \\,-\\, T\\,x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right\\rbrace \\\\&=\\, \\beta \\left\\lbrace \\,\\left\\Vert \\,x_{\\,1} \\,-\\, x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,2} \\,-\\, x_{\\,3},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right\\rbrace .\\\\&\\Rightarrow \\left\\Vert \\,x_{\\,2} \\,-\\, x_{\\,3},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\dfrac{\\beta }{1 \\,-\\, \\beta }\\,\\left\\Vert \\,x_{\\,1} \\,-\\, x_{\\,2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\\\ &\\hspace{133.72786pt}\\,\\le \\, \\left(\\,\\dfrac{\\beta }{1 \\,-\\, \\beta }\\,\\right)^{\\,2}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ Similarly, $\\left\\Vert \\,x_{\\,3} \\,-\\, x_{\\,4},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\left(\\,\\dfrac{\\beta }{1 \\,-\\, \\beta }\\,\\right)^{\\,3}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ In general, if  $k$   is any positive integer, then $\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,k\\,+\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\left(\\,\\dfrac{\\beta }{1 \\,-\\, \\beta }\\,\\right)^{\\,k}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ So, if  $p$   is any positive integer, then $&\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,k \\,+\\, p},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\nonumber \\\\&\\le \\,\\left\\Vert \\,x_{\\,k} - x_{\\,k \\,+\\, 1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,k \\,+\\, 1} - x_{\\,k \\,+\\, 2},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\,\\cdots \\nonumber \\\\&\\hspace{28.45274pt}\\cdots \\,+\\,\\left\\Vert 
\\,x_{\\,k\\,+\\,p\\,-1} - x_{\\,k \\,+\\, p},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\nonumber \\\\&\\le \\, \\left(\\,r^{\\,k} \\,+\\, r^{\\,k\\,+\\,1} \\,+\\, \\cdots \\,+\\, r^{\\,k\\,+\\,p\\,-\\,1}\\,\\right)\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert , \\;\\text{where}\\;\\,r \\,=\\, \\dfrac{\\beta }{1 \\,-\\, \\beta }\\nonumber \\\\&<\\, \\dfrac{r^{\\,k}}{1 \\,-\\, r}\\,\\left\\Vert \\,x \\,-\\, T\\,x,\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ Since  $0 \\,<\\, \\beta \\,<\\, \\dfrac{1}{2}$ , we have  $0 \\,<\\, r \\,<\\, 1$   and so by (REF ),  $\\left\\Vert \\,x_{\\,k} \\,-\\, x_{\\,k \\,+\\, p},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\rightarrow \\, 0$   as  $k \\,\\rightarrow \\, \\infty $ .", "Therefore,  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   is a  $b$ -Cauchy sequence.", "Since  $X$   is a  $b$ -complete linear  $n$ -normed space, the sequence  $\\left\\lbrace \\,x_{\\,k}\\,\\right\\rbrace $   is convergent in the semi-normed space  $\\left(\\,X,\\, \\left\\Vert \\,\\cdot ,\\,b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right)$ .", "So, let  $\\lim \\limits _{k \\,\\rightarrow \\, \\infty }\\,x_{\\,k} \\,=\\, x_{\\,0}$   with the property that  $\\left\\lbrace \\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\rbrace $   is linearly independent.", "We now show that  $x_{\\,0}$   is a fixed point of  $T$ .", "We have $&\\left\\Vert \\,x_{\\,0} \\,-\\, T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\le \\, \\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,x_{\\,k} \\,-\\, T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&=\\, \\left\\Vert \\,x_{\\,0} \\,-\\, x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,T\\,x_{\\,k\\,-\\,1} \\,-\\, T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&\\le \\, \\beta \\left\\lbrace \\,\\left\\Vert \\,x_{k\\,-\\,1} - T\\,x_{k\\,-\\,1},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert + \\left\\Vert \\,x_{\\,0} - T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right\\rbrace \\,+\\\\&\\hspace{28.45274pt}+\\,\\left\\Vert \\,x_{\\,0} - x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert .$ This implies that $&(\\,1 \\,-\\, \\beta \\,)\\,\\left\\Vert \\,x_{\\,0} - T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\& \\,\\le \\, \\left\\Vert \\,x_{\\,0} - x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\beta \\,\\left\\Vert \\,x_{k\\,-\\,1} - x_{\\,k},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&\\rightarrow \\, 0\\; \\;\\text{as}\\; \\,k \\,\\rightarrow \\, \\infty \\; \\;\\left[\\;\\text{since}\\;\\lim \\limits _{k \\,\\rightarrow \\, \\infty }\\,x_{\\,k} \\,=\\, x_{\\,0}\\,\\right].$ So,  $T\\,x_{\\,0} \\,=\\, x_{\\,0}$   and  $x_{\\,0}$   is a fixed point of  $T$ .", "We now prove that  $x_{\\,0}$   is the only fixed point of  $T$ .", "Let  $T\\,y_{\\,0} \\,=\\, y_{\\,0}$   such that  $\\left\\lbrace \\,x_{\\,0} \\,-\\, y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\rbrace $   is linearly independent.", "Then $&\\left\\Vert \\,x_{\\,0} \\,-\\, y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,=\\, \\left\\Vert \\,T\\,x_{\\,0} \\,-\\, T\\,y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\\\&\\le \\, \\beta 
\\left\\lbrace \\,\\left\\Vert \\,x_{\\,0} \\,-\\, T\\,x_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,+\\, \\left\\Vert \\,y_{\\,0} \\,-\\, T\\,y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\Vert \\,\\right\\rbrace \\,=\\, 0.$ Since the  $n$ -norm vanishes only when its arguments are linearly dependent, this contradicts the assumed independence of  $\\left\\lbrace \\,x_{\\,0} \\,-\\, y_{\\,0},\\, b_{\\,2},\\, \\cdots ,\\, b_{\\,n}\\,\\right\\rbrace $ , and hence  $x_{\\,0} \\,=\\, y_{\\,0}$ .", "This proves the theorem." ] ]
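To make the iteration scheme of Theorem 3.15 concrete, here is a small numerical sketch (our own illustration, not part of the text; the map $T$ and the ambient space are hypothetical choices): when the iterates of $T$ are uniformly contractive with summable constants $a_k$, the Picard iterates $x_k = T^k x_0$ exhibit geometrically decaying successive gaps, as the proof's Cauchy argument predicts.

```python
# Illustrative sketch (not from the text): Picard iteration x_{k+1} = T(x_k)
# for a hypothetical 0.5-contraction T on R^2, so that a_k = 0.5^k and
# sum_k a_k < infinity, as required by Theorem 3.15.
import numpy as np

def T(x):
    return 0.5 * np.cos(x)  # a 0.5-Lipschitz self-map of R^2

x = np.array([3.0, -1.0])
prev_gap = None
for k in range(1, 11):
    x_next = T(x)
    gap = np.linalg.norm(x_next - x)
    ratio = gap / prev_gap if prev_gap else float("nan")
    print(f"k={k:2d}  ||x_(k+1) - x_k|| = {gap:.3e}  ratio ~ {ratio:.3f}")
    prev_gap, x = gap, x_next
```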
2210.07849
[ [ "Diversified Recommendations for Agents with Adaptive Preferences" ], [ "Abstract When an Agent visits a platform recommending a menu of content to select from, their choice of item depends not only on fixed preferences, but also on their prior engagements with the platform.", "The Recommender's primary objective is typically to encourage content consumption which optimizes some reward, such as ad revenue, but they often also aim to ensure that a wide variety of content is consumed by the Agent over time.", "We formalize this problem as an adversarial bandit task.", "At each step, the Recommender presents a menu of $k$ (out of $n$) items to the Agent, who selects one item in the menu according to their unknown preference model, which maps their history of past items to relative selection probabilities.", "The Recommender then observes the Agent's chosen item and receives bandit feedback of the item's reward.", "In addition to optimizing reward from selected items, the Recommender must also ensure that the total distribution of chosen items has sufficiently high entropy.", "We define a class of preference models which are locally learnable, i.e.", "behavior over the entire domain can be estimated by only observing behavior in a small region; this includes models representable by bounded-degree polynomials as well as functions with a sparse Fourier basis.", "For this class, we give an algorithm for the Recommender which obtains $\\tilde{O}(T^{3/4})$ regret against all item distributions satisfying two conditions: they are sufficiently diversified, and they are instantaneously realizable at any history by some distribution over menus.", "We show that these conditions are closely connected: all sufficiently high-entropy distributions are instantaneously realizable at any item history.", "We also give a set of negative results justifying our assumptions, in the form of a runtime lower bound for non-local learning and linear regret lower bounds for alternate benchmarks." 
], [ "Introduction", "Suppose you manage an online platform that repeatedly provides menus of recommended content to visitors, such as sets of videos to watch or items to purchase, aiming to display options which agents will engage favorably with and yield you high rewards (in the form of ad revenue, watch time, purchases, or other metrics).", "In many settings, the preferences of agents are not fixed a priori, but rather can change as a function of their consumption patterns—the deeper one goes down a content “rabbit hole”, the further one might be likely to keep going.", "This “rabbit hole” effect can lead to (unforeseen) loss of revenue for the platform, as advertisers may later decide that they are not willing to pay as much for this “rabbit hole” content as they would for other content.", "The scope of negative effects emerging from these feedback loops is large, ranging from the emergence of “echo chambers” [15] and rapid political polarization [23] to increased homogeneity which can decrease agent utility [9], amplify bias [20], or drive content providers to leave the platform [22].", "These are harms which many platforms aim to avoid, both for their own sake and out of broader societal concerns.", "Hence, the evolving preferences of the Agent can be directly at odds with the Recommender's objectives of maximizing revenue and ensuring diverse consumption patterns in this dynamic environment.", "Our goal is to study such tensions between the interaction of these two players: the Recommender that recommends menus based on past choices of the Agent so as to maximize its reward (subject to diversity constraints), and the Agent whose preferences evolve as a function of past recommendations.", "To this end, we consider a stylized setting where the Recommender is tasked with providing a menu of $k$ recommended items (out of $n$ total) every round to an Agent for $T$ sequential rounds.", "In each round, the Agent observes the menu, then selects one of the items according to their preference model $M$ , which the Recommender does not know in advance.", "The preference model $M$ takes as input the Agent's memory vector $v$ , which is the normalized histogram of their past chosen items, and assigns relative selection probabilities to each item.", "The selected item at each round results in a reward for the Recommender, specified by an adversarial sequence of reward vectors, which the Recommender receives as bandit feedback, in addition to observing which item was selected.", "The Recommender must choose a sequence of menus to maximize their reward (or minimize regret), subject to a diversity constraint, expressed as a minimum entropy for the empirical item distribution.", "However, any regret minimization problem is incomplete without an appropriate benchmark for comparing the performance of a learner.", "An entropy constraint alone is insufficient to define a such benchmark.", "Due to intricacies of the Agent's preference model, there may be item distributions which are impossible to induce under any sequence of menus (e.g.", "they may strongly dislike the most profitable content).", "Adding to the challenge is the fact that the preference model is initially unknown and must be learned, and the set of item distributions which are instantaneously realizable by sampling a menu from some distribution can shift each round as well.", "Several immediate proposals are infeasible: it is impossible to obtain sublinear regret against the best fixed menu distribution, or even against the best item 
distribution realizable from the uniform memory vector.", "We propose a natural benchmark for which regret minimization becomes possible: the set of item distributions which are everywhere instantaneously realizable (the $\\texttt {EIRD}(M)$ set), i.e.", "item distributions such that, at any memory vector, there is always some menu distribution which induces them.", "We show that this set is also closely related to entropy constraints: when $M$ is sufficiently dispersed (a condition on the minimum selection probability for each item), $\\texttt {EIRD}(M)$ contains all sufficiently high-entropy distributions, and so regret minimization can occur over the entire high-entropy set." ], [ "Our Results", "We give an algorithm which, for a minimum entropy set $H_c$ and preference model $M$ , allows the Recommender to obtain ${O}(T^{3/4})$ regret against the best distribution in the intersection of $H_c$ and $\\texttt {EIRD}(M)$ , provided that $M$ satisfies $\\lambda $ -dispersion and belongs to a class $\\mathcal {M}$ which is locally learnable.", "A $\\lambda $ -dispersed preference model $M$ assigns a preference score of at least $\\lambda > 0$ to every item, ensuring a minimum positive probability of selection to each item in a menu.", "Dispersion is a natural assumption, given our restriction to $\\texttt {EIRD}(M)$ , as items which only have positive selection probability in part of the domain cannot be induced everywhere.", "The local learnability condition for a model class enforces that the behavior of any particular model can be predicted by observing behavior only in a small region.", "This is essentially necessary to have any hope of model estimation in this setting: we show that if learning a class from exact queries requires making queries to many points which are pairwise well-separated, exponentially many rounds are required to implement query learning.", "Despite this restriction, we show that several rich classes of preference models are indeed locally learnable, including those where preference scoring functions are expressed by bounded-degree multivariate polynomials, or by univariate functions with a sparse Fourier basis.", "Our algorithm is explicitly separated into learning and optimization stages.", "The sole objective for the learning stage is to solve the outer problem: recover an accurate hypothesis for the preference model.", "We select sequences of menus which move the Agent's memory vector to various points near the uniform distribution, enabling us to implement local learning and produce a model hypothesis $\\hat{M}$ .", "We then shift our focus to the inner problem for the Recommender, which it is natural to view as a bandit linear optimization problem over the set of distributions under consideration, as we can use $\\hat{M}$ to identify a distribution of menus which generates a particular item distribution.", "However, representing the $\\texttt {EIRD}(\\hat{M})$ set explicitly is impractical, as the functions which generate feasible sets from the history can be highly non-convex.", "Instead, we operate over the potentially larger set where intersections are taken only over the sets $\\texttt {IRD}(v, \\hat{M})$ of instantaneously realizable distributions we have observed thus far.", "This precludes us from using off-the-shelf bandit linear optimization algorithms as a black box, as they typically require the decision set to be specified in advance.", "We introduce a modification of the FKM algorithm [13], RC-FKM, which can operate over contracting decision sets, and additionally 
can account for the imprecision in $\\hat{M}$ when generating menu distributions.", "This enables the Recommender to guide the Agent to minimize regret on their behalf via the sequence of menus they present." ], [ "Summary of Contributions", "Briefly, our main contributions are: We formulate the dynamic interaction between a Recommender and an Agent as an adversarial bandit task.", "We show that no algorithm can obtain $o(T)$ regret against the best menu distribution, or against the best item distribution in the IRD set of the uniform vector.", "We then consider $\\texttt {EIRD}(M)$ and argue that it is a natural benchmark for regret, as it also contains all sufficiently high-entropy distributions over items.", "We define a class of locally learnable functions, which are functions that can be learned using only samples from a small neighborhood.", "We show a number of rich classes of functions where this is possible, and further we show that any class which is not locally learnable cannot be learned quickly by any algorithm which fits a hypothesis using queries.", "We give an algorithm for the Recommender that achieves $\\widetilde{O}(T^{3/4})$ regret against $\\texttt {EIRD}(M)$ for locally learnable classes of preference models that are $\\lambda $ -dispersed, which implements local learning to obtain a sufficiently accurate hypothesis for use in optimizing menu distributions.", "As a component of this, we develop a new algorithm for bandit linear optimization which can operate over contracting decision sets, and which can account for bounded adversarial imprecision in the played action.", "Overall, by considering this stylized setting we are able to provide several insights into the dynamic interaction between an Agent and a Recommender.", "While our algorithm is a useful tool for a Recommender who is already committed to providing diversified recommendations, we also view our results as presenting an intrinsic argument for incorporating such constraints.", "When preferences adapt over time, and Agents may be prone to venturing down content “rabbit holes”, restricting attention to recommendation patterns which are not too concentrated on small sets of items can in fact make the regret minimization problem tractable by discouraging consumption patterns which may be difficult to draw the Agent back from.", "This suggests a synergy between the goal of regret minimization and showing diverse content to the user." ], [ "Related Work", "Feedback loops in user preferences have received significant attention in the recommender systems literature, particularly for models with multiple agents which make use of collaborative filtering methods, and for explicit adaptivity models which are less flexible than those we consider [7], [9], [20], [28], [22].", "Within the online learning literature, our formalization bears some resemblance to bandit problems where multiple arms can be pulled simultaneously, which have received much recent attention [30], [29], [8], [3].", "Our results also share similarities with work on optimization from revealed preferences, where a mapping to a nested convex problem must be learned [27], [12]; with the performative prediction literature, where actions induce a distribution shift which impacts instantaneous reward potential [24], [18]; and more broadly, with repeated game problems against adaptive agents [5], [11], [10].", "Further related work is discussed in Appendix ." 
], [ "Organization", "In Section , we introduce our setting and key definitions, analyze the local learnability of several classes of preference models, and give a series of negative and structural results.", "In Section we introduce a bandit linear optimization algorithm for contracting sets, which we use as a subroutine for our main algorithm in Section .", "We discuss the intuition for our proof techniques throughout, with full proofs deferred to the appendix." ], [ "Model and Preliminaries", "The central object of our setting is the preference model of the Agent, which dictates their relative item preferences based on their selection history and expresses their adaptivity over time.", "[Preference Models] A preference model is a mapping $M : \\Delta (n) \\rightarrow [0,1]^n$ which maps memory vectors $v$ to a preference score vector $s_v = M(v)$ .", "We assume that any input $v \\notin \\Delta (n)$ to $M$ (such as the empty history at $t=1$ ) results in the uniform score vector where $M(v)_i = 1$ for all $i$ .", "A constraint on our sequence of interactions with the Agent is that the resulting item distribution must have sufficiently high entropy.", "[Diversity Constraints] A diversity constraint $H_c \\subset \\Delta (n)$ is the convex set containing all item distributions $v \\in \\Delta (n)$ with entropy at least $c$ , i.e.", "$v$ is in $H_c$ if and only if: H(v) = - i=1n vi (vi) c. We say that a constraint $H_c$ is $\\epsilon $ -satisfied by a distribution $v$ if we have that $\\min _{x \\in H_c}d_{TV}(x, v) \\le \\epsilon $ , where $d_{TV}$ is the total variation distance between probability distributions.", "Our algorithmic results can be extended to any convex constraint set which contains a small region around the uniform distribution, but we focus on entropy constraints as they are quite natural and have interesting connections to our setting which we consider in Section REF ." ], [ "Recommendation Menus for Adaptive Agents", "An instance of our problem consists of an item set $N = [n]$ , a menu size $k$ , a preference model $M$ for the Agent, a constraint $H_c$ , a horizon length of $T$ rounds, and a sequence of linear reward functions $\\rho _1, \\ldots , \\rho _T$ for the Recommender.", "In each round $t \\in \\lbrace 1,\\ldots ,T\\rbrace $ : The Recommender chooses a menu $K_t \\subset N$ with ${K_t} = k$ .", "The Agent chooses item $i \\in K_t$ with probability pKt, vt, it = svt, itj Kt svt, j and updates its memory vector to the normalized histogram $v_{t+1} = \\frac{e_{i}}{t+1} + \\frac{t \\cdot v_t}{t+1},$ where $e_{i}$ is the $i$ th standard unit vector.", "The Recommender observes receives reward $\\rho _{t}(e_{i})$ for the chosen item.", "The goal of the Recommender is to maximize their reward over $T$ rounds subject to $v_T$ satisfying $H_c$ .", "It might seem to the reader that the Recommender can `manipulate' the Agent to achieve any preference score vector over time; however, this is not true as many score vectors might not be achievable depending on the preference model." 
], [ "Realizability Conditions for Item Distributions", "For any memory vector $v$ , we define the feasible set of item choice distributions for Agent in the current round, each generated by a distribution over menus which the Recommender samples from.", "[Instantaneously-Realizable Distributions at $v$ ] Let $p_{K,v} \\in \\Delta (n)$ be the item distribution selected by an Agent presented with menu $K$ at memory vector $v$ , given by: pK,v,i = sv,ij K sv,j.", "The set of instantaneously-realizable distributions at $v$ is given by: IRD(v, M) = K n k pK, v. For any $x \\in \\textup {\\texttt {IRD}}(v, M)$ , any menu distribution $z \\in \\Delta ({n k})$ specifying a convex combination of menu score vectors $p_{K, v}$ which sum to $x$ will generate the item distribution $x$ upon sampling.", "One might hope to match the performance of the best menu distribution, or perhaps the best realizable item distribution from the uniform vector.", "Unfortunately, neither of these are possible.", "There is no algorithm which can obtain $o(T)$ regret against the best item distribution in the $\\textup {\\texttt {IRD}}$ set for the uniform vector, or against the best menu distribution in $\\Delta {{n k}}$ , even when the preference model is known exactly and is expressible by univariate linear functions.", "We give a separate construction for each claim, with the full proof deferred to Appendix .", "The first is a case where the optimal distribution from the uniform vector cannot be played every round, as it draws the the memory vector into $\\textup {\\texttt {IRD}}$ sets where the reward opportunities are suboptimal.", "The second considers menu distributions where obtaining their late-round performance requires committing early to an irreversible course of action.", "Instead, our benchmark will be the set of distributions which are realizable from any memory vector.", "[Everywhere Instantaneously-Realizable Distributions] For a preference model $M$ , the set of everywhere instantaneously-realizable distributions is given by: EIRD(M) = v (n) IRD(v, M).", "This is the set of distributions $x \\in \\Delta (n)$ such that from any memory vector $v$ , there is some menu distribution $z$ such that sampling menus from $d$ induces a choice distribution of $x$ for the agent.", "Note that the set $\\texttt {EIRD}(M)$ is convex, as each $\\texttt {IRD}(v, M)$ is convex by construction." 
], [ "Conditions for Preference Models", "The algorithm we present in Section requires two key conditions for a class of preference models: each model in the class must be dispersed, and the class must be locally learnable.", "This enforces that the Agent is always willing to select every item in the menu they see with some positive probability, and that the behavior at any memory vector can be estimated by observing behavior in a small region.", "[Dispersion] A preference model $M$ is $\\lambda $ -dispersed if $s_{v, i} \\ge \\lambda $ for all $v \\in \\Delta (n)$ and for all $i$ , i.e.", "items always have a score of at least $\\lambda $ at any memory vector.", "The dispersion condition plays an important role in the analysis of our algorithm by enabling efficient exploration, but it additionally coincides with diversity constraints in appropriate regimes.", "[High-Entropy Containment in EIRD] Consider the diversity constraint $H_{c}$ for $c = \\log (n) - \\gamma $ , and let $\\tau \\ge \\exp (-\\gamma )$ .", "Let $M$ be a $\\lambda $ -dispersed preference model with $\\lambda \\ge \\frac{k^2 \\exp (\\gamma / \\tau )}{n}$ .", "For any vector $v \\in H_c$ , there is a vector $v^{\\prime } \\in \\textup {\\texttt {EIRD}}(M)$ such that $d_{TV}(v, v^{\\prime })$ is at most $O(\\tau )$ .", "The key step here, proved in Appendix REF , is that $\\textup {\\texttt {EIRD}}(M)$ contains the uniform distribution over any large subset of items, and taking mixtures of these can approximate any high-entropy distribution.", "Next, for a class of models to be locally learnable, one must be able to accurately estimate a model's preference scores everywhere when only given access to samples in an arbitrarily small region.", "[Local Learnability] Let $$ be a class of preference models, and let EIRD() = M EIRD(M).", "Let $v^*$ be a point in $\\textup {\\texttt {EIRD}}()$ , and $V_{\\alpha }$ be the set of points within distance $\\alpha $ from $v^*$ , for $\\alpha $ such that $V_{\\alpha } \\subseteq \\textup {\\texttt {EIRD}}()$ .", "$$ is $h$ -locally learnable if there is some $v^*$ and an algorithm $$ which, for any $M \\in $ and any $\\alpha > 0$ , given query access to normalized score estimates $\\hat{s}_v$ where ${\\hat{s}_v - M(v) / {M}_v^*}_{\\infty } \\le \\beta $ for any $v \\in V_{\\alpha }$ (where ${M}_v^* = \\sum _i M(v)_i$ ) and for some $\\beta $ , can produce a hypothesis model $\\hat{M}$ such that ${\\hat{M}(x)/\\hat{M}^*_x - M(x)/M^*_x} \\le \\epsilon $ for any $x \\in \\Delta (n)$ and $\\epsilon = \\Omega (\\beta )$ .", "The local learnability condition, while covering many natural examples shown in Section REF , is indeed somewhat restrictive.", "In particular, it is not difficult to see that classes of piecewise functions, such as neural networks with ReLU activations, are not locally learnable.", "However, this appears to be essentially a necessary assumption for efficient learning, given the cumulative nature of memory in our setting.", "We show a runtime lower bound for any algorithm that hopes to learn an estimate $\\hat{M}$ for the preference model $M$ via queries.", "Even a Recommender who can force the Agent to pick a particular item each round, and exactly query the preference model for free at the current memory vector, may require exponentially many rounds to learn $\\hat{M}$ if the points it must query are far apart.", "[Query Learning Lower Bound] Suppose the Recommender can force the Agent to select any item at each step $t$ , and can query $M(v_t)$ at the current memory 
vector  $v_t$ .", "Let  $\\mathcal {A}_S$   be an algorithm which produces a hypothesis  $\\hat{M}$   by receiving queries  $M(v)$   for each  $v \\in S$ .", "For points  $v$   and  $v^{\\prime }$ , let  $d_{\\max }(v, v^{\\prime }) = \\max _i v_i - v_i^{\\prime }$ .", "Then, any sequence of item selections and queries by the Recommender requires at least $\\min _{\\sigma \\in \\pi (S)} \\; \\prod _{i=1}^{|S| - 1} \\left(1 + d_{\\max }(\\sigma (i), \\sigma (i+1)) \\right)$ rounds to run  $\\mathcal {A}_S$ , where  $\\pi (S)$   is the set of permutations over  $S$   and  $\\sigma (i)$   is the  $i$ th item in  $\\sigma $ .", "We prove this in Appendix REF .", "Notably, this implies that if  $S$   contains  $m$   points which, for any pair  $(v, v^{\\prime })$ , have both  $d_{\\max }(v, v^{\\prime }) \\ge \\gamma $   and  $d_{\\max }(v^{\\prime }, v)\\ge \\gamma $ , at least  $\\Omega \\left((1 + \\gamma )^m\\right)$   rounds are required." ], [ "Locally Learnable Preference Models", "There are several interesting examples of model classes which are indeed locally learnable, which we prove in Appendix .", "In general, our approach is to query a grid of points inside the radius- $\\alpha $   ball around the uniform vector, estimate each function's parameters, and show that the propagation of error over the entire domain is bounded.", "Note that the normalizing constants for each query we observe may differ; for univariate functions, we can handle this by only moving a subset of values at a time, allowing for renormalization.", "For multivariate polynomials, we consider two distinct classes and give a separate learning algorithm for each; we can estimate ratios of scores directly for multilinear functions, and if scores are already normalized we can avoid rational functions altogether.", "Each local learning result we prove involves an algorithm which makes queries near the uniform vector.", "We later show in Lemma REF  that taking  $\\lambda \\ge k^2/n$   suffices to ensure that these queries can indeed be implemented via an appropriate sequence of menu distributions for any  $M$   in such a class." ], [ "Bounded-Degree Univariate Polynomials", "Let  $\\mathcal {M}_{BUP}$   be the class of bounded-degree univariate polynomial preference models where: For each  $i$ ,  $M(v)_i = f_i(v_i)$ , where  $f_i$   is a degree- $d$   univariate polynomial which takes values in  $[\\lambda , 1]$   over the range  $[0,1]$   for some constant  $\\lambda > 0$ .", "Univariateness captures cases where relative preferences for an item depend only on the weight of that item in the agent's memory, i.e.", "there are no substitute or complement effects between items.", "$\\mathcal {M}_{BUP}$   is  $O(d)$ -locally learnable by an algorithm  $\\mathcal {A}_{BUP}$   with  $\\beta \\le O(\\epsilon \\lambda ^2 \\cdot (\\frac{\\alpha }{nd})^d)$ ." 
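For concreteness, here is a toy member of this class (our own hypothetical instance), together with a numerical check that each score polynomial stays within $[\lambda, 1]$ on $[0,1]$, i.e. that the model is $\lambda$-dispersed:

```python
# Toy member of the bounded-degree univariate polynomial class (hypothetical
# coefficients), with a grid check of lambda-dispersion over [0, 1].
import numpy as np

lam, d = 0.1, 2
polys = [np.polynomial.Polynomial([0.30, 0.30, 0.30]),   # f_1(x) = 0.3 + 0.3x + 0.3x^2
         np.polynomial.Polynomial([0.10, 0.50, 0.20])]   # f_2(x) = 0.1 + 0.5x + 0.2x^2

grid = np.linspace(0.0, 1.0, 1001)
for i, f in enumerate(polys, start=1):
    vals = f(grid)
    assert vals.min() >= lam and vals.max() <= 1.0       # values stay in [lambda, 1]
    print(f"f_{i}: range [{vals.min():.2f}, {vals.max():.2f}] on [0, 1]")
```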
], [ "Bounded-Degree Multivariate Polynomials", "Let $_{BMLP}$ be the class of bounded-degree multilinear polynomial preference models where: For each $i$ , $M(v)_i = f_i(v)$ , where $f_i$ is a degree-$d$ multilinear (i.e.", "linear in each item) polynomial which takes values in $[\\lambda , 1]$ over $\\Delta (n)$ for some constant $\\lambda > 0$ , and let $_{BNMP}$ be the class of bounded-degree normalized multivariate polynomial preference models where: For each $i$ , $M(v)_i = f_i(v)$ , where $f_i$ is a degree-$d$ polynomial which takes values in $[\\lambda , 1]$ over $\\Delta (n)$ for some constant $\\lambda > 0$ , where $\\sum _i f_i(v) = C$ for some constant $C$ .", "Together, these express a large variety of adaptivity patterns for preferences which depend on frequencies of many items items simultaneously.", "In particular, these can capture relatively intricate “rabbit hole” effects, in which some subsets of items are mutually self-reinforcing, and where their selection can discourage future selection of other subsets.", "$_{BMLP}$ and $_{BNMP}$ are both ${O}({n^d})$ -locally learnable, for $\\beta \\le {O}(\\frac{\\epsilon ^2 }{ \\textup {poly}(n (d/ \\alpha )^{d})) })$ , and $\\beta \\le \\frac{\\epsilon }{\\alpha ^{d} F(n, d)} $ , respectively, where $F(n,d)$ is independent of other parameters." ], [ "Univariate Functions with Sparse Fourier Representations", "We can also allow for classes of functions where the minimum allowable $\\alpha $ depends on some parameter.", "Functions with sparse Fourier representations are such an example, and naturally capture settings where preferences are somewhat cyclical, such as when an Agent goes through “phases” of preferring some type of content for a limited window.", "We say that a function $f : \\rightarrow $ is $\\ell $ -sparse if $f(x) = \\sum _{i=1}^\\ell \\xi _i e^{2\\pi \\mathbf {i} \\eta _i x}$ where $\\eta _i \\in [-F,F]$ denotes the $i$ -th frequency and $\\xi _i$ denotes the corresponding magnitude.", "We say that an $\\ell $ -sparse function $f$ is $\\hat{\\alpha }$ -separable when $\\min _{i \\ne j} |\\eta _i - \\eta _j| > \\hat{\\alpha }$ .", "Let $_{SFR}(\\hat{\\alpha })$ be the class of univariate sparse Fourier representation preference models where: For each $i$ , $M(v)_i = f_i(v_i)$ , where $f_i$ is a univariate $\\ell $ -sparse and $\\hat{\\alpha }$ -separable function which, over $[0,1]$ , is $L$ -Lipschitz and takes values in $[\\lambda , 1]$ for some constant $\\lambda > 0$ .", "$_{SFR}(\\hat{\\alpha })$ is $\\widetilde{O}(n \\ell )$ -locally learnable by an algorithm $_{SFR}$ with $\\beta \\le O(\\frac{\\epsilon \\lambda \\alpha }{\\sqrt{n} \\ell } )$ and any $\\alpha \\ge {\\Omega }(1/\\hat{\\alpha })$ ." 
], [ "Bandit Linear Optimization with Contracting Sets", "The inner problem for the Recommender can be viewed as a bandit linear optimization problem over $H_c \\cap \\texttt {EIRD}(M)$ .", "However, representing $\\texttt {EIRD}(M)$ is challenging even if we know $M$ exactly, as it involves an intersection over infinitely many sets (generated by a possibly non-convex function), and a net approximation would involve exponential dependence on $n$ .", "Instead, our approach will be to operate over the larger set $H_c \\cap { \\bigcap _{t} \\texttt {IRD}(v_t, M)}$ for the memory vectors $v_t$ we have seen thus far, where representing each $\\texttt {IRD}$ has exponential dependence only on $k$ (from enumerating all menus).", "The tradeoff is that we can no longer directly use off-the-shelf bandit linear optimization algorithms for a known and fixed decision set such as FKM [13] or SCRiBLe [1] as a subroutine, as our decision set is contracting each round.", "We introduce an algorithm for bandit linear optimization, a modification of the FKM algorithm we call Robust Contracting FKM (RC-FKM), which handles this issue by projecting to our estimate of the contracted decision set at each step.", "Additionally, RC-FKM can handle the imprecision resulting from our model estimation step, which can be represented by small adversarial perturbations to the action vector in each round; we modify the sampling rule to ensure that our target action remains in the true decision set even when perturbations are present.", "We prove the regret bound for RC-FKM in Appendix .", "(Robust Contracting FKM).", "Input: sequence of contracting convex decision sets $_1, \\ldots _T$ containing $\\mathbf {0}$ , perturbation vectors $\\xi _1,\\ldots , \\xi _T$ where ${\\xi _t} \\le \\epsilon $ , parameters $\\delta $ , $\\eta $ .", "Set $x_1 = \\mathbf {0}$ $t = 1$ to $T$ Draw $u_t \\in \\mathbb {S}_1$ uniformly at random, set $y_t = x_t + \\delta u_t + \\xi _t$ Play $y_t$ , observe and incur loss $\\phi _t \\in [0,1]$ , where $[\\phi _t] = f_t(y_t)$ Let $g_t = \\frac{n}{\\delta } \\phi _t u_t$ Let $_{t+1,\\delta , \\epsilon } =\\lbrace x \\vert \\frac{r}{r-\\delta - \\epsilon } x_t \\in _{t+1} \\rbrace $ Update $x_{t+1} = \\Pi _{_{t+1,\\delta ,\\epsilon }}[x_t - \\eta g_t]$ [Regret Bound for Algorithm ] For a sequence of $G$ -Lipschitz linear losses $f_1,\\ldots ,f_T$ and a contracting sequence of domains $_1,\\ldots ,_T$ (with $_j \\subseteq _i$ for $j > i$ , each with diameter at most $D$ , and where a ball of radius $r > \\delta +\\epsilon $ around $\\mathbf {0}$ is contained in $_T$ ), and adversarially chosen unobserved vectors $\\xi _1,\\ldots , \\xi _T$ with ${\\xi _t} \\le \\epsilon $ which perturb the chosen action at each step, with parameters $\\eta = \\frac{D}{nT^{3/4}}$ and $\\delta = \\frac{1}{T^{1/4}}$ , Algorithm obtains the expected regret bound t=1T [t] - x T t=1T ft(x) n GDT3/4 + G D T3/4r + 2 GDTr." 
], [ "Recommendations for Adaptive Agents ", "Our main algorithm begins with an explicit learning phase, after which we conduct regret minimization, and at a high level works as follows: First, we learn an estimate of the preference model $\\hat{M}$ by implementing local learning with a set of points close to the uniform memory vector, which suffices to ensure high accuracy of our representation with respect to $M$ .", "If the number of local learning queries is independent of error terms and $\\beta = \\Theta (\\epsilon )$ , we can complete this stage in $t_0 = \\tilde{O}(1/\\epsilon ^3) = \\tilde{O}(T^{3/4})$ steps.", "For the remaining $T - t_0$ steps, we implement RC-FKM by using the learned model $\\hat{M}$ at each step to solve for a menu distribution which generates the desired item distribution from the current memory vector, then contracting the decision set based on the memory update.", "[Regret Bound for Algorithm ] Algorithm obtains regret bounded by RegretC EIRD(M)(T) O t0 + n GT3/4 + (+ ) G Tr + GT = O(T3/4) where $t_0$ is the time required for local learning, $r = O(k^2 / n)$ , and $\\epsilon , \\delta = O(r \\cdot T^{-1/4})$ , and results in an empirical distribution such that $H_c$ is $O(\\epsilon )$ -satisfied with probability at least $1 - O(T^{-1/4})$ .", "A no-regret recommendation algorithm for adaptive agents.", "Input: Item set $[n]$ , menu size $k$ , Agent with $\\lambda $ -dispersed memory model $M$ for $\\lambda \\ge \\frac{k^2}{n}$ , where $M$ belongs to an $S$ -locally learnable class $$ , diversity constraint $H_c$ , horizon $T$ , $G$ -Lipschitz linear losses $\\rho _i, \\ldots , \\rho _T$ .", "Let $t_{\\text{pad}} = {\\Theta }(1/\\epsilon ^3)$ Let $t_{\\text{move}} = {\\Theta }(1/\\epsilon ^3)$ Let $t_{\\text{query}} = {\\Theta }(1/\\epsilon ^2)$ Let $\\alpha = \\Theta ( \\frac{k}{n^2 S} )$ Get set of $S$ points in the $\\alpha $ -ball around uniform vector $x_U$ to query from $_{\\mathcal {M}}$ Let $t_0 = t_{\\text{pad}} + S(2 \\cdot t_{ \\text{move}} + t_{\\text{query}})$ Run $\\texttt {UniformPad}$ for $t_{\\text{pad}}$ rounds $x_i$ in $S$ Run $\\texttt {MoveTo}(x_i)$ for $t_{\\text{move}}$ rounds Run $\\texttt {Query}(x_i)$ for $t_{\\text{query}}$ rounds, observe result $\\hat{q}(x_i)$ Run $\\texttt {MoveTo}(x_U)$ for $t_{\\text{move}}$ rounds Estimate model $\\hat{M}$ using $_{}$ for $\\beta = \\Theta (\\epsilon )$ Let $v_{t_0}$ be the empirical item distribution of the first items $t_0$ items Let $_{t_0} = H_c$ (in $n-1$ dimensions, with $x_{t,n} = 1 - \\sum _{i=1}^{n-1} x_{t,i}$ , and s.t.", "$x_U$ translates to $\\mathbf {0}$ ) Initialize $\\textsc {RC-FKM}$ to run for $T^* = T - t_0 - 1$ rounds with $r = O(k^2 / n)$ , $\\delta , \\epsilon = \\frac{r}{{T^*}^{1/4}}$ $t = t_0 + 1$ to $T$ Let $x_t$ be the point chosen by RC-FKM Use $\\texttt {PlayDist}(x_t)$ to compute menu distribution $z_t$ Sample $K_t \\sim z_t$ , show $K_t$ to Agent Observe Agent's chosen item $i_t$ and reward $\\rho _t(e_{i_t})$ Update RC-FKM with $\\rho _t(e_{i_t})$ Let $v_{t} = \\frac{t-1}{t}v_{t-1} + \\frac{1}{t} \\cdot e_{i_t}$ Update the decision set to $_{t+1} = _t \\cap ~ \\texttt {IRD}(v_t, \\hat{M})$" ], [ "Structure of $\\texttt {EIRD}(M)$", "The key tool which enables us to implement local learning is a construction for generating any point near the uniform via an adaptive sequence of menu distributions, provided $\\lambda $ is sufficiently large.", "For any $\\lambda $ -dispersed $M$ where $\\lambda \\ge \\frac{k^2}{n}$ , $\\textup {\\texttt {EIRD}}(M)$ contains all 
], [ "Structure of  $\\texttt {EIRD}(M)$", "The key tool which enables us to implement local learning is a construction for generating any point near the uniform via an adaptive sequence of menu distributions, provided  $\\lambda $   is sufficiently large.", "For any  $\\lambda $ -dispersed  $M$   where  $\\lambda \\ge \\frac{k^2}{n}$ ,  $\\textup {\\texttt {EIRD}}(M)$   contains all points  $x \\in \\Delta (N)$   satisfying $\\left\\Vert x - x_U \\right\\Vert \\le \\frac{k-1}{n(n-1)},$ where  $x_U$   is the uniform  $\\frac{1}{n}$   vector.", "We give an algorithmic variant of this lemma which is used directly by Algorithm , as well as a variant for uniform distributions over smaller subsets as  $\\lambda $   grows, which we use to prove Theorem REF ." ], [ "Subroutines", "Our algorithm makes use of a number of subroutines for navigating the memory space, model learning, and implementing RC-FKM.", "We state their key ideas here, with full details deferred to Appendix ." ], [ "UniformPad", "In each round, include the  $k$   items with smallest counts, breaking ties randomly." ], [ "MoveTo($x$)", "Apply the same approach from UniformPad to the difference between the current histogram and  $x$ ." ], [ "Query($x$)", "Play a sequence of  $O(n/k)$   partially overlapping menus which cover all items, holding each constant long enough for concentration, and compute relative probabilities of each item." ], [ "PlayDist($x$)", "Given an item distribution  $x$ , we solve a linear program to compute a menu distribution  $z_x$   using  $\\hat{M}(v)$   which induces  $x$   when a menu is sampled and the Agent selects an item.", "The intuition behind our learning stage is that each call to Query($x$ ) can be accurately estimated by bounding the “drift” in the memory vector while sampling occurs, as the number of samples per query is small compared to the history thus far.", "Each call to MoveTo($x$ ) for a point within the  $\\alpha $ -ball can be implemented by generating an empirical distribution corresponding to a point in  $\\texttt {EIRD}(M)$   for sufficiently many rounds.", "The resulting model estimate  $\\hat{M}$   yields score estimates which are accurate for any memory vector.", "To run RC-FKM, we translate to an  $(n-1)$ -dimensional simplex representation, and construct a menu distribution to implement any action  $x_t$   via a linear program (PlayDist($x$ )).", "The robustness guarantee for RC-FKM ensures that the loss resulting from imprecision in  $\\hat{M}$   is bounded, and further ensures that the resulting expected distribution remains inside  $H_c$   (and that  $H_c$   is approximately satisfied with high probability by the empirical distribution).", "We contract our decision set in each step with the current space  $\\texttt {IRD}(v_t, \\hat{M})$ , which will always contain  $\\texttt {EIRD}(\\hat{M})$ , the best point in which is competitive with the best point in  $\\texttt {EIRD}({M})$ ." 
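The padding subroutines admit a very short implementation; below is a sketch (ours, with a hypothetical Agent that picks uniformly from the offered menu) of the deficit-greedy rule shared by UniformPad and MoveTo($x$): always offer the $k$ items furthest behind their target shares.

```python
# Sketch of the deficit-greedy rule behind UniformPad / MoveTo(x): offer the
# k items whose counts lag their target shares most (the Agent here is a
# hypothetical stand-in that picks uniformly from the menu).
import numpy as np

rng = np.random.default_rng(2)
n, k, rounds = 6, 2, 600

def next_menu(counts, target):
    deficit = target * (counts.sum() + 1) - counts   # how far each item lags its target
    deficit += 1e-9 * rng.random(n)                  # random tie-breaking
    return np.argsort(deficit)[-k:]                  # the k most-lagging items

counts = np.zeros(n)
target = np.full(n, 1.0 / n)                         # UniformPad; MoveTo(x) would use x
for _ in range(rounds):
    menu = next_menu(counts, target)
    counts[rng.choice(menu)] += 1                    # Agent selects something in the menu
print("max deviation from target:", np.abs(counts / counts.sum() - target).max())
```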
], [ "Conclusion and Future Work", "Our work formalizes a bandit setting for investigating online recommendation problems where agents' preferences can adapt over time and provides a number of key initial results which highlight the importance of diversity in recommendations, including lower bounds for more “ambitious” regret benchmarks, and a no-regret algorithm for the EIRD set benchmark, which can coincide with the high-entropy set under appropriate conditions.", "Our results showcase a tradeoff between the space of strategies one considers and the ability to minimize regret.", "Crucially, our lower bound constructions illustrate that we cannot hope to optimize over the set of recommendation patterns which may send agents down “rabbit holes” that drastically alter their preferences, whereas it is indeed feasible to optimize over the space of sufficiently diversified recommendations.", "There are several interesting directions which remain open for future investigation, including additional characterizations of the EIRD set, discovering more examples or applications for local learnability, identifying the optimal rate of regret or dependence on other parameters, settings involving multiple agents with correlated preferences, and consideration of alternate models of agent behavior which circumvent the difficulties posed by uniform memory." ], [ "Empirical Investigation Of Recommendation Feedback Loops", "A substantial body of evidence has emerged in recent years indicating that recommendation systems can create feedback loops which drive negative social consequences.", "[23] observed that users accessing videos with extreme political views are likely to get caught in an “ideological bubble” in just a few clicks, and [16] explore the role of recommendation algorithms in creating distrust and amplifying political polarization on social media platforms.", "By investigating a real-world e-commerce dataset, [15] study the way in which recommendation systems drive agents' self-reinforcing preferences and lead them into “echo chambers” where they are separated from observing a diversity of content.", "[31] conduct a meta-analysis over many datasets which focuses specifically on the “rabbit hole” problem by means of exploring “taste distortion” of agents who observe recommendations which are more extreme than their current preferences.", "Such results motivate investigating these dynamics from game-theoretic and learning-theoretic foundations." ], [ "Modeling Feedback Loops in Recommendation Systems", "A number of recent works from the recommendation systems literature have explored the role of collaborative filtering algorithms for various models of agent behavior, aiming to understand how feedback loops in recommendation patterns emerge, the harms they cause, and how they can be corrected [7], [28].", "A common theme is homogenization of recommendations across a population of users, which can lead to exacerbation of biased utility distributions for minority groups [20], long-run utility degradation [9], and a lack of traffic to smaller content providers which results in them being driven to exit the platform [22].", "Our work indirectly addresses this phenomenon by encouraging diverse recommendations, but our primary focus is from the perspective of a single agent, who may be led down a “rabbit hole” by an algorithm which optimizes for their immediate engagement." 
], [ "Dueling Bandits", "The “dueling bandits” problem, initially proposed as a model for similar recommendation systems challenges [30], [29], and which has been generalized for sets larger than two [3], [26], considers a similar setting in which bandit optimization is conducted with respect to the preference model of an agent, occasionally represented via an explicit parametric form.", "Here, one presents a set of choices to an agent, then receives only ordinal feedback about the relative rewards of the choices, and must optimize recommendations with regret measured against the best individual choice.", "In contrast to our setting, these works consider preferences which are fully determined a priori, and do not change as a function of item history or exhibit preference feedback loops." ], [ "Online Stackelberg Problems", "A number of works in recent years explore online problems where an agent responds to the decision-maker's actions, influencing their reward.", "The performative prediction setting, introduced in [24], captures settings in which a deployed classifier results in changes to the distribution itself, in turn affecting performance.", "This work has been extended to handle stochastic feedback [21] and notably, to a no-regret variant [18] which involves learning mapping between classifiers and distribution shifts, which bears some conceptual similarities to our procedure for locally learning an agent's preference model.", "The “revealed preferences” literature involves a similar requirement of learning a mapping between actions and agent choices [27], [12].", "Some features of our setting resemble elements of other well-studied online problems, including the restricted exploration ability for limited switching problems (e.g.", "[4]), and the contracting target set for chasing nested convex bodies (e.g.", "[6])." ], [ "Strategizing Against Adaptive Agents", "Some recent work has begun to explore the problem of designing optimal strategies in a repeated game against agents who adapt their strategies over time using a no-regret algorithm.", "In auction problems, [5] study the extent to which an auction designer can extract value from bidders who use different kinds of no-regret algorithms.", "More generally, [11] connect this line of investigation to Stackelberg equilibria for normal-form games.", "In strategic classification problems, [32] study the behavior when using a learning rate which is either much faster or much slower than that of the agents which one aims to classify, and draw connections to equilibrium concepts as well.", "Our work extends this notion of strategizing against adaptive agents to recommendation settings, with novel formulations of adaptivity and regret to suit the problem's constraints." 
], [ "Proof of Linear Regret Lower Bounds (Theorem ", "We give a separate lower bound construction for the uniform $\\texttt {IRD}$ item distribution benchmark and the menu distribution benchmark, yielding the theorem.", "There is no algorithm which can obtain $o(T)$ regret against the best item distribution in the $\\textup {\\texttt {IRD}}$ set for the uniform vector, even when the preference model is known exactly and is expressible by univariate linear functions.", "First we give an example for which obtaining $o(T)$ regret against $\\textup {\\texttt {IRD}}(v, M)$ for the uniform vector $v_U$ is impossible.", "Consider the memory model $M$ where: $M(v)_1 = \\lambda + 0.5 + \\frac{n}{n-1}\\cdot (v_1 - \\frac{1}{n})\\cdot (0.5 - \\lambda )$ ; $M(v)_2 = \\lambda + 0.5 (1 - v_1 + \\frac{1}{n})$ ; $M(v)_i = 0.5 + \\lambda $ for $i > 2$ .", "Observe that at the uniform distribution where $v_1 = \\frac{1}{n}$ , all items have a score of $0.5 + \\lambda $ .", "If $v_1 = 1$ , we have that: $M(v)_1 = 1$ , and $M(v)_2 = \\lambda + \\frac{0.5 }{n}$ .", "If $v_1 = 0$ , we have that: $M(v)_1 = \\lambda + 0.5 - \\frac{0.5 - \\lambda }{n-1}$  , and $M(v)_2 = \\lambda + 0.5 \\cdot (1 + \\frac{1}{n})$ As scores linearly interpolate between these endpoints for any $v_1$ , $M$ is $\\lambda $ -dispersed, and scores lie in $[\\lambda , 1]$ .", "Let $k=2$ .", "Consider reward functions which give reward $\\alpha > 0$ for item 1 in each round up to $t^* = T/2$ , giving reward 0 to each other item; after $t^*$ , a reward of $\\beta > 0$ is given for item 2 while the rest receive a reward 0.", "The distribution which assigns probability $1/2$ each to item 1 and 2, with all other items having probability 0, is contained in $\\textup {\\texttt {IRD}}(v_U, M)$ , as one can simply play the menu with both items.", "This distribution yields a total expected reward of Rv = t*2 + t*2 over $T$ steps.", "Consider the performance of any algorithm $$ which results in item 1 being selected with an empirical probability $p$ over the first $t^*$ rounds.", "At $t=t^*$ , we have $v_{t^*,1} = p$ ; its total reward over the first $t^*$ rounds is $\\alpha p t^*$ .", "For sufficiently large $n$ and small $\\lambda $ , the score for item 2 is approximated by $M(v)_2 = 0.5(1 -p)$ up to any desired accuracy.", "In future rounds $t \\ge t^*$ , the value $v_{t,s1}$ is at least $\\frac{pt^*}{t}$ , and so the score for item 2 is at most M(v)2 = 0.5(1 - pt*t).", "Each other item has a score of at least $0.5$ , yielding an upper bound on the probability that item 2 can be selected even if it is always in the menu, as well as a maximum expected per-round reward of Rt = 0.5(1 - pt*t)1 - 0.5pt*t .", "At time $T = 2 t^*$ , the instantaneous reward is at most RT = 2 - p4 - p , which is also a per-round upper bound for each $t \\ge t^*$ .", "This bounds the total reward for $$ by R = p t* + t* 2 - p4 - p. 
We can now show that for any  $p$ , there exists a  $\\beta $   such that  $R_v - R_{\\mathcal {A}} = \\Theta (T)$ .", "For any  $p \\le \\frac{1}{3}$ , we have $R_{\\mathcal {A}} \\le \\frac{\\alpha t^*}{3} + \\frac{\\beta t^*}{2},$ and for any  $p > \\frac{1}{3}$   we have: $R_{\\mathcal {A}} \\le \\alpha t^* + \\frac{5 \\beta t^*}{11}.$", "In the first case, we immediately have  $R_v - R_{\\mathcal {A}} \\ge T \\alpha / 6$   for any  $\\beta $ .", "In the second case, let  $\\beta \\ge 22 \\alpha $ .", "We then have: $R_v - R_{\\mathcal {A}} \\ge \\frac{\\beta t^*}{22} - \\frac{\\alpha t^*}{2} \\ge \\alpha T / 4.$", "The value of  $\\beta $   can be determined adversarially, and so there is no algorithm  $\\mathcal {A}$   which can obtain  $o(T)$   regret against  $\\textup {\\texttt {IRD}}(v, M)$ .", "Next we show a similar impossibility result for regret minimization with respect to the set of all menu distributions.", "There is no algorithm which can obtain  $o(T)$   regret against the best menu distribution in  $\\Delta \\binom{[n]}{k}$ , even when the preference model is known exactly and is expressible by univariate linear functions.", "Let  $M$   be the  $\\lambda $ -dispersed memory model where the functions for items  $(a,b,c)$ , and every other item  $i$ , are given by:  $M(v)_a = \\lambda + (1 - \\epsilon ) (1 - v_b)$ ;  $M(v)_b = \\lambda + (1 - \\epsilon ) v_b$ ;  $M(v)_c = \\lambda + (1 - \\epsilon ) v_c$ ;  $M(v)_i = \\lambda + (1 - \\epsilon ) (1 - v_b)$   for  $i \\notin \\lbrace a, b, c\\rbrace $ ; for some  $\\lambda > 0$   and  $\\epsilon > \\lambda $ .", "Let  $k=2$ .", "Consider a sequence of rewards  $\\lbrace f_t\\rbrace $   which yields reward  $\\alpha $   to items  $(a,b)$   for each round  $t \\le t^*$   and 0 to the rest, then in each step after  $t^*$ , yields a reward of  $\\beta $   for item  $c$ , a reward of 0 for item  $b$ , and a reward of  $-\\beta $   for every other item.", "Note the total expected reward for the following distributions: $R_{(a,b)}(T) = \\alpha t^* - \\beta (T - t^*)/2 ; \\quad R_{(b,c)}(T) = \\alpha t^*/2 + \\beta (T - t^*)/2 .$", "The bound for  $R_{(a,b)}(t^*)$   follows from the symmetry of the resulting stationary distribution, given by the unique solution  $v_b = 0.5$   to the recurrence:  $v_b = \\frac{\\lambda + (1 - \\epsilon )v_b}{2\\lambda + (1 - \\epsilon )},$   which is approached in expectation for large  $T$   regardless of initial conditions for any constant  $\\lambda $ .", "Symmetry also results in balanced expectations for each item in  $R_{(b,c)}$ .", "Consider the distribution  $p_{t^*}$   played by an algorithm  $\\mathcal {A}$   over the first  $t^*$   rounds, where  $t^*$   is large enough to ensure concentration.", "If  $p_{t^*,a} + p_{t^*, b} \\le 1 - \\delta $   for some constant  $\\delta $ , then for  $\\beta = 0$   the algorithm has regret  $\\delta \\alpha t^* = \\Theta (T)$   for any  $t^* = \\Theta (T)$ .", "Further, if regret is to be sublinear, the menu  $(a, b)$   must be played in nearly every round, as any other item placed in the menu has positive selection probability.", "As such, the empirical probability of  $b$   must be close to  $1/2$ .", "After  $t^*$ , the algorithm cannot obtain a per-round utility which matches that of  $(b,c)$   up to  $\\delta $   until a round  $t$   where either: $\\frac{\\lambda + (1 - \\epsilon ) p_{t,c}}{2\\lambda + (1 - \\epsilon )(p_{t,b} + p_{t,c})} \\ge 1/2 - \\delta \\quad \\text{or} \\quad \\frac{\\lambda + (1 - \\epsilon ) p_{t,c}}{2\\lambda + (1 - \\epsilon )(1 - p_{t,b} + p_{t,c})} \\ge 1/2 - \\delta ,$ which requires the total number of rounds in which  $c$   is chosen to approach  $t^* / 2 - C \\cdot \\delta t^*$ , where  $C$   is a constant depending on  $\\epsilon $   and  $\\lambda $ .", "Let  $T = 3t^*/2$ , and so this cannot happen for small enough constant  $\\delta $ , resulting in a regret of  $\\delta \\beta T / 3 - \\alpha T/3 $   with respect to  $(b,c)$ , which is  $\\Theta (T)$   when  $\\delta \\beta > \\alpha $ ." 
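The stationary point invoked in this construction can be verified directly: solving $v_b = \frac{\lambda + (1-\epsilon)v_b}{2\lambda + (1-\epsilon)}$ gives $2\lambda v_b = \lambda$, i.e. $v_b = 1/2$ for any $\lambda > 0$. A short simulation (ours, with arbitrary constants) confirms the empirical frequency approaches it:

```python
# Simulation of the menu-(a,b) selection dynamics from the construction above
# (arbitrary constants): the empirical frequency of item b approaches 1/2.
import numpy as np

rng = np.random.default_rng(3)
lam, eps, T = 0.05, 0.1, 200_000
count_b = 0
for t in range(1, T + 1):
    v_b = count_b / t                                      # current memory share of b
    p_b = (lam + (1 - eps) * v_b) / (2 * lam + (1 - eps))  # P[choose b from {a, b}]
    count_b += rng.random() < p_b
print(f"empirical v_b = {count_b / T:.4f} (stationary point: 0.5)")
```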
], [ "Proof of High-Entropy Containment of ", "By Lemma REF , for a $\\lambda $ -dispersed preference model $M$ with $\\lambda \\ge \\frac{Ck^2}{n}$ , any uniform distribution over $n/C$ items lies inside $\\textup {\\texttt {EIRD}}(M)$ .", "We make use of a lemma from [2], which we restate here.", "[Lemma 8 in [2]] For a random variable $A$ over $[n]$ with $H(A) \\ge \\log n - \\gamma $ , there is a set of $\\ell + 1 = O(\\gamma / \\tau ^3)$ distributions $\\psi _i$ for $i \\in \\lbrace 0,\\ldots ,\\ell \\rbrace $ over a partition of the support of $A$ which can be mixed together to generate $A$ , where $\\psi _0$ has weight $O(\\tau )$ , and where for each $i\\ge 1$ : $\\log |\\textup {supp}(\\psi _i)| \\ge \\log n - \\gamma / \\tau $ .", "$\\psi _i$ is within total variation distance $O(\\tau )$ from the uniform distribution on its support.", "Using this, we can explicitly lower bound the support of each $\\psi _i$ : $\\log |\\textup {supp}(\\psi _i)| \\ge \\log (n) - \\gamma / \\tau = \\log (n) - \\log (\\exp (\\gamma / \\tau )) = \\log \\frac{n}{\\exp (\\gamma / \\tau )}$ .", "As such: $|\\textup {supp}(\\psi _i)| \\ge \\frac{n}{\\exp (\\gamma / \\tau )}$ .", "Each uniform distribution over $\\textup {supp}(\\psi _i)$ lies inside $\\textup {\\texttt {EIRD}}(M)$ for $\\lambda \\ge \\frac{Ck^2}{n}$ , provided that $C \\ge \\exp (\\gamma / \\tau )$ .", "The $O(\\tau )$ bound on total variation distance is preserved under mixture, as well as when redistributing the mass of $\\psi _0$ arbitrarily amongst the uniform distributions." ], [ "Proof of Query Learning Runtime Lower Bound (Theorem ", "For any permutation $\\sigma $ , we can lower bound the steps required to move between any two vectors adjacent in the ordering in terms of $d_{\\max }$ and the number of rounds elapsed thus far.", "Consider two vectors $v$ and $v^{\\prime }$ , where $v$ is the current empirical item distribution after $t$ steps.", "Reaching an empirical distribution of $v^{\\prime }$ requires at least $t \\cdot d_{\\max }(v, v^{\\prime })$ additional steps.", "Let $x$ be the histogram representation of $v$ with total mass $t$ , and let $j^* = \\text{arg max}_j v_j - v_j^{\\prime }$ , where $v_{j^*} - v_{j^*}^{\\prime } = d_{\\max }(v, v^{\\prime })$ .", "Let $x^{\\prime } = t^{\\prime } \\cdot v^{\\prime }$ be the histogram representation of $v^{\\prime }$ with total mass $t^{\\prime }$ , such that $x_{j^*} = x_{j^*}^{\\prime }$ .", "Note that $t^{\\prime }$ is the smallest total mass (or total number of rounds) where a histogram can normalize to $v^{\\prime }$ , as any subsequent histogram must have $x_{j^*}^{\\prime } \\ge x_{j^*}$ .", "As such, we must have that $t^{\\prime }\\cdot v_{j^*}^{\\prime } \\ge t\\cdot v_{j^*}$ , implying that: $\\frac{t^{\\prime }}{t} \\ge \\frac{v_{j^*}}{v_{j^*}^{\\prime }} = \\frac{v_{j^*}^{\\prime } + d_{\\max }(v, v^{\\prime })}{v_{j^*}^{\\prime }} \\ge 1 + d_{\\max }(v, v^{\\prime })$ .", "At least one round is required to reach the first vector in a permutation, and we can use the above lemma to lower-bound the rounds between any adjacent vectors in the ordering.", "Taking the minimum over all permutations gives us the result." ], [ "Proofs of Local Learnability for Section ", "Each proof gives a learning algorithm which operates in a ball around the uniform vector, which is contained in $\\texttt {EIRD}(M)$ whenever $\\lambda \\ge \\frac{k^2}{n}$ by Lemma REF ." 
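The movement bound in the query-runtime argument can be illustrated directly: the smallest total mass $t'$ whose histogram normalizes to $v'$ is governed by the coordinate $j^*$ that must shrink the most. A minimal numeric sketch, assuming $v'_{j^*} > 0$ (the vectors and mass below are arbitrary test values):

```python
# Minimum extra rounds needed to move an empirical distribution from v to v':
# counts can only grow, so t' * v'_{j*} >= t * v_{j*}, giving t'/t >= 1 + d_max.
import numpy as np

def min_rounds_to_reach(v, v_prime, t):
    """Smallest total mass t' whose histogram can normalize to v_prime
    (assumes v_prime[j*] > 0 at the maximizing coordinate)."""
    j = np.argmax(v - v_prime)                 # coordinate achieving d_max(v, v')
    return int(np.ceil(t * v[j] / v_prime[j]))

v       = np.array([0.5, 0.3, 0.2])
v_prime = np.array([0.2, 0.4, 0.4])
t = 1000
d_max = np.max(v - v_prime)                    # = 0.3 here
assert min_rounds_to_reach(v, v_prime, t) >= t * (1 + d_max)
```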
], [ "Proof of Univariate Polynomial Local Learnability", "Query the uniform vector $v_U$ where each $v_i=\\frac{1}{n}$ .", "Let $Z= \\frac{\\sqrt{n d / 6}}{\\alpha } $ .", "Consider three sets, each of $d/2$ memory vectors, where the items with indices satisfying $i\\mod {3} = z$ each have memory values $\\frac{1}{n}+\\frac{j}{Z}$ , items satisfying $i \\mod {3} = z+1$ have values $\\frac{1}{n}-\\frac{j}{Z}$ , and the remainder have $\\frac{1}{n}$ (for $z \\in \\lbrace 0,1,2\\rbrace $ , and for $1 \\le j \\le \\frac{d}{2}$ ).", "All such vectors lie in $V_{\\alpha }$ , as $2n / 3 \\cdot (d / (2Z))^2 \\le \\alpha ^2$ .", "Query each of the $3d/2$ vectors.", "For each query, let $R_v$ be the sum of all scores of the items held at $\\frac{1}{n}$ , divided by the sum of those same items' scores in the uniform query.", "Divide all scores by $R_v$ .", "Let $R_v^*$ be the corresponding ratio of these sums of scores under $\\lbrace f_i \\rbrace $ ; each sum is within $[\\frac{\\lambda }{3}, 1]$ at each vector, and the sums of observed scores have additive error at most $n\\beta /3$ .", "As such, $R_v$ has additive error at most $\\frac{2n\\beta }{\\lambda }$ from $R_v^*$ .", "This gives us estimates for $d+1$ points of $\\hat{f}_i(x_j) = \\hat{y}_j$ for each polynomial, up to some universal scaling factor.", "We can express this $d$ -degree polynomial $\\hat{f}_i$ via Lagrange interpolation: $L_{d, j}(x) = \\prod _{k \\ne j}^{d} \\frac{x - x_k}{x_j - x_k}$ ; $\\hat{f}_i(x) = \\sum _{j=0}^{d} \\hat{y}_j L_{d,j}(x)$ .", "Note that $\\sum _i \\hat{f}_i(v_U) = 1$ as the scores coincide exactly with our query results at the uniform vector.", "To analyze the representation error, let $\\lbrace f^*_i\\rbrace $ be the set of true polynomials $f_i$ rescaled to sum to 1 at the uniform vector; this involves dividing by a factor $S \\in [n\\lambda , n]$ , and produces identical scores at every point.", "Consider the difference $|\\hat{y}_j - y^*_j|$ for each $y^*_j = f^*_i(x_j)$ .", "The query error for $\\hat{y}_j$ prior to rescaling is at most $\\beta $ ; rescaling by $R^*_v$ would increase this to at most $3\\beta / \\lambda $ , which is amplified to at most $|\\hat{y}_j - y^*_j| \\le \\frac{3\\beta }{\\lambda } + \\frac{2n\\beta }{\\lambda } \\le \\frac{3n\\beta }{\\lambda }$ as each query score is at most 1 (and our setting is trivial for $n\\le 2$ ).", "The magnitude of each of the $d+1$ Lagrange terms can be bounded by: $|L_{d,j}(x)| \\le \\prod _{j=1}^{d/2} \\frac{Z^2}{j^2} = \\frac{Z^d}{((d/2)!)^2}$ for any $x \\in [0, 1]$ , and so for any function $\\hat{f}_i(x)$ we can bound its distance from $f^*_i(x)$ by: $|f^*_i(x) - \\hat{f}_i(x)| \\le (d+1) \\cdot \\frac{3n\\beta }{\\lambda } \\cdot \\frac{Z^d}{((d/2)!)^2} \\le \\frac{(d+1) \\cdot 3n\\beta Z^d}{\\lambda \\cdot 2^{d/2}}$ .", "This holds simultaneously for each $\\hat{f}_i$ which, using the fact that the true ratio is at least $\\lambda /n$ and the per-function bound applies to each denominator term, gives us a total bound on the score estimates we generate: $\\Big | \\frac{\\hat{f}_i(x)}{\\sum _{j=1}^{n} \\hat{f}_j(x)} - \\frac{f^*_i(x)}{\\sum _{j=1}^{n} f^*_j(x)} \\Big | \\le \\frac{(d+1) \\cdot 3n^3 \\beta Z^d}{\\lambda ^2 \\cdot 2^{d/2}} \\le \\frac{7 n^3 d \\beta Z^d}{\\lambda ^2 \\cdot 2^{d/2}} \\le \\frac{3 \\beta (6nd)^{d/2 + 2}}{\\alpha ^d \\lambda ^2 \\cdot 2^{d/2}} \\le \\frac{\\beta (3nd)^{d/2 + 2}}{\\alpha ^d \\lambda ^2}$ .", "Taking $\\beta \\le \\frac{\\epsilon \\alpha ^d \\lambda ^2 }{(3nd)^{d/2 + 2}}$ gives us an absolute error of at most $\\epsilon $ per item score, satisfying a Euclidean bound of $\\epsilon $ from any true score vector $M(w)/M^*_w$ for our hypothesis $\\hat{M}(v) = \\lbrace \\hat{f}_i(v_i): i \\in [n]\\rbrace $ ." 
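A minimal sketch of the univariate recovery step, under simplifying assumptions: a known ground-truth polynomial stands in for the rescaled scores $f^*_i$ and is queried directly, the $d+1$ points sit on a single symmetric grid of spacing $1/Z$ rather than the three index-classes used above, and numpy's `Polynomial.fit` performs the interpolation. All numeric values are arbitrary.

```python
# Recover a degree-d scoring polynomial from d+1 noisy queries spaced 1/Z apart
# around the uniform memory value 1/n, then compare on a test grid.
import numpy as np

rng = np.random.default_rng(0)
n, d, alpha, beta = 20, 4, 0.1, 1e-6
Z = np.sqrt(n * d / 6) / alpha

f_true = np.polynomial.Polynomial([0.5, 0.3, -0.2, 0.1, 0.05])  # stand-in f*_i

xs = 1 / n + np.arange(-(d // 2), d // 2 + 1) / Z               # d+1 query points
ys = f_true(xs) + rng.uniform(-beta, beta, size=xs.shape)       # beta-perturbed queries

# With exactly d+1 points, the degree-d least-squares fit is the interpolant.
f_hat = np.polynomial.Polynomial.fit(xs, ys, deg=d)

grid = np.linspace(0, 2 / n, 50)
print(np.max(np.abs(f_hat(grid) - f_true(grid))))  # small; proof bounds it by O(beta * Z^d)
```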
], [ "Proofs of Multivariate Polynomial Local Learnability", "Recall that the two classes of multivariate polynomial models we consider are bounded-degree multilinear polynomial preference models $\\mathcal {M}_{BMLP}$ , where: for each $i$ , $M(v)_i = f_i(v)$ , where $f_i$ is a degree-$d$ multilinear (i.e. linear in each item) polynomial which takes values in $[\\lambda , 1]$ over $\\Delta (n)$ for some constant $\\lambda > 0$ , and the class of bounded-degree normalized multivariate polynomial preference models $\\mathcal {M}_{BNMP}$ , where: for each $i$ , $M(v)_i = f_i(v)$ , where $f_i$ is a degree-$d$ polynomial which takes values in $[\\lambda , 1]$ over $\\Delta (n)$ for some constant $\\lambda > 0$ , where $\\sum _i f_i(v) = C$ for some constant $C$ .", "We prove local learnability results for each case.", "$\\mathcal {M}_{BMLP}$ is $O(n^d)$ -locally learnable by an algorithm $\\mathcal {A}_{BMLP}$ with $\\beta \\le O(\\frac{\\epsilon ^2 }{ \\textup {poly}(n (d / \\alpha )^d)} )$ .", "Consider the set of polynomials where each $v_n$ term is reparameterized as $1 - \\sum _{i=1}^{n-1} v_i$ , then translated so that the uniform vector appears at the origin (i.e. with $x_n = -\\sum _{i=1}^{n-1} x_i$ ).", "Our approach will be to learn a representation of each polynomial normalized by their sum, which is unique up to a universal scaling factor.", "Let $f_i^*$ be the representation of $f_i$ in this translation.", "Consider the $N = \\sum _{j=0}^d \\binom{n - 1}{j}$ -dimensional basis $\\mathcal {B}$ where each variable in a vector $x$ corresponds to a monomial of at most $d$ variables in $v$ , each with degree 1, with the domain constrained to ensure mutual consistency between monomials, e.g. : $\\mathcal {B} = \\lbrace 1, v_1, \\ldots , v_{n-1}, v_1 v_2, \\ldots , \\prod _{j=n-d}^{n-1} v_j \\rbrace $ .", "Observe that each $f_i^*$ is a linear function in this basis.", "Let $q_{i}(x) = M(v)_i / M^*_v$ denote the normalized score for item $i$ at $v$ , where $v$ translates to $x$ in the new basis.", "For any $x$ we have: $\\frac{f_i^*(x)}{\\sum _{j=1}^n f_j^*(x)} = q_i(x)$ , and let $\\hat{q}_i(x)$ denote the analogous perturbed query result, both of which sum to 1 over each $i$ .", "We are done if we can estimate the vector $q(x)$ up to distance $\\epsilon $ for any $x$ .", "With $f^*_i(x) = \\langle a, x \\rangle + a_0$ and $\\sum _{i=1}^{n} f_i^*(x) = \\langle b, x \\rangle + b_0$ , our strategy will be to estimate the ratio of each coefficient with $b_0$ , for each $f^*_i$ , in increasing order of degree.", "While our parameterization does not include item $n$ , we will explicitly estimate $b$ separately from each $a$ , which we can then use to estimate $f_n^*(x) = \\langle b, x \\rangle + b_0 - \\sum _{i=1}^{n-1} f^*_i(x)$ .", "For a monomial $m$ of degree $j$ , we can estimate its coefficient for all $f^*_i$ simultaneously by moving the values for the variables it contains simultaneously from the $\\mathbf {0}$ vector, and viewing the restriction to its subset monomials as a univariate polynomial of degree $j$ .", "We will use a single query to the $\\mathbf {0}$ vector, and $2j+1$ additional queries for each degree-$j$ monomial (which can be used for learning that monomial's coefficient in all $f^*_i$ simultaneously), resulting in a total query count of: $1 + \\sum _{j=1}^d (2j+1) \\binom{n-1}{j} = 1 + \\sum _{j=1}^d (2j+1) \\frac{(n-1)!}{j!\\,(n-j-1)!} = O(n^d)$ .", "Querying $\\mathbf {0}$ gives us an estimate for each additive term: $\\frac{\\hat{a}_0^i}{\\hat{b}_0} = \\hat{q}_i(\\mathbf {0})$ , which sum to 1 over all items (and we will take $\\hat{b}_0 = 1$ ).", "We now describe our strategy for computing higher-order coefficients in terms of lower-order coefficients under the assumption of exact queries, 
after which we conduct error propagation analysis.", "For a monomial $m$ of degree $j$ , let $x_{(h,m)}$ be the point where $x_{(h,m),i} = hZ$ if an item $i$ belongs to $m$ and 0 otherwise, with higher degree terms satisfying the basis constraints (i.e. $(hZ)^{3}$ for a degree-3 subset of $m$ , and $(hZ)^j$ for $m$ itself), which also results in the term for a monomial containing any item not in $m$ being set to zero.", "Query $x_{(h,m)}$ for $2j + 1$ distinct values $h$ in $\\lbrace \\pm 1, \\ldots , \\pm (j+1)\\rbrace $ .", "For $Z = \\alpha /(2d(d+1))$ all queries lie in the $\\alpha $ -ball, as the $\\ell _1$ norm of the positive coefficients, as well as the negative offset for item $n$ , are both bounded by $\\alpha /2$ in the original simplex basis.", "Suppose all coefficients up to degree $j-1$ are known.", "The result of such a query (with $z = hZ$ ) is equivalent to: $q(x_{(h,m)}) = \\frac{a_m z^j + f_a(z)}{b_m z^j + f_b(z)}$ , where $f_a$ and $f_b$ are $(j-1)$ -degree univariate polynomials, where each coefficient of some degree $k \\le j-1$ is expressed by summing the coefficients for degree-$k$ monomials which are subsets of $m$ , for $a$ and $b$ respectively.", "Rearranging, we have: $a_m = q_i(x_{(h,m)})\\, b_m + \\frac{q_i(x_{(h,m)}) f_b(z) - f_a(z)}{z^j}$ .", "This gives us a linear relationship between $a_m$ and $b_m$ in terms of known quantities after just one query where $z \\ne 0$ .", "Suppose we could make exact queries; if we observe two distinct linear relationships, we can solve for $a_m$ and $b_m$ .", "If each query gives us the same linear relationship, i.e. $q_i(x_{(h,m)}) = q_i(x_{(h^{\\prime },m)})$ for every query pair $(h, h^{\\prime })$ , then equality also holds for each of the $(q_i(x_{(h,m)})\\cdot f_b(z) - f_a(z)) / {z^j}$ terms.", "If the latter term is truly a constant function $c$ : $\\frac{q_i(x_{(h,m)}) f_b(z) - f_a(z)}{z^j} = c$ , then we also have: $(a_m z^j + f_a(z)) f_b(z) - (b_m z^j + f_b(z)) f_a(z) = c z^j (b_m z^j + f_b(z))$ .", "Each side is a polynomial with degree at most $2j$ , and thus cannot agree on $2j + 1$ points unless equality holds.", "However, if equality does hold, we have that either $c=0$ or $b_m = 0$ , as the left side has degree at most $2j-1$ , and both $z^j$ and $b_m z^j + f_b(z)$ are bounded away from 0 for any $z \\ne 0$ .", "If $c \\ne 0$ , then we have that $b_m = 0$ and $a_m = c$ .", "If $c=0$ , then we have $a_m z^j f_a(z) f_b(z) - b_m z^j f_a(z) f_b(z) = 0$ , which implies $a_m = b_m$ , as $f_a(z) f_b(z)$ cannot be equal to 0 everywhere due to each $a^i_0$ and $b_0$ being positive.", "Our answer to $q(x_{(h,m)})$ will be bounded strictly between 0 and 1, allowing us to solve for both $a_m$ and $b_m$ as $a_m = b_m = \\frac{q_i(x_{(h,m)}) f_b(z) - f_a(z)}{(1 - q_i(x_{(h,m)}))\\, z^j}$ .", "To summarize, if given exact query answers for $2j+1$ distinct points, we must be in one of the following cases: We observe at least two distinct linear relationships between $a_m$ and $b_m$ from differing query answers; We observe a non-zero constant $\\frac{q_i(x_{(h,m)})\\cdot f_b(z) - f_a(z)}{z^j} = c$ for each query, and have $a_m = c$ ; We observe $\\frac{q_i(x_{(h,m)})\\cdot f_b(z) - f_a(z)}{z^j} = 0$ for each query, and can solve for $a_m = b_m$ .", "To begin our error analysis for perturbed queries, we first show a bound on the size of the coefficients of a polynomial which is bounded over a range.", "Each degree-$d^{\\prime }$ coefficient of $f_i^*$ is at most ${d^{\\prime }}^{2d^{\\prime }}$ .", "First note that the constant coefficient and the coefficient for each linear term have magnitude at most 1, as the function is bounded in $[\\lambda , 1]$ 
over the domain (which includes $\\mathbf {0}$ ).", "For a degree-$d^{\\prime }$ monomial $m$ , consider the univariate polynomial corresponding to moving each of its variables in synchrony while holding the remaining variables at 0, whose degree-$d^{\\prime }$ coefficient is equal to $a_m$ .", "Consider the Lagrange polynomial representation of this polynomial $L_{d^{\\prime }, j}(x) = \\prod _{k \\ne j}^{d^{\\prime }} \\frac{x - x_k}{x_j - x_k}$ ; $f_i(x) = \\sum _{j=0}^{d^{\\prime }} \\hat{y}_j L_{d^{\\prime },j}(x)$ , for $d^{\\prime }+ 1$ evenly spaced points in the range $[-1/n, 1/d^{\\prime } - 1/n]$ , which are all feasible under the simplex constraints (corresponding to $v_i \\in [0, 1/d^{\\prime }]$ in the original basis, for each $i \\in m$ ).", "Each pair of points is separated by a distance of at least $1 / ({d^{\\prime }}^2)$ , and so the leading coefficient of each Lagrange term is at most $ {d^{\\prime }}^{2(d^{\\prime }-1)}$ .", "Each $\\hat{y}_j$ is in $[\\lambda , 1]$ and so we have $|a_m| \\le (d^{\\prime } + 1)\\, {d^{\\prime }}^{2(d^{\\prime }-1)} \\le {d^{\\prime }}^{2d^{\\prime }}$ for each $d^{\\prime } > 1$ .", "As we estimate coefficients for monomials of increasing degree, we will maintain the invariant that each degree-$j$ coefficient of $a$ and $b$ is estimated up to additive error $\\epsilon _j$ , with respect to the normalization where $b_0 = 1$ .", "Immediately we have $\\epsilon _0 = \\beta $ for the estimates $\\hat{a}_0$ from our query to the $\\mathbf {0}$ vector.", "We will also let $\\beta _j$ denote the error of a polynomial $\\hat{f}_a$ restricted to terms for subsets of a $j$ -degree monomial $m$ .", "For a monomial $m$ , suppose we receive 2 queries $\\hat{q}_{i}(x_{(h, m)})$ and $\\hat{q}_{i}(x_{(h^{\\prime }, m)})$ for some $h$ and $h^{\\prime }$ where $|\\hat{q}_i(x_{(h, m)}) - \\hat{q}_i(x_{(h^{\\prime }, m)})| \\ge F_j$ for some quantity $F_j$ .", "Then we have: $\\hat{a}_m = \\hat{q}_i(x_{(h, m)})\\, \\hat{b}_m + \\frac{\\hat{q}_i(x_{(h, m)}) \\hat{f}_b(hZ) - \\hat{f}_a(hZ)}{(hZ)^j} = \\hat{q}_i(x_{(h^{\\prime }, m)})\\, \\hat{b}_m + \\frac{\\hat{q}_i(x_{(h^{\\prime }, m)}) \\hat{f}_b(h^{\\prime }Z) - \\hat{f}_a(h^{\\prime }Z)}{(h^{\\prime }Z)^j}$ ; $\\hat{b}_m = \\frac{\\hat{a}_m}{\\hat{q}_i(x_{(h, m)})} + \\frac{\\hat{f}_a(hZ)/\\hat{q}_i(x_{(h, m)}) - \\hat{f}_b(hZ)}{(hZ)^j} = \\frac{\\hat{a}_m}{\\hat{q}_i(x_{(h^{\\prime }, m)})} + \\frac{\\hat{f}_a(h^{\\prime }Z)/\\hat{q}_i(x_{(h^{\\prime }, m)}) - \\hat{f}_b(h^{\\prime }Z)}{(h^{\\prime }Z)^j}$ ; $\\frac{\\hat{a}_m}{\\hat{q}_i(x_{(h^{\\prime }, m)})} - \\frac{\\hat{a}_m}{\\hat{q}_i(x_{(h, m)})} = \\frac{\\hat{f}_a(hZ)/\\hat{q}_i(x_{(h, m)}) - \\hat{f}_b(hZ)}{(hZ)^j} - \\frac{\\hat{f}_a(h^{\\prime }Z)/\\hat{q}_i(x_{(h^{\\prime }, m)}) - \\hat{f}_b(h^{\\prime }Z)}{(h^{\\prime }Z)^j}$ ; $\\hat{a}_m = \\frac{\\hat{f}_a(hZ)\\, \\hat{q}_i(x_{(h^{\\prime }, m)}) / \\hat{q}_i(x_{(h, m)}) - \\hat{q}_i(x_{(h^{\\prime }, m)})\\, \\hat{f}_b(hZ)}{\\big (1 - \\hat{q}_i(x_{(h^{\\prime }, m)})/\\hat{q}_i(x_{(h, m)})\\big ) (hZ)^j} - \\frac{\\hat{f}_a(h^{\\prime }Z) - \\hat{f}_b(h^{\\prime }Z)\\, \\hat{q}_i(x_{(h^{\\prime }, m)})}{\\big (1 - \\hat{q}_i(x_{(h^{\\prime }, m)})/\\hat{q}_i(x_{(h, m)})\\big ) (h^{\\prime }Z)^j}$ ; $\\hat{b}_m = \\frac{\\frac{\\hat{q}_i(x_{(h, m)}) \\hat{f}_b(hZ) - \\hat{f}_a(hZ)}{(hZ)^j} - \\frac{\\hat{q}_i(x_{(h^{\\prime }, m)}) \\hat{f}_b(h^{\\prime }Z) - \\hat{f}_a(h^{\\prime }Z)}{(h^{\\prime }Z)^j}}{\\hat{q}_i(x_{(h^{\\prime }, m)}) - \\hat{q}_i(x_{(h, m)})}$ ; where $\\hat{f}_a$ and $\\hat{f}_b$ are the univariate polynomials from summing the lower-order coefficient estimates for each degree up to $j-1$ .", "The additive error to each $\\hat{f}_a(hZ)$ and $\\hat{f}_b(hZ)$ can be bounded by: $\\beta _j \\le \\beta + \\sum _{k=1}^{j-1} \\binom{n-1}{k} (hZ)^k k^{2k} \\epsilon _k = \\beta + \\sum _{k=1}^{j-1} \\binom{n-1}{k} (k^2 hZ)^k \\epsilon _k.$ 
Further, the magnitude of each $\\hat{f}_a(hZ)$ and $\\hat{f}_b(hZ)$ is at most $1 + \\sum _{k=1}^{j-1} \\binom{n-1}{k} (k^2hZ)^k$ .", "We can bound the error of the other terms as follows: Each $\\hat{q}_{i}(x_{(h^{\\prime }, m)}) - \\hat{q}_{i}(x_{(h, m)})$ has magnitude at least $F_j$ and at most 1, and additive error at most $2\\beta $ ; Each $\\hat{q}_{i}(x_{(h^{\\prime }, m)})$ has value at least $\\frac{\\lambda }{n}$ and at most 1, and additive error at most $\\beta $ ; Each $\\frac{\\hat{q}_{i}(x_{(h^{\\prime }, m)})}{\\hat{q}_{i}(x_{(h, m)})}$ term is either greater than $\\frac{1}{1 - F_j}$ or at most $1 - F_j$ ; the true ratio between the numerator and denominator is at least $\\lambda / n$ and at most $n / \\lambda $ , with additive error up to $\\beta $ in both.", "Each $1 - \\frac{\\hat{q}_{i}(x_{(h^{\\prime }, m)})}{\\hat{q}_{i}(x_{(h, m)})}$ term is either greater than $F_j$ or at most $1 - \\frac{1}{1 - F_j}$ ; Each $(hZ)^j$ has magnitude at least $Z^j$ ; The error in the numerator of $\\hat{a}_m$ , and the fractional terms in the numerator of $\\hat{b}_m$ , is dominated by multiplying the functions of $\\hat{q}_i$ with the polynomials themselves.", "As such, we can bound the error to $a_m$ and $b_m$ by $\\epsilon _j$ if we have that: $\\epsilon _j \\le O\\Big ( \\frac{n \\beta }{F_j Z^j} \\Big ( 1 + \\sum _{k=1}^{j-1} \\binom{n-1}{k} (k^2hZ)^k \\Big ) \\Big ) = O\\Big ( \\frac{n \\beta }{F_j Z^j} \\Big ( 1 + \\sum _{k=1}^{j-1} \\binom{n-1}{k} (h \\alpha )^k \\Big ) \\Big ) = O\\Big ( \\frac{n \\beta d^{2j}}{\\alpha ^j F_j} \\Big )$ for any $\\alpha < 1/(nd)$ .", "Now suppose all pairs of query answers we see are separated by less than $F_j$ ; the additive error to each estimate of the quantity $\\hat{c}_{(h,m)} = \\frac{\\hat{q}_i(x_{(h,m)}) \\hat{f}_b(z) - \\hat{f}_a(z)}{(hZ)^j}$ is $\\mathcal {E}_j = O\\Big ( \\frac{\\beta }{Z^j} \\cdot \\Big ( 1 + \\sum _{k=1}^{j-1} \\binom{n}{k} (k^2hZ)^k \\Big ) \\Big ) = O\\big (\\beta \\cdot nd^{2j}/\\alpha ^j\\big )$ .", "If each such quantity has value at most $\\mathcal {E}_j$ , we assume this quantity is zero and solve for $a_m = b_m$ .", "If some are larger, we must be in the case where $\\hat{b}_m \\approx 0$ and so we set $a_m = \\hat{c}_{(h,m)}$ for any query result.", "By taking each $F_j = O(\\sqrt{\\beta } \\cdot \\textup {poly}(n, d^j, 1/\\alpha ^j))$ we can obtain a bound of $\\epsilon _j = O(\\sqrt{\\beta } \\cdot \\textup {poly}(n, d^j, 1/\\alpha ^j))$ on each coefficient regardless of which case we are in; after summing the error contribution across coefficients and accounting for renormalization, recalling that $\\lambda = \\Omega (1/n)$ , we obtain a bound of $\\epsilon $ on score vector errors (for any desired norm) provided that $\\epsilon \\ge \\sqrt{\\beta } \\cdot \\textup {poly}(n, d^d, 1/\\alpha ^d)$ .", "Next, we prove the local learnability result for normalized multivariate polynomials.", "$\\mathcal {M}_{BNMP}$ is $O(n^d)$ -locally learnable by an algorithm $\\mathcal {A}_{BNMP}$ with $\\beta \\le \\frac{\\epsilon }{\\alpha ^{d} F(n, d)} $ , where $F(n,d)$ is some function depending only on $n$ and $d$ which is finite for all $n,d \\in \\mathbb {Z}$ .", "Our approach will be to construct a set of $O(n^d)$ queries which results in a data matrix which is nonsingular in the space of $d$ -degree multivariate polynomials, solve for the coefficients of each $f_i$ as a linear function over this basis, and show that the basis is sufficiently well-conditioned such that our approximation error is bounded.", "Consider the set of polynomials where each $v_n$ term is reparameterized as $1 - \\sum _{i=1}^{n-1} v_i$ , then translated so that the uniform vector appears at the origin (i.e. with $v_n = -\\sum _{i=1}^{n-1} v_i$ ).", "Our approach will be to learn a representation of each polynomial directly, as they are already normalized to sum to a constant (which must be in the range $[1,n]$ ).", "Let $f_i^*$ be the 
representation of $f_i$ in this translation.", "Let $\\mathcal {B}$ be the $N = \\sum _{j=0}^d (n-1)^j$ -dimensional basis where each variable in a vector $x$ corresponds to a monomial of variables in $v$ with degree at most $d$ , with the domain constrained to ensure mutual consistency between monomials, e.g. : $\\mathcal {B} = \\lbrace 1, v_1, \\ldots , v_{n-1}, v_1^2, v_1 v_2, \\ldots , v_{n-1}^d \\rbrace $ .", "Observe that $f_i^*$ is a linear function in this basis, with $f^*_i(x) = \\langle a, x \\rangle $ and $\\sum _{i=1}^n f_i^*(x) = \\langle b, x \\rangle $ for any $x$ represented in $\\mathcal {B}$ .", "There is a large literature on constructing explicit query sets for multivariate polynomial interpolation, which ensure that the resulting data matrix is nonsingular; see [14] for an overview.", "The set must have at least $N$ points to ensure uniqueness of interpolation, and this is sufficient when points are appropriately chosen.", "Let $S^*$ be any such set such that $\\Vert w \\Vert _1 \\le 1/2$ for each $w \\in S^*$ , and let $C_{n,d}$ be the $\\ell _{\\infty }$ condition number of the resulting matrix $Y$ (which will be positive due to nonsingularity) given by: $Y = \\begin{pmatrix} y^{(1)}_1 & \\cdots & y^{(1)}_N \\\\ \\vdots & & \\vdots \\\\ y^{(N)}_1 & \\cdots & y^{(N)}_N \\end{pmatrix}$ , where $y^{(j)}$ is the representation of $s^{(j)}$ in the basis $\\mathcal {B}$ .", "We show that for any $\\alpha $ , we can construct a matrix $X$ from a query set $S^{\\alpha }$ of size $N$ where $\\Vert v \\Vert _1 \\le \\alpha /2$ for each $v \\in S^{\\alpha }$ .", "For each $s^{(j)}$ , let $v^{(j)} = \\alpha s^{(j)}$ , which results in $\\Vert v \\Vert _1 \\le \\alpha /2$ for the parameterization over $n-1$ items, and so a radius of $\\alpha $ holds when including all $n$ items.", "This results in a matrix $X$ given by $X = \\begin{pmatrix} x^{(1)}_1 & \\cdots & x^{(1)}_N \\\\ \\vdots & & \\vdots \\\\ x^{(N)}_1 & \\cdots & x^{(N)}_N \\end{pmatrix}$ .", "We then have $X = YD$ , where $D$ is a diagonal matrix with the $j$ th diagonal entry $\\nu _j$ equal to $\\alpha ^{d_j}$ , where $d_j$ is the degree of the $j$ th monomial in $\\mathcal {B}$ , as our scaling by $\\alpha $ is amplified for each column in correspondence with the associated degree; the values of $D$ will range from $\\alpha ^d$ to 1.", "We can then bound the condition number of $X$ as: $\\text{cond}(X) = \\text{cond}(YD) = \\Vert YD \\Vert \\Vert (YD)^{-1} \\Vert \\le \\Vert Y \\Vert \\Vert D \\Vert \\Vert D^{-1} \\Vert \\Vert Y^{-1} \\Vert = \\text{cond}(Y)\\, \\text{cond}(D) \\le C_{n,d} \\cdot \\frac{\\max _j \\nu _j}{\\min _k \\nu _k} = C_{n,d}\\, \\alpha ^{-d}.$ ", "Let $q$ denote the vector of exact answers to each query in $X$ from $f_i$ , with entries $\\langle a, x \\rangle $ , and let $\\hat{q}$ be the answers we observe for item $i$ from querying each $x$ .", "As $X$ is nonsingular, we have that $Xa = q$ , and by standard results in perturbation theory for linear systems, for $\\hat{a}$ such that $X\\hat{a} = \\hat{q}$ we have that: $\\frac{\\Vert \\hat{a} - a \\Vert }{\\Vert a \\Vert } \\le \\text{cond}(X)\\, \\frac{\\Vert \\hat{q} - q \\Vert }{\\Vert q \\Vert } \\le \\frac{n \\beta C_{n,d}}{k^2 \\alpha ^{d}}$ , as each entry in $q$ is at least $\\lambda \\ge k^2/n$ .", "Further note that the maximum coefficient of a degree-$d$ multivariate polynomial which takes maximum value 1 over the unit ball (and hence the simplex) can be shown to be bounded by a finite function of $n$ and $d$ (see [19]); when accounting for this factor in relative error across all terms and items, as well as the condition number, we have that for $\\beta \\le \\frac{\\epsilon }{\\alpha ^d F(n,d)}$ for some function $F(n,d)$ , the scores generated by the functions $\\hat{f}_i$ using our estimated coefficients $\\hat{a}$ result in score vector estimates bounded by $\\epsilon $ ." 
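The scaling argument at the heart of this proof can be checked numerically: shrinking a query set into the $\alpha$-ball multiplies each monomial column by $\alpha^{d_j}$, so the condition number grows by at most a factor $\alpha^{-d}$. A small sketch follows, in which a random point set stands in for an explicit unisolvent set $S^*$ (random points are almost surely unisolvent, but this is an assumption of the sketch, not the paper's construction):

```python
# cond(X) <= cond(Y) / alpha^d for X = Y * D, D = diag(alpha^{deg of monomial}).
import itertools
import numpy as np

n, d, alpha = 4, 2, 0.05
vars_ = n - 1                                   # item n reparameterized away

# All monomials of total degree <= d over n-1 variables (empty tuple = constant 1).
monos = [m for deg in range(d + 1)
         for m in itertools.combinations_with_replacement(range(vars_), deg)]

def row(v):
    return np.array([np.prod([v[i] for i in m]) for m in monos])

rng = np.random.default_rng(1)
N = len(monos)
S_star = rng.uniform(-0.5, 0.5, size=(N, vars_))     # stand-in for unisolvent S*
Y = np.vstack([row(s) for s in S_star])
X = np.vstack([row(alpha * s) for s in S_star])      # queries scaled into alpha-ball

print(np.linalg.cond(X), np.linalg.cond(Y) / alpha**d)   # cond(X) <= cond(Y)/alpha^d
```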
], [ "Proof of SFR Local Learnability", "We now prove that functions with a local sparse Fourier representation are locally learnable.", "Recall that a function $f(x)$ has an $\\ell $ -sparse Fourier transform if it can be written as $f(x) = \\sum _{i=1}^\\ell \\xi _i e^{2\\pi \\mathbf {i} \\eta _i x}\\,,$ where $\\eta _i$ is the $i$ -th frequency, $\\xi _i$ is the corresponding magnitude, and $\\mathbf {i} = \\sqrt{-1}$ .", "We will use the following result about learning sparse Fourier transforms [25].", "[[25]] Consider any function $f : \\mathbb {R} \\rightarrow \\mathbb {C}$ of the form $f(x) = f^*(x) + g(x)\\,,$ where $f^*(x) = \\sum _{i=1}^\\ell \\xi _i e^{2\\pi \\mathbf {i} \\eta _i x}$ with frequencies $\\eta _i \\in [-F,F]$ and frequency separation $\\hat{\\alpha } = \\min _{i \\ne j}|\\eta _i - \\eta _j|$ , and $g(x)$ is an arbitrary noise function.", "For some parameter $\\delta > 0$ , we define the noise-level over an interval $I = [a,b] \\subseteq \\mathbb {R}$ as $\\mathcal {N}^2 = \\frac{1}{|I|} \\int _{I} |g(x)|^2 dx + \\delta \\sum _{i=1}^\\ell |\\xi _i|^2\\,.$ There exists an algorithm that takes samples from the interval $I$ with length $|I| > O(\\frac{\\log (\\ell / \\delta )}{ \\hat{\\alpha }})$ and returns a set of $\\ell $ pairs $\\lbrace (\\xi _i^{\\prime }, \\eta _i^{\\prime })\\rbrace $ such that for any $|\\xi _i| = \\Omega (\\mathcal {N})$ we have for an appropriate permutation of the indices $|\\eta _i - \\eta _i^{\\prime }| = O\\Big (\\frac{\\mathcal {N}}{ |I| |\\xi _i|}\\Big ),\\qquad | \\xi _i - \\xi _i^{\\prime }| = O(\\mathcal {N}), \\forall i \\in [\\ell ]\\,.$ The algorithm takes $O(\\ell \\log (F |I|) \\log (\\frac{\\ell }{\\delta }) \\log (\\ell ))$ samples, runs in $O(\\ell \\log (F |I|) \\log (\\frac{F|I|}{\\delta }) \\log (\\ell ))$ time, and succeeds with probability at least $1-1/k^c$ for any arbitrary constant $c$ .", "Furthermore, the algorithm used in the above theorem uses samples of the form $x_0, x_0 + \\sigma , \\ldots , x_0 + \\ell \\log (\\ell /\\delta ) \\sigma $ for randomly chosen $x_0$ and $\\sigma = O(|I|/\\ell \\log (\\ell /\\delta ))$ .", "We will use the above theorem to learn the sparse Fourier representation of the preference model.", "Recall that for a memory vector $v$ and item $i \\in [n]$ , $M(v)_i = f_i(v_i)$ .", "Let $v_{\\textnormal {unif}}$ denote the uniform memory vector.", "We will learn each function $f_i$ separately.", "Fix $i \\in [n]$ .", "We will set the interval $I$ to be $[1/n-Z, 1/n + Z]$ for some sufficiently small $\\frac{\\log (\\ell /\\delta )}{\\hat{\\alpha }} \\le Z \\le \\alpha /2$ , where $\\hat{\\alpha }$ is the frequency separation and $\\alpha = \\tilde{\\Omega }(1/\\hat{\\alpha })$ so that $Z$ is well-defined.", "Let $S = \\lbrace x_j\\rbrace _{j = 1}^{\\tilde{O}(\\ell )}$ for $x_j \\in [-Z, Z]$ be a set of points such that the Fourier learning algorithm queries $1/n +x$ for each $x \\in S$ .", "For each point $x \\in S$ , we define the memory vector $v^x = v_{\\textnormal {unif}}+ x e_i - x e_j$ where $j$ is a fixed randomly chosen other index.", "All such vectors lie in $V_{\\alpha }$ , as $2 (\\alpha /2)^2 \\le \\alpha ^2$ .", "We query all vectors $v^x$ for $x \\in S$ , along with $v_{\\textnormal {unif}}$ .", "Recall that $\\hat{s}_v$ is the empirical score vector at a memory vector $v$ .", "For each vector $v$ , let $R_v$ be the sum of all scores of all the $n-2$ items held at $\\frac{1}{n}$ , divided by the sum of those same items' scores in the uniform vector $v_{\\textnormal {unif}}$ .", "For each vector $v^x$ we multiply the score 
$\\hat{s}_{v^x,i}$ of item $i$ by $R_{v^x}$ to obtain a noisy sample of $f_i(1/n+x)$ .", "For $i \\in [\\widetilde{O}(\\ell )]$ , let the $i$ -th sample be denoted by $\\hat{y}_i$ and the true value $f_i(1/n+x_i)$ be denoted by $y_i$ .", "We then pass all these samples to the Fourier learning algorithm in Theorem REF in order to get an estimate $\\hat{f}_i$ of $f_i$ .", "We now analyze the error in the samples.", "Let $R_v^*$ be the corresponding ratio of these sums of scores under $\\lbrace f_i \\rbrace $ ; each sum is within $[\\frac{\\lambda }{3}, 1]$ at each vector, and the sums of observed scores have additive error at most $2n\\beta $ .", "As such, $R_v$ has additive error at most $\\frac{2n\\beta }{\\lambda }$ from $R_v^*$ .", "For each vector $v^x$ we have that $\\hat{s}_{v^x, i}/(\\sum _j \\hat{s}_{v^x, j})$ is within a $\\beta $ error from $s_{v^x, i}/(\\sum _j s_{v^x, j})$ .", "Hence, the total error in each sample is bounded as: ${|\\hat{y}_i - y_i|} \\le \\frac{7n \\beta }{\\lambda }\\,.$ Using this we can bound the total noise term by $\\mathcal {N}= 8n \\beta /\\lambda $ using our choice of $\\delta = (\\beta n)/(\\lambda \\sum _{i=1}^\\ell |\\xi _i|)$ .", "The algorithm will return a set of $\\lbrace (\\hat{\\eta }_i, \\hat{\\xi }_i)\\rbrace $ such that $|\\eta _i - \\hat{\\eta }_i| = O\\Big (\\frac{1}{\\alpha }\\Big ),\\qquad | \\xi _i - \\hat{\\xi }_i| = O(\\frac{\\beta n}{\\lambda }), \\forall i \\in [\\ell ]\\,.$ So for each function $\\hat{f}_i(x)$ we can bound its distance from $f_i(x)$ by: $|\\hat{f}_i(x) - f_i(x)| = \\Big | \\sum _{i=1}^{\\ell } \\hat{\\xi }_i e^{2\\pi \\mathbf {i} \\hat{\\eta }_i x} - \\sum _{i \\in [\\ell ]} \\xi _i e^{2\\pi \\mathbf {i} \\eta _i x} \\Big | \\le \\sum _{i \\in [\\ell ]} \\Big | \\hat{\\xi }_i e^{2\\pi \\mathbf {i} \\hat{\\eta }_i x} - \\xi _i e^{2\\pi \\mathbf {i} \\eta _i x} \\Big | \\le \\sum _{i \\in [\\ell ]} \\Big ( |\\hat{\\xi }_i - \\xi _i| + |\\xi _i| \\cdot |\\hat{\\eta }_i - \\eta _i| \\Big ) \\le O\\Big (\\frac{\\ell \\beta n}{\\lambda \\alpha }\\Big )\\,,$ and since we normalize the above estimates to get a score estimate, the total bound on the score estimates can be bounded as: $\\Big | \\frac{\\hat{f}_i(x)}{\\sum _{j=1}^{n} \\hat{f}_j(x)} - \\frac{f_i(x)}{\\sum _{j=1}^{n} f_j(x)} \\Big | \\le O\\Big (\\frac{\\ell \\beta n}{\\lambda \\alpha }\\Big )\\,.$ Taking $\\beta \\le \\frac{\\epsilon \\lambda \\alpha }{ \\sqrt{n} \\ell }$ gives us an error of at most $\\epsilon \\sqrt{n}$ , satisfying a Euclidean bound of $\\epsilon $ from any true score vector $M(w)/M^*_w$ for our hypothesis model $\\hat{M}(v) = \\lbrace \\hat{f}_i(v_i): i \\in [n]\\rbrace $ ." 
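For intuition only: the proof relies on the sublinear-sample algorithm of [25], which also localizes the unknown frequencies. The toy sketch below skips that harder step by assuming the frequencies $\eta_i$ are known, and recovers only the magnitudes $\xi_i$ from noisy samples by least squares; all numeric values are arbitrary and this is not the algorithm invoked by the theorem.

```python
# Recover magnitudes of a sparse Fourier signal from beta-perturbed samples,
# with frequencies assumed known (a stand-in for the full algorithm of [25]).
import numpy as np

rng = np.random.default_rng(2)
etas = np.array([3.0, 11.0, 26.0])                 # assumed known frequencies
xis = np.array([0.6, 0.25, 0.15])                  # ground-truth magnitudes
n, beta = 50, 1e-4

xs = 1 / n + np.linspace(-0.01, 0.01, 40)          # queries near the uniform value
A = np.exp(2j * np.pi * np.outer(xs, etas))        # design matrix of exponentials
samples = A @ xis + rng.uniform(-beta, beta, size=xs.shape)

xi_hat, *_ = np.linalg.lstsq(A, samples, rcond=None)
print(np.max(np.abs(xi_hat - xis)))                # proof bounds this by O(beta*n/lambda)
```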
], [ "Proof of Theorem ", "First observe that $y_t \\in \\mathcal {K}_t$ in every round.", "For $x^* = \\textup {arg min}_{x \\in \\mathcal {K}_T} \\sum _{t=1}^T f_t(x)$ , let $x^*_{\\delta ,\\epsilon } = \\Pi _{\\mathcal {K}_{T, \\delta ,\\epsilon }}(x^*)$ .", "By linearity and properties of projection, we also have that $x^*_{\\delta ,\\epsilon } = \\textup {arg min}_{x \\in \\mathcal {K}_{T,\\delta , \\epsilon }} \\sum _{t=1}^T f_t(x)$ , and that $\\Vert x^*_{\\delta ,\\epsilon } - x^* \\Vert \\le (\\delta + \\epsilon )\\frac{D}{r}$ .", "For $G$ -Lipschitz losses $\\lbrace f_t\\rbrace $ we have $\\sum _{t=1}^T \\mathbb {E}[\\ell _t] - \\sum _{t=1}^T f_t(x^*) = \\sum _{t=1}^T \\mathbb {E}[f_t(y_t)] - \\sum _{t=1}^T f_t(x^*) \\le \\sum _{t=1}^T \\mathbb {E}[f_t(y_t)] - \\sum _{t=1}^T f_t(x^*_{\\delta , \\epsilon }) + \\frac{\\delta TGD}{r} + \\frac{\\epsilon TGD}{r}.$ ", "Let $\\hat{f}_t(x) = \\mathbb {E}_{u \\sim \\mathbb {B}}[f_t(x + \\delta u + \\xi _t)] = f_t(x + \\xi _t) $ by linearity.", "Then we can bound the regret by: $\\sum _{t=1}^T \\mathbb {E}[\\ell _t] - \\sum _{t=1}^T f_t(x^*) \\le \\sum _{t=1}^T \\mathbb {E}[f_t(y_t)] - \\sum _{t=1}^T f_t(x^*_{\\delta ,\\epsilon }) + \\frac{\\delta TGD}{r} + \\frac{\\epsilon TGD}{r} = \\sum _{t=1}^T \\mathbb {E}[\\hat{f}_t(x_t)] - \\sum _{t=1}^T f_t(x^*_{\\delta ,\\epsilon }) + \\frac{\\delta TGD}{r} + \\frac{\\epsilon TGD}{r} \\le \\sum _{t=1}^T \\mathbb {E}[\\hat{f}_t(x_t)] - \\sum _{t=1}^T \\hat{f}_t(x^*_{\\delta ,\\epsilon }) + \\frac{\\delta TGD}{r} + \\frac{\\epsilon TGD}{r} + \\epsilon G T \\le \\sum _{t=1}^T \\mathbb {E}[\\hat{f}_t(x_t)] - \\sum _{t=1}^T \\hat{f}_t(x^*_{\\delta ,\\epsilon }) + \\frac{\\delta TGD}{r} + \\frac{2 \\epsilon TGD}{r}.$ ", "Next, we prove a series of lemmas (an analysis of online gradient descent for contracting decision sets, and a corresponding bandit-to-full-information reduction) which allow us to view the remaining summation terms involving $\\lbrace x_t\\rbrace $ as the expected regret of stochastic online gradient descent for the loss function sequence $\\lbrace \\hat{f}_t\\rbrace $ with respect to $\\mathcal {K}_{T, \\delta , \\epsilon }$ .", "When modifying online gradient descent to project into smaller sets each round, the analysis is essentially unchanged.", "Contracting Online Gradient Descent.", "Input: sequence of contracting convex decision sets $\\mathcal {K}_1, \\ldots , \\mathcal {K}_T$ , $x_1 \\in \\mathcal {K}_1$ , step size $\\eta $ .", "Set $x_1 = \\mathbf {0}$ .", "For $t = 1$ to $T$ : Play $x_t$ and observe cost $f_t(x_t)$ .", "Update and project: $y_{t+1} = x_t - \\eta \\nabla f_t(x_t)$ ; $x_{t+1} = \\Pi _{\\mathcal {K}_{t+1}}(y_{t+1})$ .", "For a sequence of contracting convex decision sets $\\mathcal {K}_1, \\ldots , \\mathcal {K}_T$ , $x_1 \\in \\mathcal {K}_1$ , each with diameter at most $D$ , a sequence of $G$ -Lipschitz losses $\\ell _1,\\ldots , \\ell _T$ , and $\\eta = \\frac{D}{G\\sqrt{T}}$ , the regret of Algorithm REF with respect to $\\mathcal {K}_T$ is bounded by $\\sum _{t=1}^T \\ell _t(x_t) - \\min _{x^* \\in \\mathcal {K}_T} \\sum _{t=1}^T \\ell _t(x^*) \\le GD \\sqrt{T}.$ 
Let $x^* = \\text{arg min}_{x \\in \\mathcal {K}_T} \\sum _{t=1}^T \\ell _t(x)$ , and let $\\nabla _t = \\nabla \\ell _t(x_t)$ .", "First, note that $\\ell _t(x_t) - \\ell _t(x^*) \\le \\nabla _t^{\\top } (x_t - x^*)$ by convexity; we can then upper-bound each point's distance from $x^*$ by: $\\Vert x_{t+1} - x^* \\Vert = \\Vert \\Pi _{\\mathcal {K}_{t+1}}(x_t - \\eta \\nabla _t) - x^* \\Vert \\le \\Vert x_t - \\eta \\nabla _t - x^* \\Vert ,$ using projection properties for convex bodies.", "Then we have $\\Vert x_{t+1} - x^* \\Vert ^2 \\le \\Vert x_t - x^* \\Vert ^2 + \\eta ^2 \\Vert \\nabla _t \\Vert ^2 - 2\\eta \\nabla _t^{\\top } (x_t - x^*)$ and $\\nabla _t^{\\top } (x_t - x^*) \\le \\frac{\\Vert x_t - x^* \\Vert ^2 - \\Vert x_{t+1} - x^* \\Vert ^2}{2\\eta } + \\frac{\\eta \\Vert \\nabla _t \\Vert ^2}{2}.$ ", "We can then conclude: $\\sum _{t=1}^T \\ell _t(x_t) - \\sum _{t=1}^T \\ell _t(x^*) \\le \\sum _{t=1}^T \\nabla _t^{\\top } (x_t - x^*) \\le \\sum _{t=1}^T \\frac{\\Vert x_t - x^* \\Vert ^2 - \\Vert x_{t+1} - x^* \\Vert ^2}{2\\eta } + \\frac{\\eta }{2} \\sum _{t=1}^T \\Vert \\nabla _t \\Vert ^2 \\le \\frac{\\Vert x_1 - x^* \\Vert ^2}{2\\eta } + \\frac{\\eta }{2} \\sum _{t=1}^T \\Vert \\nabla _t \\Vert ^2 \\le \\frac{D^2}{2\\eta } + \\frac{\\eta G^2 T}{2} = GD\\sqrt{T} \\qquad (\\text{when } \\eta = \\frac{D}{G\\sqrt{T}}).$ ", "The bandit-to-full-information reduction is fairly standard as well, with a proof equivalent to that of e.g. Lemma 6.5 in [17], modified for a full-information algorithm $\\mathcal {A}$ over contracting sets.", "Let $u$ be a fixed point in $\\mathcal {K}_T$ , let $\\lbrace \\ell _t : \\mathcal {K}_t \\rightarrow \\mathbb {R} \\mid t \\in [T]\\rbrace $ be a sequence of differentiable loss functions, and let $\\mathcal {A}$ be a first-order online algorithm that ensures a regret bound $\\text{Regret}_{\\mathcal {K}_T}(\\mathcal {A}) \\le B_{\\mathcal {A}}( \\nabla \\ell _1(x_1), \\ldots , \\nabla \\ell _T(x_T))$ in the full-information setting for contracting sets $\\mathcal {K}_1,\\ldots , \\mathcal {K}_T$ .", "Define the points $\\lbrace x_t\\rbrace $ as $x_1 \\leftarrow \\mathcal {A}(\\emptyset )$ , $x_t \\leftarrow \\mathcal {A}(g_1, \\ldots , g_{t-1})$ , where $g_t$ is a random vector satisfying $\\mathbb {E}[g_t \\mid x_1, \\ell _1, \\ldots , x_t, \\ell _t] = \\nabla \\ell _t(x_t).$ ", "Then for all $u \\in \\mathcal {K}_T$ : $\\mathbb {E}\\Big [\\sum _{t=1}^T \\ell _t(x_t)\\Big ] - \\sum _{t=1}^T \\ell _t(u) \\le \\mathbb {E}[B_{\\mathcal {A}}(g_1, \\ldots , g_T)].$ ", "Let $h_t : \\mathcal {K}_t \\rightarrow \\mathbb {R}$ be given by: $h_t(x) = \\ell _t(x) + \\psi _t^{\\top } x$ , where $\\psi _t = g_t - \\nabla \\ell _t(x_t)$ .", "Note that $\\nabla h_t(x_t) = g_t$ , and so deterministically applying a first order algorithm $\\mathcal {A}$ on $\\lbrace h_t\\rbrace $ is equivalent to applying $\\mathcal {A}$ on stochastic first order approximations of $\\lbrace \\ell _t\\rbrace $ .", "Thus, $\\sum _{t=1}^T h_t(x_t)-\\sum _{t=1}^T h_t(u) \\le B_{\\mathcal {A}}(g_1 , \\ldots , g_T).$ ", "Using the fact that the expectation of each $\\psi _t$ is 0 conditioned on history, and expanding, we get that $\\mathbb {E}[h_t(x_t)] = \\mathbb {E}[\\ell _t(x_t)] + \\mathbb {E}[\\psi _t^{\\top } x_t] = \\mathbb {E}[\\ell _t(x_t)] + \\mathbb {E}[\\mathbb {E}[\\psi _t^{\\top } x_t \\mid x_1,\\ell _1,\\ldots , x_t, \\ell _t ] ] = \\mathbb {E}[\\ell _t(x_t)] + \\mathbb {E}[\\mathbb {E}[\\psi _t \\mid x_1,\\ell _1,\\ldots , x_t, \\ell _t ]^{\\top } x_t ] = \\mathbb {E}[\\ell _t(x_t)],$ and we can conclude by taking the expectation of Equation REF for any point $u \\in \\mathcal {K}_T$ .", "The key remaining step is to observe that each $g_t$ is an unbiased estimator of $\\nabla \\hat{f}_t(x_t)$ : $\\mathbb {E}[g_t \\mid x_1, f_1,\\ldots , x_t, f_t] = \\frac{n}{\\delta } \\mathbb {E}[ \\ell _t u_t \\mid x_t, f_t ] = \\frac{n}{\\delta } \\mathbb {E}[ \\mathbb {E}[\\ell _t \\mid x_t, f_t, u_t]\\, u_t \\mid x_t, f_t ] = \\frac{n}{\\delta } \\mathbb {E}[ f_t(x_t + \\delta u_t + \\xi _t)\\, u_t \\mid x_t, f_t ] = \\frac{n}{\\delta } \\mathbb {E}[ \\hat{f}_t(x_t + \\delta u_t)\\, u_t ] = \\nabla \\hat{f}_t(x_t),$ where the final line makes use of the sphere sampling estimator for linear functions (as in e.g. Lemma 6.7 in [17]).", "This allows us to apply Lemma REF to Algorithm REF : $\\sum _{t=1}^T \\mathbb {E}[\\ell _t] - \\sum _{t=1}^T f_t(x^*) \\le \\sum _{t=1}^T \\mathbb {E}[\\hat{f}_t(x_t)] - \\sum _{t=1}^T \\hat{f}_t(x^*_{\\delta ,\\epsilon }) + \\frac{\\delta TGD}{r} + \\frac{2 \\epsilon TGD}{r} \\le \\mathbb {E}\\big [\\text{Regret}_{COGD}\\big (g_1, \\ldots , g_T \\mid \\lbrace \\hat{f}_t\\rbrace \\big )\\big ] + \\frac{\\delta TGD}{r} + \\frac{2 \\epsilon TGD}{r} \\le \\frac{D^2}{2\\eta } + \\frac{\\eta }{2} \\sum _{t=1}^T \\mathbb {E}\\Vert g_t \\Vert ^2 + \\frac{\\delta TGD}{r} + \\frac{2 \\epsilon TGD}{r} \\le \\frac{D^2}{2\\eta } + \\frac{\\eta n^2 T}{2 \\delta ^2} + \\frac{\\delta TGD}{r} + \\frac{2 \\epsilon TGD}{r} \\qquad (\\text{def. of } g_t , |\\ell _t| \\le 1) \\le n GDT^{3/4} + \\frac{G D T^{3/4}}{r} + \\frac{2 \\epsilon TGD}{r} \\qquad (\\eta = \\frac{D}{nT^{3/4}}, \\delta = \\frac{1}{T^{1/4}}).$ 
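A runnable sketch of contracting online gradient descent for the special case where each $\mathcal{K}_t$ is a Euclidean ball of shrinking radius (so projection is just rescaling) and losses are linear. The radii schedule and gradients are arbitrary test data, not the construction from the theorem:

```python
# Contracting OGD: project into the *next* round's (smaller) set each step;
# the standard regret analysis goes through since x* in K_T lies in every K_t.
import numpy as np

def project_ball(x, r):
    nrm = np.linalg.norm(x)
    return x if nrm <= r else x * (r / nrm)

def contracting_ogd(grads, radii, eta):
    x = np.zeros_like(grads[0])           # x_1 = 0 lies in K_1
    iterates = []
    for t, g in enumerate(grads):
        iterates.append(x.copy())
        x = project_ball(x - eta * g, radii[t + 1])
    return np.array(iterates)

rng = np.random.default_rng(3)
T, dim, D, G = 2000, 5, 2.0, 1.0
grads = rng.normal(size=(T, dim))
grads /= np.maximum(np.linalg.norm(grads, axis=1, keepdims=True), G)  # ||g_t|| <= G
radii = np.linspace(1.0, 0.5, T + 1)      # contracting decision sets
xs = contracting_ogd(grads, radii, eta=D / (G * np.sqrt(T)))

# Best fixed comparator in K_T for linear losses (||sum of grads|| >> r here).
x_star = project_ball(-grads.sum(axis=0), radii[-1])
regret = (grads * xs).sum() - grads.sum(axis=0) @ x_star
print(regret, G * D * np.sqrt(T))         # regret <= G*D*sqrt(T) per the lemma
```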
], [ "Proof of Lemma ", "Consider any memory vector $v \\in \\Delta (n)$ .", "We can show constructively that there is some distribution of menus $z_U$ which induces the all-$\\frac{1}{n}$ vector.", "We construct $z_U$ in $\\frac{1}{\\tau } + 1$ stages for some $\\tau > 0$ , through a process where we continuously add weight $a_{z_j}$ to a sequence of distributions $\\lbrace z_j | j \\ge 1 \\rbrace $ over menus until the total weight $\\sum _j a_{z_j}$ sums to 1.", "The uniform-inducing menu distribution $z_U$ will then be defined by taking the mixture of the menu distributions $z_j$ where each is weighted by $a_{z_j}$ .", "Consider the uniform distribution over all menus; continuously add weight to this distribution until some item (the one with the largest score in $M$ ) has selection weight $\\tau /n$ (its selection probability under $M$ at memory vector $v$ in each distribution of menus $z_j$ considered thus far, weighted by $a_{z_j}$ ).", "While there are at least $k$ items with selection weight below $\\tau /n$ , continuously add weight to the uniform distribution over all menus containing only items with weight below $\\tau /n$ .", "Once there are fewer than $k$ items with selection weight at most $\\tau /n$ , we terminate stage 1.", "In general, for stage $i$ , we always include every item with weight below $\\tau i / n$ in the menu, with all others chosen uniformly at random.", "Inductively, we can see that every item starts stage $i$ with weight at least $\\tau (i-1)/n$ and at most $\\tau i /n$ , with at most $k-1$ items having weight less than $\\tau (i-1)/n$ .", "Crucially, any item with weight less than $\\tau i / n$ at the start of stage $i$ will reach weight $\\tau i / n$ before any item starting at weight $\\tau i / n$ reaches weight $\\tau (i+1) / n$ .", "Such an item is included in every menu until this occurs, resulting in a selection probability of at least $\\frac{\\lambda }{k}$ in each menu distribution considered, whereas any other item is only included in the menu with probability at most $\\frac{k}{n}$ , which bounds its selection probability in the menu distribution.", "As $\\frac{\\lambda }{k} \\ge \\frac{k}{n}$ , the selection weight of items beginning stage $i$ below $\\tau i / n$ reaches $\\tau i / n$ no later than when the stage terminates.", "After stage $\\frac{1}{\\tau }$ , every item has weight at most $\\frac{1}{n}$ and at least $\\frac{1}{n} - \\frac{\\tau }{n}$ .", "We continue for one final stage until the sum of weights is 1, at which point every item has a final weight $p_{z_U} \\in [\\frac{1}{n} - \\frac{\\tau }{n}, \\frac{1}{n} + \\frac{\\tau }{n}]$ .", "Taking the limit of $\\tau $ to zero gives us that $x_U$ is in $\\texttt {IRD}(v, M)$ for any $v$ , and hence $x_U$ is in $\\texttt {EIRD}(M)$ as well.", "Further, there is a distribution of menus $z_{b_i}$ where $i$ has probability $p_{b_i, i} = k / n$ and every other item $j$ has probability $p_{b_i, j} = \\frac{1}{n} - \\frac{k-1}{n(n-1)}$ .", "Here, we include $i$ in every menu and run the previous approach over the remaining $n - 1$ items for menus of size $k-1$ , which we then augment with $i$ .", "The required bound on $\\lambda $ still holds for any $\\lambda < 1$ , as $\\frac{k^2}{n} \\ge \\frac{(k-1)^2}{n-1}$ (for any $k\\le \\sqrt{n}$ , which holds as $\\lambda < 1$ ).", "The selection probability of $i$ will be at least $\\frac{\\lambda }{k} \\ge \\frac{k}{n}$ ; we can take a mixture of this menu distribution with $z_U$ such that $p_{b_i, i} = \\frac{k}{n}$ exactly.", "The convex hull of each $p_{b_i}$ is thus contained 
in $\\texttt {EIRD}(M)$ , as any point $p \\in \\lbrace p_{b_i} | i \\in [n]\\rbrace $ can be generated by taking the corresponding convex combination of menu distributions $z_{b_i}$ .", "Any point $x \\in \\Delta (n)$ where $\\Vert x_U - x \\Vert _{\\infty } \\le \\frac{k-1}{n(n-1)}$ can then be induced by taking mixtures of the $z_{b_i}$ menu distributions." ], [ "Subset-Uniform Distributions in ", "For any $\\lambda $ -dispersed $M$ where $\\lambda \\ge \\frac{C k^2}{n}$ , $\\textup {\\texttt {EIRD}}(M)$ contains the uniform distribution over any $\\frac{n}{C}$ items.", "The proof of Lemma REF carries through directly for a universe with only $\\frac{n}{C}$ items." ], [ "Implementing Near-Uniform Vectors", "For any $\\lambda $ -dispersed $M$ where $\\lambda \\ge \\frac{k^2}{n}$ , for any point $x \\in \\Delta (n)$ satisfying $\\Vert x - x_U \\Vert _{\\infty } \\le \\frac{k-1}{n(n-1)}$ , there is an adaptive strategy for selecting a sequence of menus over $t^*$ rounds, resulting in a $t^*$ -round empirical distribution $\\hat{x}$ such that $\\Vert x - \\hat{x} \\Vert _{\\infty } \\le \\gamma t^* + O(n)$ with probability at least $1 - 2n\\exp (-\\gamma ^2 t^* /8)$ , for any $\\gamma $ .", "Our strategy will essentially correspond to the construction in Lemma REF , which shows that our vector is indeed in $\\texttt {EIRD}(M)$ .", "For each item $i$ , let $V_i = t^* \\cdot x_i$ be the target number of rounds where $i$ is selected over the window.", "For any $t \\le t^*$ let $\\hat{V}_{t, i}$ be the number of additional rounds an item must be selected before reaching its target, with $\\hat{V}_{1, i} = V_i$ .", "In each round $t$ , construct a menu for the Agent by choosing the $k$ items with the largest remaining counts $\\hat{V}_{t, i}$ , breaking ties uniformly at random, and decrement by 1 the count of the item selected in that round.", "Our approach will be to show that each item's final count under this process is close to its target in expectation after $t^*$ rounds, and use the sequence of expectations as rounds progress to define a martingale which will be close to its final expectation with high probability.", "Let $\\hat{V}_{t, \\bot }$ denote the minimum value of $\\hat{V}_{t, i}$ across items.", "Observe that our procedure maintains the invariant that $\\hat{V}_{t, \\bot }$ can only decrease in a round where at most $k-1$ items have remaining counts $\\hat{V}_{t, i} > \\hat{V}_{t, \\bot }$ .", "We will consider each round in which $\\hat{V}_{t, \\bot }$ decreases as the beginning of a “trial”, and we will track the expectations of $\\hat{V}_{t, i}$ over sequences of trials across two cases: Case 1: For every round $t$ at the start of a trial, we have had $\\hat{V}_{t, i} - \\hat{V}_{t, \\bot } > 2$ ; Case 2: There has been some round $t$ at the start of a trial where $\\hat{V}_{t, i } - \\hat{V}_{t, \\bot } \\le 2$ .", "When the first trial begins, we have at most $k-1$ items in Case 1, and items can never enter Case 1 after being in Case 2.", "We assume without loss of generality that we begin in a state where the first trial has just begun, as no prior rounds can increase the distance of any item from the minimum.", "Case 1.", "Note that the probability of an item in the menu being selected in a given round is at least $\\lambda /k \\ge k/n$ .", "We can upper-bound the expected distance of some count $\\hat{V}_{t, i}$ from $\\hat{V}_{t, \\bot }$ by analyzing a “pessimistic” process where we assume that this minimum selection probability is tight, where every selection of an item other than item $i$ corresponds to the beginning of an “event”, where 
the number of selections of $i$ in each event is geometrically distributed with parameter $p = 1 - \\frac{k}{n}$ .", "While these counts are not truly geometrically distributed, as the maximum number of selections is bounded, we will only need to analyze the probabilities of sums corresponding to items remaining in Case 1, in which case truncation does not affect the resulting distribution.", "Not every event corresponds to a new trial; there are deterministically at least $n-k$ events per trial, as every item begins a trial with a strictly higher count than $\\hat{V}_{t, \\bot }$ , and so at least $n-k-1$ selections of items other than $i$ must occur before an item with minimum count can enter the menu (conditioned on $\\hat{V}_{t, i}$ remaining above $\\hat{V}_{t, \\bot }$ ).", "Under this process, after $z$ events, the distribution of $\\hat{V}_{t, i}$ is given by subtracting the sum of $z$ of the aforementioned geometric variables from $\\hat{V}_{1, i}$ , which is distributed according to a negative binomial: $\\Pr [\\hat{V}_{1, i} - \\hat{V}_{t, i} = y] = \\binom{z + y - 1}{z - 1} \\Big (\\frac{k}{n}\\Big )^y \\Big (1 - \\frac{k}{n}\\Big )^z$ with mean $\\frac{z(k/n)}{1 - k/n} = \\frac{z k}{n - k} = \\mathbb {E}[y]$ and variance $\\frac{zk/n}{(1 - k/n)^2}$ .", "After $z$ events, $\\hat{V}_{1, \\bot }$ has dropped by at most $\\frac{z}{n-k}$ .", "As such, by the time $\\hat{V}_{t, \\bot }$ reaches 0, we would also have that the expectation of $\\hat{V}_{i, t}$ would reach 0 if we were to keep item $i$ in the menu at every round and allowed its count to drop below $\\hat{V}_{t, \\bot }$ without replacing it (and become negative); however, our process truncates (and enters Case 2) upon reaching 2 from the minimum, and so we can simply show that the contribution of the left tail of this distribution is small.", "Note that at the beginning of our process, we have $ \\hat{V}_{1, i} - \\hat{V}_{1, \\bot } \\le \\frac{t^*(k-1)}{n(n-1)}$ , and so the expected difference from the minimum upon reaching $\\hat{V}_{t, \\bot } = 0$ while remaining in Case 1 is at most: $\\mathbb {E}[( \\hat{V}_{t^*, i} - \\hat{V}_{t^*, \\bot })\\, \\mathbb {I}(\\text{Case 1}) ] \\le 2 + \\sum _{y = 0}^{\\hat{V}_{1, i} - 2} (2 + \\hat{V}_{1, i} - y) \\binom{t^* - 1}{t^* - y - 1} \\Big (\\frac{k}{n}\\Big )^y \\Big (1 - \\frac{k}{n}\\Big )^{t^* - y} \\le 2 + \\sum _{y = 0}^{\\hat{V}_{1, i} - 2} (2 + \\hat{V}_{1, i} - y) \\frac{t^*}{t^* - \\hat{V}_{1, i}} \\binom{t^*}{t^* - y} \\Big (\\frac{k}{n}\\Big )^y \\Big (1 - \\frac{k}{n}\\Big )^{t^* - y}.$ ", "For any $y$ in this range, the ratio of consecutive terms going from $y-1$ to $y$ is: $\\frac{t^* - y - 1}{y} \\cdot \\frac{k/n}{1 - k/n} \\ge \\frac{t^*}{\\hat{V}_{1,i}} \\cdot \\frac{k}{n} \\ge \\frac{k/n}{1/n + k/n^2} \\ge \\frac{k}{1 + k/n},$ which is greater than 1 for any $k \\ge 2$ .", "As such, we can bound the tail summation by: $\\mathbb {E}[( \\hat{V}_{t^*, i} - \\hat{V}_{t, \\bot })\\, \\mathbb {I}(\\text{Case 1}) ] \\le 2 + \\frac{t^*}{t^* - \\hat{V}_{1, i}} \\sum _{y = 0}^{\\infty } \\Big (\\frac{1 + k/n}{k}\\Big )^y \\le 2 + \\frac{t^*}{t^* - \\hat{V}_{1, i}} \\cdot \\frac{k}{k - 1 - k/n} \\le 5$ for $k \\ge 2$ and sufficiently large $n$ .", "Case 2.", "Here we show that once an item has reached Case 2, its expected distance from $\\hat{V}_{t, \\bot }$ in any future round is at most a constant.", "Separating this analysis is necessitated by the fact that there exist edge cases where an item's expected distance from the minimum can be increasing (e.g. if all items start a trial at one above the minimum, an item can only have a decreasing distance if it becomes the next minimum, and can have a higher likelihood of remaining in the menu when the next trial begins).", "Our approach will be to show by induction that, beginning from the first trial in Case 2, the distribution of item $i$ 's distance from the minimum, where $p_y$ is the probability of distance $y$ , satisfies: $p_{y+1} \\le p_y / 2^{k/2 - 1}$ for $y \\ge 2$ .", "This holds at the first trial in Case 2, as we have $p_{y+1} = 0$ for each $y \\ge 2$ .", "An item can only have a distance increase of 1 in a given 
trial (if it is not picked in any of the at least $n-k$ rounds), which occurs with probability at most $\\big (\\frac{1}{1 + k/n}\\big )^{n-k}\\le e^{-k/2} \\le \\frac{1}{2^{k/2}}$ , using that $n - k > n/2$ (which holds given that $k \\ge 2$ and $n \\ge k^2$ ).", "Further, using the same negative binomial process as in Case 1 to describe the number of selections of item $i$ in a given trial, we can see that $1/2$ upper bounds its density function after $n-k$ events for any valid setting of our parameters, and so the probability that an item is selected $j$ times, for $j$ such that it remains in every menu, is at most $1/2$ .", "Letting $p^*$ describe the distribution after another trial, we can solve for: $p^*_y \\le p_{y-1} / 2^{k/2} + \\sum _{j=0}^{\\infty } p_{y+j} \\Pr [\\text{drops } j+1 ] \\le p_{y-1} / 2^{k/2} + p_y / 2 ; \\qquad p_{y+1}^* \\le p^*_y / 2^{k/2 - 1} ;$ using the induction hypothesis on $p$ .", "As such, in any future trial, the expected distance from the minimum can be given by: $\\mathbb {E}[ \\hat{V}_{t^*, i} - \\hat{V}_{t, \\bot } \\mid \\text{Case 2} ] \\le 2 + \\sum _{y=3}^{\\infty } y\\, p_y \\le 2 + \\sum _{y=3}^{\\infty } \\frac{y}{2^{y(k/2 - 1)}} \\le 3.21$ for any $k \\ge 3$ .", "One can strengthen this to yield a constant sum for $k=2$ via a more delicate analysis on the upper bound of the negative binomial density function, which we omit.", "Concentration Analysis.", "We now have that in either case, the expectation $\\mathbb {E}[ \\hat{V}_{t^*, i} - \\hat{V}_{t^*, \\bot } ]$ is a constant, for every item $i$ .", "Given any current empirical counts $\\lbrace \\hat{V}_{t,i} : i \\in [n]\\rbrace $ and scores for every item at any time $t$ (which we as the Recommender need not know), the distribution over subsequent items chosen is fully defined.", "Let $X_{t,i} = \\Pr [~i \\text{ chosen } | ~\\lbrace \\hat{V}_{t-1,i} : i \\in [n]\\rbrace , \\lbrace f_i(v_t)\\rbrace ]$ .", "For this process, we can now view each quantity $Y_{t,i} = (\\hat{V}_{t-1,i} - \\hat{V}_{t,i})$ as a Bernoulli random variable with mean $X_{t,i}$ .", "Then we can define $Z_{t,i} = \\sum _{h=1}^t Y_{h,i} - X_{h,i}$ as a martingale, where $\\mathbb {E}[Z_{t,i} \\mid Z_{t-1,i}] = Z_{t-1, i}$ and $|Z_{t,i} - Z_{t-1,i}| \\le 2$ .", "Note that $\\mathbb {E}[Z_{t^*,i}]$ is equal to $V_{i}$ up to a small constant $c_i$ .", "We can then apply Azuma's inequality to get: $\\Pr \\Big [ |Z_{t^*, i} - (V_i - c_i)| \\ge \\frac{\\gamma t^*}{2} \\Big ] \\le 2 \\exp \\Big ( \\frac{-\\gamma ^2 t^*}{8} \\Big ).$ ", "These constants are independent of $t^*$ , and will vanish when $t^*$ is sufficiently large." 
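The menu-construction strategy from the lemma is straightforward to simulate. In the sketch below, the Agent's scores are a fixed random vector (a stand-in for a general $\lambda$-dispersed $M$, which may depend on the memory vector), and the target $x$ perturbs the uniform vector within the permitted $\ell_\infty$ radius; all sizes are arbitrary:

```python
# Adaptive strategy: show the k items with the largest remaining target counts,
# decrement whichever item the Agent picks; empirical frequencies track x.
import numpy as np

rng = np.random.default_rng(4)
n, k, t_star = 12, 3, 60_000
x_target = np.full(n, 1.0 / n)
eps = (k - 1) / (n * (n - 1))                    # permitted l_inf perturbation
x_target[0] += eps; x_target[1] -= eps

scores = rng.uniform(0.5, 1.0, size=n)           # stand-in item scores
V_hat = x_target * t_star                        # remaining target counts
counts = np.zeros(n)

for _ in range(t_star):
    # k largest remaining counts, tiny noise for uniform tie-breaking
    menu = np.argsort(-(V_hat + rng.uniform(0, 1e-9, n)))[:k]
    p = scores[menu] / scores[menu].sum()        # Agent picks proportional to score
    i = rng.choice(menu, p=p)
    V_hat[i] -= 1
    counts[i] += 1

print(np.max(np.abs(counts / t_star - x_target)))   # small, as the lemma predicts
```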
], [ "Proof of Theorem ", "Let: $F_{LL} = f_{LL}(\\lambda , \\alpha , n, \\epsilon )$ s.t. $\\mathcal {A}_{\\mathcal {M}}$ with error $\\beta / F_{LL}$ results in $\\epsilon _{LL} = \\frac{\\epsilon \\lambda k}{n}$ ; $F_Q = \\frac{8 L \\sqrt{n} k}{ \\lambda } F_{LL}$ ; $t_{\\text{query} } = \\frac{2 n}{k-1} \\big (\\frac{F_{LL}}{\\beta }\\big )^2 \\log \\big ( \\frac{2n k S}{(k-1)\\delta _{\\text{query}}} \\big ) = \\tilde{\\Theta }(1/\\epsilon ^2)$ ; $t_{\\text{pad}} = \\max \\big \\lbrace \\frac{2 F_Q t_{\\text{query} }}{\\beta }, \\frac{32 n^2 F_Q^2 \\log (2/\\delta _{\\text{pad}})}{\\beta ^2} \\big \\rbrace = \\tilde{\\Theta }(1/\\epsilon ^3)$ ; $t_{\\text{move}} = \\max \\big \\lbrace \\frac{n(n-1) t_{\\text{query}} }{k-1}, \\frac{32 n^2 F_Q^2 \\log (4S/\\delta _{\\text{move}})}{( 1 - 4k/n) \\beta ^2}, t_{\\text{pad}} \\big \\rbrace = \\tilde{\\Theta }(1/\\epsilon ^3)$ ; $t_0 = t_{\\text{pad}} + S(2 \\cdot t_{ \\text{move}} + t_{\\text{query}}) = \\tilde{\\Theta }(1/\\epsilon ^3)$ .", "After running UniformPad via the first Lemma REF construction for $t_{\\text{pad}}$ steps, our empirical memory vector is within $\\ell _{\\infty }$ distance $\\frac{\\beta }{n F_Q}$ of $x_U$ with probability at least $1 - \\delta _{\\text{pad}}$ .", "We maintain the invariant that when calling $\\texttt {MoveTo}(x)$ to reach some non-uniform vector $x$ from $x_U$ , the $\\ell _{\\infty }$ distance between $x$ and $x_U$ is at most $\\alpha $ , and that after calling Query($x$ ) the current vector $x^{\\prime }$ (accounting for drift during sampling) has $\\ell _{\\infty }$ distance at most $\\alpha $ from $x_U$ .", "At any time $t < t_0$ when $\\texttt {MoveTo}$ is called, the proportion of steps which the current invocation will contribute to the total history is at least: $R_{\\text{move}} = \\frac{t_{\\text{move}}}{t_{\\text{pad}} + S(t_{\\text{move}} + t_{\\text{query}})} = \\Theta (1/S).$ Let $\\alpha = \\frac{k-1}{2n(n - 1)} \\cdot R_{\\text{move}}$ denote the radius of the $\\ell _2$ ball around $x_U$ in which we permit queries for local learning.", "Any point $x$ within the $\\alpha $ -ball around the uniform vector can reach (or be reached from) the uniform vector with one call to $\\texttt {MoveTo}(x)$ , as their $\\ell _{\\infty }$ distance is at most $\\alpha $ , so some difference vector exists with mass $R_{\\text{move}}$ and which satisfies the required norm bound.", "For each input $x$ , called from $x_t$ , $\\texttt {MoveTo}(x)$ applies the construction from Lemma REF for the mass-$t_{\\text{move}}$ vector $y = x\\cdot (t + t_{\\text{move}}) - x_t \\cdot t$ .", "This results in a total error of at most $\\frac{\\beta }{2 n F_Q} \\cdot t_{\\text{move}} + 1 \\le \\frac{\\beta }{n F_Q} \\cdot t_{\\text{move}}$ per item count with probability at least $1 - {\\delta _{\\text{move}}}$ , as $ t_{\\text{move}} \\ge \\frac{32 n^2 F_Q^2 \\log (4S/\\delta _{\\text{move}})}{( 1 - 4k/n) \\beta ^2}$ .", "This yields a total variation distance within $\\frac{\\beta }{2F_Q}$ for the entire memory vector when appended to the current history.", "To run $\\texttt {Query}(x)$ , consider a set of $\\frac{n}{k-1}$ menus, where item 1 appears in every menu and every other item appears in exactly one.", "Over the following $t_{\\text{query} }$ rounds, play each menu $t_{\\text{query} } \\cdot \\frac{k-1}{n}$ times and note the proportion of each item observed relative to item 1 when its menu was played.", "Each scoring function $f_i \\in M$ is $L$ -Lipschitz; we run $\\texttt {Query}(x)$ for $t_{\\text{query}}$ rounds, which can introduce a drift of at most $\\beta / (2 F_Q)$ in total variation distance given the bound on $t_{\\text{query}} $ in terms of $t_{\\text{pad}}$ .", "This drift results in a vector which remains 
within $\\ell _{\\infty }$ distance $2\\alpha $ from $x_U$ , and so $x_U$ can still be reached again in a single $\\texttt {MoveTo}(x_U)$ call.", "The empirical average memory vector over all menu queries (for any item) is within $\\beta / F_Q$ total variation distance from $x$ , and so the expected distribution of items differs from that at $x$ by at most $\\beta / F_Q \\cdot \\frac{4 L\\sqrt{n} k}{ \\lambda } = \\beta /(2 F_{LL})$ in $\\ell _{\\infty }$ distance.", "Each point's observed frequency differs from that expectation by at most $\\beta /(2 F_{LL})$ with high probability.", "For an item $i$ in the menu at a given round, we view whether or not it was chosen as a Bernoulli random variable, with mean equal to its relative score among items in the menu.", "Let $\\bar{s}_{v, K,i}$ be the expected frequency of observing an item when the menu $K$ containing it is played, given the empirical sequence of memory vectors during those $t_{\\text{query} } \\cdot \\frac{k-1}{n}$ rounds, and let $\\hat{s}_{v,K,i}$ be the true observed frequency.", "We then have: $\\Pr \\Big [ |\\hat{s}_{v, K,i} - \\bar{s}_{v,K,i}| \\ge \\frac{\\beta }{2 F_{LL}} \\Big ] \\le 2 e^{ -2 (\\beta / 2F_{LL})^2 t_{\\text{query}} (k-1)/n } = 2 e^{ - (\\beta / F_{LL})^2 t_{\\text{query}} (k-1) / (2n) } \\le \\frac{\\delta _{\\text{query}} (k-1)}{ nk S},$ given that $t_{\\text{query}} \\ge \\frac{2 n}{k-1} \\big (\\frac{F_{LL}}{\\beta }\\big )^2 \\log \\big (\\frac{2n k S}{(k-1)\\delta _{\\text{query}}}\\big )$ .", "For item 1 take the average over all menus, and rescale such that all scores sum to 1 (using the frequency of item $i$ relative to the frequency of item 1 when both were in the menu).", "Each score, and its error bound, will only shrink under the rescaling.", "This gives us score vector estimates $\\hat{s}_x$ for each $x\\in S$ with additive error at most $\\frac{\\beta }{F_{LL}}$ relative to the true frequency of item 1, and thus overall, where $F_{LL} = f_{LL}(\\lambda , \\alpha , n, \\epsilon )$ .", "This holds for every query simultaneously with probability $1 - \\delta _{\\text{query}}$ .", "By the local learnability guarantee for $\\mathcal {M}$ , running $\\mathcal {A}_{\\mathcal {M}}$ results in a hypothesis $\\hat{M}$ which has $\\ell _2$ error at most $\\epsilon _{LL} = \\frac{\\epsilon \\lambda k}{n}$ for any $x \\in \\Delta (n)$ .", "In each round, the model and memory vector define a space of feasible item distributions.", "This allows us to run RC-FKM for perturbations up to $\\epsilon $ .", "We can represent each set $\\texttt {IRD}(v_t, \\hat{M})$ explicitly as the convex hull of normalized score estimates for every menu.", "We implement PlayDist($x$ ) using current score estimates $\\hat{M}(v_t)$ to generate a menu distribution which approximately induces the instantaneous item distribution $x$ .", "Taking the convex hull over every menu's score vector under $\\hat{M}$ yields a polytope representation of $\\texttt {IRD}(v_t, \\hat{M})$ , which will contain our chosen action at each step.", "Let $x$ be a point in $\\textup {\\texttt {IRD}}(v, M)$ , and let $z \\in \\Delta (\\binom{n}{k})$ be a non-negative vector such that $\\sum _{j \\in [\\binom{n}{k}]} z_j \\cdot p_{K_j, v} = x$ , where $K_j$ is the $j$ th menu in lexicographic order.", "If the Recommender randomly selects a menu $K$ to show the Agent with probability according to $z$ , then the Agent's item selection distribution is $x$ .", "The probability that the Agent selects item $i$ is obtained by first sampling a menu, then selecting an item proportionally to its score: $\\Pr [\\text{Agent selects } i] = \\sum _{j \\in [\\binom{n}{k}]} z_j\\, p_{K_j, v, i} = x_i.$ ", "Given $\\hat{M}$ satisfying $\\frac{\\epsilon \\lambda k}{n}$ -accuracy and a target vector $x_t \\in \\textup {\\texttt {IRD}}(v_t, \\hat{M})$ generated by RC-FKM, there is a linear program for computing a menu distribution $z_t$ such that the induced 
item distribution $p_{z_t}$ satisfies $\\Vert p_{z_t} - x_t \\Vert \\le \\epsilon $ .", "We can define a linear program to solve for $z$ with: variables $z_j \\in [0,1]$ , where $\\sum _{j \\in [\\binom{n}{k}]} z_j = 1$ ; estimated induced distributions $\\hat{p}_{K_j}$ for each menu; and a constraint for each $i \\in [n]$ : $\\sum _{j=1}^{\\binom{n}{k}} z_j\\, \\hat{p}_{K_j, i} = x_{t, i}.$ ", "If $\\Vert \\hat{M}(x)/\\hat{M}^*_x - M(x)/M^*_x \\Vert \\le \\frac{\\epsilon \\lambda k}{n}$ , then for any menu distribution $z$ , we have that: $\\Vert \\hat{p}_{z, v} - p_{z, v} \\Vert \\le \\epsilon .$ ", "Consider some menu $K$ .", "The $\\ell _2$ distance of score vectors restricted to the menu is at most $\\frac{\\epsilon \\lambda k}{n}$ , and each vector has mass at least $\\frac{k \\lambda }{n}$ by dispersion.", "Rescaling vectors to have mass 1 yields a bound of $\\epsilon $ , which is preserved under mixture (which is the induced distribution by Lemma REF ), as well as when projecting into the $n-1$ dimensional space for RC-FKM, and so there is some perturbation vector $\\xi _t$ with norm at most $\\epsilon $ such that $z$ induces $x_t + \\xi _t$ .", "Note that the losses for RC-FKM remain $2G$ -Lipschitz after the reparameterization where $x_{t,n} = 1 - \\sum _{i=1}^{n-1} x_{t,i}$ .", "Any point within radius $r = \\frac{k-1}{n(n-1)}$ of the uniform distribution in $n$ dimensions, feasible by Lemma REF , is within distance $r$ under the reparameterization as well, as we simply drop the term for $x_n$ .", "The required radius of $r$ surrounding $\\mathbf {0}$ for RC-FKM is thus satisfied, and we have that $\\epsilon + \\delta \\le r /T^{1/4} \\le r$ .", "Further, the diameter of the simplex is bounded by $D = 2$ .", "We can directly apply the regret bound of RC-FKM for these quantities, which holds with respect to $H_c \\cap \\texttt {EIRD}(\\hat{M})$ .", "By Lemma REF , for any point $x \\in \\texttt {EIRD}(\\hat{M})$ , there is a point $x^{\\prime } \\in \\texttt {EIRD}({M})$ such that $\\Vert x - x^{\\prime } \\Vert \\le \\epsilon $ .", "Projecting both points into $H_c$ cannot increase their distance by convexity, and so the optimality gap between the two sets is at most $\\epsilon GT$ .", "Our total regret is at most the sum of: Maximal regret for the learning runtime $G \\cdot t_0$ ; The regret of RC-FKM over $T - t_0$ rounds; The gap between ${\\texttt {EIRD}}(\\hat{M})$ and ${\\texttt {EIRD}}({M})$ ; and The union bound of each event's failure probability.", "We can bound this by: $\\text{Regret}_{H_c \\cap \\texttt {EIRD}(M)}(T) \\le G t_0 + 4n GT^{3/4} + \\frac{4(\\delta + 2\\epsilon ) G T}{r} + \\epsilon GT + (\\delta _{\\text{pad}} + \\delta _{\\text{move}} + \\delta _{\\text{query}})T = O(T^{3/4})$ when taking each of $\\lbrace \\delta _{\\text{pad}}, \\delta _{\\text{move}}, \\delta _{\\text{query}} \\rbrace = \\frac{1}{T^{1/4}}$ .", "We can also bound the empirical distance from $H_c$ .", "The diversity constraint is $O(\\epsilon )$ -satisfied by the empirical distribution $v_T$ with probability $1 - O(T^{-1/4})$ .", "Note that after $t_0$ , the empirical distribution $v_{t_0}$ is within total variation distance $\\frac{\\beta }{2 F_Q} $ from $x_U$ (which is necessarily in $H_c$ ).", "Further, each vector $x_t$ played by RC-FKM results in a per-round expected item distribution $y_t$ which lies in $H_c$ by the robustness guarantee.", "We can apply a similar martingale analysis as in Lemma REF to the sequence of realizations of any item versus its cumulative expectation $\\sum _{t > t^* } y_t$ to get a bound of (much less than) $\\frac{\\beta }{2 F_Q}$ in total variation distance as well, which is preserved under mixture.", "For any locally learnable class, $\\beta = O(\\epsilon )$ .", "Note that for all the classes we consider, we have $\\beta /(2 
"If $\\Vert \\hat{M}(x)/\\hat{M}^*_x - M(x)/M^*_x \\Vert \\le \\frac{\\epsilon \\lambda k}{n}$ , then for any menu distribution $z$ , we have that: $\\Vert \\hat{p}_{z, v} - p_{z, v} \\Vert \\le \\epsilon $ .", "Consider some menu $K$ .", "The $\\ell _2$ distance of score vectors restricted to the menu is at most $\\frac{\\epsilon \\lambda k}{n}$ , and each vector has mass at least $\\frac{k \\lambda }{n}$ by dispersion.", "Rescaling vectors to have mass 1 yields a bound of $\\epsilon $ , which is preserved under mixture (which is the induced distribution by Lemma REF ), as well as when projecting into the $n-1$ dimensional space for RC-FKM, and so there is some perturbation vector $\\xi _t$ with norm at most $\\epsilon $ such that $z$ induces $x_t + \\xi _t$ .", "Note that the losses for RC-FKM are $2G$ -Lipschitz after the reparameterization where $x_{t,n} = 1 - \\sum _{i=1}^{n-1} x_{t,i}$ .", "Any point within radius $r = \\frac{k-1}{n(n-1)}$ of the uniform distribution in $n$ dimensions, feasible by Lemma REF , is within distance $r$ under the reparameterization as well, as we simply drop the term for $x_n$ .", "The required radius of $r$ surrounding $\\mathbf {0}$ for RC-FKM is thus satisfied, and we have that $\\epsilon + \\delta \\le r /T^{1/4} \\le r$ .", "Further, the diameter of the simplex is bounded by $D = 2$ .", "We can directly apply the regret bound of RC-FKM for these quantities, which holds with respect to $H_c \\cap \\texttt {EIRD}(\\hat{M})$ .", "By Lemma REF , for any point $x \\in \\texttt {EIRD}(\\hat{M})$ , there is a point $x^{\\prime } \\in \\texttt {EIRD}({M})$ such that $\\Vert x - x^{\\prime } \\Vert \\le \\epsilon $ .", "Projecting both points into $H_c$ cannot increase their distance by convexity, and so the optimality gap between the two sets is at most $\\epsilon GT$ .", "Our total regret is at most the sum of: maximal regret for the learning runtime $G \\cdot t_0$ ; the regret of RC-FKM over $T - t_0$ rounds; the gap between ${\\texttt {EIRD}}(\\hat{M})$ and ${\\texttt {EIRD}}({M})$ ; and the union bound of each event's failure probability.", "We can bound this by: $\\text{Regret}_{H_c \\cap \\texttt {EIRD}(M)}(T) \\le G t_0 + 4n G T^{3/4} + \\frac{4(\\epsilon + 2\\delta ) G T}{r} + \\epsilon G T + (\\delta _{\\text{pad}} + \\delta _{\\text{move}} + \\delta _{\\text{query}}) T = O(T^{3/4})$ when taking each of $\\lbrace \\delta _{\\text{pad}}, \\delta _{\\text{move}}, \\delta _{\\text{query}} \\rbrace = \\frac{1}{T^{1/4}}$ .", "We can also bound the empirical distance from $H_c$ .", "The diversity constraint is $O(\\epsilon )$ -satisfied by the empirical distribution $v_T$ with probability $1 - O(T^{-1/4})$ .", "Note that after $t_0$ , the empirical distribution $v_{t_0}$ is within total variation distance $\\frac{\\beta }{2 F_Q} $ from $x_U$ (which is necessarily in $H_c$ ).", "Further, each vector $x_t$ played by RC-FKM results in a per-round expected item distribution $y_t$ which lies in $H_c$ by the robustness guarantee.", "We can apply a similar martingale analysis as in Lemma REF to the sequence of realizations of any item versus its cumulative expectation $\\sum _{t > t^* } y_t$ to get a bound of (much less than) $\\frac{\\beta }{2 F_Q}$ in total variation distance as well, which is preserved under mixture.", "For any locally learnable class, $\\beta = O(\\epsilon )$ .", "Note that for all the classes we consider, we have $\\beta /(2 F_{Q}) \\ll \\epsilon $ .", "Both events hold with probability $1 - O(T^{-1/4})$ , as we can apply the same failure probabilities used for the learning stage for each.", "Note that for a constraint $H_c$ where $c$ is sufficiently bounded away from $\\log (n)$ and for large enough $T$ , this will in fact yield an empirical distribution which exactly satisfies $H_c$ , as the weight-$\\tilde{O}(T^{3/4})$ uniform window will “draw” the empirical distribution back towards the center of $H_c$ , since it dominates the total ${O}(T^{1/2})$ error bound (for the unnormalized empirical histogram $T \\cdot v_T$ ) obtainable with a martingale analysis over the entire RC-FKM window.", "This completes the proof of the theorem." ] ]
2210.07773
[ [ "Learning To Rank Diversely" ], [ "Abstract Airbnb is a two-sided marketplace, bringing together hosts who own listings for rent, with prospective guests from around the globe.", "Applying neural network-based learning to rank techniques has led to significant improvements in matching guests with hosts.", "These improvements in ranking were driven by a core strategy: order the listings by their estimated booking probabilities, then iterate on techniques to make these booking probability estimates more and more accurate.", "Embedded implicitly in this strategy was an assumption that the booking probability of a listing could be determined independently of other listings in search results.", "In this paper we discuss how this assumption, pervasive throughout the commonly-used learning to rank frameworks, is false.", "We provide a theoretical foundation correcting this assumption, followed by efficient neural network architectures based on the theory.", "Explicitly accounting for possible similarities between listings, and reducing them to diversify the search results, generated a strong positive impact.", "We discuss these metric wins as part of the online A/B tests of the theory.", "Our method provides a practical way to diversify search results for large-scale production ranking systems." ], [ "Introduction", "Production e-commerce search ranking systems have to account for a range of target metrics, and search ranking at Airbnb is no exception.", "Ranking at Airbnb aims to optimize the guest and host experience end-to-end.", "Targets include increasing bookings, decreasing negative outcomes like cancellations, as well as increasing final trip ratings.", "While each of these targets is valuable, the core model that forms the foundation of the ranking framework is focused on increasing bookings, and it does so by ordering listings by their booking probability.", "Formulating ranking as the ordering of listings by their booking probability was the result of a long evolutionary process.", "Prior to 2015, listings at Airbnb were ranked by scoring functions designed by engineers, which were related to booking probability only indirectly.", "The scoring functions took into account important attributes of the listings such as price, location, and review ratings.", "These attributes of the listings were studied to infer their relation to bookings, and scoring functions were designed to uprank listings with preferred attributes.", "This process was largely automated in 2015 with the launch of a gradient-boosted decision tree ($GBDT$ ) model.", "The $GBDT$ was a regression model, targeting a utility score assigned to each listing based on past user interactions.", "For example, a booking could be assigned a value of $1.0$ , a click on the listing a value of $0.2$ , an impression $0.0$ , and a cancellation a negative value of $-0.5$ .", "Like the manually crafted scoring functions, the $GBDT$ model correlated with booking probability, but the association still remained indirect.", "Next in the evolution came neural networks.", "Our journey is described in  [9] and  [10].", "Casting the modeling task as pairwise learning to rank, optimizing it using cross-entropy loss, and using normalized discounted cumulative gain ($NDCG$ ) for evaluation brought booking probability into sharp focus.", "Over multiple iterations, we pushed the accuracy of the booking probability prediction, resulting in significant gains in bookings online.", "The baseline now stands at a formidable level.", "For a lot of
applications, reaching performance comparable to humans is the gold standard.", "In our case, the model has far surpassed this level.", "As part of an evaluation exercise, a task was given to ranking engineers to identify which listing out of a pair was booked by a searcher.", "Ranking engineers could identify the booked listing for 70% of the pairs, in comparison to 88% for the ranking model.", "To further improve the model, it was time to look beyond the success of the pairwise learning to rank approach.", "In our pursuit to improve bookings and $NDCG$ , focus turned to the question: what aspects of the ranking model could not be improved by refining the pairwise probability of booking?", "One area that came up as a possible answer was diversity in search ranking – or the lack thereof." ], [ "Why search results lack diversity?", "To understand why typical ranking solutions do not diversify search results out of the box, we dive deeper into the specific case of pairwise learning to rank.", "A similar reasoning applies to pointwise learning to rank.", "The discussion around listwise learning to rank is more involved, which we visit in a later section." ], [ "How do listings get ranked?", "In pairwise learning to rank, we construct training examples from pairs of listings.", "Consider a search result in response to query $q$ , issued by a user $u$ .", "Let $l_x$ be a listing in the search result that was booked, and $l_y$ a listing that appeared along with $l_x$ but was not booked.", "Then the pair $\\lbrace l_x, l_y\\rbrace $ forms a training example.", "Let $f_{\\theta }(q, u, l)$ represent a model with query, user, and the listing to be ranked as inputs, and $\\theta $ the trainable parameters, or “weights” of the model.", "To train the model, we get the estimated logits for both the listings in a training example as $\\text{logit}_x=f_{\\theta }(q, u, l_x)\\; ; \\; \\text{logit}_y=f_{\\theta }(q, u, l_y)$ The cross-entropy loss is given by $\\text{crossEntropy} = -\\log \\left(\\frac{e^{\\text{logit}_x}}{e^{\\text{logit}_x}+e^{\\text{logit}_y}} \\right)$ An example implementation of this loss in TensorFlow™:

import tensorflow as tf

def get_xentropy(logits_booked, logits_not_booked):
    # Pairwise logit difference for each (booked, not booked) pair.
    logit_diffs = logits_booked - logits_not_booked
    # Sigmoid cross-entropy against all-ones labels: the booked listing
    # should win every pairwise comparison.
    xentropy = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(logit_diffs),
        logits=logit_diffs)
    return tf.reduce_mean(xentropy)", "Minimizing the cross-entropy loss summed over all the training examples leads to a model where for any given pair of listings, $e^{\\text{logit}_x}/(e^{\\text{logit}_x}+e^{\\text{logit}_y})$ can be interpreted as the pairwise booking probability, written as $P_{\\text{booking}}(l_x>l_y \\mid q, u)$ .", "We omit the conditional $\\lbrace q, u\\rbrace $ going forward for brevity, and write it as $P_{\\text{booking}}(l_x>l_y)$ .", "If the combined booking count for $l_x$ and $l_y$ is scaled to $1.0$ , then $P_{\\text{booking}}(l_x>l_y)$ represents the estimated fraction of bookings for $l_x$ , whereas $(1 -P_{\\text{booking}}(l_x>l_y))$ represents the fraction for $l_y$ .", "If we define $P_{\\text{booking}}(l_x)$ as the ordinary pointwise probability of booking $l_x$ given an impression in search results, and similarly define $P_{\\text{booking}}(l_y)$ , then they can be related to the pairwise booking probability by the Bradley–Terry model [14]: $P_{\\text{booking}}(l_x>l_y)=\\frac{P_{\\text{booking}}(l_x)}{P_{\\text{booking}}(l_x)+P_{\\text{booking}}(l_y)}$",
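"As a quick sanity check of this relation, a tiny sketch (ours, not from the production system; the function name is hypothetical):

def pairwise_booking_prob(p_x, p_y):
    # Bradley–Terry: the expected share of bookings going to l_x when
    # l_x and l_y compete, given their pointwise booking probabilities.
    return p_x / (p_x + p_y)

# Matches the worked example that follows: 2 and 6 bookings per 100 impressions.
assert abs(pairwise_booking_prob(0.02, 0.06) - 0.25) < 1e-12",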
"For example, consider 100 search results where $l_x$ and $l_y$ were shown.", "If $l_x$ got 2 bookings while $l_y$ got 6, then $P_{\\text{booking}}(l_x) = \\frac{2}{100} \\; ; \\; P_{\\text{booking}}(l_y) = \\frac{6}{100} \\; ; \\; P_{\\text{booking}}(l_x>l_y) = \\frac{2}{2+6} = 0.25$ .", "Given the pair $\\lbrace l_x, l_y\\rbrace $ , for every 1 booking of $l_x$ , we expect 3 of $l_y$ .", "In order to rank a given set of listings $\\lbrace l_a, l_b, \\dots , l_z\\rbrace $ , we apply the ranking model $f_{\\theta }(q, u, l)$ to each listing to get the corresponding logits $\\lbrace f_{\\theta }(q, u, l_a), f_{\\theta }(q, u, l_b), \\dots , f_{\\theta }(q, u, l_z)\\rbrace $ , then sort the listings by their logits in descending order.", "When sorting by logits, the condition for ranking listing $l_x$ higher than $l_y$ can be successively restated as: $f_{\\theta }(q, u, l_x) &>f_{\\theta }(q, u, l_y) \\\\e^{f_{\\theta }(q, u, l_x)} &>e^{f_{\\theta }(q, u, l_y)} \\\\\\frac{e^{f_{\\theta }(q, u, l_x)}}{e^{f_{\\theta }(q, u, l_x)} + e^{f_{\\theta }(q, u, l_y)}} &> \\frac{e^{f_{\\theta }(q, u, l_y)}}{e^{f_{\\theta }(q, u, l_x)} + e^{f_{\\theta }(q, u, l_y)}} \\\\P_{\\text{booking}}(l_x>l_y) &> P_{\\text{booking}}(l_y>l_x)$", "We can therefore claim that pairwise learning to rank orders the listings by their pairwise booking probabilities.", "Inequality REF can be further rewritten as: $P_{\\text{booking}}(l_x>l_y) > P_{\\text{booking}}(l_y>l_x) \\iff \\frac{P_{\\text{booking}}(l_x)}{P_{\\text{booking}}(l_x)+P_{\\text{booking}}(l_y)} > \\frac{P_{\\text{booking}}(l_y)}{P_{\\text{booking}}(l_x)+P_{\\text{booking}}(l_y)} \\iff P_{\\text{booking}}(l_x) > P_{\\text{booking}}(l_y)$", "This establishes the following: Property 1 Ranking listings by their pairwise booking logits is equivalent to ranking them by their pointwise booking probabilities.", "Inequality REF can also be expressed as: $P_{\\text{booking}}(l_x>l_y) > P_{\\text{booking}}(l_y>l_x) \\iff P_{\\text{booking}}(l_x>l_y) > 1-P_{\\text{booking}}(l_x>l_y) \\iff P_{\\text{booking}}(l_x>l_y) > 0.5$", "This establishes: Property 2 When ranking two listings by their pairwise booking logits, the one estimated to get more than $50\\%$ of the bookings is ranked higher.", "At Airbnb, a query $q$ from user $u$ typically results in 0 or 1 booking.", "In such a scenario, the listing estimated to get more than $50\\%$ of the bookings also represents the listing preferred by more than $50\\%$ of the past bookers, or the majority preference for the segment $\\lbrace q, u\\rbrace $ .", "It is built into the very foundation of pairwise learning to rank to abide by the majority preference at each ranking position.", "Note, as in real-life elections, the majority is defined by those who vote, or in our case, those who book.", "The majority preference is not defined by the entire population that visits Airbnb, as the preferences of non-bookers remain hidden.", "In theory, a heavily-personalized ranking model could customize results for the minority preference appropriately.", "In practice though, accurately identifying users with minority preferences, and personalizing their search results in a fruitful manner, is an open challenge at Airbnb.", "The majority preference dominates the model's decisions for all practical purposes.", "But majority preference isn’t necessarily the best way to accommodate the preference of the entire population.", "Let’s elaborate on the idea with an example.", "An important consideration for bookers at Airbnb is price, and the majority of bookings lean towards economical ones.", "Learning from this user behavior, the ranking model demotes listings if price increases.", "Figure REF shows how the normalized model score for a listing decreases as we increase price along the x-axis.", "Figure: X-axis: percent increase in price.", "Y-axis: model scores normalized per query.", "Plot shows the average model score changes over a random sample of 100K listings.",
"Experiments directly measuring price sensitivity confirm the same: bookings drop sharply in response to price increases.", "But gravity towards affordability doesn’t tell the whole story, as Airbnb is a very diverse marketplace.", "The Pareto principle, or the $80/20$ rule, provides a better perspective.", "In most cities, if we segment the total value of bookings, roughly $20\\%$ of bookings account for $50\\%$ of the aggregated booking value (see Figure REF ).", "Figure: A distribution of booking values for 2 guests, 2 nights bookings in Rome.", "X-axis corresponds to booking values in USD, log-scale.", "Left y-axis is the number of bookings corresponding to each price point on the x-axis.", "The orange shape confirms the log-normal distribution of booking value.", "Red line plots the percentage of total bookings in Rome that have booking value less than or equal to the corresponding point on x-axis, and the green line plots the percentage of total booking value for Rome covered by those bookings.", "Splitting total booking value 50/50 splits bookings into two unequal groups of 80/20.", "Using booking value as an indicator of quality, we can apply the broad classification that the $80\\%$ majority of users are affordability-leaning.", "The remaining $20\\%$ minority are quality-leaning, with an average booking value four times higher compared to the $80\\%$ majority.", "In reality, though, users can't be boxed into such neat binary classifications.", "Every guest has preference towards both quality and affordability, and the distribution is continuous.", "Still, the simplified and binarized 80/20 perspective is useful to see where ranking is falling short.", "Let’s consider how the first page of search results, where most of the user attention is spent, is impacted by the two effects: Majority principle: ranking is driven by the majority preference at each position, as shown in Property REF .", "Pareto principle: user preferences are distributed smoothly with a long tail, and can be roughly binarized by an 80/20 split, as shown in Figure REF .", "The combined effect is that $100\\%$ of the first page results get dictated by what the $80\\%$ majority prefers.", "It’s the tyranny of the majority in search ranking.", "Intuitively, this suggests that factoring in the minority preference, and giving them proportionate representation, should improve the overall utility of search results.", "At the same time, the model based on pairwise learning to rank, friend of the tyrant majority and oblivious to diversity, was the undisputed champion when it came to delivering bookings.", "That led us to ask a few questions: The core focus of ranking is to optimize for total bookings.", "Is improving diversity of search results a distraction from the core focus?", "A lot of effort has gone into refining the pairwise booking probability model.", "Can some simple changes allow it to tackle diversity for “free”?", "Evaluating $NDCG$ offline, paired with measuring bookings in A/B tests online, is the gold standard for evaluating ranking models.", "Does diversity need a radically different evaluation method?", "The answers to the questions happen to be – No, No, and No.", "We dive into further details in the next section.", "Can improving diversity drive bookings gain?", "To see how diversification of search results can lead to bookings gain, we start by examining how improving $NDCG$ leads to increased bookings.", "We then reason that optimal diversity in search results should improve $NDCG$ , thereby improving bookings.",
"Finally, we look at the relation between pairwise booking probabilities and $NDCG$ , and how the $NDCG$ gain from diversity optimization is inherently a different mechanism.", "Why does NDCG correlate with total bookings?", "A useful tool for our discussion in this section is the game of roulette.", "Consider $N$ games being played over the course of an evening.", "In each game, we have bets on $K$ different numbers.", "Let $P_{\\text{win}}(i,j)$ refer to the probability of a win on the $j$ th bet in the $i$ th game.", "Let $win(i, j)$ be the corresponding winning amount in dollars.", "The total dollars won is expected to be: $E[\\text{total dollars won}]=\\sum \\limits _{i=0}^{N}\\sum \\limits _{j=0}^{K}P_{\\text{win}}(i, j)*win(i, j)$", "At the end of the evening, we can sum up the actual realized wins from each bet to get the total, which is the observed outcome of the stochastic process.", "The observed total dollars won would converge to the expected total dollars won, provided $N$ is large enough.", "To analyze the total number of bookings from a given set of search results, we can employ a reasoning similar to roulette.", "Consider $N$ search results, each with $K$ listings.", "Getting a booking for a listing is equivalent to the winning event in roulette, and it depends on the attributes of the listing, as well as the listing’s position in search results.", "For the $i$ th search result, let the listing placed at the $j$ th position be $l_{i,j}$ .", "We denote its complete probability of booking as $P_{\\text{booking}}(l_{i,j})*P_{\\text{attention}}(j)$ , where $P_{\\text{booking}}(l_{i,j})$ is the booking probability of the listing $l_{i,j}$ , and $P_{\\text{attention}}(j)$ is the probability that the guest examines the $j$ th position of the search result.", "The equivalent of the winning amount in each case is one booking, so $win(i,j)=1$ .", "The total number of bookings expected can therefore be written as: $E[\\text{bookings}]=\\sum \\limits _{i=0} ^{N}\\sum \\limits _{j=0}^{K}P_{\\text{booking}}(l_{i,j})*P_{\\text{attention}}(j)$", "Under the assumption that user attention drops monotonically as they scan the search results from top to bottom, i.e., $P_{\\text{attention}}(a)>P_{\\text{attention}}(b)$ if $a < b$ , we can show that $E[\\text{bookings}]$ is maximized if the listings are sorted by their booking probabilities.", "This property can be established using a proof by contradiction.", "Assume we have maximized $E[\\text{bookings}]$ , but the listings are not sorted by their booking probabilities.", "Then there must exist a pair of listings such that $P_{\\text{booking}}(l_{i,x}) <P_{\\text{booking}}(l_{i,y})$ and $P_{\\text{attention}}(x)>P_{\\text{attention}}(y)$ .", "Consider swapping the positions of the two listings $l_{i,x}$ and $l_{i,y}$ .", "The difference in expected bookings due to the swap is given by: ${\\mathsf {B}}_{x} = P_{\\text{booking}}(&l_{i,x}) \\; ; {\\mathsf {B}}_{y} = P_{\\text{booking}}(l_{i, y}) \\; ; \\; {\\mathsf {B}}_{x} < {\\mathsf {B}}_{y}\\\\{\\mathsf {A}}_{x} = P_{\\text{attention}}&(x) \\; ; \\; {\\mathsf {A}}_{y} = P_{\\text{attention}}(y) \\; ; \\; {\\mathsf {A}}_{x} > {\\mathsf {A}}_{y}\\\\\\Delta E[\\text{bookings}] = & \\;({\\mathsf {B}}_{x} {\\mathsf {A}}_{y} +{\\mathsf {B}}_{y}{\\mathsf {A}}_{x}) - ({\\mathsf {B}}_{x}{\\mathsf {A}}_{x}+{\\mathsf {B}}_{y}{\\mathsf {A}}_{y}) \\\\= & \\;({\\mathsf {B}}_{y} -{\\mathsf {B}}_{x})({\\mathsf {A}}_{x} - {\\mathsf {A}}_{y} )$", "Since both terms of the product in Equation REF are positive, $\\Delta E[\\text{bookings}] > 0$ .", "This implies the previous sum could not be the maximum.", "Hence the listings must be sorted by booking probability to maximize the total expected bookings.",
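"The rearrangement argument is easy to verify numerically; this toy sketch (ours, with made-up probabilities) brute-forces all orderings of three listings under a decaying attention curve:

from itertools import permutations

p_booking = [0.02, 0.06, 0.04]
p_attention = [0.9, 0.5, 0.2]  # monotonically decaying by position

def expected_bookings(order):
    return sum(p_booking[l] * p_attention[pos] for pos, l in enumerate(order))

best = max(permutations(range(3)), key=expected_bookings)
# The maximizing order places listings by decreasing booking probability.
assert list(best) == sorted(range(3), key=lambda l: -p_booking[l])",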
"For alternate arguments supporting the property, see  [12].", "The difference in total expected bookings due to an unsorted pair, $\\Delta E[\\text{bookings}]$ , is the product of the difference in booking probability and the difference in attention to the two positions.", "As we discuss next, $NDCG$ tracks total bookings so well because it follows the same conditions.", "Let’s begin by considering how $NDCG$ is computed.", "We adopt a binary definition of relevance where a booked listing has a relevance of 1, and all other listings 0 relevance.", "For simplicity, we assume only a single listing is booked from a given search result.", "$NDCG$ can then be written as: $NDCG = \\frac{1}{N}\\sum \\limits _{i=0}^{N}\\frac{\\text{log}(2)}{\\text{log}(2 + \\text{pos}_{i})}$ where $\\text{pos}_{i}$ refers to the position of the booked listing in the $i$ th search.", "We can map the computation of $NDCG$ back to our game of roulette, where we consider each search result as an individual game as before.", "The winning event is defined as a listing getting booked, and the winning amount at position $j$ is set to $\\text{log}(2)/(\\text{log}(2 + j))$ .", "The formula for $NDCG$ in Equation REF is computing the observed total value of this stochastic process, a simple sum over the realized individual wins.", "What about the expected value of $NDCG$ ?", "The expected value would be the sum of the winning amounts for each position, weighted by the probability of attaining the win.", "This can be written as: $ E[NDCG]=\\frac{1}{N}\\sum \\limits _{i=0}^{N}\\sum \\limits _{j=0}^{K}P_{\\text{booking}}(l_{i,j})*\\frac{\\text{log}(2)}{\\text{log}(2 + j)}$", "Provided we evaluate $NDCG$ over a large number of searches, the expected and observed values would converge, and maximizing Equation REF is equivalent to maximizing Equation REF .", "The monotonic decay of $\\text{log}(2)/\\text{log}(2 + j)$ together with Equation REF implies that sorting the listings by their booking probabilities maximizes $NDCG$ .", "The proof is by contradiction once again.", "Further, the drop in $NDCG$ from a pair of listings $l_{i,x}$ and $l_{i,y}$ not ordered by booking probabilities is given by: $\\Delta E[NDCG] = & (P_{\\text{booking}}(l_{i,y})-P_{\\text{booking}}(l_{i,x}))\\\\&* \\left(\\frac{\\text{log}(2)}{\\text{log}(2+x)}- \\frac{\\text{log}(2)}{\\text{log}(2+y)}\\right)$", "$\\Delta E[NDCG]$ in Equation REF correlates with $\\Delta E[\\text{bookings}]$ in Equation REF because the positional discount curve is constructed based on how user attention decays by position, making $\\text{log}(2)/\\text{log}(2+x)$ proportional to $P_{\\text{attention}}(x)$ .", "As a result, a gain in $NDCG$ is also a strong indicator of a gain in total bookings.", "To recap, both total bookings and $NDCG$ are maximized when listings are sorted by their booking probabilities.", "In the case of total bookings, this arises due to the fact that we accurately represented the probability of booking as $P_{\\text{booking}}(l_{i,j})*P_{\\text{attention}}(j)$ .", "However for $NDCG$ , we made the simplified assumption that the probability of booking is given by $P_{\\text{booking}}(l_{i,j})$ alone, and is independent of the position where the listing is placed.", "This assumption is a useful one, since it allows us to shuffle listings around in offline analysis, and compute $NDCG$ without requiring fresh user input.", "But then, to align $NDCG$ with total bookings, we need to weigh each potential booking by its positional discount.",
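"For reference, a minimal sketch (ours) of this binary-relevance $NDCG$ over logged searches:

import math

def ndcg(booked_positions):
    # booked_positions[i] is the 0-based position of the booked listing
    # in the i-th search; each search contributes log(2)/log(2 + pos).
    return sum(math.log(2) / math.log(2 + pos)
               for pos in booked_positions) / len(booked_positions)

# A booking at the top position scores 1.0; lower positions are discounted.
assert ndcg([0]) == 1.0",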
"Can NDCG measure the impact of diversity?", "Utilizing the fact that listings ordered by their booking probabilities maximize $NDCG$ and total bookings, we can design a straightforward iterative algorithm to construct optimal search results as shown in Algorithm REF .", "Algorithm: Ranking by booking probabilities
Input: a set of $N$ listings $\\lbrace l_0, l_1, \\dots l_{N-1}\\rbrace $
Output: listing positions $\\lbrace \\text{pos}(l_0), \\text{pos}(l_1), \\dots \\text{pos}(l_{N-1})\\rbrace $
1: $\\mathcal {L} \\leftarrow \\lbrace l_0, l_1 \\dots l_{N-1}\\rbrace $
2: for $k \\leftarrow 0$ until $N$ :
3:     compute $\\text{logit}(l_i) \\leftarrow f_{\\theta }(q, u, l_i)$ foreach $l_i \\in \\mathcal {L}$
4:     $l_{max} \\leftarrow \\text{argmax}(\\text{logit}(l_i), l_i \\in \\mathcal {L})$
5:     $\\text{pos}(l_{max}) \\leftarrow k$
6:     $\\mathcal {L} \\leftarrow \\mathcal {L} \\setminus l_{max}$", "In Algorithm REF , we rely on Property REF , which allows us to use the pairwise booking logit interchangeably with booking probability.", "$\\text{logit}(l_i)$ on line 3 depends only on the attributes of the listing being ranked, $l_i$ , besides the query and the user.", "Since the attributes of a listing are invariant all throughout, line 3 in Algorithm REF can be taken out of the for loop.", "This reduces Algorithm REF to computing $\\text{logit}(l_i)$ once for each listing, and then using it to sort the listings.",
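"In code, that reduction is a one-liner; a sketch (ours, with a hypothetical score callable standing in for $f_{\\theta }(q, u, l)$ ):

def rank_by_booking_probability(listings, score):
    # With position-independent logits, the greedy argmax loop of
    # Algorithm REF is equivalent to a descending sort by score.
    return sorted(listings, key=score, reverse=True)",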
"But it is not a binding restriction that the booking probability of a listing should be independent of the other listings, and must depend on the attributes of the given listing alone.", "Specifically, consider iteration $K+1$ of the for loop in Algorithm REF .", "We have already placed listings in positions 0 through $K-1$ , and we need to select a listing for position $K$ by computing the logits of the remaining $N-K$ listings.", "When computing the logit for a given listing $l_i$ , we can factor in the attributes of $l_i$ , as well as the attributes of the $K$ listings placed at 0 through $K-1$ .", "By including the attributes of the listings at 0 through $K-1$ , we can make the calculated logits of the $N-K$ listings more accurate.", "Listings being ranked which are too similar to the listings at 0 through $K-1$ can have their logits corrected to lower values.", "Maximization of $NDCG$ is preserved through this process of extending the inputs to include attributes of the listings placed at 0 through $K-1$ .", "The critical step in the proof for maximal $NDCG$ , where we swap the listings not ordered by booking probability to demonstrate a contradiction, continues to work.", "That's because the extended inputs from the listings at 0 through $K-1$ are invariant in the swapping process.", "This provides a mechanism for diversification that is aligned with maximizing $NDCG$ .", "We require no new metrics to evaluate diversity.", "Due to the relation between $NDCG$ and total bookings, we expect this mechanism to directly increase total bookings as well.", "To summarize, diversity removes redundant choices, thereby improving the utilization of positions in search results, which gets reflected in improved $NDCG$ and total bookings.", "While we don’t need a new metric for evaluating diversity, the situation is different when it comes to implementing diversity.", "For diversity-aware booking probabilities, we require the attributes of the listings that are placed before.", "This information is not available in the garden-variety pointwise or pairwise learning to rank frameworks.", "Hence we need a new kind of model, which we discuss next.", "How to implement diversity in ranking?", "In this section we build a framework to rank listings for $N$ positions while incorporating diversity.", "Instead of building a single model, we build $N$ models, one dedicated to each of the $N$ positions.", "Starting at position 0, we reuse the regular pairwise booking probability model described in Section REF .", "Let’s name the model $f_{0, \\theta _0}(q, u, l)$ , where the 0 index refers to the position in the search results, and $\\theta _0$ the parameters of the model.", "To recap from Section REF , $f_{0, \\theta _0}(q, u, l)$ maps each listing to a pairwise logit, where the sigmoid of the difference of two pairwise logits gives the pairwise booking probability.", "Now let’s construct $f_{1, \\theta _1}(q, u, l, l_0)$ , the model for position 1.", "This model has an additional input $l_0$ , which we call the antecedent listing from position 0.", "A user scanning the search results from top to bottom would consider the listing at position 1 only if the antecedent listing at position 0 did not meet their requirements.", "See  [4] and  [5] for an in-depth study of this phenomenon.", "Thus when ranking for position 1, we have incrementally more information than we did when ranking for position 0.", "Leveraging this new information, we construct $f_{1, \\theta _1}(q, u, l, l_0)$ conditional upon the fact that the user has rejected the listing at position 0.", "To construct training examples for this conditional pairwise booking probability model, we go through the search logs and discard all searches where the listing at position 0 was booked.", "For the remaining searches, we set aside the listing at position 0, denoting it the antecedent listing.", "From the listings below position 0, we create pairs of booked and not booked listings, similar to how pairwise booking examples are created.", "Figure REF illustrates this.", "Figure: Training data construction for $f_{1, \\theta _1}(q, u, l, l_0)$",
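"A sketch (ours) of this filtering; the search-log records and their fields are hypothetical stand-ins, not the production schema:

def build_position1_examples(search_logs):
    examples = []
    for search in search_logs:
        # search.results is ordered by position; search.booked is the
        # booked listing (assumed fields, for illustration only).
        antecedent = search.results[0]
        if search.booked == antecedent:
            continue  # discard searches where position 0 was booked
        for l_y in search.results[1:]:
            if l_y != search.booked:
                # (booked l_x, not booked l_y, antecedent l_0)
                examples.append((search.booked, l_y, antecedent))
    return examples",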
"For a training example with $l_x$ as the booked listing, $l_y$ as the not booked listing, and $l_0$ as the antecedent listing, we compute $\\text{logit}_x=f_{1, \\theta _1}(q, u, l_x, l_0) ; \\text{logit}_y=f_{1, \\theta _1}(q, u, l_y, l_0)$ and the cross-entropy loss the same as in Equation REF .", "By minimizing the cross-entropy loss summed over all training examples, we can infer parameters $\\theta _1$ of the model such that $e^{\\text{logit}_x}/(e^{\\text{logit}_x}+e^{\\text{logit}_y})$ represents the pairwise probability a user will book $l_x$ over $l_y$ , given the condition that they have rejected the antecedent listing $l_0$.", "We write this conditional probability as $P_{\\text{booking}}(l_x>l_y \\mid \\mathcal {A}=\\lbrace l_0\\rbrace )$ .", "The conditional part of this pairwise booking probability arises because of the training data.", "The model is learning about the pairwise preference between $l_x$ vs. $l_y$ , but only from those users who have rejected $l_0$ .", "Intuitively, we expect the choice of such users to be different from $l_0$ , providing us a notion of diversity that can be learnt from the training data.", "For position 2, we follow a similar strategy.", "We discard all searches where either listing at position 0 or 1 was booked, to arrive at the model $f_{2, \\theta _2}(q, u, l, l_0, l_1)$ which gives $P_{\\text{booking}}(l_x>l_y \\mid \\mathcal {A}=\\lbrace l_0, l_1\\rbrace )$ .", "Generalizing for position $k$ , we get $f_{k, \\theta _{k}}(q, u, l, l_{0 \\rightarrow k-1})$ which predicts the conditional probability $P_{\\text{booking}}(l_x>l_y \\mid \\mathcal {A}=\\lbrace l_{0 \\rightarrow k-1}\\rbrace )$ .", "For ranking $N$ positions, we now have $N$ distinct ranking models $\\lbrace f_{0, \\theta _0}(q, u, l), f_{1, \\theta _1}(q, u, l, l_0), \\dots , f_{N-1, \\theta _{N-1}}(q, u, l, l_{0 \\rightarrow N-2})\\rbrace $ .", "These models can be plugged into Algorithm REF .", "In the $K$ th iteration of the loop, we can employ $f_{K, \\theta _{K}}(q, u, l, l_{0 \\rightarrow K-1})$ at line 3 to evaluate the logits, using the listings already placed at 0 through $K-1$ as antecedent listings.", "The logits are computed afresh in each iteration of the loop, now incorporating diversity.", "Though the theory presented in this section is simple, it is not a very practical one.", "For ranking $N$ positions, the number of models that need to be trained is $O(N)$ .", "On top of that, the computational complexity of Algorithm REF is $O(N^3)$ , since the loop at line 2 is $O(N)$ , the iteration over each listing at line 3 is $O(N)$ , and the complexity of evaluating $f_{K, \\theta _{K}}(q, u, l, l_{0 \\rightarrow K-1})$ is $O(N)$ .", "In the next section we discuss how to make the framework more practical.", "How to efficiently implement diversity in ranking?", "We start with the $N$ distinct models constructed for each of the $N$ positions, and simplify them one by one.", "The model for position 0, $f_{0, \\theta _0}(q, u, l)$ , is our regular pairwise booking probability model $f_{\\theta }(q, u, l)$ from Section REF .", "We treat this as our core model and write the base case as: $f_{0, \\theta _0}(q, u, l) = f_{\\theta }(q, u, l)$ .", "To simplify $f_{1, \\theta _1}(q, u, l, l_0)$ , the model for position 1, we compare it with $f_{\\theta }(q, u, l)$ .", "Both models are obtained by minimizing the cross-entropy loss over pairwise training examples consisting of booked and not booked listing pairs.", "The differences between them are: $f_{\\theta }(q, u, l)$ is trained over pairs constructed from all searches, whereas $f_{1, \\theta _1}(q, u, l, l_0)$ is trained on the subset of searches where the booked listing appears below position 0.", "$f_{\\theta }(q, u, l)$ has the listing being ranked $l$ as the input, whereas $f_{1, \\theta _1}(q, u, l, l_0)$ has the listing being ranked $l$ , as well as the antecedent listing $l_0$ , as inputs.", "We expect $f_{\\theta }(q, u, l)$ and $f_{1, \\theta _1}(q, u, l, l_0)$ to be fairly close since a large part of their training examples are shared.", "But we expect $f_{1, \\theta _1}(q, u, l, l_0)$ to outperform $f_{\\theta }(q, u, l)$ for position 1 since it can downrank listings that are too similar to the antecedent $l_0$ .", "We use this insight to simplify $f_{1, \\theta _1}(q, u, l, l_0)$ , refactoring it into two models as: $f_{1, \\theta _1}(q, u, l, l_0) = f_{\\theta }(q, u, l) - s_{\\phi }(q, u, l, l_0)$", "The first part of the refactor is $f_{\\theta }(q, u, l)$ , the regular pairwise booking probability model.", "The second part is a new model $s_{\\phi }(q, u, l, l_0)$ parameterized by $\\phi $ .", "It adds a negative term based on the similarity between $l$ and $l_0$ .", "This similarity is not defined by us; instead it is learnt from the training data that we built for $f_{1, \\theta _1}(q, u, l, l_0)$ .", "We train $f_{\\theta }(q, u, l)$ prior to training $f_{1, \\theta _1}(q, u, l, l_0)$ .", "When training $f_{1, \\theta _1}(q, u, l, l_0)$ , we don’t need to train the $\\theta $ parameters again.", "We can simply substitute $f_{\\theta }(q, u, l)$ by the unconditional booking logit $ubl_l$ , where $ubl_l=f_{\\theta }(q, u, l)$ is obtained by evaluating the model for $l$ .", "Thus, $f_{1, \\theta _1}(q, u, l, l_0) = ubl_l - s_{\\phi }(q, u, l, l_0)$", "Given a training example for $f_{1, \\theta _1}(q, u, l, l_0)$ , with $l_x$ as booked, $l_y$ as not booked, and $l_0$ as antecedent, we have $\\text{logit}_x=ubl_x - s_{\\phi }(q, u, l_x, l_0) ; \\text{logit}_y=ubl_y-s_{\\phi }(q, u, l_y, l_0)$ and the cross-entropy loss defined by Equation REF .", "The parameters $\\phi $ obtained by minimizing the cross-entropy loss over all the training examples give us a simplified construction of $f_{1, \\theta _1}(q, u, l, l_0)$ according to Equation REF .",
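"Concretely, the training step for $s_{\\phi }$ can reuse the earlier cross-entropy pattern; this sketch (ours) assumes the unconditional logits $ubl$ are precomputed by the already-trained, frozen core model:

import tensorflow as tf

def conditional_pairwise_loss(ubl_booked, ubl_not_booked,
                              s_booked, s_not_booked):
    # s_* are s_phi(q, u, l, l_0) for the booked / not booked listing;
    # only phi receives gradients, since ubl_* come from the frozen f_theta.
    logit_diffs = (ubl_booked - s_booked) - (ubl_not_booked - s_not_booked)
    xentropy = tf.nn.sigmoid_cross_entropy_with_logits(
        labels=tf.ones_like(logit_diffs), logits=logit_diffs)
    return tf.reduce_mean(xentropy)",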
"For position 2, we have two antecedent listings, one at position 0 and the other at position 1.", "We need to account for the similarity to both these antecedent listings.", "Though this time around, we do not need to learn the similarity model all over again.", "We can reuse the similarity model we learnt as part of $f_{1, \\theta _1}(q, u, l, l_0)$ .", "The refactored model for position 2 can therefore be written as: $f_{2, \\theta _2}(q, u, l, l_0, l_1) = ubl_l - s_{\\phi }(q, u, l, l_0) - s_{\\phi }(q, u, l, l_1)$", "But Equation REF is valid only if the effects of $l_0$ and $l_1$ are completely independent of each other.", "On the other hand, if $l_0$ were an exact duplicate of $l_1$ , then $l_1$ would not have any incremental effect and we could completely ignore it to rewrite Equation REF as $f_{2, \\theta _2}(q, u, l, l_0, l_1) = ubl_l - s_{\\phi }(q, u, l, l_0)$", "In reality, we expect the true effect to be somewhere in between Equation REF and Equation REF , which we write as $f_{2, \\theta _2}(q, u, l, l_0, l_1) &= ubl_l - s_{\\phi }(q, u, l, l_0) - \\lambda *s_{\\phi }(q, u, l, l_1) \\\\f_{2, \\theta _2}(q, u, l, l_0, l_1) &= f_{1, \\theta _1}(q, u, l, l_0) - \\lambda *s_{\\phi }(q, u, l, l_1)$ where $0 \\le \\lambda \\le 1$ .", "Figure REF depicts Equation REF in terms of a Venn diagram.", "Figure: Illustration for Equation REF", "For position 3, we have to factor in the effect of similarity to the additional antecedent listing $l_2$ given by $s_{\\phi }(q, u, l, l_2)$ .", "But we expect the effect to be reduced under the assumption that $l_2$ isn’t completely independent of $l_0$ and $l_1$ .", "To denote the incremental impact of $l_2$ , we scale the contribution from $l_2$ by $\\lambda ^2$ , once for the overlap with $l_0$ , and a second time for the overlap with $l_1$ .", "This gets us the refactored equation: $f_{3, \\theta _3}(q, u, l, l_0, l_1, l_2) = f_{2, \\theta _2}(q, u, l, l_0, l_1) - \\lambda ^2*s_{\\phi }(q, u, l, l_2)$", "Generalizing for position $(k+1)$ , we can write $f_{k+1, \\theta _{k+1}}(q, u, l, l_{0 \\rightarrow k}) = f_{k, \\theta _{k}}(q, u, l, l_{0 \\rightarrow k-1}) - \\lambda ^{k}*s_{\\phi }(q, u, l, l_{k}) = f_{\\theta }(q, u, l) - \\sum \\limits _{i=0}^{k} \\lambda ^{i}*s_{\\phi }(q, u, l, l_{i})$", "We now need to build only two models, $f_{\\theta }(q, u, l)$ and $s_{\\phi }(q, u, l, l_0)$ .", "The models for each of the positions can be expressed in terms of those two.",
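"Before the formal algorithm, a sketch (ours) of how the two models combine greedily; f_theta and s_phi stand in for the trained models, and the $\\lambda ^{i}$ decay per antecedent follows the generalized equation above:

def rank_diversely(listings, f_theta, s_phi, lam=1/3):
    # Unconditional booking logits, computed once and then discounted
    # in place as antecedents accumulate.
    logits = {l: f_theta(l) for l in listings}
    ranked, remaining = [], set(listings)
    while remaining:
        best = max(remaining, key=lambda l: logits[l])
        ranked.append(best)
        remaining.discard(best)
        # The antecedent just placed at position len(ranked) - 1 discounts
        # every remaining candidate by lambda^position * similarity.
        decay = lam ** (len(ranked) - 1)
        for l in remaining:
            logits[l] -= decay * s_phi(l, best)
    return ranked",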
"Using $f_{\\theta }(q, u, l)$ and $s_{\\phi }(q, u, l, l_0)$ , we can construct Algorithm REF , which provides an iterative way to construct the search results while taking diversity into account.", "Algorithm: Ranking diversely
Input: a set of $N$ listings $\\lbrace l_0, l_1, \\dots l_{N-1}\\rbrace $
Output: listing positions $\\lbrace \\text{pos}(l_0), \\text{pos}(l_1), \\dots \\text{pos}(l_{N-1})\\rbrace $
1: $\\mathcal {L} \\leftarrow \\lbrace l_0, l_1,..., l_{N-1}\\rbrace $
2: compute $\\text{logit}(l_i) \\leftarrow f_{\\theta }(q, u, l_i)$ foreach $l_i \\in \\mathcal {L}$
3: $l_{max} \\leftarrow \\text{argmax}(\\text{logit}(l_i), l_i \\in \\mathcal {L})$
4: $\\text{pos}(l_{max}) \\leftarrow 0$
5: $l_{\\text{atcdnt}} \\leftarrow l_{max}$
6: $\\mathcal {L} \\leftarrow \\mathcal {L} \\setminus l_{max}$
7: for $k \\leftarrow 1$ until $N$ :
8:     $\\text{logit}(l_i) \\leftarrow \\text{logit}(l_i) - \\lambda ^{k}*s_{\\phi }(q, u, l_i, l_{\\text{atcdnt}})$ foreach $l_i \\in \\mathcal {L}$
9:     $l_{max} \\leftarrow \\text{argmax}(\\text{logit}(l_i), l_i \\in \\mathcal {L})$
10:    $\\text{pos}(l_{max}) \\leftarrow k$
11:    $l_{\\text{atcdnt}} \\leftarrow l_{max}$
12:    $\\mathcal {L} \\leftarrow \\mathcal {L} \\setminus l_{max}$", "We can treat $\\lambda $ as a hyperparameter here and sweep through different values to find the one that maximizes $NDCG$ .", "In our case, we settled at $\\lambda = \\frac{1}{3}$ .", "The simplification in this section reduced the number of models from $O(N)$ to 2, and the computational complexity from $O(N^3)$ to $O(N^2)$ .", "Line 8 is now the bottleneck in Algorithm REF .", "The number of iterations of the loop is $O(N)$ , and iterating over each listing in line 8 is $O(N)$ .", "The main computation in this inner loop is the evaluation of $s_{\\phi }(q, u, l, l_{\\text{atcdnt}})$ .", "We can further reduce the complexity by optimizing the evaluation of $s_{\\phi }(q, u, l, l_{\\text{atcdnt}})$ .", "Our discussion until this point did not assume any model architecture.", "But for optimizing the evaluation of $s_{\\phi }(q, u, l, l_{\\text{atcdnt}})$ , we limit the discussion to neural networks, which is the model class we implemented.", "The bulk of the model complexity comes from the processing of the listing features.", "We use a tower of fully-connected layers to map the listing features into an embedding.", "A shallow layer at the top then combines the embeddings of the two input listings to finally output the similarity logit.", "See Figure REF .", "Figure: Neural network for $s_{\\phi }(q, u, l_i, l_j)$ .", "Weights are shared for the listing towers and their output is cached.", "The advantage of this architecture is that the output of the listing towers can be cached.", "We need to process each listing only once to map it to its corresponding embedding.", "In line 8 of Algorithm REF , we then reuse the cached embeddings and only evaluate the top part of the network.", "With these optimizations we were able to keep the latency impact to a minimum.",
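"A sketch (ours) of this two-tower architecture in Keras; the layer sizes are illustrative choices, and the query and user features are omitted for brevity even though the full $s_{\\phi }$ takes them as inputs:

import tensorflow as tf

def make_similarity_model(num_listing_features, embed_dim=32):
    # Shared tower: maps a listing's features to an embedding. Sharing
    # weights lets the tower outputs be cached and reused across line 8.
    tower = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(embed_dim),
    ])
    l_i = tf.keras.Input(shape=(num_listing_features,))
    l_j = tf.keras.Input(shape=(num_listing_features,))
    # Shallow top layer combining the two cached embeddings into a logit.
    combined = tf.keras.layers.Concatenate()([tower(l_i), tower(l_j)])
    logit = tf.keras.layers.Dense(1)(combined)
    return tf.keras.Model(inputs=[l_i, l_j], outputs=logit)",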
"How does this compare to other existing approaches?", "Enforcing Fairness in Ranking", "Imposing fairness constraints has been a popular method for diversifying search results.", "A ranking model to optimize a utility such as $NDCG$ , within a specified upper bound of unfairness, is discussed in  [13].", "An alternate approach to provide different groups equal impressions or proportional impressions in web search results is presented in  [8].", "Search results are built iteratively, where each step of the algorithm either chooses to optimize fairness with probability $\\varepsilon $ , or relevance with probability $1-\\varepsilon $ .", "The method in  [8] is attractive because of its simplicity, as it does not require building any new models.", "The limitation of these approaches is that they inherently pose the problem as a tradeoff between utility and fairness.", "Utility is the quantifiable benefit in the short term, bookings in our case, which needs to be sacrificed to accommodate fairness, with assumed benefits in the long term.", "These benefits of fairness are assumed because they are hard to quantify given the long time horizon needed.", "As a result, the knob between utility and fairness has to be controlled by faith.", "The strength of approaches such as  [13] and  [8] is that if one has a fairness goal in mind for which one is willing to sacrifice utility, such as gender equality between job applicants, or equality between opposing views on a topic, then these methods can achieve fairness while minimizing the utility loss.", "This is something our framework cannot handle, as it can diversify only in the direction that aligns with optimizing utility.", "This exposes a fundamental difference between two goals: 1) imposing some fairness constraints on search results, and 2) providing users with the optimal level of choice in search results.", "Both goals on the surface are related to diversification, but in the final analysis, they lead down different paths.", "Providing Context Features", "Another popular method to diversify ranking is to provide features of surrounding items in search results.", "These approaches are closer to our framework as they aim to improve utility through diversification, and diversity is not a goal in itself.", "An example is the ranking of widgets on the Amazon video homepage, discussed in  [6].", "The method relies on a predefined categorization of widgets based on a combination of content type and purchasing option.", "Features are then constructed to summarize the categories of the widgets placed in positions 0 through $k-1$ .", "These features capture the incremental gain in category diversity while scoring widgets for the $k$ th position.", "The limitation of this approach is that it diversifies only along the predefined categories.", "In the case of Airbnb, we found such categorization of listings quite challenging.", "Attempts to diversify listings based on location, price, amenities, etc. mostly led to negative results.", "When categorizing based on any particular criterion, for instance percentile price buckets, one ends up ignoring the rest of the dimensions such as location, quality, amenities, and aesthetics.", "Listings may bubble up in ranking simply because they belong to a certain price percentile bucket, compromising on one or more of the other dimensions that users care about.", "To prevent that, diversity needs to account for the entire context of the user, query, and listings – and the interactions between them.", "Our method aims for this generalized diversification.", "The strength of  [6] is its simplicity.", "If there is a categorization available at hand, and one cares about diversity only along those categories, then  [6] reduces the problem to feature engineering and does not require any new model to be learnt.", "Our previous attempt at diversity described in  [1] provides another method for supplying context information to the model.", "A recurrent neural network is used to map the features of all the listings in the search result to an embedding.", "This embedding can then be used as a feature by the ranking model to make more context-aware decisions.",
"The challenge with this method is loss of information.", "While ranking for the $k$ th position, the embeddings don't provide information about the listings placed at 0 through $k-1$ .", "Instead, the embeddings represent an aggregated summary of all the listings mushed together, and don’t allow the kind of pairwise comparisons a user would perform.", "The strength of this technique is its simplicity when scoring.", "Once the embedding is evaluated, plugging it in as a feature is straightforward, and requires no additional computation.", "Sequential and Setwise Ranking", "Abandoning the assumption that the utilities of individual items in search results are independent of each other, and making ranking aware of their interactions, has been gaining momentum in recent years.", "A reinforcement learning algorithm is used in  [7], where ranking for each position is considered one time step of the sequential Markov decision process.", "While ranking for the $k$ th position, the state of the RL algorithm encodes the items placed at positions 0 through $k-1$ .", "At each step, a Monte Carlo tree search is used to explore the updates to the policy.", "The challenge with this approach is its complexity, both for training the RL algorithm and for serving it online.", "Our approach reuses the pairwise booking probability model for the most part, combining it with a similarity model that is further optimized to reduce the computational complexity.", "Even then, we found a significant impact on online latency, as we discuss under experimental results.", "The strength of  [7] is that it explores the space of possible rankings more thoroughly, and could potentially find larger $NDCG$ gains.", "Taking all the items to be ranked as inputs and producing position assignments simultaneously is discussed in  [11].", "The challenge with this approach, once again, is its complexity.", "In our setting each listing has ~700 features, along with ~200 features for the query and the user.", "For diversification, $O(100)$ listings are evaluated.", "At this scale, the proposed setwise architecture is impractical given the system constraints for training and serving.", "The strength of the setwise approach is that it makes the fewest assumptions, and can be used as a tool to assess the upper bounds of the gains possible through diversification.", "A method to tackle the complexity of setwise ranking is discussed in  , which uses determinantal point processes.", "It is an alternative to our proposal that also relies on the concept of similarity.", "Listwise Loss", "We previously discussed how typical pairwise learning to rank frameworks consider only the attributes of the listing being ranked.", "Listwise learning to rank considers the entire list, so presumably it can overcome that limitation.", "Even then, in common listwise learning to rank algorithms, such as ListNet  [3], it is assumed that the booking probabilities of individual listings are independent of each other.", "This assumption is necessary to keep the computational complexity tractable.", "For handling diversity, it is not enough to have access to all the other listings in the search result; one has to explicitly take into consideration the interaction between the listings.", "Accounting for the interactions leads to the kind of problem formulation described in Section REF .", "The assumption regarding the independence of items ranked is relaxed in  [2].", "It encodes the features of the top results into an embedding using a recurrent
neural network, which is then used as a supplemental feature for scoring each item.", "The comparison of  [2] to our work is similar to what we discussed for  [1].", "How does the theory work in practice?", "We developed the theory in Sections REF and REF before embarking on actual implementation and experimentation.", "This allowed us to make certain predictions about the experimental results.", "In a flow of events reminiscent of developments in physics, the predicted experimental results were subsequently matched by tests offline and online.", "NDCG", "The first prediction to come out of the theory was a simple one: that $NDCG$ should improve.", "What made the prediction interesting was the expectation of an $NDCG$ gain with no additional information added to the training data.", "In contrast, most $NDCG$ gains over the past years required incrementally new information in the form of features, labels, or bug fixes.", "On the test set for $f_{1, \\theta _1}(q, u, l, l_0)$ , we observed an $NDCG$ gain of $0.45\\%$ compared to the baseline Algorithm REF .", "When measured on the test set of $f_{\\theta }(q, u, l)$ , this gain translated to $0.2\\%$ .", "This gain is smaller than $0.45\\%$ because the listing at the first position remains invariant between Algorithms REF and REF , diluting the effect.", "The impact on $NDCG$ measured in the online A/B test was much stronger, where we observed an increase of $1.5\\%$ .", "Bookings & Booking Value", "From the $0.2\\%$ $NDCG$ gain, we expected a similar gain in bookings online.", "Further, we expected these bookings to come from preference groups which were a minority in the training data.", "The most prominent minority preference is that of the quality-leaning searchers, as discussed in Section REF .", "This allowed us to make a further prediction: that there would be gains in gross booking value, or the sum of the prices of the booked trips, which would be multiple times the bookings gain.", "In the online A/B test, we observed a bookings gain of $0.29\\%$ .", "Along with it, we saw a $0.8\\%$ gain in gross booking value.", "Segmenting the bookings gain by user groups, we found almost the entire gain came from users who were booking a listing on Airbnb for the first time.", "Engagement", "Other observations from the online A/B test include a $0.46\\%$ increase in listings viewed, and a $1.1\\%$ increase in listings saved.", "This increased engagement with search results can be attributed to the increased choice.", "Price & Location", "To directly measure diversification, we compared metrics along some key dimensions.", "The first measure compared the variance in price among the top 8 results.", "We observed an increase of $3.4\\%$ in treatment, which captures the increased diversity in price, and hence quality by proxy.", "The second metric compared the number of listings in the top 8 results that were within $0.5$ km of each other.", "We noted a decrease of $0.62\\%$ , which shows reduced redundancy in location.", "Trip Quality", "To measure the impact of diversity on the entire user experience, we waited for 90 days after the end of the A/B test.", "This allowed the majority of the trips booked during the experiment to be realized.", "Comparing the ratings from guests checking out of their stays, we noted an increase of $0.4\\%$ in 5-star ratings.", "Diversifying search results shifts the balance away from the majority preference of affordability towards the minority preference of quality.", "This shift towards quality ultimately surfaces in improved trip ratings.", "Position Utilization", "We segmented the bookings gain in the A/B test by the position where the booked listing was first presented to the searcher.", "This revealed an interesting phenomenon: the utilization of the top position increased significantly, followed by some gain for the second position.", "The utilization decreased for subsequent positions.", "See Figure REF .", "Deciphering this phenomenon to get a better understanding of how searchers respond to increased choice is part of our future roadmap.", "Figure: Y-axis: position in search result.", "X-axis: percentage change in bookings.", "Latency", "The gains came at a latency cost, where we saw an increase of $8.4\\%$ in P95 latency and a $5.3\\%$ increase in median latency.", "Conclusion", "We started this paper discussing how ranking evolved at Airbnb.", "We conclude with a summary of our efforts to diversify ranking.", "Early attempts started in 2017 with category-based diversification.", "Various categories were tried, based on price, location, and amenities.", "All these efforts resulted in disappointment.", "This led to the conclusion that diversifying along a particular dimension mostly degraded the quality of results.", "Focus then moved to diversification along multiple dimensions, in particular combining price and location.", "These attempts resulted in failure as well.", "The breakthrough came in 2019 with  [1], where instead of forcing ranking to adhere to some predefined notion of diversity, we supplied the ranking model with more information, giving it the freedom to diversify.", "The current work continues with that philosophy.", "We revisited the problem in 2022 with a theory-first approach, letting the model learn the notion of diversity from the training data.", "This led to one of the most impactful ranking changes of the year.", "But as discussed in Section REF , this training data itself is biased against diversity.", "After the launch of the diversity ranker, we expect future training data to have richer examples to learn from, enabling a virtuous cycle of diversification." ] ]
], [ "Why does NDCG correlate with total bookings?", "A useful tool for our discussion in this section is the game of roulette.", "Consider $N$ games being played over the course of an evening.", "In each game, we have bets on $K$ different numbers.", "Let $P_{\\text{win}}(i,j)$ refer to the probability of win on the $j$ th bet in the $i$ th game.", "Let $win(i, j)$ be the corresponding winning amount in dollars.", "The total dollars won is expected to be: $E[\\text{total dollars won}]=\\sum \\limits _{i=0}^{N}\\sum \\limits _{j=0}^{K}P_{\\text{win}}(i, j)*win(i, j)$ At the end of the evening, we can sum up the actual realized wins from each bet to get the total, which is the observed outcome of the stochastic process.", "The observed total dollars won would converge to the expected total dollars won, provided $N$ is large enough .", "To analyze the total number of bookings from a given set of search results, we can employ a reasoning similar to roulette.", "Consider $N$ search results, each with $K$ listings.", "Getting a booking for a listing is equivalent to the winning event in roulette, and it depends on the attributes of the listing, as well as the listing’s position in search results.", "For the $i$ th search result, let the listing placed at the $j$ th position be $l_{i,j}$ .", "We denote its complete probability of booking as $P_{\\text{booking}}(l_{i,j})*P_{\\text{attention}}(j)$ , where $P_{\\text{booking}}(l_{i,j})$ is the booking probability of the listing $l_{i,j}$ , and $P_{\\text{attention}}(j)$ is the probability that the guest examines the $j$ th position of the search result.", "The equivalent of the winning amount in each case is one booking, so $win(i,j)=1$ .", "The total number of bookings expected can therefore be written as: $E[\\text{bookings}]=\\sum \\limits _{i=0} ^{N}\\sum \\limits _{j=0}^{K}P_{\\text{booking}}(l_{i,j})*P_{\\text{attention}}(j)$ Under the assumption that user attention drops monotonically as they scan the search results from top to bottom, i.e, $P_{\\text{attention}}(a)>P_{\\text{attention}}(b)$ if $a < b$ , we can show that $E[\\text{bookings}]$ is maximized if the listings are sorted by their booking probabilities.", "This property can be established using a proof by contradiction.", "Assume we have maximized $E[\\text{bookings}]$ , but the listings are not sorted by their booking probabilities.", "Then there must exist a pair of listings such that $P_{\\text{booking}}(l_{i,x}) <P_{\\text{booking}}(l_{i,y})$ and $P_{\\text{attention}}(x)>P_{\\text{attention}}(y)$ .", "Consider swapping the positions of the two listings $l_{i,x}$ and $l_{i,y}$ .", "The difference in expected bookings due to the swap is given by: ${\\mathsf {B}}_{x} = P_{\\text{booking}}(&l_{i,x}) \\; ; {\\mathsf {B}}_{y} = P_{\\text{booking}}(l_{i, y}) \\; ; \\; {\\mathsf {B}}_{x} < {\\mathsf {B}}_{y}\\\\{\\mathsf {A}}_{x} = P_{\\text{attention}}&(x) \\; ; \\; {\\mathsf {A}}_{y} = P_{\\text{attention}}(y) \\; ; \\; {\\mathsf {A}}_{x} > {\\mathsf {A}}_{y}\\\\\\Delta E[\\text{bookings}] = & \\;({\\mathsf {B}}_{x} {\\mathsf {A}}_{y} +{\\mathsf {B}}_{y}{\\mathsf {A}}_{x}) - ({\\mathsf {B}}_{x}{\\mathsf {A}}_{x}+{\\mathsf {B}}_{y}{\\mathsf {A}}_{y}) \\\\= & \\;({\\mathsf {B}}_{y} -{\\mathsf {B}}_{x})({\\mathsf {A}}_{x} - {\\mathsf {A}}_{y} )$ Since both terms of the product in Equation  are positive, $\\Delta E[\\text{bookings}] > 0$ .", "This implies the previous sum could not be the maximum.", "Hence the listings must be sorted by booking probability to maximize the total expected 
bookings.", "For alternate arguments supporting the property, see  [12].", "The difference in total expected bookings due to an unsorted pair, $\\Delta E[\\text{bookings}]$ , is the product of the difference in booking probability and the difference in attention to the two positions.", "As we discuss next, $NDCG$ tracks total bookings so well because it follows the same conditions.", "Let’s begin by considering how $NDCG$ is computed.", "We adopt a binary definition of relevance where a booked listing has a relevance of 1, and all other listings 0 relevance.", "For simplicity, we assume only a single listing is booked from a given search result.", "$NDCG$ can then be written as: $NDCG = \\frac{1}{N}\\sum \\limits _{i=0}^{N}\\frac{\\text{log}(2)}{\\text{log}(2 + \\text{pos}_{i})}$ where $\\text{pos}_{i}$ refers to the position of the booked listing in the $i$ th search.", "We can map the computation of $NDCG$ back to our game of roulette, where we consider each search result as an individual game as before.", "The winning event is defined as a listing getting booked, and the winning amount at position $j$ is set to $\\text{log}(2)/(\\text{log}(2 + j))$ .", "The formula for $NDCG$ in Equation REF is computing the observed total value of this stochastic process, a simple sum over the realized individual wins.", "What about the expected value of $NDCG$ ?", "The expected value would be the sum of the winning amounts for each position, weighted by the probability of attaining the win.", "This can be written as: $ E[NDCG]=\\frac{1}{N}\\sum \\limits _{i=0}^{N}\\sum \\limits _{j=0}^{K}P(l_{i,j})*\\frac{\\text{log}(2)}{\\text{log}(2 + j)}$ Provided we evaluate $NDCG$ over a large number of searches, the expected and observed values would converge, and maximizing Equation REF is equivalent to maximizing Equation REF .", "The monotonic decay of $\\text{log}(2)/\\text{log}(2 + j)$ together with Equation REF implies that sorting the listings by their booking probabilities maximizes $NDCG$ .", "The proof is by contradiction once again.", "Further, the drop in $NDCG$ from a pair of listings $l_{i,x}$ and $l_{i,y}$ not ordered by booking probabilities is given by: $\\Delta E[NDCG] = & (P_{\\text{booking}}(l_{i,y})-P_{\\text{booking}}(l_{i,x}))\\\\&* \\left(\\frac{\\text{log}(2)}{\\text{log}(2+x)}- \\frac{\\text{log}(2)}{\\text{log}(2+y)}\\right)$ $\\Delta E[NDCG]$ in Equation  correlates with $\\Delta E[\\text{bookings}]$ in Equation  because the positional discount curve is constructed based on how user attention decays by position, making $\\text{log}(2)/\\text{log}(2+x)$ proportional to $P_{\\text{attention}}(x)$ .", "As a result, a gain in $NDCG$ is also a strong indicator of gain in total bookings.", "To recap, both total bookings and $NDCG$ are maximized when listings are sorted by their booking probabilities.", "In the case of total bookings, this arises due to the fact that we accurately represented the probability of booking as $P_{\\text{booking}}(l_{i,j})*P_{\\text{attention}}(j)$ .", "However for $NDCG$ , we made the simplified assumption that the probability of booking is given by $P_{\\text{booking}}(l_{i,j})$ alone, and independent of the position where the listing is placed.", "This assumption is a useful one, since it allows us to shuffle listings around in offline analysis, and compute $NDCG$ without requiring fresh user input.", "But then, to align $NDCG$ with total bookings, we need to weigh each potential booking by its positional discount.", "Utilizing the fact that listings ordered 
by their booking probabilities maximize $NDCG$ and total bookings, we can design a straightforward iterative algorithm to construct optimal search results as shown in Algorithm .", "Algorithm 1: Ranking by booking probabilities. Input: a set of $N$ listings $\\lbrace l_0, l_1, \\dots , l_{N-1}\\rbrace $ . Output: listing positions $\\lbrace \\text{pos}(l_0), \\text{pos}(l_1), \\dots , \\text{pos}(l_{N-1})\\rbrace $ . Line 1: $\\mathcal {L} \\leftarrow \\lbrace l_0, l_1, \\dots , l_{N-1}\\rbrace $ . Line 2: for $k \\leftarrow 0$ until $N$ . Line 3: compute $\\text{logit}(l_i) \\leftarrow f_{\\theta }(q, u, l_i)$ foreach $l_i \\in \\mathcal {L}$ . Line 4: $l_{max} \\leftarrow \\text{argmax}(\\text{logit}(l_i), l_i \\in \\mathcal {L})$ . Line 5: $\\text{pos}(l_{max}) \\leftarrow k$ . Line 6: $\\mathcal {L} \\leftarrow \\mathcal {L} \\setminus l_{max}$ .", "In Algorithm , we rely on Property REF , which allows us to use the pairwise booking logit interchangeably with booking probability.", "$\\text{logit}(l_i)$ on line 3 depends only on the attributes of the listing being ranked, $l_i$ , besides the query and the user.", "Since the attributes of a listing are invariant throughout, line 3 in Algorithm  can be taken out of the for loop.", "This reduces Algorithm  to computing $\\text{logit}(l_i)$ once for each listing, and then using it to sort the listings.", "But there is no binding restriction that the booking probability of a listing must be independent of the other listings and depend on the attributes of the given listing alone.", "Specifically, consider iteration $K+1$ of the for loop in Algorithm .", "We have already placed listings in position 0 through $K-1$ , and we need to select a listing for position $K$ by computing the logits of the remaining $N-K$ listings.", "When computing the logit for a given listing $l_i$ , we can factor in the attributes of $l_i$ , as well as all the attributes of the $K$ listings placed at 0 through $K-1$ .", "By including the attributes of listings at 0 through $K-1$ , we can ensure the calculated logits of the $N-K$ listings are more accurate.", "Listings being ranked which are too similar to the listings at 0 through $K-1$ can have their logits corrected to lower values.", "Maximization of $NDCG$ is preserved through this process of extending the inputs to include attributes of the listings placed at 0 through $K-1$ .", "The critical step in the proof for maximal $NDCG$ , where we swap the listings not ordered by booking probability to demonstrate a contradiction, continues to work.", "That's because the extended inputs from the listings at 0 through $K-1$ are invariant in the swapping process.", "This provides a mechanism for diversification that is aligned with maximizing $NDCG$ .", "We require no new metrics to evaluate diversity.", "Due to the relation between $NDCG$ and total bookings, we expect this mechanism to directly increase total bookings as well.", "To summarize, diversity removes redundant choices, thereby improving utilization of positions in search results, which gets reflected in improved $NDCG$ and total bookings.", "While we don’t need a new metric for evaluating diversity, the situation is different when it comes to implementing diversity.", "For diversity-aware booking probabilities, we require the attributes of the listings that are placed before.", "This information is not available in the garden-variety pointwise or pairwise learning to rank frameworks.", "Hence we need a new kind of model, which we discuss next."
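To make the reduction concrete, here is a minimal Python sketch of Algorithm 1 under the position-independent assumption; `booking_logit` is a hypothetical stand-in for the pairwise model $f_{\\theta }(q, u, l)$ with the query and user held fixed. Because line 3 can be hoisted out of the loop, the iterative argmax collapses to a single sort.

    from typing import Callable, List, Sequence

    def rank_by_booking_probability(
        listings: Sequence[str],
        booking_logit: Callable[[str], float],
    ) -> List[int]:
        """Assign positions so that higher booking logits get earlier positions."""
        logits = [booking_logit(l) for l in listings]  # line 3: computed once per listing
        order = sorted(range(len(listings)), key=lambda i: logits[i], reverse=True)
        pos = [0] * len(listings)
        for k, i in enumerate(order):  # the k-th best listing takes position k
            pos[i] = k
        return pos

    # Toy usage with hypothetical logit values:
    toy_logits = {"l0": 0.2, "l1": 1.5, "l2": -0.3}
    print(rank_by_booking_probability(list(toy_logits), toy_logits.__getitem__))
    # -> [1, 0, 2]: l1 is placed first, then l0, then l2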
], [ "How to implement diversity in ranking?", "In this section we build a framework to rank listings for $N$ positions while incorporating diversity.", "Instead of building a single model, we build $N$ models, one dedicated for each of the $N$ positions.", "Starting at position 0, we reuse the regular pairwise booking probability model described in Section .", "Let’s name the model $f_{0, \\theta _0}(q, u, l)$ , where the 0 index refers to the position in search result, and $\\theta _0$ the parameters of the model.", "To recap from Section , $f_{0, \\theta _0}(q, u, l)$ maps each listing to a pairwise logit, where sigmoid of the difference of two pairwise logits gives the pairwise booking probability.", "Now let’s construct $f_{1, \\theta _1}(q, u, l, l_0)$ , the model for position 1.", "This model has an additional input $l_0$ , which we call the antecedent listing from position 0.", "A user scanning the search results from top to bottom would consider the listing at position 1, only if the antecedent listing at position 0 did not meet their requirements.", "See  [4] and  [5] for an in-depth study of this phenomenon.", "Thus when ranking for position 1, we have incrementally more information than we did when ranking for position 0.", "Leveraging this new information, we construct $f_{1, \\theta _1}(q, u, l, l_0)$ conditional upon the fact that the user has rejected the listing at position 0.", "To construct training examples for this conditional pairwise booking probability model, we go through the search logs and discard all searches where the listing at position 0 was booked.", "For the remaining searches, we set aside the listing at position 0, denoting it the antecedent listing.", "From the listings below position 0, we create pairs of booked and not booked listings, similar to how pairwise booking examples are created.", "Figure  REF illustrates this.", "Figure: Training data construction for f 1,θ 1 (q,u,l,l 0 )f_{1, \\theta _1}(q, u, l, l_0)For a training example with $l_x$ as the booked listing, $l_y$ as the not booked listing, and $l_0$ as the antecedent listing, we compute $\\text{logit}_x=f_{1, \\theta _1}(q, u, l_x, l_0) ; \\text{logit}_y=f_{1, \\theta _1}(q, u, l_y, l_0)$ and cross-entropy loss the same as Equation REF .", "By minimizing the cross-entropy loss summed over all training examples, we can infer parameters $\\theta _1$ of the model such that $e^{\\text{logit}_x}/(e^{\\text{logit}_x}+e^{\\text{logit}_y})$ represents the pairwise probability a user will book $l_x$ over $l_y$ , given the condition they have rejected the antecedent listing $l_0$.", "We write this conditional probability as $P_{\\text{booking}}(l_x>l_y \\mid \\mathcal {A}=\\lbrace l_0\\rbrace )$ .", "The conditional part of this pairwise booking probability arises because of the training data.", "The model is learning about pairwise preference between $l_x$ vs. 
$l_y$ , but only from those users who have rejected $l_0$ .", "Intuitively, we expect the choice of such users to be different from $l_0$ , providing us a notion of diversity that can be learnt from the training data.", "For position 2, we follow a similar strategy.", "We discard all searches where either listing at position 0 or 1 was booked, to arrive at the model $f_{2, \\theta _2}(q, u, l, l_0, l_1)$ which gives $P_{\\text{booking}}(l_x>l_y \\mid \\mathcal {A}=\\lbrace l_0, l_1\\rbrace )$ .", "Generalizing for position $k$ , we get $f_{k, \\theta _{k}}(q, u, l, l_{0 \\rightarrow k-1})$ which predicts the conditional probability $P_{\\text{booking}}(l_x>l_y \\mid \\mathcal {A}=\\lbrace l_{0 \\rightarrow k-1}\\rbrace )$ .", "For ranking $N$ positions, we now have $N$ distinct ranking models $\\lbrace f_{0, \\theta _0}(q, u, l), f_{1, \\theta _1}(q, u, l, l_0), \\dots , f_{N-1, \\theta _{N-1}}(q, u, l, l_{0 \\rightarrow N-2})\\rbrace $ .", "These models can be plugged into Algorithm .", "In the $K$ th iteration of loop at line 3, we can employ $f_{K, \\theta _{K}}(q, u, l, l_{0 \\rightarrow K-1})$ to evaluate the logits, using listings already placed at 0 through $K-1$ as antecedent listings.", "The logits are computed afresh in each iteration of the loop, now incorporating diversity.", "Though the theory presented in this section is simple, it is not a very practical one.", "For ranking $N$ positions, the number of models needed to be trained is $O(N)$ .", "On top of it, the computational complexity of Algorithm  is $O(N^3)$ , since the loop at line 2 is $O(N)$ , the iteration over each listing at line 3 is $O(N)$ , and the complexity of evaluating $f_{K, \\theta _{K}}(q, u, l, l_{0 \\rightarrow K-1})$ is $O(N)$ .", "In the next section we discuss how to make the framework more practical." 
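As an illustration of the training-data construction described in this section, the following Python sketch builds examples for $f_{1, \\theta _1}(q, u, l, l_0)$ under an assumed search-log record layout (the `SearchLog` fields are hypothetical, not the production schema): searches where position 0 was booked are discarded, the listing at position 0 becomes the antecedent, and booked/not-booked pairs are formed from the listings below it.

    from dataclasses import dataclass
    from typing import Iterable, Iterator, List, Tuple

    @dataclass
    class SearchLog:
        listings: List[str]   # listing ids ordered by position, index 0 = top
        booked_index: int     # position of the booked listing in this search

    def conditional_pair_examples(
        logs: Iterable[SearchLog],
    ) -> Iterator[Tuple[str, str, str]]:
        """Yield (booked, not_booked, antecedent) training triples for f_1."""
        for log in logs:
            if log.booked_index == 0:
                continue  # the user accepted position 0: no rejection signal
            antecedent = log.listings[0]            # set aside the top listing
            booked = log.listings[log.booked_index]
            for j, candidate in enumerate(log.listings):
                if j not in (0, log.booked_index):  # not booked, below position 0
                    yield booked, candidate, antecedent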
], [ "How to efficiently implement diversity in ranking?", "We start with the $N$ distinct models constructed for each of the $N$ positions, and simplify them one by one.", "The model for position 0, $f_{0, \\theta _0}(q, u, l)$ , is our regular pairwise booking probability model $f_{\\theta }(q, u, l)$ from Section .", "We treat this as our core model and write the base case as: $f_{0, \\theta _0}(q, u, l) = f_{\\theta }(q, u, l)$ To simplify $f_{1, \\theta _1}(q, u, l, l_0)$ , the model for position 1, we compare it with $f_{\\theta }(q, u, l)$ .", "Both models are obtained by minimizing the cross-entropy loss over pairwise training examples consisting of booked and not booked listing pairs.", "The differences between them are: $f_{\\theta }(q, u, l)$ is trained over pairs constructed from all searches.", "$f_{1, \\theta _1}(q, u, l, l_0)$ is trained on the subset of searches where the booked listing appears below position 0.", "$f_{\\theta }(q, u, l)$ has the listing being ranked $l$ as the input, whereas $f_{1, \\theta _1}(q, u, l, l_0)$ has the listing being ranked $l$ , as well as the antecedent listing $l_0$ , as inputs.", "We expect $f_{\\theta }(q, u, l)$ and $f_{1, \\theta _1}(q, u, l, l_0)$ to be fairly close since a large part of their training examples are shared.", "But we expect $f_{1, \\theta _1}(q, u, l, l_0)$ to outperform $f_{\\theta }(q, u, l)$ for position 1 since it can downrank listings that are too similar to the antecedent $l_0$ .", "We use this insight to simplify $f_{1, \\theta _1}(q, u, l, l_0)$ , refactoring it into two models as: $f_{1, \\theta _1}(q, u, l, l_0) = f_{\\theta }(q, u, l) - s_{\\phi }(q, u, l, l_0)$ First part of the refactor is $f_{\\theta }(q, u, l)$ , the regular pairwise booking probability model.", "The second part is a new model $s_{\\phi }(q, u, l, l_0)$ parameterized by $\\phi $ .", "It adds a negative term based on the similarity between $l$ and $l_0$ .", "This similarity is not defined by us, instead it is learnt from the training data that we built for $f_{1, \\theta _1}(q, u, l, l_0)$ .", "We train $f_{\\theta }(q, u, l)$ prior to training $f_{1, \\theta _1}(q, u, l, l_0)$ .", "When training $f_{1, \\theta _1}(q, u, l, l_0)$ , we don’t need to train the $\\theta $ parameters again.", "We can simply substitute $f_{\\theta }(q, u, l)$ by the unconditional booking logit $ubl_l$ , where $ubl_l=f_{\\theta }(q, u, l)$ is obtained by evaluating the model for $l$ .", "Thus, $f_{1, \\theta _1}(q, u, l, l_0) = ubl_l - s_{\\phi }(q, u, l, l_0)$ Given a training example for $f_{1, \\theta _1}(q, u, l, l_0)$ , with $l_x$ as booked, $l_y$ as not booked, and $l_0$ as antecedent, we have $\\text{logit}_x=ubl_x - s_{\\phi }(q, u, l_x, l_0) ; \\text{logit}_y=ubl_y-s_{\\phi }(q, u, l_y, l_0)$ and cross-entropy loss defined by Equation REF .", "The parameters $\\phi $ obtained by minimizing the cross-entropy loss over all the training examples gives us a simplified construction of $f_{1, \\theta _1}(q, u, l, l_0)$ according to Equation REF .", "For position 2, we have two antecedent listings, one at position 0 and the other one at position 1.", "We need to account for the similarity to both these antecedent listings.", "Though this time around, we do not need to learn the similarity model all over again.", "We can reuse the similarity model we learnt as part of $f_{1, \\theta _1}(q, u, l, l_0)$ .", "The refactored model for position 2 can therefore be written as: $f_{2, \\theta _2}(q, u, l, l_0, l_1) = ubl_l - s_{\\phi }(q, u, l, l_0) - s_{\\phi }(q, u, 
"But Equation REF is valid only if the effects of $l_0$ and $l_1$ are completely independent of each other.", "On the other hand, if $l_0$ were an exact duplicate of $l_1$ , then $l_1$ would not have any incremental effect and we could completely ignore it to rewrite Equation REF as $f_{2, \\theta _2}(q, u, l, l_0, l_1) = ubl_l - s_{\\phi }(q, u, l, l_0)$ In reality, we expect the true effect to be somewhere in between Equation REF and  REF , which we write as $f_{2, \\theta _2}(q, u, l, l_0, l_1) = ubl_l - s_{\\phi }(q, u, l, l_0) - \\lambda *s_{\\phi }(q, u, l, l_1) = f_{1, \\theta _1}(q, u, l, l_0) - \\lambda *s_{\\phi }(q, u, l, l_1)$ where $0 \\le \\lambda \\le 1$ .", "Figure REF depicts Equation REF in terms of a Venn diagram.", "Figure: Illustration for Equation REF .", "For position 3, we have to factor in the effect of similarity to the additional antecedent listing $l_2$ given by $s_{\\phi }(q, u, l, l_2)$ .", "But we expect the effect to be reduced under the assumption that $l_2$ isn’t completely independent of $l_0$ and $l_1$ .", "To denote the incremental impact of $l_2$ , we scale the contribution from $l_2$ by $\\lambda ^{2}$ , once for the overlap with $l_0$ , and a second time for the overlap with $l_1$ .", "This gets us the refactored equation: $f_{3, \\theta _3}(q, u, l, l_{0 \\rightarrow 2}) = f_{2, \\theta _2}(q, u, l, l_{0 \\rightarrow 1}) - \\lambda ^{2}*s_{\\phi }(q, u, l, l_2)$ Generalizing for position $(k+1)$ , we can write $f_{k+1, \\theta _{k+1}}(q, u, l, l_{0 \\rightarrow k}) = f_{k, \\theta _{k}}(q, u, l, l_{0 \\rightarrow k-1}) - \\lambda ^{k}*s_{\\phi }(q, u, l, l_k) = f_{\\theta }(q, u, l) - \\sum \\limits _{i=0}^{k}\\lambda ^{i}*s_{\\phi }(q, u, l, l_i)$ We now need to build only two models, $f_{\\theta }(q, u, l)$ and $s_{\\phi }(q, u, l, l_0)$ .", "The models for each of the positions can be expressed in terms of those two.", "Using $f_{\\theta }(q, u, l)$ and $s_{\\phi }(q, u, l, l_0)$ , we can construct Algorithm REF which provides an iterative way to construct the search results, while taking diversity into account.", "Algorithm 2: Ranking diversely. Input: a set of $N$ listings $\\lbrace l_0, l_1, \\dots , l_{N-1}\\rbrace $ . Output: listing positions $\\lbrace \\text{pos}(l_0), \\text{pos}(l_1), \\dots , \\text{pos}(l_{N-1})\\rbrace $ . Line 1: $\\mathcal {L} \\leftarrow \\lbrace l_0, l_1, \\dots , l_{N-1}\\rbrace $ . Line 2: compute $\\text{logit}(l_i) \\leftarrow f_{\\theta }(q, u, l_i)$ foreach $l_i \\in \\mathcal {L}$ . Line 3: $l_{max} \\leftarrow \\text{argmax}(\\text{logit}(l_i), l_i \\in \\mathcal {L})$ . Line 4: $\\text{pos}(l_{max}) \\leftarrow 0$ . Line 5: $l_{\\text{atcdnt}} \\leftarrow l_{max}$ . Line 6: $\\mathcal {L} \\leftarrow \\mathcal {L} \\setminus l_{max}$ . Line 7: for $k \\leftarrow 1$ until $N$ . Line 8: $\\text{logit}(l_i) \\leftarrow \\text{logit}(l_i) - \\lambda ^{k}*s_{\\phi }(q, u, l_i, l_{\\text{atcdnt}})$ foreach $l_i \\in \\mathcal {L}$ . Line 9: $l_{max} \\leftarrow \\text{argmax}(\\text{logit}(l_i), l_i \\in \\mathcal {L})$ . Line 10: $\\text{pos}(l_{max}) \\leftarrow k$ . Line 11: $l_{\\text{atcdnt}} \\leftarrow l_{max}$ . Line 12: $\\mathcal {L} \\leftarrow \\mathcal {L} \\setminus l_{max}$ .", "We can treat $\\lambda $ as a hyperparameter here and sweep through different values to find the one that maximizes $NDCG$ .", "In our case, we settled at $\\lambda = \\frac{1}{3}$ .", "The simplification in this section reduced the number of models from $O(N)$ to 2, and the computational complexity from $O(N^3)$ to $O(N^2)$ .", "Line 8 is now the bottleneck in Algorithm REF .", "The number of iterations of the loop is $O(N)$ , and iterating over each listing in line 8 is $O(N)$ .", "The main computation in this inner loop is evaluation of $s_{\\phi }(q, u, l, l_{\\text{atcdnt}})$ .", "We can further reduce the complexity by optimizing the evaluation of
$s_{\\phi }(q, u, l, l_{\\text{atcdnt}})$ .", "Our discussion until this point did not assume any model architecture.", "But for optimizing the evaluation of $s_{\\phi }(q, u, l, l_{\\text{atcdnt}})$ , we limit the discussion to neural networks, which is the model we implemented.", "The bulk of the model complexity comes from the processing of the listing features.", "We use a tower of fully-connected layers to map the listing features into an embedding.", "A shallow layer at the top then combines the embeddings of the two input listings to finally output the similarity logit.", "See Figure REF .", "Figure: Neural network for $s_{\\phi }(q, u, l_i, l_j)$ .", "Weights are shared for the listing towers and their output is cached.", "The advantage of this architecture is that the output of the listing towers can be cached.", "We need to process each listing only once to map it to its corresponding embedding.", "In line 8 of Algorithm REF , we then reuse the cached embeddings and only evaluate the top part of the network.", "With these optimizations we were able to keep the latency impact to a minimum."
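Putting the pieces together, the following Python sketch mirrors Algorithm 2 with the caching optimization; `booking_logit`, `embed`, and `similarity` are hypothetical callables standing in for $f_{\\theta }$ , the shared listing tower, and the shallow top layer of $s_{\\phi }$ , respectively. Each listing is embedded exactly once, so the line 8 update only evaluates the cheap top of the network.

    def rank_diversely(listings, booking_logit, embed, similarity, lam=1.0 / 3):
        """Iteratively place listings, discounting similarity to antecedents."""
        emb = {l: embed(l) for l in listings}            # listing towers, cached
        logit = {l: booking_logit(l) for l in listings}  # unconditional logits
        remaining = set(listings)
        pos = {}
        antecedent = max(remaining, key=logit.get)       # position 0: plain argmax
        pos[antecedent] = 0
        remaining.remove(antecedent)
        for k in range(1, len(listings)):
            for l in remaining:  # line 8: penalize similarity to the last antecedent
                logit[l] -= (lam ** k) * similarity(emb[l], emb[antecedent])
            antecedent = max(remaining, key=logit.get)
            pos[antecedent] = k
            remaining.remove(antecedent)
        return pos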
"But as discussed in Section , this training data itself is biased against diversity.", "After the launch of the diversity ranker, we expect future training data to have richer examples to learn from, enabling a virtuous cycle of diversification." ], [ "Enforcing Fairness in Ranking", "Imposing fairness constraints has been a popular method for diversifying search results.", "A ranking model to optimize a utility such as $NDCG$ , within a specified upper bound of unfairness is discussed in  [13].", "An alternate approach to provide different groups equal impressions or proportional impressions in web search results is presented in  [8].", "Search results are built iteratively, where each step of the algorithm either chooses to optimize fairness with probability $\\varepsilon $ , or relevance with probability $1-\\varepsilon $ .", "The method in  [8] is attractive because of its simplicity, as it does not require building any new models.", "The limitation of these approaches is that they inherently pose the problem as a tradeoff between utility and fairness.", "Utility is the quantifiable benefit in the short term, bookings in our case, which needs to be sacrificed to accommodate fairness, with assumed benefits in the long term.", "These benefits of fairness are assumed because they are hard to quantify given the long time horizon needed.", "As a result, the knob between utility and fairness has to be controlled by faith.", "The strength of approaches such as  [13] and  [8] is that if one has a fairness goal in mind for which one is willing to sacrifice utility, such as gender equality between job applicants, or equality between opposing views on a topic, then these methods can achieve fairness while minimizing the utility loss.", "This is something our framework cannot handle, which can diversify only in the direction that aligns with optimizing utility.", "This exposes a fundamental difference between two goals: 1) imposing some fairness constraints on search results, and 2) providing users with the optimal level of choice in search results.", "Both goals on the surface are related to diversification, but in the final analysis, they lead to different paths.", "Another popular method to diversify ranking is to provide features of surrounding items in search results.", "These approaches are closer to our framework as they aim to improve utility through diversification, and diversity is not a goal in itself.", "An example is the ranking of widgets on Amazon video homepage, discussed in  [6].", "The method relies on a predefined categorization of widgets based on a combination of content type and purchasing option.", "Features are then constructed to summarize the categories of the widgets placed in positions 0 through $k-1$ .", "These features capture the incremental gain in category diversity while scoring widgets for the $k$ th position.", "The limitation of this approach is that it diversifies only along the predefined categories.", "In the case of Airbnb, we found such categorization of listings quite challenging.", "Attempts to diversify listings based on location, price, amenities, etc.", "mostly lead to negative results.", "When categorizing based on any particular criteria, for instance percentile price buckets, one ends up ignoring the rest of the dimensions such as location, quality, amenities, and aesthetics.", "Listings may bubble up in ranking simply because they belong to a certain price percentile bucket, compromising on one or more of the other dimensions that users care 
about.", "To prevent that, diversity needs to account for the entire context of the user, query, and listings – and the interactions between them.", "Our method aims for this generalized diversification.", "The strength of  [6] is its simplicity.", "If there is a categorization available at hand, and one cares about diversity only along those categories, then  [6] reduces the problem to feature engineering and does not require any new model to be learnt.", "Our previous attempt at diversity described in  [1] provides another method for supplying context information to the model.", "A recurrent neural network is used to map the features of all the listings in the search result to an embedding.", "This embedding can then be used as a feature by the ranking model to make more context aware decisions.", "The challenge with this method is loss of information.", "While ranking for the $k$ th position, the embeddings don't provide information about the listings placed at 0 through $k-1$ .", "Instead, the embeddings represent an aggregated summary of all the listings mushed together, and doesn’t allow the kind of pairwise comparisons a user would perform.", "The strength of this technique is its simplicity when scoring.", "Once the embedding is evaluated, plugging it in as a feature is straightforward, and requires no additional computation.", "Abandoning the assumption that utility of individual items in search results are independent of each other, and making ranking aware of their interactions has been gaining momentum in recent years.", "A reinforcement learning algorithm is used in  [7], where ranking for each position is considered one time step of the sequential Markov decision process.", "While ranking for the $k$ th position, the state of the RL algorithm encodes the items placed at positions 0 through $k-1$ .", "At each step, a Monte Carlo tree search is used to explore the updates to policy.", "The challenge with this approach is its complexity, both for training the RL algorithm and for serving it online.", "Our approach reuses the pairwise booking probability model for the most part, combining it with a similarity model that is further optimized to reduce the computational complexity.", "Even then, we found a significant impact on online latency as we discuss under experimental results.", "The strength of  [7] is that it explores the space of possible rankings more thoroughly, and could potentially find larger $NDCG$ gains.", "Taking all the items to be ranked as inputs and producing position assignments simultaneously is discussed in  [11].", "The challenge with this approach, once again, is its complexity.", "In our setting each listing has ~700 features, along with ~200 features for the query and the user.", "For diversification $O(100)$ listings are evaluated.", "At this scale, the proposed setwise architecture is impractical given the system constraints for training and serving.", "The strength of the setwise approach is that it makes the least assumptions, and can be used as a tool to assess the upper bounds of the gains possible through diversification.", "A method to tackle the complexity of setwise ranking is discussed in  , which uses determinantal point processes.", "It is an alternative to our proposal that also relies on the concept of similarity.", "We previously discussed how typical pairwise learning to rank frameworks consider only the attributes of the listing being ranked.", "Listwise learning to rank considers the entire list, so presumably they can overcome that 
limitation.", "Even then, in common listwise learning to rank algorithms, such as ListNet  [3], it is assumed that the booking probability of individual listings are independent of each other.", "This assumption is necessary to keep the computational complexity tractable.", "For handling diversity, it is not enough to have access to all the other listings in the search result, one has to explicitly take into consideration the interaction between the listings.", "Accounting for the interactions leads to the kind of problem formulation described in Section .", "The assumption regarding the independence of items ranked is relaxed in  [2].", "It encodes the features of the top results into an embedding using a recurrent neural network, which are then used as supplemental features for scoring each item.", "The comparison of  [2] to our work is similar to what we discussed for  [1]." ], [ "How does the theory work in practice?", "We developed the theory in Sections  and   before embarking on actual implementation and experimentation.", "This allowed us to make certain predictions about the experimental results.", "In a flow of events reminiscent of developments in physics, the predicted experimental results were subsequently matched by tests offline and online." ], [ "NDCG", "The first prediction to come out of the theory was a simple one: that $NDCG$ should improve.", "What made the prediction interesting was the expectation of an $NDCG$ gain, with no additional information added to the training data.", "In contrast, most $NDCG$ gains over the past years required incrementally new information in the form of features, labels, or bug fixes.", "On the test set for $f_{1, \\theta _1}(q, u, l, l_0)$ , we observed an $NDCG$ gain of $0.45\\%$ compared to the baseline Algorithm .", "When measured on the test set of $f_{\\theta }(q, u, l)$ , this gain translated to $0.2\\%$ .", "This gain is smaller than $0.45\\%$ because the listing at the first position remains invariant between Algorithm   and  REF , diluting the effect.", "The impact on $NDCG$ measured in the online A/B test was much stronger, where we observed an increase of $1.5\\%$ .", "From the $0.2\\%$ $NDCG$ gain, we expected a similar gain in bookings online.", "Further, we expected these bookings to come from preference groups which were a minority in the training data.", "The most prominent minority preference being quality-leaning searchers, as discussed in Section .", "This allowed us to make a further prediction: that there would be gains in gross booking value, or the sum of the prices of the booked trips, which would be multiple times over the bookings gain.", "In the online A/B test, we observed a bookings gain of $0.29\\%$ .", "Along with it, we saw a $0.8\\%$ gain in gross booking value.", "Segmenting the bookings gain by user groups, we found almost the entire gain came from users who were booking a listing on Airbnb for the first time.", "Other observations from the online A/B test include a $0.46\\%$ increase in listings viewed, and a $1.1\\%$ increase in listings saved.", "This increased engagement with search results can be attributed to the increased choice.", "To directly measure diversification, we compared metrics along some key dimensions.", "The first measure compared the variance in price among the top 8 results.", "We observed an increase of $3.4\\%$ in treatment, which captures the increased diversity in price, and hence quality by proxy.", "The second metric compared the number of listings in the top 8 results that 
were within $0.5$ km of each other.", "We noted a decrease of $0.62\\%$ , which shows reduced redundancy in location.", "To measure the impact of diversity on the entire user experience, we waited for 90 days after the end of the A/B test.", "This allowed the majority of the trips booked during the experiment to be realized.", "Comparing the ratings from guests checking out of their stays, we noted an increase of $0.4\\%$ in 5-star ratings.", "Diversifying search results shifts the balance away from the majority preference of affordability towards the minority preference of quality.", "This shift towards quality ultimately surfaces in improved trip ratings.", "We segmented the bookings gain in the A/B test by the position where the booked listing was first presented to the searcher.", "This revealed an interesting phenomenon: the utilization of the top position increased significantly, followed by some gain for the second position.", "The utilization decreased for subsequent positions.", "See Figure REF .", "Deciphering this phenomenon to get a better understanding of how searchers respond to increased choice is part of our future roadmap.", "Figure: Y-axis: position in search result; X-axis: percentage change in bookings.", "The gains came at a latency cost, where we saw an increase of $8.4\\%$ in P95 latency and a $5.3\\%$ increase in median latency." ], [ "Conclusion", "We started this paper discussing how ranking evolved at Airbnb.", "We conclude with a summary of our efforts to diversify ranking.", "Early attempts started in 2017 with category-based diversification.", "Various categories were tried, based on price, location, and amenities.", "All these efforts resulted in disappointment.", "This led to the conclusion that diversifying along a particular dimension mostly degraded the quality of results.", "Focus then moved to diversification along multiple dimensions, in particular combining price and location.", "These attempts resulted in failure as well.", "The breakthrough came in 2019 with  [1], where instead of forcing ranking to adhere to some predefined notion of diversity, we supplied the ranking model with more information, giving it the freedom to diversify.", "The current work continues with that philosophy.", "We revisited the problem in 2022 with a theory-first approach, letting the model learn the notion of diversity from the training data.", "This led to one of the most impactful ranking changes of the year.", "But as discussed in Section , this training data itself is biased against diversity.", "After the launch of the diversity ranker, we expect future training data to have richer examples to learn from, enabling a virtuous cycle of diversification." ] ]
2210.07774
[ [ "On general weighted cumulative past extropy" ], [ "Abstract In this paper, we study some properties and characterization of the general weighted cumulative past extropy (n-WCPJ).", "Many results including some bounds, inequalities, and effects of linear transformations are obtained.", "We study the characterization of n-WCPJ based on the largest order statistics.", "Conditional WCPJ and some of its properties are discussed." ], [ "Introduction", "We encounter phenomena or events which are associated with uncertainty.", "Uncertainty emerges since we have less information than the total information required to describe a system and its environment.", "Uncertainty and information are closely associated.", "Since in an experiment the information provided is equal to the amount of uncertainty removed.", "Entropy which is a measure of uncertainty was first introduced by Shannon (1948) [1] in communication theory.", "It is useful to estimate the probabilities of rare events (large deviation theory) and in the study of likelihood-based inference principles.", "Shannon entropy is defined as the average amount of information that we receive per event and was the first defined entropy.", "For continuous case, it is given by $\\mathcal {H}(X)=-\\int _0^{\\infty } f(x) \\log f(x)dx,$ where X is a non-negative absolutely continuous random variable with probability density function (pdf) $f$ , cumulative distribution function(cdf) $F$ and survival function (sf) $\\bar{F}=1-F$ .", "Shannon entropy has various applications in communication theory, mathematical, physical, engineering, biological and social sciences as well.", "For further details on entropy one may refer Ash (1990)[1] and Cover and Thomas (2006)[1].", "Rao et al.", "(2004)[1] introduced the notion of cumulative residual entropy (CRE) as : $\\mathcal {E}(X)=-\\int _0^\\infty \\bar{F}(x)\\log \\bar{F}(x)dx.$ Some general results regarding this measure have been studied by Drissi et al (2008)[1] and Rao (2005)[1].", "CRE finds applications in image alignment and in the measurement of similarity between images.", "Di Crescenzo (2009)[1] proposed an entropy called cumulative past entropy (or cumulative entropy) i.e.", "CPE (or CE) as : $\\mathcal {CE}(X)=-\\int _0^\\infty {F}(x)\\log {F}(x)dx.$ Asadi et al.", "(2007)[1], Di Crescenzo et al.", "(2013)[1], Khorashadizadeh et al.", "(2013)[1] and Navarro et al.", "(2010)[1] investigated many aspects of CRE (CPE).", "Lad et al.", "(2015)[1] defined an alternative measure of the uncertainity of a random variable called extropy.", "For continuous non-negative random variable $X$ the extropy is defined as: $J(X)=-\\frac{1}{2} \\int _{0}^{\\infty }f^2(x)dx=-\\frac{1}{2}E\\left(f(X)\\right).$ Some results and properties of the extropy of order statistics and record values are given by Qiu (2017)[1].", "Qiu et al.", "(2018)[1] derived some of the results of the residual extropy of order statistics.", "Yang et al.", "(2018)[1] studied the bounds on extropy with a variational distance constraint.", "Qiu (2019)[1] examined certain extropy properties of mixed systems.", "To find out more about extropy, one may refer to Krishnan et al.", "(2020)[1], Noughabi et al.", "(2019)[1] and Raqab et al.", "(2019)[1].", "Jahanshahi et al.", "(2019)[1] introduced the cumulative residual extropy (CRJ).", "For continuous non-negative random variable $X$ the cumulative residual extropy (CRJ) as : $\\xi J (X) &=- \\frac{1}{2} \\int _{0}^{\\infty } \\bar{F}_{X}^2 (x) dx.$ Kundu (2021)[1] proposed an extropy called 
cumulative past extropy (CPJ).", "For a continuous non-negative random variable $X$ , the cumulative past extropy (CPJ) is defined as: $\\bar{\\xi } J (X) = - \\frac{1}{2}\\int _{0}^{\\infty } F_{X}^2 (x) dx.$ The idea behind this is to replace the density function by the distribution function in the extropy $(1.1)$ .", "Kundu (2021)[1] studied extreme order statistics on cumulative residual (past) extropy.", "Hashempour et al. (2022)[1] proposed a new measure called weighted cumulative residual extropy.", "Mohammadi (2022)[1] studied a new measure called interval weighted cumulative residual extropy.", "For a continuous non-negative random variable $X$ , the weighted cumulative residual extropy (WCRJ) is defined as: ${\\xi }^w J (X) = - \\frac{1}{2}\\int _{0}^{\\infty }x \\bar{F}_{X}^2 (x) dx.$ This paper is organized in the following manner.", "In Section 2 we introduce the weighted cumulative past extropy (WCPJ) and study some of its properties.", "In Section 3 some bounds and inequalities are derived.", "In Section 4, we study the characterization of WCPJ based on the largest order statistic.", "In Section 5 we focus on connections to reliability theory.", "Conditional WCPJ and some of its properties are discussed in Section 6." ], [ "Weighted cumulative past extropy", "Balakrishnan et al. (2020)[1] and Bansal et al. (2021)[1] independently introduced the weighted extropy.", "Mohammadi (2022)[1] studied a new measure called interval weighted cumulative past extropy.", "Weighted cumulative past extropy (WCPJ) is an information measure which generalizes the cumulative past extropy.", "In this section, we study the properties of WCPJ.", "Definition 1 Let $X$ be a non-negative absolutely continuous random variable with cdf $F$ .", "We define the WCPJ of $X$ by $\\bar{\\xi }^wJ(X)=\\frac{-1}{2}\\int _{0}^{\\infty }xF^2(x)dx.$ The integral (REF ) can be extended to the support of the random variable $X$ .", "Example 1 Let $X$ have the $U[a,b]$ distribution.", "Then the CRJ and WCRJ of the uniform distribution are $\\xi J(X)=-\\frac{b-a}{6},\\ \\mbox{and},\\ \\xi ^w J(X)=\\frac{a-b}{24}(3a+b),$ respectively.", "The CPJ and WCPJ of the uniform distribution are $\\bar{\\xi }J(X)=-\\frac{b-a}{6},\\ \\mbox{and},\\ \\bar{\\xi }^w J(X)=\\frac{a-b}{24}(a+3b),$ respectively.", "Note that $\\bar{\\xi }^w J(X)=\\left(\\frac{a+3b}{4}\\right)\\bar{\\xi }J(X)=\\left(\\frac{E(X)+b}{2}\\right)\\bar{\\xi }J(X).$ If $\\frac{E(X)+b}{2}>1$ , then $\\bar{\\xi }^w J(X)< \\bar{\\xi }J(X)$ , and if $\\frac{E(X)+b}{2}<1$ , then $\\bar{\\xi }^w J(X)> \\bar{\\xi }J(X)$ .", "Example 2 Let $X$ follow the power-law distribution with pdf $f(x)=\\lambda x^{\\lambda -1}, x\\in (0,1), \\lambda > 1$ .", "The $CRJ$ and $WCRJ$ of the distribution are $\\xi J(X)=-\\frac{\\lambda ^{2}}{(\\lambda +1)(2\\lambda +1)},\\ \\mbox{and},\\ \\xi ^w J(X)=-\\frac{\\lambda ^{2}}{4(\\lambda +1)(\\lambda +2)},$ respectively.", "Note that $\\xi ^w J(X)=\\left(\\frac{2\\lambda +1}{4(\\lambda +2)}\\right) \\xi J(X)=\\left(\\frac{2(\\lambda +1)E(X)+1}{4(\\lambda +2)}\\right) \\xi J(X).$ For $\\lambda =-\\frac{7}{2}$ we would have $\\xi ^w J(X) = \\xi J(X)$ , although this value lies outside the admissible range $\\lambda > 1$ .", "The $CPJ$ and $WCPJ$ of the distribution are $\\bar{\\xi }J(X)=-\\frac{1}{2(2\\lambda +1)},\\ \\mbox{and}, \\ \\bar{\\xi }^w J(X)=-\\frac{1}{4(\\lambda +1)}.$ We conclude that $\\bar{\\xi }^w J(X)=\\frac{2\\lambda +1}{2\\lambda +2}\\bar{\\xi }J(X).$ Also, $\\bar{\\xi }^w J(X)=-\\frac{\\lambda +2}{4\\lambda (\\lambda +1)}E(X^{2}).$ Theorem 1 Let $X$ be a non-negative continuous random variable with finite WCPJ,
$\\bar{\\xi }^wJ(X)$ .", "Then we have $\\bar{\\xi }^wJ(X)=\\frac{-1}{2}E(G_F(X)),$ where $G_F(t)=\\int _{t}^{\\infty }xF(x)dx$ .", "Proof Using equation (REF ) and Fubini's theorem, we have $\\bar{\\xi }^wJ(X)&=\\frac{-1}{2}\\int _{0}^{\\infty }xF^2(x)dx=\\frac{-1}{2}\\int _{0}^{\\infty }xF(x)\\left(\\int _{0}^{x}f(t)dt\\right)dx\\\\&=\\frac{-1}{2}\\int _{0}^{\\infty }f(t)\\left(\\int _{t}^{\\infty }xF(x)dx\\right)dt=\\frac{-1}{2}\\int _{0}^{\\infty }f(t)G_F(t)dt\\\\&=\\frac{-1}{2}E(G_F(X))$ $\\blacksquare $ Now we see the effect of linear transformation on WCPJ in the following proposition Proposition 1 Let $X$ be a non-negative random variable.", "If $Y=aX+b,\\ a>0,\\ b\\ge 0,$ then $\\bar{\\xi }^wJ(Y)=a^2 \\bar{\\xi }^wJ(X)+ab\\bar{\\xi }J(X)$ Proof The proof holds using (REF ) and noting that $F_Y(y)=F_X\\left(\\frac{y-b}{a}\\right),\\ y>b$ .$\\blacksquare $ Here we provide an upper bound for WCPJ in terms of extropy.", "Theorem 2 Let $X$ be a random variable with pdf $f(\\cdot )$ and extropy $J(X)$ , then $\\bar{\\xi }^wJ(X)\\le C^* exp\\lbrace 2J(X)\\rbrace ,$ where $C^*=\\frac{-1}{2}exp\\lbrace E\\left(\\log (XF^2(X))\\right)\\rbrace $ Proof Using the log-sum inequality, we have $\\int _{0}^{\\infty }f(x)\\log \\left(\\frac{f(x)}{xF^2(x)}\\right) dx\\ge -\\log \\left(\\int _{0}^{\\infty }xF^2(x)dx\\right).$ Then it follows that $\\int _{0}^{\\infty }f(x)\\log f(x)dx-\\int _{0}^{\\infty }f(x)\\log \\left(xF^2(x)\\right) dx \\ge -\\log \\left(\\int _{0}^{\\infty }xF^2(x)dx\\right).$ Note that $\\log f< f$ , hence $-\\int _{0}^{\\infty } f^2(x)dx+\\int _{0}^{\\infty }f(x)\\log \\left(xF^2(x)\\right) dx&=2J(X)+E\\left(\\log \\left(XF^2(X)\\right)\\right)\\nonumber \\\\&\\le \\log \\left(-2 \\bar{\\xi }^wJ(X)\\right).$ Exponentiating both sides of (REF ), we have $\\bar{\\xi }^wJ(X)\\le \\frac{-1}{2}exp\\lbrace 2J(X)+E\\left(\\log \\left(XF^2(X)\\right)\\right)\\rbrace $ Hence the result.$\\blacksquare $ Theorem 3 $X$ is degenerate, if and only if, $\\bar{\\xi }^wJ(X)=0$ .", "Proof Suppose $X$ be degenerate at point $c$ , then by using the definition of degenerate function and $\\bar{\\xi }^wJ(X)$ , we have $\\bar{\\xi }^wJ(X)=0$ .", "Now consider $\\bar{\\xi }^wJ(X)=0$ , i.e., $\\int _{0}^{\\infty }xF^2(x)dx=0$ .", "Noting that the integrand in the above integral is non-negative, we have $F(x)=0$ , for almost all $x\\in S$ , where $S$ denotes the support of random variable $X$ , i.e., it is 0 in $\\inf S$ and then 1." 
], [ "Some inequalities", "This section deals with obtaining the lower and upper bounds for WCPJ.", "Remark 1 Consider $X$ be a non-negative random variable.", "then $\\bar{\\xi }^wJ(X)\\ge \\frac{-1}{2}\\int _{0}^{\\infty }xF(x)dx.$ Proposition 2 Consider a non-negative continuous random variable $X$ having cdf $F_X(\\cdot )$ and support $[a,\\infty ), a>0$ .", "Then $\\bar{\\xi }^wJ(X)\\le a\\bar{\\xi }J(X).$ Proof Note that $\\int _{a}^{\\infty }xF^2(x)dx&\\ge a\\int _{a}^{\\infty }F^2(x)dx\\\\\\frac{-1}{2}\\int _{a}^{\\infty }xF^2(x)dx & \\le \\frac{-a}{2} \\int _{a}^{\\infty }F^2(x)dx\\\\\\bar{\\xi }^wJ(X)&\\le a\\bar{\\xi }J(X).$ $\\blacksquare $ Corollary 1 Let $X$ be a continuous random variable with cdf $F$ that takes values on $[0,b]$ where $b$ is finite.", "Then, $ \\bar{\\xi }^wJ(X)\\le \\frac{-1}{4}\\left(b^2-E(X^2)\\right)\\left[\\log \\left(\\frac{b^2-E(X^2)}{b^2}\\right)-1\\right]$ , $\\bar{\\xi }^wJ(X)\\ge b \\bar{\\xi }J(X)$ .", "Proof Using log-sum inequality, we have $\\int _{0}^{b}F(t)t \\log \\left(F(t)\\right)dt & \\ge \\int _{0}^{b}F(t)tdt \\log \\left(\\frac{\\int _{0}^{b}F(t)tdt}{\\int _{0}^{b}tdt}\\right)\\\\&=\\left(\\frac{b^2-E(X^2)}{2}\\right) \\log \\left(\\frac{b^2-E(X^2)}{b^2}\\right)$ Also note that $\\log F(t)\\le F(t)-1$ , then $\\int _{0}^{b}F(t)t \\log F(t)dt\\le -2\\bar{\\xi }^wJ(X)-\\int _{0}^{b}tF(t)dt\\\\=-2\\bar{\\xi }^wJ(X)-\\left(\\frac{b^2-E(X^2)}{2}\\right)$ Now using the above two inequalities, the first part follows.", "The second part can be verified easily.$\\blacksquare $ Consider two random variables $X$ and $Y$ having cdfs $F$ and $G$ , respectively.", "Then $X \\le _{st}Y$ whenever $F(x)\\ge G(x),\\ \\forall x\\in \\mathbb {R}$ ; where the notation $X \\le _{st}Y$ means that $X$ is less than or equal to $Y$ in usual stochastic order.", "One may refer Shaked and Shanthikumar (2007)[1] for detail of stochastic ordering.", "In the following proposition, we show the ordering of WCPJ is implied by the usual stochastic order.", "Proposition 3 Let $X_1$ and $X_2$ be non-negative continouous random variables.", "If $X_1\\le _{st}X_2$ , then $ \\bar{\\xi }^wJ(X_1)\\le \\bar{\\xi }^wJ(X_2)$ .", "Proof Using $X_1\\le _{st}X_2$ and (REF ), the result follows.$\\blacksquare $ Qiu et al.", "(2019)[1] showed that the extropy of the sum of two independent random variables is larger than that of either, $J(X + Y) \\ge max\\lbrace J(X), J(Y)\\rbrace $ .", "Hashempour et al.", "(2022)[1] obtained the following result for weighted cumulative residual extropy (WCRJ) as $\\xi ^wJ(X+Y)\\ge max\\lbrace \\xi J(X)E(Y)+ \\xi ^wJ(X), \\quad \\xi J(Y)E(X)+ \\xi ^wJ(Y) \\rbrace .$ In the following theorem, we obtain a similar result for WCPJ.", "Theorem 4 For two non-negative and independent random variables $X$ and $Y$ with finite means $\\bar{\\xi }^wJ(X+Y)\\ge max\\lbrace \\bar{\\xi }J(X)E(Y)+\\bar{\\xi }^wJ(X), \\quad \\bar{\\xi }J(Y)E(X)+\\bar{\\xi }^wJ(Y) \\rbrace .$ Proof By supposing $X$ and $Y$ are independent, we have $P(X+Y\\le t) = \\int _{0}^{t} F(t-y)dF(y).$ Using Jensen's inequality, we obtain $P^{2}(X+Y\\le t) \\le \\int _{0}^{t} F^{2}(t-y)dF(y).$ Multiplying both sides of $(2.5)$ by $\\frac{-t}{2}$ and integrating with respect to $t$ from 0 to $\\infty $ ,we have $-\\frac{1}{2} \\int _{0}^{\\infty } tP^2(X+Y\\le t) & \\ge -\\frac{1}{2}\\int _{0}^{\\infty }\\int _{0}^{t} tF^2(t-y)dF(y)dt\\nonumber \\\\&=-\\frac{1}{2} \\int _{0}^{\\infty }\\int _{y}^{\\infty } tF^2(t-y)dtdF(y)\\nonumber \\\\&=-\\frac{1}{2} \\int _{0}^{\\infty }\\int _{0}^{\\infty } 
(y+v)F^2(v)dvdF(y),\\nonumber $ where we used the change of variable $v=t-y$ .", "So, we have $\\bar{\\xi }^wJ(X+Y)\\ge \\bar{\\xi }J(X)E(Y)+\\bar{\\xi }^wJ(X)$ Using the same arguments with the roles of $X$ and $Y$ interchanged, the proof is completed.$\\blacksquare $" ], [ "WCPJ based on largest-order statistic", "Let $X_1,\\ldots ,X_n$ be a random sample from an absolutely continuous cdf $F_X(x)$ with pdf $f_X(x)$ .", "Let $X_{1:n}\\le X_{2:n}\\le \\ldots \\le X_{n:n}$ denote the order statistics of the random sample $X_1,\\ldots ,X_n$ .", "In the following, we obtain the WCPJ of the largest order statistic.", "The WCPJ of the $n$ th order statistic is $\\bar{\\xi }^wJ(X_{n:n})= -\\frac{1}{2}\\int _{0}^{\\infty }xF^{2}_{X_{n:n}}(x)dx,$ where $F^{2}_{X_{n:n}}(x)=F^{2n}_{X}(x)$ .", "Using the transformation $u=F(x)$ in the above expression, $\\bar{\\xi }^wJ(X_{n:n})= -\\frac{1}{2}\\int _{0}^{1}\\frac{u^{2n}F^{-1}(u)}{f(F^{-1}(u))} du,$ where $F^{-1}(x)$ is the inverse function of $F(x)$ .", "Example 3 Let $X$ have the uniform distribution on (0,1) with pdf $f(x)=1, \\ x\\in (0,1)$ .", "Then $F^{-1}(u)=u,\\ u\\in (0,1)$ and $f(F^{-1}(u))=1, \\ u\\in (0,1)$ , hence $\\bar{\\xi }^wJ(X_{n:n})=\\dfrac{-1}{4(n+1)}$ .", "Example 4 Let $X$ follow the power-law distribution with pdf $f(x)=\\lambda x^{\\lambda -1}, \\lambda > 1, x\\in (0,1)$ .", "Then $F^{-1}(u)=u^{\\frac{1}{\\lambda }},\\ u\\in (0,1)$ and $f(F^{-1}(u))= \\lambda u^{\\frac{\\lambda -1}{\\lambda }}, \\ u\\in (0,1)$ , hence $\\bar{\\xi }^wJ(X_{n:n})=\\dfrac{-1}{4(n\\lambda +1)}$ .", "Remark 2 Consider $\\Lambda =\\bar{\\xi }^wJ(X_{n:n})-\\bar{\\xi }^wJ(X)$ .", "Since $F^{2n}(x)\\le F^{2}(x)$ , we have $\\Lambda \\ge 0$ .", "For the proof of Theorem 5, we need the following lemma.", "Lemma 1 [Lemma 4.1 of Hashempour et al. (2022)[1]] Let $g$ be a continuous function with support $[0,1]$ , such that $\\int _{0}^{1}g(y)y^{m}dy=0$ , for $m\\ge 0$ ; then $g(y)=0,\\ \\forall \\ y\\in [0,1]$ .", "Theorem 5 Let $X_{1},...,X_{n}$ and $Y_{1},...,Y_{n}$ be two non-negative random samples from continuous cdfs $F(x)$ and $G(x)$ , respectively.", "Then $F(x)=G(x)$ if and only if $ \\bar{\\xi }^wJ(X_{n:n})=\\bar{\\xi }^wJ(Y_{n:n})$ for all $n$ .", "Proof The necessary condition is trivial.", "Hence, it remains to prove the sufficient part.", "If $ \\bar{\\xi }^wJ(X_{n:n})=\\bar{\\xi }^wJ(Y_{n:n})$ , then we have $-\\frac{1}{2}\\int _{0}^{1}u^{2n}\\left(\\frac{F^{-1}(u)}{f(F^{-1}(u))}-\\frac{G^{-1}(u)}{g(G^{-1}(u))}\\right) du=0$ By using Lemma REF , it follows that $\\frac{F^{-1}(u)}{f(F^{-1}(u))}&=\\frac{G^{-1}(u)}{g(G^{-1}(u))}\\\\\\Rightarrow F^{-1}(u)\\frac{dF^{-1}(u)}{du}&=G^{-1}(u)\\frac{dG^{-1}(u)}{du}, u\\in [0,1],$ since $\\dfrac{dF^{-1}(u)}{du}=\\dfrac{1}{f(F^{-1}(u))}$ .", "Hence it follows that $F^{-1}(u)=G^{-1}(u), u\\in [0,1]$ .", "Thus the proof is completed.$\\blacksquare $" ], [ "Connection to reliability theory", "Consider a non-negative continuous random variable $X$ with cdf $F$ , such that $E(X)<\\infty $ .", "The mean inactivity time (MIT) of $X$ is defined as $MIT(t)=\\int _{0}^{t}\\dfrac{F(x)}{F(t)}dx, \\ t\\ge 0.$ The MIT function finds many applications in reliability, forensic science, and so on.", "In the following theorem, we show the relationship between WCPJ and the second moment of inactivity time (SMIT) function.", "For details about SMIT one may refer to Kundu et al. (2010)[1].", "Definition 2 Let $X$ be a non-negative continuous random variable.", "Then the SMIT is $SMIT(t)=E\\left((t-X)^2|X\\le t\\right)=2tMIT(t)-\\int _{0}^{t}2x\\dfrac{F(x)}{F(t)}dx, \\ t\\ge 0.$ Theorem 6 Let $X$
be a non-negative continuous random variable with SMIT function and weighted cumulative past extropy $\\bar{\\xi }^wJ(X)$ .", "Thus, $\\bar{\\xi }^wJ(X)\\le C^* -\\frac{1}{4}E\\left(SMIT(X)\\right),$ where $C^*=\\frac{1}{2}\\left[E\\left(X MIT(X)\\right)-\\int _{0}^{\\infty }xF(x)dx\\right]$ .", "Proof Consider $E(SMIT(X))&=2E\\left(X MIT(X)\\right)-2E\\left(\\int _{0}^{X}\\frac{xF(x)}{F(X)}dx\\right)\\nonumber \\\\&=2E\\left(X MIT(X)\\right)-2\\int _{0}^{\\infty }\\int _{0}^{t}x\\tilde{h}(t)F(x)dxdt\\nonumber \\\\&=2E\\left(X MIT(X)\\right)-2\\int _{0}^{\\infty }xF(x)\\left(\\int _{x}^{\\infty }\\tilde{h}(t)dt\\right)dx\\nonumber \\\\&=2E\\left(X MIT(X)\\right)-2\\int _{0}^{\\infty }xF(x)|\\log F(x)|dx\\nonumber \\\\&\\le 2E\\left(X MIT(X)\\right)-2\\int _{0}^{\\infty }xF(x)dx+2\\int _{0}^{\\infty }xF^2(x)dx\\nonumber \\\\&= 2E\\left(X MIT(X)\\right)-2\\int _{0}^{\\infty }xF(x)dx-4\\bar{\\xi }^wJ(X),$ where $\\tilde{h}(\\cdot )$ is the reversed hazard rate function.", "Now from (REF ), (REF ) follows.", "$\\blacksquare $ If some information about the SMIT or its behavior is given, then (REF ) may be used.", "Another bound for $\\bar{\\xi }^wJ(X)$ can be provided in terms of the hazard rate function.", "Proposition 4 Let $X$ be a non-negative continuous random variable with finite hazard rate function $h(\\cdot )$ and $\\bar{\\xi }^wJ(X)$ .", "Then, $\\bar{\\xi }^wJ(X)\\ge E(S(X)),$ where $S(t)=\\frac{-1}{2}\\int _{t}^{\\infty }x\\left(\\int _{0}^{x}h(v)dv\\right)dx$ and $h(\\cdot )$ is the hazard rate function.", "Proof Consider $\\bar{\\xi }^wJ(X)&=\\frac{-1}{2}\\int _{0}^{\\infty }xF^2(x)dx\\\\&=\\frac{-1}{2}\\int _{0}^{\\infty }xF(x)\\left(\\int _{0}^{x}f(t)dt\\right)dx\\\\&=\\frac{-1}{2}\\int _{0}^{\\infty }f(t)\\left(\\int _{t}^{\\infty }xF(x)dx\\right)dt\\\\&\\ge \\frac{1}{2}\\int _{0}^{\\infty }f(t)\\left(\\int _{t}^{\\infty }x\\log \\bar{F}(x)dx\\right)dt\\\\&= \\frac{-1}{2}\\int _{0}^{\\infty }f(t)\\left(\\int _{t}^{\\infty }x \\left( \\int _{0}^{x}h(v)dv\\right) dx\\right)dt\\\\&=E\\left(S(X)\\right),$ where $S(t)=\\frac{-1}{2}\\int _{t}^{\\infty }x\\left(\\int _{0}^{x}h(v)dv\\right)dx$ .", "Hence the result.$\\blacksquare $" ], [ "Conditional weighted past extropy", "Now we consider the conditional weighted cumulative past extropy (CWCPJ).", "Consider a random variable $Z$ on a probability space $(\\Omega , \\mathbb {A}, P)$ such that $E|Z|<\\infty $ .", "The conditional expectation of $Z$ given a sub-$\\sigma $ -field $\\mathbb {G}$ , where $\\mathbb {G}\\subseteq \\mathbb {A}$ , is denoted by $E(Z|\\mathbb {G})$ .", "For the random variable $I_{(Z\\le z)}$ , we denote $E(I_{(Z\\le z)}|\\mathbb {G})$ by $F_Z(z|\\mathbb {G})$ .", "Definition 3 For a non-negative random variable $X$ , given a $\\sigma $ -field $\\mathbb {G}$ , the CWCPJ $\\bar{\\xi }^wJ(X|\\mathbb {G})$ is defined as $\\bar{\\xi }^wJ(X|\\mathbb {G})=\\frac{-1}{2}\\int _{0}^{\\infty }x F^2_X(x|\\mathbb {G})dx.$ Now we assume that the random variables are continuous and non-negative.", "Lemma 2 If $\\mathbb {G}$ is a trivial $\\sigma $ -field, then $\\bar{\\xi }^wJ(X|\\mathbb {G})=\\bar{\\xi }^wJ(X)$ .", "Proof Since in this case $F_X(x|\\mathbb {G})=F_X(x)$ , the proof follows.$\\blacksquare $ Proposition 5 If $X\\in L^p$ for some $p>2$ , then $E[\\bar{\\xi }^wJ(X|\\mathbb {G})|\\mathbb {G}^*]\\le \\bar{\\xi }^wJ(X|\\mathbb {G}^*)$ , provided that $\\mathbb {G}^*\\subseteq \\mathbb {G}$ .", "Proof Consider $E[\\bar{\\xi }^wJ(X|\\mathbb {G})|\\mathbb {G}^*]&=\\frac{-1}{2}\\int _{0}^{\\infty }xE\\left(\\left[P(X\\le x|\\mathbb
{G})\\right]^2|\\mathbb {G}^*\\right)dx\\\\&\\le \\frac{-1}{2}\\int _{0}^{\\infty }x\\left[E\\left(P(X\\le x|\\mathbb {G})|\\mathbb {G}^*\\right)\\right]^2dx\\\\&= \\frac{-1}{2}\\int _{0}^{\\infty }x\\left[E\\left(E(I_{(X\\le x)}|\\mathbb {G})|\\mathbb {G}^*\\right)\\right]^2dx\\\\&= \\frac{-1}{2}\\int _{0}^{\\infty }x\\left[E\\left(I_{(X\\le x)}|\\mathbb {G}^*\\right)\\right]^2dx\\\\&=\\frac{-1}{2}\\int _{0}^{\\infty }xF^2_X(x|\\mathbb {G}^*)dx\\\\&=\\bar{\\xi }^wJ(X|\\mathbb {G}^*),$ where the second step follows from Jensen's inequality for the convex function $\\phi (x)=x^2$ .", "Hence the result.$\\blacksquare $ In the following theorem, we investigate the relationship between the conditional extropy and $\\bar{\\xi }^wJ(X|\\mathbb {G})$ .", "Theorem 7 Let $\\bar{\\xi }^wJ(X|\\mathbb {G})$ be the conditional weighted cumulative past extropy.", "Then we have $\\bar{\\xi }^wJ(X|\\mathbb {G})\\le B^* \\exp \\lbrace 2J(X|\\mathbb {G})\\rbrace ,$ where $B^*=\\frac{-1}{2}\\exp \\lbrace E\\left(\\log (XF^2(X))|\\mathbb {G}\\right)\\rbrace $ .", "Proof The proof follows along the same lines as that of Theorem REF and is hence omitted.$\\blacksquare $ Theorem 8 For a random variable $X$ and a $\\sigma $ -field $\\mathbb {G}$ , we have $E\\left(\\bar{\\xi }^wJ(X|\\mathbb {G})\\right)\\le \\bar{\\xi }^wJ(X),$ and the equality holds if and only if $X$ is independent of $\\mathbb {G}$ .", "Proof If, in Proposition REF , $\\mathbb {G}^*$ is a trivial $\\sigma $ -field, then (REF ) can be easily obtained.", "Now assume that $X$ is independent of $\\mathbb {G}$ ; then $F_X(x|\\mathbb {G})&=F_X(x)\\nonumber \\\\\\Rightarrow \\ \\bar{\\xi }^wJ(X|\\mathbb {G})&=\\bar{\\xi }^wJ(X).$ Taking expectations on both sides of (REF ), we get equality in (REF ).", "Conversely, assume that equality in (REF ) holds.", "It is sufficient to show that $F_X(x|\\mathbb {G})=F_X(x)$ in order to prove independence between $X$ and the $\\sigma $ -field $\\mathbb {G}$ .", "Take $U=F_X(x|\\mathbb {G})$ ; since the function $\\phi (u)=u^2$ is convex, $E(U^2)\\ge E^2(U)=F_X^2(x)$ , and, due to the equality in (REF ), we have $\\int _{0}^{\\infty }xE(U^2)dx=\\int _{0}^{\\infty }xF^2_X(x)dx=\\int _{0}^{\\infty }xE^2(U)dx.$ Hence $E(U^2)=E^2(U)$ .", "Now, using Corollary 8.1 of Hashempour et al.", "(2022)[1], we have $F_X(x|\\mathbb {G})=F_X(x)$ .", "Hence the proof.$\\blacksquare $ Regarding the Markov property for non-negative random variables $X,\\ Y$ and $Z$ , we have the following proposition.", "Proposition 6 Let $X\\rightarrow Y \\rightarrow Z$ be a Markov chain; then $\\bar{\\xi }^wJ(Z|X,Y)=\\bar{\\xi }^wJ(Z|Y)$ and $E\\left(\\bar{\\xi }^wJ(Z|Y)\\right)\\le E\\left(\\bar{\\xi }^wJ(Z|X)\\right).$ Proof By the definition of $\\bar{\\xi }^wJ(Z|X,Y)$ and using the Markov property, (REF ) holds.", "Now, letting $\\mathbb {G}^*=\\sigma (X),\\ \\mathbb {G}=\\sigma (X,Y)$ and $X=Z$ in Proposition REF , we have $\\bar{\\xi }^wJ(Z|X)\\ge E\\left(\\bar{\\xi }^wJ(Z|X,Y)|X\\right).$ Taking expectations on both sides of (REF ), we have $E\\left(\\bar{\\xi }^wJ(Z|X)\\right)&\\ge E\\left(E\\left(\\bar{\\xi }^wJ(Z|X,Y)|X\\right)\\right)\\\\&=E\\left(\\bar{\\xi }^wJ(Z|X,Y)\\right)\\\\&=E\\left(\\bar{\\xi }^wJ(Z|Y)\\right),$ where the last equality holds by (REF ).", "Hence the result (REF ) holds.$\\blacksquare $ Conflict of interest The authors declare no conflict of interest." ] ]
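As a quick numerical sanity check on the closed forms of Examples 3 and 4 above, the defining integral $\\bar{\\xi }^wJ(X_{n:n})= -\\frac{1}{2}\\int xF^{2n}(x)dx$ can be evaluated directly over the support $(0,1)$ and compared against $-1/(4(n\\lambda +1))$ . The sketch below is our own illustration, not part of the original paper; it assumes NumPy/SciPy are available, and $\\lambda =1$ recovers the uniform case of Example 3.

```python
import numpy as np
from scipy.integrate import quad

def wcpj_max(n, lam):
    # WCPJ of the largest-order statistic for the power-law cdf F(x) = x**lam
    # on (0, 1):  xi^w J(X_{n:n}) = -(1/2) * int_0^1 x * F(x)**(2n) dx
    val, _ = quad(lambda x: x * x ** (2 * n * lam), 0.0, 1.0)
    return -0.5 * val

for n in (1, 2, 5):
    for lam in (1.0, 2.5):
        closed_form = -1.0 / (4.0 * (n * lam + 1.0))
        print(n, lam, wcpj_max(n, lam), closed_form)  # the two columns agree
```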
2210.07712
[ [ "Intel Labs at Ego4D Challenge 2022: A Better Baseline for Audio-Visual\n Diarization" ], [ "Abstract This report describes our approach for the Audio-Visual Diarization (AVD) task of the Ego4D Challenge 2022.", "Specifically, we present multiple technical improvements over the official baselines.", "First, we improve the detection performance of the camera wearer's voice activity by modifying the training scheme of its model.", "Second, we discover that an off-the-shelf voice activity detection model can effectively remove false positives when it is applied solely to the camera wearer's voice activities.", "Lastly, we show that better active speaker detection leads to a better AVD outcome.", "Our final method obtains 65.9% DER on the test set of Ego4D, which significantly outperforms all the baselines.", "Our submission achieved 1st place in the Ego4D Challenge 2022." ], [ "Introduction", "Audio-Visual Diarization (AVD) is a multimodal task where the goal is to identify “who speaks when” given a video: More specifically, it is required to detect active speakers and also find the start and end times of speech activities for each speaker.", "AVD has many real-world applications such as transcription [17], [1], video summarization [2], and human-robot interaction [5], [13].", "Most of the previous state-of-the-art approaches [3], [4], [8], [16] assume that active speakers are always visible in the scene.", "However, this assumption does not hold for egocentric videos because a camera wearer (CW) is usually not visible.", "Moreover, egocentric videos have a high range of motion blurs and noise due to the CW's movements, which makes it harder to identify all speakers and their speech activities.", "Figure: An illustration of our overall framework.", "First, face regions are detected from the visual signal and connected over time.", "Second, ASD is performed on the audio-visual input with the detected face regions.", "Third, we obtain audio embeddings from the audio signal and find potentially missing speech activities.", "Finally, we detect CW's voice activities and additionally filter out false positives by using an off-the-shelf VAD model.", "We use four colors (red, blue, yellow, and green) to represent four different speaking identities including CW.", "Best viewed in color.In this report, we describe our approach for the AVD task of the Ego4D Challenge 2022, which encourages the development of audio-visual understanding in egocentric videos using the recently released Ego4D dataset [6].", "Figure REF illustrates an overview of our framework.", "Importantly, we present multiple technical improvements over Ego4D's official baselines.", "First, we improve the detection performance of CW's voice activity by changing the training scheme of its model.", "Our modification raises the detection mAP score from 72.0% to 80.4%.", "Second, we demonstrate that an off-the-shelf voice activity detection (VAD) model can be used to remove false positives from the CW's voice activities.", "This discovery boosts the final diarization performance by more than 4%.", "Third, we empirically verify that active speaker detection (ASD) plays a huge role in AVD.", "Specifically, we show that better ASD leads to significantly better AVD on the validation set of the Ego4D dataset, which has not been demonstrated by the official baselines.", "Our final method obtains 65.9% DER on the test set, improved from 73.3% which is acquired by the best baseline.", "Our submission achieved 1st position in the Ego4D Challenge 
2022 leaderboard." ], [ "Method", "In this section, we describe our method in detail.", "As illustrated by Figure REF , the overall flow of our framework mainly consists of six components.", "Among them, we present our approaches only on CW's VAD, off-the-shelf VAD, and ASD, each of which has meaningful improvements in performance.", "For other components, we generally follow the baselines used in the original Ego4D paper [6]." ], [ "VAD for the CW", "The presence of the CW brings many difficulties to AVD in egocentric videos because its face is usually not visible.", "Therefore, we detect CW's voice activities by only using the audio signal in the following ways: First, for every 5ms, we compute the log-scaled spectrogram of the audio signal in the low frequency range from 0 to 4000 Hz.", "Second, at each time step, we slice the spectrogram over a window of 0.32s, which makes a sequence of single-channel images with a resolution of 256$\\times $ 64.", "Next, we train a ResNet [7] on the generated images using the training set of the Ego4D dataset where the detection performance is optimized on the validation set.", "During inference, we use the optimized ResNet to directly determine whether the CW is speaking or not for each time step." ], [ "Off-the-shelf VAD", "VAD is to identify the presence of human speech from the audio signal at any given moment, thus we can leverage an off-the-shelf VAD model as a separate pre- or post-processing component.", "For example, it can be used to pre-process the audio signal to detect the potential speech activities.", "Then, an ASD model or the ResNet for the CW's VAD would be applied to the potential results.", "Alternatively, we can utilize it to post-process the results of the ASD or CW's VAD models.", "In particular, we use a popular VAD model called Silero [15] (we refer to it as SilVAD) that is pre-trained on a large-scale dataset in multiple languages.", "Here, we demonstrate that SilVAD effectively removes the false positives from the CW's speech activities when it is solely used as a post-processing unit for the CW's VAD.", "Interestingly, SilVAD does not provide any additional benefit when it is applied for ASD or pre-processing." ], [ "ASD", "We significantly improve the final AVD performance by using a better ASD model.", "Specifically, we use SPELL [11], which is a state-of-the-art ASD method where its effectiveness is validated on AVA-ActiveSpeaker dataset [12], [10].", "We show that improved ASD performance directly leads to better AVD results on the validation set of the Ego4D dataset, which has not been observed in the Ego4D paper." ], [ "Implementation details", "We use a 2D ResNet-18 [7] in VAD for the CW.", "It is trained with a batch size of 256 using the Adam optimizer [9].", "In the training process, the dropout is set to 0.5 and the learning rate is fixed at 5$\\times 10^{-4}$ .", "We apply the version 3.1 of SilVAD [15] as an off-the-shelf VAD.", "For SPELL [11], we utilize the official code with its default hyperparameters.", "The whole training process takes less than an hour using a single GPU (TITAN V)." ], [ "Performance of VAD for the CW", "We compare our method for the CW's VAD with the baselines in Table REF .", "The results indicate that our modification is effective in improving the detection performance." 
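To make the CW-VAD preprocessing above concrete, the sketch below generates the 256$\times$64 log-spectrogram windows described in the "VAD for the CW" subsection. It is our own illustration, not the authors' code: the 16 kHz sampling rate and the use of scipy.signal.stft are assumptions, chosen so that a 1024-sample FFT yields 256 frequency bins below 4000 Hz and an 80-sample hop matches the 5 ms step.

```python
import numpy as np
from scipy.signal import stft

def cw_vad_inputs(audio, fs=16000):
    """Slice a log spectrogram (0-4000 Hz) into 256x64 windows, one per 5 ms step."""
    # A 1024-sample FFT at 16 kHz gives 15.625 Hz bins, so the first 256 bins span
    # 0-4000 Hz; noverlap = 1024 - 80 makes the hop exactly 80 samples = 5 ms.
    f, t, Z = stft(audio, fs=fs, nperseg=1024, noverlap=1024 - 80)
    spec = np.log(np.abs(Z[f < 4000.0]) + 1e-8)     # (256, T) log-scaled spectrogram
    # Each window of 64 frames spans 64 * 5 ms = 0.32 s, as in the paper.
    wins = [spec[:, i:i + 64] for i in range(spec.shape[1] - 63)]
    return np.stack(wins)[:, None]                  # (N, 1, 256, 64) inputs for the ResNet
```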
], [ "Impact of post-processing", "In Table REF , we show the performance of our method on the validation set of the Ego4D dataset with different post-processing options.", "We can observe that post-processing boosts the AVD performance only when it is applied to CW's VAD.", "We believe that this is because the intensity of the CW's speech is usually higher than other people so the CW's voice activities can be detected relatively easily by the off-the-self VAD model.", "We also think that the post-processing on ASD has a negative impact because it is challenging to detect speech activities of people who are distant from the camera without considering the visual information of their faces.", "In other words, ASD may need an improved post-processing method other than the off-the-shelf VAD." ], [ "Better ASD, Better AVD", "We compare the ASD and AVD performances of our method with the baselines on the validation set of the Ego4D dataset in Table REF .", "Although TalkNet significantly outperforms RegionCls on ASD (50.6% v.s.", "24.6%), interestingly, the performance difference on AVD is negligible (80.0% v.s.", "79.3%).", "However, we can observe that our method using SPELL brings significant performance gains on both ASD and AVD.", "We believe that this is because our technical improvements provide supplementary benefits for ASD." ], [ "Final AVD performance", "We report the final AVD performance of our method in Table REF .", "When compared to the baseline method using TalkNet, our method achieves 7.4% lower DER on the test set of the Ego4D dataset.", "We attained 1st place in Ego4D Challenge 2022." ], [ "Conclusion", "We have presented multiple technical improvements over the official baselines of Ego4D.", "First, our modified training scheme results in a better optimization and makes VAD for the CW more effective.", "Second, we show that using the off-the-shelf VAD as a post-processing component improves the AVD performance.", "Finally, our method using a better ASD model outperforms all the baselines." ] ]
2210.07764
[ [ "Scales in light-nuclei production near the QCD critical point" ], [ "Abstract Based on the coalescence model, we analyse the light-nuclei production near the critical point by expanding the phase-space distribution function $f(\\mathbf{r},\\mathbf{p})$ in terms of the phase-space cumulants $\\sim \\langle r^m p^m\\rangle_c$.", "We show that the dominant contribution of the phase-space distribution to the yield of light nuclei is determined by the second-order phase-space cumulants.", "Here, we identify the fireball size, the homogeneity length, and the effective temperature, which are encoded in the second-order phase-space cumulants, as the relevant scales in explaining the yield of light nuclei.", "These scales are typically much larger than the correlation length of the critical fluctuations created in the rapid expansion of the heavy-ion systems, so we need to eliminate this dominant contribution of the relevant scales in order to isolate the critical contribution from the yield of light nuclei.", "We find that the second-order phase-space cumulants appeared in the yields of light-nuclei with different mass numbers share a similar structure.", "This property allows us to construct ratios of light-nuclei yields in appropriate combinations so that the effect of the relevant scales of the light-nuclei yield cancels, which isolates the critical effects." ], [ "Introduction", "Searching for the QCD critical point is one of the most important goals for relativistic heavy-ion collisions.", "Preliminary measurements of net-proton multiplicity fluctuations at the Relativistic Heavy Ion Collider (RHIC) show a non-monotonic behavior as a function of the colliding energy [1], which qualitatively agrees with the theoretical prediction [2].", "Meanwhile, the realistic dynamics of the heavy-ion collision reaction is so complicated that it is non-trivial whether the observed non-monotonic behavior is unique to the critical effect.", "Thus, it is preferable to confirm the critical effects in multiple observables.", "Light-nuclei production is claimed to be related to the relative neutron density fluctuations [3] and its non-monotonic behavior [4] is the imprint of critical point.", "In this study, we consider the impact of the critical effect on another observable, the light-nuclei yield ratios calculated from the phase-space distribution of nucleons, $f(\\mathbf {r},\\mathbf {p})$ , using the coalescence model [5], [6].", "In Sec.", "we demonstrate that the size of the fireball, the homogeneity length, and the effective temperature are the relevant scales of the light-nuclei yield.", "The critical fluctuations induce additional correlations in the phase-space distributions and thus affect the yield of the light nuclei formed near the critical point.", "The critical correlation length in rapidly expanding heavy-ion systems is typically much smaller than the relevant scales of the light-nuclei production.", "We find that the ratios of the yield in appropriate combinations can be used to eliminate the effect of the relevant scales and isolate the critical effect from the light-nuclei yield." 
], [ "Phase-space distribution in light-nuclei production", "One of the widely used methods to calculate the production of light nuclei is the coalescence model [5], [6], [7], [8], in which the yield is obtained as $N_A=g_A \\int \\biggl [\\prod ^A_i d^3\\mathbf {r}_i d^3\\mathbf {p}_if_i(\\mathbf {r}_i,\\mathbf {p}_i)\\biggr ] W_A(\\lbrace \\mathbf {r}_i,\\mathbf {p}_i\\rbrace ^A_{i=1}),$ where the statistical factor $g_A=(2s+1)/2^A$ is given by the spin $s$ of the light nucleus.", "The probability of producing light nuclei from $A$ -nucleons at the phase-space positions $(\\mathbf {r}_i,\\mathbf {p}_i)$ ($i=1,\\dots ,A$ ) is described by the Wigner function of the spherical harmonic oscillator $W_A(\\lbrace \\mathbf {r}_i,\\mathbf {p}_i\\rbrace ^A_{i=1})=8^{A-1}\\exp [-\\frac{1}{2}\\sum ^{A-1}_{i=1}\\mathbf {Z}_i^2]$ .", "The Wigner function only depends on the relative distances $\\mathbf {Z}_i=\\sqrt{\\frac{i}{i+1}}(\\frac{1}{i}\\sum ^i_{j=1}\\mathbf {z}_j-\\mathbf {z}_{i+1})$ ($i = 1,\\dots ,A-1$ ) but not on the center-of-mass motion $\\mathbf {Z}_A=A^{-1/2}\\sum ^A_{i=1}\\mathbf {z}_i$ , where we redefined the phase-space variables as $\\mathbf {z}_i=\\sqrt{2}(\\mathbf {r}_i/\\sigma _A,\\sigma _A\\mathbf {p}_i)$ .", "This property stems from the fact that the nuclear interactions depend on the relative coordinates between the nucleons, namely the translational invariance of the system.", "The fact that the transform between the relative distances $\\mathbf {Z}_i$ and the nucleon positions $\\mathbf {z}_i$ is orthogonal will play an important role later.", "One of the significant consequences of the Wigner function written in the Gaussian form with respect to the relative distances is that the light nuclei constituted of different numbers of nucleons $A$ share the same structure in the case of Gaussian phase-space distributions $f_i(\\mathbf {r}_i,\\mathbf {p}_i)$ .", "To see this in a systematic manner, we expand the phase-space distribution by the phase-space cumulants [9]: $\\frac{f(\\mathbf {z}_i)}{N_p}&= \\rho (\\mathbf {z}_i) = \\int \\frac{d^6\\mathbf {t}_i}{(2\\pi )^6} e^{-\\mathrm {i}\\mathbf {t}_i\\cdot \\mathbf {z}_i}\\exp \\biggl [\\sum _{\\mathbf {\\alpha }\\in \\mathbb {N}_0^6} \\frac{\\mathcal {C}_{\\mathbf {\\alpha }}}{\\mathbf {\\alpha } !", "}(\\mathrm {i}\\mathbf {t}_i)^{\\mathbf {\\alpha }}\\biggr ],$ where $N_p = N_n = \\int d^6\\mathbf {z} f(\\mathbf {z})$ is the number of nucleons (where isospin asymmetry is neglected).", "The cumulant of the phase-space variable $\\mathcal {C}_{\\mathbf {\\alpha }}\\equiv \\langle \\mathbf {z}^{\\mathbf {\\alpha }}\\rangle _c$ is defined by the multi-index order $\\mathbf {\\alpha }\\in \\mathbb {N}_0^6$ , and $\\langle \\cdots \\rangle = (1/N_p)\\int d^6\\mathbf {z} \\cdots f(\\mathbf {z})$ represents the average over the phase-space under a single phase-space distribution $f(\\mathbf {r},\\mathbf {p})$ .", "When the distribution is sufficiently close to the Gaussian distribution, i.e., $\\mathcal {C}_{\\mathbf {\\alpha }}$ for $|\\mathbf {\\alpha }|\\ge 3$ are sufficiently small, the yield of light nuclei in Eq.", "(REF ) can be diagrammatically evaluated by the perturbation to the Gaussian integration and finally gives the form: $N_A &= g_A N_p \\biggl [\\frac{8 N_p}{\\sqrt{\\det (\\mathcal {C}_2 + \\mathcal {I}_6)}}\\biggr ]^{A-1}\\cdot [1 + \\mathcal {O} (\\lbrace \\mathcal {C}_{\\mathbf {\\alpha }}\\rbrace _{|\\mathbf {\\alpha }|\\ge 3})],$ Here, one can see that, at the lowest order of the perturbation determined by the 
second-order cumulants $\\mathcal {C}_2$ , the phase-space distribution $f(\\mathbf {r},\\mathbf {p})$ plays a similar role in light-nuclei yields of different mass numbers $A$ .", "Consequently, we may construct the ratios of the light-nuclei yields, which are fixed solely by $g_A$ (under the assumption of the common light-nuclei size $\\sigma _A\\equiv \\sigma $ ): $R_{A,B}^{1-B,A-1} =\\frac{N_p^{B-A} N_B^{A-1}}{N_A^{B-1}}&=\\frac{g^{A-1}_B}{g_A^{B-1}}[1+\\mathcal {O}(\\lbrace \\mathcal {C}_{\\mathbf {\\alpha }}\\rbrace _{|\\alpha | \\ge 3})],$ where the dominant contribution from the second-order phase-space cumulants is canceled out.", "Explicitly, the canceled second-order cumulants have the form: $\\mathcal {C}_2&=2\\begin{pmatrix}\\frac{\\langle \\mathbf {r}\\mathbf {r}^\\mathrm {T}\\rangle }{\\sigma ^2} &\\langle \\mathbf {r}\\mathbf {p}^\\mathrm {T}\\rangle \\\\\\langle \\mathbf {p}\\mathbf {r}^\\mathrm {T}\\rangle &\\sigma ^2 \\langle \\mathbf {p}\\mathbf {p}^\\mathrm {T}\\rangle \\end{pmatrix},$ where $\\langle \\mathbf {a}\\mathbf {b}^\\mathrm {T}\\rangle $ is the $3\\times 3$ matrix with the elements $\\langle a_i b_j\\rangle _c$ ($i,j=x,y,z$ ).", "The diagonal elements are the variances of coordinates $\\langle r^2_i\\rangle _c$ and momenta $\\langle p^2_i\\rangle _c$ , corresponding to the fireball size and the effective temperature of the nucleon spectrum, respectively.", "The non-diagonal elements are the correlation between $r$ and $p$ , which is related to the homogeneity length $l$ [10].", "To summarize, we can treat the fireball size, the homogeneity length, and the effective temperature as the relevant scales of the light-nuclei production." ], [ "Critical fluctuations in light-nuclei production", "For the light nuclei created in the vicinity of the critical point, the nucleons interact with the chiral field $\\sigma (\\mathbf {r})$ , and their masses are modified with a small deviation $\\delta m=g_\\sigma \\sigma $ to the leading order.", "Their phase-space distribution thus contains the correction term $\\delta f$  [11]: $f=f_0+\\delta f=f_0[1-g_\\sigma \\sigma /(\\gamma T)],$ where $f_0=f_\\sigma |_{\\sigma =0}$ denotes the background distribution without the critical contribution, and $\\gamma =\\sqrt{\\mathbf {p}^2+m^2}/m$ is the Lorentz factor.", "In addition to the contribution from the background $f_0$ , the polynomials of the correction term $\\delta f$ also play a role in the yield of light nuclei in Eq.", "(REF ).", "As the first step of the study, let us borrow the forms of the static critical correlators [11]: $&\\langle \\sigma (\\mathbf {r}_1)\\sigma (\\mathbf {r}_2)\\rangle _\\sigma =TD(\\mathbf {r}_1-\\mathbf {r}_2),\\\\&\\langle \\sigma (\\mathbf {r}_1)\\sigma (\\mathbf {r}_2)\\sigma (\\mathbf {r}_3)\\rangle _\\sigma =-2T^2\\lambda _3\\int d^3uD(\\mathbf {r}_1-\\mathbf {u})D(\\mathbf {r}_2-\\mathbf {u}) D(\\mathbf {r}_3-\\mathbf {u}),$ where the critical propagator $D(\\mathbf {r}_1-\\mathbf {r}_2)\\equiv \\frac{1}{4\\pi r}e^{-r/\\xi }$ is written by $r=|\\mathbf {r}_1-\\mathbf {r}_2|$ , the correlation length $\\xi $ , and the coupling constant $\\lambda _3$ for the 3-point correlator.", "$\\langle \\cdots \\rangle _\\sigma $ represents the event-by-event averaging over different realizations of the sigma field.", "The interaction with the sigma field induces the critical correlation, and the correlation length $\\xi $ affects the yield of light nuclei.", "This can be seen by using the characteristic function of the phase-space distribution with 
the critical contribution [12], where the final result takes the form: $\\langle N_A \\rangle _\\sigma =g_A8^{A-1}N^A_p[\\det (\\mathcal {C}_2+\\mathcal {I}_6)]^{-(A-1)/2}[1+\\tilde{\\Xi }(A)].$ Here we defined $\\tilde{\\Xi }(A)\\equiv \\sum _{b=2}^A(-1)^bC^b_A\\Xi (A,b)$ , where $C^b_A$ is the binomial coefficient and $\\Xi (A,b)\\sim g^b_\\sigma \\langle \\prod ^b_{j=1}\\sigma (\\mathbf {t}_{r,j})\\rangle _\\sigma $ is the critical contribution which encodes the critical correlation length $\\xi $ .", "Although the correlation length grows up to $\\xi = 1/m_\\sigma $ in static systems, with $m_\\sigma $ being the $\\sigma $ mass, the correlation length is limited to the order of 2–3 fm in heavy-ion collisions due to the rapid expansion of the system [13], which is typically much smaller than the relevant scales of the light-nuclei production encoded in $\\mathcal {C}_2$ .", "Considering the small value of $\\xi $ and the small critical regime on the QCD phase diagram, the effect of the correlation length on the individual light-nuclei yield shown in Eq.", "(REF ) is negligible.", "However, due to the similar structure related to the second-order phase-space cumulants $\\mathcal {C}_2$ in Eq.", "(REF ), combinations such as $\\tilde{R}(A,B)&\\equiv R^{1-B,A-1}_{A,B}-g^{A-1}_Bg^{-(B-1)}_A$ and $\\tilde{R}(A,B,D)\\equiv R^{1-B,A-1}_{A,B}-g^{A-1}_Bg^{-(A-1)(B-1)/(D-1)}_D[R^{1-D,A-1}_{A,D}]^{(B-1)/(D-1)}$ greatly suppress the contribution from the background scales in $\\mathcal {C}_2$ , which helps to isolate the effects related to the correlation length.", "Here, the definition of $R^{1-B,A-1}_{A,B}$ in Eq.", "(REF ) is adapted to the present case as $R^{1-B,A-1}_{A,B}\\equiv \\langle N_B\\rangle _\\sigma ^{A-1} \\langle N_A\\rangle ^{-(B-1)}_\\sigma N_p^{B-A}$ ." ], [ "Summary", "In this study, we discussed the light-nuclei production near the QCD critical point from the viewpoint of the relevant scales of the light-nuclei production $\\mathcal {C}_2$ and the scale of the critical correlation length $\\xi $ .", "We first decomposed the phase-space distribution $f(\\mathbf {r},\\mathbf {p})$ in terms of various orders of phase-space cumulants $\\mathcal {C}_{\\mathbf {\\alpha }}$ .", "Since the Wigner function in the coalescence model is approximately written in Gaussian form with respect to the relative phase-space distances of the constituent nucleons, the yields of light nuclei share a similar structure at the lowest order of the phase-space cumulants.", "We identified the relevant scales of the yield in the second-order cumulants: the fireball size, the homogeneity length, and the effective freeze-out temperature.", "The phase-space distribution of nucleons is modified by the interaction with the chiral field, which would be reflected in the yield of light nuclei.", "Naively, it would seem hard to separate the critical contributions from the relevant scales of the background phase-space distribution, but the structure of the lowest order of the phase-space cumulants in the yield enables us to construct combinations of light-nuclei yields to suppress the relevant scales encoded in the second-order phase-space cumulants and isolate the effects of the correlation length." ] ]
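As an aside (our own illustration, not part of the paper), the lowest-order Gaussian factor in the yield formulas above can be verified numerically for $A=2$ : for independent Gaussian phase-space variables, the Wigner-function average $8\\langle e^{-|\\mathbf{z}_1-\\mathbf{z}_2|^2/4}\\rangle $ reduces to $8/\\sqrt{\\det (\\mathcal {C}_2+\\mathcal {I}_6)}$ . The Monte Carlo sketch below assumes NumPy and an arbitrary positive-definite $\\mathcal {C}_2$ .

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 6))
C2 = M @ M.T / 6 + 0.5 * np.eye(6)          # a generic second-order cumulant matrix

# Monte Carlo estimate of 8 * <exp(-|z1 - z2|^2 / 4)> for independent z1, z2 ~ N(0, C2)
z1 = rng.multivariate_normal(np.zeros(6), C2, 200_000)
z2 = rng.multivariate_normal(np.zeros(6), C2, 200_000)
mc = 8 * np.mean(np.exp(-0.25 * np.sum((z1 - z2) ** 2, axis=1)))

closed = 8 / np.sqrt(np.linalg.det(C2 + np.eye(6)))
print(mc, closed)                            # the two agree up to Monte Carlo error
```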
2210.07841
[ [ "Lightweight Alpha Matting Network Using Distillation-Based Channel\n Pruning" ], [ "Abstract Recently, alpha matting has received a lot of attention because of its usefulness in mobile applications such as selfies.", "Therefore, there has been a demand for a lightweight alpha matting model due to the limited computational resources of commercial portable devices.", "To this end, we suggest a distillation-based channel pruning method for the alpha matting networks.", "In the pruning step, we remove channels of a student network having fewer impacts on mimicking the knowledge of a teacher network.", "Then, the pruned lightweight student network is trained by the same distillation loss.", "A lightweight alpha matting model from the proposed method outperforms existing lightweight methods.", "To show superiority of our algorithm, we provide various quantitative and qualitative experiments with in-depth analyses.", "Furthermore, we demonstrate the versatility of the proposed distillation-based channel pruning method by applying it to semantic segmentation." ], [ "Introduction", "The purpose of a natural image matting (i.e., alpha matting) is to estimate the transparency of the user-specified foreground in an image.", "The alpha matting is formally defined as follows [12]: $I = \\alpha F + (1-\\alpha ) B,$ where $I$ , $F$ and $B$ are the observed color image, foreground, background, respectively.", "Also, $\\alpha $ is transparency (i.e., alpha matte).", "The natural image matting is a highly ill-posed problem because it needs to estimate $F$ , $B$ , and $\\alpha $ simultaneously from an input color image $I$ and a trimap providing known foreground and background pixels.", "Traditional approaches for natural image matting are categorized into affinity-based and sampling-based methods.", "Affinity-based methods [43], [28], [55], [20], [27], [4], [11], [5], [1] propagate alpha values from known regions to pixels in unknown regions by analyzing statistical correlation among pixels.", "Meanwhile, sampling-based methods [48], [16], [19], [41], [40], [7], [25], [8] construct foreground and background color sample sets using pixels in known areas, then estimates alpha values in unknown regions.", "However, these algorithms often rely on strong assumptions such as local smoothness [28] or sparsity of foreground and background colors [48].", "Since the advent of large-scale image matting datasets such as Adobe-1k [50], deep learning-based matting algorithms been actively studied [10], [42], [9], [34], [35], [46], [2], [30], [37], [44].", "These methods outperform conventional ones remarkably.", "Usually, the alpha matting networks are based on U-Net [38] or fully-convolutional networks (FCN) [33].", "For better performance, the number of layers or channels can be increased and also auxiliary modules can be added to baseline networks.", "However, this leads to the increased computational costs and memory requirements that can be problematic for mobile applications.", "Recently, a lightweight alpha matting network based on similarity-preserving knowledge distillation (SPKD) [52] was introduced to resolve these issues.", "It successfully transfers similarities of features from the teacher network to the student network, which make the student network achieves much better performance than the baseline student network trained from scratch.", "However, it is still an open problem that which architecture is the best one for the lightweight student network for natural image matting.", "It can be seen from 
the left of Figure REF that the performance varies greatly depending on which layer the channels are removed from.", "Note that the high-level pruned model has fewer parameters than the low-level pruned model.", "Also, as shown in the right of Figure REF , there is a trade-off between performance and model size; thus, it is important to find a proper network architecture.", "To find the optimal lightweight network architecture, various network pruning techniques can be applied.", "Although network pruning has been actively studied in the field of classification, it has not been dealt with much in reconstruction problems, including alpha matting.", "Recently, channel pruning methods for the semantic segmentation task were introduced in [6], [21], but they mainly focus on preserving high-level semantic information rather than the low-level fine structures that are crucial for the natural image matting problem.", "To focus on low-level fine details during the channel pruning, we borrow the power of a pre-trained high-performance matting network that preserves fine details well.", "In other words, in this paper, we present a distillation-based channel pruning method that removes the channels that have little impact on mimicking the pre-trained teacher network.", "In the pruning phase, we induce the sparsity of the scaling factor of the batch normalization (BN) layer as in [31], [6], [21] and additionally apply a distillation loss with a powerful pre-trained teacher model that is capable of precisely guiding a student network to preserve fine structural details in its prediction.", "In the training phase, we train the pruned lightweight network by the same distillation loss used in the pruning stage.", "Note that the proposed method can make an existing lightweight model (i.e., IndexNet) even lighter.", "Our contributions can be summarized as follows.", "(i) We introduce a novel channel pruning method for the natural image matting problem.", "To the best of our knowledge, this is the first attempt to apply the network pruning technique to the alpha matting problem.", "(ii) By utilizing a distillation loss within the channel pruning step, we succeed in finding a lightweight alpha matting network that can recover fine details.", "(iii) Our pruned network outperforms other baseline pruning approaches on two publicly available alpha matting datasets (Adobe-1K, Distinctions-646) while having a comparable number of parameters.", "In addition, we provide various ablation studies for a deeper understanding.", "Most image matting techniques can be categorized into affinity-based and color sampling-based methods.", "In affinity-based methods, statistical affinity is analyzed among the local and non-local neighbors to propagate alpha values from the known regions to the unknown areas.", "Levin et al.", "[28] introduced the closed-form solution based on the matting Laplacian using the linear color model.", "For handling high-resolution images, He et al.", "[20] proposed an efficient method to solve a large-kernel matting Laplacian.", "Furthermore, Lee and Wu [27] introduced non-local matting, propagating alpha values across non-local neighboring pixels.", "Chen et al.", "[4] suggested KNN matting, which uses only $k$ -nearest non-local neighbors to propagate alpha values.", "In addition, Chen et al.", "[5] utilized both local and non-local smoothness priors, and Aksoy et al.", "[1] proposed multiple definitions of affinity for natural image matting.", "The color sampling-based methods find foreground and background colors from constructed color
sample sets, then estimate alpha values in unknown regions.", "Bayesian matting [12] utilizes statistical models to analyze pixels in unknown regions.", "Robust matting [48], shared matting [16] and weighted color and texture matting [41] select the best color samples based on their own designed cost functions that take into account spatial, photometric, or texture information.", "He et al.", "[19] proposed a randomized searching method to use global samples in the known areas to find the best combination of foreground and background colors.", "Shahrian et al.", "[40] constructed comprehensive color sample sets to cover broad color variations using a Gaussian Mixture Model (GMM).", "Karacan et al.", "[25] chose foreground and background colors based on sparse representation.", "After large-scale alpha matting datasets were published [50], [37], a lot of deep learning-based works have been introduced.", "Xu et al.", "[50] proposed a simple two-stage network for natural image matting.", "Lutz et al.", "[35] applied adversarial training for obtaining visually appealing alpha matte results.", "To preserve details of alpha mattes, Lu et al.", "[34] introduced IndexNet, which includes an indices-guided unpooling operation.", "In addition, contextual attention [30] and hierarchical attention [37] mechanisms were proposed for the matting problem.", "Yu et al.", "[53] proposed mask-guided matting leveraging a progressive refinement network with a general coarse mask as guidance.", "Although the performance of alpha matting has been substantially improved, there are still not many studies on lightening alpha matting networks.", "Recently, Yoon et al.", "[52] succeeded in utilizing knowledge distillation (KD) to obtain a lightweight deep learning model for alpha matting.", "They reduce the number of channels with a fixed ratio; therefore, the optimal channel reduction ratio should be determined empirically."
], [ "Network Pruning", "The purpose of the network pruning is to reduce redundancies in the over-parameterized deep convolutional neural network (CNN) models for fast run-time while maintaining performance.", "In general, network pruning is divided into unstructured pruning [18], [36], [14], [45] which requires special libraries or hardware, and structured pruning [49], [29], [3], [31] which is relatively easy to implement.", "In this subsection, we focus on structured pruning that is more relevant to our work.", "Wen et al.", "[49] proposed a Structured Sparsity Learning (SSL) method to sparsify structures including filters, channels, or layers by using group sparsity regularization.", "Li et al.", "[29] introduced a method to remove channels having small incoming weights in a trained deep CNN model.", "Changpinyo et al.", "[3] deactivate connections between filters in convolutional layers to obtain smaller networks.", "Liu et al.", "[31] proposed the network slimming method to explicitly impose channel-wise sparsity in the deep CNN model using scaling factors in batch normalization.", "Gao et al.", "[15] proposed a feature boosting and suppression (FBS) method to dynamically remove and boost channels according to the inputs using auxiliary connections.", "Despite many pruning studies, most of them focus on the classification task.", "Fortunately, pruning techniques for semantic segmentation have begun to be introduced recently.", "Chen et al.", "[6] suggested a channel pruning method for semantic segmentation based on multi-task learning.", "Furthermore, He et al.", "[21] proposed context-aware channel pruning method by leveraging the layer-wise channels interdependency.", "However, pruning researches for matting network have not been addressed yet.", "Since the estimation of the fine structures in alpha mattes are the most important objective of the matting network, a powerful pruning technique suitable for this purpose is strongly required.", "To this end, we present a distillation-based channel pruning technique that exploits a powerful pre-trained alpha matting model suitable for recovering low-level fine details.", "Figure: In the pruning stage, the student network is lightweighted using scaling factor sparsification loss and distillation loss with pre-trained teacher model.", "In the training stage, the same distillation loss is used to train the pruned student network." ], [ "Proposed Approach", "In this section, we briefly describe the basics of KD and motivation of using KD for network pruning.", "Then, we introduce the distillation-based channel pruning method for sparsifying alpha matting network, and explain a method for training pruned lightweight model with KD.", "We use the same distillation method for both pruning and training stages, even though it is also possible to utilize different methods.", "Related experiments are provided by the ablation studies.", "Overview of our distillation-based channel pruning and training is illustrated in Figure REF ." 
], [ "Knowledge Distillation", "Knowledge distillation (KD) [23] is a technique supervising a small student model by a larger teacher model.", "The main purpose of KD is to transfer rich feature representations of the large model trained by the huge amount of data into the small model.", "Therefore, it is very useful when there is a lack of training data or limited computational resources and memory of the devices.", "Mathematically, feature maps of the teacher and student networks in $i$ -th layer are denoted as $F^{t}_{i}\\in \\mathbb {R}^{C^{t}_{i}\\times H^{t}_{i}\\times W^{t}_{i}}$ and $F^{s}_{i}\\in \\mathbb {R}^{C^{s}_{i}\\times H^{s}_{i}\\times W^{s}_{i}}$ , respectively.", "Note that $\\left\\lbrace C^{t}_{i}, C^{s}_{i} \\right\\rbrace $ are the number of channels, $\\left\\lbrace H^{t}_{i}, H^{s}_{i} \\right\\rbrace $ and $\\left\\lbrace W^{t}_{i}, W^{s}_{i} \\right\\rbrace $ represent the spatial size.", "Generally, the distillation loss for each layer is formulated as $\\mathcal {L}_{KD}(F^{t}_{i}, F^{s}_{i}) = \\mathcal {L}_{F}(\\Phi _{t}(F^{t}_{i}),\\Phi _{s}(F^{s}_{i})),$ where, $\\mathcal {L}_{F}(\\cdot )$ is a similarity function, $\\Phi _{t}(\\cdot )$ and $\\Phi _{s}(\\cdot )$ are feature transform functions for the teacher and student networks.", "According to the purpose of distillation, appropriate $\\mathcal {L}_{F}(\\cdot )$ , $\\Phi _{t}(\\cdot )$ and $\\Phi _{s}(\\cdot )$ should be designed." ], [ "Motivation", "Over the recent years, various KD methods have been introduced [23], [51], [47], [22], [17], but most of them arbitrarily set the architecture of the student network.", "Therefore, they do not ensure whether the student network is optimal for both distillation and the given tasks.", "For example, the importance of each channel in the layers of a deep CNN model may be different, therefore, reducing the number of channels uniformly for all the layers is sub-optimal obviously.", "We believe that it is also important for the alpha matting task to find the optimal student model.", "To confirm this, we perform a preliminary experiment using GCA [30] as a baseline matting model.", "First, we divide the encoder of GCA model into two groups: low-level layers (conv1-conv3) and high-level layers (conv4-conv5).", "Then, we apply uniform 50% channel pruning to low-level and high-level layers separately, and then obtain two different pruned networks.", "The ratios of the removed channel parameters to the whole encoder are 12.75% in low-level and 37.25% in high-level layers, respectively.", "In other words, more parameters are eliminated from the high-level layers rather than the low-level ones.", "Using these two uniformly pruned GCA models, we verified the alpha matting performance on the Adobe-1k dataset [50].", "Table: The first row: model with channels removed from low-level layers.", "The second row: model with channels removed from high-level layers.As reported in Table REF , the model removing channels of high-level layers (the second row) prunes more channels than the model removing channels of low-level layers (the first row) but achieves better performance.", "The number of network parameters is also much less.", "As a result, it can be seen that the channels of low-level layers have more influence on the alpha matting performance.", "Regarding that the high-level pruned network has achieved better performance, this result implies that the channel distributions of the original and optimal pruned models might be different significantly and the low-level layers 
are highly important in the alpha matting problem.", "Therefore, in this paper, we propose a method to find a student network model that can well receive low-level knowledge from a large-capacity teacher network.", "To this end, we introduce a distillation-aware pruning loss in the pruning stage to create an optimal lightweight network for alpha matting." ], [ "Pruning with KD", "Inspired by [31], we adopt a channel pruning method based on the sparsification of scaling factors in batch normalization (BN) layers.", "The BN layer is used in most deep CNN models for better generalization and fast convergence.", "Formally, the BN layer is defined as follows: $y = \\gamma \\frac{x-\\mu }{\\sqrt{\\sigma ^{2}+\\epsilon }} + \\beta ,$ where $x$ and $y$ are the input and output of the BN layer, $\\mu $ and $\\sigma $ are the mean and standard deviation of the input mini-batch features, and $\\epsilon $ is a small constant.", "$\\gamma $ and $\\beta $ are the learnable scaling and shifting factors.", "In [31], the scaling factor $\\gamma $ in the BN layer is considered a measure of the importance of each channel.", "In other words, a channel with a very small $\\gamma $ is regarded as one that does not contribute significantly to the final prediction.", "Therefore, enforcing sparsification on the scaling factors eases the identification of prunable channels.", "Similarly, our pruning method trains a target student network with a sparsification loss and a distillation loss, and then removes channels with small scaling factors in BN layers.", "We adopt the same alpha matting model for both teacher and student networks.", "For the network pruning, only parameters of the student network are updated while those of the teacher network are fixed.", "The final loss includes alpha prediction, channel sparsification, and distillation losses as follows: $\\mathcal {L}_{P} = \\lambda _{1}\\mathcal {L}_{\\alpha }(\\alpha _\\mathrm {s},\\alpha _\\mathrm {gt}) + \\lambda _{2}\\mathcal {L}_{\\alpha }(\\alpha _\\mathrm {s},\\alpha _{\\mathrm {t}}) + \\lambda _{3}\\sum _{\\gamma \\in \\zeta } \\left| \\gamma \\right| + \\lambda _{4}\\sum _{i\\in \\eta }{\\mathcal {L}_{KD}(F^{t}_{i}, F^{s}_{i})},$ where $\\mathcal {L}_{\\alpha }(\\cdot )$ is the vanilla alpha prediction loss introduced in [50], and $\\alpha _\\mathrm {s}$ , $\\alpha _\\mathrm {t}$ , $\\alpha _\\mathrm {gt}$ are the alpha matte predictions of the student network, those of the teacher network, and the ground truth, respectively.", "$\\zeta $ is the set of scaling factors over all the BN layers and $\\eta $ is the index set of layers utilized for the distillation loss.", "$\\lambda _{1}$ , $\\lambda _{2}$ , $\\lambda _{3}$ , and $\\lambda _{4}$ are balancing factors for each term.", "Note that the distillation loss is used only for the encoder part.", "In (REF ), the gamma value $\\gamma $ corresponding to the importance score can be estimated significantly differently depending on the distillation loss, which means that different pruned networks can be created.", "Thus, we adopt several recent KD methods to be utilized in the proposed channel pruning method as follows: Neuron Selectivity Transfer (NST): Huang and Wang proposed NST [24], which aligns the distribution of spatial neuron activations between teacher and student networks.", "To this end, NST minimizes the maximum mean discrepancy (MMD) distance between activations of the teacher and student networks.", "Thus, $\\Phi (\\cdot )$ in (REF ) is a kernel function that projects samples into a higher
dimensional feature space.", "Also, $\\mathcal {L}_{F}(\\cdot )$ is the distance (i.e., the $L_{2}$ distance) between the means of the projected features of the teacher and student networks.", "Overhaul of Feature Distillation (OFD): Heo et al.", "[22] investigated various aspects of the existing feature distillation methods and suggested OFD, a simple but effective distillation method.", "In particular, $\\Phi _{t}(\\cdot )$ in (REF ) is a margin ReLU function while $\\Phi _{s}(\\cdot )$ is a regressor consisting of a 1$\\times $ 1 convolution layer.", "Also, $\\mathcal {L}_{F}(\\cdot )$ in (REF ) is a partial $L_{2}$ distance.", "Similarity-Preserving Knowledge Distillation (SPKD): The SPKD-based distillation method makes the pairwise similarity of the student network similar to that of the teacher network.", "In [47], batch similarity for the classification task is used, while spatial and channel similarity for the regression task is utilized in [52], [26].", "Thus, $\\Phi (\\cdot )$ in (REF ) is a function producing pairwise similarities and $\\mathcal {L}_{F}(\\cdot )$ is the $L_{2}$ distance.", "After training with the distillation loss as in (REF ), we prune the target student network based on the scaling factors of the BN layers.", "The smaller the scaling factor is, the less impact it has on the output of the layer; thus, we remove the channels whose scaling factors are below a threshold.", "To eliminate $M$ channels, we adopt the $M$ -th smallest scaling factor as the threshold.", "At this point, thresholds of the encoder and decoder are obtained separately since the distillation loss is only used in the encoder.", "After pruning, we obtain a compact lightweight alpha matting network that is well suited to recovering fine details via KD." ], [ "Training with KD", "By the aforementioned distillation-based channel pruning, the architecture of a lightweight student network can be obtained.", "In [32], the network structure itself is considered more important than the remaining parameters after pruning.", "In other words, the fine-tuned model and the model trained from scratch achieve similar results, or the model trained from scratch even performs better.", "Thus, we train the pruned network from scratch again by applying KD using the teacher network, based on the loss function defined as follows: $\\mathcal {L}_{T} = w_{1}\\mathcal {L}_{\\alpha }(\\alpha _\\mathrm {ps},\\alpha _\\mathrm {gt}) + w_{2}\\mathcal {L}_{\\alpha }(\\alpha _\\mathrm {ps},\\alpha _{\\mathrm {t}}) + w_{3}\\sum _{i\\in \\eta }{\\mathcal {L}_{KD}(F^{t}_{i}, F^{ps}_{i})},$ where $\\alpha _\\mathrm {ps}$ is a prediction of the pruned student network and $F^{ps}_{i}$ denotes the feature maps in the $i$ -th layer of the pruned student network.", "$w_{1}$ , $w_{2}$ , and $w_{3}$ are balancing factors for each term in (REF ).", "Unlike (REF ), the sparsification loss is not included, and the pruned student network is used.", "We use the same distillation loss as in the pruning step, but other distillation losses can be used.", "Table: Quantitative evaluation by the GCA-50% model on the benchmark.", "Figure: Qualitative image matting results by GCA.", "(a) Input images.", "(b) Ground truths.", "(c) UNI.", "(d) NS.", "(e) CAP.", "(f) NS-SPKD.", "(g) Ours-SPKD.", "Table: Quantitative results by the DIM-50% model on the Adobe-1k.", "Table: Quantitative results by the IndexNet-25% model on the Adobe-1k.", "Table: Ablation study on various combinations of distillation and pruning methods.", "All evaluations are conducted by the GCA-50% model on the Adobe-1k.", "Table: Results according to
various pruning ratios.", "All evaluations are conducted by the GCA model with SPKD on the Adobe-1k.", "Table: Results of training our pruned model from scratch without distillation.", "Table: Comparisons of Running Time (RT) per image on the DIM model.", "In this section, we evaluate the proposed distillation-based channel pruning method both quantitatively and qualitatively.", "We validate our method on various teacher models including GCA [30], DIM [50], and IndexNet [34], and also provide various ablation studies.", "Finally, we show that the proposed algorithm can be utilized for other tasks such as semantic segmentation." ], [ "Implementation Details", "In most experiments, we adopt GCA matting as a baseline alpha matting network.", "In order to evaluate our distillation-based channel pruning method, we use two public benchmark datasets: Adobe-1k [50] and Distinctions-646 [37].", "Since the Distinctions-646 test set does not provide official trimaps, we generate trimaps from the ground truth alpha mattes using dilation with kernel size 10.", "The evaluation metrics for all quantitative experiments are mean squared error (MSE), sum of absolute difference (SAD), gradient error (Grad), connectivity (Conn), the number of network parameters (#Param) and floating-point operations (FLOPs).", "We use activations of the last four layers in the encoder for computing the distillation loss as in [52] for a fair comparison." ], [ "Quantitative Comparisons", "We quantitatively verify our distillation-based channel pruning and training methods on the Adobe-1k and Distinctions-646 datasets.", "We adopt the aforementioned NST [24], OFD [22], and SPKD [52] as KD methods for both the pruning and training stages.", "For comparison, uniform channel pruning (UNI), network slimming (NS) [31], feature boosting and suppression (FBS) [15], and context-aware pruning (CAP) [21] are chosen.", "As reported in Table REF , the numbers of parameters and FLOPs of all student networks are much smaller than those of the teacher network (about 16-25% parameters and 60% FLOPs).", "Although our pruned model sometimes has more parameters or more FLOPs than the other pruned models (UNI, NS, FBS and CAP), its alpha matting performance is far superior to theirs.", "Also, our distillation-based channel pruning method achieves better performance than NS regardless of the distillation type.", "Note that we utilize the same KD method in the training step for both our pruning method and NS.", "Usually, performance is slightly higher when SPKD is used than when OFD is used.", "However, when NST is used, the performance is lower than that of the existing pruning methods that do not include KD in the training step.", "This indicates that the type of distillation loss is also an important factor for both the pruning and training.", "To verify the generality of our method, the same experiments are performed using DIM and IndexNet as backbone models instead of GCA matting.", "Similar to the case of GCA matting, the best performance is achieved with SPKD, as reported in Table REF and Table REF .", "A difference from the case of GCA matting is that comparable performance was achieved even when using NST.", "Note that the original IndexNet is already a lightweight model because it is based on MobileNetv2 [39], but it can be made even lighter by applying our channel pruning."
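For concreteness, a sketch of the training-stage objective $\\mathcal {L}_{T}$ with SPKD as the distillation term is given below. This is our own illustration rather than the authors' code: the spatial-similarity form of SPKD (chosen because it stays well-defined when teacher and student channel counts differ after pruning) and the simple L1 stand-in for the vanilla alpha prediction loss of [50] are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def spkd_loss(ft, fs):
    """Similarity-preserving KD: match normalized pairwise spatial similarities."""
    def gram(x):                      # x: (B, C, H, W) -> normalized (B, HW*HW) similarity
        x = x.flatten(2)              # (B, C, HW)
        g = x.transpose(1, 2) @ x     # (B, HW, HW) spatial-similarity matrix
        return F.normalize(g.flatten(1), dim=1)
    return (gram(ft) - gram(fs)).pow(2).mean()

def training_loss(a_ps, a_t, a_gt, feats_t, feats_ps, w=(1.0, 1.0, 1.0)):
    """L_T = w1 * L_alpha(pred, gt) + w2 * L_alpha(pred, teacher) + w3 * sum of KD terms."""
    l_alpha = lambda p, q: (p - q).abs().mean()     # stand-in for the alpha loss of [50]
    kd = sum(spkd_loss(ft, fs) for ft, fs in zip(feats_t, feats_ps))
    return w[0] * l_alpha(a_ps, a_gt) + w[1] * l_alpha(a_ps, a_t) + w[2] * kd
```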
], [ "Qualitative Comparisons", "Figure REF shows the qualitative performance of our method.", "We compare our results obtained using SPKD with results from existing pruning algorithms.", "Examples contain various object structures: short hair, overlapped color distribution ($squirrel$ ) and transparency ($glass$ ).", "As expected, the results of the existing pruning methods are over-smoothed as shown in the ($glass$ ) example of Figure REF -(d,e).", "In this example, UNI produces a better result than NS and CAP.", "Overall, results using distillation loss in the pruning step (Figure REF -(f)) show stable and visually pleasing predictions.", "Moreover, our final results using the SPKD in both pruning and training step (Figure REF -(g)) provide the best predictions with fine details preserved." ], [ "Ablation Studies", "Different Distillation for Pruning and Training.", "Since it is possible to utilize different distillation losses for pruning and training stages, it is meaningful to explore whether it is better to use different distillation losses in pruning and training steps or to use the same distillation loss.", "To this end, we performed experiments on all combinations of NST, OFD, and SPKD in the pruning and training phases.", "As reported in Table REF , we can achieve better performance when the same distillation loss is used in the pruning and training phases.", "Even more, in the student model pruned with NST, it is better to use NST in the training stage than OFD and SPKD, which are more advanced distillation techniques.", "These results are reasonable because the student network architecture obtained by a specific KD method will have a high chance to be more effective for the same distillation method than the other ones.", "Pruning Ratio.", "We analyze our distillation-based pruning according to pruning ratios.", "We compare the results of the model in which the number of channels is reduced by 30$\\%$ , 50$\\%$ , and 70$\\%$ using the our method, and the model uniformly reduced in the same proportion.", "For all cases, we use the same SPKD as distillation loss for training step.", "As in Table REF , the pruned models whose channels are reduced by 70$\\%$ and 50$\\%$ using our method achieve better performance and fewer network parameters than the model uniformly pruned by 50$\\%$ .", "Training from Scratch without KD.", "To analyze the effect of paired distillation loss for both the pruning and training stages, we train our pruned model from scratch without KD.", "As reported in Table REF , our pruned model achieves slightly worse performance than models pruned by UNI, NS, and CAP when trained without KD in the training phase.", "Therefore, we conclude that our distillation-based channel pruning is more beneficial when it is combined with the proper distillation method during the training.", "Running Time We measure the running time of the each pruned model using Adobe-1k dataset.", "As reported in Table REF , the performance (MSE, SAD) of our method with SPKD is quite close to those of unpruned teacher model while it runs twice faster than the teacher model.", "The existing methods (UNI, NS, CAP) are faster than our method, but the performance (MSE, SAD) is very poor.", "Figure: Example of semantic segmentation results on PASCAL VOC2012 validation set.", "(a) Input images.", "(b) Ground truths.", "(c) NS.", "(d) CAP.", "(e) NS-OFD.", "(f) Ours-OFD.Table: Semantic segmentation results by 50%\\% pruned PSPNet-50.Application on Semantic Segmentation.", "Our distillation-based 
channel pruning technique is applicable not only to alpha matting but also to other tasks.", "Therefore, in this subsection, we verify whether the proposed method is effective for semantic segmentation.", "For experiments, we adopt PSPNet-50 [54] as a baseline model and test our method on the PASCAL VOC 2012 validation set [13].", "We utilize the mean Intersection over Union (mIoU) and pixel accuracy (Acc.)", "as evaluation metrics.", "As reported in Table REF , the proposed distillation-based channel pruning method achieves superior performance compared to the other existing channel pruning methods.", "Note that the performance of the model pruned by NS is similar to that of our method when KD is applied, but the model pruned by our method has far fewer parameters.", "Also, as shown in Figure REF , our channel pruning method produces visually more pleasing results compared to the other channel pruning techniques." ], [ "Conclusion", "We have proposed a distillation-based channel pruning method for lightening a deep image matting network.", "In the pruning step, we train a student network that has the same architecture as the teacher network using the distillation-based sparsification loss.", "Then, we remove channels that have low scaling factors in the BN layers.", "Finally, we train the pruned student network using the same distillation loss utilized in the pruning step.", "Experimental results demonstrate that our distillation-based channel pruning method successfully reduces the number of parameters.", "The lightweight network obtained by the proposed method achieves significantly better performance than other lightweight networks with similar capacity.", "We analyze the proposed channel pruning technique through extensive ablation studies." ], [ "Acknowledgement", "This work was partly supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. RS-2022-00155857, Artificial Intelligence Convergence Innovation Human Resources Development (Chungnam National University)) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT, No. 2021R1A4A1032580 and No. 2022R1C1C1009334)." ] ]
2210.07760
[ [ "Theory of Laser-Assisted Nuclear Excitation by Electron Capture" ], [ "Abstract The interplay of x-ray ionization and atomic and nuclear degrees of freedom is investigated theoretically in the process of laser-assisted nuclear excitation by electron capture.", "In the resonant process of nuclear excitation by electron capture, an incident electron recombines into a vacancy in the atomic shell with simultaneous nuclear excitation.", "Here we investigate the specific scenario in which the free electron and the required atomic shell hole are generated by an x-ray free electron laser pulse.", "We develop a theoretical description based on the Feshbach projection operator formalism and consider numerically experimental scenarios at the SACLA x-ray free electron laser.", "Our numerical results for excitation of the 29.2 keV nuclear state in $^{229}\\text{Th}$ and the 14.4 keV M\\\"ossbauer transition in $^{57}\\text{Fe}$ show low excitation rates but strong enhancement with respect to direct two photon nuclear excitation." ], [ "Introduction", "Typical energies of electromagnetic transitions in nuclei span over a wide range from a few keV to hundreds MeV.", "Transitions at the lower energy limit are of special interest due to their narrow width of $10^{-7}$ down to $10^{-11}$  eV or less.", "The main application of this property, in conjunction with recoilless photon absorption and reemission, has been the method of Mössbauer spectroscopy widely used in material science, geology, chemistry and biology [1], [2], [3].", "Nuclear excitation can occur via photon absorption, using either radioactive Mössbauer sources as in the original experiment of Mössbauer [1], or synchrotron radiation in nuclear forward scattering or grazing incidence [4], or most recently radiation from x-ray free electron lasers (XFEL) [5].", "The commissioning and operation of the first XFELs with photon energies up to 20 keV at the SACLA facility [6] benefit the emerging field of x-ray quantum optics [7].", "An alternative and less investigated possibility to address low-lying nuclear transitions is the mechanism of nuclear excitation by electron capture (NEEC) theoretically considered e.g.", "in Refs.", "[8], [9].", "In the resonant process of NEEC, an incident electron recombines into a vacancy in the atomic shell with simultaneous nuclear excitation.", "This is the time-reversed process of internal conversion (IC), in which nuclear excitation is not released with an irradiated photon but is transferred to an atomic electron, which leaves the atom.", "Just recently, NEEC has been experimentally observed [10], giving rise to quite some controversy in following theoretical and experimental works [11], [12], [13], [14].", "Typically, the free electrons required in the process of NEEC are obtained for instance from laser-generated plasmas [15], [16], [17], [18] or passing the atoms through a solid target [10], [11], [12].", "In this work we consider a different possibility in an extension of the NEEC process, in which the impact electron stems from the atomic cloud surrounding the nucleus to be excited.", "Expelling of this electron from the electronic shell is achieved by an x-ray photon generated at an XFEL facility.", "The continuum electron is then captured to a vacant bound state with simultaneous excitation of the nucleus.", "We refer to this process as to laser-assisted nuclear excitation by electron capture (LANEEC) proposed for the first time in Ref. 
[19].", "In this simplest form involving one external photon, LANEEC has been recently considered in Ref.", "[20] (denoted by the authors as “electronic bridge excitation via continuum”) for excitation of the $[229m]{Th}$ nuclear isomer using an optical laser.", "The process has been theoretically described by means of perturbation theory [20] and scattering theory [21].", "In this work we consider LANEEC with one or two x-ray photons based on the Feshbach projection operator method in the form described in Ref.", "[22] which provides a unified description to all orders for the decay channels of the involved states.", "In the simplest LANEEC scenario with one x-ray photon, the electronic path starts from a fully occupied inner shell and ends in a vacant outer shell.", "The electron partially takes over the energy carried by the exciting photon and the latter has to be therefore larger than the nuclear transition energy.", "Apart from this “pure” LANEEC, we consider in this work two improved LANEEC versions with an additional x-ray photon.", "They both allow usage of photons at lower energies and thus addressing nuclear transitions with energies lying beyond the range achievable today at XFEL facilities.", "The two considered scenarios are depicted in Fig.", "REF .", "The nuclear transition in both cases is shown in the right graph (red arrow).", "The electronic part in the first LANEEC version is depicted in the left graph.", "Here an x-ray photon expels an electron from a deep-lying inner shell creating a vacancy (lower yellow arrow).", "Another photon promotes another atomic electron into a continuum state and induces the ordinary one-photon LANEEC process with final electronic state in the vacancy created by the first photon (upper yellow and red arrows).", "In the second scenario with the electronic part depicted in the middle graph, an inner-shell electron absorbs two photons and is promoted to a continuum state (yellow arrows) with further capture to the created vacancy (red arrow).", "Here the first excitation step and the one-photon LANEEC step are combined at the level of amplitudes to yield the amplitude of the compound LANEEC process.", "In order to distinguish between the two processes, we refer to them in the following as “LANEEC with additional hole” and “two-photon LANEEC”, respectrively, stressing the fact, that the electron involved directly in LANEEC experiences a two-photon transition only in the latter version.", "Figure: (Color online) Schematic illustration of two LANEEC versions with two photons.", "The nuclear transition in both cases is shown in the right graph (red arrow).", "In the first version referred to as “LANEEC with additional hole” (left graph), an x-ray creates a vacancy in an inner shell by expelling an electron (lower yellow arrow).", "Another photon induces the ordinary one-photon LANEEC with final electronic state in the created vacancy (upper yellow and red arrows).", "In the second version referred to as “two-photon LANEEC”, (middle graph) an inner-shell electron absorbs two photons and is promoted to a continuum state (yellow arrows) which decays to the created vacancy (red arrow) with excitation of the nucleus.We consider concrete numerical examples for each of the three LANEEC processes in an experimental implementation at the SACLA facility [6], including excitation of the 29.2 keV nuclear state in $[229]{Th}$ and the 14.4 keV Mössbauer transition in $[57]{Fe}$ .", "The calculations show that due to small excitation rates the considered schemes 
are very challenging or impractical today but may be of interest for future applications.", "The latter conclusion is supported by numerical results for the LANEEC versions with two photons, which show the rates many orders of magnitude larger than the rate of direct excitation of the same nuclear transition with two photons.", "The article is structured as follows.", "In Section  we apply the Feshbach projection operator formalism and derive a general expression for the matrix element of the transition operator describing the LANEEC excitation with one incident photon.", "In Section  we obtain expressions for the amplitude and the rate of the aforementioned process.", "The obtained rate is adopted also for LANEEC with an additional hole.", "In Section  we obtain the expressions for the amplitude and the rate of two-photon LANEEC.", "In Section  we consider concrete numerical examples for each LANEEC scenario.", "In the final Section  we discuss conclusions following from the obtained results.", "Atomic units $\\hbar = e = m_e = 1$ are used unless otherwise stated." ], [ "Matrix element of transition operator", "In this section we develop a very general theoretical description of the LANEEC process based on the Feshbach projection operator formalism.", "According to this approach, we separate the total Hilbert space of the system states into mutually orthogonal subspaces $Q$ , $R$ and $P$ .", "The subspace $Q$ consists of the states with all electrons bound in the atomic shell and no photons present; states from $R$ describe the system with all electrons bound plus one photon; $P$ consists of the states with no photons, one electron in a continuum state and the other electrons bound.", "We denote the basis states in $Q$ , $R$ and $P$ as $\\mathinner {|{\\beta }\\rangle }$ , $\\mathinner {|{f \\omega }\\rangle }$ and $\\mathinner {|{\\alpha \\varepsilon }\\rangle }$ , respectively, where the first symbol denotes the set of all discrete quantum numbers characterizing the state, and the second one (for the states from $R$ and $P$ ) is the total energy of the system, which is a continuous quantity due to presence of a photon or a continuum electron.", "We assume the following normalization $\\mathinner {\\langle {\\beta | \\beta ^{\\prime }}\\rangle } &=& \\delta _{ \\beta \\beta ^{\\prime } } \\;,\\\\\\mathinner {\\langle { f \\omega | f^{\\prime } \\omega ^{\\prime } }\\rangle } &=& \\delta _{f f^{\\prime }} \\delta ( \\omega -\\omega ^{\\prime } )\\;,\\\\\\mathinner {\\langle { \\alpha \\varepsilon | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } &=& \\delta _{\\alpha \\alpha ^{\\prime }} \\delta ( \\varepsilon - \\varepsilon ^{\\prime } )\\;,$ where each Kronecker symbol $\\delta _{xx^{\\prime }}$ implies that all quantum numbers in the sets $x$ and $x^{\\prime }$ are equal.", "The projection operators $\\hat{Q}$ , $\\hat{R}$ and $\\hat{P}$ on the corresponding subspaces are given by the expressions $\\hat{Q}&=& \\sum _\\beta \\mathinner {|{\\beta }\\rangle } \\mathinner {\\langle {\\beta }|} \\;\\;,\\\\\\hat{R}&=& \\sum _f \\int d\\omega \\mathinner {|{f \\omega }\\rangle } \\mathinner {\\langle {f \\omega }|} \\;, \\\\\\hat{P}&=& \\sum _\\alpha \\int d\\varepsilon \\mathinner {|{ \\alpha \\varepsilon }\\rangle } \\mathinner {\\langle { \\alpha \\varepsilon }|} \\;, $ and satisfy the orthogonality condition $\\hat{Q}\\hat{R}= \\hat{R}\\hat{P}=\\hat{P}\\hat{Q}= 0$ .", "In the current description we neglect contributions from states lying outside the introduced subspaces and assume 
the completeness condition $\\hat{Q}+ \\hat{R}+ \\hat{P}= {1}\\;,$ where 1 is the identity operator.", "The Hamiltonian describing the dynamics of the system can be represented in the form $\\hat{H}= \\hat{H}_0 + \\hat{V}$ .", "Here $\\hat{V}$ is the interaction term leading to transitions between states from different subspaces $Q$ , $R$ and $P$ , which consist of eigenstates of the unperturbed Hamiltonian $\\hat{H}_0$ .", "We note that $\\hat{H}_0$ incorporates interactions which do not mix the states from $Q$ , $R$ and $P$ , such as the static Coulomb interaction between the electrons and the nucleus.", "In the following we use Green's operators with the complex energy variable $z$ as the argument $\\hat{G}_0(z)&=&(z-\\hat{H}_0)^{-1} \\;,\\\\\\hat{G}(z)&=&(z-\\hat{H})^{-1} \\;.$ They obey the Lippmann-Schwinger equation $\\hat{G}(z) = \\hat{G}_0(z) + \\hat{G}_0(z) \\hat{V}\\hat{G}(z)\\;.$ We follow Ref.", "[22] and introduce an auxiliary operator $\\hat{C}= \\hat{R}+ \\hat{P}$ .", "By acting with $\\hat{C}$ on (REF ) from left and right, and inserting ${1}=\\hat{C}+\\hat{Q}$ between $\\hat{V}$ and $\\hat{G}(z)$ , we obtain $(z-\\hat{H}_0) \\hat{Q}\\hat{G}\\hat{C}&=& \\hat{Q}\\hat{V}\\hat{Q}\\hat{G}\\hat{C}+\\hat{Q}\\hat{V}\\hat{C}\\hat{G}\\hat{C} \\;,\\\\(z-\\hat{H}_0) \\hat{C}\\hat{G}\\hat{Q}&=& \\hat{C}\\hat{V}\\hat{Q}\\hat{G}\\hat{Q}+ \\hat{C}\\hat{V}\\hat{C}\\hat{G}\\hat{Q} \\;,\\\\(z-\\hat{H}_0) \\hat{Q}\\hat{G}\\hat{Q}&=& \\hat{Q}+ \\hat{Q}\\hat{V}\\hat{Q}\\hat{G}\\hat{Q}+\\hat{Q}\\hat{V}\\hat{C}\\hat{G}\\hat{Q}\\;, \\\\(z-\\hat{H}_0) \\hat{C}\\hat{G}\\hat{C}&=& \\hat{C}+ \\hat{C}\\hat{V}\\hat{Q}\\hat{G}\\hat{C}+ \\hat{C}\\hat{V}\\hat{C}\\hat{G}\\hat{C}\\;.", "$ Introducing the operator $\\hat{\\Phi }(z)= \\hat{C}[\\hat{C}( z - \\hat{H}_0 - \\hat{V}) \\hat{C}]^{-1} \\hat{C}$ we rewrite (REF )—() in the form $\\hat{Q}\\hat{G}(z) \\hat{C}= [\\hat{Q}\\hat{G}(z) \\hat{Q}] \\hat{V}[\\hat{C}\\hat{\\Phi }(z) \\hat{C}] \\;,\\\\\\hat{C}\\hat{G}(z) \\hat{Q}=[ \\hat{C}\\hat{\\Phi }(z) \\hat{C}] \\hat{V}[ \\hat{Q}\\hat{G}(z) \\hat{Q}] \\;,$ which after substitution into ()—() give $\\hat{Q}\\hat{G}(z) \\hat{Q}= \\hat{Q}[ \\hat{Q}(z - \\hat{H}_0 - \\hat{\\Lambda }(z) ) \\hat{Q}]^{-1} \\hat{Q}\\;, \\\\\\hat{C}\\hat{G}(z) \\hat{C}= \\hat{C}\\hat{\\Phi }(z) \\hat{C}[{1} + \\hat{V}\\hat{Q}\\hat{G}(z)\\hat{C}] \\;,$ where $\\hat{\\Lambda }(z) = \\hat{V}+ \\hat{V}\\hat{C}\\hat{\\Phi }(z) \\hat{C}\\hat{V}\\;.$ The transition operator characterizing the behaviour of the system subject to the perturbation $\\hat{V}$ is given by $\\hat{T}(z) = \\hat{V}+ \\hat{V}\\hat{G}(z) \\hat{V}= \\hat{\\Lambda }(z) + \\hat{\\Lambda }(z) \\hat{Q}\\hat{G}(z) \\hat{Q}\\hat{\\Lambda }(z) \\;,$ which was obtained using (REF )—().", "We assume that the excited nuclear state decays radiatively and include this radiative decay in the current description along with the LANEEC process itself.", "Both the initial and the final states belong therefore to the subspace $R$ and the whole process is described by the projection $\\hat{R}\\hat{T}\\hat{R}= \\hat{R}\\hat{\\Lambda }\\hat{R}+ [\\hat{R}\\hat{\\Lambda }\\hat{Q}][\\hat{Q}\\hat{G}\\hat{Q}][\\hat{Q}\\hat{\\Lambda }\\hat{R}] \\;,$ where the property $\\hat{Q}^2 = \\hat{Q}$ was used.", "The required projections of the $\\hat{\\Lambda }(z)$ and $\\hat{G}(z)$ operators can be evaluated based on the projection $\\hat{C}\\hat{\\Phi }\\hat{C}= \\hat{P}\\hat{\\Phi }\\hat{P}+ \\hat{P}\\hat{\\Phi }\\hat{R}+ \\hat{R}\\hat{\\Phi }\\hat{P}+ \\hat{R}\\hat{\\Phi }\\hat{R}\\;,$ which we obtain
below.", "In the following we assume $\\hat{R}\\hat{V}\\hat{R}= 0$ , i.e.", "$\\hat{V}$ does not couple the subspace $R$ with itself.", "Then by rewriting (REF ) as $(z-\\hat{H}_0) \\hat{C}\\hat{\\Phi }\\hat{C}= \\hat{C}+ \\hat{C}\\hat{V}\\hat{C}\\hat{\\Phi }\\hat{C}$ and acting with the operators $\\hat{P}$ and $\\hat{R}$ from left and right we find $(z-\\hat{H}_0)\\hat{P}\\hat{\\Phi }\\hat{P}&=&\\hat{P}+\\hat{P}\\hat{V}\\hat{P}\\hat{\\Phi }\\hat{P}+ \\hat{P}\\hat{V}\\hat{R}\\hat{\\Phi }\\hat{P}\\;, \\\\(z-\\hat{H}_0)\\hat{R}\\hat{\\Phi }\\hat{P}&=&\\hat{R}\\hat{V}\\hat{P}\\hat{\\Phi }\\hat{P}\\;, \\\\(z-\\hat{H}_0)\\hat{P}\\hat{\\Phi }\\hat{R}&=&\\hat{P}\\hat{V}\\hat{P}\\hat{\\Phi }\\hat{R}+ \\hat{P}\\hat{V}\\hat{R}\\Phi \\hat{R}\\;, \\\\(z-\\hat{H}_0)\\hat{R}\\hat{\\Phi }\\hat{R}&=&\\hat{R}+\\hat{R}\\hat{V}\\hat{P}\\hat{\\Phi }\\hat{R}\\;.", "$ By substituting $\\hat{R}\\hat{\\Phi }\\hat{P}$ from () into (REF ) we find $\\hat{P}[z-\\hat{H}_0-\\hat{V}-\\hat{V}\\hat{R}\\hat{G}\\hat{R}\\hat{V}] [\\hat{P}\\hat{\\Phi }\\hat{P}] = \\hat{P}\\;.$ Substitution of () into () and using (REF ) gives $\\hat{P}\\hat{\\Phi }\\hat{R}(z-\\hat{H}_0)=[\\hat{P}\\hat{\\Phi }\\hat{P}][\\hat{P}\\hat{V}\\hat{R}]\\;.$ Expressing $\\hat{P}\\hat{\\Phi }\\hat{P}$ from (REF ), it is possible to find the other three projections required in (REF ) using (), () and (REF ).", "We obtain in the following their matrix elements.", "By taking matrix element from the operator equation (REF ) and insertion of the projection operators in the representation (REF )—(), we find $(z-\\varepsilon ) \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\Phi }(z) | \\alpha ^{\\prime }\\varepsilon ^{\\prime }}\\rangle }-\\sum _{\\alpha ^{\\prime \\prime }} \\int d\\varepsilon ^{\\prime \\prime } \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{V}| \\alpha ^{\\prime \\prime } \\varepsilon ^{\\prime \\prime }}\\rangle } \\mathinner {\\langle { \\alpha ^{\\prime \\prime } \\varepsilon ^{\\prime \\prime } | \\hat{\\Phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime } }\\rangle } \\nonumber \\\\-\\sum _{\\alpha ^{\\prime \\prime }f} \\iint d\\varepsilon ^{\\prime \\prime } d\\omega \\frac{ \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{V}|f \\omega }\\rangle } \\mathinner {\\langle { f \\omega | \\hat{V}| \\alpha ^{\\prime \\prime } \\varepsilon ^{\\prime \\prime } }\\rangle } \\mathinner {\\langle { \\alpha ^{\\prime \\prime } \\varepsilon ^{\\prime \\prime } | \\hat{\\Phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle }}{z-\\omega }=\\delta _{\\alpha \\alpha ^{\\prime }} \\delta (\\varepsilon - \\varepsilon ^{\\prime })\\;.$ We are interested in the limit $z=\\lim _{\\delta \\rightarrow +0} ( \\omega _p +i \\delta )$ at some energy $\\omega _p$ .", "According to Sokhotski theorem [23], the following operator equation holds $\\lim _{\\delta \\rightarrow +0}\\frac{1}{\\omega _p+i\\delta -\\omega }= \\left( \\frac{1}{\\omega _p-\\omega } \\right)_\\text{p.p.}", "-i\\pi \\delta ( \\omega _p - \\omega )\\;.$ We omit the principal part (denoted by the subscript p.p.)", "adopting in this way the pole approximation.", "After substitution into (REF ), carrying out integration over $\\omega $ explicitly and introducing auxiliary operators $\\hat{U}&=&-i\\pi \\sum _f \\Bigl ( \\hat{V}\\mathinner {|{f \\omega _p}\\rangle } \\mathinner {\\langle { f \\omega _p}|} \\hat{V}\\Bigr ) \\;, \\\\\\hat{W}&=& \\hat{V}+\\hat{U}\\;,$ we obtain $&& \\sum _{\\alpha ^{\\prime \\prime }} \\int d \\varepsilon ^{\\prime \\prime } 
\\mathinner {\\langle { \\alpha \\varepsilon | z-\\hat{H}_0 - \\hat{W}| \\alpha ^{\\prime \\prime } \\varepsilon ^{\\prime \\prime }}\\rangle } \\nonumber \\\\&&\\times \\mathinner {\\langle { \\alpha ^{\\prime \\prime } \\varepsilon ^{\\prime \\prime } | \\hat{\\Phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle }= \\delta _{\\alpha \\alpha ^{\\prime }} \\delta (\\varepsilon -\\varepsilon ^{\\prime })\\;.", "$ We solve this equation with respect to $\\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\Phi }| \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle }$ using the ansatz $\\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\Phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } = \\frac{\\delta _{\\alpha \\alpha ^{\\prime }} \\delta (\\varepsilon -\\varepsilon ^{\\prime }) }{z-\\varepsilon ^{\\prime } }+\\frac{ \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } }{(z-\\varepsilon )(z-\\varepsilon ^{\\prime })}\\;,$ leading after substitution into (REF ) to the integral equation $&&\\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } - \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{W}| \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } \\\\&& = \\sum _{\\alpha ^{\\prime \\prime }} \\int d\\varepsilon ^{\\prime \\prime } \\frac{ \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{W}| \\alpha ^{\\prime \\prime } \\varepsilon ^{\\prime \\prime }}\\rangle } \\mathinner {\\langle { \\alpha ^{\\prime \\prime } \\varepsilon ^{\\prime \\prime } | \\hat{\\phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } }{z-\\varepsilon ^{\\prime \\prime }}\\;.", "\\nonumber $ This in turn is solved with a power series expansion $\\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle }= \\sum _{n=0}^\\infty \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\phi }^{(n)}(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle }\\;,$ where $\\hat{\\phi }^{(n)}(z)$ denotes the term containing the $\\hat{W}$ operator $n$ times.", "We find $\\hat{\\phi }^{(0)}(z)=0$ , $\\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\phi }^{(1)}(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } = \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{W}| \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle }\\;,$ and for $n \\ge 2$ after application of the pole approximation $\\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\phi }^{(n)}(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } = -i \\pi \\sum _{\\alpha _1 \\alpha _2} \\mathinner {\\langle { \\alpha \\varepsilon | \\hat{W}| \\alpha _1 \\omega _p}\\rangle }\\nonumber \\\\\\times \\left[\\tilde{X}^{n-2}\\right]_{\\alpha _1 \\alpha _2} \\mathinner {\\langle { \\alpha _2 \\omega _p | \\hat{W}| \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle }$ with the matrix $\\tilde{X}$ defined as $\\tilde{X}_{\\alpha _1 \\alpha _2} = -i\\pi \\mathinner {\\langle { \\alpha _1 \\omega _p | \\hat{W}| \\alpha _2 \\omega _p}\\rangle }\\;.$ Using the obtained powers $\\hat{\\phi }^{(n)}(z)$ , we derive from (REF ) and (REF ) $\\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\Phi }(z) | \\alpha ^{\\prime } \\varepsilon ^{\\prime }}\\rangle } = -i\\pi \\delta (\\varepsilon - \\omega _p) \\delta (\\varepsilon ^{\\prime } - \\omega _p) \\tilde{S}_{\\alpha \\alpha ^{\\prime }}\\;,$ where 
$\\tilde{S} = \\sum _{n=0}^\\infty \\tilde{X}^n \\;.$ This result is now used to obtain the matrix elements of the other $\\hat{\\Phi }(z)$ operator projections needed in (REF ).", "The matrix element of the operator equation () in the pole approximation leads to $\\mathinner {\\langle {f \\omega | \\hat{\\Phi }(z) | \\alpha \\varepsilon }\\rangle } = (-i\\pi )^2 \\delta (\\omega -\\omega _p) \\delta (\\varepsilon -\\omega _p) \\nonumber \\\\\\times \\sum _{\\alpha ^{\\prime }} \\mathinner {\\langle {f \\omega | \\hat{V}| \\alpha ^{\\prime } \\omega _p}\\rangle } \\tilde{S}_{\\alpha ^{\\prime } \\alpha }\\;.$ Analogously, from (REF ) $\\mathinner {\\langle { \\alpha \\varepsilon | \\hat{\\Phi }(z) | f \\omega }\\rangle } = (-i\\pi )^2 \\delta (\\varepsilon -\\omega _p) \\delta (\\omega -\\omega _p) \\nonumber \\\\\\times \\sum _{\\alpha ^{\\prime }} \\tilde{S}_{\\alpha \\alpha ^{\\prime }}\\mathinner {\\langle { \\alpha ^{\\prime } \\omega _p | \\hat{V}| f \\omega }\\rangle } \\;,$ and from () $\\mathinner {\\langle { f \\omega | \\hat{\\Phi }(z) | f^{\\prime } \\omega ^{\\prime } }\\rangle } = -i \\pi \\delta (\\omega -\\omega _p) \\delta (\\omega ^{\\prime }-\\omega _p) \\Bigl [\\delta _{f f^{\\prime }}+ (-i\\pi )^2 \\sum _{\\alpha ^{\\prime } \\alpha ^{\\prime \\prime }} \\mathinner {\\langle {f \\omega _p | \\hat{V}| \\alpha ^{\\prime } \\omega _p }\\rangle } \\tilde{S}_{\\alpha ^{\\prime } \\alpha ^{\\prime \\prime }}\\mathinner {\\langle { \\alpha ^{\\prime \\prime } \\omega _p | \\hat{V}| f^{\\prime } \\omega _p }\\rangle }\\Bigr ]\\;.\\nonumber $ Using the definition of $\\hat{\\Lambda }(z)$ given by (REF ) with the projection operators rewritten in the representation (REF )—(), and substituting the obtained matrix elements of $\\hat{\\Phi }(z)$ , we find $\\hat{\\Lambda }(z) = \\hat{W}- i\\pi \\sum _{\\alpha _1 \\alpha _2} \\hat{W}\\mathinner {|{\\alpha _1 \\omega _p}\\rangle } \\tilde{S}_{\\alpha _1 \\alpha _2}\\mathinner {\\langle {\\alpha _2 \\omega _p}|} \\hat{W}\\;.$ Matrix elements of the projection $\\hat{Q}\\hat{G}\\hat{Q}$ are then evaluated from (REF ) as $\\sum _{\\beta ^{\\prime \\prime }}\\mathinner {\\langle { \\beta | z- \\hat{H}_0 -\\hat{\\Lambda }(z) | \\beta ^{\\prime \\prime }}\\rangle } \\mathinner {\\langle {\\beta ^{\\prime \\prime } |\\hat{G}| \\beta ^{\\prime }}\\rangle }= \\delta _{\\beta \\beta ^{\\prime }}\\, .$ Since generally a finite number of bound states is involved in the process, the solution of this equation reduces to inversion of a finite-dimensional matrix.", "We adopt in the following the so-called isolated resonance approximation by assuming $\\mathinner {\\langle {\\beta |\\hat{G}| \\beta ^{\\prime } }\\rangle } = g_\\beta \\delta _{\\beta \\beta ^{\\prime }}$ .", "From (REF ) $g_\\beta &=& \\Bigl (z-E_\\beta ^{(0)} - \\mathinner {\\langle {\\beta | \\hat{W}|\\beta }\\rangle } \\\\&+& i\\pi \\sum _{\\alpha _1 \\alpha _2} \\mathinner {\\langle {\\beta | \\hat{W}|\\alpha _1 \\omega _p}\\rangle } \\tilde{S}_{\\alpha _1 \\alpha _2}\\mathinner {\\langle {\\alpha _2 \\omega _p| \\hat{W}|\\beta }\\rangle } \\Bigr )^{-1}\\;,\\nonumber $ where $E_\\beta ^{(0)}$ is the eigenvalue of the unperturbed Hamiltonian $\\hat{H}_0$ corresponding to the state $\\mathinner {|{\\beta }\\rangle }$ .", "As a next step, we carry out the summation in the $\\tilde{S}$ matrix definition (REF ) within the expression $\\sum _{\\alpha _1} \\mathinner {\\langle { \\beta | \\hat{W}|\\alpha _1 \\omega _p}\\rangle } \\tilde{S}_{\\alpha _1 \\alpha _2} =\\sum _{n=0}^\\infty \\left( \\sum _{\\alpha _1}
\\mathinner {\\langle { \\beta | \\hat{W}|\\alpha _1 \\omega _p}\\rangle } \\left[ \\tilde{X}^n\\right]_{\\alpha _1 \\alpha _2} \\right)$ entering (REF ).", "We make use of auxiliary operators $\\hat{Y}_n$ defined recursively as $\\hat{Y}_0 = \\hat{W}\\;,$ $\\hat{Y}_{n+1} = -i \\pi \\sum _\\alpha \\Bigl ( \\hat{Y}_n \\mathinner {|{\\alpha \\omega _p}\\rangle } \\mathinner {\\langle { \\alpha \\omega _p}|} \\hat{W}\\Bigr )\\;.$ From the definitions of $\\hat{Y}_n$ and $\\tilde{X}$ in (REF ) follows the property $\\sum _{\\alpha _1} \\mathinner {\\langle { \\beta | \\hat{Y}_n|\\alpha _1 \\omega _p}\\rangle } \\tilde{X}_{\\alpha _1 \\alpha _2} = \\mathinner {\\langle { \\beta | \\hat{Y}_{n+1}|\\alpha _2 \\omega _p}\\rangle }\\;.$ The sum in (REF ) reduces then to $\\sum _{\\alpha _1} \\mathinner {\\langle { \\beta | \\hat{W}|\\alpha _1 \\omega _p}\\rangle } \\tilde{S}_{\\alpha _1 \\alpha _2} = \\left\\langle { \\beta }\\,\\vert \\,{\\sum _{n=0}^\\infty \\hat{Y}_n }\\,\\vert \\,{\\alpha _2 \\omega _p}\\right\\rangle \\;.$ Using this expression in (REF ), we find $g_\\beta = \\left(z-E_\\beta ^{(0)} - \\sum _{n=0}^\\infty \\mathinner {\\langle {\\beta | \\hat{Y}_n|\\beta }\\rangle }\\right)^{-1}\\;,$ where we used the property of the $\\hat{Y}_n$ operators $\\mathinner {\\langle {\\beta | \\hat{W}|\\beta }\\rangle } - i\\pi \\sum _{\\alpha }\\left\\langle { \\beta }\\,\\vert \\,{\\sum _{n=0}^\\infty \\hat{Y}_n}\\,\\vert \\,{\\alpha \\omega _p}\\right\\rangle \\nonumber \\\\\\times \\mathinner {\\langle {\\alpha \\omega _p| \\hat{W}|\\beta }\\rangle } = \\sum _{n=0}^\\infty \\mathinner {\\langle { \\beta | \\hat{Y}_n| \\beta }\\rangle }$ following from their definition.", "The sum in (REF ) contains energy corrections to the level $\\mathinner {|{\\beta }\\rangle }$ and its decay rates due to different mechanisms.", "Let us consider for example the contribution from the term $\\mathinner {\\langle {\\beta | \\hat{Y}_0 |\\beta }\\rangle }=\\mathinner {\\langle {\\beta | \\hat{W}|\\beta }\\rangle }$ , which according to the definitions (REF ) and () of the $\\hat{U}$ and $\\hat{W}$ operators can be written as $\\mathinner {\\langle {\\beta | \\hat{Y}_0 |\\beta }\\rangle } &=& \\mathinner {\\langle {\\beta | \\hat{V}|\\beta }\\rangle } - i\\pi \\sum _f \\mathinner {\\langle {\\beta | \\hat{V}|f \\omega _p}\\rangle }\\mathinner {\\langle {f \\omega _p| \\hat{V}| \\beta }\\rangle } \\nonumber \\\\&=& \\mathinner {\\langle {\\beta | \\hat{V}|\\beta }\\rangle } - \\frac{i}{2}\\left[ 2\\pi \\sum _f \\left| \\mathinner {\\langle {\\beta | \\hat{V}|f \\omega _p}\\rangle } \\right|^2 \\right]\\;.$ The term $\\mathinner {\\langle {\\beta | \\hat{Y}_0|\\beta }\\rangle }$ thus accounts for the first-order energy shift caused by the perturbation $\\hat{V}$ and the radiative decay rate of the state $\\mathinner {|{\\beta }\\rangle }$ (in the brackets).", "The next term is $\\mathinner {\\langle {\\beta | \\hat{Y}_1|\\beta }\\rangle } = -i\\pi \\sum _\\alpha \\mathinner {\\langle {\\beta | \\hat{W}|\\alpha \\omega _p}\\rangle } \\mathinner {\\langle {\\alpha \\omega _p| \\hat{W}|\\beta }\\rangle } \\nonumber \\\\= -i\\pi \\sum _\\alpha \\mathinner {\\langle {\\beta | \\hat{V}+\\hat{U}|\\alpha \\omega _p}\\rangle } \\mathinner {\\langle {\\alpha \\omega _p| \\hat{V}+ \\hat{U}|\\beta }\\rangle }\\, .", "$ The contribution in (REF ) containing only $\\hat{V}$ and not $\\hat{U}$ reads $-\\frac{i}{2}\\left[ 2\\pi \\sum _\\alpha \\left| \\mathinner {\\langle {\\beta | \\hat{V}|\\alpha \\omega _p}\\rangle } \\right|^2
\\right]$ and thus represents the decay of the state $\\mathinner {|{\\beta }\\rangle }$ to states from the subspace $P$ , i.e., the Auger decay and IC rates.", "The other contributions from $\\mathinner {\\langle {\\beta | \\hat{Y}_n|\\beta }\\rangle }$ at $n=1$ and higher $n$ are higher-order corrections to the mentioned energy shift and decay rates.", "We write generically $g_\\beta = \\left(z-E_\\beta + \\frac{i}{2} \\Gamma _\\beta \\right)^{-1}\\;,$ where $E_\\beta $ is the energy shifted by the interaction in all orders and $\\Gamma _\\beta $ is the total decay rate of the state $\\mathinner {|{\\beta }\\rangle }$ .", "We note, however, that contributions due to the system states lying outside the considered subspaces are not included in $E_\\beta $ and $\\Gamma _\\beta $ .", "We finally evaluate the matrix element of the transition operator projection (REF ).", "Using the results obtained above, we obtain after algebraic simplifications $&&\\mathinner {\\langle {f \\omega | \\hat{T}| f^{\\prime } \\omega ^{\\prime }}\\rangle } = \\mathinner {\\langle {f \\omega | \\sum _{n=0}^\\infty \\hat{Y}_n | f^{\\prime } \\omega ^{\\prime }}\\rangle } \\nonumber \\\\&&+ \\sum _\\beta \\frac{\\mathinner {\\langle {f \\omega | \\sum _{n=0}^\\infty \\hat{Y}_n| \\beta }\\rangle } \\mathinner {\\langle {\\beta | \\sum _{n=0}^\\infty \\hat{Y}_n | f^{\\prime }\\omega ^{\\prime } }\\rangle }}{z-E_\\beta + \\frac{i}{2} \\Gamma _\\beta }\\;.", "$ The first term describes processes that do not go through bound states $\\mathinner {|{\\beta }\\rangle }$ and is thus omitted in the current formalism, giving finally $\\mathinner {\\langle {f \\omega | \\hat{T}| f^{\\prime } \\omega ^{\\prime }}\\rangle } = \\sum _\\beta \\frac{\\mathinner {\\langle {f \\omega | \\sum _{n=0}^\\infty \\hat{Y}_n| \\beta }\\rangle } \\mathinner {\\langle {\\beta | \\sum _{n=0}^\\infty \\hat{Y}_n | f^{\\prime }\\omega ^{\\prime } }\\rangle }}{z-E_\\beta + \\frac{i}{2} \\Gamma _\\beta }\\;.$" ], [ "Amplitude and rate of LANEEC process ", "The very general result obtained above in the pole and isolated resonance approximations describes the LANEEC process with subsequent emission of a photon in all orders.", "We adopt here the lowest order approximation by retaining the minimal number of terms still reflecting the process scenario: $&&\\mathinner {\\langle {f \\omega | \\hat{T}| f^{\\prime } \\omega ^{\\prime }}\\rangle } = \\sum _\\beta \\frac{\\mathinner {\\langle {f \\omega | \\hat{Y}_0| \\beta }\\rangle } \\mathinner {\\langle {\\beta | \\hat{Y}_1 | f^{\\prime }\\omega ^{\\prime } }\\rangle }}{z-E_\\beta + \\frac{i}{2} \\Gamma _\\beta } \\nonumber \\\\&&=-i\\pi \\sum _{\\alpha \\beta }\\frac{\\mathinner {\\langle {f \\omega | \\hat{W}| \\beta }\\rangle } \\mathinner {\\langle {\\beta | \\hat{W}| \\alpha \\omega _p}\\rangle } \\mathinner {\\langle {\\alpha \\omega _p | \\hat{W}| f^{\\prime }\\omega ^{\\prime } }\\rangle }}{z-E_\\beta + \\frac{i}{2} \\Gamma _\\beta }\\;.$ From this expression, the amplitude $A_\\mathrm {LANEEC}$ of the LANEEC process without inclusion of radiative decay of the final state can be written as $\\mathinner {\\langle {\\beta | \\hat{T}| f \\omega }\\rangle } = -i\\pi \\sum _{\\alpha }\\mathinner {\\langle {\\beta | \\hat{W}| \\alpha \\omega _p}\\rangle } \\mathinner {\\langle {\\alpha \\omega _p | \\hat{W}| f\\omega }\\rangle }\\;.$ At this point, we take into account the actual form of the perturbation operator $\\hat{V}$ specific to the considered process.", "Generally $\\hat{V}$ can be represented as the
sum of terms coupling the subspaces $P$ , $Q$ and $R$ pairwise: $\\hat{V}=\\hat{H}_{en}+ \\hat{H}_{nr}+ \\hat{H}_{er}\\;.$ The chosen notations $\\hat{H}_{en}$ , $\\hat{H}_{nr}$ and $\\hat{H}_{er}$ represent the physical meaning of each operator: the Coulomb coupling of the electronic shell to the nucleus, the interaction of the nucleus with the radiation field, and interaction of the electrons with the radiation field, respectively.", "When substituting $\\hat{V}$ into Eq.", "(REF ) via Eqs.", "(REF )—(), however, only those terms from the general form (REF ) are kept, which reflect the LANEEC process scenario.", "We introduce also the magnetic interaction operator, describing interaction between the nucleus and the electrons via an intermediate virtual photon $\\hat{H}_\\text{magn}= -i \\pi \\hat{H}_{nr}\\sum _f \\mathinner {|{f \\omega _p}\\rangle } \\mathinner {\\langle {f \\omega _p}|} \\hat{H}_{er}\\;.$ We obtain then the LANEEC amplitude in the form $\\mathinner {\\langle {\\beta | \\hat{T}| f \\omega }\\rangle } = -i\\pi \\sum _{\\alpha }\\mathinner {\\langle {\\beta | \\hat{H}_\\mathrm {int}| \\alpha \\omega _p}\\rangle } \\mathinner {\\langle {\\alpha \\omega _p| \\hat{H}_{er}| f\\omega }\\rangle }\\;,$ where the operator $\\hat{H}_\\mathrm {int}= \\hat{H}_{en}+ \\hat{H}_\\text{magn}$ represents the full interaction between the electrons and the nucleus.", "We note that the same operator $\\hat{H}_\\mathrm {int}$ describes the hyperfine structure of electronic levels.", "We switch at this point to notations describing the system of interest more concretely and write the amplitude of the LANEEC process given by Eq.", "(REF ) in the form $A_\\mathrm {LANEEC} &=& -i\\pi \\sum _{\\kappa _c m_c}\\mathinner {\\langle { n_f \\kappa _f m_f ; I_f M_f | \\hat{H}_\\mathrm {int}| \\kappa _c m_c \\varepsilon _c;I_i M_i}\\rangle } \\nonumber \\\\&\\times & \\mathinner {\\langle {\\kappa _c m_c \\varepsilon _c | \\hat{H}_{er}| n_i \\kappa _i m_i}\\rangle }\\;.$ Here $I, M$ describe the nuclear total spin and its projection quantum numbers; $\\kappa , m$ are the Dirac angular momentum quantum number (incorporating both the total angular momentum $j$ and the orbital angular momentum $l$ ) and the total angular momentum projection $m$ for the involved electron; $n$ denotes the principal quantum number for electronic bound states and $\\varepsilon $ is the energy for continuum electronic states.", "The indices $i$ ($f$ ) correspond to the initial (final) nuclear or bound electronic states, whereas the index $c$ denotes quantities related to continuum electronic states.", "We assume here the incident photons to be electric dipole photons polarized in $z$ direction.", "The operator $\\hat{H}_{er}$ in Eqs.", "(REF ) reads then $\\hat{H}_{er}= z E$ with the electric field $E$ , which we treat classically.", "The interaction operator $\\hat{H}_\\mathrm {int}$ in (REF ) is given by the scalar product $\\hat{H}_\\mathrm {int}= \\sum _{kq} (-1)^{q} \\hat{M}_{k,-q} \\hat{T}_{kq}\\;,$ where $\\hat{M}_{kq}$ and $ \\hat{T}_{kq}$ are the spherical components of the nuclear multipole moment of rank $k$ and the respective electronic coupling operator (see e.g. 
[24]).", "The LANEEC amplitude is then written as $&A_\\mathrm {LANEEC} & = -i\\pi E \\sum _{kq} (-1)^{q} \\mathinner {\\langle { I_f M_f | \\hat{M}_{k,-q} | I_i M_i}\\rangle }\\\\&\\times &\\sum _{\\kappa _c m_c}\\mathinner {\\langle { n_f \\kappa _f m_f | \\hat{T}_{kq}| \\kappa _c m_c \\varepsilon _c }\\rangle }\\mathinner {\\langle {\\kappa _c m_c \\varepsilon _c | z | n_i \\kappa _i m_i}\\rangle }\\;.\\nonumber $ Here we assume that the transition is excited by a broad band laser radiation with distribution $f$ such that its peak is tuned to the transition resonance.", "The time-averaged rate of the LANEEC excitation is then obtained based on the amplitude as $R_\\mathrm {LANEEC} = \\frac{2\\pi f_\\mathrm {max} (\\tau _p \\nu )}{2I_i+1} \\sum _{M_iM_fm_im_f} \\left| A_\\mathrm {LANEEC} \\right|^2 \\;,$ where $f_\\mathrm {max}$ is the maximal value of $f$ , $\\tau _p$ and $\\nu $ are the pulse duration and repetition rate, respectively.", "We sum here over the magnetic quantum numbers of the initial and final electronic orbitals, since the former is assumed to be completely filled and the latter completely vacant prior to the LANEEC event.", "We also sum over the final and average over the initial magnetic substates.", "The obtained expression can be applied to LANEEC with an additional hole (see the left and the right graphs in Fig.", "REF ) with the following corrections.", "First, since the vacancies created by the first photon close very fast due to strong Auger decay channel, some steady fraction $\\alpha _h < 1$ of atoms in the sample possessing holes is needed for LANEEC.", "Second, the final electronic orbital is not completely vacant but possesses only one hole closed in the LANEEC event.", "The time-averaged rate for the compound LANEEC process with an additional hole can be thus obtained based on $R_\\mathrm {LANEEC}$ from Eq.", "(REF ) as $R^\\mathrm {+hole}_\\mathrm {LANEEC} = \\frac{\\alpha _h}{2j_f + 1} R_\\mathrm {LANEEC}\\;.$" ], [ " Two-photon LANEEC ", "In the following we consider the two-photon LANEEC scenario introduced above (see the middle and the right graphs in Fig.", "REF ).", "The amplitude of this process can be obtained based on its Feynman-Goldstone diagram shown in Fig.", "REF , where the single and double solid lines represent electronic and nuclear states, respectively, and the wavy lines are the photon lines.", "Figure: Feynman-Goldstone diagram for the two-photon LANEEC process.", "The wavy lines show the external photons with frequencies ω 0 \\omega _0 and ω\\omega and a virtual photon of the nuclear interaction with the atomic shell.", "The double line corresponds to excitation of the nucleus, while the single straight lines correspond to the involved vacancy and electron.", "The states are denoted by quantum numbers as introduced in the text.", "Note the notation of the hole state via the quantum numbers of the missing electron.A photon with the frequency $\\omega _0$ creates a vacancy in a deep inner shell.", "The absorption of the photon with the frequency $\\omega $ with subsequent electron-nucleus interaction is the LANEEC process described by the amplitude in Eq.", "(REF ).", "The amplitudes of the single photon excitation and the LANEEC process are coupled to a resulting amplitude via the equation $A^\\mathrm {2phot}_\\mathrm {LANEEC} = \\sum _{m_im_f} \\frac{ A_\\mathrm {LANEEC} \\mathinner {\\langle { n_i\\kappa _i m_i | Ez^{\\prime } | n_f \\kappa _f m_f}\\rangle } }{ \\omega _0 + E_f - E_i + \\frac{i}{2}\\Gamma _h } \\;,$ where we 
introduce the energies of the corresponding electronic states $E_f$ and $E_i$ , and assume for the additional photons linear polarization along the $Oz^{\\prime }$ axis, which is generally different from the polarization axis $Oz$ in Eq.", "(REF ).", "The energy width in the amplitude denominator reduces to the hole width $\\Gamma _h$ only, since for deep inner electronic shells $\\Gamma _h$ , dominated by the strong Auger decay channel, significantly exceeds the widths of the other (electronic and nuclear) states involved in the process.", "Note that all electronic and vacancy states are intermediate in this approach, and thus imply summation over their magnetic quantum numbers in the amplitude.", "The time-averaged rate is obtained based on the amplitude as $R^\\mathrm {2phot}_\\mathrm {LANEEC} &=& \\int d \\omega f_0(E_n -\\omega ) f(\\omega ) \\\\&\\times & \\frac{2\\pi (\\tau _p \\nu )}{2I_i+1} \\sum _{M_iM_f} \\left| A^\\mathrm {2phot}_\\mathrm {LANEEC} \\right|^2 \\;.$ Here the integration is carried out over the distributions of the laser beams $f_0$ and $f$ , assuming that the photon frequencies $\\omega _0$ and $\\omega $ add up to the nuclear transition energy $E_n$ and that $\\omega _0$ is tuned to the electronic transition between the bound states.", "As before, we sum over the final and average over the initial nuclear magnetic substates, whereas the sum over the electronic magnetic substates enters the amplitude (REF ) directly." ], [ " Numerical results ", "In the following we show numerical examples for each case described above.", "As an x-ray laser system we take the x-ray free-electron laser SACLA in Harima, Japan, and assume the following radiation parameters [6].", "Table: x-ray pulse parameters at SACLA assumed based on Ref. [6].", "Required electronic matrix elements are evaluated based on the wave functions obtained using the GRASP2K [25] and RATIP [26] packages for the bound and continuum states, respectively.", "The bound transition energies are obtained with GRASP2K, if not otherwise stated.", "Since no high precision is required, we restricted our GRASP calculations to the Multiconfiguration Dirac-Hartree-Fock model without additional electronic correlations.", "The nuclear parameters were taken from the database [27]."
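Before turning to the individual schemes, note that the beam parameters enter the estimates below mainly through simple derived quantities such as the time-averaged photon flux density on target. A minimal helper of the kind we have in mind is sketched below; the numerical values are placeholders for illustration only, not the SACLA parameters of the table above.

```python
import math

def photon_flux_density(photons_per_pulse: float,
                        rep_rate_hz: float,
                        spot_diameter_cm: float) -> float:
    """Time-averaged photon flux density in photons / (cm^2 s)."""
    focal_area = math.pi * (spot_diameter_cm / 2.0) ** 2
    return photons_per_pulse * rep_rate_hz / focal_area

# Placeholder numbers for a generic hard-x-ray FEL beam, illustration only:
j_avg = photon_flux_density(photons_per_pulse=1e11,
                            rep_rate_hz=30.0,
                            spot_diameter_cm=1e-4)
print(f"time-averaged flux density ~ {j_avg:.2e} photons/cm^2/s")
```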
], [ " One-photon LANEEC ", "As an example of “pure” one photon LANEEC process described in Section , we consider LANEEC excitation in highly charged ions $[201][80]{Hg}^{44+}$ and $[205][82]{Pb}^{52+}$ with nuclear transitions of the $M1$ and $E2$ type, respectively.", "The energies $E_n$ of the transitions are 1.565 and 2.329 keV, respectively.", "The strength of the coupling of the nuclear and electronic transitions is characterized by the internal conversion coefficient (ICC).", "We choose therefore optimal charge states and electronic orbitals based on ICC obtained for all electronic shells from [28].", "The information provided in this database for neutral atoms suffices for observation of the relative ICC behaviour in dependence of the involved electronic orbital also for higher charge states.", "In Table REF we show the ratio $\\beta _\\mathrm {LANEEC} = R_\\mathrm {LANEEC} / R_\\mathrm {rad}$ of the LANEEC rate to the rate of direct radiative excitation of the nucleus, calculated based on Eq.", "(REF ) for selected initial (Init.)", "and final (Fin.)", "orbitals for the LANEEC process.", "Table: The ratio β LANEEC =R LANEEC /R rad \\beta _\\mathrm {LANEEC} = R_\\mathrm {LANEEC} / R_\\mathrm {rad} for the one-photon LANEEC rate to the rate of direct radiative nuclear excitation.", "The electronic configurations are shown with respect to the argon core configuration 1s 2 2s 2 2p 6 3s 2 3p 6 1s^22s^22p^63s^23p^6.", "See the text for further explanations.In the case with $[205][82]{Pb}^{52+}$ we obtained a few orders larger enhancement $\\beta _\\mathrm {LANEEC}$ than for $[201][80]{Hg}^{44+}$ .", "This is explained by significantly lower direct radiative excitation rate of the former transition due to its $E2$ type and low nuclear transition energy.", "We observe that only very moderate enhancement due to involvement of the electronic shell is achieved in the “pure” one-photon LANEEC process.", "As already mentioned, further nuisance is that this LANEEC version requires photons of higher energies than the nuclear transition energy.", "We discuss in the following improved LANEEC schemes which may be of interest for future applications, since they allow for both more pronounced advantage with respect to the direct excitation, and extension of addressed nuclear transitions to higher energies." 
], [ "LANEEC with additional hole", "As a potentially useful application, we consider here excitation of the 29.2 keV nuclear state in $[229][90]{Th}$ .", "The $[229][90]{Th}$ isomer is of interest due to its very low lying isomeric state at approx.", "0.01 keV [29], [30], [31], [32], which can be used e.g.", "for implementation of the first nuclear clock at an unprecedented accuracy [33], [34].", "A possible (indirect) isomer excitation mechanism demonstrated in Refs.", "[32], [35] employs excitation of the 29.2 keV nuclear state with high-brilliance synchrotron radiation.", "The latter decays then predominantly to the isomeric state.", "We study here the possibility to excite the 29.2 keV level using the LANEEC process in neutral $[229][90]{Th}$ atoms.", "Since this energy is not achievable at the SACLA facility, we consider a modified version of LANEEC, in which the final electronic state is not in an outer shell, but in a vacancy created in a deep-lying closed shell by another x-ray photon (see the left and the right graphs in Fig.", "REF ).", "In this way, the considered process involves two photons, but differs from the two-photon LANEEC excitation described in Section .", "Here the first incoming photon only expels an electron from the deep-lying shell, which leaves the atom and does not further participate in the process.", "The sum of the two photon energies in this case does not need to be equal to $E_n$ .", "As a concrete implementation, we consider the scheme from Ref.", "[19] with two SACLA beams at energies 20.8 and 8.6 keV irradiating a $[229][90]{Th}$ sample.", "The first beam creates vacancies in the $2s$ shells in the sample atoms, whereas the second one induces the one-photon LANEEC process as considered in Section .", "At the latter stage a $6p$ electron is promoted to a continuum state which decays then into the $2s$ vacation with simultaneous excitation of the nucleus.", "The energy of the second photon is chosen such that the needed energy of 29.2 keV is transferred to the nucleus.", "Vacancies in the $2s$ shell close very fast due to strong electronic Auger decay resulting in the width of the hole state of $\\Gamma _h \\approx 14.3 \\;\\mathrm {eV}$ [36] corresponding to the lifetime $\\tau _h \\approx 50 \\;\\mathrm {as}$ .", "Using the photoionization cross section $\\sigma _h \\approx 5.0 \\;\\mathrm {kb}$ calculated theoretically in Ref.", "[37], we find that the steady time-averaged fraction $\\alpha _h \\approx 7.5 \\cdot 10^{-5}$ of atoms have a vacancy in the $2s$ shell.", "The excitation rate per atom in this compound process can be obtained as $R^\\mathrm {+hole}_\\mathrm {LANEEC} = \\alpha _h R_\\mathrm {LANEEC}$ , where the latter rate is given by Eq.", "(REF ).", "For the reduced transition probabilities $B_\\downarrow $ for the $M1+E2$ transition from the 29.2 keV level to the ground state we use the values $B_\\downarrow (M1) = 0.003 \\;\\mathrm {W.u.", "}$ and $B_\\downarrow (E2) = 27.11 \\;\\mathrm {W.u.", "}$ based on nuclear structure calculations performed in Refs.", "[38], [39].", "The calculated nuclear excitation rate is $R^\\mathrm {+hole}_\\mathrm {LANEEC} = 3 \\cdot 10^{-16}\\;\\mathrm {s}^{-1}$ .", "For a $[229][90]{Th}$ sample of thickness 1 $\\mu $ m the number atoms exposed to the laser radiation is $2.4 \\cdot 10^{10}$ leading in total to approx.", "4 excitation events per week.", "Although this number is very small and the scheme is not practically applicable at the moment, we would like to point out large enhancement with respect to 
direct two-photon excitation of the nucleus.", "Our calculation shows that direct excitation using two 14.6 keV SACLA beams with parameters listed in Table REF has the rate $R^\\mathrm {2phot}_\\mathrm {rad} = 1 \\cdot 10^{-26} \\;\\mathrm {s}^{-1} $ per atom.", "This yields the enhancement factor $\\beta ^\\mathrm {+hole}_\\mathrm {LANEEC} = 2\\cdot 10^{10}$ .", "In the considered approach, the energy of the photon ionizing a deeply lying shell is not strictly fixed and has only to exceed the ionization threshold.", "This property allows excitation schemes with only one laser beam for both creating a vacancy and inducing the LANEEC process.", "Another advantage is that the rate becomes 4 times larger since the photon exchange term contributes to the amplitude in the same way as the direct term.", "As an example, we consider here excitation of the same 29.2 keV nuclear level with creation of a hole in the $2p_{3/2}$ shell and the $3s$ orbital as the starting point for the electronic path in LANEEC, which ends in the created hole.", "Both steps are induced by a single SACLA beam at energy 16.5 keV.", "The lifetime of the hole is 80 as based on the width provided in Ref.", "[36], and the photoionization cross section is 25 kb [37], leading to a steady hole fraction of $7.6 \\cdot 10^{-4}$ .", "The calculated excitation rate per atom is in this case $R^\\mathrm {+hole}_\\mathrm {LANEEC} = 2 \\cdot 10^{-14}\\;\\mathrm {s}^{-1}$ and corresponds for a sample of 1 $\\mu $ m thickness to approx.", "38 excitation events per day, making the scheme challenging today but interesting for future applications.", "The enhancement with respect to direct two-photon excitation is $\\beta ^\\mathrm {+hole}_\\mathrm {LANEEC} = 2\\cdot 10^{12}$ .", "The rate $R^\\mathrm {+hole}_\\mathrm {LANEEC}$ and the enhancement $\\beta ^\\mathrm {+hole}_\\mathrm {LANEEC}$ are considerably larger than in the previous example mainly due to the presence of a strong $E2$ channel."
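The conversion from the per-atom rates quoted above to expected event counts is plain bookkeeping (rate per atom times illuminated atoms times exposure time); a quick numerical check with the numbers from the text:

```python
def expected_events(rate_per_atom: float, n_atoms: float, seconds: float) -> float:
    return rate_per_atom * n_atoms * seconds

day = 86400.0
week = 7 * day
# 2s-hole scheme: R = 3e-16 1/s per atom, 2.4e10 illuminated atoms.
print(expected_events(3e-16, 2.4e10, week))  # ~4.4 events per week
# Single-beam 2p_{3/2} scheme: R = 2e-14 1/s per atom.
print(expected_events(2e-14, 2.4e10, day))   # ~41 events per day, consistent
# with the approx. 38 quoted in the text from the unrounded rate.
```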
], [ " Two-photon LANEEC ", "In this Section, we obtain the rate of the LANEEC process with two photons as described in Section .", "As an example, we consider the 14.4 keV Mössbauer transition in the $[57]{Fe}$ nucleus.", "Direct one-photon XFEL excitation of this transition has been recently achieved at the SACLA facility [5].", "Here we consider a scenario involving the electronic shell and two photons of energies 7.1 and 7.3 keV at SACLA.", "The 7.1 keV photon excites a $1s$ electron to the vacant $4p$ orbital.", "The 7.3 keV photon promotes this electron to a continuum state, which decays back into the $1s$ vacancy with transferring the energy $7.1+7.3=14.4$  keV to the nucleus.", "The excitation rate is obtained using Eqs.", "(REF )—(REF ) assuming plane polarization for both photons.", "As before, we consider $z$ -direction for the electric vector in the beam inducing the LANEEC part of the process, whereas for the beam exciting the electron from the $1s$ to the $4p$ shell $x$ -polarization is assumed, i.e.", "$z^{\\prime }=x$ in Eqs.", "(REF ).", "The width of the $1s$ vacancy in Eq.", "(REF ) is determined predominantly by Auger decay and has the value $\\Gamma _h = 1.2$ eV [40].", "Using the parameters of the XFEL beams presented in Table REF , we obtain the rate per atom $R^\\mathrm {2phot}_\\mathrm {LANEEC} = 9 \\cdot 10^{-21}\\;\\mathrm {s}^{-1}$ .", "The obtained direct two-photon excitation rate with plane polarization in the same direction in both beams is in this case $R^\\mathrm {2phot}_\\mathrm {rad} = 7 \\cdot 10^{-25} \\;\\mathrm {s}^{-1} $ per atom leading to the enhancement factor $\\beta ^\\mathrm {+hole}_\\mathrm {LANEEC} = 1\\cdot 10^{4}$ .", "We observe an interesting cancellation effect if both beams are polarized in $z$ -direction, i.e.", "for $z^{\\prime }=z$ .", "In this case the excitation rate turns out to be identically zero.", "This peculiarity can be explained by applying the Wigner-Eckart theorem to the electronic matrix elements constituting the amplitude in Eq.", "(REF ).", "The summation over the intermediate magnetic quantum numbers reduces then to a summation with corresponding Clebsch-Gordan coefficients.", "For the considered electronic states and photon polarizations this sum turns out to be zero.", "This effect could be used in this case for switching the nuclear excitation on and off by changing the photon polarization.", "Note however, that the same effect takes place for a purely electronic process, in which the continuum electronic state decays into the $1s$ vacancy with emission of a photon.", "Due to this reason further checks are necessary for unambiguous detection of the nuclear excitation.", "This is however an ubiquitous aspect in all NEEC-related considered processes." 
], [ " Conclusions ", "In this work we develop a theoretical description of the LANEEC process based on the Feshbach projection operator formalism.", "Numerical examples for experimental scenarios at the SACLA facility are provided.", "The decay channels appear to all orders in a natural and unified manner in the developed formalism.", "The “pure” LANEEC version involving one photon requires usage of x-ray beam energies higher than the nuclear transition energy.", "The achieved enhancement with respect to direct excitation is at the same time very moderate (see Table REF ).", "Due to these reasons we consider two improved LANEEC versions with an additional x-ray photon, referred to as “LANEEC with additional hole” and “two-photon LANEEC” (see Fig.", "REF and explanations in the text).", "Based on these schemes we describe experimental scenarios for excitation of the 29.2 keV nuclear state in $[229]{Th}$ and the 14.4 keV Mössbauer transition in $[57]{Fe}$ which are of interest for further applications.", "Our calculations show low excitation rates but strong enhancement with respect to the direct two photon excitation.", "These results are insightful and the developed formalism will be useful also for other excitation processes, despite the very challenging practical implementation of LANEEC.", "First experimental efforts towards observation of the LANEEC process in $[57]{Fe}$ were undertaken at LCLS [41].", "We are grateful to A.  Pálffy for extensive discussions of the results and careful revision of the manuscript.", "We thank D.  Reis, A.  Kaldun and J. Haber for very useful discussions of the experimental aspects of LANEEC." ] ]
2210.07708
[ [ "Gordian Distance and Complete Alexander Neighbors" ], [ "Abstract We call a knot $K$ a complete Alexander neighbor if every possible Alexander polynomial is realized by a knot one crossing change away from $K$.", "It is unknown whether there exists a complete Alexander neighbor with nontrivial Alexander polynomial.", "We eliminate infinite families of knots with nontrivial Alexander polynomial from having this property and discuss possible strategies for unresolved cases.", "Additionally, we use a condition on determinants of knots one crossing change away from unknotting number one knots to improve KnotInfo's unknotting number data on 11 and 12 crossing knots.", "Lickorish introduced an obstruction to unknotting number one which proves the same result.", "However, we show that Lickorish's obstruction does not subsume the obstruction coming from the condition on determinants." ], [ "Introduction", "Unknotting number and Alexander polynomials are classical knot invariants, so it is natural to consider the interaction between crossing changes of a knot $K$ and the Alexander polynomial $\\triangle _K(t)$ .", "Gordian distance, the minimal number of crossing changes needed to change one knot to another, is a generalization of unknotting number.", "In 1978, Kondo proved that there exists a knot with unknotting number one realizing any given Alexander polynomial [5].", "A natural next question to ask is whether there exists a nontrivial Alexander polynomial such that, given any second Alexander polynomial, there exist a pair of knots with Gordian distance one realizing the two polynomials.", "In 2012, Kawauchi proved that this is the case for Alexander polynomials of slice type (Corollary 5.2 in [4]).", "Jong's problem asks whether there exists a pair of Alexander polynomials such that any two knots realizing the polynomials have Gordian distance at least 2.", "In 2012, Kawauchi found a family of pairs of polynomials for which this is the case [4].", "One example is the Alexander polynomials of the trefoil and figure eight knot.", "This brings us to a knot property that we will study in this paper: Definition 1.1 A knot $K$ is a complete Alexander neighbor if for any Alexander polynomial $p(t)$ , there exists a knot $K^{\\prime }$ such that $K$ and $K^{\\prime }$ are one crossing change apart and $\\triangle _{K^{\\prime }}(t)=p(t)$ .", "Since the Alexander polynomial is multiplicative under connected sum of knots, Kondo's result stating that there exists a knot with unknotting number one realizing any given Alexander polynomial implies that any knot with trivial Alexander polynomial is a complete Alexander neighbor [5].", "However, it is unknown whether any knot with nontrivial Alexander polynomial has this property.", "Question 1.2 (Raised on pg.", "1017 of [12]) Does there exist a complete Alexander neighbor $K$ with nontrivial Alexander polynomial?", "While Kawauchi found polynomials which are realized by knots with particular Gordian distances, this question asks for a knot whose Gordian neighbors realize all Alexander polynomials.", "We can obstruct knots from this property in a variety of ways.", "One is by considering the algebraic unknotting number, introduced by Murakami in [9], which is the minimal number of crossing changes necessary to transform a knot into a knot with Alexander polynomial one.", "Of course, a complete Alexander neighbor must have algebraic unknotting number one.", "The database Knotorious eliminates 1,526 knots of the 2,977 prime knots with 12 crossings or 
fewer from being complete Alexander neighbors using algebraic unknotting number [2].", "The following theorem gives an improvement on work by Nakanishi and Okada [12].", "See Section 2 for a definition of Nakanishi index.", "Theorem 1.3 Let $K$ be a knot with Nakanishi index 1, where $\det (K)\ge 3$ and where $\det (K)$ is composite or $\det (K) \equiv 1 \mod {4}$ .", "Then $K$ is not a complete Alexander neighbor.", "In addition, we can characterize the knots eliminated by Kawauchi in Proposition REF [4] (see Theorem REF in Section 2).", "This result together with Theorem REF yields the following corollary.", "Corollary 1.4 Let $K$ be a knot whose Alexander polynomial $\triangle _{K}(t)$ has breadth 2.", "Then $K$ is not a complete Alexander neighbor.", "Although we can eliminate infinitely many knots from Question REF using these methods, there are small knots which are not eliminated, including $6_2$ , $7_6$ , $8_4$ , $8_6$ , $8_7$ , and $8_{14}$ .", "We do eliminate 2,528 of the 2,977 prime knots with crossing number 12 or less.", "In their work on the relationship between crossing changes and Alexander polynomials, Nakanishi and Okada proved in [12] a condition on the determinants $|\triangle _K(-1)|$ and $|\triangle _{K^{\prime }}(-1)|$ of knots $K$ and $K^{\prime }$ one crossing change apart where $K$ has unknotting number one.", "This gives a new obstruction to unknotting number one, which improves the KnotInfo [8] data for five knots in the following theorem.", "Theorem 1.5 The knots $11n_{162}, 12n_{805}, 12n_{814},12n_{844},$ and $12n_{856}$ have unknotting number greater than one.", "We also give a second proof of this theorem using an obstruction by Lickorish in [7].", "The two obstructions are different; in particular, there are 17 examples of knots with 11 to 13 crossings where Lickorish's obstruction does not apply, but Nakanishi and Okada's condition on determinants obstructs unknotting number one.", "In addition, it is always possible to expand the search to apply the obstruction from the condition on determinants, whereas we can determine from a single diagram of a knot whether or not Lickorish's obstruction applies." ], [ "Acknowledgements", "We are extremely grateful to Mark Brittenham and Alex Zupan for their mentorship, advice, and support.", "We are grateful to Charles Livingston for his interest in this project and for helpful conversations.", "This work was completed while the author was a guest at the Max Planck Institute for Mathematics in Bonn and we are extremely grateful to MPIM for its support and hospitality.", "The author was partially supported by NSF grant DMS-2005518."
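To make the determinant criterion of Theorem 1.3 concrete, the following is a minimal sketch (our illustration, not the authors' code) of the quadratic-residue condition behind it, stated as Lemma 2.3 in Section 2: for odd $n\ge 3$ , some $d$ has both $d$ and $-d$ as quadratic nonresidues mod $n$ exactly when $n$ is composite or $n\equiv 1 \mod {4}$ .

```python
# Illustrative check of the Lemma 2.3 equivalence on small odd moduli.
# Here "quadratic residue mod n" means a nonzero square mod n, as in the paper.

def quadratic_residues(n):
    """The nonzero quadratic residues mod n."""
    return {(x * x) % n for x in range(1, n)} - {0}

def has_symmetric_nonresidue(n):
    """True iff some d has both d and -d quadratic nonresidues mod n."""
    residues = quadratic_residues(n)
    return any(d not in residues and (n - d) not in residues
               for d in range(1, n))

# Sanity check of the stated equivalence for odd n from 3 to 59.
for n in range(3, 60, 2):
    composite = any(n % k == 0 for k in range(2, n))
    assert has_symmetric_nonresidue(n) == (composite or n % 4 == 1)
```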
], [ "The Relationship Between the Alexander Polynomial and Gordian Distance", "Now we will find families of knots which are not complete Alexander neighbors.", "First we need to introduce some definitions.", "Definition 2.1 The algebraic unknotting number $u_a(K)$ of a knot $K$ is the minimal number of crossing changes necessary to change $K$ to a knot with trivial Alexander polynomial.", "Notice that any knot with algebraic unknotting number greater than one is not a complete Alexander neighbor since the trivial Alexander polynomial is not realized by any Gordian neighbor.", "Definition 2.2 The Nakanishi index $n(K)$ of a knot $K$ is the minimal $n$ such that the Alexander module of $K$ (the first homology of the infinite cyclic cover of the exterior of $K$ , viewed as a $\mathbb {Z}[t,t^{-1}]$ module) is presented by an $n\times n$ matrix [10].", "Every crossing change can be described with a $\pm 1$ surgery along a loop around the crossing with linking number 0 with the knot, so for any knot there exists a collection of these surgeries describing crossing changes resulting in the unknot.", "Levine [6] and Rolfsen [13] introduced a surgery view of the Alexander matrix, which Nakanishi and Okada also describe in Section 2 of [12], so it is always possible to build an $n\times n$ Alexander matrix where $n$ is the unknotting number.", "However, in some cases there exists a smaller Alexander matrix.", "Also notice that algebraic unknotting number is a lower bound for unknotting number since the unknot has trivial Alexander polynomial.", "Furthermore, Nakanishi index is a lower bound for algebraic unknotting number (see Section 4.1 of [1]), so $n(K)\le u_a(K)\le u(K)$ for any knot $K$ where $u(K)$ is the unknotting number of $K$ .", "Therefore we can restrict our investigation of Question REF to knots with Nakanishi index one.", "Now we will eliminate families of knots with Nakanishi index one.", "First, we need to prove a lemma.", "Lemma 2.3 Let $n\ge 3$ be an odd integer.", "Then $n$ is composite or $n\equiv 1 \mod {4}$ if and only if there exists some integer $d$ such that both $d$ and $-d$ are quadratic nonresidues mod $n$ .", "Let $n\ge 3$ be an odd integer and let $f:\mathbb {Z}_n\rightarrow \mathbb {Z}_n$ such that $f(x)=x^2$ for all $x \in \mathbb {Z}_n$ , so the image of $f$ is the set of quadratic residues mod $n$ together with 0.", "First notice that for every nonzero $y$ such that there exists $x \in \mathbb {Z}_n$ where $f(x)=y$ (equivalently, every quadratic residue mod $n$ ), we have $f(n-x)=(n-x)^2=n^2-2nx+x^2\equiv x^2=f(x)=y \mod {n}.$ Since $n$ is odd, $n-x\not\equiv x \mod {n}$ , so at least two distinct elements of $\mathbb {Z}_n$ map to each quadratic residue.", "Therefore, at most half the nonzero elements of $\mathbb {Z}_n$ are quadratic residues.", "Consider the case where $n$ is prime.", "Then by the first supplement to the law of quadratic reciprocity, if $n\equiv 3 \mod {4}$ , then the negative of a residue modulo $n$ is a nonresidue and the negative of a nonresidue is a residue, as desired.", "Also by the first supplement, if $n\equiv 1 \mod {4}$ , then the negative of a residue modulo $n$ is a residue and the negative of a nonresidue is a nonresidue.", "Since at most half the nonzero elements of $\mathbb {Z}_n$ are quadratic residues and $n\ge 3$ , there exists a nonzero quadratic nonresidue $d$ , so $d$ and $-d$ are quadratic nonresidues mod $n$ as desired.", "Otherwise, $n$ is composite, so $n=ab$ for some positive odd integers $a$ and $b$ 
both greater than one.", "Assume without loss of generality that $a\le b$ .", "First we will show that one of the following must be true: (a) $1\le b-a<b+a \le \frac{ab}{2}$ , (b) $a=b$ , or (c) $a=3$ and $b=5$ .", "We will assume (a) is false and show that (b) or (c) must be true.", "Suppose that $b-a<1$ or $b+a>\frac{ab}{2}$ .", "In the case where $b-a<1$ we have $b-1<a\le b$ , so (b) holds.", "In the case where $b+a>\frac{ab}{2}$ , we have that $a<\frac{b}{\frac{b}{2}-1}$ .", "Then for $b\ge 6$ we have $1<a<\frac{b}{\frac{b}{2}-1}\le 3$ which is impossible since $a$ is an odd integer, so $b<6$ .", "Since $b$ is odd and greater than one, we have that $b=3$ or $b=5$ .", "Since $1<a\le b$ and $a$ is odd, in the case that $b=3$ , (b) holds and in the case that $b=5$ either $a=3$ so (c) holds or $a=5$ so (b) holds.", "Consider the case (a) where $1\le b-a \le \frac{ab}{2}$ and $1\le a+b \le \frac{ab}{2}$ .", "Notice that $f(b-a)$ , $f(a+b)$ , $f(n-(b-a))$ , and $f(n-(a+b))$ are all congruent to $a^2+b^2$ mod $n=ab$ .", "Since $1\le b-a<b+a\le \frac{ab}{2}$ , we have that $1\le b-a<b+a\le \frac{ab}{2}=n-\frac{ab}{2}\le n-(a+b)<n-(b-a)\le n-1,$ so at least three distinct elements of $\mathbb {Z}_n$ map to the same quadratic residue $a^2+b^2$ in $\mathbb {Z}_n$ .", "Therefore, strictly less than half of the nonzero elements of $\mathbb {Z}_n$ are quadratic residues mod $n$ .", "Thus, since the $\frac{n-1}{2}$ pairs $\lbrace d,n-d\rbrace $ partition the nonzero elements of $\mathbb {Z}_n$ and fewer than half of those elements are residues, some pair contains no quadratic residue, so $d$ and $-d$ are quadratic nonresidues mod $n$ for some $d$ as desired.", "Consider the case (b) where $a=b$ .", "Then we have $a\not\equiv 0 \mod {n}$ and $f(a)=a^2=n\equiv 0\mod {n}$ , so strictly less than half of the nonzero elements of $\mathbb {Z}_n$ are quadratic residues mod $n$ .", "Therefore, by the same pairing argument as in case (a), there exists some $d$ such that $d$ and $-d$ are quadratic nonresidues mod $n$ as desired.", "Consider the case (c) where $a=3$ and $b=5$ .", "Then notice that 2 and $-2$ are quadratic nonresidues mod 15 as desired.", "We will use this lemma to improve the following result by Nakanishi and Okada.", "Lemma 2.4 (Propositions 5 and 6 in [12]) Let $K$ be a knot and let $A_K(t)=(a_{ij}(t))_{1\le i,j\le n}$ be an Alexander matrix of $K$ such that $a_{ij}(t)=a_{ij}(t^{-1})$ for all $1\le i,j\le n$ and $a_{ij}(1)={\left\lbrace \begin{array}{ll}1 &\text{if }i=j\\0 &\text{if }i\ne j\end{array}\right.}$ Then a Laurent polynomial $p(t)$ is the Alexander polynomial of some knot $K^{\prime }$ one crossing change away from $K$ if and only if there exist Laurent polynomials $r_1(t),...,r_n(t)$ , and $m(t)$ such that $m(t)=m(t^{-1})$ , $m(1)=\pm 1$ , and $r_i(1)=0$ for all $1\le i\le n$ , and $p(t)=\pm \det \begin{pmatrix}&&&r_1(t^{-1})\\&A_K(t)&&\vdots \\&&&r_n(t^{-1})\\r_1(t)&\dots &r_n(t)&m(t)\end{pmatrix}$ We now have all the tools to prove Theorem REF .", "Let $K$ be a knot with Nakanishi index 1, where $\det (K)\ge 3$ and $\det (K)$ is composite or $\det (K) \equiv 1 \mod {4}$ .", "Since knot determinants are odd, by Lemma REF , there exists some quadratic nonresidue $d$ mod $\det (K)$ such that $-d$ is also a quadratic nonresidue mod $\det (K)$ .", "Also notice that since $K$ has Nakanishi index 1, the 1 by 1 matrix $[\triangle _K(t)]$ is an Alexander matrix of $K$ , which satisfies conditions (a) and (b) in Lemma REF .", "Let $K^{\prime }$ be a knot one crossing change away from $K$ .", "Then, by Lemma REF , we have $\triangle _{K^{\prime }}(t)=\triangle _K(t)\cdot m(t)-r(t)\cdot r(t^{-1})$ for some $m(t), 
r(t)\\in \\mathbb {Z}[t,t^{-1}]$ such that $m(t^{-1})=m(t)$ , $|m(t)|=1$ , and $r(1)=0$ .", "Then $\\det (K^{\\prime })= |\\triangle _{K^{\\prime }}(-1)|&=|\\triangle _{K}(-1)\\cdot m(-1)-(r(-1))^2|\\\\\\det (K^{\\prime })&=|\\pm \\det (K)\\cdot m(-1)-(r(-1))^2|\\\\\\text{so }\\det (K^{\\prime })&=|a\\det (K)-b^2|$ for some $a,b\\in \\mathbb {Z}$ .", "Consider the case where $a\\det (K)\\ge b^2$ .", "Then $\\det (K^{\\prime })&=a\\det (K)-b^2\\\\b^2&=-\\det (K^{\\prime })+a\\det (K)$ so $-\\det (K^{\\prime })$ is a quadratic residue$\\mod {\\det }(K)$ .", "Therefore, $-\\det (K^{\\prime })\\lnot \\equiv d\\mod {\\det }(K)$ and $-\\det (K^{\\prime })\\lnot \\equiv -d\\mod {\\det }(K)$ , so $\\det (K^{\\prime })\\lnot \\equiv |d|\\mod {\\det }(K)$ .", "Otherwise, $a\\det (K)<b^2$ .", "Then $\\det (K^{\\prime })&=b^2-a\\det (K)\\\\b^2&=\\det (K^{\\prime })+a\\det (K)$ so $\\det (K^{\\prime })$ is a quadratic residue$\\mod {\\det }(K)$ .", "Therefore, $\\det (K^{\\prime })\\lnot \\equiv d\\mod {\\det }(K)$ and $\\det (K^{\\prime })\\lnot \\equiv - d\\mod {\\det }(K)$ , so $\\det (K^{\\prime })\\lnot \\equiv |d|\\mod {\\det }(K)$ .", "Since the knot determinants are exactly the odd natural numbers, there exists an Alexander polynomial $p(t)$ such that $|p(-1)|\\equiv |d|\\mod {\\det }(K)$ .", "As argued above, this Alexander polynomial is not realized by any knot one crossing change away from $K$ .", "Kawauchi also eliminated families of knots from being complete Alexander neighbors in the following result.", "Proposition 2.5 (Corollary 4.2 from [4]) Let $p$ be any prime number, and $n$ , $\\ell $ integers coprime to $p$ .", "If $p$ is an odd prime, then assume that $p$ is coprime to $1-4n$ and that $1-4n$ is a quadratic nonresidue mod $p$ .", "Consider a set of Alexander polynomials $S_{p,n,\\ell }=\\lbrace n(t+t^{-1})+1-2n\\rbrace \\cup \\lbrace (n+\\ell p^{2s+1})(t+t^{-1})+1-2(n+\\ell p^{2s+1})| s \\in \\mathbb {N}_0\\rbrace $ and let $a, b \\in S_{p,n,\\ell }$ such that $a\\ne b$ .", "Then for any knots $K_a, K_b$ such that $\\triangle _{K_a}=a$ and $\\triangle _{K_b}=b$ , we have that $K_a$ and $K_b$ must have Gordian distance at least two.", "We can characterize the knots Kawauchi has shown not to be complete Alexander neighbors here.", "First notice that any Alexander polynomial of breadth 2 of a knot $K$ can be written in the form $\\triangle _{K}(t)=n(t+t^{-1})+1-2n$ for some nonzero integer $n$ .", "Theorem 2.6 An Alexander polynomial of breadth 2, $q(t)=n(t+t^{-1})+1-2n$ is contained in $S_{p, n, \\ell }$ for some $p, n,$ and $\\ell $ as defined in Proposition REF if and only if $1-4n$ is not a square.", "Let $q(t)=n(t+t^{-1})+1-2n$ be an Alexander polynomial of breadth 2 for some $n \\in \\mathbb {Z}$ .", "Assume $1-4n$ is not a square.", "First notice that for all non-square $x$ , there exist infinitely many primes $p$ such that $x$ is a quadratic nonresidue mod $p$ .", "Since $1-4n$ is not a square, there exist infinitely many primes $p_i$ such that $1-4n$ is a quadratic nonresidue mod $p_i$ , so there exists such a prime $p_k$ such that $|1-4n|<p_k$ and $n<p_k$ , so $1-4n$ and $n$ are coprime to $p_k$ .", "Therefore, $q(t)\\in S_{p_k,n,\\ell }$ for some $\\ell $ as defined in Proposition REF .", "Assume $q(t)\\in S_{p,n,\\ell }$ for some $p, n,$ and $\\ell $ as defined in Proposition REF .", "Then notice that either $1-4n$ is a quadratic nonresidue mod $p$ where $p$ is prime or $p=2$ and $n$ is coprime to $p$ , meaning that $n$ is odd.", "In the case where $1-4n$ is a quadratic 
nonresidue mod $p$ , then $1-4n$ must not be a square.", "In the case where $n$ is odd, $1-4n\equiv 5\mod {8}$ .", "Notice that odd squares are congruent to 1 mod 8 since $(2x+1)^2=4x^2+4x+1=4x(x+1)+1$ and $x(x+1)$ must be even for any integer $x$ .", "Therefore, $1-4n$ is not a square.", "Theorems REF and REF yield Corollary REF .", "Let $\triangle _K(t)=n(t+t^{-1})+1-2n$ be an Alexander polynomial of breadth 2 of a knot $K$ for some nonzero $n \in \mathbb {Z}$ .", "In the case where $u_a(K)>1$ , $K$ is not a complete Alexander neighbor, so we may assume $n(K)\le u_a(K)=1$ .", "Since $\triangle _K(t)$ has breadth 2 and so is nontrivial, we have $n(K)=1$ .", "Notice that $\det K={\left\lbrace \begin{array}{ll}1-4n &n<0\\-1+4n &n>0\end{array}\right.}$ , so in the case where $n$ is negative, $\det K\equiv 1 \mod {4}$ and $5\le 1-4n$ , meaning that $K$ is not a complete Alexander neighbor by Theorem REF .", "In the case where $n$ is positive, we have $1-4n<0$ , so $1-4n$ is not a square.", "Therefore, $K$ is not a complete Alexander neighbor by Theorem REF and Proposition REF .", "Theorem 3 from [5] by Kondo states that every Alexander polynomial is realized by a knot with unknotting number one and thus algebraic unknotting number one.", "Thus, Theorem REF proves that infinitely many knots are not complete Alexander neighbors.", "As an example, $1+\sum _{i=1}^n\left((t^{2i}+t^{-2i})-(t^{2i-1}+t^{-2i+1})\right)$ for $n \in \mathbb {N}$ is an infinite class of Alexander polynomials with breadth $4n$ and determinant $1+4n$ , so there exist infinitely many knots with unknotting number one, and thus algebraic unknotting number one and Nakanishi index one, realizing this class of Alexander polynomials which are eliminated from being a complete Alexander neighbor by Theorem REF and not by Corollary REF or their algebraic unknotting number.", "Similarly, since every Alexander polynomial is realized by a knot with unknotting number one, Corollary REF proves that infinitely many knots are not complete Alexander neighbors.", "For example, $n(t+t^{-1})+1-2n$ for all nonzero integers $n$ is a collection of Alexander polynomials with breadth 2 and determinant $|1-4n|$ including infinitely many prime determinants congruent to $3 \mod {4}$ , which are each realized by a knot with unknotting number one, and thus algebraic unknotting number one.", "Therefore, infinitely many knots are eliminated by Corollary REF and not by Theorem REF or their algebraic unknotting number.", "Together, these three methods of proving that a knot is not a complete Alexander neighbor apply to all knots which meet at least one of the following criteria: having algebraic unknotting number greater than one (which applies to 1,546 of the 2,977 prime knots with crossing number 12 or less), having determinant which is composite or congruent to 1 mod 4 (which applies to 2,392 of the 2,977 prime knots with crossing number 12 or less), or having Alexander polynomial of breadth 2 (which applies to 36 of the 2,977 prime knots with crossing number 12 or less).", "All together, this eliminates 2,528 of the 2,977 prime knots with 12 crossings or fewer.", "There are many very small candidates for a complete Alexander neighbor with nontrivial Alexander polynomial which are not yet eliminated.", "Through eight crossings these are $6_2$ , $7_6$ , $8_4$ , $8_6$ , $8_7$ , and $8_{14}$ .", "All knots $K$ for which it is unresolved whether $K$ is a complete Alexander neighbor have algebraic unknotting number one and thus 
Nakanishi index one, meaning that $(\triangle _{K}(t))$ is an Alexander matrix for $K$ , so we can restate the question of whether $K$ is a complete Alexander neighbor (see Propositions 5 and 6 in [12]).", "$K$ is not a complete Alexander neighbor if and only if there exists an Alexander polynomial $p(t)$ such that for all Laurent polynomials $r(t)$ where $r(1)=0$ , we have $p(t)\not\equiv r(t) r(t^{-1})\mod {\triangle _{K}(t)}$ and $p(t)\not\equiv - r(t) r(t^{-1})\mod {\triangle _{K}(t)}$ .", "This may give a new approach to Question REF .", "For another possible method of investigating knots for which it is unresolved whether they are a complete Alexander neighbor, consider those with monic Alexander polynomial.", "For example, the knots $6_2$ , $7_6$ , and $8_7$ have monic Alexander polynomial, which means we can use Nakanishi and Okada's algorithm used on the knots $3_1$ and $4_1$ in [12] and the knots $5_1$ and $10_{132}$ in [11] to characterize the set of Alexander polynomials realized by their Gordian neighbors.", "This characterization might be useful to determine whether this set includes all Alexander polynomials." ], [ "Obstructions to Unknotting Number One", "We can leverage the effect of a single crossing change in a knot $K$ on the determinant or on the double cover $M_K$ of $S^3$ branched over $K$ to obtain two obstructions to unknotting number one.", "One was described by Lickorish in 1985 [7].", "Another follows from work by Nakanishi and Okada in 2012 [12].", "Using these obstructions, we can show that many knots have unknotting number greater than one through simple computations, where the two obstructions are similar, though neither subsumes the other.", "Lemma 3.1 (Proposition 13 in [12]) Let $K$ be a knot and $K^{\prime }$ be a knot one crossing change away from $K$ .", "If $K$ has unknotting number 1, then $\pm \det K^{\prime } \equiv -n^2 \mod {\det K}$ for some integer $n$ .", "Therefore, by the contrapositive of Lemma REF , given any knot $K$ where there exists a knot $K^{\prime }$ one crossing change away such that both $\det K^{\prime }$ and $-\det K^{\prime }$ are quadratic nonresidues mod $\det K$ , we have that $K$ has unknotting number greater than one.", "Note that it is necessary for both $\det K^{\prime }$ and $-\det K^{\prime }$ to be quadratic nonresidues mod $\det K$ to conclude that $K$ has unknotting number greater than one.", "For example, $3_1$ and $5_2$ are unknotting number one knots one crossing change apart with determinants 3 and 7 respectively.", "We have that 3 is a quadratic nonresidue mod 7, but $-3\equiv 4 \mod {7}$ is a quadratic residue mod 7.", "We also have that $-7\equiv 2 \mod {3}$ is a quadratic nonresidue mod 3 and $7\equiv 1\mod {3}$ is a quadratic residue mod 3.", "In the KnotInfo database [8], we can use this observation to show that $11n_{162}, 12n_{805}$ , $12n_{814},12n_{844},$ and $12n_{856}$ have unknotting number greater than one, where this was previously unknown in the database.", "This shows that $11n_{162}$ has unknotting number 2 and constrains the others to be 2 or 3.", "Figure: The knots $11n_{162}, 12n_{805}, 12n_{814}, 12n_{844},$ and $12n_{856}$ along with a knot one crossing change away from each of these.", "Under each knot is its name, a DT code for the pictured diagram, and the knot's determinant.", "The knot $11n_{162}$ has determinant 55 and DT code $[6, -10, 12, 22, 16, -18, 8, 20, -4, 2, 14]$ in KnotInfo [8].", "We can change the sign of the first 
entry in the DT code to obtain $9_{45}$ , a knot one crossing change away from $11n_{162}$ .", "The determinant of $9_{45}$ is 23.", "Since 23 and $-23$ are both quadratic nonresidues mod 55, by Lemma REF , $11n_{162}$ has unknotting number greater than one.", "In Figure REF , we see a knot one crossing change away from $11n_{162}, 12n_{805}, 12n_{814},12n_{844},$ and $12n_{856}$ whose determinant satisfies the contrapositive of Lemma REF .", "Therefore, using a similar argument to the one above for $11n_{162}$ , we conclude the proof.", "We identify these knots by performing a search with code using SnapPy [3] in Sage to compute the determinant of each knot for which it is unknown in KnotInfo [8] if the unknotting number is one, and compute the determinant of each knot obtained by changing the sign of one number in the DT code recorded in KnotInfo [8].", "Then we check whether the determinants satisfy the condition in Lemma REF .", "We can also use Lickorish's obstruction to show that these knots do not have unknotting number one [7].", "To describe Lickorish's obstruction, we need to introduce some definitions.", "Definition 3.2 Let $M$ be an oriented 3-manifold where $H_1(M)$ is finite.", "Then the linking form of $M$ is $\\lambda : H_1(M)\\times H_1(M)\\rightarrow \\mathbb {Q}/\\mathbb {Z}$ as defined below.", "Let $[\\alpha ], [\\beta ] \\in H_1(M)$ represented by 1-cycles $\\alpha $ and $\\beta $ in $M$ respectively.", "Then $n\\alpha $ bounds a disk $D$ for some integer $n$ .", "Define $\\lambda ([\\alpha ],[\\beta ])=\\frac{1}{n}i(D,\\beta )$ where $i(D,\\beta )$ is the intersection number of $D$ and $\\beta $ .", "Definition 3.3 Let $D$ be a connected, checkerboard colored diagram of a knot $K$ .", "Let $R_0, R_1, ..., R_n$ be the white regions of $D$ .", "Assign each crossing of $D$ a sign $\\pm 1$ as in Figure REF .", "Let $g_{ij}$ be the sum of the signs of the crossings abutted by the white regions $R_i$ and $R_j$ for $0\\le i,j\\le n$ where $i\\ne j$ and let $g_{ii}=-\\displaystyle \\sum _{i\\ne j}g_{ij}$ .", "A Goeritz matrix $G_K$ of $K$ is the $n\\times n$ matrix $(g_{ij})_{1\\le i,j\\le n}$ .", "Note that this eliminates all $g_{ij}$ where $i=0$ or $j=0$ .", "Figure: These are the sign conventions used in the definition of a Goeritz matrix.Lemma 3.4 (Lemmas 1 and 2 in [7]) If $K$ is a knot with unknotting number one, then $M_K$ is obtained by $\\pm \\frac{\\det K}{2}$ -surgery on a knot in $S^3$ and $H_1(M_K)$ is cyclic with a generator $g$ such that $\\lambda (g,g)=\\frac{2}{\\det K}\\in \\mathbb {Q}/\\mathbb {Z}$ .", "Lemma 3.5 (page 253 in [15], page 761 of [14]) Given a knot $K$ , the linking form $\\lambda $ of $M_K$ is given by $\\pm (G_K)^{-1}$ , meaning that $\\lambda (g_i,g_j)=\\pm (G_K^{-1})_{i,j}$ in $\\mathbb {Q}/\\mathbb {Z}$ where $\\lbrace g_1,g_2,...,g_n\\rbrace $ is a generating set of $H_1(M_K)$ .", "First notice that in SnapPy, we can see $&G_{11n_{162}}^{-1}=\\begin{pmatrix}\\frac{16}{55}&\\frac{8}{55}&\\frac{1}{5}&\\frac{3}{55}&\\frac{6}{55}\\\\[6pt]\\frac{8}{55}&\\frac{4}{55}&\\frac{3}{5}&\\frac{29}{55}&\\frac{3}{55}\\\\[6pt]\\frac{1}{5}&\\frac{3}{5}&\\frac{1}{5}&\\frac{3}{5}&\\frac{1}{5}\\\\[6pt]\\frac{3}{55}&\\frac{29}{55}&\\frac{3}{5}&\\frac{4}{55}&\\frac{8}{55}\\\\[6pt]\\frac{6}{55}&\\frac{3}{55}&\\frac{1}{5}&\\frac{8}{55}&\\frac{16}{55}\\end{pmatrix}, 
G_{12n_{805}}^{-1}=\begin{pmatrix}\frac{4}{17}&\frac{3}{17}&\frac{2}{17}&\frac{4}{17}\\[6pt]\frac{3}{17}&\frac{41}{85}&\frac{16}{85}&\frac{32}{85}\\[6pt]\frac{2}{17}&\frac{16}{85}&-\frac{29}{85}&\frac{27}{85}\\[6pt]\frac{4}{17}&\frac{32}{85}&\frac{27}{85}&\frac{54}{85}\end{pmatrix},\\ &G_{12n_{814}}^{-1}=\begin{pmatrix}\frac{36}{95}&\frac{3}{19}&-\frac{2}{95}&\frac{22}{95}&\frac{12}{95}\\[6pt]\frac{3}{19}&\frac{6}{19}&\frac{3}{19}&\frac{5}{19}&\frac{1}{19}\\[6pt]-\frac{2}{95}&\frac{3}{19}&-\frac{21}{95}&\frac{41}{95}&\frac{31}{95}\\[6pt]\frac{22}{95}&\frac{5}{19}&\frac{41}{95}&\frac{24}{95}&\frac{39}{95}\\[6pt]\frac{12}{95}&\frac{1}{19}&\frac{31}{95}&\frac{39}{95}&\frac{4}{95}\end{pmatrix}, G_{12n_{844}}^{-1}=\begin{pmatrix}\frac{4}{15}&\frac{1}{15}&\frac{1}{5}&\frac{2}{15}&\frac{2}{15}\\[6pt]\frac{1}{15}&\frac{1}{15}&\frac{3}{5}&\frac{8}{15}&\frac{2}{15}\\[6pt]\frac{1}{5}&\frac{3}{5}&\frac{1}{5}&\frac{3}{5}&\frac{1}{5}\\[6pt]\frac{2}{15}&\frac{8}{15}&\frac{3}{5}&\frac{1}{15}&\frac{1}{15}\\[6pt]\frac{2}{15}&\frac{2}{15}&\frac{1}{5}&\frac{1}{15}&\frac{4}{15}\end{pmatrix}, \text{ and}$ $G_{12n_{856}}^{-1}=\begin{pmatrix}-\frac{14}{55}&\frac{27}{55}&\frac{2}{5}&\frac{19}{55}\\[6pt]\frac{27}{55}&\frac{54}{55}&\frac{4}{5}&\frac{38}{55}\\[6pt]\frac{2}{5}&\frac{4}{5}&\frac{4}{5}&\frac{3}{5}\\[6pt]\frac{19}{55}&\frac{38}{55}&\frac{3}{5}&\frac{41}{55}\end{pmatrix}.$", "Assume for contradiction that $11n_{162}$ has unknotting number one.", "Then, by Lemma REF , $H_1(M_{11n_{162}})$ is cyclic with a generator $g$ such that $\lambda (g,g)=\frac{2}{55}\in \mathbb {Q}/\mathbb {Z}$ .", "Since $(G_{11n_{162}}^{-1})_{1,1}=\frac{16}{55}$ , we have by Lemma REF that there exists $g_1\in H_1(M_{11n_{162}})$ such that $\lambda (g_1,g_1)=\pm \frac{16}{55}$ .", "Since $H_1(M_{11n_{162}})$ is cyclic with a generator $g$ , we have that $g_1=tg$ for some integer $t$ , so $\pm \frac{16}{55}=\lambda (g_1,g_1)=\lambda (tg,tg)=t^2\lambda (g,g)=t^2\frac{2}{55}$ in $\mathbb {Q}/\mathbb {Z}$ .", "Therefore $t^2\equiv \pm 8 \mod {55}$ , but 8 and $-8$ are not quadratic residues mod 55, which is a contradiction.", "Using a similar argument to the one above for $11n_{162}$ , and the Goeritz matrices above, we conclude the proof.", "It is difficult to show that the first obstruction to unknotting number one does not apply to a particular knot since there are infinitely many crossing changes to check for the condition on determinants in Lemma REF .", "However, when we only check each crossing change done by a single sign change in the DT code for each knot recorded in KnotInfo [8] up to 12 crossings, this obstruction shows that 1,273 knots have unknotting number greater than one out of 2,472 knots which are not known to have unknotting number one.", "To show that Lickorish's obstruction does not apply to a particular knot $K$ we must check that $\lambda (g,g)/2$ or $-\lambda (g,g)/2$ is a quadratic residue for each $g \in H_1(M_K)$ .", "In the case where $H_1(M_K)$ is not cyclic, we know by Lemma REF that $K$ must have unknotting number greater than one, and in the case where $H_1(M_K)$ is cyclic, checking the diagonal entries of $G_K^{-1}$ is sufficient to determine whether Lickorish's obstruction is applicable to $K$ .", "However, just checking each entry $(G_K^{-1})_{i,i}$ along the main diagonal of the inverse of the Goeritz matrix of $K$ for each prime $K$ 
with up to 12 crossings, shows that 1,269 knots have unknotting number greater than one out of 2,472 knots which are not known to have unknotting number one.", "We also have that 11 of the remaining knots which are not known to have unknotting number one have non-cyclic $H_1(M_K)$ , so must have unknotting number greater than one.", "In the prime knots up to 13 crossings, there are 17 examples ($11a_{47}$ , $11n_{170}$ , $12a_{166}$ , $12a_{615}$ , $12a_{886}$ , $13a_{947}$ , $13a_{1237}$ , $13a_{1602}$ , $13a_{1853}$ , $13a_{1995}$ , $13a_{2005}$ , $13a_{2006}$ , $13a_{2649}$ , $13a_{4258}$ , $13n_{1663}$ , $13n_{2937}$ , and $13n_{2955}$ ) where changing some crossing in the DT code recorded in KnotInfo [8] yields a knot one crossing change away which satisfies the condition on determinants from Lemma REF to show that the unknotting number must be greater than one, but Lickorish's obstruction does not apply using any of the diagonal entries of the inverse of the Goeritz matrix.", "However, all of these examples except $11n_{170}$ and $13a_{2649}$ have non-cyclic first homology of the double cover of $S^3$ branched over the knot, which also demonstrates that these knots have unknotting number greater than one.", "In the prime knots up to 13 crossings, there are 4 examples ($12n_{553}$ , $13a_{1448}$ , $13a_{2142}$ , and $13n_{3264}$ ) where Lickorish's argument applies to one of the diagonal entries of the inverse of the Goeritz matrix, but no crossing change in KnotInfo's saved DT code [8] gives a knot satisfying the condition on determinants from Lemma REF .", "Of course, we can use other methods to prove that many of these knots have unknotting number greater than one, but we see here that when we use the diagonal entries of the Goeritz matrix and the crossing changes from a sign change in the DT code, on small knots these obstructions are very similar, but not the same, and apply to many knots.", "Also, using the condition on determinants from Lemma REF has the advantage that it is possible to expand the search to different crossing changes than those from sign changes in a particular DT code." ] ]
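The determinant condition compared above is easy to automate. The following is a minimal sketch (our illustration, not the authors' search code, which used SnapPy in Sage) of the check behind Lemma 3.1's contrapositive: if both $\det K^{\prime }$ and $-\det K^{\prime }$ are quadratic nonresidues mod $\det K$ for some knot $K^{\prime }$ one crossing change away from $K$ , then $u(K)>1$ .

```python
# Sketch of the Nakanishi-Okada determinant obstruction to unknotting number one.

def is_square_mod(a, n):
    """True iff a is congruent to n^2-type value, i.e. a square (possibly 0) mod n."""
    return (a % n) in {(x * x) % n for x in range(n)}

def obstructs_unknotting_number_one(det_K, det_K_prime):
    """True iff det(K') rules out u(K) = 1 via the condition on determinants."""
    return (not is_square_mod(det_K_prime, det_K)
            and not is_square_mod(-det_K_prime, det_K))

# 9_45 (det 23) is one crossing change from 11n_162 (det 55): obstruction applies.
print(obstructs_unknotting_number_one(55, 23))   # True
# 5_2 (det 7) is one crossing change from 3_1 (det 3, unknotting number one):
print(obstructs_unknotting_number_one(3, 7))     # False, as it must be
```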
2210.07728
[ [ "Optimal AdaBoost Converges" ], [ "Abstract The following work is a preprint collection of formal proofs regarding the convergence properties of the AdaBoost machine learning algorithm's classifier and margins.", "Various math and computer science papers have been written regarding conjectures and special cases of these convergence properties.", "Furthermore, the margins of AdaBoost feature prominently in the research surrounding the algorithm.", "At the zenith of this paper we present how AdaBoost's classifier and margins converge on a value that agrees with decades of research.", "After this, we show how various quantities associated with the combined classifier converge." ], [ "Introduction", "The margins hypothesis with respect to the effectiveness of AdaBoost is the leading explanation for how the algorithm achieves good generalization on a wide range of data sets .", "The hypothesis states that AdaBoost converges on a distribution of its decision margins on its training set that also improves its classification effectiveness over time.", "There has been much work on giving sufficient conditions for good margin distributions along with conditions for minimum margin maximization .", "However, even given this research, the distribution of these margins of AdaBoost is not well-understood.", "A key reason for that, we believe, is that the tools for analyzing the margins have been of a particular nature.", "Whereas much of the literature uses optimization and probabilistic tools, in this paper we present information theory and ergodic theory-inspired methods.", "Using these ideas we hope to find new inroads to analyzing AdaBoost and perhaps other algorithms." ], [ "Preliminaries", "As usual, when studying theoretical machine learning with probability theory, we are treating the training set $S\subset \mathcal {X}\times \mathcal {Y}$ as a random variable with unknown distribution $\mathcal {D}_S$ .", "Throughout this paper we are using information theoretic functions to study $S$ or the discrete distribution over which AdaBoost iterates for its dynamics.", "Furthermore, we will use square brackets $[n]$ to denote the set of natural numbers $1,...,n$ .", "Let $P$ be a discrete probability distribution represented as a vector $P=\langle p_i\rangle ^n_{i=1}$ such that $\sum ^n_{i=1}p_i=1$ .", "The information content of a probability value $p$ is the quantity $-\log p$ .", "This quantity can be thought of as the information yielded by the value of $p$ over the distribution $P$ .", "Generally, a random variable is used in the place of $p$ , but the information content of that random variable is calculated using its corresponding probability value from a distribution.", "We define the entropy of this distribution to be $H(P)=-\sum ^n_{i=1}p_i\log p_i.$ Similar to this function is the cross entropy, defined on two discrete probability distributions of $n$ components $P,Q$ such that $H(P,Q)=-\sum ^n_{i=1}p_i\log q_i.$ We will also use the expected value $\mathbb {E}_P [\cdot ]$ on variable outcomes $X$ defined $\mathbb {E}_{P}[X]=\sum ^n_{i=1}p_ix_i$ for discrete distributions $P$ and $X\in \mathcal {X}=\lbrace x: x\text{ a possible value of random variable }X\rbrace $ a space of outcomes.", "As a measure of statistical distance, we define the Kullback–Leibler divergence between $P,Q$ to be ${\textbf {KL}}(P,Q)=H(P,Q)-H(P)$ and another way to write this quantity is ${\textbf {KL}}(P,Q)=\sum ^n_{i=1}p_i\log \left(\frac{p_i}{q_i}\right).$ From these functions, we 
can acquire some elegant consequences about the AdaBoost algorithm.", "First we must define some specific functions native to the mathematical framework of this algorithm, thought of as a dynamical system on $\Delta _{n-1}=\left\lbrace \langle w_i\rangle ^{n}_{i=1}\in \mathbb {R}^{n}:\sum ^n_{i=1}w_i=1,\ w_i\ge 0\ \forall i\right\rbrace $ that is home to all $n$ -component discrete probability distributions $P$ .", "Given that we will be working on parameters of the AdaBoost algorithm, we will define some of these values.", "This algorithm can be thought of as an iterative update on the normalized weight vector $\vec{w}_t=\langle w_{t,i}\rangle ^n_{i=1}$ for $t\in \mathbb {N}$ an iteration of AdaBoost.", "These weight vectors are initialized as $\vec{w}_0=\left\langle \frac{1}{n}\right\rangle ^n_{i=1}$ .", "AdaBoost updates the weight vector over many iterations; in doing so, it requires a mistake dichotomy $\eta _{t}=\langle \eta _{t,i}\rangle ^n_{i=1}$ with $\eta _{t,i}=\pm 1$ .", "These are taken from a modified dichotomy set induced by the hypothesis space $\mathcal {H}$ denoted $\mathcal {C}_{\vec{y}}=\lbrace \langle y_ih(z_i)\rangle ^n_{i=1}:s_i=(z_i,y_i)\in S,\ \vec{y}=\langle y_i\rangle ^n_{i=1},\ h\in \mathcal {H}\rbrace .$", "This mistake dichotomy is used to generate the edge at iteration $t$ called $r_t$ and defined by $r_t=\max _{\eta \in \mathcal {C}_{\vec{y}}}(\vec{w}_t\cdot \eta )$ where $\eta \cdot \vec{w}_t$ is the conventional vector dot product.", "Note that we require $r_t>0$ for all $t$ since $r_t=1-2\epsilon _t$ with $\epsilon _t$ the error of our hypothesis $h_t$ .", "This is known as the weak learning condition and ensures that $h_t$ is better than random guessing.", "For our use, we also require that $r_t<1$ since $r_t=1$ corresponds to trivial dynamics and means that we have found a hypothesis that determines all labels of the training set.", "The edge value is used to define the learning coefficient $\alpha _t$ , or simply coefficient, defined $\alpha _t=\frac{1}{2}\log \left(\frac{1+r_t}{1-r_t}\right)$ which ends up being used to weight the combined classifier $F_t(z_i)$ on the $i$ -th data point of $S$ at iteration $t$ given as $F_t(z_i)=\sum ^t_{k=0}\alpha _k h_k(z_i)$ for $h_k\in \mathcal {H}$ .", "The learning coefficient is used in the AdaBoost weight update $w_{t+1,i}=\frac{w_{t,i}e^{-\eta _{t,i}\alpha _t}}{Z_t}$ where $Z_t=\sum ^n_{i=1}w_{t,i}e^{-\eta _{t,i}\alpha _t}$ , which we call the partition function at iteration $t$ .", "It is a classic result that $Z_t=\sqrt{1-r^2_t}$ .", "Our final AdaBoost parameter is the margin of the $i$ -th data point at iteration $t$ which is given by $y_iF_t(z_i)=\sum ^t_{k=0}\eta _{k,i}\alpha _k$ and this tracks the confidence of the final classifier with respect to its classification on the $i$ -th data point.", "We will often represent the margin of data point $i$ at iteration $t$ as $\text{mar}_{t,i}=y_iF_t(z_i)$ in order to make things look nice and intuitive.", "Algorithm (Optimal AdaBoost): Input: training set $\mathcal {S}$ .", "Initialize the combined classifier $F_0(z)=0$ , the weights $w_{0,i} \leftarrow \frac{1}{n}$ for the $n$ components of $\vec{w}_0$ , and $t \leftarrow 0$ .", "While $t\le t_{max}$ : (1) select $\eta _t\in \text{argmax}_{\eta ^{\prime }}(\vec{w}_t\cdot \eta ^{\prime })$ ; (2) set $r_t=\vec{w}_t\cdot \eta _t$ , the optimal edge at iteration $t$ ; (3) set $\alpha _t=\frac{1}{2}\log \left(\frac{1+r_t}{1-r_t}\right)$ ; (4) update the combined classifier $F_t(z)=F_{t-1}(z)+\alpha _th_{t}(z)$ ; (5) update and normalize the weights $w_{t+1,i}=\frac{w_{t,i}e^{-\eta _{t,i}\alpha _t}}{Z_{t}}$ for each $i$ .", "Return the final classifier $F_{t_{max}}(z)$ .", "We can take the expected value of random variables $X$ with respect to the distribution $\vec{w}_{t+1}$ in the manner of $\mathbb {E}_{\vec{w}_{t+1}}[X]=\sum ^n_{i=1}w_{t+1,i}x_i.$ In particular we can calculate the expected value of the margins at iteration $t$ , where $\text{mar}_t$ denotes the variable taking the value $\text{mar}_{t,i}$ on the $i$ -th data point, via $\mathbb {E}_{\vec{w}_{t+1}}[\text{mar}_t]=\sum ^n_{i=1}w_{t+1,i}y_iF_t(z_i).$ This expected value is over the discrete distribution that $\vec{w}_{t+1}$ defines rather than the distribution of the underlying data from which we take $S$ .", "The expected value in this case resembles the mean energy of the Ising model , which AdaBoost in turn greatly resembles." ], [ "Formal Results", "Since AdaBoost iterates over a discrete probability distribution represented as $\vec{w}_t\in \Delta _{n-1}$ , we are able to infer some direct and illuminating results using the information theoretic functions and quantities defined in Section 2.", "A nice evaluation can be found in simply applying $-\log $ to $w_{t+1,i}$ given the iterative weight update formula of AdaBoost.", "This quantity is $-\log w_{t+1,i}=-\log \left(\frac{1}{n}\prod ^t_{k=0}\frac{e^{-\eta _{k,i}\alpha _k}}{Z_k}\right)=\log n+\sum ^t_{k=0}\eta _{k,i}\alpha _k+\sum ^t_{k=0}\log Z_k.$ The above fact and its consequences will be used repeatedly throughout this work.", "Suppose that AdaBoost is at iteration $\vec{w}_{t+1}$ and suppose that $k\in \mathbb {Z}$ with $0\le k\le t$ .", "Then we have $-\log n+\sum ^t_{k=0}\emph {\textbf {KL}}(\vec{w}_{k+1},\vec{w}_{k})\le \mathbb {E}_{\vec{w}_{t+1}}[\emph {\text{mar}}_t]\le \sum ^t_{k=0}\emph {\textbf {KL}}(\vec{w}_{k+1},\vec{w}_{k}).$ Applying the entropy function $H$ to $\vec{w}_{t+1}$ and using the identity shown before this proposition, we have that $\begin{aligned}H(\vec{w}_{t+1})&=-\sum ^n_{i=1}w_{t+1,i}(-\log n-\sum ^t_{k=0}\eta _{k,i}\alpha _k-\sum ^t_{k=0}\log Z_k)\\&=\log n+\sum ^n_{i=1}w_{t+1,i}\sum ^t_{k=0}\eta _{k,i}\alpha _k+\sum ^t_{k=0}\log Z_k\\&=\log n+\sum ^n_{i=1}w_{t+1,i}y_iF_t(z_i)+\sum ^t_{k=0}\log Z_k\end{aligned}$ and this leads us to $\begin{aligned}H(\vec{w}_{t+1})-\log n-\sum ^t_{k=0}\log Z_k&=\sum ^n_{i=1}w_{t+1,i}y_iF_t(z_i)\\&=\mathbb {E}_{\vec{w}_{t+1}}[\text{mar}_t].\end{aligned}$ Since $0\le H(P)$ for any distribution $P$ , we get the inequality $-\log n-\sum ^t_{k=0}\log Z_k\le \mathbb {E}_{\vec{w}_{t+1}}[\text{mar}_t].$ We know that $\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})=-\log Z_k$ for all $k$ .", "So $-\log n+\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})\le \mathbb {E}_{\vec{w}_{t+1}}[\text{mar}_t].$ Similarly, since $H(P)\le \log n$ , we have $\mathbb {E}_{\vec{w}_{t+1}}[\text{mar}_t]\le \log n-\log n+\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})$ and hence the result.", "In the above sense, we see that AdaBoost maintains a growth relation between the sum of KL divergences between successive weight vectors and our expected value of margins.", "We will see later that this has to do with the minimum margin in specific.", "We call an unlabeled training example $z_i$ for $i\in [n]$ a support vector if there exists $t_0\in \mathbb {N}\cup \lbrace 0\rbrace $ so that $\overline{\text{mar}}_{t,i}$ achieves and 
maintains the minimum margin over training examples as $t$ grows large.", "Further, let $T$ be the set of training examples without their labels.", "Define the set $V=\lbrace z_i\in T:z_i \emph {\text{ is a support vector}}\rbrace $ and call $V$ the set of support vectors with respect to $T$ .", "Any training example that achieves the minimum margin as $t$ grows large will have a respective weight $w_{t,i}$ that stays positive, whereas any $z_j\in T\setminus V$ will have $w_{t,j}\rightarrow 0$ .", "Let $i\in [n]$ .", "Then we define the margin with normalized coefficients or normalized margin to be $\overline{\emph {\text{mar}}}_{t,i}=\frac{1}{\sum ^t_{k=0}\alpha _k}\emph {\text{mar}}_{t,i}$ and we call $\sum ^t_{k=0}\alpha _k$ the normalization constant, written $A_t=\sum ^t_{k=0}\alpha _k.$ The normalization constant $A_t$ normalizes the learning coefficients of the margin so that they sum to 1.", "Suppose that AdaBoost is at iteration $t$ and we have the combined classifier $F_t(z)$ constructed using Algorithm .", "We call the quantity $f_t(z)=\frac{F_t(z)}{A_t}$ the normalized classifier of Optimal AdaBoost.", "Consider the set of dichotomies induced by our hypotheses $\mathcal {H}$ on $T$ the unlabeled training set $\mathcal {C}=\lbrace \langle h(x_i)\rangle ^n_{i=1}:x_i\in T,\ h\in \mathcal {H}\rbrace .$ Let $j\in [|\mathcal {C}|]$ and suppose that $K_{j,t}$ indexes the iterations up to $t$ at which AdaBoost selects a hypothesis with dichotomy $\mu _j\in \mathcal {C}$ .", "We can identify to each $\mu _j$ the normalized coefficients that will multiply them in the final classifier up to iteration $t$ with $\lambda _{t,j}=\frac{\sum _{k_j\in K_{j,t}}\alpha _{k_j}}{A_t}.$ Let $i\in [n]$ with AdaBoost at iteration $t$ and consider $\overline{\text{mar}}_{t,i}$ .", "Define the index $N^+_{t,i}$ to be iterations $k$ up to $t$ so that $\eta _{k,i}=+1$ and $N^-_{t,i}$ the same for $\eta _{k,i}=-1$ .", "We define the value $\beta ^{\pm }_{t,i}$ to be $\beta ^{\pm }_{t,i}=A^{-1}_t\sum _{n^{\pm }_i\in N^{\pm }_{t,i}}\alpha _{n^{\pm }_i}.$ Observe that $\pm \beta ^{\pm }_{t,i}=\overline{\text{mar}}_{t,i}\pm \beta ^{\mp }_{t,i}.$ This quantity defines the total normalized contribution of $\eta _{k,i}=\pm 1$ for each $k$ to the classification of a data point.", "Suppose that $i\in I(V)$ , the set of indices of the support vectors.", "Let $\epsilon >0$ be a constant so that $\epsilon <w_{t+1,i}\le 1$ for all $t$ by definition of support vector.", "Then $\emph {\text{mar}}_{t,i}-\sum ^t_{k=0}\emph {\textbf {KL}}(\vec{w}_{k+1},\vec{w}_{k})$ is bounded above and below by finite constants.", "By Eqn.", "REF we have that $-\log w_{t+1,i}=\log n+\text{mar}_{t,i}-\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})$ for each $i\in [n]$ .", "Then since $\epsilon <w_{t+1,i}$ , applying $-\log $ to both sides of the inequality gives us $-\log w_{t+1,i}<-\log \epsilon $ for all $t$ .", "This means $0\le \log n+\text{mar}_{t,i}-\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})<-\log \epsilon $ such that $-\log n\le \text{mar}_{t,i}-\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})<-\log (n\epsilon )$ for all $t$ .", "This proof clearly shows that the minimum margin examples are chosen by the dynamics of the AdaBoost algorithm so that they keep up with the sum of KL divergences.", "The KL divergence of successive weight vectors is a minimum distance projection in $\Delta _{n-1}$ .", "Let $i\in I(V)$ .", "Then $\lim _{t\rightarrow \infty }\overline{\emph {\text{mar}}}_{t,i}=\lim _{t\rightarrow \infty }A^{-1}_t\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k}).$", "Furthermore, the rate of convergence depends only on $|S|=n$ and the distribution of values $(r_k)^{\infty }_{k=0}$ .", "By Proposition  we have that $-\log n\le \text{mar}_{t,i}-\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})<-\log (n\epsilon ).$ Multiplying all parts of these inequalities by $A^{-1}_t$ gives us $-A^{-1}_t\log n\le \overline{\text{mar}}_{t,i}-A^{-1}_t\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})<-A^{-1}_t\log (n\epsilon ).$ Since $-\log n$ and $\epsilon $ are constants, taking the limit $t\rightarrow \infty $ proves the lemma.", "Observe that another way to write the value $A^{-1}_t\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})$ is via $\frac{-\sum ^t_{k=0}\log (1-r^2_k)}{\sum ^t_{k=0}\log \left(\frac{1+r_k}{1-r_k}\right)}.$ A similar function over a finite index appears in a 2004 paper by Rudin, Daubechies, and Schapire for use in the cycling dynamics of AdaBoost .", "It also appears as a single-term version without sums in other literature such as in Boosting: Foundations and Algorithms chapter 5 .", "The single-term version was introduced by Rätsch and Warmuth in 2005 .", "When written with a single term in numerator and denominator it describes a game theoretic relationship between the edge and minimum margin.", "Here the function is an analytical quantity that relates learning coefficients, the sum of KL divergences, and support vectors in the general dynamics of AdaBoost.", "For our purposes now and with the formalism that we have built up over the course of this work, we can write the asymptotic support vector margin as $\overline{\text{mar}}_{\infty ,i}=\frac{-\sum ^\infty _{k=0}\log (1-r^2_k)}{\sum ^\infty _{k=0}\log \left(\frac{1+r_k}{1-r_k}\right)}$ where $\infty $ in place of $t$ denotes an infinite limit.", "Since all support vectors have this same limit, they are asymptotically identical.", "Given that we cannot bound non-support vectors as we did in Proposition , it is not clear if they have such an asymptotic identity.", "Nothing too mysterious is going on when taking the limit in this case as all margins with normalized coefficients are in $[0,1]$ , which means their limit is too.", "One may wonder about oscillation, which we deal with in a coming lemma.", "Suppose that AdaBoost is at iteration $\vec{w}_{t+1}$ .", "Then $\lim _{t\rightarrow \infty }\mathbb {E}_{\vec{w}_{t+1}}[\overline{\emph {\text{mar}}}_t]=\lim _{t\rightarrow \infty }A^{-1}_t\sum ^t_{k=0}\emph {\textbf {KL}}(\vec{w}_{k+1},\vec{w}_{k}).$ As in the previous lemma, the rate of convergence will depend only on $|S|=n$ and the distribution of values $(r_k)^\infty _{k=0}$ .", "We know that $\mathbb {E}_{\vec{w}_{t+1}}[\text{mar}_t]=\sum ^n_{i=1}w_{t+1,i}\text{mar}_{t,i}$ and also $-\log n+\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})\le \mathbb {E}_{\vec{w}_{t+1}}[\text{mar}_t]\le \sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})$ from Proposition .", "Multiplying the inequalities through by $A^{-1}_t$ gives us $A^{-1}_t\left(-\log n+\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})\right)\le A^{-1}_t\mathbb {E}_{\vec{w}_{t+1}}[\text{mar}_t]\le A^{-1}_t\left(\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k})\right).$ When we take the limit $t\rightarrow \infty $ the proposition follows.", "What is important to note for this proposition 
and the previous lemma is that although we are taking limits, these processes effectively converge already when $t$ is large but finite.", "While we have chosen to use limits due to their analytical beauty, we could forgo this to respect the finite-time context of computer science applications.", "In this sense, we are also giving finite bounds on both the expected value of the margins and the individual values of the support vector margins.", "That these things apply in the most general of cases where we have not specified $S$ nor $\mathcal {H}$ is quite amazing.", "The following proofs resemble the results given in a paper from 2015 by Joshua Belanich and Luis Ortiz that conjectured AdaBoost as a measure-preserving dynamical system .", "We originally sought to prove the conjecture, but the convergence properties of the algorithm follow without any such measure theoretic properties.", "The normalized margins converge to a constant value.", "Fix $l\in \mathbb {N}$ and suppose that $i\in I(V)$ .", "Now, consider the difference $\overline{\text{mar}}_{t,i}-\overline{\text{mar}}_{t+l,i}.$ We will prove the lemma by showing that the above quantity equals 0 as $t\rightarrow \infty $ .", "This will mean that the limiting value of the margin does not oscillate indefinitely.", "Since the normalized margins are bounded, this implies convergence.", "Now $\begin{aligned}\overline{\text{mar}}_{t,i}-\overline{\text{mar}}_{t+l,i}&=\frac{\text{mar}_{t,i}}{A_{t}}-\frac{\text{mar}_{t+l,i}}{A_{t+l}}\\&=\frac{\text{mar}_{t,i}A_{t+l}}{A_tA_{t+l}}-\frac{\text{mar}_{t+l,i}A_t}{A_tA_{t+l}}.\end{aligned}$ Then we turn our attention to the difference in the numerator such that $\begin{aligned}\text{mar}_{t,i}A_{t+l}-\text{mar}_{t+l,i}A_{t}&=\text{mar}_{t,i}\sum ^{t+l}_{k=0}\alpha _k-\text{mar}_{t+l,i}\sum ^{t}_{k=0}\alpha _k \\&=\text{mar}_{t,i}\sum ^{t}_{k=0}\alpha _k+\text{mar}_{t,i}\sum ^{t+l}_{k=t+1}\alpha _k-\text{mar}_{t,i}\sum ^{t}_{k=0}\alpha _k-\sum ^{t+l}_{k=t+1}\eta _{k,i}\alpha _k\sum ^{t}_{k=0}\alpha _k\\&=\text{mar}_{t,i}\sum ^{t+l}_{k=t+1}\alpha _k-\sum ^{t+l}_{k=t+1}\eta _{k,i}\alpha _k\sum ^{t}_{k=0}\alpha _k.\end{aligned}$ So $\begin{aligned}\overline{\text{mar}}_{t,i}-\overline{\text{mar}}_{t+l,i}&=\frac{\text{mar}_{t,i}\sum ^{t+l}_{k=t+1}\alpha _k}{A_tA_{t+l}}-\frac{\sum ^{t+l}_{k=t+1}\eta _{k,i}\alpha _k\sum ^{t}_{k=0}\alpha _k}{A_tA_{t+l}}\\&=\frac{\overline{\text{mar}}_{t,i}\sum ^{t+l}_{k=t+1}\alpha _k}{A_{t+l}}-\frac{\sum ^{t+l}_{k=t+1}\eta _{k,i}\alpha _k}{A_{t+l}}\\\end{aligned}$ Observe that both terms $\overline{\text{mar}}_{t,i}\sum ^{t+l}_{k=t+1}\alpha _k,\ \sum ^{t+l}_{k=t+1}\eta _{k,i}\alpha _k$ are bounded above by $\sum ^{t+l}_{k=t+1}\alpha _k$ , a finite quantity for all $t$ since $0<r_k<1$ for $0\le k\le t+l$ .", "Hence $\lim _{t\rightarrow \infty }\left(\overline{\text{mar}}_{t,i}-\overline{\text{mar}}_{t+l,i}\right)=\lim _{t\rightarrow \infty }\left(\frac{\overline{\text{mar}}_{t,i}\sum ^{t+l}_{k=t+1}\alpha _k}{A_{t+l}}-\frac{\sum ^{t+l}_{k=t+1}\eta _{k,i}\alpha _k}{A_{t+l}}\right)=0$ given that $A_{t+l}\rightarrow \infty $ when $t\rightarrow \infty $ .", "This completes the proof since $l$ was arbitrarily chosen.", "The expected value of the normalized margins converges to a constant value.", "The proof for this follows from the above lemma since the limiting value of the normalized margin of any support vector is the same as the limit 
of the expected value of the normalized margins.", "For each dichotomy $\mu _j\in \mathcal {C}$ the value $\lim _{t\rightarrow \infty }\lambda _{t,j}$ converges.", "Like the lemma above, we take $l\in \mathbb {N}$ fixed and consider $\lambda _{t,j}-\lambda _{t+l,j}.$ This value is $\begin{aligned}\lambda _{t,j}-\lambda _{t+l,j}&=\frac{\sum _{k_j\in K_{j,t}}\alpha _{k_j}}{A_t}-\frac{\sum _{k_j\in K_{j,t+l}}\alpha _{k_j}}{A_{t+l}}\\&=\frac{A_{t+l}\sum _{k_j\in K_{j,t}}\alpha _{k_j}}{A_tA_{t+l}}-\frac{A_t\sum _{k_j\in K_{j,t+l}}\alpha _{k_j}}{A_tA_{t+l}}.\\\end{aligned}$ Then $\begin{aligned}A_{t+l}\sum _{k_j\in K_{j,t}}\alpha _{k_j}-A_t\sum _{k_j\in K_{j,t+l}}\alpha _{k_j}&=A_{t}\sum _{k_j\in K_{j,t}}\alpha _{k_j}+\left(\sum ^{t+l}_{k=t+1}\alpha _k\right)\sum _{k_j\in K_{j,t}}\alpha _{k_j}\\&-A_t\sum _{k_j\in K_{j,t}}\alpha _{k_j}-A_t\sum _{k_j\in K_{j,t+l}\setminus K_{j,t}}\alpha _{k_j}\\&=\left(\sum ^{t+l}_{k=t+1}\alpha _k\right)\sum _{k_j\in K_{j,t}}\alpha _{k_j}-A_t\sum _{k_j\in K_{j,t+l}\setminus K_{j,t}}\alpha _{k_j}.\end{aligned}$ The above implies that $\begin{aligned}\lambda _{t,j}-\lambda _{t+l,j}&=\frac{\left(\sum ^{t+l}_{k=t+1}\alpha _k\right)\sum _{k_j\in K_{j,t}}\alpha _{k_j}}{A_tA_{t+l}}-\frac{A_t\sum _{k_j\in K_{j,t+l}\setminus K_{j,t}}\alpha _{k_j}}{A_tA_{t+l}}\\&=\frac{\left(\sum ^{t+l}_{k=t+1}\alpha _k\right)\lambda _{t,j}}{A_{t+l}}-\frac{\sum _{k_j\in K_{j,t+l}\setminus K_{j,t}}\alpha _{k_j}}{A_{t+l}}.\\\end{aligned}$ As in the previous lemma, both terms have bounded numerators in the difference above.", "This means that $\lim _{t\rightarrow \infty }(\lambda _{t,j}-\lambda _{t+l,j})=0$ completing the proof.", "The term $\pm \beta ^{\pm }_{t,i}$ converges as $t\rightarrow \infty $ for all $i\in [n]$ .", "This result follows from a proof exactly like that for the above proposition.", "The set $V$ of support vectors is non-empty.", "Since the normalized margins converge, there must be a minimum normalized margin in the limit.", "Any finite set of real numbers has a minimum.", "This means that for some $i\in [n]$ and fixed iteration $t_0$ , for all iterations $t$ so that $t_0\le t$ the value $\overline{\text{mar}}_{t,i}$ attains the minimum margin value and stays there.", "Hence, $z_i\in V$ as $t\rightarrow \infty $ .", "Only support vectors contribute to the value of $\mathbb {E}_{\vec{w}_{t+1}}[\overline{\emph {\text{mar}}}_t]$ as $t\rightarrow \infty $ .", "Furthermore, we have that $|V|>1$ .", "Suppose that $\theta _t=\min _{j\in [n]}\overline{\text{mar}}_{t,j}.$ We know $\lim _{t\rightarrow \infty }\mathbb {E}_{\vec{w}_{t+1}}[\overline{\text{mar}}_t]=\lim _{t\rightarrow \infty }A^{-1}_t\sum ^t_{k=0}\emph {\textbf {KL}}(\vec{w}_{k+1},\vec{w}_{k})$ and since $V\ne \emptyset $ there is $z_i\in V$ so that $\lim _{t\rightarrow \infty }\overline{\text{mar}}_{t,i}=\lim _{t\rightarrow \infty }A^{-1}_t\sum ^t_{k=0}\textbf {KL}(\vec{w}_{k+1},\vec{w}_{k}).$ Given the above we must have that there exists fixed iteration $t_0$ so that for all $t$ with $t_0\le t$ the equality $\theta _t=\overline{\text{mar}}_{t,i}$ holds, i.e.", "$z_i$ has the least margin for large enough $t$ .", "The first limit also means that for large enough $t$ and any $\epsilon >0$ we get $\left|\sum ^n_{j=1}w_{t,j}\overline{\text{mar}}_{t,j}-\theta _t\right|<\epsilon .$ Now, rewriting $\theta _t$ as a weighted sum over the sole term $\theta _t$ gives $\left|\sum ^n_{j=1}w_{t,j}\overline{\text{mar}}_{t,j}-\sum ^n_{j=1}w_{t,j}\theta _t\right|=\left|\sum ^n_{j=1}w_{t,j}\left(\overline{\text{mar}}_{t,j}-\theta _t\right)\right|<\epsilon ,$ which can only be the case if $\overline{\text{mar}}_{t,j}-\theta _t\rightarrow 0$ or $w_{t,j}\rightarrow 0$ for each $j\in [n]$ as $t\rightarrow \infty $ .", "Since AdaBoost cannot converge on a fixed weight vector $\vec{w}$ with only one non-zero term by the weak learning condition, there must be more than one support vector in the limit.", "The normalized classifier that AdaBoost outputs converges asymptotically.", "By Lemma  all of the normalized margins of AdaBoost converge.", "Since the normalized margins of Optimal AdaBoost are the same as its normalized classifier applied to individual training examples and multiplied by the label $y_i=\pm 1$ , the normalized classifier converges as well.", "This concludes the formal proofs of this paper." ], [ "Discussion", "Theorem  comes from some interesting ways of dealing with the weight vector $\vec{w}_t$ in relation to various quantities of information theory.", "Our initial quantity of Eqn.", "REF is like a fingerprint for AdaBoost up to the latest iteration $t$ .", "All information about the run of the algorithm over the training set $S$ can be seen in this equation.", "The cardinality of $S$ , combined loss at each iteration, and the margins of iteration $t-1$ can all be found therein.", "What is most interesting about the information content of $w_{t,i}$ is that the vector itself is rather opaque to analysis as it is.", "However, a simple application of $-\log $ garners much in terms of the ultimate convergence properties of the algorithm as $t\rightarrow \infty $ .", "As well, given Theorem  there must be more than one support vector.", "Using this definition that primarily saw use in the cycling dynamics of AdaBoost , we can see that the algorithm converges on a specific distribution of smallest margins.", "It is possible to control these minimum margin values to show that, in some respect, certain data points will be attracted to a sort of learning limit set $V$ .", "What is most interesting here is that a training example either attains the minimum margin and stays relevant via $w_{t,i}$ bounded away from zero, or else becomes dynamically irrelevant with respect to the effects of the weight vector.", "A paper from 2020 by Kaifeng Lyu and Jian Li on homogeneous neural networks regards the normalized margins of these very different classifiers in a similar way.", "Although they do not relate the margins and normalized margins to information theoretic quantities as we do in this work, they are able to show results using approximations of margins whose error is bounded in a similar fashion to our own Proposition .", "Indeed, as in Proposition , the divergence of the magnitude of a parameter used in the learning process causes their approximation to converge to the normalized margin being approximated.", "Bounding techniques of this kind seem important in understanding the convergence of certain algorithms.", "Further, we believe that the information content of normalized quantities, the vector $\vec{w}_{t}$ in our case, may reveal similar fingerprints in the analysis of learning algorithms separate from AdaBoost." ] ]
2210.07808
[ [ "Compensating for non-linear distortions in controlled quantum systems" ], [ "Abstract Predictive design and optimization methods for controlled quantum systems depend on the accuracy of the system model.", "Any distortion of the input fields in an experimental platform alters the model accuracy and eventually disturbs the predicted dynamics.", "These distortions can be non-linear with a strong frequency dependence so that the field interacting with the microscopic quantum system has limited resemblance to the input signal.", "We present an effective method for estimating these distortions which is suitable for non-linear transfer functions of arbitrary lengths and magnitudes provided the available training data has enough spectral components.", "Using a quadratic estimation, we have successfully tested our approach for a numerical example of a single Rydberg atom system.", "The transfer function estimated from the presented method is incorporated into an open-loop control optimization algorithm allowing for high-fidelity operations in quantum experiments." ], [ "Introduction", "Over the last few decades, various quantum systems, including superconducting circuits, neutral atoms, trapped ions, and spins [1], [2], [3], have shown exciting progress in controlling quantum effects for applications in quantum sensors [4], simulators [5], and computers [6].", "In these setups, quantum operations are implemented using external fields or pulses which are generated and influenced by several electronic and optical devices.", "For high-fidelity and uptime applications, this requires high performance of, e.g., population transfers and quantum gates, while suppressing interactions with the environment as well as decoherence.", "By shaping temporal and spatial profiles of external fields and pulses, the time-dependent system Hamiltonian steers the quantum dynamics towards the targeted outcome.", "Experimental distortions of the applied pulses may reduce the effectiveness and robustness of the desired quantum operation [7], [8].", "Methods have been developed to characterize distortions based on the impulse response or transfer function of the experimental system [9], [7], [10], [11], [12], [13], [14], [8], [15].", "These approaches for estimating field distortions work well for distortions with a linear transfer function.", "This work, however, addresses the more general case with substantial non-linear distortions originating from the experimental hardware.", "The description of the distortions can be challenging without knowing the exact characteristics of the experimental hardware.", "Also, approximating a significant non-linearity using a linear model will result in model coefficients and control pulses that are not robust against experimental distortions and suffer from a loss in fidelity.", "To account for this problem, we introduce a mathematical model and an estimation method which rely on limited experimental data and can characterize the system behavior up to a non-linearity of finite order.", "To streamline our presentation, we focus on quadratic non-linearities, but more general non-linearities can be treated similarly.", "We illustrate our estimation approach with numerical data for a single-Rydberg atom excitation experiment in the presence of significant non-linearities and we highlight how our approach can calibrate for and suppress large distortions.", "We describe an effective approach for estimating the coefficients of this non-linear model and correct the pulses accordingly.", 
"We emphasize that our approach is independent of a specific experimental setup and can therefore be applied to various (spatially or temporally) field-tunable phenomena on different quantum platforms.", "Our estimation method for distortions is particularly effective in combination with methods from quantum optimal control [16], [17], [18], [19], [20] and it yields optimized pulses for highly efficient gates while accounting for estimated distortions.", "To this end, we provide an analytical expression for estimating the Jacobian of the transfer function for quadratic distortions, which can be further generalized to higher orders.", "We also validate this combined approach with our Rydberg atom excitation example.", "In the context of quantum control, any inaccuracy in the system Hamiltonian can severely affect the performance of pulses produced by optimal control.", "Given a reasonably accurate model, control fields might also suffer from discretization effects, electronic distortions, and bandwidth limitations (mostly assumed to be linear).", "Accounting for these distortions by including the linear transfer function within the dynamics, as well as its combined gradient, has been incorporated in related optimization work [7], [21], [15], [22], [23].", "Another strategy for minimizing non-linear pulse distortions is to avoid high frequencies altogether in control pulses [24], [25].", "Starting from initial applications [26], [27], optimal control methods have been extensively used in quantum computing, quantum simulation, and quantum information processing [17], [20], [28], [29], [30], [31].", "Analytic results applicable to smaller quantum systems shape our understanding for the limits to population transfers and quantum gates (see [17], [20], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41], [42], [43], [44], [45], [46], [47], [48] and references therein).", "Increasing the efficiency of quantum operations by numerically optimizing and fine-tuning control parameters can rely on open-loop or model-based optimal control methods [49], [50], [51], [52], [7], [53], [54], [55], [56], [57].", "Our work on the estimation of distortions can be seen in the context of model-based approaches, which might rely on an accurate gradient calculation of the analytical cost function and thus on the knowledge of the Hamiltonian of the system [17], [7].", "This knowledge might be available in naturally occurring qubits (such as atomic, molecular, or optical systems), but may also be estimated in engineered (solid-state) technologies.", "Similarly, closed-loop (i.e.", "adaptive feed-forward) control methods [58], [59], [60], [61], [62], [63], [31], [64] are used in situ to reduce adverse experimental effects on the control pulses, while direct (real-time) feedback and reservoir engineering methods can also be used where appropriate to counteract control uncertainties [65], [66].", "The paper is organized as follows: Section  sketches the control setup for optimizing quantum experiments and describes the conventional method for estimating the transfer function and its inclusion in the optimization.", "In Sec.", ", we detail our non-linear estimation method using non-linear kernels.", "We also describe how to derive the transformation matrix and its gradient.", "The non-linear effects on quantum operations are shown with a numerical example of Rydberg atom excitations in Sec. 
.", "We apply the estimation methodology to our numerical Rydberg example in Sec.", "and discuss requirements on the available measurement data.", "Finally, we consider different numerical optimization methods in combination with our estimation method in Sec.", "(see also Appendix ) and conclude in Sec.", "." ], [ "Time-dependent Control Problems", "We aim to efficiently transferring the population from an initial quantum state to a final target state.", "The evolving state of a quantum system is described by its density operator $\\rho (t)$ and the corresponding equation of motion is written for coherent dynamics as $\\dot{\\rho }= -i[H(t), \\rho ] + \\mathcal {L}(\\rho ).$ The form of the Lindblad term $\\mathcal {L}(\\rho )$ is discussed in Sec.", "while the Hamiltonian can be expressed as $H(t) = H_d + \\sum _{i}{u_{i}(t)\\, H_{i}}.$ The free-evolution or drift component is given by $H_{d}$ , while $H_i$ denotes the control Hamiltonians which are multiplied with time-dependent control pulses $u_{i}(t)$ .", "More precisely, our goal is to transfer a quantum system from a given initial pure state with density operator $\\rho _{i}$ to a target pure-state density operator $\\rho _t$ in time $T$ by varying the control pulses $u_{i}(t)$ while minimizing the cost function $C=1-| \\langle \\rho _t | \\rho (T)\\rangle |^2 = 1- | \\mathrm {Tr}[\\rho _t^ \\dagger \\rho (T)] |^2,$ where $\\mathrm {Tr}(M)$ denotes the trace of a matrix $M$ .", "This cost function measures the difference between the target-state density operator $\\rho _t$ and the final-state density operator $\\rho (T)$ .", "In this work, we employ gradient-based optimization methods, which are described and discussed in Section  and Appendix .", "Figure: Quadratic estimation of distorted pulses[Eq.", "()]is preferable to linearestimation [Eq.", "()]:(a) A pulse is numerically distorted (solid line); later the distortion isestimated up to linear (dashed line) and quadratic terms (dotted line).The quadratic estimation better matches the actual distorted pulse when compared to the linear estimation.", "(b) Numerically computed errors for different types of distortions [including the distortion C plotted in (a)]generated by Eqs.", "()–() are plotted for both the linear and the quadraticestimation.", "The erroris defined in Eq.", "() and describes the difference between the actual distortedand the estimated pulse.The experimental realization of control pulses $u_{i}(t)$ relies on several devices, which might introduce systematic distortions and reduce the overall control efficiency.", "It is our objective to determine these systematic distortions in order to adapt the control pulses during the optimization and counteract any adverse effects.", "For a linear distortion, we can calculate its transfer function $T(\\omega )=\\frac{Y(\\omega )}{X(\\omega )}$ in the Fourier domain as the ratio of the Fourier transform of the input and output pulses $x(t)$ and $y(t)$ , i.e.", "before and after the distortion has taken place.", "Alternatively, we can calculate the impulse response $\\mathcal {I}(t)$ of the system which relates the input and output pulse in the time domain using the convolution $y(t)=(x*\\mathcal {I})(t) = \\int _{-\\infty }^{\\infty }x(\\tau )\\,\\mathcal {I}(t{-}\\tau )d\\tau .$ Figure REF highlights that a linear model might not be sufficient for estimating experimental distortions as it cannot account for non-linear effects.", "Non-linear effects are demonstrated in Fig.", "REF (a) by passing one estimated example pulse 
through a numerically generated distortion [see Eqs. ()–()].", "When estimating the distortion coefficients using a linear model, the resulting distorted pulse does not match in Fig.", "REF (a) with the actual distorted pulse.", "However, the quadratic estimation with a non-linear model (as described in Sec. )", "precisely recovers the actual distorted pulse.", "Non-linear models are, e.g., preferable for Rydberg excitations which are detailed with realistic experimental parameters in Sec.", "." ], [ "Non-linear Estimation Method ", "We provide now a general approach for estimating non-linear distortions in a controlled quantum system and explain how this estimation approach can be incorporated into the synthesis of robust optimal control pulses." ], [ "Truncated Volterra series method", "We characterize non-linear distortions using the truncated Volterra series method [67].", "The Volterra series is a mathematical description of non-linear behaviors for a wide range of systems [68].", "In analogy to Eq.", "(REF ), we can write the general form of the Volterra series as $y(t)=h^{(0)}+\\sum _{n=1}^{P}\\int _{a}^{b}\\!\\!\\!\\ldots \\!\\int _{a}^{b}h^{(n)}(\\tau _{1},\\ldots ,\\tau _{n})\\prod _{j=1}^{n}x(t{-}\\tau _{j})d\\tau _{j}$ where $x(t)$ is assumed to be zero for $t< 0$ as we consider general, non-periodic signals.", "The output function $y(t)$ can be expressed as a sum of the higher-order functionals of the input function $x(t)$ weighted by the corresponding Volterra kernels $h^{(n)}$ .", "These kernels can be regarded as higher-order impulse responses of the system.", "The Volterra series in Eq.", "(REF ) is truncated to the order $P<\\infty $ and it is called doubly finite if $a$ and $b$ are also finite.", "For a causal system, the output $y(t)$ can only depend on the input $x(t{-}\\tau _j)$ for earlier times (i.e.", "$t\\ge \\tau _j$ ) which results in $a\\ge 0$ ; recall that $x(t{-}\\tau _j)=0$ for $\\tau _j>t$ .", "The Volterra series can therefore also model memory effects (which are assumed to be of finite length) and it is not restricted to instantaneous effects.", "The discretized form of the Volterra series truncated to second order (i.e.", "$P=2$ ) is given by ([67]) $y_n =h^{(0)}+\\sum _{j=0}^{R-1}h^{(1)}_j\\, x_{n{-}j}+\\sum _{k,\\ell =0}^{R-1}h^{(2)}_{k\\ell }\\, x_{n{-}k} \\, x_{n{-}\\ell },$ The discrete output entries $y_n$ have $N$ time steps with $n \\in \\lbrace 0,\\ldots ,N{-}1\\rbrace $ which are obtained from $L$ discrete input entries $x_q$ where $x_q=0$ for $q<0$ .", "Note that $N = L +R -1 \\ge L$ , where $R\\ge 1$ denotes the assumed memory length of the distortion.", "The memory length $R$ quantifies how the response at the current time step depends on the input of previous time steps, i.e., $R$ bounds the number of previous time steps that can affect the current one.", "Volterra kernel coefficients of the zeroth, first, and second order are represented by $h^{(0)}$ , $h^{(1)}_j$ , and $h^{(2)}_{k\\ell }$ .", "The matrix given by $h^{(2)}_{k\\ell }$ is symmetric.", "We are characterizing the transfer function by estimating the kernel coefficients in Eq.", "(REF ).", "The number $M$ of the to-be-estimated coefficients scales quadratically with the memory length $R$ (in general, the number of coefficients scales with $R^P$ ).", "Although the Volterra estimation can be extended to any higher order $P>2$ , we will focus in this work on the quadratic case.", "For the estimation process, we assume that we are provided with a training data set consisting of 
input-output pulse pairs $(x(t),y(t))$ from an experimental device (or a sequence of devices) which causes the distortion.", "Next, we discuss how, given the training data, we can estimate the kernel coefficients in Eq.", "(REF ) by minimizing some error measure (such as the mean square error) between the modeled output and the measured output." ], [ "Truncated Volterra series via least squares", "We can choose from different methods to estimate the Volterra series.", "The most widely used ones are the cross-correlation method of Lee and Schetzen [69] and the exact orthogonal method of Korenberg [70].", "We choose the latter due to its simplicity and because it does not require an infinite-length input.", "We can write Eq.", "(REF ) as $y_n=\\sum _{m=0}^{M-1}u_{nm}\\, k_m$ or equivalently as the matrix equation $Y = U K$ or $\\begin{bmatrix}y_0\\\\y_1\\\\\\vdots \\\\y_{N-1}\\end{bmatrix}=\\begin{bmatrix}u_{00} & u_{01} & \\cdots & u_{0,M-1}\\\\u_{10} & u_{11} & \\cdots & u_{1,M-1}\\\\\\vdots & \\vdots & & \\vdots \\\\u_{N-1,0} & u_{N-1,1} & \\cdots & u_{N-1,M-1}\\end{bmatrix}\\begin{bmatrix}k_0\\\\k_1\\\\\\vdots \\\\k_{M-1}\\end{bmatrix},$ where $K$ is defined in Eq.", "(REF ) below.", "We follow the convention that the entries of a given matrix (or vector) $D$ are represented by $d_{ij}$ (or $ d_i$ ).", "Here, $n\\in \\lbrace 0,\\ldots ,N{-}1\\rbrace $ and $m\\in \\lbrace 0,\\ldots ,M{-}1\\rbrace $ where $M= 1{+} R {+} R(R{+}1)/2$ denotes the number of coefficients that need to be estimated to describe the quadratic Volterra series.", "In particular, $u_{nm}$ are obtained from the input pulses via (recall again $x_q=0$ for $q<0$ ) $u_{nm} ={\\left\\lbrace \\begin{array}{ll}1& \\text{for }\\; m=0,\\\\x_{n{-}m{+}1} & \\text{for }\\; m\\in \\lbrace 1,\\ldots ,R\\rbrace ,\\\\{x_{n{-}a}\\, x_{n{-}b}} & \\text{for }\\; m\\in \\lbrace R{+}1,\\ldots ,M{-}1\\rbrace ,\\end{array}\\right. }$ where $(a,b)$ with $0\\le a \\le b \\le R{-}1$ is the $(m{-}R{-}1)$ th element in the lexicographically ordered sequence from $(0,0)$ to $(R{-}1,R{-}1)$ .", "As the quadratic distortion coefficients $h^{(2)}_{k\\ell }$ are symmetric, only the upper (or lower) triangular entries need to be considered.", "The column vector $K=[h^{(0)},h^{(1)}_0,\\ldots ,h^{(1)}_{R-1}, h^{(2)}_{00},\\ldots ,h^{(2)}_{R-1,R-1}]^T$ consists of all the Volterra kernels, where $k_m = h^{(2)}_{ab}$ for $R{+}1 \\le m \\le M{-}1$ and $(a,b)$ is chosen as above.", "The example of $R=2$ , $L=3$ , $N=L+R-1=4$ , and $M=6$ results in (with $x_q=0$ for $q<0$ ) $\\begin{bmatrix}y_0\\\\y_1\\\\y_2\\\\y_3\\end{bmatrix}=\\begin{bmatrix}1 & x_0 & x_{-1} & x_0x_0 & x_0x_{-1} & x_{-1}x_{-1}\\\\1 & x_1 & x_0 & x_1x_1 & x_1x_0 & x_0x_0\\\\1 & x_2 & x_1 & x_2x_2 & x_2x_1 & x_1x_1\\\\1 & x_3 & x_2 & x_3x_3 & x_3x_2 & x_2x_2\\end{bmatrix}\\begin{bmatrix}h^{(0)}\\\\h^{(1)}_0\\\\h^{(1)}_1\\\\h^{(2)}_{00}\\\\h^{(2)}_{01}\\\\h^{(2)}_{11}\\end{bmatrix}.$ For the estimation of the distortions, we need to determine the values of $K$ by solving the matrix equation (REF ) with the method of least squares.", "We assume now that the output data vector $Y$ has been measured in an experimental setup.", "We can also concatenate multiple output pulses into a single vector to form $Y$ , which allows us to perform the estimation using multiple short pulses with different characteristics as compared to a single long pulse.", "This provides the freedom of choosing the format for our training data while observing experimental constraints.", "In addition to taking a single long pulse or a set of short pulses, we can also repeatedly use the same set of pulses to reduce the measurement error.", "As the matrix $U$ contains higher-order terms of the input $x_n$ , different columns of $U$ are highly correlated with each other.", "This leads to the problem of solving a linear regression model with a correlated basis set, i.e., the input variables are dependent on each other.", "The precision of the estimation is adversely
affected and less robust when naively applying the method of least squares to solve the matrix equation (REF ).", "We resolve this problem by first orthogonalizing the columns of the matrix $U$ .", "The orthogonalization transforms the input variables (stacked in columns of $U$ ) such that they are independent of each other.", "After orthogonalizing $U$ to $V$ , Eq.", "(REF ) is transformed to $Y=VW.$ Now we can solve the modified matrix equation (REF ) using the method of least squares to robustly obtain the values of the vector $W$ .", "Finally, if the Gram-Schmidt method is used for orthogonalization, then one can convert $W$ to $K$ by recursive methods (as explained in [70]) to extract the Volterra kernels $h^{(0)}$ , $h^{(1)}_{j}$ , and $h^{(2)}_{k\\ell }$ .", "In this work, we use the QR factorization method which directly provides the values for $K$ [71], [72]." ], [ "Gradient of the input response function", "Assuming that we have successfully estimated the transfer function, we want to include this information in our gradient-based optimization.", "This would allow us to also go beyond the piecewise-constant control basis of GRAPE by including arbitrarily deformed controls, generalizing further along the lines of Ref. [7].", "We provide now an analytic expression for the corresponding gradient (i.e.", "Jacobian) to build upon the earlier work discussed in Appendix .", "We apply the commutativity of the convolution (i.e.", "$f*g=g*f$ ), e.g., by changing the integration variable from $\\tau $ to $z=t-\\tau $ in Eq.", "(REF ).", "Using a slight generalization, Eq.", "(REF ) can be rewritten as $y_n = h^{(0)} + \\sum _{j=0}^{L-1} h^{(1)}_{n{-}j}\\, x_j+\\sum _{k,\\ell =0}^{L-1} h^{(2)}_{n{-}k,n{-}\\ell }\\, x_{k}\\, x_{\\ell },$ where the upper summation bound $L{-}1$ differs from $R{-}1$ in Eq.", "(REF ), i.e.", "integrating over the length of the input instead of the length of the kernel.", "Note that using Eq.", "(REF ) for the estimation in Secs.", "REF -REF would require a number of coefficients given by $N\\times M$ instead of only $M$ and is therefore not recommended.", "From Eq.", "(REF ), we specify for each time step (indexed by $n$ ) a scalar $K^{(0)}=h^{(0)}$ , a column vector $K^{(1)}_n$ with entries $[K^{(1)}_n]_j=h^{(1)}_{n{-}j}$ , and a matrix $K^{(2)}_n$ with entries $[K^{(2)}_n]_{k\\ell }=h^{(2)}_{n{-}k,n{-}\\ell }$ for $j, k,\\ell \\in \\lbrace 0,\\ldots ,L{-}1\\rbrace $ .", "With this notation, we can write Eq.", "(REF ) as a matrix equation $y_n=K^{(0)} + {X}^T K^{(1)}_n+{X}^T K^{(2)}_n {X},$ where the column vector $X=(x_0,\\ldots ,x_{L-1})^T$ has length $L$ .", "The corresponding partial derivatives are given by $\\frac{\\partial {y_n}}{\\partial {X}}= K^{(1)}_n+ {X}^T [K^{(2)}_n+(K^{(2)}_n)^T],$ which simplifies for a symmetric quadratic kernel to $\\frac{\\partial {y_n}}{\\partial {X}}= K^{(1)}_n+ 2{X}^T K^{(2)}_n.$ We can calculate $\\partial {y_n}/\\partial {X}$ for all $n$ and then determine the Jacobian.", "Eventually, the gradient of the cost function (REF ) is obtained using the chain rule as, e.g., in [7] and as discussed in Appendix ."
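As a concrete illustration of the estimation procedure above, the following Python/NumPy sketch builds the quadratic Volterra design matrix $U$, distorts a random training pulse with toy kernels, and recovers all kernel coefficients by a least-squares solve on a QR-orthogonalized basis. This is a minimal sketch, not the implementation used in the paper: the memory length, pulse length, and kernel values are illustrative assumptions, and the QR factorization stands in for the orthogonalization step discussed above.

    import numpy as np

    R = 4                                  # assumed memory length of the distortion

    def lag(x, n):
        # [x_n, x_{n-1}, ..., x_{n-R+1}] with x_q = 0 outside the pulse
        return np.array([x[n - j] if 0 <= n - j < len(x) else 0.0 for j in range(R)])

    def volterra_output(x, h0, h1, h2):
        # discretized second-order Volterra series y_n
        N = len(x) + R - 1
        return np.array([h0 + h1 @ lag(x, n) + lag(x, n) @ h2 @ lag(x, n)
                         for n in range(N)])

    def design_matrix(x):
        # rows u_{nm}: constant, R linear lags, and products x_{n-a} x_{n-b}, a <= b
        pairs = [(a, b) for a in range(R) for b in range(a, R)]
        N = len(x) + R - 1
        U = np.zeros((N, 1 + R + len(pairs)))
        for n in range(N):
            v = lag(x, n)
            U[n, 0] = 1.0
            U[n, 1:1 + R] = v
            U[n, 1 + R:] = [v[a] * v[b] for (a, b) in pairs]
        return U

    rng = np.random.default_rng(0)
    x = rng.standard_normal(400)           # spectrally rich random training pulse
    h1_true = np.exp(-np.arange(R))        # toy linear kernel (an assumption)
    h2_true = 1e-3 * np.ones((R, R))       # toy symmetric quadratic kernel
    y = volterra_output(x, 0.1, h1_true, h2_true)

    U = design_matrix(x)
    Q, Rf = np.linalg.qr(U)                # orthogonalize the correlated basis
    K = np.linalg.solve(Rf, Q.T @ y)       # least-squares estimate of all kernels
    print(K[0])                            # recovers h^(0) = 0.1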
], [ "Non-linear distortions during Rydberg excitations", "We illustrate our scenario of non-linear distortions during controlled quantum dynamics with robust state-to-state transfers in a single Rydberg atom experiment.", "In recent years, Rydberg atoms have been proven to be a promising platform for quantum simulation [73] and quantum computation [74].", "One of the most distinctive features of these atoms in quantum experiments is their strong and tunable dipole-dipole interactions [75], [76].", "For larger Rydberg atom arrays as for quantum simulators, excitation protocols (and more general operations) from the ground state to the Rydberg state are crucial.", "We consider a gradient-based optimization of control pulses (without feedback) for tailored excitation pulses as outlined in Sec.", ", (see also Sec.", "and Appendix ).", "Figure: (a) Path of the input control pulsefrom the computer code via an arbitrary waveform generator (AWG)and an acousto-optic modulator (AOM) to the atom.", "Before the atom,the output control pulse can be measured using a photodiode (PD).", "(b) Three-level excitation scheme for a single Rydberg atom (see text).The Lindblad master equation for the time evolution of the system is given by Eq.", "(REF ).", "Following [77], the model Hamiltonian for a single Rydberg atom is equal to H(t) = b(t) |gp|+|pg|2+ r(t) |pr|+|rp|2 - |pp| - |rr|.", "The Rabi frequency $\\Omega _b(t)$ of the blue laser excites the atom from the ground state $|g\\rangle $ to the intermediate state $|p\\rangle $ and the Rabi frequency $\\Omega _r(t)$ excites the atom from $|p\\rangle $ to the desired Rydberg state $|r\\rangle $ (see Fig.", "REF (b)).", "In terms of Eq.", "(REF ), $\\Omega _b(t)$ and $\\Omega _r(t)$ constitute the time-dependent control pulses.", "Moreover, $\\Delta $ and $\\delta $ are the single-photon and the two-photon resonance detuning, which will be for simplicity assumed to be zero ($\\Delta =0$ MHz and $\\delta =0$ MHz).", "The Lindblad operator [78] reads as [77] ${\\mathcal {L}(\\rho ) =\\sum \\limits _{j\\in \\lbrace d,g,g^{\\prime }\\rbrace }(V_j \\rho V_j^{\\dagger }) -\\tfrac{1}{2}(V_j^{\\dagger }V_j \\rho + \\rho V_j^{\\dagger } V_j)}$ where $V_g = \\sqrt{\\Gamma _{g}} |g\\rangle \\langle p|$ , $V_{g^{\\prime }}= \\sqrt{\\Gamma _{g^{\\prime }}} |g^{\\prime }\\rangle \\langle p|$ , and $V_d = \\sqrt{\\Gamma _{d}} |r\\rangle \\langle r|$ are the Kraus operators.", "Here, $\\Gamma _{g}={\\Gamma }/{3}$ and $\\Gamma _{g^{\\prime }}={2\\Gamma }/{3}$ denote the probability for spontaneous emission from $|p\\rangle $ to the ground state $|g\\rangle $ or to $|g^{\\prime }\\rangle $ which represents all other ground-state sublevels.", "Realistic experimental parameters $\\Gamma =2\\pi \\times 1.41$ MHz and $\\Gamma _d=2\\pi \\times 0.043$ MHz have been provided by the Browaeys group, where $\\Gamma _d$ is the Doppler effect.", "In a real experiment, the gradients of the controls are restricted due to bandwidth limitations.", "In particular, the controls cannot have derivatives larger than a certain rise speed given by the experimental setup.", "In our simulations, we take realistic values for the rise times of $0.1\\mu s$ and $0.15 \\mu s$ for the red and blue laser pulses respectively (which translate into rise speeds).", "Let us now discuss how systematic distortions can be introduced in this experimental platform during the processing and forwarding of the control signals which finally act on the atom(s).", "The path of the control signals is sketched in Fig.", "REF 
(a).", "Starting from some computer program, the input pulse (modulated with a fixed carrier frequency) is passed through an arbitrary waveform generator (AWG) to produce the radio-frequency pulse.", "This pulse is then used as an input for an acousto-optic modulator (AOM) which modulates the intensity of a laser beam.", "The final laser pulse is then applied to the atom(s) to perform the excitation.", "The AOM can shape pulses using optical effects such as dispersion [79], [80].", "In this experimental setup, one can measure the laser signal before it acts on the atom(s) using a photo diode.", "In summary, one can choose the input pulse and measure the output pulse; multiple measured input-output pulse pairs serve as training data, which is used to determine systematic distortions.", "In our simulation, we excite the Rydberg atom using the system Hamiltonian from Eq.", "() by applying our optimized input control pulses.", "After that, we introduce quadratic distortions to the control pulses and repeat the simulation.", "The discrete linear and quadratic distortions are prepared from Gaussian distributions described by h1(t) = 12[-(t-)2212], h2(t1,t2) = J[-(t1-1)2+(t2-2)2222].", "The memory length of the discretized dimensionless distortion is $R$ .", "For the distortions A, B, and C, we have chosen $R=50$ , standard deviations $\\sigma _1$ of 1, 6, and 11, and $\\sigma _2$ of $4.25$ , $6.37$ , and $8.50$ .", "Similarly, for the distortions D, E, and F, we have varied $R$ between 20, 40, and 60 while fixing $\\sigma _{1}=1$ and $\\sigma _{2}=4.25$ .", "The amplitude term $J$ has been kept constant at $5 \\times 10^{-6}$ in all cases.", "The example distortion C is shown in Figs.", "REF (a1) and REF (b1).", "Throughout this work, the zeroth order kernel is set to $h^{0}=0.1$ .", "We observe optimized controls with a simulated Rydberg excitation error in the range from $0.06$ to $0.008$ for different pulse lengths (see Fig.", "REF ).", "As expected, longer total durations for the excitation lead to smaller simulated errors.", "But longer pulse durations might lead to further decoherence effects in the experimental implementation (particularly when combined with additional experimental steps).", "We, therefore, aim at reducing the length of the pulses (e.g.", "to a pulse duration around $0.3\\mu s$ ) with reduced excitation errors.", "In Fig.", "REF (a), we notice a uniform increase in the error magnitude when we increase the standard deviation of the Gaussian kernels of Eqs.", "()-() for the distortions A to C. 
The standard deviation is kept constant in Fig.", "REF (b), but we increase the memory length for the distortions D to F, which also results in a larger excitation error.", "The increased excitation errors suggest that optimized control pulses would be susceptible to distortions when applied in the Rydberg atom experiments (and particularly for short pulse lengths).", "In Sec.", ", we present estimation results building on Sec.", "for the considered types of distortions.", "Figure: Reduced excitation efficiencies of optimized control pulses due to non-linear distortions in a simulated single Rydberg atom for distortions with (a) an increasing variance but constant memory length (A-C) and (b) a constant variance but increasing memory length (D-F); refer to Sec.", ".", "Figure: Estimation of both the linear and quadratic components for a non-linear distortion: (a) The linear component (a1) of the distortion C is compared with its estimated value (a2).", "The amplitude and time steps are dimensionless.", "(a3) The mean absolute scaled error [as defined in Eq.", "()] between the actual and the estimated values is calculated for various types of distortions A-F [see Eqs. ()–()].", "(b) Quadratic component similar to (a)." ], [ "Numerical estimation results", "We report in this section on different simulated estimation results which describe the characteristics and precision of applying the Truncated Volterra series method while also comparing multiple types of input control pulses used in the estimation.", "We also perform the optimization for a single Rydberg excitation again by including the distortions in the algorithm.", "In each analysis, the estimated results are compared with the actual ones using the mean absolute scaled error (MASE) measure $\\text{MASE}=\\frac{1}{N}\\sum _{i=1}^{N}\\left| \\frac{z^{\\text{true}}_i}{|| z^{\\text{true}} ||}-\\frac{z^{\\text{est}}_i}{|| z^{\\text{est}} ||} \\right|$ where $z^{\\text{true}}$ is the actual value, $z^{\\text{est}}$ is the estimated value, and $|| z ||$ is the Frobenius norm of the observable $z$ of length $N$ .", "The MASE is numerically more stable compared to the mean relative error, which can be very large when the measured and the actual values are very small; a direct transcription of this measure into code is given in the sketch below." ], [ "Estimation of distortions", "We start with the results presented in Fig.", "REF where numerical distortions are estimated by relying on a single randomly generated control pulse with 4000 time steps.", "We apply different distortions to the pulse and employ the resulting input-output pulse pairs in the estimation.", "In order to provide a more realistic analysis, we add an additional noise term to the output pulse $y_{\\text{noise}}=y_{\\text{output}}+ \\tfrac{1}{\\sigma \\sqrt{2\\pi }}\\exp [{-\\tfrac{(t{-}\\mu )^2}{2\\sigma ^2}}],$ where the noise is drawn from a normal distribution with mean $\\mu =0$ and standard deviation $\\sigma =10^{-4}$ .", "Figures REF (a1)-(b1) display the linear and quadratic contribution of the distortion C.
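The MASE translates directly into code; a minimal sketch, with z_true and z_est arrays of equal length:

    import numpy as np

    def mase(z_true, z_est):
        z_true = np.asarray(z_true, dtype=float)
        z_est = np.asarray(z_est, dtype=float)
        # normalize each signal by its (Frobenius) norm before comparing
        return np.mean(np.abs(z_true / np.linalg.norm(z_true)
                              - z_est / np.linalg.norm(z_est)))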
The corresponding estimated contributions are shown in Figs.", "REF (a2)-(b2), which closely match the values in Figs.", "REF (a1)-(b1).", "The results also emphasize that precisely knowing the memory length of the distortion (which is here $R=50$ ) is not required as redundant coefficients are automatically set to zero during the estimation for a sufficiently large $R$ (here set to 60).", "The estimation process has been repeated for multiple distortions of type A to F and we observe in Figs.", "REF (a3)-(b3) low estimation errors of approximately $10^{-7}$ to $10^{-8}$ .", "We now also compare the estimation method of Sec.", "with a linear estimation method in the time domain which relies on a linear impulse response [cf.", "Eq.", "(REF )].", "We omit here the very similar linear estimation in the frequency domain.", "We again use the distortion types A to F from Sec.", "for this comparison and apply them to a random-noise pulse of 4000 steps to obtain input-output pulse pairs for the estimation.", "Figure REF (a) shows the effect of the true and estimated distortion C when applied to an example pulse of $0.4\\,\\mu s$ duration.", "The example pulse is stretched under the distortion to a final duration of $0.65\\,\\mu s$ .", "The linear estimation is considerably less precise when compared to the quadratic estimation.", "This effect is confirmed in Fig.", "REF (b) which plots the estimation errors for the different distortion types A-F.", "Naturally, this also validates that the chosen distortion types contain some non-linearity which is not accounted for by a linear estimation." ], [ "Orthogonalization", "One important step of the estimation method is orthogonalization, and we have discussed its significance in Sec. .", "To further highlight the benefits of orthogonalizing the basis functionals, we test the estimation by directly solving the matrix equation ${U^TY=U^TUK},$ where $U$ is the matrix of the non-orthogonalized and correlated basis functionals, $K$ is the to-be-estimated vector of linear or non-linear kernel coefficients and $Y$ is the measured output vector.", "We compare the results with coefficients we get from solving the matrix equation (REF ) with the orthogonalized basis set.", "In this analysis, along with the benefit of orthogonalization, we also demonstrate how the estimation depends on the number $M$ of the to-be-estimated coefficients for the distortion, the amount of training data, and the presence of noise in the output pulse.", "Figures REF (a)-(b) discuss the case without added noise.", "The non-linear distortion with $\\sigma _{1}=0.1$ , $\\sigma _{2}=0.42$ , and $R=5$ is estimated using spline input pulses as the training data.", "Each test and training pulse has 500 time steps and a unique frequency.", "For a fixed number of spline pulses, we observe in Fig.", "REF (a) an increasing estimation error for an increasing number of coefficients $M$ (or memory length $R$ as $M\\propto {R^2}$ ).", "For each $M$ , we apply the estimation results to 50 different spline pulses which serve as test data.", "The corresponding mean error is plotted as a line and the $95\\%$ confidence interval is shown as a shaded region around the mean.", "Figure: Comparison of the simulated estimation of non-linear distortions without and with orthogonalization, solving respectively Eq.", "() and (): (a) Using a fixed number of noiseless training data for spline input pulses, the relative error rises with an increasing number of coefficients $M$ .", "The plotted line shows the mean error and the
shaded area indicates the spread of the $95\\%$ confidence interval found from applying the estimation results to 50 different test pulses.", "Orthogonalization is advantageous for a larger number of coefficients.", "(b) Average estimation errors for different training data sets (see text) highlight the importance of increasing the frequency content of the available data.", "The averaging is performed over the full range of all numbers of coefficients $M$ in (a).", "(c)-(d) As in (a)-(b), but the added noise in the output pulses of the data requires a higher frequency content for comparable error rates.", "In Fig.", "REF (b), we gradually increase the number of training pulses used in the estimation.", "For each fixed number of pulses, we perform the estimation on all the values of $M$ as shown in Fig.", "REF (a).", "Hence each point in Fig.", "REF (b) is averaged over 500 results.", "In all cases, the estimation benefits from being performed with orthogonalization.", "Also, extending the amount of training data points by adding more spline pulses with different frequencies improves the estimation precision as seen in Fig.", "REF (b).", "For Figs.", "REF (c)-(d), in the presence of a noise term in the output pulse with a standard deviation of $10^{-9}$ , we observe higher estimation errors which need to be compensated with additional training data points.", "One can also reduce correlations present in the training data by considering a random input pulse, as its autocorrelation is zero.", "However, even a completely random input pulse results in correlations in $U$ from Eqs.", "(REF ) and (REF ), which contains various non-linear terms of the same input vector [67].", "In summary, Fig.", "REF illustrates the positive effect of orthogonalization on the error rates in the estimation of the distortion.", "Figure: Simulated estimation errors of distortions with multiple types of data: (a) training data with more frequency content (such as random-noise pulses) performs better, even as the number of to-be-estimated coefficients $M$ increases.", "(b) A lower error can be achieved by increasing the frequency content of the training data.", "For cosine and Gaussian pulses, the frequency content is increased by adding more pulses with different frequencies, whereas the number of knots is increased within a single pulse for splines.", "Spectrally rich random-noise pulses are highly effective while keeping the data requirements low.", "(c)-(d) Similar to (a) and (b), but noisy training data increases the overall error, while random-noise pulses are the most robust.", "The estimation setup is similar to Fig. ."
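The conditioning issue underlying this comparison can be probed in a few lines: forming the normal equations squares the condition number of the already correlated design matrix, which is what degrades the naive least-squares solve. The sketch below uses an illustrative random pulse and a compact quadratic design matrix; it is not the paper's exact setup.

    import numpy as np

    R = 6
    rng = np.random.default_rng(1)
    x = rng.standard_normal(300)
    L, N = len(x), len(x) + R - 1

    def lag(n):
        return np.array([x[n - j] if 0 <= n - j < L else 0.0 for j in range(R)])

    pairs = [(a, b) for a in range(R) for b in range(a, R)]
    U = np.array([np.concatenate(([1.0], lag(n),
                                  [lag(n)[a] * lag(n)[b] for (a, b) in pairs]))
                  for n in range(N)])

    print("cond(U)    :", np.linalg.cond(U))
    print("cond(U^T U):", np.linalg.cond(U.T @ U))   # roughly cond(U)**2

    K_true = rng.standard_normal(U.shape[1])
    Y = U @ K_true
    K_normal = np.linalg.solve(U.T @ U, U.T @ Y)     # normal equations
    Q, Rf = np.linalg.qr(U)                          # orthogonalized basis
    K_qr = np.linalg.solve(Rf, Q.T @ Y)
    print("max error (normal):", np.abs(K_normal - K_true).max())
    print("max error (QR)    :", np.abs(K_qr - K_true).max())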
], [ "Frequency requirements", "We investigate different types of training data and their performance in the estimation following the setup of Fig.", "REF .", "We can order different training data types according to their increasing frequency content, with Gaussian pulses having the minimum frequency and random-noise pulses having the maximum.", "Here, the frequency content describes the spectral content of the training data while its value depends on the type of pulses used (see Fig.", "REF (b) and (d).", "There are different errors for spline and cosine pulses depending on the amount of data.", "For a fixed number of pulses, the estimation error grows with an increasing number of coefficients $M$ [see Fig.", "REF (a)].", "Gaussian input pulses are most strongly affected by this, while this effect is essentially negligible in the case of random-noise pulses.", "This illustrates the importance of spectrally rich input training pulses, which is further emphasized in Fig.", "REF (b) where the estimation error is plotted, relative to the frequency content.", "For different types of input pulses, the frequency content is increased differently: we add more pulses with different standard deviations for Gaussian pulses, we add more pulses with different frequencies for cosine pulses, we add more random knots to a single spline.", "Since a random-noise pulse has a very large bandwidth, we aim at increasing the frequency content by increasing the number of random-noise pulses which only slightly reduces the estimation error.", "Figure REF (b) highlights that the frequency content is crucial for the estimation and even a single random-noise pulse is highly effective due to its high-frequency content.", "Splines start to outperform the cosine pulses as soon as they attain higher frequency content than the latter.", "Similar conclusions hold under noise as shown in Fig.", "REF (c)-(d) while the overall estimation error increases for the different input pulse types when compared to the noiseless case.", "The data suggests that a high-frequency content in the training pulses might prevent overfitting noise, which is important when working with real experimental data.", "Also under noise, random-noise input pulses are most effective in the estimation due to their high frequency content." 
], [ "Application in optimal control", "Starting from early developments in the field, various theoretical and experimental aspects of quantum control have been discussed in the recent review [20].", "The overall aim of quantum control is to shape a set of external field pulses that drive a quantum system and perform a given quantum process efficiently.", "While the analytical way of finding the control parameters works for special cases, one can use highly developed numerical tools in the context of optimal control theory.", "One solves the Schrödinger or master equation iteratively and produces pulse shapes that perform the desired time evolution.", "Quantum optimal control is broadly divided into at least the two categories of open-loop and closed-loop.", "Open-loop methods can be gradient-based or not.", "Open-loop control is based on the available information about the Hamiltonian of the system and hence it suffers when the system parameters are not completely known such as in the case of an engineered quantum system (such as solid state systems) or when the model cannot be solved precisely as in the case of many-body dynamics [81].", "These limitations might be overcome by means of closed-loop optimal control where the control parameters are updated based on the earlier measurements results [61], [62].", "Closed-loop quantum optimal control can be implemented via both gradient-based and gradient-free algorithms [82], [83], [84].", "In some cases, hybrid approaches have also been suggested [85].", "But in the case where the system Hamiltonian is well known, open-loop control provides more freedom to precisely tune the controls depending on experimental constraints and generally explore a wider range of control solutions.", "Moreover, it also gives a better understanding of the system and works well with systems where fast measurements are not feasible or very noisy, in contrast to closed-loop methods which may require many measurements to converge.", "Figure: Reduced excitation errors after correcting for the distortion with agradient-based optimization relying on the trust-region method.", "(a) distortion C: significant reduced errors, (d) distortion F: mostly recoversthe ideal case.To take full advantage of the open-loop control method and to provide more robust pulses, one can also characterize the experimental system completely or at least partially.", "Here, we highlight how the estimation method from Sec.", "can be employed in an open-loop control setting to minimize the cost function $C$ in Eq.", "(REF ) by relying on the corresponding gradients as computed via Eqs.", "(REF ) and (REF ).", "We refer to Appendix  for details.", "This compensates for distortions and decreases the error.", "Figure REF shows test minimizations of the cost function using the trust-region constrained algorithm [86], which can perform constraint minimization with linear or non-linear constraints on the control pulses.", "Trust-region methods allow us to explicitly observe bandwidth limitations of the control hardware such as limited rise speeds as discussed in Sec.", "by enforcing the corresponding pulse constraints.", "Since distortions C and F defined by Eqs.", "() and (), have the strongest effects on the Rydberg excitation error (see Fig.", "REF ), we correct the control pulses affected by them in the simulations.", "We limit our test to pulses with shorter durations ranging from 0.1$\\mu s$ to 0.4$\\mu s$ as they are less susceptible to decoherence and hence might be more suitable for the 
excitation process.", "We compare the excitation error produced from the corrected pulse with the ideal and the distorted pulse excitation error.", "In particular, Figure REF shows that the effect of the distortion C can be significantly reduced, but it cannot be completely corrected due to a large standard deviation and long memory length in the distortion.", "The distortion F has a small standard deviation combined with a long memory length which still produces strong effects on the control pulse but with a generally weaker distortion.", "In this case, the effect of the distortion can be almost completely corrected.", "The estimation of transfer functions in order to correct for distortions has one additional benefit.", "The experimental hardware given by, e.g., AWGs and AOMs usually has bandwidth limitations which translate into limited rise speeds as discussed in Sec. .", "In the process of characterizing the experimental devices via their transfer function, we also estimate the effects of these bandwidth limitations.", "The estimated transfer function is then applied during the optimization, which mirrors the effects in the experimental platform and implicitly enforces limitations on the bandwidth or rise time.", "Assuming that the bandwidth-limiting effect of the estimated transfer function is pronounced enough, this allows us to use the limited memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm to perform the minimization of the cost function [52].", "L-BFGS usually offers a more efficient optimization but it cannot explicitly account for general linear or non-linear constraints.", "In the corresponding optimizations, we only enforce simple box constraints to limit the amplitude of the controls while using the extended L-BFGS or L-BFGS-B algorithm [87].", "The results are shown in Fig.", "REF where L-BFGS-B improves the excitation efficiency more effectively than the trust-region method (which needs to also explicitly enforce the constraints on the rise speeds).", "In summary, combining the estimation of distortions with gradient-based optimizations can often effectively compensate for these non-linear distortions during an open-loop optimization." 
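Both optimizers discussed above are available through scipy.optimize.minimize. The following schematic driver replaces the quantum simulation by a placeholder quadratic cost; the rise-speed limit and the amplitude bounds are assumed values chosen only to show how the constraints enter.

    import numpy as np
    from scipy.optimize import minimize, LinearConstraint, Bounds

    L = 50                                  # number of pulse samples
    u0 = np.zeros(L)
    target = np.sin(np.linspace(0, np.pi, L))

    def cost(u):                            # stand-in for the excitation error
        return np.sum((u - target) ** 2)

    def grad(u):
        return 2 * (u - target)

    # finite-difference matrix: (D u)_i = u_{i+1} - u_i (rise speed per step)
    D = np.eye(L, k=1)[:-1] - np.eye(L)[:-1]
    rise = LinearConstraint(D, -0.1, 0.1)   # assumed rise-speed limit

    res_tr = minimize(cost, u0, jac=grad, method="trust-constr",
                      constraints=[rise])
    res_lb = minimize(cost, u0, jac=grad, method="L-BFGS-B",
                      bounds=Bounds(-1.0, 1.0))   # box constraints only
    print(res_tr.fun, res_lb.fun)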
], [ "Conclusion", "We have proposed a method for estimating non-linear pulse distortions originating from experimental hardware.", "Hardware limitations affect the performance of optimal control pulses as highlighted using numerical data for single Rydberg atom excitations.", "In this case, the errors are increased for distorted control pulses beyond purely linear effects.", "We provide a general model for describing the complex characteristics of these non-linear effects.", "To incorporate estimated distortions into open-loop optimizations, we have detailed a formula to determine the Jacobian of the transfer function.", "Figure: Reduced excitation error for L-BFGS-B whencompared to the trust-region method, even thoughL-BFGS-B does not explicitly enforce constraints onthe control pulses (such as limited rise speeds).But the correctly estimated transfer functionwill implicitly account for pulse constraints.Also, L-BFGS-B is more effectivein the optimization.We tested and validated our proposed method by efficiently estimating different numerical quadratic distortions with varying strength and duration.", "We have also shown that linear estimation methods cannot effectively handle non-linear transfer functions.", "From our detailed analysis and tests, we deduce that the orthogonalization (as described in Sec.", "REF ) is key for a robust estimation.", "A robust least-squares estimation is effective only after the orthogonalization is applied to the matrix containing the training data as its correlated columns would otherwise interfere with the estimation.", "Another critical requirement for effectively performing the estimation is training data with enough frequency content.", "Large frequency content such as in random-noise pulses better captures the non-linear features of transfer functions, particularly in the presence of measurement noise.", "Since the estimation method is independent of any particular type of device characteristics, it can easily be adapted to a wide range of experimental platforms.", "Combining our estimation method with existing numerical optimization techniques can improve the quality and robustness of quantum operations.", "Our work thereby addresses a key challenge of enhancing the accuracy and robustness of experimental quantum technology platforms.", "The authors acknowledge funding from the EU H2020-FETFLAG-03-2018 under grant agreement No 817482 (PASQuanS).", "We also appreciate support from the German Federal Ministry of Education and Research through the funding program quantum technologies—from basic research to market under the project FermiQP, 13N15891.", "We would like to thank Antoine Browaeys, Daniel Barredo, Thierry Lahaye, Pascal Scholl, and Hannah Williams for the illuminating discussions about the Rydberg system as well as providing detailed experimental parameters.", "R.Z.", "would like to thank Jian Cui for initial discussions about the Rydberg setup." 
], [ "Optimization algorithm", "We work with open-loop optimal control and detail how to incorporate the Jacobian of the transfer function which can be determined following Sec.", "REF .", "In the mathematical statement of an optimal control problem, the fidelity function $C$ is minimized with regard to the control values $u_i$ .", "We can apply the gradient-based optimization technique known as GRAPE [49] which can also utilize Newton or quasi-Newton (BFGS) methods [52], [7], [88].", "We assume that the total control duration $T$ is divided into $L$ equal steps of duration $\\Delta t= T/L$ .", "During each time step, the control amplitudes $u_i$ are constant.", "The time evolution of the quantum system during the $j$ th time step is given by $U_j=\\exp {(-i\\Delta t(H_0+\\sum _{i}^{}{u_i(j)H_i}))}.$ The cost function can be written as $C=1-| \\langle \\rho _{t}|{{U_{L}\\cdots U_{1}\\rho _{i}U_{1}^{\\dag }\\cdots U_{L}^{\\dag }}}\\rangle |^2.$ From the inner product definition and invariance of the trace of a product under cyclic permutations of the factors, Eq.", "(REF ) can be rewritten as, $1-| \\!\\underbrace{\\langle U_{j+1}^{\\dag }\\cdots U_{L}^{\\dag }\\rho _{t}U_{L}\\cdots U_{j+1}|}_{\\lambda _j}\\underbrace{{U_{j}\\cdots U_{1}\\rho _{i}U_{1}^{\\dag }\\cdots U_{j}^{\\dag }}\\rangle }_{\\rho _j}\\!", "|^2.$ Here, $\\rho _j$ denotes the density operator at the $j$ th time step and $\\rho _{t}$ is the backward propagated target operator at the $j$ th time step.", "If we perturb $u_{i}(j)$ to $u_{i}(j)+\\delta u_{i}(j)$ , the derivative of $C$ is given in terms of the change in $U_j$ to the first order in $\\delta u_{i}(j)$ which is calculated by the Fréchet derivative method [89] using the Python package SciPy [90].", "In order to minimize $C$ , at every iteration of the algorithm, we update the controls by $u_{i}(j)\\rightarrow u_{i}(j) - \\epsilon \\frac{\\delta C}{\\delta u_{i}(j)},$ where $\\epsilon $ is a small unitless step matrix.", "Next, we follow the derivation in [7], where the product rule for gradient calculation is applied and one obtains $\\frac{\\delta C}{\\delta u_{i}(j)}=\\sum _{n=0}^{N-1}{\\frac{\\delta s_k(n)}{\\delta u_k{(j)}}\\frac{\\delta C}{\\delta s_{k}(n)}}\\;\\text{ where }\\; \\frac{\\delta s_k(n)}{\\delta u_k{(j)}}=T_{k}(n,j).$ Compared to (REF ), $u_k$ corresponds to the input pulse $x$ and $s_n$ corresponds to the output pulse $y$ .", "Hence we can calculate each column of $T_{k}$ from Eq.", "(REF ) as $T_{k}(n)= \\frac{\\delta y_{n}}{\\delta X},$ and insert $T_{k}$ into Eq.", "(REF ) to calculate the effective gradient." ] ]
2210.07833
[ [ "Computads and string diagrams for $n$-sesquicategories" ], [ "Abstract An $n$-sesquicategory is an $n$-globular set with strictly associative and unital composition and whiskering operations, which are however not required to satisfy the Godement interchange laws which hold in $n$-categories.", "In arXiv:2202.09293 we showed how these can be defined as algebras over a monad $T_n^{D^s}$ whose operations are simple string diagrams.", "In this paper, we give an explicit description of computads for the monad $T_n^{D^s}$ and we prove that the category of computads for this monad is a presheaf category.", "We use this to describe a string diagram notation for representing arbitrary composites in $n$-sesquicategories.", "This is a step towards a theory of string diagrams for semistrict $n$-categories." ], [ "[hang] 1.1ex" ], [ "[hang] 1.11ex matrix,arrows,decorations.pathmorphing allcolors=[rgb]0.1,0.1,0.4 M. AraújoM.", "AraújoString diagrams for $n$ -sesquicategoriesString diagrams for $n$ -sesquicategoriesManuel Araújo Department of Computer Science and Technology University of Cambridge United Kingdom [email protected] 0.5cm0.5cm Abstract.", "An $n$ -sesquicategory is an $n$ -globular set with strictly associative and unital composition and whiskering operations, which are however not required to satisfy the Godement interchange laws which hold in $n$ -categories.", "In we showed how these can be defined as algebras over a monad $T_n^{D^s}$ whose operations are simple string diagrams.", "In this paper, we give an explicit description of computads for the monad $T_n^{D^s}$ and we prove that the category of computads for this monad is a presheaf category.", "We use this to describe a string diagram notation for representing arbitrary composites in $n$ -sesquicategories.", "This is a step towards a theory of string diagrams for semistrict $n$ -categories.", "Keywords.", "String diagrams.", "Higher categories.", "Monads.", "Computads.", "Mathematics Subject Classification (2010).", "18N20, 18N30." 
], [ "Introduction", "The use of string diagram notation as a tool for representing composites in higher categories is becoming ever more widespread.", "This paper is part of a project which aims to give a definition of semistrict $n$ -category based on a purely algebraic/combinatorial notion of string diagram.", "In we defined a monad $T_n^{D^s}$ on the category of $n$ -globular sets, whose operations we call simple string diagrams.", "We give a generators and relations description of $T_n^{D^s}$ , which allows us to characterize its algebras, which we call $n$ -sesquicategories, as $n$ -globular sets equipped with strictly associative and unital composition and whiskering operations, which however do not satisfy the Godement interchange laws that hold in a strict $n$ -category.", "We think of simple string diagrams as analogous to the globular pasting diagrams used in the definition of the monad $T_n^{str}$ whose algebras are strict $n$ -categories ().", "In the present paper we study computads for the monad $T_n^{D^s}$ and show how morphisms in an $n$ -sesquicategory generated by a computad $C$ can be depicted as general $C$ -labelled string diagrams.", "We also prove that the category of computads for this monad is equivalent to the category of presheaves on a small category of computadic cell shapes.", "In future work, we will show how to add coherent weak interchange laws to get a notion of semistrict $n$ -category," ], [ "Results", "We now describe the main result in this paper.", "Denote by $\\operatorname{Comp}$ the category of $(n+1)$ -computads for $T_n^{D^s}$ , by $\\mathbb {1}$ the terminal computad and by $F(C)$ the $n$ -sesquicategory generated by a computad $C$ .", "Cells $c\\in \\mathbb {1}_k$ are called (computadic) $k$ -cell shapes and morphisms $d\\in F(\\mathbb {1})_k$ are called unlabelled $k$ -diagrams.", "A morphism $x\\in F(C)$ is said to have shape $d$ if its image in $F(\\mathbb {1})$ is $d$ .", "Given such $d$ , we construct a computad $\\hat{d}$ with the property that $d$ -shaped morphisms in a computad $D$ are in canonical bijection with maps $\\hat{d}\\rightarrow D$ .", "Applying this to cells, we define a small category $\\operatorname{Cell}$ whose objects are cell shapes, together with a fully faithful embedding $\\widehat{(-)}:\\operatorname{Cell}\\hookrightarrow \\operatorname{Comp}$ .", "From this we construct the nerve/realization adjunction $\\begin{tikzcd}{|-|:\\operatorname{Psh}(\\operatorname{Cell})} & \\operatorname{Comp}:N[\"\"{name=0, anchor=center, inner sep=0}, shift left=2, from=1-1, to=1-2][\"\"{name=1, anchor=center, inner sep=0}, shift left=2, from=1-2, to=1-1][\"\\dashv \"{anchor=center, rotate=-90}, draw=none, from=0, to=1]\\end{tikzcd}.$ Theorem 2.1 The adjunction $\\begin{tikzcd}{|-|:\\operatorname{Psh}(\\operatorname{Cell})} & \\operatorname{Comp}:N[\"\"{name=0, anchor=center, inner sep=0}, shift left=2, from=1-1, to=1-2][\"\"{name=1, anchor=center, inner sep=0}, shift left=2, from=1-2, to=1-1][\"\\dashv \"{anchor=center, rotate=-90}, draw=none, from=0, to=1]\\end{tikzcd}$ is an equivalence of categories.", "We now give an outline of the proof.", "In we showed that $T_n^{D^s}$ has a presentation with generators $\\mathcal {O}_n$ and relations $\\mathcal {E}_n$ .", "We can describe morphisms in the $n$ -sesquicategory generated by an $n$ -computad $C$ as equivalence classes of trees whose internal vertices are labelled by generators in $\\mathcal {O}_n$ and whose leaves are labelled by cells in $C$ .", "The equivalence relation is 
generated by the relations in $\\mathcal {E}_n$.", "We then prove that each of these trees has a unique canonical form in its equivalence class.", "This allows us to show that for an unlabelled diagram $d$ the category $\\operatorname{Comp}(d)$ of pairs $(C,x)$ , where $C$ is a computad and $x$ is a morphism of shape $d$ , has an initial object, which we denote $(\\hat{d},\\tilde{d})$ .", "This allows us to construct the nerve/realization adjunction as mentioned above and then the proof of the Theorem follows by formal arguments from the fact that $(\\hat{c},\\tilde{c})$ is initial, for $c\\in \\operatorname{Cell}$ .", "Having established this Theorem, we go on to give a description of the diagrammatic interpretation of morphisms in the $n$ -sesquicategory generated by $C$ as $C$ -labelled string diagrams." ], [ "Related work", "The string diagrammatic calculus for monoidal categories and bicategories is by now well established.", "Generalizations to Gray 3-categories also exist, in the theory of surface diagrams (, ).", "Recently there has been a lot of progress in extending this to higher dimensions, with the discovery of the theory of associative $n$ -categories (), later developed into the manifold diagrams of .", "These manifold diagrams have a combinatorial counterpart, which the authors of call trusses, which are in turn equivalent to the notion of zigzags introduced in and which form the basis for an online proof assistant for diagrammatic calculus in higher categories ().", "There are two main differences between the approach above and the one followed in this paper.", "The first is that the input of our theory is the simple combinatorial notion of simple string diagram introduced in , whereas manifold diagrams start from the geometry and obtain from that a combinatorial description, by passing to exit path posets.", "The second is that we want to produce an algebraic notion of semistrict $n$ -categories, by which we mean that these will be algebras over a certain monad on $n$ -globular sets.", "One advantage of the manifold diagrams approach to semistrict $n$ -categories is that all coherences are already encoded in the basic cell shapes, whereas we naturally produce a theory of $n$ -sesquicategories, to which we then have to add coherent weak interchange laws.", "The main advantage of our approach is its simplicity, as in a sense everything follows from the combinatorial notion of simple string diagrams introduced in .", "Most closely related to our work is .", "There the authors develop a framework which is the basis for another proof assistant for diagrammatic calculus in higher categories ().", "The authors have a notion of signature, which corresponds exactly to a computad for $T_n^{D^s}$ , and a notion of diagram over a signature, which corresponds exactly to a morphism in the $n$ -sesquicategory generated by a computad.", "In this sense, our work can also be seen as providing a mathematical foundation for the kinds of higher categorical structures implemented by this proof assistant.", "Our work is also related to questions in the general theory of computads for monads on globular sets.", "If one considers the monad $T_n^{str}$ whose algebras are strict $n$ -categories, then computads consist of presentations for strict $n$ -categories.", "Cells of dimension $k\\le n$ are generating $k$ -morphisms and $(n+1)$ -cells encode relations between $n$ -morphisms.", "The cells of the terminal $n$ -computad for $T_n^{str}$ are the most general $n$ -categorical cell shapes and the morphisms in the $n$ -category generated by it
can be thought of as general unlabelled pasting diagrams.", "One would then like to say that the category of computads for $T_n^{str}$ is a category of presheaves on the cell shapes, but this turns out to be false (, ), essentially because of the Eckmann-Hilton argument.", "This led to the question of finding conditions on monads or restrictions on allowable cells in the associated computads that guarantee that one obtains a presheaf category (,,).", "Our paper can also be seen as a continuation of this line of research, providing a monad on $n$ -globular sets which is related to $T_n^{str}$ and whose category of computads is a presheaf category.", "Finally, the motivation for developing this theory was to be able to use string diagrams to prove results about higher categories.", "In we develop a string diagram calculus for strict 4-categories and we use it to prove a result about fibrations of mapping 4-groupoids.", "In and we use this string diagram calculus to prove coherence results for adjunctions in 3- and 4-categories.", "In , we use a string diagram calculus for strict monoidal 3-categories to prove a coherence result for 3-dualizable objects in strict symmetric monoidal 3-categories." ], [ "Future work", "One can construct a monad $T_n^{ss}$ by adding invertible cells to $T_n^{D^s}$ connecting each pair of string diagrams that map to the same pasting diagram under the map of monads $T_n^{D^s}\\rightarrow T_n^{str}$ .", "This gives a notion of semistrict $n$ -category which comes with a string diagram calculus.", "This is the subject of ongoing research and we will explore it in future papers.", "We are also interested in finding finite descriptions of $T_n^{ss}$ .", "In an upcoming paper, we show how to construct $T_3^{ss}$ by adding a finite set of generators and relations to the monad $T_3^{D^s}$ .", "We then show that its algebras agree with Gray 3-categories.", "We are working on extending this to dimension 4.", "Once the definitions of semistrict 3- and 4-categories are in place, we can extend the coherence results for adjunctions of and to this setting.", "We will then put this together to extend the coherence result for 3-dualizable objects of to this setting.", "An extension of this result to the fully weak setting would give a finite presentation of the framed fully extended 3-dimensional bordism category, by the Cobordism Hypothesis (,,,)."
], [ "Background", "Denote by $\\operatorname{gSet}_n$ the category of $n$ -globular sets.", "Given a finitary monad $T:\\operatorname{gSet}_n\\rightarrow \\operatorname{gSet}_n$ one can define categories $\\operatorname{Comp}_k^{T}$ of computads for $T$ , for $k=0,\\cdots , n+1$ , together with adjunctions $\\begin{tikzcd}{F_k:\\operatorname{Comp}_k^T} & \\operatorname{Alg}_T:V_k.", "[\"\"{name=0, anchor=center, inner sep=0}, shift left=2, from=1-1, to=1-2][\"\"{name=1, anchor=center, inner sep=0}, shift left=2, from=1-2, to=1-1][\"\\dashv \"{anchor=center, rotate=-90}, draw=none, from=0, to=1]\\end{tikzcd}$ This is done inductively, by defining a $k$ -computad $C$ to be a tuple $(C_k,C_{\\le k-1},s,t)$ where $C_k$ is a set, which we call the set of $k$ -cells of $C$ , $C_{\\le k-1}$ is a $(k-1)$ -computad, and $s,t:C_k\\rightarrow F_{k-1}(C_{\\le k-1})_{k-1}$ satisfy the globularity relations $ss=st$ and $ts=tt$ .", "One then defines $F_k$ , for $k\\le n$ , by the pushout $\\begin{tikzcd}{T_n^{D^s}(C_{k}\\times \\partial \\theta ^{(k)})} & {T_n^{D^s}(C_{k}\\times \\theta ^{(k)})} \\\\{F_{k-1}(C_{\\le k-1})} & {F_{k}(C),}[from=1-1, to=2-1][from=1-1, to=1-2][from=1-2, to=2-2][from=2-1, to=2-2][\"\"{anchor=center, pos=0.050, rotate=180}, shift right=2, draw=none, from=2-2, to=1-1]\\end{tikzcd}$ where $\\theta ^{(k)}$ is the globular set represented by $k$ .", "For $k=n+1$ , we replace the inclusion $\\partial \\theta ^{(k)}\\hookrightarrow \\theta ^{(k)}$ by the collpase $\\partial \\theta ^{(n+1)}\\rightarrow \\theta ^{(n)}$ .", "Similarly, one defines $V_k$ by a pullback.", "See for a detailed exposition of this theory of computads (the original reference is ).", "Remark 3.1 There are incusion maps $\\operatorname{Comp}_k^T\\hookrightarrow \\operatorname{Comp}_{k+1}^T$ for $k\\le n$ , so we can think of $k$ -computads as $(n+1)$ -computads.", "For this reason, we sometimes write $\\operatorname{Comp}^T$ instead of $\\operatorname{Comp}_{n+1}^T$ and use the term computad to refer to an $(n+1)$ -computad.", "In we introduced a monad $T_n^{D^s}$ on globular sets, based on a notion of simple pasting diagrams and we defined an $n$ -sesquicategory as an algebra over this monad.", "Notation 3.2 We denote by $\\operatorname{Sesq}_n$ the category of $T_n^{D^s}$ -algebras.", "In we gave a presentation of $T_n^{D^s}$ by generators $\\mathcal {O}_n$ and relations $\\mathcal {E}_n$ .", "There is a generator $\\circ _{i,j}$ for each $i,j=1,\\cdots , n$ and a generator $u_i$ for each $i=1,\\cdots , n$ .", "Given an $n$ -sesquicategory $\\mathcal {C}$ , the generator $\\circ _{i,j}$ induces a map $\\circ _{i,j}^{\\mathcal {C}}:\\mathcal {C}_i\\times _{\\mathcal {C}_m}\\mathcal {C}_j\\rightarrow \\mathcal {C}_M$ , where $m=\\min \\lbrace i,j\\rbrace $ and $M=\\max \\lbrace i,j\\rbrace $ .", "We call this composition when $i=j$ and whiskering when $i\\ne j$ .", "The generator $u_i$ induces a map $u_i^{\\mathcal {C}}:\\mathcal {C}_{i-1}\\rightarrow \\mathcal {C}_i$ and we call $u_i^{\\mathcal {C}}(x)$ the identity on $x$ .", "The relations in $\\mathcal {E}_n$ essentially express the associativity and unitality of $\\circ _{i,j}$ .", "We also characterize $n$ -sesquicategories inductively as categories $\\mathcal {C}$ equipped with a lift of the $\\operatorname{Hom}$ functor ${ & \\operatorname{Sesq}_{n-1}@{}[d]|-{=}[rd]^{(-)_0} & \\\\\\mathcal {C}^{op}\\times \\mathcal {C}@{.>}[ru]^{\\underline{\\operatorname{Hom}}_\\mathcal {C}}[rr]_{\\operatorname{Hom}_\\mathcal {C}} & & \\operatorname{Set},}$ but this 
will not be relevant in the present paper.", "We now briefly review the generators and relations description of $T_n^{D^s}$ , which our description of computads in the present paper will build on.", "This discussion will be informal, see for details.", "Definition 3.3 Let $X$ be an $n$ -graded set.", "A $k$ -dimensional $(\\mathcal {O}_n,X)$ -labelled tree is a rooted tree $T$ , together with a labelling of its internal vertices by generators in $\\mathcal {O}_n$ ; a labelling of its leaves by elements in $X$ ; a bijection between the incoming edges at an internal vertex and the inputs of the associated generator; such that the root label has dimension $k$ ; the source of each incoming edge at an internal vertex has a label of the appropriate dimension.", "We denote the set of $k$ -dimensional $(\\mathcal {O}_n,X)$ -labelled trees by $\\operatorname{Tree}_n^{\\mathcal {O}}(X)(k)$ or $\\operatorname{Tree}_n^{\\mathcal {O}}(X)_k$ .", "When $X$ is an $n$ -globular set, we can define source and target maps $s,t:\\operatorname{Tree}_n^{\\mathcal {O}}(X)_k\\rightarrow \\operatorname{Tree}_n^{\\mathcal {O}}(X)_{k-1}$ , although in general they will not satisfy the globularity relation.", "Definition 3.4 An $n$ -preglobular set is an $n$ -graded set $X=\\coprod _{i=0}^nX_i$ equipped with source and target maps $s,t:X_k\\rightarrow X_{k-1}$ .", "A globular relation on $X$ is a relation $\\sim $ such that if $x\\sim \\tilde{x}$ then $s(x)\\sim s(\\tilde{x})$ and $t(x)\\sim t(\\tilde{x})$ ; $ss(x)\\sim st(x)$ and $ts(x)\\sim tt(x)$ .", "Note that this means the quotient $X/\\!\\sim $ is an $n$ -globular set.", "So given an $n$ -globular set $X$ , we have an $n$ -preglobular set $\\operatorname{Tree}_n^{\\mathcal {O}}(X)$ .", "Definition 3.5 We define an $n$ -preglobular subset $\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(X)\\subset \\operatorname{Tree}_n^{\\mathcal {O}}(X)$ of $\\stackrel{\\epsilon }{=}$ -compatible trees, equipped with a globular relation $\\stackrel{\\epsilon }{=}$ .", "The definition is by induction on height.", "The relation $\\stackrel{\\epsilon }{=}$ is generated by the relations in $\\mathcal {E}_n$ .", "A tree is $\\stackrel{\\epsilon }{=}$ -compatible if for every subtree of the form $x\\rightarrow \\circ _{i,j}\\leftarrow y$ we have $s^{i-m+1}(x)\\stackrel{\\epsilon }{=}t^{j-m+1}(y)$ , where $m=\\min \\lbrace i,j\\rbrace $ .", "Finally we define $\\widetilde{\\operatorname{Tree}}_n^{\\mathcal {O},\\mathcal {E}}(X):=\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(X)/\\!\\stackrel{\\epsilon }{=}$ and we show that this defines a monad on $n$ -globular sets.", "We construct a map of monads $\\varphi :\\widetilde{\\operatorname{Tree}}_n^{\\mathcal {O},\\mathcal {E}}\\rightarrow T_n^{D^s}$ .", "Each generator in $\\mathcal {O}_n$ is a simple string diagram, so one can use composition of simple string diagrams to produce this map.", "Theorem 3.6 () The map $\\varphi :\\widetilde{\\operatorname{Tree}}_n^{\\mathcal {O},\\mathcal {E}}\\rightarrow T_n^{D^s}$ is an isomorphism of monads."
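To illustrate Definitions 3.3 and 3.5, the following small example is our own (the cells $\\alpha ,\\beta ,f$ and the choice $n=2$ are hypothetical, not taken from the text): a 2-dimensional labelled tree whose root composes a whiskering with a further 2-cell.

```latex
% A 2-dimensional (O_2, X)-labelled tree (our illustration):
% leaves are 2-cells \alpha, \beta and a 1-cell f; internal vertices are
% the whiskering generator \circ_{2,1} and the composition \circ_{2,2},
% i.e. the subtree  \alpha -> \circ_{2,1} <- f  feeds into  \circ_{2,2} <- \beta.
% By Definition 3.5 (with i=2, j=1, m=1, resp. i=j=m=2) this tree is
% compatible precisely when
\[
  s^{2}(\alpha) \stackrel{\epsilon}{=} t(f)
  \qquad\text{and}\qquad
  s(\alpha \circ_{2,1} f) \stackrel{\epsilon}{=} t(\beta).
\]
```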
], [ "Computads for $T_n^{D^s}$", "We give an explicit description of computads for $T_n^{D^s}$ and of the $n$ -sesquicategories generated by them, which we will later show is equivalent to the notion described in the previous section.", "We will simply call them computads, leaving the monad $T_n^{D^s}$ implicit.", "Definition 4.1 Given $k\\le n+1$ , an $(n,k)$ -precomputad (or simply $k$ -precomputad, leaving $n$ implicit) $C$ consists of sets $C_i$ for $0\\le i\\le k$ , together with maps $s,t:C_i\\rightarrow \\operatorname{Tree}_n^{\\mathcal {O}}(C_{\\le i-1})_{i-1}$ for $1\\le i\\le k$ .", "Definition 4.2 Given a $k$ -precomputad $C$ , we define source and target maps $s,t:\\operatorname{Tree}_n^{\\mathcal {O}}(C)_{i}\\rightarrow \\operatorname{Tree}_n^{\\mathcal {O}}(C)_{i-1}$ , for $1\\le i\\le n$ .", "For trees of height zero, these are the maps $s,t:C_i\\rightarrow \\operatorname{Tree}_n^{\\mathcal {O}}(C)_{i-1}$ .", "For trees of nonzero height, we use the following inductive formulas for $s$ , where $j<i$ and $x$ and $y$ have appropriate dimensions in each case.", "The map $t$ is defined by the same formulas, replacing every instance of $s$ with $t$ .", "Table: NO_CAPTIONRemark 4.3 Since the source or target of a $k$ -cell may be an arbitrary $(\\mathcal {O}_n,C)$ -labelled tree, the source and target maps above can increase the height of trees.", "This is in contrast to the situation of , where we considered $\\operatorname{Tree}_n^{\\mathcal {O}}(X)$ for a globular set $X$ .", "This means that the arguments in which relied on induction on the height of the tree must now be replaced by simultaneous induction on both the height and the dimension of the tree.", "The following definitions refer to each other and should be interpreted by mutual induction.", "Definition 4.4 Given $k\\le n+1$ , an $(n,k)$ -computad (or simply $k$ -computad, leaving $n$ implicit) $C$ consists of sets $C_i$ for $0\\le i\\le k$ , together with maps $s,t:C_i\\rightarrow \\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C_{\\le i-1})_{i-1}$ for $1\\le i\\le k$ , such that $ss(x)\\stackrel{\\epsilon }{=} st(x)$ and $ts(x)\\stackrel{\\epsilon }{=} tt(x)$ for all $x\\in C_i$ .", "Terminology 4.5 A computad is an $(n,k)$-computad, where $n$ is usually implicit in the context and $k\\le n+1$ is arbitrary.", "The definition that follows is almost identical to the analogous one in .", "The only difference is the one explained in Remark REF .", "Definition 4.6 Let $C$ be a computad.", "For each $k$ , we define, by induction on $h$ , subsets $\\tau _{\\le h}\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_k\\subset \\tau _{\\le h}\\operatorname{Tree}_n^{\\mathcal {O}}(C)_k$ equipped with a relation $\\stackrel{\\epsilon }{=}_{h}$ .", "Elements in $\\tau _{\\le h}\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_k$ are called $\\stackrel{\\epsilon }{=}_{h-1}$ -compatible.", "We say that $x\\in \\operatorname{Tree}_n^{\\mathcal {O}}(C)_k$ is $\\stackrel{\\epsilon }{=}$ -compatible if it is $\\stackrel{\\epsilon }{=}_h$ -compatible for some $h$ and define $\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_k\\subset \\operatorname{Tree}_n^{\\mathcal {O}}(C)_k$ the set of $\\stackrel{\\epsilon }{=}$ -compatible elements.", "Finally, we define the relation $\\stackrel{\\epsilon }{=}$ on $\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_k$ by declaring $x\\stackrel{\\epsilon }{=}\\tilde{x}$ when $x\\stackrel{\\epsilon }{=}_h\\tilde{x}$ for some $h$ .", "The definition is by overall induction 
on $k$ and is presented below.", "When $h=0$ , we let $\\tau _{\\le 0}\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_k:=\\tau _{\\le 0}\\operatorname{Tree}_n^{\\mathcal {O}}(C)_k=C_k$ and the relation $\\stackrel{\\epsilon }{=}_0$ is $=$ .", "Now consider $h\\ge 1$ .", "Any $x\\in \\tau _{\\le h}\\operatorname{Tree}_n^{\\mathcal {O}}(C)_k$ of height zero is $\\stackrel{\\epsilon }{=}_{h-1}$ -compatible.", "Let $x\\in \\tau _{\\le h-1}\\operatorname{Tree}_n^{\\mathcal {O}}(C)_i$ , $y\\in \\tau _{\\le h-1}\\operatorname{Tree}_n^{\\mathcal {O}}(C)_j$ and $m=\\min \\lbrace i,j\\rbrace $ .", "Then the tree $x\\rightarrow \\circ _{i,j}\\leftarrow y$ is $\\stackrel{\\epsilon }{=}_{h-1}$ -compatible if and only if $x,y$ are $\\stackrel{\\epsilon }{=}_{h-2}$ -compatible and $s^{i-m+1}(x)\\stackrel{\\epsilon }{=}t^{j-m+1}(y)$ .", "Moreover, $x\\rightarrow u_{i+1}$ is $\\stackrel{\\epsilon }{=}_{h-1}$ -compatible if and only if $x$ is $\\stackrel{\\epsilon }{=}_{h-2}$ -compatible.", "Now we must define the globular relation $\\stackrel{\\epsilon }{=}_{h}$ on $\\tau _{\\le h}\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_k$ .", "If $x,y\\in \\tau _{\\le h}\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_k$ have height zero and $x\\stackrel{\\epsilon }{=}_{0}y$ , then $x\\stackrel{\\epsilon }{=}_{h}y$ .", "Let $i\\le k$ , $x\\in \\tau _{\\le h-2}\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_{i-1}$ , $y\\in \\tau _{\\le h-1}\\operatorname{Tree}_n^{\\mathcal {O},\\mathcal {E}}(C)_k$ .", "If $x\\stackrel{\\epsilon }{=} t^{k-i+1}(y)$ , then", "Table: String diagrams for $n$ -sesquicategories" ] ]
2210.07704
[ [ "Close the Gate: Detecting Backdoored Models in Federated Learning based\n on Client-Side Deep Layer Output Analysis" ], [ "Abstract Federated Learning (FL) is a scheme for collaboratively training Deep Neural Networks (DNNs) with multiple data sources from different clients.", "Instead of sharing the data, each client trains the model locally, resulting in improved privacy.", "However, recently so-called targeted poisoning attacks have been proposed that allow individual clients to inject a backdoor into the trained model.", "Existing defenses against these backdoor attacks either rely on techniques like Differential Privacy to mitigate the backdoor, or analyze the weights of the individual models and apply outlier detection methods that restricts these defenses to certain data distributions.", "However, adding noise to the models' parameters or excluding benign outliers might also reduce the accuracy of the collaboratively trained model.", "Additionally, allowing the server to inspect the clients' models creates a privacy risk due to existing knowledge extraction methods.", "We propose CrowdGuard, a model filtering defense, that mitigates backdoor attacks by leveraging the clients' data to analyze the individual models before the aggregation.", "To prevent data leaks, the server sends the individual models to secure enclaves, running in client-located Trusted Execution Environments.", "To effectively distinguish benign and poisoned models, even if the data of different clients are not independently and identically distributed (non-IID), we introduce a novel metric called HLBIM to analyze the outputs of the DNN's hidden layers.", "We show that the applied significance-based detection algorithm combined can effectively detect poisoned models, even in non-IID scenarios.", "We show in our extensive evaluation that CrowdGuard can effectively mitigate targeted poisoning attacks and achieve in various scenarios a True-Positive-Rate of 100% and a True-Negative-Rate of 100%." 
], [ "Introduction", "Federated Learning (FL) allows multiple clients to collaboratively train a Deep Neural Network (DNN) on their private data.", "In contrast to centralized learning approaches, in FL each client trains its own DNN locally and shares only the trained parameters of the model with an aggregation server [48].", "Thus, FL reduces concerns regarding the privacy of the clients' local data, as they never leave the respective client, being especially important in the context of legal restrictions and regulations [3], [1], [2].", "FL also offers performance gain, as computationally expensive training is parallelized and outsourced to the participating clients.", "As a result, FL has become a popular technology and is applied in various applications including image recognition [67], [71], [72], [73], e.g., between multiple hospitals [74], [28], [67], natural language processing (NLP), e.g., text prediction on smartphones [29], [49], personalization [13], or threat detection in IoT networks [57].", "Widely used real-world examples include FL application for word suggestion on smartphones [49] or for medical applications [71].", "However, outsourcing the training process to the individual clients made FL also vulnerable to targeted poisoning attacks, so-called backdoor attacks [6]: an adversary compromises a subset of the clients and lets them submit manipulated model updates, causing the aggregated model to show misbehavior at prediction time (also called inference) if the input sample for the DNN contains a certain adversary-defined trigger.", "Besides, the clients are forced to trust the server, since several attacks have been proposed that enable attackers to infer information about the training data from the trained parameters of a model [87], [43], [64], [76], [24], [81], [31], [56], [69], [76].", "Existing defenses against backdoor attacks can be mainly classified into two categories: 1) Influence reduction (IR) approaches that limit the impact of the individual model updates [23], [50], [6], [55] and 2) detection and filtering (DF) approaches that aim to sort out the poisoned updates [8], [53], [75], [58], [88].", "While it has been shown [58], [79] that a sophisticated adversary can circumvent former approaches [23], [50], [6], [55], the latter ones cannot distinguish between benign and poisoned updates if the data of different benign clients significantly differ from each other, e.g.", "are disjunct.", "In such scenarios, the local training data of different clients are not identically and independently distributed (non-IID), making it challenging for a filtering-based defense to determine if an outlier model is actually poisoned or just trained on abnormal or unseen data.", "Additionally, if aware of the safeguard, the adversary can adapt to the defense and act benign, while introducing the backdoor.", "This makes it even harder for the aggregation server to detect poisoned models.", "To overcome these limitations and effectively distinguish benign and poisoned models, we propose Deep Layer Output Analysis (CrowdGuard), a DF approach that leverages the clients' local data during a feedback loop by analyzing the predictions and hidden layer outputs of the local models for the local dataset of the respective validating client.", "The attacker is not aware of other clients' local data, which prevents him from being adaptive.", "To enable secure aggregation and to prevent curious parties from inspecting the local models, the validation is performed inside secure enclaves[15], 
providing isolation from the remaining system and allowing for attestation of the executed code before sharing the local models.", "Contributions: In particular, the contributions of this paper are as follows: We propose CrowdGuard, the first backdoor defense that analyzes the hidden layer states of local models to distinguish between benign and backdoored models.", "Since the main objective of the adversary is to change the model's predictions for triggered inputs, it cannot disguise the backdoor in all hidden layers without reducing the attack impact.", "CrowdGuard effectively distinguishes between benign and poisoned updates even in non-IID data scenarios, e.g., if the data of different clients are disjoint.", "We propose an architecture that enables secure and privacy-preserving utilization of clients' local data for local model inspection.", "Thus, we provide the basis for a new class of poisoning detection algorithms.", "Additionally, we remove the need for trust in the aggregation server by utilizing secure enclaves on the server side and removing the possibility of privacy-violating attacks on the model updates.", "We design a detection algorithm that integrates Principal Component Analysis (PCA) as well as different significance tests to indicate the presence of a backdoor with high confidence, based on a new metric called HLBIM that is derived from the predictions for the local data of individual clients.", "Furthermore, the algorithm contains a robust aggregation mechanism based on a stacked clustering scheme to compensate for corrupt (analysis) reports of malicious clients.", "We extensively evaluate the efficiency and effectiveness of our approach for different FL scenarios to analyze various possible attack parameters, including different non-IID scenarios, poisoning rates, backdoor types, and datasets (CIFAR-10 [37] and MNIST [17]).", "Relying on probabilistic tests, CrowdGuard achieves high True-Positive Rates (TPRs) and True-Negative Rates (TNRs), outperforming existing Influence reduction (IR) and Detection and Filtering (DF) approaches.", "By analyzing changes in the outputs of the hidden layers for poisoned behavior, CrowdGuard overcomes the weaknesses of existing DF approaches that directly analyze the layers' weights (e.g., [8], [75], [58], [53]).", "While an adaptive adversary can succeed in injecting a backdoor without making the weights or model accuracy suspicious, the adversary has to change the behavior of at least a subset of neurons to add the backdoor functionality to the DNN.", "By inspecting the outputs of the hidden layers in a client feedback loop, CrowdGuard can detect backdoor behavior within every layer.", "By using other clients' local data that are unknown to the adversary, CrowdGuard effectively prevents the adversary's adaptation to our defense, making the system robust against adaptive adversaries.", "Existing approaches that similarly utilize the clients' data [88], [5] carry the major risk that curious clients try to infer information about the other clients' data from the local models.", "CrowdGuard allows sharing the local models without such a risk, as its security architecture guarantees the confidentiality of these models." ], [ "Background", "In the following, we will provide all the necessary information about Federated Learning (FL) in [sec:background:fl]Sect.", "REF and targeted attacks on FL in [sec:background:bdfl]Sect.", "REF , that are required for the comprehension of our approach."
], [ "Federated Learning", "Federated Learning (FL) [36], [48], [85] is used for generating or improving a shared machine learning (ML) model, i.e.", "a Deep Neuronal Network (DNN), by collaborative computation of multiple clients $C_k$ with $|C_k|=N$ and a server $\\mathcal {S}$ in an iterative process [48].", "The major benefits of FL are, that the clients $C_k$ use their local data $\\mathcal {D}_k$ for the training process.", "These client-located data do not have to be shared and therefore no straightforward privacy issues arise.", "Additionally, the heavy learning computation of the final model is distributed to multiple clients, so that no cost-intensive infrastructure is needed at $\\mathcal {S}$ .", "The learning process takes place over multiple FL rounds $t$, that are supervised by the server.", "At the start of each round $t$ , $\\mathcal {S}$ first deploys a global model $G^t$ to a randomly selected group of $n$ clients $C_i$ with $|C_i|=n$ , which is a subset of the total $N$ clients $C_k$ .", "The $n$ chosen clients initialize their local model $L^t_i$ with $G^t$ and continue using their local dataset $\\mathcal {D}_i$ to train the new local model $L^{t+1}_i$ .", "The training is a regular learning process configured by a bunch of hyper-parameters like, i.e., the learning rate, that optimizes one loss function.", "Afterward, the server collects all $L^{t+1}_i$ models from the $n$ clients and aggregates them to a new global model $G^{t+1}$ .", "Therefore, $\\mathcal {S}$ typically first computes the update of each weight for each model, builds the average of these contributions and adds the resulting value to the global model [48].", "This algorithm is called FederatedAveraging (FedAVG).", "The final contributions are weighted with the global learning rate $\\delta $ and can be represented with [eq:1]Eq.", "REF .", "$ G^{t+1} = G^t + \\delta (\\frac{1}{n} \\sum _{i=0}^{n-1} (L^{t+1}_i - G^t))$ After computing a new global model $G^{t+1}$ , the server can initialize a new round.", "If $\\mathcal {S}$ possesses a high-quality validation dataset $\\mathcal {D}_v$ , the process finishes as soon as a predefined accuracy on $\\mathcal {D}_v$ is reached.", "Otherwise, a round limit can be used for termination.", "Client Data Distributions One major point, that influences the performance of a FL system regarding model accuracy, but also complicates the security mechanisms against malicious contributions, is the underlying distribution of the training datasets $\\mathcal {D}_k$  [39], [32].", "Even if the overall amount of samples for each label in the whole system over all clients $C_k$ is equal, the data will most likely not be uniformly distributed among all the clients, so that all the $\\mathcal {D}_k$ follow the same distribution [48].", "Contrary to an independent and identically distributed (IID) data scenario, a non-IID case naturally delivers diverse contributions after training.", "Non-IID can be defined in multiple severities [89], i.e., all clients can have samples of all available labels, but the overall count for each label differs, which is called feature shift [40].", "Experiments mostly introduce a peak with the non-IID rate of $q \\in [0,1]$ in sample counts for one label, the so-called main label of the client [11], [58].", "The rest of the labels follow a uniform distribution [58], [11].", "The peak in the main label can also be provoked by using a certain distribution for all sample counts, like the Dirichlet [51] or normal distribution.", "Another situation is, 
"Client Data Distributions One major factor that influences the performance of an FL system regarding model accuracy, but also complicates the security mechanisms against malicious contributions, is the underlying distribution of the training datasets $\\mathcal {D}_k$  [39], [32].", "Even if the overall amount of samples for each label in the whole system over all clients $C_k$ is equal, the data will most likely not be distributed uniformly among the clients, so that not all the $\\mathcal {D}_k$ follow the same distribution [48].", "In contrast to an independently and identically distributed (IID) data scenario, a non-IID case naturally delivers diverse contributions after training.", "Non-IID settings can be defined in multiple severities [89]: e.g., all clients can have samples of all available labels, but the overall count for each label differs, which is called feature shift [40].", "Experiments mostly introduce a peak with the non-IID rate of $q \\in [0,1]$ in sample counts for one label, the so-called main label of the client [11], [58].", "The rest of the labels follow a uniform distribution [58], [11].", "The peak in the main label can also be provoked by using a certain distribution for all sample counts, like the Dirichlet [51] or normal distribution.", "Another situation arises when some clients miss one label completely or only have data from one or two labels, which is called full 1-class or 2-class non-IID, respectively.", "The latter two are special cases of a uniform non-IID distribution ($q=1$ ).", "Most of the backdoor defenses on FL, as well as approaches improving the aggregation function, focus on either 1-class and/or 2-class non-IID setups or the Dirichlet distribution with different main labels within the clients' datasets."
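As an illustration of the 1-class non-IID setting described above, the sketch below assigns each client a main label and fills a fraction $q$ of its dataset with that label; the helper names are our own, and a Dirichlet split could be obtained analogously by drawing per-client label proportions with numpy.random.dirichlet.

```python
import numpy as np

def one_class_non_iid_split(labels, n_clients, samples_per_client, q, rng):
    """Sketch: a fraction q of each client's samples comes from its main
    label; the rest is drawn uniformly over all labels. Assumes the pool
    holds enough samples per label for the requested split."""
    n_labels = int(labels.max()) + 1
    pool = {la: list(np.where(labels == la)[0]) for la in range(n_labels)}
    clients = []
    for c in range(n_clients):
        main = c % n_labels  # main label chosen by client index
        idx = [pool[main].pop() for _ in range(int(q * samples_per_client))]
        while len(idx) < samples_per_client:
            la = int(rng.integers(n_labels))  # remaining labels: uniform
            if pool[la]:
                idx.append(pool[la].pop())
        clients.append(idx)
    return clients  # list of sample-index lists, one per client
```

Here rng would be, e.g., numpy.random.default_rng(seed); with $q=1.0$ each client receives only samples of its main label, matching the full 1-class case.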
.", "Figure: Two triggered and one benign car sampleTo achieve his goal, a given attacker $\\mathcal {A} _j$ chooses one or more of the following concepts: Data Poisoning: $\\mathcal {A}$ manipulates the training dataset $\\mathcal {D}_i$ , so that the resulting $\\mathcal {D}^{\\mathcal {A}}_i$ includes samples containing the trigger.", "To trade-off the effectiveness against the detectability of the attack, a respective ratio of malicious to benign samples, the so-called Poison Data Rate (PDR), must be chosen by $\\mathcal {A}$ .", "Model Poisoning: $\\mathcal {A}$ manipulates the training algorithm itself by changing hyper-parameters.", "Additionally, he can optimize against several objectives [18] in form of additional loss functions (weighted by an $\\alpha $ parameter) and constrain the loss(es) to stay as close as possible to benign behaviour, especially if he is aware of the defense.", "The weights of the resulting $L^{t+1}_i$ can be adapted to benign models to circumvent straightforward defenses, that analyze the weights of all contributions and remove extreme outliers.", "This process conducted by an adaptive attacker is called constrain-and-scale by [6].", "In case the adversary cannot adapt to the defense and simultaneously reach high BA, which is often called adversarial's dilemma [66], a defense is effective." ], [ "Trusted Execution Environments", "Trusted Execution Environments (TEEs) are programmable secure areas located within a processor that allow the execution of applications inside of secure enclaves, isolated from the remaining system.", "By using, e.g., memory encryption, access to them from outside the enclave, even from privileged processes, are restricted and thus guarantees the confidentiality of the enclaves' data.", "By providing attestation, a remote machine can verify the integrity of the executed code [16].", "Examples for TEEs include Intel SGX [15], AMD SEV [34], ARM TrustZone [62], or Nvidia Confidential Computing [61]." ], [ "Problem Setting", "In the following, we first describe the considered system in [sect:problem:system]Sect.", "REF and then characterize the threat model in [sec:problem:advmodel]Sect.", "REF ." 
], [ "System Setting", "In the setup, we consider $N$ clients $C_k$ who train collaboratively a DNN on their private datasets $D_i$ but will not share any data to prevent privacy leakages.", "Further, we consider an aggregation server $\\mathcal {S}$ that receives the individual models and aggregates them via FedAVG  [48].", "Aligned with recent work on poisoning attacks [58], [5], [6], we use the regular FedAVG algorithm, scaling the local models' contributions equally with $\\frac{1}{n}$ instead of weighting them based on their dataset sizes, as malicious clients could otherwise report wrong sizes to increase their attack impact.", "We keep the global learning rate constant at $\\delta = 1$ .", "In principle, there exist multiple aggregation algorithms [8], [27], [54], [86], which either provide better performance or are more robust against byzantine contributions from local models and our method can be used with different aggregation techniques, since it is applied before the aggregation takes place.", "We assume that each client and the server have an arbitrary TEE available allowing the execution of code in the secure enclaves while isolating code and memory from the remaining system, including privileged parts.", "Thus, the TEEs shall prevent the remaining system from learning the data inside the enclave and therefore preserving the data's confidentiality.", "Further, the TEE needs to allow a remote machine to attest the code of the executed enclave.", "Especially in cross-silo setups, e.g., if different institutions like hospitals collaboratively train a DNN [72], or multiple institutions collaborate [21], the machines performing the local training can be assumed to be powerful platforms, providing standard hardware features like TEEs and thus making this assumption reasonable.", "An overview of the considered system is shown in Fig.", "REF , showing the clients, the aggregation server as well as the individual TEEs (marked in green).", "Fig.", "REF also shows the individual steps of our scheme, that we will discuss in Sect.", "REF .", "In contrast to existing work, we do not make any assumptions about the data distributions.", "Thus, the data of the individual clients can follow the same distribution (IID), be distributed differently (non-IID), or even be disjunct.", "A setup in which, e.g., each participating hospital provides X-Ray images of different body locations would be a valid setting." ], [ "Adversary Model", "In this paper, we consider two adversaries, $\\mathcal {A}$ which aims to inject a backdoor into the FL system, and $\\mathcal {A^P}$ which aims to learn information about the clients' data from the local model updates and with this harms the privacy of the data." 
], [ "Poisoning Attacker $\\mathcal {A}$", "$\\mathcal {A}$ aims to manipulate the model that is resulting from the FL process and injects a backdoor into it by utilizing data and/or model poisoning (see [sec:background:bdfl]Sect.", "REF ).", "If a certain, adversary chosen trigger is present in the input (cf.", "Sect.", "REF ), the backdoor shall make the aggregated model $G^{t+1} $ predicting an adversary chosen target class $T_{\\mathcal {A}}$ .", "From this goal, two objectives follow: O1 - Attack Impact: To successfully inject a backdoor, $\\mathcal {A}$ aims to make the aggregated model $G^{t+1}$ predicting the backdoor target class $T_{\\mathcal {A}}$ for all trigger samples $\\mathcal {I}$ from the input domain $\\mathcal {X}$ .", "Thus, its objective is to maximize the accuracy on the backdoor task (BA).", "If $\\mathcal {S}$ notices the attack, it will repeat the training process until no backdoor is noticed anymore or filter out the poisoned contributions.", "Therefore, a second objective for $\\mathcal {A}$ is: O2 - Stealthiness: Make the poisoned model updates inconspicuous such that $\\mathcal {S}$ neither can identify the poisoned updates nor notices the performed backdoor attack.", "From O2 also follows that the attack must not reduce the performance of the aggregated model on the main task (MA).", "If the predictions of the aggregated model $G^{t+1}$ for a sample $x\\in \\mathcal {X}$ are denoted as $f(x, G^{t+1})$ , $G^{t+1}$ is the aggregated model without the poisoning attack, and $G_*^{t+1}$ is the aggregated model including the poisoned contributions, then from O1 and O2 results the following goal of the adversary: $f(x, G_*^{t+1}) = {\\left\\lbrace \\begin{array}{ll}T_{\\mathcal {A}} \\text{ if } x\\in \\mathcal {I}\\\\f(x, G^{t+1}) \\text{ if } x\\notin \\mathcal {I}\\end{array}\\right.", "}$ Aligned with previous work [58], [8], [53], [75], we assume that $\\mathcal {A}$ fully controls $n_{\\mathcal {A}} < {n}{2}$ clients in one round and overall $N_{\\mathcal {A}} < {N}{2}$ clients in the whole FL system.In each round $n$ clients are selected for training out of all $N$ clients.", "Thus, it can freely manipulate their local datasets $D_i$ , change the training process, or even manually change the submitted parameter updates, replacing parameters with arbitrary numbers.", "Further, we assume that $\\mathcal {A}$ knows all algorithms that are executed by the server or client.", "In contrast to existing work, the considered adversary $\\mathcal {A}$ might even control $\\mathcal {S}$ except the secure enclaves running in a TEE." ], [ "Privacy Attacker $\\mathcal {A^P}$", "The second adversary, $\\mathcal {A^P}$ , aims to reconstruct information about the clients' local data.", "Aligned with existing work [5], [58], [35], we consider only privacy attacks that learn information about the clients' data by analyzing the local model updates.", "The aggregation of FL anonymizes the individual contributions, preventing $\\mathcal {A^P}$ from associating gained information with a specific client, and also smoothens the parameters.", "Therefore, we will consider privacy attacks on the aggregated model to be out of the scope of this work.", "In our threat model, we consider $\\mathcal {A^P}$ to be a malicious attacker that has arbitrary control over the aggregation server excluding secure enclaves running in a TEE.", "Further, $\\mathcal {A^P}$ also can control some of the clients to analyze any other client's local model that this client might receive." 
], [ "TEE Security Assumptions", "In the following, we consider arbitrary TEEs, that isolate executed secure enclaves and allow a remote machine to attest the running enclave (cf. Sect.", "REF ).", "Thus, CrowdGuard is not restricted to TEEs of certain manufacturers.", "However, we assume that all used TEEs are trusted.", "Therefore, attacks on the used cryptographic algorithms and attacks that extract keys burned into the TEE are out of the scope of this paper.", "Recently, also a number of side-channel attacks have been proposed that extract data from TEEs [33], [80], [10].", "However, these attacks have a low bandwidth of less than 100 bytes/s [78] which is negligible compared to the size of a DNN model, such that also these attacks are considered to be out of the scope of this work." ], [ "Requirements and Challenges", "Based on the characterization of $\\mathcal {A}$ and $\\mathcal {A^P}$ , the following requirements for a backdoor defense can be derived: R1: Prevent the backdoor attack, i.e., $\\forall x\\in \\mathcal {I}: \\; f(x, G^{t+1}_*)=f(x, G^{t+1})$ .", "R2: In order to be practical, the defense scheme must not reduce the benign performance of the resulting FL model, especially in absence of any attack.", "Therefore, if no attack was performed and $\\hat{G}^{t+1}$ is the aggregated model obtained by FedAVG, then $\\forall x\\in \\mathcal {D}: \\; f(x, \\hat{G}^{t+1})=f(x, G^{t+1})$ R3: The defense must preserve the clients' privacy.", "Therefore, $\\mathcal {S}$ must not be able to access the models for running inference attacks.", "Nor should any other party, e.g., the clients, be able to run inference attacks on the models of other clients.", "From these requirements, a number of challenges follow that CrowdGuard will address in the rest of the paper: C1: How to effectively distinguish benign and poisoned models, especially for non-IID scenarios, in order to fulfill R1?", "[sec:approach]Sect.", "explains, how CrowdGuard can distinguish poisoned models and benign models, being trained on abnormal data.", "C2: $\\mathcal {S}$ must not be able to access the individual local models as this would enable $\\mathcal {S}$ to run inference attacks (cf.", "R3).", "However, to identify poisoned model updates, $\\mathcal {S}$ has to inspect the model updates $L^{t+1}_i$ .", "A challenge that CrowdGuard will address is, therefore, how to inspect the local models without enabling any party to extract knowledge from them.", "C3: CrowdGuard uses the predictions including the hidden state outputs of the local models on the local data of other clients for identifying a backdoor.", "However, the backdoor attack should not change the predictions for non-trigger input samples (cf.", "O2).", "Since it is unlikely that benign clients have many trigger input samples, a challenge that CrowdGuard will solve is, how to use clients' local data for identifying the backdoor, without having trigger samples.", "Figure: Overview and step sequence of CrowdGuard" ], [ "High-Level Overview of CrowdGuard", "CrowdGuard analyses the outputs of the individual layers of the DNN to distinguish between benign and backdoored (local) models.", "An overview of the individual steps of CrowdGuard is shown in Fig.", "REF .", "First, the clients trainDepending on the respective application scenario, there are reasons for performing also the training inside a TEE, such as confidentiality of the training data, while there are also reasons against it, e.g., the computational overhead.", "CrowdGuard focuses on a backdoor 
"Second, the individual contributions are collected and sent to secure enclaves running on the $C_i$ , which are now used as validation clients (Step 2 in Fig.", "REF ).", "In the third and most important step, the client-side enclaves validate the models by analyzing the hidden layer outputs for their local datasets $\\mathcal {D}_i$ (Step 3 in Fig.", "REF ).", "Fourth, the clients send their votes for each model, i.e., whether it is benign or suspicious, back to the server (Step 4 in Fig.", "REF ).", "In the last step (Step 5 in Fig.", "REF ), the server applies a stacked clustering scheme to combine the votes for each model from the different clients.", "Thereafter, $\\mathcal {S}$ removes the models marked as poisoned and uses a configured aggregation rule for aggregating the remaining models before sending the aggregated model $G^{t+1}$ back to the clients for further training rounds.", "It should be noted that, although in this paper we focus on FedAVG as the aggregation rule, arbitrary other rules such as Krum [8], trimmed mean [86], or median [86] can also be used.", "To ensure the confidentiality of the models, all transmissions are encrypted (e.g., using TLS) and each enclave is attested before it receives any model.", "Further, the client-side enclaves additionally attest the server before they receive their models." ], [ "CrowdGuard", "In this section, we will describe the details of CrowdGuard.", "After the TEE setup and the local training that is orchestrated by the server, $\\mathcal {S}$ transmits the locally trained models to all clients, initiating a feedback loop.", "Once the clients have analyzed the models and reported their votes about which models are backdoored, the server merges those votes and filters out the models marked as poisoned before aggregating the remaining models.", "Those steps are visualized in [fig:overallflow]Fig.", "REF .", "Figure: Execution sequence of CrowdGuard (green indicates the utilization of secure environments)" ], [ "Defense Setup and Local Training", "Motivated by the risks imposed by $\\mathcal {A^P}$ , the server $\\mathcal {S}$ starts with setting up a TEE.", "Hence, the server code and the client feedback loop are executed within a secure environment, and clients as well as third parties are capable of attesting the software, addressing parts of C2.", "Once $\\mathcal {S}$ receives all locally trained models $L^{t+1}_i$ , in the first step of CrowdGuard, the server transmits the $L^{t+1}_i$ models to the $C_i$ clients in step 2.", "Sending the local models $L^{t+1}_i$ of all clients to another client for validation imposes a privacy risk, as, if malicious, a client-side attacker $\\mathcal {A^P} \\in C_i$ can analyze the received models and try to infer information about the other clients' data (O2).", "To prevent such privacy attacks or other data leakages, not only the server side of CrowdGuard but also the client-side validation runs in a secure enclave.", "This guarantees the confidentiality of the handled data, i.e., the models and respective layer outputs, even if $\\mathcal {A^P}$ has kernel privileges.", "Attesting the enclaves before any data is sent to them guarantees that the executed code will not leak the received models (cf.", "C2)."
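The step sequence above can be summarized by the following server-side sketch; the enclave and attestation interface shown is purely hypothetical glue code used to clarify the order of Steps 1-5, not an API of the paper.

```python
def crowdguard_round(server, clients, global_model):
    """Sketch of one CrowdGuard round (Steps 1-5), with a hypothetical API."""
    # Step 1: clients train locally and submit encrypted models to the
    # (attested) server enclave.
    local_models = [c.train_and_submit(global_model) for c in clients]
    # Step 2: the server forwards all models to each client's attested enclave.
    # Steps 3-4: every client enclave validates all models on its local data
    # and sends back one vote vector.
    votes = [c.enclave.validate_and_vote(global_model, local_models)
             for c in clients]
    # Step 5: merge votes, drop models voted as poisoned, aggregate the rest.
    poisoned = server.filter_votes(votes)  # stacked clustering, see below
    accepted = [m for i, m in enumerate(local_models) if i not in poisoned]
    return server.aggregate(accepted)      # e.g., FedAVG as sketched earlier
```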
], [ "Client Feedback Loop", "In the third step of CrowdGuard, illustrated in [fig:overallflow]Fig.", "REF , the clients reach a decision if each local model $L^{t+1}_i$ is poisoned in their opinion.", "This happens in two steps: 1) To inspect the local models for backdoors, our novel metric HLBIM is extracted from the local models.", "2) HLBIM is analyzed via probabilistic tests to produce to a voting decision.", "HLBIM Matrix Generation As depicted in [alg:hlbim]Alg.", "REF lines 1-6, the global model $G^t$ , the local models $L^{t+1}_i$ and the local data $D_i$ are used to generate two HLBIM matrices based on Cosine (HLBIM$^C$ ) and Euclidean (HLBIM$^E$ ) distances.", "Therefore, deep layer outputs (DLOs) are necessary, which can be obtained by feeding $D_i$ into both, $G^t$ and $L^{t+1}_i$ .", "During inference we keep track of the outputs of each sample, each model, and each layer, depicting one DLO in the matrix (lines 7-15).", "The DLO matrices are then used to calculate the final HLBIM within five steps (lines 16-20): 1) Generating the distances between each $L^{t+1}_i$ and $G^t$ for each of the DLOs.", "In these distances, the backdoor behaviour is not yet detectable with high significance.", "2) To highlight the differences between the models regarding a reference model, a ratio of the DLOs is generated, with respect to the validating client's own local model used as a reference.", "The rationale for this is, that each client assumes his own model to be benign.", "If the DLOs are equal, the ratio will be one.", "3) To highlight the differences between the ratios, we scale them by subtracting one and squaring the values but keeping the sign.", "4) We average the resulting DLO matrix over the sample dimension for each label, to carve out the effect for each label class separately.", "5) To reduce the final matrix dimension without losing information and thus saving computational costs later on, for each model we concatenate the layers for the individual labels.", "[th!b] HLBIM Matrix Generation for $C_j$ [1] Input: $G^t$ , Global model of round $t$ $L^{t+1}_i$ , All local contributions of round $t$ including $L^{t+1}_j$ $D_j$ Local dataset of client $C_j$ Output: HLBIM$^{C/E}_{m_j\\;m\\;l}$ HLBIM matrices for Cosine and Euclidean distances Generate deep layer outputs DLO_local$_{s\\;m\\;l}$ $\\leftarrow $ {} DLO_global$_{s\\;l}$ $\\leftarrow $ {} $s$ in $D_j$ $m$ in $L^{t+1}_i$ DLO_local$_{s\\;m\\;l}$ $\\leftarrow $ deep_layer_outputs($s$ , $m$ ) DLO_global$_{s\\;l}$ $\\leftarrow $ deep_layer_outputs($s$ , $G^t$ ) Distance Generation client_voting $\\leftarrow $ {} dist$^{C/E}$ in [COSINE-distance; EUCLIDEAN-distance] DLO_dist$^{C/E}_{s\\;m\\;l}$ $\\leftarrow $ dist$^{C/E}$ (DLO_local$_{s\\;m\\;l}$ , DLO_global$_{s\\;l}$ ) Scale relative distances to HLBIM HLBIM$^{C/E}_{s\\;m_j\\;m\\;l}$ $\\leftarrow $ {} DLO_reference$^{C/E}_{s\\;m_j\\;l}$ $\\leftarrow $ DLO_local$^{C/E}_{s\\;j\\;l}$ dlo$^{C/E}_{s\\;m\\;l}$ in DLO_local$^{C/E}_{s\\;m\\;l}$ dlo_rel$^{C/E}$ $\\leftarrow $ dlo$^{C/E}_{s\\;m\\;l}$ $/$ DLO_reference$^{C/E}_{s\\;m_j\\;l}$ DLO_squared$^{C/E}_{s\\;m_j\\;m\\;l}$ $\\leftarrow $ $|$ dlo_rel$^{C/E}$ - 1$|$ $*$ (dlo_relative$^{C/E}$ - 1) DLO_avg$^{C/E}_{la\\;m_j\\;m\\;l}$ $\\leftarrow $ AVG(labels $la$ , DLO_squared$^{C/E}_{s\\;m_j\\;m\\;l}$ ) HLBIM$^{C/E}_{m_j\\;m\\;l}$ $\\leftarrow $ CONCAT(layer $la$ , DLO_avg$^{C/E}_{la\\;m_j\\;m\\;l}$ ) Voting Decision via Model Pruning To detect poisoned models, the HLBIM matrices must be analyzed.", "Our routine, described in detail in 
"Voting Decision via Model Pruning To detect poisoned models, the HLBIM matrices must be analyzed.", "Our routine, described in detail in [alg:prune]Alg.", "REF , first reduces the dimension of the models $\\times $ layers HLBIM matrix to one value for each model via Principal Component Analysis (PCA), on which we then conduct significance tests that reveal the presence of a backdoor.", "If our tests indicate malicious models, we generate two clusters via hierarchical clustering and prune the models located in the smaller one.", "We repeat this process until the significance tests report the absence of suspicious models or until we have already pruned $\\frac{n}{2}$ of all models (cf.", "majority assumption in [sec:problem:advmodel]Sect.", "REF ), thereby solving C1.", "Due to the pruning approach, we can remove different types of backdoors within one round $t$ , since not all backdoored models have to be identified at once within the first cluster.", "[th!b] Voting Decision via Model Pruning for $V_j$ Input: HLBIM$^{C/E}_{m_j\\;m\\;l}$ , HLBIM matrices for Cosine and Euclidean distances Output: client_voting, List with client decisions for each $L^{t+1}_i$ Analyze HLBIM via dimension reduction client_voting $\\leftarrow $ {} for dist_type$^{C/E}$ in [COSINE-distance; EUCLIDEAN-distance]: significant $\\leftarrow $ True pruned_models $\\leftarrow $ {} while significant: HLBIM_pruned$^{C/E}_{m_j\\;m\\;l}$ $\\leftarrow $ {} for values$_{m_j\\;m\\;l}$ in HLBIM$^{C/E}_{m_j\\;m\\;l}$ : if $m$ in pruned_models: continue HLBIM_pruned$^{C/E}_{m_j\\;m\\;l}$ $\\leftarrow $ values$_{m_j\\;m\\;l}$ pc_dim1_values $\\leftarrow $ PCA(HLBIM_pruned$^{C/E}_{m_j\\;m\\;l}$ )[0] significant $\\leftarrow $ SIGNIFICANCE(pc_dim1_values) malicious_models $\\leftarrow $ {} if significant: clusters $\\leftarrow $ AGGLOMERATIVE_CLUSTER(nclusters = 2, pc_dim1_values) malicious_models $\\leftarrow $ MIN_CLUSTER(clusters).models() pruned_models.add(malicious_models) if len(pruned_models) > FLOOR(($|L^{t+1}_i|$ - 1) / 2): remove_count = len(pruned_models) - FLOOR(($|L^{t+1}_i|$ - 1) / 2) for count in RANGE(remove_count): malicious_models.remove(MIN(malicious_models)) significant $\\leftarrow $ False client_voting.add(malicious_models)", "Figure: Visualization of distributions generated by [alg:significance]Alg.", "for the pruning of CrowdGuard with $n = 100$ , $PMR = 40\\%$ .", "Distributions are considered to differ significantly, indicating backdoors, in (a) - (c).", "Poisoned models are pruned iteratively.", "To conduct the significance test on the Principal Components (PCs) of the first PCA dimension, we split the values into two different lists: the first one contains the absolute distances to the median that lie above the median, and the second list consists of the ones below.", "This can be retraced by inspecting [fig:significance]Fig.", "REF .", "The malicious models get pruned iteratively, showing that we can eliminate different backdoors in one FL round $t$ .", "In [fig:significance]Fig.", "REF , no more significant models are detected.", "Another example can be seen in [fig:graphs:pca1]Fig.", "REF ."
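The pruning loop can be sketched with scikit-learn as follows; the helper is_significant stands in for the significance tests described next, and the detail of Alg. 2 that reverts excess prunings is simplified away, so this is a sketch rather than the authors' implementation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

def prune_models(hlbim_matrix, is_significant):
    """Iteratively prune suspicious models based on the first principal
    component of the HLBIM matrix (one row per model). Stops when no
    significance remains or half of the models would be pruned."""
    hlbim_matrix = np.asarray(hlbim_matrix)
    n_models = len(hlbim_matrix)
    pruned = set()
    while True:
        keep = [m for m in range(n_models) if m not in pruned]
        pc1 = PCA(n_components=1).fit_transform(hlbim_matrix[keep]).ravel()
        if not is_significant(pc1):
            break
        # Two clusters over PC1; the smaller cluster is treated as malicious.
        cl = AgglomerativeClustering(n_clusters=2).fit_predict(pc1.reshape(-1, 1))
        minority = min((0, 1), key=lambda c: int(np.sum(cl == c)))
        pruned.update(keep[i] for i in np.where(cl == minority)[0])
        if len(pruned) > (n_models - 1) // 2:  # majority assumption
            break
    return pruned
```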
"We then conduct several significance tests on those two lists, which are treated as two distributions.", "In a benign setting, the two distributions should be equal with high significance.", "Therefore, CrowdGuard enforces an equal mean via a Student's t-test [46], a matching variance via an F-test (a Levene test [41] for equal variances of two distributions), and a comparable overall distribution via a D-test (a Kolmogorov-Smirnov test [47] for equal goodness of fit of two distributions).", "We used p-values of 0.01 to indicate significant differences for all of the tests.", "If all those tests are passed, we investigate outliers that do not influence the former results.", "Therefore, we set two thresholds: 1) We analyze the interquartile range of all data points by using a boxplot.", "2) We analyze the distance of each point with respect to the interval spanned by the 3$\\sigma $ -rule [63].", "Data points lying outside the interval are marked as significant.", "If we find significant values, which indicate the presence of a backdoor, we generate two clusters via hierarchical clustering and prune the models contained in the smaller one.", "This process is repeated until all tests are negative, thereby solving C1.", "To prevent False-Positives, the algorithm also stops if half of the models have been pruned, due to the threshold set by our threat model (cf.", "[sec:problem:advmodel]Sect.", "REF ).", "The algorithm can be found in detail in [alg:significance]Alg.", "in [app:algSignificance]App. ."
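Under our reading of the description above, the battery of tests could look as follows with scipy.stats; beyond the stated p-value of 0.01, the combination logic is our own minimal sketch, not the authors' exact routine.

```python
import numpy as np
from scipy import stats

def is_significant(pc1, p_threshold=0.01):
    """Sketch: flag the first-PC values as suspicious if the two half-
    distributions around the median differ, or if outliers exist."""
    pc1 = np.asarray(pc1)
    med = np.median(pc1)
    above = np.abs(pc1[pc1 > med] - med)
    below = np.abs(pc1[pc1 <= med] - med)
    # Equal mean (t-test), equal variance (Levene), equal distribution (KS).
    pvalues = [stats.ttest_ind(above, below).pvalue,
               stats.levene(above, below).pvalue,
               stats.ks_2samp(above, below).pvalue]
    if any(p < p_threshold for p in pvalues):
        return True
    # Outlier checks: boxplot IQR rule and the 3-sigma rule.
    q1, q3 = np.percentile(pc1, [25, 75])
    iqr_outlier = (pc1 < q1 - 1.5 * (q3 - q1)) | (pc1 > q3 + 1.5 * (q3 - q1))
    sigma_outlier = np.abs(pc1 - pc1.mean()) > 3 * pc1.std()
    return bool((iqr_outlier | sigma_outlier).any())
```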
], [ "Voting Aggregation", "Stacked Clustering: After the clients provided their votes to the server, $\\mathcal {S}$ is confronted with potentially malicious votes from the $\\mathcal {A} _j$ as well as with accidentally wrong votes of benign clients, which can happen if a model exceeds, i.e., an outlier threshold slightly.", "It is worth noting, that each $C_i$ does not evaluate its own local model, but just reports it as benign by default.", "$\\mathcal {S}$ first generates two clusters by agglomerative clustering [60] and identifies the bigger one as votes from benign clients due to the majority assumption (cf.", "[sec:problem:advmodel]Sect.", "REF ).", "However, this cluster can contain minor errors of benign clients or malicious clients that manipulate their voting to be similar to benign behaviour.", "Therefore, we conduct a second clustering and choose the most frequent voting by using DBSCAN [19] and inspecting the cluster sizes of the output.", "The decisions of the biggest cluster is the final result of the voting aggregation.", "The detailed algorithm can be found in [alg:stackedclustering]Alg.", "REF .", "Robustness: With this, we can handle situations, where the $\\mathcal {A} _j$ mark all benign $L^{t+1}_i$ as malicious and vice versa, since they are removed after the first clustering.", "Settings, where $\\mathcal {A} _j$ behaves as benign as possible and just try to invert the decision for one specific $L^{t+1}_i$ , will be ignored by the second clustering.", "The same holds for minor voting errors of benign clients.", "Due to the stacked clustering approach leveraging the majority assumption, the algorithm is robust against PMRs up to 49%, since first, the benign votes are identified by majority and then the best voting is selected as the final decision by majority again.", "This outperforms a naïve majority voting, which would not ignore False-Positives of benign clients making it less robust." ], [ "Evaluation", "In this section, we first depict our experimental setup in [eval:setup]Sect.", "REF and then describe the influence of various parameters in [eval:params]Sect.", "REF .", "Afterward, in [eval:perf]Sect.", "REF , we investigate the performance of our approach." 
], [ "Experimental Setup", "To simplify the comparison of our evaluation with other poisoning defenses, we aligned our experimental setup with recent works [66], [6], [11], as we describe in the following.", "Computational Setup: All experiments were implemented in Python using the Deep Learning library PyTorch [4].", "The experiments were executed on a server with an Intel Xeon 5318S with Intel SGXv2, 2 Nvidia RTX A6000, and 512 GB main memory, from which 128GB were reserved as secure memory.", "For executing python inside an SGX enclave, we leveraged the library Gramine [77].", "Datasets: For our evaluation, we use the popular benchmark datasets CIFAR-10 [37], consisting of 50k training images and 10k test images, and the MNIST [17] dataset, consisting of 60k training images and 10k test images.", "Both datasets contain samples from 10 classes and are frequently used for evaluating poisoning defenses [6], [12], [23], [48], [53], [58], [66].", "To simulate the FL setup, we split the training dataset into local datasets $D_i$ consisting of 2.560 samples, one $D_i$ for each $C_k$ .", "Aligned with the work of Cao et al.", "[11] and Rieger et al.", "[66], we created datasets for different non-IID settings:1) For 1-class non-IID with a non-IID rate $q$ , a main label is chosen randomly for each client and $q$ percent of samples in $D_i$ are changed to samples with the respective main label, while the remaining labels are chosen from all labels uniformly distributed.", "Therefore, for a non-IID rate of $q=1.0$ , each client only uses data from its main label(s), such that the data of different clients are disjunct if they have different main labels.", "2) For 2-class non-IID we do the same but choose the subsequent label of the main label as \"second main label\".", "3) For Dirichlet and Normal distribution, we produced label counts according to the respective distribution.", "For the CIFAR-10 dataset, we used a light version of the Resnet-18 network, as described by Bagdasaryan et al.", "[6], which delivers five deep layers and one final layer output.", "For MNIST, we reimplemented a version of the Convolutional Neural Network (CNN), as described by Cao et al.", "[12].", "Default Configurations: The default parameters and configurations for our experiments are depicted in [eval:params]Tab.", "REF .", "In our default setup, we use $|C_i|=20$ clients to participate in the FL round $t$ as well as in the feedback loop.", "We select the main label of each client according to its index $i$ , so that we have the most disjunct label settings and prevent getting multiple clients with the same main label by chance.", "[tab2,tabularx=Z | Z,title=Default Configurations,boxrule=0.75pt] Parameter Default Value Dataset CIFAR-10 Clients $n$ $|C_k|$ = $|C_i|$ = 20 Epochs 10 Samples per client 2560 Batch Size 64 Backdoor Semantic Backdoor IID rate $q$ 0 Poison Data Rate (PDR) 0.1 Starting round $t$ 1000 Adaptive adversary rate$\\alpha $ 0.7 Poison Model Rate (PMR) 0.45 ( = ${\\mbox{9}}{\\mbox{20}}$ ) Benign Learning Rate 0.01 Malicious Learning Rate 0.01 type=table tableListing of the default FL setup configurations Adaptive adversary from Bagdasaryan et al.", "[6]" ], [ "Outputs and Influence Factors", "In [eval:params:output]Sect.", "REF , we visualize and explain the output of our experiments and list all configurations.", "Afterward, we discuss the parameters and other influencing factors in [eval:params:influence]Sect.", "REF" ], [ "Experiment Outputs", "Experiment Visualization: To improve the 
comprehension of our approach, in [fig:graphs]Fig.", "REF we visualize intermediate outputs of our algorithm.", "The experiment is conducted with our default configurations from [tab:defaultconf]Tab.", "REF and therefore contains 20 clients.", "[fig:graphs]Fig.", "REF depicts the plain Cosine distance from each $L^{t+1}_i$ to $G^t$ for one label (in this case label 8), averaged over that label.", "As explained in [sect:approach-evaluation]Sect.", "REF , from this metric alone one cannot clearly identify the poisoned models.", "To enable backdoor detection, we produce the matrices containing our novel metric HLBIM, which can be seen for the Cosine distance in [fig:graphs]Fig.", "REF for one label.", "Subsequently, in [fig:graphs]Fig.", "REF , we can then observe the whole HLBIM$^{C}$ plot.", "The malicious models are responsible for the peaks.", "This can be analyzed by comparing the values for HLBIM$^{C}$ to the ones in the poison-free version in [fig:graphs]Fig.", "REF .", "According to [alg:prune]Alg.", "REF , we conduct the PCA on the HLBIM matrices to obtain [fig:graphs]Fig.", "REF , which is used in the first pruning round of our significance test.", "The results of the first and second rounds of pruning are depicted in [fig:graphs]Fig.", "REF and [fig:graphs]Fig.", "REF .", "At this step, we end up with just benign models, which results in negative significance tests.", "[fig:graphs]Fig.", "REF shows the cleaned HLBIM$^{C}$ graph used to produce the last PC values in [fig:graphs]Fig.", "REF .", "[fig:graphs]Fig.", "REF shows an exemplary boxplot of our outlier detection algorithm during the second pruning round (cf.", "[fig:graphs]Fig.", "REF ).", "The abnormalities in the malicious models derive from their local BA, which in our experiments is always 100%.", "This means the attacker is able to incorporate the backdoor in its local model.", "Figure: Visualization of intermediate outputs of CrowdGuard with default configurations (models 0 to 10 being benign)", "Figure: Influence of parameters", "Conducted Experiments: [tab:experiments]Tab.", "REF lists all of our conducted experiments, each changing one parameter of the default configuration.", "Additionally, we tested the defense against two untargeted poisoning attacks: 1) We selected the labels for each sample in the training and test set randomly.", "2) We changed the learning algorithm to maximize the loss function.", "Both experiments delivered a 100% defense success rate.", "Thus, we can claim that CrowdGuard is also robust against untargeted poisoning attacks.", "Furthermore, the following two combined attacks, which integrate two backdoors with different triggers and target labels $T_\mathcal {A} $ at once, have been evaluated, and the results are reported in [tab:experiments]Tab.", "REF .", "1) Every $\mathcal {A}$ tries to inject both backdoors.", "2) Each half of the malicious clients integrates one of the two backdoors.", "Table: Conducted experiments with TPR and TNR of CrowdGuard. The default settings are used and only the analyzed parameter is changed for one experiment. Multiple parameter values within brackets denote multiple experiments.
Analyzed Parameter | Parameter Values | TPRs | TNRs
Data distributions | CIFAR-10, 1-class non-IID, $q = [0.0, 0.1 ,... ,1.0]$ ; CIFAR-10, 2-class non-IID, $q = [0.0, 0.1 ,... ,1.0]$ ; CIFAR-10, Dirichlet; CIFAR-10, Normal; MNIST, 1-class non-IID, $q = [0.0, 1.0]$ | 100% each | 100% each
Adversarial adaptive rate $\alpha $ | $\alpha = [0.1, 0.2 ,... ,0.9]$ | 100% | 100%
Poison Data Rate (PDR) | $pdr = [0.1, 0.2 ,... ,0.9]$ | 100% | 100%
PMR & number of clients $n$ | $pmr(n=20) = [0.05, 0.1 ,... ,0.45]$ ; $pmr(n=100) = [0.01, 0.2 ,... ,0.49]$ ; $pmr(n=10) = [0.1, 0.2,... ,0.4]$ (1-class non-IID, $q=0.0$ ) | 100% each | 100% each
Backdoor | Pixel Backdoors, Label Swap, Semantic; 2 combined attacks; 2 untargeted attacks | 100% each | 100% each
Starting FL round $t$ | $t = 1000$ ; $t = 0$ ; $t >= 1$ | 100%; 100%; 100% | 100%; 0%; 100%
Malicious Training Learning Rate (M-LR) | $LR = [0.01, 0.001]$ | 100% | 100%" 
], [ "Influence Factors", "General Results: We depict graphs illustrating the influence of parameters in [fig:param]Fig.", "REF .", "As can be seen, the defense is independent of $\alpha $ , the PDR, and the non-IID scenario and achieves a 100% True-Positive rate (TPR) as well as True-Negative rate (TNR).", "The MA is higher if the defense is activated, so we do not decrease the benign FL performance.", "The same perfect detection rates are achieved in our other experiments, listed in [tab:experiments]Tab.", "REF .", "Disjunct Data Scenario: We conducted an experiment with only six benign and four malicious clients in 1-class non-IID for $q=[0.0]$ and assigned samples from each of the ten existing labels in the dataset to a different client.", "This setup results in completely disjunct training data, which reflects a full non-IID scenario.", "CrowdGuard was also effective in detecting the four malicious clients even in this edge-case scenario, fulfilling C1.", "MNIST: In addition, we evaluated CrowdGuard on the MNIST dataset in 1-class and 2-class non-IID scenarios with $q=[0.0,1.0]$ , starting from FL round 100, using the Label Swap backdoor, and also for a smaller learning rate of 0.001 at the malicious clients.", "This showed that our approach is not restricted to one specific dataset, thus not hindering its application in real-world scenarios.", "Varying PMR: Our experiments are conducted with the biggest PMR possible regarding the $C_i$ clients, and since we are pruning the malicious models, a smaller PMR would still result in the same outcome; thus, the PMR has no influence.", "In the default setting, the PMR has a maximum value of $\frac{9}{20} = 45\%$ .", "To stress this parameter to the maximum, we conducted an additional experiment with $n=100$ clients and a PMR of 49%.", "The results also show perfect detection rates of 100%, making the approach effective in small and large FL setups.", "Randomly Initialized Model: The only exception regarding the defense effectiveness is when starting the training with a randomly initialized $G^0$ in round $t=0$ .", "There we observed a TNR of 0%.", "However, already starting from $t=1$ , all malicious contributors are identified reliably.", "After $t=0$ , $\mathcal {A}$ is not able to introduce the backdoor in $G^1$ , since FedAVG changes the weights too much, so that the BA is 0%.", "This experiment showed two facts: 1) It is harder for $\mathcal {A}$ to implement the backdoor as long as the model has not converged to a certain MA.", "2) Nevertheless, CrowdGuard already detected 100% of the malicious models in round $t=1$ , meaning our approach is not limited to already converged base models $G^t$ .", "Robustness against Adaptive Adversaries: $\mathcal {A}$ can adapt to the defense in various ways by integrating a second loss function (cf.", "Bagdasaryan et al.", "[6]).", "Thereby, it can try to minimize the distance 
of the model weights to $G^t$ , or first train a benign local model and then try to adapt precisely to our defense algorithm by leveraging DLOs measured on that model.", "We conducted experiments with both adaptation strategies, finding that the former delivered better results for the adversary, meaning that the poisoned models were harder for CrowdGuard to identify (regarding the significance level).", "Therefore, the more difficult-to-handle adaptation method is part of our default setting; it does not, however, prevent CrowdGuard from detecting the backdoor.", "The most relevant reason is that $\mathcal {A}$ cannot adapt to the other clients' local data.", "If $\mathcal {A}$ increases the level of adaptiveness extremely, it fails to introduce the required BA in its local model, so that its contribution is averaged out, even without clipping or noising methods (cf.", "the adversary's dilemma in [sec:background:bdfl]Sect.", "REF ), fulfilling R1.", "Effectiveness of Stacked Clustering: The stacked clustering ensures the integrity of the filtering, even if $\mathcal {A}$ can manipulate the votes that are reported by the enclaves of the malicious clients (cf.", "[alg:stackedclustering]Alg.", "REF ).", "To show its impact, we simulated an attack where $\mathcal {A}$ was able to manipulate the votes that the malicious clients report, voting for all poisoned models to be benign and vice versa.", "In the Dirichlet distribution scenario, for one benign validation client, one poisoned model was just below the pruning threshold in the last pruning round.", "Thus, together with the malicious votes, this would be sufficient for a naïve majority voting scheme to accept this model, resulting in a TPR of only $88.9\%$ .", "In comparison, the stacked clustering of CrowdGuard was able, even in this corner case, to robustly aggregate the votes, resulting in TPR=$100\%$ and TNR=$100\%$ .", "Overall, CrowdGuard is robust in various scenarios, independent of specific FL system settings, and therefore applicable in real-world scenarios.", "The reason for the high success rates is that the benign clients rarely deliver wrong votes, due to our HLBIM-based significance test (cf.", "[alg:significance]Alg. ).", "Even if the malicious clients manipulate the votes of their secure enclaves, these are compensated by the subsequent stacked clustering voting aggregation." ], [ "Runtime Overhead of CrowdGuard", "To evaluate the performance overhead of using SGX for the client-side validation, we compare the runtime of the SGX implementation with a plain implementation in Python for validating our default setting with 20 clients.", "To ensure reliable results, we ran each experiment 10 times and averaged the execution times.", "For the SGX version, we measured a runtime for the client-side validation, including attestation and the actual validation, of 3,749.60 seconds.", "The plain implementation, without attestation, had a runtime of 14.67 seconds." ], [ "Discussion", "In the earlier sections, we introduced CrowdGuard, a novel defense against backdoors in FL that is fully compatible with secure aggregation techniques.", "In the following, we will discuss its security and parameters, as well as its limitations." 
], [ "Parameterization", "In our significance-based algorithm (cf.", "[alg:significance]Alg.", "), we introduce thresholds in the form of significance levels, that function as parameters for our defense.", "The p-values of the probabilistic tests is set to 0.01Typical statistical tests operate with 0.05, so our defense is even more sensitive and is therefore unlikely to produce False-Positives.", "and the outlier thresholds are dynamical values based on the observed data.", "Therefore, CrowdGuard purely relies on probabilistic thresholds and does not include empirically determined limits, that are dataset dependent.", "We demonstrated that those parameters paired with [alg:stackedclustering]Alg.", "REF are robust against (un)targeted poisoning attacks." ], [ "Limitations of CrowdGuard", "Available TEE: In this paper, we mainly considered cross-silo scenarios, where multiple computation centers collaborate, such that it is likely that the servers provide any TEE.", "However, in other scenarios, certain low-performance devices, such as mobile devices, might not necessarily have a TEE includedIt should be noted that modern mobile devices often provide ARM Trustzone as TEE.", "Although it is questionable if such a device has the necessary resources for training a DNN, yet it is one limitation of CrowdGuard.", "Computational Overhead: Further, CrowdGuard also introduces some computational overhead.", "In this paper, we focused on developing a secure and privacy-preserving algorithm for backdoor detection, rather than optimizing the performance.", "This becomes visible, e.g., as the experiments were implemented in python on SGX, which cannot utilize ML-specific hardware.", "Therefore, it is left to future work to optimize the runtime performance of CrowdGuard, e.g., by implementing it in other programming languages or for TEEs that allow the utilization of ML accelerators, such as Nvidia confidential computing [61].", "The other aspect that causes the overhead is the distribution of the local updates to other clients.", "While the effort is negligible in the considered cross-silo scenario of a few collaborating computing centers, this overhead might become more relevant when applying CrowdGuard to other scenarios with large numbers of participants.", "In such scenarios, it would be possible to select a subset of clients $V_r \\subset C_i$ for the feedback loop.", "The drawback is that then it is not guaranteed anymore that the majority of clients that validate a model is benign, while for $V_r=C_i$ this is guaranteed by our threat model [sec:problem:advmodel]Sect.", "REF .", "Therefore, this fraction is a tuning parameter of our approach and represents the well-known trade-off between security and performance.", "The probability of violating the majority assumption, in this case, follows a hypergeometric distribution [65].", "[fig:percentage]Fig.", "REF shows this probability for $|C_i|=1000$ selected clients for different PMRs and different values for $|V_r|$ .", "As it can be seen, the probability for small PMR values becomes negligible already for less than 50 clients.", "Figure: Probability for more than 50% of adversaries in the feedback loop out of 1000 clients for different PMRs" ], [ "Related Work", "In the recent past, a large number of defenses against backdoor attacks and targeted poisoning attacks in general have been proposed.", "In the following, we discuss the approaches that are most relevant to this work and categorize them into the following types: Approaches that aim to detect 
backdoored models (Sect.", "REF ) and approaches that mitigate the backdoors without identifying the poisoned models (Sect.", "REF )." ], [ "Filtering Approaches", "The closest to our work is the approach by Zhao et al.", "[88], which, similarly to CrowdGuard, also sends the local models to the clients.", "In contrast, however, it only inspects the model outputs to identify the poisoned models, based on a drop in their main task accuracy (MA).", "However, as discussed in Sect.", "REF , stealthy backdoor attacks do not affect the MA (O2); hence, this approach cannot detect sophisticated attack variations that do not change the last layer of the model (the output).", "Further, the approach of Zhao et al. does not provide a TEE-supported platform architecture to protect the confidentiality of local models when they are processed by other clients.", "Thus, it introduces an enlarged attack surface for privacy attacks, which our solution effectively prevents.", "BaFFLe [5] sends the aggregated model to the clients to identify backdoored models based on mispredictions.", "Similarly to [88], the backdoor identification is based on the postulate that backdoors reduce the accuracy of the main task.", "As already discussed above, this does not always hold in practice.", "Auror clusters the individual parameters of the model updates using k-means [75].", "However, Auror is vulnerable to attacks where different clients inject different backdoors (multi-backdoor attacks [58]), as in our combined attack scenarios.", "In comparison, the significance-test-based algorithm of CrowdGuard can iteratively remove the backdoored models.", "FoolsGold assumes all clients to be non-IID [23] but can be circumvented by adaptive attacks [6] and fails to handle IID scenarios.", "Flame combines an outlier-detection-based approach with clipping and noising [58].", "However, the noising damages the model, while the outlier detection fails in some non-IID scenarios.", "In comparison, CrowdGuard can handle both IID scenarios and non-IID scenarios, even with disjunct data.", "DeepSight uses different techniques to extract fingerprints of the training data from the model updates to distinguish benign and backdoored models.", "Its classification relies on the assumption that poisoned models were trained on fewer labels than benign models [66].", "However, if, e.g., the benign clients are trained only on a single label (cf.", "experiments for q=0.0 in Sect.", "), this assumption does not hold, while CrowdGuard can effectively distinguish benign and backdoored updates even in this corner case." 
], [ "Mitigation Approaches", "Other approaches try to mitigate the backdoor without identifying the poisoned models.", "Yin et al.", "[86] proposed two approaches: One uses the parameters' median as a rule for aggregation, while the other one first trims the values of different models for a parameter and averages the remaining values.", "Krum [8] selects a single model as an aggregated model that minimizes the distance to a fraction of other models.", "However, the approaches of Yin et al.", "[86] and Blanchard et al.", "[8] do not work well in non-IID scenarios, preventing benign-but-outlier models from being included in the aggregation.", "Naseri et al.", "[55] propose using Differential Privacy (DP) for mitigating backdoors.", "However, besides the drawback that this strategy always reduces the model's performance, the DP level needs to be chosen manually.", "Here, a too high value makes the model unusable while a too low value is not effective.", "In comparison, CrowdGuard works without any dataset-specific parameters and relies on statistical values instead.", "In addition, approaches of this type have the drawback that the malicious clients cannot be identified and, hence, permanently excluded but will permanently try to inject the backdoor, requiring the respective defense to always perfectly mitigate the poisoned models." ], [ "Privacy Attacks and Defenses", "Privacy Attacks: There are multiple attacks against ML models that have the potential to hinder the wide-spread use of this technology in sensitive domains since they are capable of leaking private information.", "Engineers and researchers are aware of membership inference attacks [31], [76], property inference attacks [24], and label inference attacks [87].", "Additionally, inference methods that reconstruct the whole input have been developed [70] not only for centralized ML processes but also in the area of FL [82].", "Secure Aggregation Techniques: Various approaches have been proposed to prevent a curious-but-honest [22] or a fully malicious server [9], [52], [30] from accessing the local model updates.", "For example, Bonawitz et al.", "[9] use a secret-sharing protocol to allow the clients the calculation of noise that will cancel out during aggregation.", "However, this approach is not compatible with state-of-the-art backdoor defenses.", "Fereidooni et al.", "[22] use secure multi-party computation.", "However, these approaches create significant overhead for the clients and server.", "In the past, already different approaches proposed using TEEs for the aggregation step.", "In PPFL [52], the whole FL process, therefore training and aggregation, is performed inside a TEE.", "Hashemi et al.", "[30] implemented Krum [8] on SGX.", "In comparison, CrowdGuard not just implements the aggregation inside a TEE but in this paper, we proposed an architecture to securely leverage clients' data for backdoor detection, without taking any privacy risk for local models or datasets." 
], [ "Conclusion", "Privacy of sensitive data and defenses against poisoning attacks are central security considerations when it comes to Federated Learning (FL), which is a widespread collaborative learning setup.", "To satisfy these needs, we propose CrowdGuard, a model filtering defense against (targeted) poisoning attacks, that introduces a client feedback loop leveraging the clients' local data for model assessment, so that adversaries cannot adapt completely to the defense.", "In contrast to existing approaches, CrowdGuard does not only rely on the models' accuracies but considers the intermediate outputs of deep layers to identify abnormal behavior.", "Thereby, our proposed architecture preserves the privacy, integrity, and confidentiality of local models and consequently client data by stretching secure environments.", "Furthermore, our approach is independent of the clients' data distribution and thus detects state-of-the-art backdoor attacks in complicated non-IID scenarios.", "CrowdGuard has two core components: 1) A significance-based backdoor detection algorithm, that executes statistical tests operating on HLBIM, a novel metric based on the deep layer outputs of local models allowing to identify adversarial models.", "2) A stacked clustering scheme, which compensates rogue votes of adversarial clients during the feedback loop outperforming naïve majority voting methods.", "We evaluate our approach in various FL configuration settings and show, that the independence of those factors.", "Additionally, CrowdGuard does not harm the FL performance and is not circumventable by adaptive adversaries, that are aware of the defense, making it applicable in real-world scenarios.", "Significance Algorithm alg:significanceAlg.", "shows the algorithm that is executed by the individual clients during the validation to analyze the HLBIM values and determine, whether there is still a significance for the presence of poisoned models in the calculated PCA values, such that CrowdGuard needs to perform another pruning iteration.", "[thb] Significance Test on PC Values [1] Input: pc_dim1_values, A list of values Output: significant Indicator if the values are contain abnormalities Generate distributions median $\\leftarrow $ MEDIAN(pc_dim1_values) upper $\\leftarrow $ {} lower $\\leftarrow $ {} value in pc_dim1_values distribution_value $\\leftarrow $ value $-$ median value $>=$ 0 upper.append(distribution_value) lower.append(abs(distribution_value)) Significance tests mean_significant $\\leftarrow $ T-TEST(upper, lower) var_significant $\\leftarrow $ F-TEST(upper, lower) dist_significant $\\leftarrow $ D-TEST(upper, lower) outlier_quartil_significant $\\leftarrow $ OUTLIER_BOXPLOT(pc_dim1_values) outlier_sigma_significant $\\leftarrow $ OUTLIER_3$\\sigma $ (pc_dim1_values) Aggregate result significant $\\leftarrow $ mean_significant OR var_significant OR dist_significant OR outlier_quartil_significant OR outlier_sigma_significant Backdoor Types Research knows about various kinds of triggers for different scenarios, i.e.", "[42], [14], but we will only explain few of them, which are also used in our experiments: Pixel Backdoor: This is a backdoor in the domain of image classification, where a pixel pattern is placed on the benign input image [6], [26], [44], as visualized in [fig:trigger:cartrigger]Fig.", "REF and the label is changed to the desired $T_\\mathcal {A} $ .", "In another injection strategy called Distributed Backdoor, this trigger is distributed between multiple adversarial clients.", 
"Each client incorporates a fraction of the pattern into their local model.", "The final trigger is a combination of all the fractions [84].", "Label Swap: All samples of one label are swapped to $T_\\mathcal {A} $ .", "To create a poisoned dataset $\\mathcal {D}^{\\mathcal {A}}_i$ , only changes regarding the label mapping are mandatory in $\\mathcal {D}_i$ .", "Semantic Backdoor: In this case, the input data contain a specific characteristic within the benign image, that should trigger a swap to $T_\\mathcal {A} $ .", "Examples regarding the CIFAR-10 [37] dataset are the mapping of cars in front of a striped background (cf.", "[fig:trigger:cartrigger]Fig.", "REF ) to $T_\\mathcal {A} $ , but leave all other car samples like [fig:trigger:cartrigger]Fig.", "REF in its benign states [6]." ], [ "Significance Algorithm", "alg:significanceAlg.", "shows the algorithm that is executed by the individual clients during the validation to analyze the HLBIM values and determine, whether there is still a significance for the presence of poisoned models in the calculated PCA values, such that CrowdGuard needs to perform another pruning iteration.", "[thb] Significance Test on PC Values [1] Input: pc_dim1_values, A list of values Output: significant Indicator if the values are contain abnormalities Generate distributions median $\\leftarrow $ MEDIAN(pc_dim1_values) upper $\\leftarrow $ {} lower $\\leftarrow $ {} value in pc_dim1_values distribution_value $\\leftarrow $ value $-$ median value $>=$ 0 upper.append(distribution_value) lower.append(abs(distribution_value)) Significance tests mean_significant $\\leftarrow $ T-TEST(upper, lower) var_significant $\\leftarrow $ F-TEST(upper, lower) dist_significant $\\leftarrow $ D-TEST(upper, lower) outlier_quartil_significant $\\leftarrow $ OUTLIER_BOXPLOT(pc_dim1_values) outlier_sigma_significant $\\leftarrow $ OUTLIER_3$\\sigma $ (pc_dim1_values) Aggregate result significant $\\leftarrow $ mean_significant OR var_significant OR dist_significant OR outlier_quartil_significant OR outlier_sigma_significant" ], [ "Backdoor Types", "Research knows about various kinds of triggers for different scenarios, i.e.", "[42], [14], but we will only explain few of them, which are also used in our experiments: Pixel Backdoor: This is a backdoor in the domain of image classification, where a pixel pattern is placed on the benign input image [6], [26], [44], as visualized in [fig:trigger:cartrigger]Fig.", "REF and the label is changed to the desired $T_\\mathcal {A} $ .", "In another injection strategy called Distributed Backdoor, this trigger is distributed between multiple adversarial clients.", "Each client incorporates a fraction of the pattern into their local model.", "The final trigger is a combination of all the fractions [84].", "Label Swap: All samples of one label are swapped to $T_\\mathcal {A} $ .", "To create a poisoned dataset $\\mathcal {D}^{\\mathcal {A}}_i$ , only changes regarding the label mapping are mandatory in $\\mathcal {D}_i$ .", "Semantic Backdoor: In this case, the input data contain a specific characteristic within the benign image, that should trigger a swap to $T_\\mathcal {A} $ .", "Examples regarding the CIFAR-10 [37] dataset are the mapping of cars in front of a striped background (cf.", "[fig:trigger:cartrigger]Fig.", "REF ) to $T_\\mathcal {A} $ , but leave all other car samples like [fig:trigger:cartrigger]Fig.", "REF in its benign states [6]." ] ]
2210.07714
[ [ "National-scale bi-directional EV fleet control for ancillary service\n provision" ], [ "Abstract Deploying real-time control on large-scale fleets of electric vehicles (EVs) is becoming pivotal as the share of EVs over internal combustion engine vehicles increases.", "In this paper, we present a Vehicle-to-Grid (V2G) algorithm to simultaneously schedule thousands of EVs charging and discharging operations, that can be used to provide ancillary services.", "To achieve scalability, the monolithic problem is decomposed using the alternating direction method of multipliers (ADMM).", "Furthermore, we propose a method to handle bilinear constraints of the original problem inside the ADMM iterations, which changes the problem class from Mixed-Integer Quadratic Program (MIQP) to Quadratic Program (QP), allowing for a substantial computational speed up.", "We test the algorithm using real data from the largest carsharing company in Switzerland and show how our formulation can be used to retrieve flexibility boundaries for the EV fleet.", "Our work thus enables fleet operators to make informed bids on ancillary services provision, thereby facilitating the integration of electric vehicles." ], [ "Background and motivation", "Public authorities and the private sector face many challenges in transforming industries and infrastructure to meet sustainability goals.", "A key factor is the successful integration of renewable energies such as solar or wind power, which however poses difficulties to the power system due to the increased fluctuations in supply from renewable energy sources.", "At the same time, an increasing number of electric vehicles pose an additional burden on the grid [16].", "Both challenges inspired the development of smart charging or V2G technologies, where the charging flexibility of EVs are exploited as buffer storage to the power system.", "Smart charging and V2G were shown to have high potential benefits for peak load shaving [36], [7], [19], supporting the integration of renewable energies [22] while offering additional revenues to vehicle owners [18].", "Although smart charging and V2G have been studied for years [12], [29], they remain difficult to implement in practice for the following reasons: 1) they require control over a sufficiently large fleet of EVs, 2) they imply complex dispatching problems, and 3) they involve trading between the power system and the vehicle fleet operators.", "A major opportunity is the application of V2G for large-scale car sharing systems [11], since they can centrally manage large and significant resources for V2G operations.", "In contrast to the share of EVs on the private vehicle market (8% global sales sharehttps://www.ev-volumes.com), the share of EVs in car sharing systems is already high, with more than 66% of car sharing services offering fully or partially electric fleets [27].", "V2G may afford additional revenues to car sharing operators, but at the same time requires careful dispatching to minimize the negative impact on car availability for mobility purposes.", "Here, we propose an optimization approach for V2G operations that scales to a large fleet of EVs.", "Specifically, we first provide a monolithic formulation for optimizing charging schedules, and further develop relaxations that allow to decompose the problem by aggregated vehicle hubs such as car sharing stations.", "Our experiments demonstrate a strong improvement in runtime using our approach, enabling its application on a large-scale vehicle fleet.", "Furthermore, 
the optimization framework is tested on a new dataset from a car sharing operator in Switzerland.", "It is shown that our method scales to a fleet of 1440 electric vehicles in feasible runtime and can be employed to decrease energy costs while providing different kinds of grid services.", "Our optimization approach is therefore not only relevant for car sharing services but may support the control of V2G fleet operations in general." ], [ "Literature review and previous works", "An increasing number of works are tackling the problem of charging schedule optimization in the context of car sharing; [34] optimize charging times in a MINLP problem targeted at determining the fleet size of a car sharing system.", "[15] optimize the charging station setup and schedule for a car sharing fleet and provide interesting insights into the best decisions on charging station placement and minimum state of charge (SOC).", "Similarly, [3] formulate a two-step optimization problem in order to reduce the charging prices in a shared system, while retaining user satisfaction.", "Only some research has focused on large-scale, national-level optimization of V2G, since this is a more challenging problem if realistic constraints are considered.", "Furthermore, the typical scale of pilot projects in this context is small: in [25] the authors reviewed 54 pilot projects using EVs for providing grid services, reporting an average number of 26 EVs per pilot.", "In [21] a decentralized algorithm to optimize the charging (but not discharging) of 5000 EVs was presented.", "In [37] the authors present a rule-based two-stage hierarchical approach to coordinate the charging operations of thousands of EVs.", "While this research only considers smart charging and not V2G, [5] also include the possibility of V2G in the relocation optimization of one-way car sharing.", "In [39] the authors coordinated 500 EVs to achieve frequency regulation using rule-based control in a V2G setting.", "[38] consider the problem that is most closely related to our formulation, namely V2G strategies for car sharing, and propose a two-stage stochastic optimization employing a 24-hour receding horizon approach solved with a resolution of 15 minutes.", "They show that keeping integer variables leads to infeasible solution times (greater than 32 hours in their case), and propose to both relax all integer variables to continuous ones and use decomposition techniques in order to speed up the solution.", "However, they do not provide a scalability analysis of their algorithms, nor do they mention the number of considered EVs.", "In contrast to optimal control methods, others propose data-driven optimization with learning methods.", "For example, [31], [32], [30], [8], [20] train a reinforcement learning (RL) agent to decide on charging behavior.", "However, these methods are usually focused on finding decision policies for single EVs, since finding the optimal joint actions for a fleet of EVs, which is the focus of our work, is a much more challenging task, in general requiring a multi-agent RL strategy, which usually involves optimizing over a large decision space.", "The authors in [26] propose RL for guiding charging decisions for a whole vehicle fleet at once, reducing the action space by pooling EVs with similar energy requests; however, this was done without considering external inputs such as an aggregated profile, and disregarding V2G.", "In the following, we start by describing a generic formulation needed to effectively synchronize the EV fleet charging and discharging 
operations, and later explain how relaxing some conditions can lower the overall computational complexity.", "The common setting for all the problem formulations is the following: a car sharing provider operating a stationary fleet (as opposed to a free-floating one) is willing to jointly optimize all its EVs' operations in order to reduce its own operating costs, whether by optimizing for a dynamic price, by increasing its own self-consumption if local PV generation is present, or by providing services to the electric grid.", "Furthermore, the provider knows at least an approximate schedule of the future EV locations, in terms of their presence at a given charging station and the driven mileage, for the next control horizon.", "This can be realistically achieved using information from booking apps and by modeling historical data.", "Based on these assumptions, we can estimate the lower bounds for the EVs' battery energy constraints needed to satisfy all their foreseen mobility demand, as we will show in section REF .", "These time series are required to formulate the optimal control problem, as explained in the following section." ], [ "Monolithic formulations", "Given a control horizon of $T$ steps and $n_s$ stations, each station hosting $n_{v, s}$ vehicles, and calling $\mathcal {T}$ and $\mathcal {S}$ the sets of times and stations, the monolithic problem can be described as: $&u^{*} = \underset{\mathcal {X}}{\text{argmin}}\,{F(u) + Q(x)} \\&x_{t+1, v} = A_v x_{t, v} + B_v u_{t, v} -\Delta e_{t, v} \quad \forall t \in \mathcal {T}, v \in \mathcal {V} \\&u \succcurlyeq 0 \\&u_c \preccurlyeq x_c u_{c, max}^T \qquad u_d \preccurlyeq (1-x_c) u_{d, max}^T \\&u_c \preccurlyeq c u_{c, max}^T \qquad u_d \preccurlyeq c u_{d, max}^T \\&\sum _{v \in \mathcal {V}_{t, s}} u_{c, t, v} - u_{d, t, v} \in \mathcal {U}_s \quad \forall t \in \mathcal {T}, s \in \mathcal {S} \\&\sum _{v \in \mathcal {V}_{t, s}} c_{t, v} \le n_{max, s} \quad \forall t \in \mathcal {T}, s \in \mathcal {S} $ where $x \in {R}^{T \times \sum _s n_{v,s}}$ is the matrix containing the battery state of all the EVs in kWh.", "For the sake of clarity, table REF reports all the parameters and optimization variables $\mathcal {X}$ of the problem with the associated dimensions and domains.", "Table: Variables, parameters and constants of the EV optimization problem.", "Here $F(u): {R}^{T \times \sum _s n_{v,s}} \rightarrow {R}$ and $Q(x): {R}^{T \times \sum _s n_{v,s}} \rightarrow {R}$ are two scalar convex functions.", "In particular, $F(u)$ is a cost function associated with the charging and discharging actions of the EVs; it depends on the specific business model and will be further specified in section REF .", "We now explain the problem constraints in detail.", "Equation () describes the EVs' dynamics, taking into account self-discharge and asymmetric charging and discharging efficiencies, encoded in the discrete dynamics matrices $A_v \in {R}$ and $B_v \in {R}^2$ , obtained from the continuous ones through exact discretization [28]: $\begin{array}{l}A=e^{A_c \delta t} \\B=A_c^{-1}\left(A-I\right) B_c\end{array}$ where $A_c=\frac{1}{\eta _{sd}}$ and $B_c=[\eta _{ch}, \frac{1}{\eta _{ds}}]$ , and $\eta _{sd}$ , $\eta _{ch}$ and $\eta _{ds}$ are the characteristic self-discharge constant and the charge and discharge efficiencies, respectively.", "Since $B_c$ defines an asymmetric behaviour in charging and discharging (even with equal charging/discharging coefficients), solving the battery scheduling requires the use of two different variables for the charging and discharging powers of each EV." 
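A minimal NumPy sketch of this discretization step for the scalar battery model follows; the sign conventions (stored energy decays over time, discharging drains the battery), which the compact notation above leaves implicit, as well as all numerical values, are assumptions made for this sketch.

```python
import numpy as np

def discretize(eta_sd: float, eta_ch: float, eta_ds: float, dt: float):
    """Return the discrete (A, B) from the continuous A_c, B_c."""
    A_c = -1.0 / eta_sd                      # self-discharge time constant
    B_c = np.array([eta_ch, -1.0 / eta_ds])  # charging adds, discharging drains
    A = np.exp(A_c * dt)                     # A = e^{A_c * dt} (scalar case)
    B = (A - 1.0) / A_c * B_c                # B = A_c^{-1} (A - I) B_c
    return A, B

# 15-minute step (0.25 h), efficiencies and time constant are placeholders.
A, B = discretize(eta_sd=500.0, eta_ch=0.95, eta_ds=0.95, dt=0.25)
```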
"These are concatenated and denoted as a whole as $u = [u_c, u_d] $ , where $u_c, u_d \in {R}^{T, n_v}$ are the charging and discharging operations of all the EVs in kW.", "$\Delta e \in {R}^{T, n_v}$ is the (sparse) matrix containing the energy lost during the last EV trip, defined as: $\Delta e_{t, v} ={\left\lbrace \begin{array}{ll}e_{t_d(t), v} \quad \text{if} \quad \Delta _t l_{t, v} > 0\\0 \quad \text{otherwise}\end{array}\right.", "}$ where the first condition in equation (REF ) denotes times at which the location matrix has a positive discrete derivative, that is, when the $v_{th}$ EV connects to a charging station.", "Here $e \in {R}^{T \times n_v}$ is the (sparse) energy constraint matrix, containing the energy that the EVs require at departure times, while $t_d(t)$ is the last departure time seen at step $t$ .", "In other words, the minimum energies required at departure times and encoded in $e$ are equal to the energy drops $\Delta e_{t, v}$ that need to be reintegrated at the next arrival time.", "The energy requirements stored in $e$ are assumed to be known at solution time for the next solution horizon, and they are estimated starting from the total driven kilometers of the last trip, as explained in section REF .", "Since it is not always possible to guarantee that all the EVs satisfy the energy requirements stored in $e$ at departure time, state constraints on the EVs' SOC are taken into account as threshold soft constraints encoded in $Q(x)$ : $Q(x) = k \Vert \text{max}(e - x, 0)\Vert _2^2$ where $k$ is a large constant; this allows retrieving feasible solutions even if some EVs are not fully charged.", "Equation () states that the charging and discharging variables $u_c$ and $u_d$ are non-negative quantities.", "Equation () makes use of the binary variable $x_c$ , which indicates whether a given EV is charging, to encode the bilinear constraint $u_c\odot u_d = 0$ , where $\odot $ is the Hadamard product; this encodes the fact that each EV cannot charge and discharge simultaneously.", "It must be noted that this condition is sometimes naturally satisfied by the problem, depending on the objective function $F(u)$ , as shown for example in [13].", "However, this is not always guaranteed; for example, if we want to implement peak shaving in the presence of PV power plants, EVs could occasionally decide to charge and discharge at the same time, exploiting the round-trip losses to dissipate more power and perform valley filling when the overall station network is a net energy producer.", "The same reasoning can be applied to quadratic profile tracking, as in the case of tracking a given power profile for providing services to the grid.", "In equation (), the binary variable $c \in \lbrace 0,1\rbrace ^{T \times n_v}$ is used to force the charging and discharging powers to zero when the car is not located at a station.", "Finally, calling $\mathcal {V}_{t,s}$ the set of EVs located at station $s$ at time $t$ and $\mathcal {U}_s$ the rectangular box set of power limits at station $s$ , the last two equations () and () represent the station constraints on the maximum power and on the available number of charging stations, respectively.", "The problem composed of equations REF - is very general; however, it is computationally expensive: due to the presence of the soft constraint on the minimum required energy (REF ) (and to the possible quadratic objectives included in $F(u)$ ), the problem belongs to the MIQP class, with a number of variables in the order of $O(Tn_v)$ , where in our case $n_v$ is in the order of $10^3$ and $T$ is equal to 96, since we consider 15-minute steps and a daily control horizon." 
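As an illustration of the trip-energy definition above, the following NumPy sketch assembles $\Delta e$ from the location matrix $l$ and the energy-requirement matrix $e$; array shapes follow the paper's notation, while the data themselves are hypothetical.

```python
import numpy as np

def build_delta_e(l: np.ndarray, e: np.ndarray) -> np.ndarray:
    """l, e: (T, n_v) arrays; returns the sparse matrix of trip energy drops."""
    T, n_v = l.shape
    delta_e = np.zeros((T, n_v))
    arrivals = np.diff(l, axis=0) > 0       # positive discrete derivative of l
    for t, v in zip(*np.nonzero(arrivals)):
        t = t + 1                           # np.diff shifts indices by one
        past_departures = np.nonzero(e[:t, v])[0]
        if len(past_departures):            # energy required at the last
            t_d = past_departures[-1]       # departure seen before step t
            delta_e[t, v] = e[t_d, v]
    return delta_e
```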
"We now discuss how the original problem can be simplified by relaxing or removing some of its constraints, and the implications for the problem's formulation hypotheses." ], [ "Strictly stationary mobility model", "If the sharing model is strictly stationary, meaning that the EVs are permanently assigned to a charging station and can only be plugged in there, we can relax equations () and (), which encode the maximum power and connection limits per station.", "These can be rewritten as: $&\sum _{v \in \mathcal {V}_{s}} u_{c, t, v} - u_{d, t, v} \in \mathcal {U}_s \quad \forall t \in \mathcal {T}, s \in \mathcal {S} \\&\sum _{v \in \mathcal {V}_{s}} c_{t, v} \le n_{max, s} \quad \forall t \in \mathcal {T}, s \in \mathcal {S} $ The only difference to equations () and () is that the set $\mathcal {V}_{s}$ is no longer time-dependent.", "This effectively removes the interlink between different stations given by EVs travelling between them; in other words, the sets of EVs belonging to different stations will not influence each other directly, but only by means of the system-level objective $F(u)$ .", "Since the rest of equations () - () do not interlink stations, the problem can be easily decomposed.", "It must be noted that the original problem can also be decomposed; however, if the mobility model is not strictly stationary, it is likely that the influence graph between EVs is dense, meaning that the behaviour of a given EV can be influenced by a high number of other EVs, depending on the routing between stations.", "This would require introducing decoupling variables for all the state and control variables, which involves passing messages of size in the order of $O(Tn_vn_s)$ at each iteration.", "On the contrary, when $F(u)$ is an aggregate function, as in all the cases presented in this paper, decomposing the problem requires messages of size in the order of $O(Tn_s)$ at each iteration.", "Since $n_s << n_v$ and $n_v$ is in the order of thousands, the strictly stationary hypothesis results in a data transmission reduction in the order of $10^4$ .", "A second hypothesis is that each station has enough chargers to accommodate all its assigned EVs at the same time, i.e., that stations are not downsized.", "This hypothesis, combined with the previous one, allows us to completely remove the binary variable $c$ indicating whether an EV is connected to a charger.", "In fact, equation () is no longer needed, and equations () can be replaced with: $u_c \preccurlyeq l u_{c, max}^T \qquad u_d \preccurlyeq l u_{d, max}^T $ where $l$ is the location matrix parameter, with entries $l_{t, v}$ equal to 0 if the $v_{th}$ vehicle is not located at any station at time $t$ .", "A third possible hypothesis is to consider unidirectional charging only: this does not allow considering direct discharge of EVs into the main grid, nor energy arbitrage between EVs.", "Considering currently available solutions, this is the setting with the lowest technological burden, which could already be implemented by most EV car sharing providers.", "Note that it would still be possible to provide services to the grid by modulating the overall charge.", "This hypothesis simplifies the dynamics equations by removing the discharging variable $u_d$ .", "As a result, the bilinear constraints () can be dropped, removing the binary variable $x_c$ .", "If this hypothesis is combined with the two previous ones, the overall problem becomes linear or quadratic, depending on the form of $F(u)$ , allowing the use of a larger set of solvers and substantially reducing the computational complexity." 
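Putting the relaxed constraints together, the following CVXPY sketch formulates a single station's problem under the first two hypotheses (strictly stationary fleet, stations not downsized), keeping bidirectional charging but omitting the bilinear constraint, which is handled later; all numerical values, the simple cost, and the sign convention for B (as in the discretization sketch above) are illustrative assumptions.

```python
import cvxpy as cp
import numpy as np

T, n_v = 96, 5                             # 15-minute steps over one day, 5 EVs
A = 0.9995                                 # discrete self-discharge term
B = np.array([0.95 * 0.25, -0.25 / 0.95])  # kWh per kW of (u_c, u_d) per step
l = np.ones((T, n_v))                      # location matrix (1 = at station)
delta_e = np.zeros((T, n_v))               # trip energy drops (kWh)
e = np.zeros((T, n_v))                     # energy required at departures (kWh)
u_max, p_max, k = 11.0, 30.0, 1e3          # charger kW, station kW, penalty

u_c = cp.Variable((T, n_v), nonneg=True)
u_d = cp.Variable((T, n_v), nonneg=True)
x = cp.Variable((T + 1, n_v))

constraints = [x[0] == 20.0, x >= 0.0, x <= 40.0]   # initial energy, capacity
for t in range(T):
    constraints += [x[t + 1] == A * x[t] + B[0] * u_c[t] + B[1] * u_d[t]
                    - delta_e[t]]
constraints += [u_c <= u_max * l, u_d <= u_max * l,          # location bounds
                cp.abs(cp.sum(u_c - u_d, axis=1)) <= p_max]  # station power box

Q = k * cp.sum_squares(cp.pos(e - x[1:]))    # soft SOC requirement Q(x)
C = cp.sum(u_c - u_d) * 0.25                 # e.g., minimize net energy import
cp.Problem(cp.Minimize(C + Q), constraints).solve()
```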
], [ "Decomposition and business models", "In this section we show how the original problem can be decomposed by stations under the hypotheses of a strictly stationary mobility model and of stations that are not downsized.", "As we keep the bidirectional hypothesis, we still need to include the bilinear constraint $u_c \odot u_d = 0$ , handled by equations () and by the integer variable $x_c$ .", "In the next section we will discuss alternative methods to handle this bilinear constraint.", "Under the aforementioned hypotheses, the problem can be decomposed using the alternating direction method of multipliers (ADMM) [4].", "Following the standard ADMM procedure, since we want to decompose per station, we should introduce $n_s$ auxiliary variables representing the total power at each charging station.", "However, since in our case we are only interested in objectives computed at the aggregation level of the stations or of the overall fleet, $F(u)$ can be written in the form $F(u) = S\left(\sum _{s \in \mathcal {S}} p_s(u)\right) + \sum _{s \in \mathcal {S}} C(p_s(u))$ where $S$ is a system-level objective, that is, the objective to minimize at fleet level, and $C$ is a cost function to be minimized at station level.", "Here $p_s(u) = \hat{p}_{s, load} - \hat{p}_{s, pv} +\sum _{v\in \mathcal {V}_s} (u_{c,v} -u_{d,v})$ is the sum of the forecasted base load, the PV production (if any), and the charging and discharging operations of all EVs belonging to station $s$ .", "Considering this form for $F(u)$ , we only need to introduce one additional variable $z \in {R}^{T}$ , representing the average power of the $n_s$ controlled stations.", "The final problem before the decomposition can be written as: $&u^{*} = \underset{\mathcal {X}}{\text{argmin}}\,{S(z n_s) + \sum _{s\in \mathcal {S}} C(p_s) + Q(x)} \\s.t.", "& (\ref {eq:state_dyn}), (\ref {eq:u_constr}), (\ref {eq:u_bilin}), (\ref {eq:u_loc_rel}), (\ref {eq:s_constr_c_rel}), (\ref {eq:s_constr_u_rel}) \\& z = \frac{1}{n_s}\sum _{s\in \mathcal {S}} p_s(u) = \overline{p}_s(u) $ We can then proceed to formulate the augmented Lagrangian objective function in scaled form: $L_{\rho } = S(z n_s) + \sum _{s\in \mathcal {S}} C(p_s) + Q(x) + \frac{\rho }{2} \Vert \overline{p}_s(u) - z + \lambda \Vert _2^2$ Since problem (REF )-() can be seen as a sharing problem, we can further simplify the standard ADMM following the description in [4] for this specific case.", "As the choice of the ADMM parameter to achieve a good convergence rate can be problematic in the presence of equality constraints, we use a slightly different form, namely the linearized ADMM [14], [35]; briefly speaking, this form introduces a quadratic penalty for deviating from the decisions at the previous iteration.", "We can then write the primal and dual variable updates as: $& u_s^{k+1} = \underset{u_s}{\text{argmin}}\,{C(p_s(u_s)) + Q(x_s) + \frac{\rho }{2} \Vert p_s(u_s) - r_u^k \Vert _2^2} \\&\qquad \qquad \qquad + \frac{\gamma }{2}\Vert u_s -u_s^{k}\Vert _2^2 \\& \qquad \qquad s.t.", "\ (\ref {eq:state_dyn}), (\ref {eq:u_constr}), (\ref {eq:u_bilin}), (\ref {eq:u_loc_rel}), (\ref {eq:s_constr_c_rel}), (\ref {eq:s_constr_u_rel}) \\& z^{k+1} = \underset{z}{\text{argmin}}\,{S(z n_s) + \frac{\rho }{2} \Vert r_z - z\Vert _2^2}\\& \lambda ^{k+1} = \lambda ^{k} +\overline{p}_s(u_s)^{k+1} - z^{k+1} $ 
z^k\\Vert _2^2}\\\\& \\lambda ^{k+1} = \\lambda +\\overline{p}_s(u_s)^{k+1} - z^{k+1} $ where $u_s = [u_v^T]^T \\ \\forall \\ v \\in \\mathcal {V}_s$ and $u_s = [x_v^T]^T \\ \\forall \\ v \\in \\mathcal {V}_s$ are the vectors of operations and states of all the EVs belonging to station $s$ .", "Following [4], $r_u^k = p_s(u_s)^k - \\overline{p}_s(u_s)^k + z^k -\\lambda ^k$ and $r_z^k = \\overline{p}_s(u_s)^{k+1} +\\lambda ^k$ are the reference signals for the $u$ and $z$ update.", "Line () contains the dumping term of the linearized ADMM form for the primal variable $u_s$ update, $\\gamma $ being a dumping parameter.", "The two functions $C(p_s(u_s))$ and $S(zn_s)$ , representing respectively the station and the fleet objectives, can be used to tackle different business models.", "For example, for the station level, the following cases can be easily considered: Minimize energy costs.", "Called $p_{buy} \\in {R}^T$ and $p_{sell} \\in {R}^T$ the time-dependent buying and selling prices in $cts/kWh$ .", "In the presence of local generation e.g.", "due to PV power plants at the station's location, the cost function can be either positive or negative, depending on the overall power at a given time and can be expressed as in equation (REF ).", "$C(p_{s,t}) ={\\left\\lbrace \\begin{array}{ll}p_{buy,t} p_{s,t} , & \\text{if } \\quad p_{s,t} \\ge 0 \\\\p_{sell,t} p_{s,t} , & \\text{otherwise}\\end{array}\\right.", "}$ The cost can be thought of as the maximum over two affine functions (the first and second line of equation (REF ), respectively).", "If $p_{buy}$ is always greater than $p_{sell}$ we can minimize energy costs by introducing an auxiliary variable $y \\in {R}^T$ representing the station's energy costs.", "We can restrict the feasible space for $y$ to the epigraph of the cost function $C(p_s(u_s))$ by adding the two following constraints to the station problem (REF )-(): $y &\\ge p_{buy} p_s \\\\y &\\ge p_{sell} p_s$ Minimizing $y$ then guarantees that its value at the optimum, $y^*$ , will lie on the epigraph's lower boundary (and will thus represents the prosumer's total costs).", "In this case $C(p_s(u_s)) = \\sum _t^T y_t \\delta t /3600$ where $\\delta t$ is the considered time step.", "Even without setting a system-level objective, this strategy can result in some EVs performing arbitrage, charging at low price times and later discharging to other EVs if the price swing is high enough to compensate for the round-trip efficiency.", "Maximize self consumption - minimize energy imports from the grid.", "This can be achieved setting $C(p_s(u_s)) = \\sum _t p_{s,t}(u_{s,t})$ .", "If the term $A_v$ in the dynamic state equation () is less than one, i.e.", "if self-discharging is considered, this will result in a delay-charging strategy, pushing charging operations closer to EV departure times.", "Minimize charging times.", "If we want to charge EVs up to their required SOC at departure as soon as possible, we can minimize $C(p_s(u_s)) = \\sum _t p_s(u_s)d(t)$ where $d(t)$ is a convex discount function weighting less initial steps.", "Perform peak shaving.", "The most straightforward way is to set $C(p_s(u_s)) = \\Vert p_s(u_s)\\Vert _2^2$ .", "However, pure peak shaving has usually no economic drivers; the fleet manager is usually interested in reducing its total costs rather than having a flat profile per-se.", "Since peak tariffs are usually computed on the maximum power peak attained on a monthly basis, a more appropriate approach could be to implement a lexicographic strategy, at 
"In the same way, the system-level objective $S(zn_s)$ can be used to address several fleet-level business cases: Intra-day cost minimization.", "In the case in which the fleet manager has a deal to buy energy at intra-day prices, it can follow the same strategy illustrated for the cost minimization objective at the station level and set $S(zn_s) = \sum _t^T y_t \delta t /3600$ .", "Profile tracking.", "A standard quadratic profile tracking can be used to make the fleet dispatchable, setting $S(zn_s) = \sum _t^T (z_t n_s-r_t)^2$ , where $r$ is a reference profile to be tracked.", "However, to quantify revenues from grid regulation services and flexibility calls, a linear cost function is more appropriate, such as equation (REF ), which we used in the case study presented in section REF ." ], [ "Bilinear constraints handling", "We now present the proposed method to handle the bilinear constraint $u_c \odot u_d = 0$ inside the ADMM iterations of the decomposed problem (REF )-(), without using the integer variable formulation encoded in equation ().", "Linear complementarity constraints arise in a variety of problems, from bilevel optimization to eigenvalue complementarity problems.", "Given a scalar objective function $f(x, y)$ of two variables $x, y \in {R}^T_+$ , the simplest form of the complementarity constraint problem can be written as: $\underset{z}{\text{argmin}}\,& f(z) \\s.t.& \ x^Ty=0$ where $z = [x^T, y^T]^T$ .", "Depending on the complexity of the underlying problem, which is in general NP-hard, different iterative methods exist to find a feasible solution or a stationary point for this kind of problem [17].", "One of the most used strategies is the one implemented in the YALMIP package for Matlab, which uses BMIBNB, the built-in solver for non-convex problems.", "This procedure sequentially refines an upper and a lower bound for the problem, found using a local non-linear solver and a convex solver, respectively.", "The next iterate is then found using a standard branch-and-bound logic, splitting the feasible space into two new boxes [1].", "The convex approximation for bilinear problems is found using a McCormick formulation.", "In [6], the authors proposed tighter bounds for bilinear problems exploiting McCormick relaxations and a sequence of MILP problems.", "The McCormick envelope has also been proposed for the relaxation of factorable functions by systematic subgradient construction [23], a concept similar to automatic differentiation.", "In this work we have chosen a different approach, relying on the following observation: since we are solving the main problem iteratively, we want to exploit an iterative relaxation running in parallel with the standard ADMM iterations, without relying on branch-and-bound methods.", "Running a partial optimization for one part of the objective function inside ADMM is theoretically justified by the generalized form of ADMM (GADMM) introduced in [9].", "The GADMM guarantees convergence even in the case in which the local (stations') problems are only partially solved.", "This allows us to use a first-order Taylor expansion around the previous solution to approximate the complementarity constraint $x \odot y=0$ , in combination with a standard ADMM using Lagrangian relaxation.", "We can write the first-order Taylor expansion around the previous solution as: $\begin{split}\tilde{c}( z^k, z^{k-1}) = x^{k-1} \odot y^{k-1} &+x^{k-1} \odot (y^k -y^{k-1}) \\ &+ y^{k-1} \odot (x^k -x^{k-1})\end{split}$" 
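A small NumPy illustration of this surrogate and of the analytical $w$-update built on it (cf. lines 3-4 of the algorithm below); $\rho$, $\gamma$, and the vectors are illustrative placeholders.

```python
import numpy as np

def taylor_surrogate(x_k, y_k, x_prev, y_prev):
    # First-order expansion of the elementwise product x ⊙ y around the
    # previous iterate (x_prev, y_prev).
    return (x_prev * y_prev + x_prev * (y_k - y_prev)
            + y_prev * (x_k - x_prev))

rho, gamma = 1.0, 1.0
x_prev, y_prev = np.ones(4), np.ones(4)
x_k, y_k = np.full(4, 0.5), np.full(4, 0.2)
lam = np.zeros(4)
c_tilde = taylor_surrogate(x_k, y_k, x_prev, y_prev)
w_next = rho / (rho + gamma) * (c_tilde + lam)  # analytical w-update (line 3)
lam_next = lam + w_next - c_tilde               # dual update (line 4)
```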
$\\begin{split}\\tilde{c}( z^k, z^{k-1}) = x^{k-1} y^{k-1} &+x^{k-1}(y^k -y^{k-1}) \\\\ &+ y^{k-1}(x^k -x^{k-1})\\end{split}$ We propose to use this to minimize $f(z)$ while respecting the constraint, as reported in algorithm REF .", "texttt $z_0 = [x^T, y^T]^T$ , $w_0$ , $\\lambda $ chosen at random, parameters $\\rho $ , $\\gamma $ stop condition not met $z^{k+1} \\leftarrow \\underset{z}{\\text{argmin}}\\, f(z) + \\frac{\\rho }{2}\\Vert z-\\tilde{c}(z^k, z^{k-1}) +\\lambda _{k}\\Vert $ $w^{k+1} \\leftarrow \\frac{\\rho }{\\rho +\\gamma }(\\tilde{c}(z^k, z^{k-1})+\\lambda ^k)$ $\\lambda ^{k+1} \\leftarrow \\lambda _{k} + w^{k+1} -\\tilde{c}(z^k, z^{k-1})$ $z^{k+1} = \\alpha z^{k+1} + (1-\\alpha ) z^k$ Taylor relaxation Here $w$ is an auxiliary variable representing $x\\odot y$ , which we want to shrink to zero; lines 2-4 are standard ADMM iterations where line 3 is the analytical solution of the minimization of the Lagrangian function with respect to $w$ ; finally line 5 is a dumped iteration over the last solution, with dumping parameter $\\alpha $ .", "A different approach is proposed by Wang et al.", "in [33], where they provided algorithm REF , which is a standard application of ADMM to two objective functions, $f(z)$ and ${I}_{x^Ty=0}$ , where ${I}_{x^Ty=0}$ is the feasible set for the complementarity constraint.", "Contrary to algorithm REF that we propose, this approach guarantees that the problem always satisfies the complementary constraint at each iteration, due to the projection onto the feasible space of ${I}_{x^Ty=0}$ at line 3.", "The authors proved that algorithm REF converges into a stationary point for the bilinear constrained problem when $f(z)$ is a smooth function.", "Algorithms REF and REF are appealing since they are easily implementable and don't require to sequentially explore the whole solution space with a branch-and-bound strategy.", "texttt $z_0 = [x^T, y^T]^T$ , $\\tilde{z}_0 = [\\tilde{x}^T, \\tilde{y}^T]^T$ , $\\lambda $ chosen at random, parameter $\\rho $ stop condition not met $z^{k+1} \\leftarrow \\underset{z}{\\text{argmin}}\\, f(z) + \\frac{\\rho }{2}\\Vert z-\\tilde{z}^k +\\lambda _{k}\\Vert $ $\\tilde{z}^{k+1} \\leftarrow {\\pi }_{\\tilde{x}^T\\tilde{y}=0}(\\tilde{z}^k-\\lambda ^k)$ $\\lambda ^{k+1} \\leftarrow \\lambda _{k} + z^{k+1} - \\tilde{z}^{k+1}$ Wang relaxation" ], [ "Data analysis and preprocessing", "We test our optimization framework on a dataset made available by a car sharing operator managing a fleet of around 3000 vehicles.", "The dataset covers all car reservations from 1st of January 2019 until 31st of July 2020, thereby including the period before the COVID-19 pandemic as well as the first wave.", "In total, there are around 2 million bookings during this period, comprising 140880 unique users and 4461 vehicles.", "Due to the setting of the considered car sharing service, only a small fraction of trips are one-way (0.3%), and during the observation period only 3.5% trips involved electric vehicles.", "Furthermore, the number of vehicles per station is low on average in the considered system.", "73% of all stations offer a single vehicle, further 15% only two vehicles.", "5% of all stations have five or more vehicles.", "The limited availability of parking slots per station also explains the low fraction of one-way trips.", "Figure: Reservations by vehicleFigure: Reserved vehicles by time of the dayWe first analyze the flexibility of vehicles for V2G operations based on their daily and overall demand.", "fig:resbyvehicle shows the 
], [ "Data analysis and preprocessing", "We test our optimization framework on a dataset made available by a car sharing operator managing a fleet of around 3000 vehicles.", "The dataset covers all car reservations from the 1st of January 2019 until the 31st of July 2020, thereby including the period before the COVID-19 pandemic as well as the first wave.", "In total, there are around 2 million bookings during this period, comprising 140880 unique users and 4461 vehicles.", "Due to the setting of the considered car sharing service, only a small fraction of trips are one-way (0.3%), and during the observation period only 3.5% of trips involved electric vehicles.", "Furthermore, the number of vehicles per station is low on average in the considered system.", "73% of all stations offer a single vehicle, and a further 15% only two vehicles.", "5% of all stations have five or more vehicles.", "The limited availability of parking slots per station also explains the low fraction of one-way trips.", "Figure: Reservations by vehicle.", "Figure: Reserved vehicles by time of the day.", "We first analyze the flexibility of vehicles for V2G operations based on their daily and overall demand.", "fig:resbyvehicle shows the histogram of reservations by vehicle.", "Clearly, there are strong differences in the usage patterns of different vehicles.", "48% of the vehicles have at least one reservation in less than 50% of the days.", "These findings imply a strong opportunity for the car sharing operator to utilize its fleet for V2G.", "However, most flexibility is available during the night: fig:usagebyhour shows a bell-shaped curve of vehicle utilization over the course of a day, peaking in the afternoon.", "On average, 21% of vehicles are reserved at any given time.", "Last, we validate the assumption that most car reservations are known in advance, as this is necessary for optimizing the charging schedule.", "Concerning the spontaneity of the bookings, around 34% of cars are reserved more than a day in advance, whereas 20% of the reservations are made less than an hour before the reservation period.", "The data are discretized to a temporal resolution of 15-minute steps.", "We remove cancelled trips but include service reservations necessary for relocating vehicles.", "We use the reservation period, in contrast to the actual driving period, to define the time span of car usage.", "However, this leads to overlapping trips in some cases, when a returned vehicle was taken by the next user before the end of the original reservation period.", "The reservation period is therefore cut to the end of the previous drive / start of the next drive if necessary.", "Reservations without a ride are assumed to be cancelled and are not taken into account." ], [ "ICE mobility patterns and State of Charge modeling", "The car sharing service operator has set the ambitious goal of electrifying its entire fleet by 2030.", "In order to provide a realistic simulation of the future fleet, and to demonstrate how our optimization approach scales with the number of stations, we propose to utilize the booking patterns of ICE vehicles as projected EV usage patterns, under the assumption of a similar driving behavior.", "Since only 3.5% of all trips are EV trips, this scales up the number of reservations by a factor of more than 25.", "In consultation with the car sharing operator, we assign an EV model to each ICE vehicle based on the car category in the car sharing operator service, e.g., \"Budget\", \"Combi\", \"Transporter\", etc.", "For example, all vehicles of the category \"Transporter\" were simulated as Mercedes-Benz eVito vehicles, and all in category \"Budget\" were assigned the VW e-up model.", "Two pieces of information are needed as input to the optimization problem: when a vehicle is plugged in at a station, and the required state of charge at the start of a reservation.", "Due to the modeling of ICEs as EVs and the lack of SOC data in the provided dataset, we approximate the latter by the number of driven kilometers.", "Given the vehicle specifications (i.e., battery range and battery capacity), we compute the required SOC by multiplying the number of driven kilometers by the average energy consumption."
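, "As a sketch, the SOC requirement for a booking then reduces to simple arithmetic; the function below is our own illustration, with made-up vehicle numbers loosely resembling a VW e-up (the battery capacity and consumption values are assumptions, not data from the operator).", "
def required_soc(km_driven, consumption_kwh_per_km, capacity_kwh):
    # Energy needed for the booked trip, expressed as a fraction
    # of the battery capacity and clipped at a full battery.
    energy_kwh = km_driven * consumption_kwh_per_km
    return min(energy_kwh / capacity_kwh, 1.0)

print(required_soc(120, 0.13, 32.3))  # ~0.48 for a 120 km booking
"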
], [ "Formulations comparison", "We evaluated the numerical advantage of the proposed formulations in two steps.", "At first, we compared the monolithic formulation (REF )-() to the decomposed one (REF )-() using integer variables for handling bilinear constraints.", "In a second step, we evaluated the decrease in computational time in using the proposed linear methods for the bilinear constraints in the decomposed problems.", "For both these comparisons we vary the range of total EVs and the horizon length.", "The stations' objective function was set to energy cost minimization, while the system level objective was set to a profile tracking with a zero reference profile.", "The results of the first comparison are reported in the heatmaps of figure REF .", "For this comparison, we solved the monolithic problem using GUROBI with standard absolute and relative tolerances, while the stopping criterion for the decomposed formulation is a joint condition on the primal and dual residual, as described in $§$ 3.3.1 of [4], using $\\epsilon ^{abs}=1e-6$ and $\\epsilon ^{rel}=1e-4$ , respectively.", "The first two heatmaps refer to the total computational time of the decomposed problem and the monolithic formulation, respectively.", "The last plot shows the ratio of the two, a value lower than one meaning a lower computational time for the decomposed formulation.", "As expected, the computational advantage over the monolithic formulation increases with both the number of EVs and the length of the horizon.", "The experimental data for up to 360 vehicles shows a clear trend; the computational time of the decomposed problem for the most time consuming configuration being roughly 20% of the time needed by the monolithic formulation.", "The second comparison was done using a fixed number of iterations, which was set to 800.", "At first, we tuned the parameters of algorithm REF and REF w.r.t.", "the solution reached by the integer formulation, using a random sampling strategy over the configuration with 144 EVs and an 18 steps horizon.", "The parameters ($\\rho $ and $\\gamma $ for REF and $\\rho $ for REF , respectively) were then held constant over the different combinations of EVs and horizon lengths.", "We found that both the algorithms' performance was stable for a large range of parameters values.", "The computational times are shown in figure REF , where the first heatmap refers to the Taylor relaxation, the second one to the integer formulation and the last is the ratio of the two.", "As the computational advantage is due to the change of the class of the problem from MIQP to QP, we found a negligible difference in the computation times between algorithm REF and REF , and thus here report only results for the Taylor relaxation.", "Also in this case there is a clear trend in the reduction of computational time with increasing number of EVs and steps.", "The highest reduction was found for the most time consuming configuration of 577 EVs and 18 steps, with the Taylor relaxation using roughly 35% of the time needed by the integer formulation; once again we expect this value to get lower for problems with higher number of EVs.", "Figure: Computational time for different number of timesteps and considered EVs for the decomposed (left plot), the monolithic formulation (center plot) and the ratio of the two (right plot).Figure: Computational time for different number of timesteps and considered EVs for the decomposed problem using the Taylor bilinear relaxation (left plot), the integer formulation 
, "Figure REF shows the distribution of $\\Delta _{abs, rel} J_c$ for all the cases reported in figure REF .", "Here $J_c$ is defined as the sum of the different objective functions, without including any augmented Lagrangian terms (neither the one deriving from the problem decomposition nor the ones of the linear formulations) in order to have a fair comparison: $J_c = F(u) + Q(x) + \\frac{\\gamma }{2}\\Vert u -u^{k}\\Vert _2^2$ Both algorithms converge to the solution of the integer formulation with some oscillations, even if the Taylor-based relaxation shows better convergence, achieving a relative difference on the order of $10^{-3}$ for all the cases after 800 iterations.", "Figure: Comparison of convergence dynamics using the Taylor or Wang formulation for the bilinear constraint relaxation, in terms of relative differences in total objective w.r.t. the integer formulation, when stations optimize for costs and the fleet has a reference tracking objective.", "The confidence interval refers to all the 42 combinations of horizon length and number of EVs of figure ." ], [ "Economic results", "We use the proposed algorithm REF to retrieve flexibility boundaries for an EV fleet.", "The setting is the following: an EV manager bidding for ancillary services is interested in knowing, for a given lead time, how many MW, and for how long, can be requested from the EV fleet for both upward and downward flexibility calls, and how much it costs per MWh.", "This information can then be used by the manager to make more informed bids.", "We followed the approach proposed in [24] to obtain hourly flexibility boundaries for an aggregation of office buildings.", "For each hour of the day, we solve the optimization problem (REF )-(), where each station minimizes its total energy costs for the EVs' charging operations, and $C(p_s(u_s))$ is modeled through the auxiliary variable $y$ as explained in section REF .", "Since the considered car sharing operator's stations are located under different Swiss DSOs, we used data from [10] to link them with the correct values for the buying and selling energy prices $p_{buy}$ and $p_{sell}$ , depending on their location.", "Additionally, we probabilistically assigned a PV power plant to each station, with a nominal power proportional to the maximum number of hosted EVs at that station.", "The system level objective function is set to be: $S\\left(\\sum _{s\\in \\mathcal {S}}p_s(u)\\right) = \\sum _{t\\in \\mathcal {T}_h} p_{f} \\vert r-\\sum _{s\\in \\mathcal {S}}p_s(u)\\vert $ where $r$ is the reference profile, $\\mathcal {T}_h$ is the set of timesteps belonging to hour $h$ and $p_{f}$ is the price of flexibility, which is constant over the considered hour.", "Equation (REF ) can be seen as a linear penalty for deviating from a flexibility call.", "We simulated a total number of 1440 EVs, keeping all the EVs belonging to a given station if the latter was chosen by random sampling among all the available ones.", "An example of results using this objective function when $h=12$ , at different price levels, is shown in figure REF .", "When the fleet receives an upward flexibility call at noon, the consumption decreases in the rest of the day w.r.t. the baseline profile, in which the system level objective is set to zero and the only objective is the stations' cost minimization.", "The opposite occurs when the fleet receives a downward flexibility call.", "For a given day, we run 24 optimizations, systematically changing 
$\\mathcal {T}_h$ and repeat the process for different values of $p_{f}$ .", "The resulting flexibility envelopes can be seen in figure REF .", "Lines of different colors represent the convex envelopes of the maximum and minimum flexibility attained at different hours of the day for a given price $p_f$ .", "It can be seen how during the first hour of the day the fleet is not prepared for an upward call, since the average SOC of the fleet is too high and the fleet has no time to discharge beforehand.", "Moreover, a saturation effect can be noticed beyond a given price level: the maximum attainable flexibility does not change significantly when passing from a $p_f$ of 255 CHF/MWh to 377 CHF/MWh.", "In order to better analyze this effect, we considered more price levels for the case in which flexibility is requested at noon.", "Figure REF shows the maximum amount of MW reached for 10 different values of $p_f$ ranging from 10 CHF/MWh to 377.5 CHF/MWh.", "The saturation effect is clear for both the upward and downward requests, but it starts at slightly different price levels, around 210 and 250 CHF/MWh, respectively.", "Finally, we study the effect of the flexibility request on the other costs considered in the optimization problem.", "Figure REF shows the change in charging costs, loss of SOC (equation (REF )), tracking revenues and total costs for the noon case.", "As expected, as the price level increases, the tracking revenues rise for both upward and downward flexibility calls, but this comes at the expense of higher charging costs.", "The change in the cost of the lost SOC is negligible compared to the other costs.", "Figure: Example of response to upward and downward flexibility calls as a function of price, compared to the baseline case in which there are no system level costs and the stations just optimize for their local energy prices.", "Figure: Flexibility envelope for different levels of $p_f$ , showing the maximum attainable flexibility for hourly slots of the day.", "Figure: Deviation from the baseline power profile as a function of flexibility price $p_f$ , for the noon case.", "Figure: Behaviour of different fleet costs as a function of flexibility price $p_f$ , for the noon case.", "In this paper we presented an optimization model to control the charging and discharging operations of large EV fleets.", "We started by modeling a generic case in which the EVs are allowed to relocate between stations, and then focused on the strictly stationary model where EVs are picked up and dropped off at the same station, since this reflects the conditions of the presented case study.", "For this last case we demonstrated how the problem can be decomposed by stations, allowing the overall computational time to be reduced.", "Furthermore, we used iterative methods to handle the bilinear constraints arising from the V2G formulation, which allows us to use a larger class of (free) solvers.", "For different combinations of horizon lengths and numbers of EVs, we reported numerical results showing substantial speed-ups w.r.t. the monolithic formulation, due to both the problem decomposition and the use of relaxations for the bilinear constraints.", "We see multiple opportunities for future work.", "First, many car sharing bookings are spontaneous, limiting the applicability of day-ahead planning in real world scenarios.", "This could be tackled with the integration of booking forecasts; since forecasts introduce uncertainty, a receding horizon optimization can be used to minimize errors.", "Additionally, a stochastic 
formulation, e.g., tree-based stochastic MPC [2], can be used to further tackle the uncertainty of bookings and PV generation." ], [ "Acknowledgments", "This work was financially supported by the Swiss Federal Office of Energy (V2G4CarSharing and GAMES projects SI/502344, SI/502361)." ] ]
2210.07756
[ [ "An Operator Inference Oriented Approach for Mechanical Systems" ], [ "Abstract Model-order reduction techniques allow the construction of low-dimensional surrogate models that can accelerate engineering design processes.", "Often, these techniques are intrusive, meaning that they require direct access to underlying high-fidelity models.", "Accessing these models is laborious or may not even be possible in some cases.", "Therefore, there is an interest in developing non-intrusive model reduction techniques to construct low-dimensional models directly from simulated or experimental data.", "In this work, we focus on a recent data-driven methodology, namely operator inference, that aims at inferring the reduced operators using only trajectories of high-fidelity models.", "We present an extension of operator inference for mechanical systems, preserving the second-order structure.", "We also study a particular case in which complete information about the external forces is available.", "In this formulation, the reduced operators having certain properties inspired by the original system matrices are enforced by adding constraints to the optimization problem.", "We illustrate the presented methodology using three numerical examples." ], [ "Introduction", "Mathematical models of mechanical systems describe their dynamic behaviors and robustness, allowing to anticipate the state of the system under the influence of certain external factors.", "Mechanical models can be designed in various ways, depending on the goals pursued and the system type.", "Dynamic behavior of interconnected rigid or flexible bodies can be analyzed using multibody system formalism [22].", "It is widely used in robotics, vehicle dynamics, and for different types of mechanisms to characterize the motion, e.g., to obtain trajectories, critical speeds, etc.", "The modeling is based on representing a given system as a number of solid bodies which are connected with joints or force elements.", "The governing system of ordinary differential (-algebraic) equations is derived using Lagrange's equation followed via the D'Alembert principle.", "On the other hand, if the dynamic behavior of a continuous object is of interest, methods from solid mechanics can be utilized.", "They allow one to identify the displacements, inner stresses, and strains of the structure [1].", "Considering the general physical principles common to all media, such as the balance of energy, the conservation of mass and momentum, etc., the governing equations are often derived either in integral or differential form.", "The latter form is essential for most structural analysis problems.", "It comes as a partial differential balance equation, which is assumed to be satisfied at every point of the field of interest.", "The central part of continuum mechanics consists of the additional constitutive equations, which define the material law.", "Together with the local balance equation, they allow to completely describe the inner stress-strain state of an object [36].", "In practice, numerical solutions of the governing partial differential equations (PDEs) are arguably most often computed by the finite element method (FEM) [61], which provides a spatial discretization of the solution field and leads to a system of second-order ODEs with specific mechanical properties.", "All these are accompanied by the development of new dynamic and material models, solution methods, simulation software, and at the same time—model-order reduction (MOR) methods.", "Increasing 
simulation costs while carrying out engineering design give rise to the necessity of having surrogate models with lower complexity yet acceptable accuracy.", "The construction of the lower dimensional models is typically done by projection-based MOR methods.", "The main idea is to find a low-dimensional subspace of solution-trajectories and project the system operators onto these subspaces; see, e.g., [30], [11], [34] for the details.", "There exist many well-known reduction methods that can be efficiently applied to mechanical systems, such as modal truncation [20], moment matching [24], [5], [9], and balanced truncation [41], [16].", "These methods rely on constructing projection matrices with a particular focus.", "For instance, balanced truncation aims at determining the projection matrices containing the subspaces that are easy to reach as well as easy to observe.", "In [50], an overview and a comparison of many such methods for linear mechanical systems are provided with applications to a high-dimensional robotic fishtail model.", "It is worthwhile highlighting a snapshot-based approach, namely Proper Orthogonal Decomposition (POD), where a projection matrix or reduced basis is constructed from the state snapshots of the full model.", "This method utilizes an orthogonal basis for representing the given data in the least squares optimal sense [34], [35], [12], [39].", "In the engineering literature, common reduction methods are based on dividing the generalized system coordinates into master and slave coordinates.", "This interpretation reflects the intuitive background of all reduction methods, that is, that some parts of the system of equations may be unimportant for the system dynamics and thus can be omitted.", "Historically, Guyan reduction [29] was the first important technique in this category.", "This method is also known as static condensation because it does not take into account dynamical effects and provides exact results for static simulations.", "For dynamical simulation, meaningful results are possible only for the loading frequency range close to the lowest eigenfrequencies of the system; otherwise, the results are too stiff [23].", "Guyan reduction forms the basis for other more advanced methods, such as Craig-Bampton reduction [19], the Improved Reduction System (IRS) [26], and the System Equivalent Reduction Expansion Process (SEREP) [51].", "These methods have improved accuracy due to the consideration of the eigenmodes of the omitted system, as in Craig-Bampton reduction, or due to the approximation of the inertia forces, as in IRS and SEREP.", "All of these methods require access to the system operators.", "Thus, these methods are referred to as intrusive ones.", "However, obtaining a full-order model in an explicit form can be very laborious or may not even be possible in many scenarios.", "Experimental measurements can also characterize a mechanical system, where the actual model behind the experiment may be unknown.", "Not only that, but very often, simulations of structural dynamical processes are done via commercial software, and the governing equations are impossible to extract.", "Therefore, there is considerable interest in constructing potentially low-dimensional models in a non-intrusive way, using only data that are either obtained using simulations or experiments.", "In this process, we explicitly eliminate the need for the full-order model but leverage the model hypothesis, which can either be known empirically or given by experts.",
"In recent times, many non-intrusive reduced-order methodologies have been developed.", "Often, linear dynamical models can be learned using data either obtained in the time domain or frequency domain.", "The construction of reduced-order models using frequency domain data has been originally developed for first-order and extended to second-order systems (that often arise in mechanical systems): the Loewner framework [40], [10], the vector-fitting [28], [59], and the AAA algorithm [43], [27] are instances of frequency-domain reduced-order modeling approaches.", "There exist several methodologies to learn models from time-domain data.", "A widely used method for learning discrete-time systems is Dynamic Mode Decomposition (DMD) [53], [17], [57], which is an attractive reduction technique related to Koopman operator approximation.", "The basis of this method is collecting data from a dynamical system and solving a minimization problem to find the linear system operator.", "Another method introduced for first-order parametric systems in [48] is operator inference which uses hypotheses based on the structure of the PDE level.", "The essence of the method is learning the unknown operators using the data compressed to a low-dimensional subspace, followed by solving a least-squares problem.", "Several extensions of operator inference to parametric and nonlinear systems can be found in [8], [49], [60].", "Although the operator inference approach was developed for continuous-time systems, it shares an analogy to the DMD approaches.", "Most operator inference methods focus on learning first-order ODE systems with a prior hypothesis on the form of the model.", "However, mechanical systems are distinguished by the second-order specific ODE structure, where system matrices also have a physical meaning.", "Although second-order ODE systems can be transformed into their first-order companion form, it, first of all, leads to the system being twice as large as the original one; secondly, a subsequent naive reduction of these systems not only violates its original structure but also can lead to non-physical behavior.", "Therefore, we focus on preserving second-order structures in the learning process to obtain better interpretability.", "We mention the recent attempt in the direction in [54], where the operator inference methodology is described for Lagrangian mechanical models.", "Therein, the Lagrangian approach to derive the governing equations is presented, together with the formulation of operator inference that preserves the second-order structure and symmetric positive definite (s.p.d.)", "properties of operators.", "The methodology in [54] is presented for the particular case, when the reduced system mass matrix is equal to the identity.", "In this paper, we present the work in a similar direction.", "The operator inference procedure is tailored to the mechanical system structure, focusing on the data obtained from the FEM simulations.", "Firstly, we propose an extension of the operator inference approach to obtain second-order dynamics.", "We discuss the connection between the inferred operators and the matrices obtained via intrusive POD reduction.", "Then we tailor the learning process for the case when the external loads are completely known and develop the operator inference approach with additional constraints in order to enforce the reduced operators to be symmetric positive definite We note that the work presented in [54] paralleled our research, of which a first idea was contained in [6].", 
"This development happened independently without both groups knowing of the work of the other..", "The remainder of the paper is organized as follows.", "sec:fem briefly presents an overview of continuum mechanics and the derivation of FEM governing equations.", "sec:setup gives the information about the available data and the time-integration algorithm.", "sec:pod briefly explain the intrusive data-driven POD method.", "sec:opinf represents the operator inference for mechanical systems in the simplest form and its constrained version.", "Finally, numerical results are presented in sec:num to illustrate the proposed methodologies.", "We provide our outlook in sec:conc." ], [ "Continuum mechanics and finite element formulation", "In this section, we shall briefly review the equation of motion from the continuum solid mechanics viewpoint.", "The primary interest of solid mechanics is the response of an object to the forces that are acting on it, namely, the identification of the displacement field and stress-strain state.", "All the characteristic quantities are connected through the kinematic and constitutive relations, and can be found in the solution of local impulse balance PDEs.", "These concepts are explained in the continuum mechanics literature [36], [13], [1].", "Next, we present the spatial discretization of the solution domain using the finite element method (FEM).", "It is a widely used approach for solving structural mechanics problems and is implemented in many powerful simulation packages, which are predominantly used in engineering practice.", "More detailed description of the FEM can be found, e.g., in [61], [21], [37].", "In this paper, we focus on the small deformation theory and linear elastic material law, which cover a wide range of structural mechanics problems.", "Consider an object that is exposed to external forces.", "The various displacements of the body are described at each point by values in the corresponding coordinate directions $x_1, x_2$ , and $x_3$ gathered in the displacement vector $x^\\top = \\left[x_1, ~x_2,~x_3\\right].$ If the body is not rigid, displacements appear together with the changes in size and shape of an object, called deformations or strains.", "We assume that deformations are sufficiently small (less than 5%) and connected with displacements via kinematic relations, forming a symmetric Cauchy-strain tensor $\\varepsilon $ as follows: $\\varepsilon = \\frac{1}{2} \\left( \\nabla x + \\left(\\nabla x\\right)^\\top \\right).$ For more significant deformations, the symmetry of the strain tensor cannot be assured because of the different formulations in the Lagrangian and Eulerian coordinate systems [1].", "Thus, in these scenarios, it is essential to use the finite strain theory, which is the out of the scope of this paper.", "Finally, an important part of structural analysis is the identification of stresses as a reaction to external and internal loads.", "A stress vector $t$ is defined as an inner force $f$ acting in an imaginary cut of a body on an arbitrary small area $\\Delta S$ : $t = \\lim _{\\Delta S \\rightarrow 0} \\frac{\\Delta f}{\\Delta S} = \\frac{\\mathrm {d} f}{\\mathrm {d}S}.$ By choosing the vectorial basis for each plane such that the first axis coincides with the normal vector $n$ to the plane, and the second and third axes are two mutually orthogonal vectors, the stress vector can be presented with three corresponding components.", "As a result, it forms a second-order Cauchy stress tensor $\\sigma $ , which describes a 
three-dimensional stress state at a point of the solid.", "In fact, there are only six independent stress components, due to the symmetry of the stress tensor following from the balance of rotational momentum.", "To find the unknown quantities, we consider the balance of momentum of a body $\\mathcal {B}$ with boundary $\\partial \\mathcal {B}$ in its current configuration and denote by $g$ the gravitational acceleration and by $\\rho $ the mass density.", "The balance of momentum postulates that the rate of change of the momentum of a body is equal to the sum of all surface and volume forces acting on it, i.e., $\\int _{\\partial \\mathcal {B}} n \\cdot \\sigma \\, \\mathrm {d} x + \\int _{\\mathcal {B}} \\rho g \\, \\mathrm {d} x = \\int _{\\mathcal {B}} \\rho \\ddot{x} \\, \\mathrm {d} x.$ Note that the stress vector is substituted by its relation to the stress tensor $t = n \\cdot \\sigma $ .", "Applying the divergence theorem and using the fundamental assumption that the identity (REF ) must hold for each subpart of the body, we get the local impulse balance equation as follows: $\\nabla ^\\top \\cdot \\sigma + \\rho g = \\rho \\ddot{x}.$ To solve (REF ), appropriate numerical methods are needed.", "Arguably, the most popular approach for this is the FEM, in which the main idea is to discretize the spatial domain into a finite number of simpler and smaller parts, namely finite elements, transforming the infinite-dimensional problem into a finite-dimensional one.", "The bridge to the finite elements is the weak formulation of (REF ).", "It requires the multiplication of the governing equation (REF ) with a virtual displacement $\\delta x$ and integration over the domain $\\mathcal {B}$ .", "Applying the divergence theorem once more, we get the weak form of the equation of motion in the current configuration: $\\int _{\\mathcal {B}} \\left( \\rho \\, (\\delta x) ^\\top \\ddot{x} + (\\nabla \\cdot \\delta x )^\\top \\sigma \\right) \\, \\mathrm {d} x= \\int _{\\mathcal {B}} \\rho \\, (\\delta x) ^\\top g \\, \\mathrm {d} x + \\int _{\\partial \\mathcal {B}} (\\delta x) ^\\top t \\; \\mathrm {d} x.$ The domain $\\mathcal {B}$ is discretized in space in $n^e$ elements $\\mathcal {B} \\longrightarrow \\bigcup _{i=1}^{n^e} \\mathcal {B}_i.$ Now, the continuous displacement field can be approximated element-wise as $x \\approx \\sum _{k = 1}^{n} \\phi _k (\\xi , \\eta , \\zeta ) \\mathbf {x}_k = H \\mathbf {x}^e,$ where $H = \\begin{bmatrix}\\phi _1 & 0 & 0 & \\cdots & \\phi _{n} & 0 & 0\\\\0 & \\phi _1 & 0 & \\cdots & 0 & \\phi _{n} & 0 \\\\0 & 0 & \\phi _1 & \\cdots & 0 & 0 & \\phi _{n} \\\\\\end{bmatrix}.$ Here, $\\phi _k $ are the shape functions of an element with $n$ nodes, which depend on the so-called isoparametric local coordinates $(\\xi , \\eta , \\zeta )$ within an element.", "The element displacement vector is assembled from the displacement vectors at each node: $\\mathbf {x}^e = \\begin{pmatrix}\\mathbf {x}_1 \\\\\\mathbf {x}_2 \\\\\\vdots \\\\\\mathbf {x}_n\\end{pmatrix}.$ The global displacement vector $\\mathbf {x}$ is related to the element displacement vector by the location matrix $Z^e$ , where the topology of the discretization is stored, i.e., $\\mathbf {x}^e = Z^e \\mathbf {x}.$ To replace the action of the $\\nabla \\cdot $ operation, the additional auxiliary matrix $L$ of size ${6 \\times 3}$ is defined $ L^\\top = \\begin{bmatrix}\\displaystyle \\frac{\\partial }{\\partial \\mathrm {x}_1} & 0 & 0 & \\displaystyle \\frac{\\partial }{\\partial \\mathrm {x}_2} & 0 & \\displaystyle 
\\frac{\\partial }{\\partial \\mathrm {x}_3}\\\\0 & \\displaystyle \\frac{\\partial }{\\partial \\mathrm {x}_2} & 0 &\\displaystyle \\frac{\\partial }{\\partial \\mathrm {x}_1}& \\displaystyle \\frac{\\partial }{\\partial \\mathrm {x}_3}& 0 \\\\0 & 0 & \\displaystyle \\frac{\\partial }{\\partial \\mathrm {x}_3} & 0 & \\displaystyle \\frac{\\partial }{\\partial \\mathrm {x}_2} & \\displaystyle \\frac{\\partial }{\\partial \\mathrm {x}_1}\\end{bmatrix}.$ We denote the element domain $\\mathcal {B}_e$ with the boundary $\\partial \\mathcal {B}_e$ for each of the $n^e$ elements.", "With (REF ) and (REF ), the weak form of the balance of momentum (REF ) can be reformulated as $\\begin{aligned}&\\sum _{e = 1}^{n^e} \\int _{\\mathcal {B}_e} \\rho (H Z^e \\delta \\mathbf {x})^\\top H Z^e \\ddot{\\mathbf {x}} \\, \\mathrm {d} \\mathbf {x}_e+ \\sum _{e = 1}^{n^e} \\int _{\\mathcal {B}_e} (L H Z^e \\delta \\mathbf {x})^\\top \\sigma \\, \\mathrm {d} \\mathbf {x}_e \\\\& \\hspace{113.81102pt} =\\sum _{e = 1}^{n^e} \\int _{\\mathcal {B}_e} \\rho (H Z^e \\delta \\mathbf {x})^\\top g \\, \\mathrm {d} \\mathbf {x}_e +\\sum _{e = 1}^{n^e} \\int _{\\partial \\mathcal {B}_e} (H Z^e \\delta \\mathbf {x})^\\top t \\, \\mathrm {d} \\mathbf {x}_e.\\end{aligned}$ The equation (REF ) must hold for any virtual displacement, leading to the following ODE system: $M \\ddot{\\mathbf {x}} + \\mathbf {f}_{\\text{int}} = \\mathbf {f}_{\\text{ext}},$ where $M = \\sum _{e = 1}^{n^e} \\left( Z^e \\right) ^\\top & \\left( \\int _{\\mathcal {B}_e} \\rho (H^\\top H \\,\\mathrm {d} \\mathbf {x}_e \\right) Z^e, \\quad \\quad \\mathbf {f}_{\\text{int}} = \\sum _{e = 1}^{n^e} \\left( Z^e \\right) ^\\top \\int _{\\mathcal {B}_e} (L H)^\\top \\sigma \\, \\mathrm {d} \\mathbf {x}_e , \\quad \\text{and}$ $\\mathbf {f}_{\\text{ext}} = \\sum _{e = 1}^{n^e} \\left( Z^e \\right) ^\\top \\int _{\\mathcal {B}_e} \\rho H^\\top g \\, \\mathrm {d} \\mathbf {x}_e +\\sum _{e = 1}^{n^e} \\left( Z^e \\right) ^\\top \\int _{\\partial \\mathcal {B}_e} H^\\top t \\, \\mathrm {d} \\mathbf {x}_e \\hspace{76.82234pt}$ are the consistent mass matrix, the vector of internal forces, and the vector of external forces, respectively.", "Since the material model has not yet been defined, the equation (REF ) is valid for linear and nonlinear material behavior and arbitrarily large displacement gradients.", "To complete the field equations, we add the so-called constitutive equations.", "In many scenarios, the stress tensor can be written as a linear function of the displacements.", "For example, in the case of the classical Hooke's law $\\sigma = D^{el} \\varepsilon ,$ where $D^{el}$ is a fourth-order elastic stiffness tensor.", "Using the Voigt notation, $D^{el} = \\begin{bmatrix}\\lambda + 2 \\mu & \\lambda & \\lambda & 0 & 0 & 0 \\\\\\lambda & \\lambda + 2 \\mu & \\lambda & 0 & 0 & 0 \\\\\\lambda & \\lambda & \\lambda + 2 \\mu & 0 & 0 & 0 \\\\0 & 0 & 0 & \\mu & 0 & 0 \\\\0 & 0 & 0 & 0 & \\mu & 0 \\\\0 & 0 & 0 & 0 & 0 & \\mu \\end{bmatrix},$ where $\\lambda $ and $\\mu $ are the Lamé constants.", "In its turn, the deformation field is approximated using (REF ), (REF ), and (REF ) as follows: $\\varepsilon \\approx L H \\mathbf {x} = Q \\mathbf {x}.$ Thus, the internal force vector can be written as $\\mathbf {f}_{\\text{int}} = \\sum _{e = 1}^{n^e} \\left( Z^e \\right) ^\\top \\int _{\\mathcal {B}_e} Q^\\top D^{el} Q \\mathbf {x} \\mathrm {d} \\mathcal {B}_e.$ The equation (REF ) takes the form $M\\ddot{\\mathbf {x}}(t) + K \\mathbf {x}(t) = \\mathbf 
{f}_{\\mathrm {ext}}(t),$ where the stiffness matrix is $K = \\sum _{e = 1}^{n^e} \\left( Z^e \\right) ^\\top \\int _{\\mathcal {B}_e} Q^\\top D^{el} Q \\mathrm {d} \\mathcal {B}_e.$ The computation of the system matrices requires numerical integration over the element domain using an appropriate method (e.g., Gauss integration).", "Of course, the dissipation forces also play an important role and is hence important to be taken into account in the internal force vector.", "It is described with a damping matrix $E$ analogously to the elastic forces and stiffness matrix.", "Very common in engineering practice is the Rayleigh damping model, which allows representing the damping matrix as a linear combination of mass and stiffness matrix, where the factors $\\alpha _R$ and $\\beta _R$ damp the lower and higher frequencies, respectively: $E = \\alpha _R M + \\beta _R K.$ However, there are other damping models that can be preferably for different cases; in this work we are not limited to any particular model.", "Thus, in a general case, we have the following system of ODEs: $M\\ddot{\\mathbf {x}}(t) + E\\dot{\\mathbf {x}}(t) + K\\mathbf {x}(t) = \\mathbf {f}(t),$ where $\\mathbf {x}(t)$ are the fundamental unknowns — nodal displacements; $M,K,E \\in \\mathbb {R}^{n \\times n}$ are the system mass, stiffness, and damping matrices, respectively.", "The external force vector $\\mathbf {f}(t)$ can be formulated for some applications in terms of a certain control operator $B \\in \\mathbb {R}^{n \\times m}$ and input vector $\\mathbf {u}(t) \\in \\mathbb {R}^{m}$ , consisting of $m$ input signals $u(t)$ $\\mathbf {f}(t) = B \\mathbf {u}(t).$ It is worth mentioning that $M$ and $K$ are typically symmetric positive definite, and $E$ is symmetric positive semidefinite.", "We will denote this conditions as $M \\succ 0, K \\succ 0, E \\succeq 0$ .", "Moreover, if those conditions hold, it is well known that the mechanical system is stable, see [42], [55], [52].", "Equations (REF ) describe the dynamics of a system and are often inaccessible from the FEM software.", "The system dimension is usually very high, which is natural, considering the high number of elements and nodes needed to maintain structure geometry precisely.", "Each system matrix depends on the material parameters, element type, and other specific FEM settings.", "Our primary goal in this work is to identify smaller dimension surrogate models having the mechanical structure as in (REF ) using simulated data information, which is described in the next section.", "Remark In the presence of geometric nonlinearities, the deformation gradient tensor, which describes the rotation and deformation of the body, is no longer equal to the identity tensor due to the loss of equivalence between the deformed and undeformed configuration.", "Therefore, other appropriate stress and strain measures should be used to describe the motion of the system.", "In particular, it is natural for solid mechanics to use the original reference configuration, namely the Green-Lagrange strain tensor and the Second Piola-Kirchhoff stress tensor.", "As a consequence, the governing equation becomes nonlinear.", "Another source of nonlinearity can be material behavior, introducing a nonlinear relationship between stress and strain tensor.", "For these cases, the solution of the governing system of equations has to be computed in an iterative manner.", "Given that, the reduction method has to be performed in a more involved way, which will be considered in our future work." 
], [ "Data setup", "In this section, we explore the available data, including a time-integration solver description.", "We assume that the model of a mechanical system (REF ) is given as a gray-box, i.e., the underlying abstract model structure is known by utilizing the physical knowledge laid out in the previous section, but the system operators are unavailable.", "Instead, we have access to the simulation input and output data, which consist of the excitation signals $\\mathbf {u}(t)$ and the nodal displacements in the state vector $\\mathbf {x}(t)$ .", "The simulation is performed with the following time discretization $0 = t_0 < t_1 < \\dots < t_N = T$ of the time domain $[0, T ]$ .", "Further, we assemble the snapshot matrix $X$ and the input signal matrix $U$ by collecting the inputs and the snapshots of the state at pre-defined time-steps: $U =\\begin{bmatrix}| & \\dots & | \\\\\\mathbf {u}(t_1) & \\dots & \\mathbf {u}(t_N) \\\\| & \\dots & |\\end{bmatrix} \\in \\mathbb {R}^{m \\times N}, \\quad X = \\begin{bmatrix}| & \\dots & | \\\\\\mathbf {x}(t_1) & \\dots & \\mathbf {x}(t_N) \\\\| & \\dots & |\\end{bmatrix} \\in \\mathbb {R}^{n \\times N}.$ The time-integration in FE-packages is usually performed by second-order integration methods, such as Newmark-$\\beta $ [44], Hilber-Hughes-Taylor (HHT) method [32], and Generalized-$\\alpha $ method [18].", "The latter two methods are the generalizations of the Newmark method with controllable numerical damping, which is particularly important for the automatic time stepping scheme to reduce the effect of the high-frequency noise resulting from too large step size or a poor spatial discretization.", "Using the HHT method, the equilibrium (REF ) is replaced by the following discretized expression: $M &\\ddot{\\mathbf {x}}_{k+1} + E ( (1 + \\alpha ) \\dot{\\mathbf {x}}_{k+1} - \\alpha \\dot{\\mathbf {x}}_k ) + K ( (1 + \\alpha ) \\mathbf {x}_{k+1} - \\alpha \\mathbf {x}_k ) = \\mathbf {f}_{k+1}, \\\\&\\mathbf {x}_{k+1} = \\mathbf {x}_k + \\Delta t \\dot{\\mathbf {x}}_k + (\\Delta t)^2 \\left[ \\left( \\frac{1}{2} - \\beta \\right) \\ddot{\\mathbf {x}}_k + \\beta \\ddot{\\mathbf {x}}_{k+1} \\right] , \\\\&\\dot{\\mathbf {x}}_{k+1} = \\dot{\\mathbf {x}}_k + \\Delta t \\left[ (1 - \\gamma ) \\ddot{\\mathbf {x}}_k + \\gamma \\ddot{\\mathbf {x}}_{k+1} \\right].$ Numerical damping is controlled by the parameter $\\alpha \\in \\left[ - \\frac{1}{3}, 0 \\right]$ (negative $\\alpha $ -dissipation).", "The parameters $\\gamma $ and $\\beta $ govern the stability of the algorithm and are often chosen as $\\gamma = \\frac{1 - 2 \\alpha }{2}, \\; \\beta = \\frac{(1 - \\alpha )^2}{4} $  [25].", "Setting $\\alpha = 0$ makes () equivalent to the Newmark-$\\beta $ family of algorithms, which we use in our numerical simulations.", "Hence, the derivative data needed for the system identification can also be extracted from the integrator, which is assembled as follows: $\\begin{aligned}\\displaystyle \\dot{X} &= \\begin{bmatrix}| & \\dots & | \\\\\\dot{\\mathbf {x}}(t_1) & \\dots & \\dot{\\mathbf {x}}(t_N) \\\\| & \\dots & |\\end{bmatrix} \\in \\mathbb {R}^{n \\times N}, \\quad \\displaystyle \\ddot{X} &= \\begin{bmatrix}| & \\dots & | \\\\\\ddot{\\mathbf {x}}(t_1) & \\dots & \\ddot{\\mathbf {x}}(t_N) \\\\| & \\dots & |\\end{bmatrix} \\in \\mathbb {R}^{n \\times N},\\end{aligned}$ where $\\dot{X}$ and $\\ddot{X}$ contain velocities and accelerations information.", "Since solvers can often provide the velocity and acceleration data, we will use these data in our 
work.", "With this, we aim to develop a data-driven framework to learn second-order dynamical systems to capture the dynamics present in the data.", "Particularly, our focus lies in constructing low-dimensional dynamical models to achieve our goal." ], [ "Intrusive POD reduction", "Before proceeding to the description of a non-intrusive operator inference approach, we briefly recapitulate the intrusive snapshot-based POD method that forms the basis for identifying low-dimensional subspaces for data or the compression step for the operator inference method.", "The main feature of the POD method is to identify orthogonal modes that optimally capture the energy present in the snapshot matrix.", "These modes also capture most of the dynamics in the data.", "This can be achieved by employing the singular value decomposition (SVD) of the snapshot matrix $X$ (REF ): $X = V \\Sigma W^\\top .$ Recall that according to the Eckart-Schmidt-Young-Mirsky theorem, the truncated SVD provides the best rank-$r$ approximation of a given matrix in the Frobenius norm [58].", "In order to get the low-dimensional representation of the system dynamics, we approximate (REF ) by truncating the small singular values.", "Hence, we construct the subspace basis $V_r$ by choosing the first $r$ dominant left singular vectors.", "The system operators in (REF ) can be projected onto the subspace $V_r$ , yielding the following reduced POD system: $\\widetilde{M} \\ddot{\\tilde{\\mathbf {x}}} (t) + \\widetilde{E} \\dot{\\tilde{\\mathbf {x}}} (t) + \\widetilde{K} \\tilde{\\mathbf {x}} (t) = \\widetilde{B} \\mathbf {u}(t),$ with the reduced system operators being defined as $\\widetilde{M} = V_r^\\top M V_r, \\quad \\widetilde{E} = V_r^\\top E V_r, \\quad \\widetilde{K} = V_r^\\top K V_r, \\quad \\widetilde{B} = V_r^\\top B.$ Notice that if the original matrices $M, E$ , and, $K$ are symmetric positive (semi)definite, then so are the reduced matrices $\\widetilde{M}$ , $\\widetilde{E}$ and $\\widetilde{K}$ .", "As a consequence, the intrusive POD model preserves the stability of the original one, as mentioned in [52] in the context of moment matching.", "Except for the basis construction from snapshots, the reduction is performed intrusively, i.e., it requires the original matrices $M$ , $E$ , $K$ , and $B$ , which describe the dynamics of the original mechanical systems." ], [ "Operator inference for mechanical systems", "Instead of projecting the known system operators, our goal is to infer the reduced operators using the data available in sec:setup.", "Towards learning low-dimensional systems from given high-dimensional data, we first need to prepare an appropriate low-dimensional data representation.", "To that end, we aim at finding a low-dimensional approximation of the snapshot matrix (REF ), which is done as described in sec:pod by applying SVD and choosing the $r$ most dominant singular vectors as a projection basis.", "Using the obtained dominant subspace, we prepare the compressed low-dimensional data as follows: $\\begin{aligned}\\widehat{X} = V_r^\\top X, \\qquad \\dot{\\widehat{X}} = V_r^\\top \\dot{X}, \\qquad \\ddot{\\widehat{X}} = V_r^\\top \\ddot{X},\\end{aligned}$ assuming we have access to the velocity and acceleration vectors as well.", "Next, we present an optimization-based formulation to infer reduced-order operators directly using the data (REF )." 
], [ "Second-order formulation", "First, we recall that the intrusive POD model (REF ) is represented by the matrices $\\widetilde{M}$ , $ \\widetilde{E}$ , $\\widetilde{K}$ , and $\\widetilde{B}$ .", "These reduced matrices satisfy the following equation: $\\widetilde{M} \\ddot{\\widetilde{X}} + \\widetilde{E} \\dot{\\widetilde{X}} + \\widetilde{K} \\widetilde{X} = \\widetilde{B} U,$ where $\\widetilde{X}$ , $\\dot{\\widetilde{X}}$ and $\\ddot{\\widetilde{X}}$ , respectively, are the snapshot matrix assembling $N$ snapshots of the reduced POD model (REF ), its corresponding derivative, and second-order derivative matrices, i.e., $ \\tilde{X} = \\begin{bmatrix}| & & | \\\\\\tilde{\\mathbf {x}}(t_1) & \\dots & \\tilde{\\mathbf {x}}(t_N) \\\\| & & |\\end{bmatrix}, \\; \\dot{\\tilde{X}}= \\begin{bmatrix}| & & | \\\\\\dot{\\tilde{\\mathbf {x}}}(t_1) & \\dots & \\dot{\\tilde{\\mathbf {x}}}(t_N) \\\\| & & |\\end{bmatrix}, \\; \\ddot{\\tilde{X}}= \\begin{bmatrix}| & & | \\\\\\ddot{\\tilde{\\mathbf {x}}}(t_1) & \\dots & \\ddot{\\tilde{\\mathbf {x}}}(t_N) \\\\| & & |\\end{bmatrix}.$ Assuming the reduced mass matrix $\\widetilde{M}$ is invertible, we multiply (REF ) by $\\widetilde{M}^{-1}$ from the left, yielding the following differential system of equations $\\ddot{\\widetilde{X}} = - \\widetilde{M}^{-1} \\widetilde{E} \\dot{\\widetilde{X}} - \\widetilde{M}^{-1} \\widetilde{K} \\widetilde{X} + \\widetilde{M}^{-1} \\widetilde{B} U.$ Hence, the dynamics of the POD intrusive model is fully described by the matrices $\\widetilde{M}^{-1} \\widetilde{E}$ , $\\widetilde{M}^{-1} \\widetilde{K}$ and $\\widetilde{M}^{-1} \\widetilde{B}$ .", "It is important to notice that $\\widetilde{M}^{-1} \\widetilde{E}$ and $\\widetilde{M}^{-1} \\widetilde{K}$ may not be symmetric positive (semi)definite, even if $\\widetilde{M}$ , $\\widetilde{E}$ , $\\widetilde{K}$ are.", "The structure (REF ) is used as a foundation to formulate a least-squares problem using the projected data (REF ).", "Inspired by the structure (REF ), our next goal is to identify a second-order reduced model of the form as follows: $\\ddot{\\hat{\\mathbf {x}}} (t) + \\widehat{E}_{\\mathrm {M}} \\dot{\\hat{\\mathbf {x}}} (t) +\\widehat{K}_{\\mathrm {M}} \\hat{\\mathbf {x}} (t) = \\widehat{B}_{\\mathrm {M}} \\mathbf {u}(t),$ using the projected data $\\widehat{X}$ , $\\dot{\\widehat{X}}$ and $\\ddot{\\widehat{X}}$ in (REF ) and the input data $U$ .", "In particular, we seek to determine the matrices or operators $\\widehat{E}_{\\mathrm {M}}$ , $\\widehat{K}_{\\mathrm {M}}$ , and $ \\widehat{B}_{\\mathrm {M}}$ .", "Hence, we propose the following second-order inference problem: $\\underset{\\begin{array}{c}\\widehat{E}_{\\mathrm {M}}, \\widehat{K}_{\\mathrm {M}}, \\widehat{B}_{\\mathrm {M}} \\end{array}}{\\text{minimize}} \\left\\Vert \\ddot{\\widehat{X}} + \\widehat{E}_{\\mathrm {M}} \\dot{\\widehat{X}} + \\widehat{K}_{\\mathrm {M}} \\widehat{X} - \\widehat{B}_{\\mathrm {M}} U \\right\\Vert ^2_F,$ where the matrices $\\displaystyle \\widehat{E}_{\\mathrm {M}}, \\, \\widehat{K}_{\\mathrm {M}} \\in \\mathbb {R}^{r \\times r}$ , and $\\displaystyle \\widehat{B}_{\\mathrm {M}} \\in \\mathbb {R}^{r \\times m}$ are the unknown operators.", "Since the intrusive matrices in (REF ) $\\widetilde{M}^{-1} \\widetilde{E}$ and $\\widetilde{M}^{-1} \\widetilde{K}$ may not be symmetric positive definite, we expect the same for the inferred matrices $\\widehat{E}_{\\mathrm {M}}$ and $\\widehat{K}_{\\mathrm {M}}$ .", "In order to reformulate the optimization 
problem (REF ) in a more compact way, we assemble the global data matrix: $ \\widehat{\\mathcal {D}} = \\begin{bmatrix} \\dot{\\widehat{X}} {\\,}^{\\top },~ \\widehat{X}^\\top ,~ U^\\top \\end{bmatrix}^{\\top }$ using the available projected snapshot matrices, except for the second-order derivative matrix $\\ddot{\\widehat{X}}$ , which plays the role of the right-hand side for the regression problem.", "Finally, we state the optimization problem as follows: $\\underset{\\widehat{P} \\in \\mathbb {R}^{r \\times (2r+m)}}{\\text{minimize}} \\left\\Vert \\widehat{P}\\widehat{\\mathcal {D}} - \\ddot{\\widehat{X}} \\right\\Vert ^2_F,$ where the variable parameter matrix consists of all the unknown operators $\\widehat{P} = \\begin{bmatrix}- \\widehat{E}_M ,~ -\\widehat{K}_M ,~ \\widehat{B}_M\\end{bmatrix}.$ It is worth mentioning that the inferred model is obtained non-intrusively, i.e., the construction of the matrices $\\widehat{E}_{\\mathrm {M}}$ , $\\widehat{K}_{\\mathrm {M}}$ and $ \\widehat{B}_{\\mathrm {M}}$ is based only on the provided data.", "Also, in this setup, the mass matrix of the inferred model (REF ) is assumed to be the identity by construction.", "Remark One may argue that the mass matrix can also be identified using this approach.", "To this aim, one needs to include the mass matrix in the unknown operators $ \\widehat{P}_{mod} = \\begin{bmatrix}- \\widehat{M} &- \\widehat{E} & -\\widehat{K} & \\widehat{B}\\end{bmatrix}, $ and add the projected second derivative to the data matrix as follows $\\widehat{\\mathcal {D}}_{mod}= \\begin{bmatrix} \\ddot{\\widehat{X}}{\\,}^\\top & \\dot{\\widehat{X}} {\\,}^{\\top } & \\widehat{X}^\\top & U^\\top \\end{bmatrix}^{\\top }.$ Hence, to infer the reduced operators, one would have to solve the following least-squares problem $\\underset{\\widehat{P}_{mod} \\in \\mathbb {R}^{r \\times (3r+m)}}{\\text{minimize}} \\left\\Vert \\widehat{P}_{mod}\\widehat{\\mathcal {D}}_{mod} \\right\\Vert ^2_F.$ It consists of a least-squares problem without a right-hand side, for which zero is a trivial solution.", "Problems of this type are usually solved by constraining the size of the solution norm, which is not suitable for our case.", "In subsec:ForceOpInf, we will propose an approach enabling us to also infer the mass matrix, provided that some additional data is available.", "Although the inferred and intrusive reduced operators are obtained with different procedures, we can show an asymptotic closeness of these two models.", "The original paper on operator inference [48] and some other articles, such as [7], provide theoretical results for first-order reduced systems.", "They show, under certain assumptions, that the inferred matrices are an approximation of the intrusive reduced matrices in the Frobenius norm.", "When such a result holds, the inferred system can inherit several useful properties of POD models, such as stability and error analysis.", "Let the parametric matrix $\\widehat{P}$ be the solution of the optimization problem (REF ) with the corresponding matrix $\\widehat{\\mathcal {D}}$ , constructed from the available data.", "We denote by $\\mathbf {x}(t_i)$ the continuous displacement at the time $t_i$ , and by $\\mathbf {x}_i$ the discretized displacement snapshot-vector.", "Further, we consider the following assumptions.", "Assumption 1 The time-stepping scheme is convergent, i.e., $\\displaystyle \\Vert \\mathbf {x}_i - \\mathbf {x}(t_i) \\Vert \\rightarrow 0$ as $\\Delta t \\rightarrow 0$ .", "Assumption 2 The discretized reduced derivative data converges to the continuous derivative data, 
i.e., $ \\displaystyle \\Vert \\dot{\\hat{\\mathbf {x}}}_i - \\frac{\\mathrm {d}}{\\mathrm {d}t}\\hat{\\mathbf {x}}(t_i) \\Vert \\rightarrow 0$ and $\\displaystyle \\Vert \\ddot{\\hat{\\mathbf {x}}}_i - \\frac{\\mathrm {d}^2}{\\mathrm {d}t^2}\\hat{\\mathbf {x}}(t_i) \\Vert \\rightarrow 0$ as $\\Delta t \\rightarrow 0$ .", "Assumption 3 The matrix $\\widehat{} \\in \\mathbb {R}^{N \\times (2r+m)}$ has full rank, assuming that the dimension $r$ is much smaller than the number of time steps $N$ .", "Using the above assumptions, we formulate the following theorem: Theorem 1 Let Assumptions 1,2,3 hold and $\\widetilde{M}, \\widetilde{E}, \\widetilde{K},$ and $\\widetilde{B}$ be the reduced-order operators obtained intrusively as in (REF ) using the POD basis $V_r$ .", "Then, for every $ \\varepsilon > 0 $ there exist a reduced order $r<n$ and a step size $ \\Delta t > 0$ such that $ \\Vert \\widetilde{M}^{-1} \\widetilde{E} - 4\\widehat{E}_M \\Vert _F < \\varepsilon , \\quad \\Vert \\widetilde{M}^{-1} \\widetilde{K} - 4\\widehat{K}_M \\Vert _F < \\varepsilon , \\quad \\text{and} \\quad \\Vert \\widetilde{M}^{-1} \\widetilde{B} - 4\\widehat{B}_M \\Vert _F < \\varepsilon , $ where $4\\widehat{E}_M$ , $4\\widehat{K}_M$ and $4\\widehat{B}_M$ are the inferred operators via the optimization problem (REF ).", "Recall that the intrusive POD reduced model has the form (REF ).", "Let $\\widetilde{}= \\begin{bmatrix}\\dot{\\widetilde{X}}{\\,}^\\top &\\widetilde{X}^\\top &U^\\top \\end{bmatrix}^{\\top }$ denote the corresponding data matrix for the system in (REF ) with the POD snapshot matrices, defined in (REF ).", "Hence, the concatenated intrusive reduced operators $\\widetilde{P} = \\begin{bmatrix}-\\widetilde{M}^{-1}\\widetilde{E} & -\\widetilde{M}^{-1}\\widetilde{K} & \\widetilde{M}^{-1}\\widetilde{B}\\end{bmatrix}$ represent one solution of the least-squares problem $\\tilde{P} = \\text{arg}\\vspace{-5.69046pt}\\min _P{ \\left\\Vert P \\widetilde{}- \\ddot{\\widetilde{X}} \\right\\Vert ^2_F}.$ Moreover, it represents the unique solution if the matrix $\\widetilde{}$ has full rank.", "Next, the projected matrix $\\widehat{}$ (REF ) and the projected second order derivative $\\ddot{\\widehat{X}}$ can be interpreted, respectively, as a disturbed POD data matrix $\\widetilde{}$ and disturbed second order POD derivative $\\ddot{\\widetilde{X}}$ , i.e., $\\widehat{} = \\widetilde{} + \\delta \\widetilde{} \\quad \\text{and} \\quad \\ddot{\\widehat{X}} = \\ddot{\\widetilde{X}} + \\delta \\ddot{\\widetilde{X}}.$ Indeed, the disturbing term $\\delta \\widetilde{}$ comes from the time-sampling error of the solution data and from the approximation error considering $\\displaystyle X \\approx V_r \\widetilde{X}$ and $\\displaystyle X \\approx V_r \\widehat{X}$ , which also holds for the first and second order derivative data.", "Hence, $\\delta \\widetilde{} \\rightarrow 0$ and $\\delta \\ddot{\\widetilde{X}} \\rightarrow 0$ as $r \\rightarrow n$ and $\\Delta t \\rightarrow 0$ .", "Therefore, this leads to the following asymptotic result for the least-squares problem $\\min _{\\widehat{P}} \\left( \\lim _{\\begin{array}{c}\\Delta t \\rightarrow 0 \\\\ \\phantom{\\Delta }r \\rightarrow n\\end{array}} \\left\\Vert \\, {\\widehat{P}} \\cdot \\widehat{} - \\ddot{\\widehat{X}} \\, \\right\\Vert ^2_F \\right) = \\min _{\\widetilde{P}} \\left( \\lim _{\\begin{array}{c}\\Delta t \\rightarrow 0 \\\\ \\phantom{\\Delta }r \\rightarrow n\\end{array}} \\left\\Vert \\, \\widetilde{P} \\cdot \\left(\\widetilde{} +\\delta 
\\widetilde{D}\\right)- \\left(\\ddot{\\widetilde{X}} + \\delta \\ddot{\\widetilde{X}}\\right) \\, \\right\\Vert ^2_F \\right).$", "In other words, the operator inference problem in (REF ) can be seen as a perturbed version of the minimization problem in (REF ).", "The pre-asymptotic case combined with the assumption that $\\widehat{D}$ has full rank leads to the proof of the theorem.", "The above theorem states that if the least-squares problem is well-conditioned, then in the asymptotic case, when the time step converges to zero and the reduced order converges to the full dimension, the operators obtained by POD are close to the inferred operators.", "This result is important because, for a broad class of mechanical systems, the POD method preserves stability by keeping the symmetric positive definiteness of the system matrices due to one-sided projection.", "Therefore, the inferred model will also be stable in case it is close enough to the POD one.", "However, the relevant properties can be inherited only in the asymptotic case.", "Moreover, in [47], for discrete-time linear first-order systems, it has been shown that it is possible to exactly recover the intrusive operators for any order using a re-projection scheme.", "In many applications, the data matrix is numerically rank deficient and the corresponding least-squares problem becomes ill-conditioned.", "Therefore, it is necessary to use appropriate regularization techniques.", "Among different methods (such as truncated SVD or truncated QR) [60], Tikhonov regularization [56] is one of the most widely used techniques.", "The optimization problem (REF ) is replaced by the following regularized problem: $\\widehat{P} = \\text{arg}\\,\\min _P \\left( \\Vert P \\cdot \\widehat{D} - \\ddot{\\widehat{X}} \\Vert ^2_F + \\lambda \\Vert P \\Vert ^2_F \\right),$ where $\\lambda $ is a penalty parameter.", "The choice of $\\lambda $ plays an important role in obtaining a good solution.", "One of the criteria is to ensure a minimal residual $\\displaystyle \\Vert \\widehat{P} \\cdot \\widehat{X} - \\ddot{\\widehat{X}} \\Vert _F$ for the smallest operator norm $\\displaystyle \\Vert \\widehat{P} \\Vert _F $ .", "In this work, we use Tikhonov regularization, penalizing all the operators with the same regularization parameter." 
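For concreteness, the regularized problem above admits a direct solution by ordinary least squares on an augmented system. The following is a minimal NumPy sketch (function and variable names are our own; it assumes the projected displacement, velocity, acceleration, and input snapshots are given as arrays Xhat, dXhat, ddXhat of shape r x N and U of shape m x N):

import numpy as np

def opinf_second_order(Xhat, dXhat, ddXhat, U, lam=1e-8):
    """Infer [-E_M, -K_M, B_M] by solving the Tikhonov-regularised problem
       min_P || P @ D - ddXhat ||_F^2 + lam * || P ||_F^2,
    where D stacks the velocity, displacement, and input snapshots."""
    r, N = Xhat.shape
    m = U.shape[0]
    D = np.vstack([dXhat, Xhat, U])               # (2r + m) x N data matrix
    # Regularised solve via the augmented system
    #   [ D^T         ]           [ ddXhat^T ]
    #   [ sqrt(lam) I ]  P^T  =   [ 0        ]
    A = np.vstack([D.T, np.sqrt(lam) * np.eye(2 * r + m)])
    b = np.vstack([ddXhat.T, np.zeros((2 * r + m, r))])
    P = np.linalg.lstsq(A, b, rcond=None)[0].T    # r x (2r + m)
    E_M = -P[:, :r]
    K_M = -P[:, r:2 * r]
    B_M = P[:, 2 * r:]
    return E_M, K_M, B_M

Setting lam to zero recovers the unregularised problem, whose solution is then given by the Moore-Penrose pseudoinverse of the data matrix.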
], [ "Separating the operators", "From the above theorem, we conclude that the inferred operators are close to the POD matrices in (REF ), assuming that the matrix $\\widetilde{M}$ is \"absorbed\" in other operators.", "Further, we assume that the inferred operators can be decomposed as follows: $\\widehat{E}_M = \\widehat{M}^{-1} \\widehat{E}, \\quad \\widehat{K}_M = \\widehat{M}^{-1} \\widehat{K}, \\quad \\widehat{B}_M = \\hspace{1.0pt} \\widehat{M}^{-1} \\widehat{B}.$ In order to obtain a ROM with second-order structure as in (REF ), we may think about some post-processing method to separate the inferred operators.", "The suggested procedure below uses the transformation of the operator inference model to the modal coordinates.", "The generalized eigenvalue problem can be written as $\\widehat{K}_M \\Phi = \\Phi \\Omega ^2,$ where $\\Omega $ is a diagonal matrix with the natural eigenfrequencies on the diagonal and $\\Phi $ are the eigenmodes of the operator inference system.", "The reduced stiffness matrix in modal realization is equal to $\\Omega ^2$ , while the reduced modal mass matrix is identity.", "Using the fact that the modal stiffness is defined as $\\Omega ^2 = \\Phi ^\\top \\widehat{K} \\Phi ,$ we can extract the reduced stiffness matrix from (REF ), using the eigenfrequencies and eigenmodes $\\widehat{K} =\\Phi ^{-\\top } \\Omega ^2 \\Phi ^{-1}.$ Then, we can separate the reduced mass matrix, and the damping matrix from (REF ) as $\\widehat{M} = \\widehat{K} \\widehat{K}_M ^{-1}, \\quad \\widehat{E} = \\widehat{M} \\widehat{E}_M.$ We would like to stress that there are no guarantees that the separated operators will satisfy stability properties.", "A possible remedy to ensure the stability of the learned model is by performing post-processing by finding the nearest symmetric positive definite matrix as in [31].", "This can be done for mass, stiffness, and damping matrices if needed, but it would be at the expense of losing the accuracy of the learned models.", "In the previous subsections, we have defined the second-order operator inference method for learning the reduced mechanical models of structure (REF ), using the state and derivative data, as explained in sec:setup.", "As discussed previously, we were not able to impose the symmetric positive definiteness of the inferred operators in this formulation, even if the intrusive reduced model possesses this structure.", "In this section, we present a alternative operator inference methodology, enabling us to enforce the system's matrices to be symmetric positive definite.", "To this aim, we will use additional information from the full-order model.", "Hence, in this section, we will assume we have access to all the external forces and their positions, meaning the vector $\\mathbf {f}(t)$ in (REF ) is given.", "In many engineering applications, an analysis of a system response under a certain load is required.", "In these scenarios, the forces acting on the system are known and can be extracted from the simulation software (for example, using the input-file defining the simulation setup).", "Moreover, for some simulations, the load data may come from experimental measurements of real working conditions, given as force values at certain time-space points.", "The force matrix can be constructed from the force snapshots at the pre-defined time steps: $ F = \\begin{bmatrix}| & & | \\\\\\mathbf {f}(t_1) & \\dots & \\mathbf {f}(t_N) \\\\| & & |\\end{bmatrix}.$ As a consequence, the POD reduced model satisfies the following 
equation: $\\widetilde{M} \\ddot{\\widetilde{X}} + \\widetilde{E} \\dot{\\widetilde{X}} + \\widetilde{K} \\widetilde{X} = V^\\top F,$ where, once again, $\\widetilde{X}$ , $\\dot{\\widetilde{X}}$ and $\\ddot{\\widetilde{X}}$ are, respectively, the snapshot matrix assembling $N$ snapshots of the projected reduced POD model in (REF ), and its corresponding derivative and second-order derivative matrices as in (REF ).", "We also recall that the intrusive reduced operators $\\widetilde{M}$ , $\\widetilde{E}$ and $\\widetilde{K}$ are typically symmetric positive (semi)definite, implying that the intrusive model is stable.", "Hence, our goal in this section is to infer the second-order operators $\\widehat{M}$ , $\\widehat{E}$ and $\\widehat{K}$ of a reduced model of the form $\\widehat{M} \\ddot{\\hat{\\mathbf {x}}} (t) + \\widehat{E} \\dot{\\hat{\\mathbf {x}}} (t) +\\widehat{K}\\hat{\\mathbf {x}} (t) = V^{\\top }\\mathbf {f}(t),$ such that $\\widehat{M} \\succ 0,\\quad \\widehat{K} \\succ 0, \\quad \\widehat{E} \\succeq 0.$", "For this, similar to the projected trajectory data in (REF ), the force data can be projected onto the dominant POD subspace: $\\widehat{F} = V^\\top F$ .", "Moreover, let the new data matrix include the state and derivative data as follows: $\\widehat{D} = \\begin{bmatrix} \\ddot{\\widehat{X}} {\\,}^{\\top } &\\dot{\\widehat{X}} {\\,}^{\\top } & \\widehat{X}^\\top \\end{bmatrix} ^\\top .$", "The operator inference optimization problem with the constraints (REF ), using the external force data (REF ), is formulated as follows: $\\underset{\\begin{array}{c}\\widehat{M} \\succ 0, ~\\widehat{E} \\succeq 0, ~ \\widehat{K} \\succ 0\\end{array}}{\\text{minimize}} \\left\\Vert ~ \\left[\\widehat{M} ~~ \\widehat{E} ~~ \\widehat{K}\\right] \\widehat{D} - \\widehat{F} \\, \\right\\Vert ^2_F.$", "In practice, it is not possible to impose strict inequality constraints in the optimization problem; therefore, the reformulation $ \\widehat{M} - \\omega I \\succeq 0 $ and $ \\widehat{K} - \\omega I \\succeq 0 $ with a small positive threshold $\\omega > 0$ can be used to ensure strict positive definiteness.", "The operator inference formulation (REF ) is a convex optimization problem, which can be solved using semidefinite programming algorithms, e.g., [15].", "In contrast to the optimization problem (REF ), which has an analytical solution via the Moore-Penrose inverse, the problem (REF ) requires linear matrix inequality solvers, which are computationally more expensive.", "However, since the computations are done in the POD-reduced dimension, they can still be performed in moderate time.", "Moreover, this methodology has the advantage of preserving the symmetric positive definite structure of the inferred system's operators, which implies that the inferred model is also stable."
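The constrained problem is a standard semidefinite program. In this paper it is solved with the YALMIP toolbox and the SeDuMi solver (see the next section); purely as an illustrative, hypothetical Python analogue of that setup (not the implementation used here), one could formulate it with the cvxpy modelling library, e.g.:

import cvxpy as cp
import numpy as np

def opinf_constrained(Xhat, dXhat, ddXhat, Fhat, omega=1e-8):
    """Force-informed operator inference: fit M, E, K to the projected data
    subject to symmetric positive (semi)definiteness constraints."""
    r = Xhat.shape[0]
    M = cp.Variable((r, r), symmetric=True)
    E = cp.Variable((r, r), symmetric=True)
    K = cp.Variable((r, r), symmetric=True)
    residual = M @ ddXhat + E @ dXhat + K @ Xhat - Fhat
    constraints = [M - omega * np.eye(r) >> 0,   # M strictly p.d. up to omega
                   K - omega * np.eye(r) >> 0,   # K strictly p.d. up to omega
                   E >> 0]                       # E positive semidefinite
    prob = cp.Problem(cp.Minimize(cp.sum_squares(residual)), constraints)
    prob.solve(solver=cp.SCS)
    return M.value, E.value, K.value

As in the text, the threshold omega replaces the strict inequalities with semidefinite constraints that off-the-shelf LMI solvers can handle.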
], [ "Numerical results", "In this section, we study the performance of the proposed operator inference methodologies for mechanical systems to learn reduced-order models directly from data and present a comparison with the intrusive POD approach.", "For this purpose, numerical experiments are performed, namely for international space station [45], butterfly gyroscope [46] benchmarks, and vibrating plate model [3].", "The first model is used for the analysis of vibrations caused by the docking of an incoming spaceship.", "The model is given in a first-order state-space realization, which originates from the second-order form, and can thus be transformed back to the second-order form.", "The second benchmark is a finite element structural model of a vibrating micro-mechanical gyroscope for inertial navigation applications.", "For a more detailed description of the model, we refer to [33].", "The latter example is a finite-element model for analysis of a vibration response of the aluminium plate exited by a point load.", "The time integrator for the simulations of the full-order model and reduced-order models is described in sec:setup.", "The Newmark parameters in () are chosen as $\\gamma = \\frac{1}{2}$ and $\\beta = \\frac{1}{4}$ , which are based on the average constant acceleration assumption ensuring the unconditional stability of the method.", "For the implementation of the optimization with linear matrix inequality constraints, the YALMIP Toolbox [38] is used together with the SeDuMi solverhttps://sedumi.ie.lehigh.edu/.", "The quality analysis of the ROMs is done by comparing the state trajectories and inspecting the relative state error, which is given by the relation to the maximum norm of the state vector, $\\max \\Vert \\mathbf {x}(t) \\Vert _2$ .", "This is, $\\epsilon _{\\mathrm {err}} (t_i) = \\frac{\\left\\Vert \\mathbf {x}(t_i) - \\hat{\\mathbf {x}}(t_i)\\right\\Vert _2}{\\max _{t\\in [t_1, \\; t_N]} \\left\\Vert \\mathbf {x}(t) \\right\\Vert _2}.$ A comparison is performed for the original full-order model (FOM), the POD-reduced model (POD), the operator inference model in the second-order formulation (OpInf), and the force-informed operator inference model with constraints (cOpInf).", "All experiments were performed using  (2021a) running on an HP Probook 430 G3, 2.30 GHz  -6100U CPU, 8GB of RAM.", "Code availability The source code of the implementations and the raw data are available at https://gitlab.mpi-magdeburg.mpg.de/filanova/mechopinf." 
], [ "International space station", "The structural model of the international space station [2] is a second-order system used for vibration analysis with the state dimension $n = 135$ .", "The benchmark data are available in [45].", "As a first step towards learning intrusive POD and operator inference reduced models, we collect the training data in the time-interval $[ 0, 7s ]$ with the time step $\\Delta t = 0.01s$ and input signal $u(t) = \\sin (t)$ .", "In fig:isssvd, the normalized singular value decay is depicted for the collected snapshot matrix $X$ , defined in (REF ).", "The black dot denotes the singular value, corresponding to the order $r = 4$ , at which the truncation is done.", "The reduced order is selected so that the approximation error is at least below the threshold $10^{-2}$ .", "Figure: NO_CAPTIONThe testing is performed on a time-interval $[ 0, 21s ]$ with the same time step and input as for the training phase.", "In fig:issxc, the trajectory for the second component of the displacement vector $\\mathbf {x} (t)$ is shown.", "The curves show a good capture of the dynamics in the training phase and in the testing phase.", "fig:issxec shows it more clearly in the comparison of the relative error of the state trajectories.", "The operator inference reduction without constraints shows slightly better accuracy in the training phase, but in the testing phase, all these methods yield similar errors.", "Figure: ISS benchmark: a comparison of FOM and ROMs of order r=4r = 4.", "Black vertical line denotes the training period, used for constructing POD, cOpInf, and OpInf models.In general, both formulations of the operator inference methodology show good results for this example." ], [ "Butterfly Gyroscope", "Our next example is the butterfly gyroscope [14] which is a linear second-order model with the state dimension $n = 17 132$ .", "The benchmark data are available in [46].", "The model contains s.p.d.", "mass and stiffness matrices, the damping is modeled using the Rayleigh assumption – a model with pure stiffness damping, where the coefficients are $\\alpha _R = 0$ and $\\beta _R = 10^{-6}$ , see the equation (REF ).", "The training data is obtained by the simulation of the system on $t = [ 0 , \\, 10^{-3} ]s$ with the time step $\\Delta t = 10^{-6}s$ and input signal $u(t) = \\sin (2 \\pi f t)$ with $f =1 \\; kHz$ .", "In fig:gsvd, we depict the singular value decay of the collected snapshot matrix $X$ , defined in (REF ).", "The reduced order is selected as $r = 6$ .", "Figure: NO_CAPTIONThe testing is performed for the same time-step and input load over a longer time interval $t = [ 0, \\; 3 \\cdot 10^{-3} ]s$ .", "The qualitative comparison of the trajectories for ROMs of reduced order $r = 6$ is presented in fig:gtop for the displacement component $\\mathrm {x}_{out} = \\mathrm {x}_{3143}$ , which corresponds to one of the degrees of freedom, where the external force is applied.", "Over the whole simulation time, the ROMs are able to capture oscillations of the original system.", "To analyze the performance of the ROMs for the displacement field, we demonstrate the relative error for the state trajectories in fig:gbot.", "As for the previous benchmark, the error does not exceed 1%; therefore, we can ensure a good match of the state trajectories.", "For the whole simulation time, the force-informed formulation has slightly better accuracy than the operator inference formulation without constraints.", "However, the POD model shows a better performance than all 
non-intrusive ROMs.", "Figure: Butterfly gyroscope benchmark: a comparison of FOM and ROMs of order $r = 6$ .", "The black vertical line denotes the training period, used for constructing the POD, cOpInf, and OpInf models.", "The deterioration in the accuracy of the operator inference model compared to the previous benchmark may be explained by a more ill-conditioned least-squares problem, resulting from the high-frequency loading and the higher state dimension." ], [ "Vibrating plate", "Finally, we present the results for a model of a simply supported strutted plate excited by a point load [4], with the model data available from [3].", "This is a linear second-order model with the state dimension $n = 201900$ .", "The damping is modeled using the Rayleigh assumption, where the coefficients are $\\alpha _R = 0.01$ and $\\beta _R = 10^{-4}$ , see the equation (REF ).", "The training data is obtained by the simulation of the system on $t = [ 0 , \\, 0.5 ]s$ with the time step $\\Delta t = 10^{-3}s$ and input signal $u(t) = \\sin (2 \\pi f t)$ with $f =10 \\; Hz$ .", "In fig:psvd, we depict the singular value decay of the collected snapshot matrix $X$ , defined in (REF ).", "To ensure the desired accuracy, the reduced order is selected as $r = 110$ .", "Figure: Normalized singular value decay of the snapshot matrix $X$ for the vibrating plate model.", "The testing is performed for the same time step and input load over a longer time interval $t = [ 0, \\; 1 ]s$ .", "The qualitative comparison of the trajectories for ROMs of reduced order $r = 110$ is presented in fig:ptop for the displacement component $\\mathrm {x}_{out} = \\mathrm {x}_{176544}$ ; the relative error of the state trajectories is shown in fig:pbot.", "We can observe that the second-order operator inference formulation without constraints does not provide meaningful results for this example: the relative error blows up already during the training phase.", "In contrast, the force-informed operator inference model leads to a stable model with a relative error below $1 \\%$ .", "Figure: Vibrating plate model: a comparison of FOM and ROMs of order $r = 110$ .", "The black vertical line denotes the training period, used for constructing the POD, cOpInf, and OpInf models.", "Although the POD model performs an order of magnitude better than the operator inference model, the accuracy of the POD model fluctuates in the testing phase and intermittently reaches the level of the force-informed operator inference model.", "This confirms the need to preserve the specific mathematical properties of the original system operators.", "Moreover, in the force-informed formulation, the stability of the model is guaranteed by the imposed constraints, which is not the case for the unconstrained version." 
], [ "Conclusions", "In this paper, we have discussed extensions of the operator inference method incorporating the mechanical system structure of the governing equations.", "We presented a second-order formulation of operator inference, where the unknown operators can be identified using data.", "The asymptotic closeness of the inferred model to the corresponding intrusive POD model is also shown.", "An alternative formulation, as an optimization problem with positive semidefinite constraints for system operators, is proposed for the special case when the external force-data is available.", "The latter formulation allows ensuring stability of the inferred model.", "Both versions of operator inference provide reduced-order models that capture system dynamics very well.", "In this work we provide the results only for the displacement field.", "However, the identification of stress-strain state might also be of interest.", "For this task, the access to the corresponding deformation data is necessary.", "Using the empirical knowledge about the strain-displacement relationship, it can be learned from the given deformation snapshots.", "Moreover, so far we assumed to have derivative data (e.g., velocity and acceleration) which may not be accessible.", "Therefore, in our future work, we explore approaches to use numerical approximation tools to approximate these quantities and analyze the effect of these on learning operators.", "tocsectionReferences" ] ]
2210.07710
[ [ "When programs have to watch paint dry" ], [ "Abstract We explore type systems and programming abstractions for the safe use of resources.", "In particular, we investigate how to use types to modularly specify and check when programs are allowed to use their resources, e.g., when programming a robot arm on a production line, it is crucial that painted parts are given enough time to dry before assembly.", "We capture such temporal resources using a time-graded variant of Fitch-style modal type systems, develop a corresponding modally typed, effectful core calculus, and equip it with a graded-monadic denotational semantics illustrated by a concrete presheaf model.", "Our calculus also includes temporally-aware graded algebraic effects and effect handlers.", "The former are given a novel temporal treatment, where operations' specifications include their execution times, and their continuations know that an operation's worth of additional time has passed before they start executing, making it possible to safely access further temporal resources in them, and where effect handlers have to respect this temporal discipline." ], [ "Introduction", "The correct usage of resources is at the heart of many programs, especially if they control safety-critical machinery.", "Such resources can take many different forms: ensuring that file handles are not arbitrarily duplicated or discarded (as captured by linear and uniqueness types) [11], [25], [37], or guaranteeing that communication happens according to protocols (as specified by session types) [27], [67], or controlling how data is laid out in memory (as in calculi based on Hoare and separation logics) [3], [31], [53], [61], or assuring that resources are correctly finalised regardless of which effects programs perform or how they are handled [1], [41].", "In contrast to the above approaches that predominantly focus on how resources are used, we study how to modularly specify and verify when programs can use their resources—we call such resources temporal.", "For instance, consider the following code snippet controlling a robot arm on a (car) production line: $\\small \\begin{array}{l}\\mathsf {\\color {keywordColor}let}_{}\\; (\\text{body'},\\text{left-door'},\\text{right-door'}) = \\mathsf {paint}\\; (\\text{body},\\text{left-door},\\text{right-door}) \\;\\mathsf {\\color {keywordColor}in}\\; \\\\\\mathsf {assemble}\\; (\\text{body'},\\text{left-door'},\\text{right-door'})\\end{array}$ Here, the correct execution of the program (and thus operation of the robot arm it is controlling) relies on the car parts given enough time to dry between painting and assembly.", "Therefore, in its current form, the above code is correct only if there is the hidden hand of a compiler (or a scheduler) present that inserts enough of a time delay at compile time (resp.", "dynamically blocks the execution of this program for enough time) between the calls to the $\\mathsf {paint}$ and $\\mathsf {assemble}$ operations.", "However, in either case, one still faces the question of how to reason about the correctness of the compiled code (resp.", "the dynamic checks).", "In this paper, we focus on developing a type system based means for reasoning about the temporal correctness of the kinds of code that the above-mentioned compiler might produce, or that a programmer might write directly when full control and predictability of the low-level (embedded) code is of importance.", "In particular, we had three desiderata we set out to fulfil in this work: We did not want the 
delay between $\\mathsf {paint}$ and $\\mathsf {assemble}$ to be restricted to just blocking execution, with the robot arm sitting idly while watching paint dry.", "Instead, we wanted a flexible formalism that would allow the arm to spend that time doing other useful work, while still allowing us to reason that the right amount of time has passed before the painted parts are assembled.", "We wanted the passage of time of program execution to be modelled within the type system, rather than being left to some unspecified meta-level run-time.", "We wanted the resulting language to give programmers the freedom to redefine the behaviour of operations such as $\\mathsf {paint}$ and $\\mathsf {assemble}$ , say, via effect handling [58], while respecting the operations' temporal specifications.", "Paper structure  We achieve these goals by designing a mathematically natural core programming language for safe and correct programming with temporal resources, on the one hand, based on a time-graded, temporal variant of Fitch-style modal type systems [18], and on the other hand, on graded monads [32], [48], [64].", "We review modal types and discuss how we use them to capture temporal resources in sect:overview.", "In sect:core-calculus, we present $\\lambda _{[\\tau ]}$ —our modally typed, effectful, equationally presented core calculus for safe programming with temporal resources.", "We justify the design of $\\lambda _{[\\tau ]}$ by giving it a mathematically natural sound denotational semantics in sect:semantics, based on graded monads and adjunctions between strong monoidal functors, including a concrete presheaf example.", "In sect:delay-equations, we briefly discuss a specialisation of $\\lambda _{[\\tau ]}$ with equations for time delay operations.", "We review related work and remark on future work directions in sect:related-future-work, and conclude in sect:conclusion.", "For supplementary rigour, we are also working on writing up the results in Agda [65]—this can be found at https://github.com/danelahman/temporal-resources/releases/tag/fossacs2023-submission.", "As of now, all of sect:core-calculus and the main results of sect:semantics are in Agda, with the exception of some auxiliary results noted in prop:renaming-semantic, some because of the submission deadline and some due to a bug in Agda where with-abstractions produce ill-typed terms (eta-contraction is not type-preserving: https://github.com/agda/agda/issues/2732).", "Two laws of the presheaf model are also currently not in Agda due to the unfolding of definitions producing unmanageably large terms.", "This will need experimentation with workarounds." ], [ "Modal types for temporal resources", "We begin with an overview of (Fitch-style) modal type systems and how a time-graded variant of them naturally captures temporal aspects of resources." 
], [ "(Fitch-style) modal types", "A modal type system extends the types of an underlying type systems with new modal type formers,For brevity, we use the term modal type system to interchangeably refer to both modal type systems and natural deduction systems of (intuitionistic) modal logics.", "e.g., $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ , which states that the type $X$ is to be considered and reasoned about in a different mode compared to $X$ , which can take many forms.", "For instance, in Kripke's possible world semantics, $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ means that values of type $X$ are available in all future worlds [38]; in run-time code generation, the type $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ captures generators of $X$ -typed code [68]; and in asynchronous and distributed programming, the type $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ specifies mobile $X$ -typed values [2], [51], [60].", "Many different approaches to presenting modal type systems have been developed, with one of the main culprits being the difficulty of getting the introduction rule for $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ correct.", "Namely, bearing in mind Kripke's possible world semantics, the introduction rule for $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ must allow one to use only those hypotheses that also hold in all future worlds, while at the same time ensuring that the system still enjoys expected structural properties.", "Solutions to this problem have involved proving $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ in a context containing only $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}$ -types [59] (with a failure of structural properties in the naive approaches), or building a form of explicit substitutions into the introduction rule for $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ to give the rule premise access to only $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}$ -types [12], or incorporating the Kripke semantics in the type system by explicitly indexing types with worlds [63]—see [34] for an in-depth survey.", "In this paper, we build on Fitch-style modal type systems [15], [18], [45], where the typing rules for $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ are given with respect to another modality, Figure: NO_CAPTION, that acts on contexts, resulting in a particularly pleasant type-theoretic presentation.", "As an illustrative example, in a Fitch-style modal type system corresponding to the modal logic S4 (whose Kripke models require the order on worlds to be reflexive and transitive, thus also corresponding to natural properties of time), the typing rules for variables and the $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ type have the following formDepending on which exact modal logic one is trying to capture, the form of contexts used in the introduction/elimination rules can differ, see [18] for a detailed overview.", ": $\\small *[Lab={\\color {rulenameColor}Var}]{\\raisebox {-0.1cm}{\\includegraphics {icons/unlock.pdf}}\\notin \\Gamma ^{\\prime }}{\\Gamma , x \\unknown.", "X, \\Gamma ^{\\prime } \\vdash x : X}\\qquad *[Lab={\\color {rulenameColor}Shut}]{\\Gamma , \\raisebox {-0.1cm}{\\includegraphics {icons/unlock.pdf}}\\vdash t : X}{\\Gamma \\vdash \\mathsf {\\color {keywordColor}shut}\\;{t} : \\raisebox {-0.75mm}{\\scalebox {2}{\\square }}{X}}\\qquad *[Lab={\\color {rulenameColor}Open}]{\\Gamma \\vdash t : \\raisebox {-0.75mm}{\\scalebox {2}{\\square }}{X}}{\\Gamma ,\\Gamma 
^{\\prime } \\vdash \\mathsf {\\color {keywordColor}open}\\;{t} : X}$", "Intuitively, the context modality $\\raisebox {-0.1cm}{\\includegraphics {icons/unlock.pdf}}$ creates a barrier in the premise of Shut so that only $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}$ -typed variables can be used from $\\Gamma $ when building $t$ , achieving the above-mentioned correctness goal for the introduction rule of $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ .", "Alternatively, in the context of Kripke's possible world semantics, one can also read the occurrences of the $\\raisebox {-0.1cm}{\\includegraphics {icons/unlock.pdf}}$ modality as advancing the underlying world—this intuition will be useful for understanding how we use an analogous modality to capture the passage of time in $\\lambda _{[\\tau ]}$ .", "The context extension $\\Gamma ,\\Gamma ^{\\prime }$ in Open ensures the admissibility of structural rules.", "Regarding the earlier remark about $\\raisebox {-0.1cm}{\\includegraphics {icons/unlock.pdf}}$ , the context extension $\\Gamma ,\\Gamma ^{\\prime }$ also captures the intuition that if $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}X$ is available at any point, then it will remain so in the future." ], [ "Modal types for temporal resources", "Next, we give a high-level overview of how we use a time-graded variant of Fitch-style modal type systems to capture temporal properties of resources in $\\lambda _{[\\tau ]}$ .", "For this, we use the production line code snippet from sect:introduction as a working example.", "A naive approach  Before turning to modal types, a naive solution to achieve the desired time delay would be for $\\mathsf {paint}$ to return the required drying time and for the program to delay execution for that time duration, e.g., as expressed in $\\small \\begin{array}{l}\\mathsf {\\color {keywordColor}let}_{}\\; (\\tau _{\\text{dry}}, \\text{body'},\\text{left-door'},\\text{right-door'}) = \\mathsf {paint}\\; (\\text{body},\\text{left-door},\\text{right-door}) \\;\\mathsf {\\color {keywordColor}in}\\; \\\\\\mathsf {delay}\\;{\\tau _{\\text{dry}}};\\\\\\mathsf {assemble}\\; (\\text{body'},\\text{left-door'},\\text{right-door'})\\end{array}$", "It is not difficult to see that we could generalise this solution to allow performing other useful activities while waiting for $\\tau _{\\text{dry}}$ time to pass.", "So are we done and can we conclude the paper here?", "Well, no, because this solution puts all the burden for writing correct code on the shoulders of the programmer, with successful typechecking giving no additional guarantees that $\\tau _{\\text{dry}}$ indeed will have passed.", "A temporal resource type  Instead, inspired by Fitch-style modal type systems and Kripke's possible worlds semantics reading of the $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}$ -modality, we propose a temporal resource type, written $[ \\tau ]\\, X$ , to specify that a value of type $X$ will become available for use in at most $\\tau $ time units, or to put it differently, the boxed value of type $X$ can be explicitly unboxed only when at least $\\tau $ time units have passed (here $\\tau $ is a natural number that counts discrete time moments, say, seconds).", "Concretely, $[ \\tau ]\\, X$ is presented by the following two typing rules: $\\small *[Lab={\\color {rulenameColor}Box}]{\\Gamma , \\langle \\tau \\rangle \\vdash V : X}{\\Gamma \\vdash \\mathsf {\\color {keywordColor}box}_{\\tau }\\,V : [ \\tau ]\\, X}\\quad *[Lab={\\color {rulenameColor}Unbox}]{\\tau \\le \\mathsf {time}\\; \\Gamma \\\\\\Gamma \\mathbin 
{-}\\tau \\vdash V : [ \\tau ]\\, X \\\\\\Gamma , x : X \\vdash N : Y \\mathbin {!} \\tau ^{\\prime }}{\\Gamma \\vdash \\mathsf {\\color {keywordColor}unbox}_{\\tau }\\; V \\;\\mathsf {\\color {keywordColor}as}\\; x \\;\\mathsf {\\color {keywordColor}in}\\; N : Y \\mathbin {!} \\tau ^{\\prime }}$", "Analogously to the context modality $\\raisebox {-0.1cm}{\\includegraphics {icons/unlock.pdf}}$ of Fitch-style modal type systems, we introduce a similar modality on contexts, written $\\langle \\tau \\rangle $ , to express that when typechecking a term of the form $\\Gamma , \\langle \\tau \\rangle \\vdash V : X$ , we can safely assume that at least $\\tau $ time will have passed before $V$ is accessed or executed, as in the premise of the Box rule.", "Accordingly, in the Unbox rule, we require that at least $\\tau $ time units have passed since the resource $V$ of type $[ \\tau ]\\, X$ was created or brought into scope, before we can safely unbox it and use it in the continuation $N$ as $x$ .", "Encapsulating temporal resources as a type gives us flexible first-class access to them, and allows us to pack them in data structures and pass them to functions.", "Modelling passage of time  As we see in the Unbox rule, we can unbox a temporal resource only when enough time has passed since its creation.", "This begs the question: how do we model the passage of time within the type system?", "For this, we propose a new notion of temporally-aware graded algebraic effects, where each operation $\\mathsf {op}$ is specified not only by its parameter and result types, but also by the operation's prescribed execution time, and with $\\mathsf {op}$ 's continuation knowing that $\\mathsf {op}$ 's worth of additional time has passed before it begins executing.", "We refer the reader to [9], [28], [32], [57] for background on ordinary (graded) algebraic effects.", "For instance, the $\\mathsf {paint}$ operation, taking $\\tau _{\\text{paint}}$ time, is typed in our $\\lambda _{[\\tau ]}$ as follows (we present $\\lambda _{[\\tau ]}$ formally using algebraic operations with explicit continuations, while in code snippets we use so-called generic effects [56] without explicit continuations): $\\small *[Lab={\\color {rulenameColor}}]{\\Gamma \\vdash V : \\mathsf {Car} \\times \\mathsf {Door} \\times \\mathsf {Door} \\\\\\Gamma , \\langle \\tau _{\\text{paint}} \\rangle ,x : [ \\tau _{\\text{dry}} ]\\, \\mathsf {Car} \\times [ \\tau _{\\text{dry}} ]\\, \\mathsf {Door} \\times [ \\tau _{\\text{dry}} ]\\, \\mathsf {Door}\\vdash M : X \\mathbin {!} \\tau }{\\Gamma \\vdash \\mathsf {paint}\\;V\\; (x \\,.\\, M) : X \\mathbin {!} (\\tau _{\\text{paint}} + \\tau )}$", "Here, $\\langle \\tau _{\\text{paint}} \\rangle $ expresses that from the perspective of any $\\mathsf {\\color {keywordColor}unbox}$ es in $M$ , an additional $\\tau _{\\text{paint}}$ time will have passed compared to the beginning of the execution of $\\mathsf {paint}\\;V\\; (x \\,.\\, M)$ , which is typed in the “earlier” context $\\Gamma $ .", "Also, observe that $\\mathsf {paint}$ 's result $x$ is available after $\\tau _{\\text{paint}}$ time has passed (i.e., after $\\mathsf {paint}$ finishes), and its type has the car part types wrapped as temporal resources, ensuring that any further operations (e.g., $\\mathsf {assemble}$ ) can access them only after at least $\\tau _{\\text{dry}}$ time has passed after $\\mathsf {paint}$ finishes.", "The $\\mathsf {delay}\\; \\tau $ operation is typed analogously.", "Finally, similarly to algebraic 
operations, we also use the context modality $\\langle \\tau \\rangle $ to model the passage of time in sequential composition, as specified in $\\small *[Lab={\\color {rulenameColor}}]{\\Gamma \\vdash M : X \\mathbin {!} \\tau \\\\\\Gamma , \\langle \\tau \\rangle , x : X \\vdash N : Y \\mathbin {!} \\tau ^{\\prime }}{\\Gamma \\vdash \\mathsf {\\color {keywordColor}let}_{}\\; x = M \\;\\mathsf {\\color {keywordColor}in}\\; N : Y \\mathbin {!} \\tau + \\tau ^{\\prime }}$", "We use graded monads style effect types [32] (the $\\mathbin {!}\\; \\tau $ part of $X \\mathbin {!} \\tau $ ) to specify and track the execution time of computations; a small executable sketch of this time accounting is given below.", "The novelty of our work is to use this effect information to inform continuations that they can safely assume that the given amount of additional time has passed before they start executing.", "Putting it all together  We conclude this overview by revisiting our production line code snippet from sect:introduction and note that in our $\\lambda _{[\\tau ]}$ we can write it as follows: $\\small \\begin{array}{l}\\mathsf {\\color {keywordColor}let}_{}\\; (\\text{body'},\\text{left-door'},\\text{right-door'}) = \\mathsf {paint}\\; (\\text{body},\\text{left-door},\\text{right-door}) \\;\\mathsf {\\color {keywordColor}in}\\; \\\\\\mathsf {delay}\\; \\tau _{\\text{dry}};\\\\\\mathsf {\\color {keywordColor}unbox}_{}\\; \\text{body'} \\;\\mathsf {\\color {keywordColor}as}\\; \\text{body''} \\;\\mathsf {\\color {keywordColor}in}\\; \\\\\\mathsf {\\color {keywordColor}unbox}_{}\\; \\text{left-door'} \\;\\mathsf {\\color {keywordColor}as}\\; \\text{left-door''} \\;\\mathsf {\\color {keywordColor}in}\\; \\\\\\mathsf {\\color {keywordColor}unbox}_{}\\; \\text{right-door'} \\;\\mathsf {\\color {keywordColor}as}\\; \\text{right-door''} \\;\\mathsf {\\color {keywordColor}in}\\; \\\\\\mathsf {assemble}\\; (\\text{body''},\\text{left-door''},\\text{right-door''})\\end{array}$ Observe that apart from the $\\mathsf {\\color {keywordColor}unbox}$ operations, the code looks identical to the naive, unsafe solution discussed earlier.", "However, crucially, now any code that wants to use the outputs of $\\mathsf {paint}$ will typecheck only if these resources are accessed after at least $\\tau _{\\text{dry}}$ time units have passed after $\\mathsf {paint}$ finishes.", "In the code snippet, this is achieved by blocking execution with $\\mathsf {delay}\\; \\tau _{\\text{dry}}$ for $\\tau _{\\text{dry}}$ time units, but this could have been equally well achieved by executing other useful operations $\\mathsf {op}_1 ;\\; \\ldots ;\\; \\mathsf {op}_n$ , as long as they collectively take at least $\\tau _{\\text{dry}}$ time." ], [ "A calculus for programming with temporal resources", "We now recast the ideas explained above as a formal, modally typed, effectful core calculus, called $\\lambda _{[\\tau ]}$ .", "We base it on the fine-grain call-by-value $\\lambda $ -calculus [42]." 
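To make the time accounting of the overview concrete before the formal definitions, here is a minimal Python simulation of the graded discipline. It is an illustrative sketch only; the representation and names are our own assumptions and it models the dynamics of grades, not the type system itself. Computations carry their execution time, sequential composition sums the grades, and operation calls contribute their prescribed durations:

from dataclasses import dataclass

@dataclass
class Comp:
    value: object   # the returned value
    tau: int        # execution time: a computation of type X ! tau

def ret(v):
    # Returning a value takes zero time (as in the Return rule).
    return Comp(v, 0)

def let(m, f):
    # Sequential composition: if m : X ! tau1 and f yields Y ! tau2, the
    # composite has grade tau1 + tau2.  In the calculus, the continuation is
    # typechecked under Gamma, <tau1>, i.e., it may assume tau1 additional
    # time units have passed before it starts executing.
    n = f(m.value)
    return Comp(n.value, m.tau + n.tau)

def op(result, tau_op):
    # An algebraic operation call whose signature prescribes execution time tau_op.
    return Comp(result, tau_op)

# The production-line example: paint takes 5 units, drying needs 3, assemble 2.
prog = let(op("painted parts", 5),
           lambda parts: let(op(None, 3),                  # delay 3
                             lambda _: op("assembled car", 2)))
assert prog.tau == 5 + 3 + 2   # the grade of the whole program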
], [ "Types", "The types of $\\lambda _{[\\tau ]}$ are given in fig:types.", "Ground types include base types $\\mathsf {b}$ , and are closed under finite products and the modal temporal resource type $[ \\tau ]\\, A$ .", "The latter denotes that an $A$ -typed value will become available in at most $\\tau $ time units, where $\\tau \\in \\mathbb {N}$ counts discrete time moments.For concreteness, we work with $(\\mathbb {N}, 0, +, \\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}, \\le )$ for time-grades, but we do not foresee problems generalising these to come from other analogous algebraic structures.", "The ground types can also come with constants $\\mathsf {f}$ with associated constant signatures $\\mathsf {f} : (A_1,\\ldots ,A_n) \\rightarrow B$ .", "To model operations such as $\\mathsf {paint}$ and $\\mathsf {assemble}$ discussed in sect:overview-temporal-resources, we assume a set of operations symbols $\\mathcal {O}$ , with each $\\mathsf {op}\\in \\mathcal {O}$ assigned an operation signature $\\mathsf {op}: A_\\mathsf {op} \\leadsto B_\\mathsf {op} \\mathbin {!}", "\\tau _\\mathsf {op}$ , which specifies that $\\mathsf {op}$ accepts inputs of type $A_\\mathsf {op}$ , returns values of type $B_\\mathsf {op}$ , and its execution takes $\\tau _\\mathsf {op}$ time units.", "Observe that by typing operations with ground types, as opposed to simply with base types, we can specify operations such as $\\mathsf {paint} : \\mathsf {Part} \\leadsto ([ \\tau _{\\text{dry}} ]\\, \\mathsf {Part}) \\mathbin {!}", "\\tau _{\\text{paint}}$ , returning values that can be accessed only after a certain amount of time, here, after $\\tau _{\\text{dry}}$ .", "Value types extend ground types with function type $X \\rightarrow Y \\mathbin {!", "}\\tau $ that specifies functions taking $X$ -typed arguments to computations that return $Y$ -typed values and take $\\tau $ time to execute, as expressed by the computation type $Y \\mathbin {!", "}\\tau $ .", "Figure: Types of λ [τ] \\lambda _{[\\tau ]}." 
], [ "Terms", "The syntax of terms is given in fig:terms, separated into values and computations.", "Figure: Values, computations, and effect handlers of λ [τ] \\lambda _{[\\tau ]}.Values include variables, constants, finite tuples, functions, and the boxing up of temporal resources, $\\mathsf {\\color {keywordColor}box}_{\\tau }\\,V$ , which allows us to consider an arbitrary value $V$ as a temporal resource as long as it is safe to access $V$ after $\\tau $ time units.", "Computations include returning values, sequential composition, function applications, pattern-matchingThe form $\\mathsf {\\color {keywordColor}let}_{}\\; (x,y,z) = M \\;\\mathsf {\\color {keywordColor}in}\\; N$ in sect:introduction, is the natural combination of $\\mathsf {\\color {keywordColor}let}$ and $\\mathsf {\\color {keywordColor}match}$ ., algebraic operation calls, effect handling, and the unboxing of temporal resources, where given a temporal resource $V$ of type $[ \\tau ]\\, X$ , the computation $\\mathsf {\\color {keywordColor}unbox}_{\\tau }\\; V \\;\\mathsf {\\color {keywordColor}as}\\; x \\;\\mathsf {\\color {keywordColor}in}\\; N$ is used to access the underlying value of type $X$ if at least $\\tau $ time units have passed since the creation of the resource $V$ .", "In addition to user-specifiable operation calls (via operation signatures and effect handling), we include a separate $\\mathsf {\\color {keywordColor}delay}\\;\\tau \\; M$ operation that blocks the execution of its continuation for the given amount of time.", "For simplicity, we require effect handlers to have operation clauses $M_\\mathsf {op}$ for all $\\mathsf {op}\\in \\mathcal {O}$ , but we do not allow $\\mathsf {\\color {keywordColor}delay}$ s to be handled in light of the equations we want of them in sect:delay-equations, where all consecutive $\\mathsf {\\color {keywordColor}delay}$ s are collapsed and all zero-$\\mathsf {\\color {keywordColor}delay}$ s are removed." 
], [ "Type system", "We now equip $\\lambda _{[\\tau ]}$ with a modal type-and-effect system.", "On the one hand, for modelling temporal resources, we build on Fitch-style modal type systems [18].", "On the other hand, for modelling effectful computations and their specifications, we build on type-and-effect systems for calculi based on graded monads [32].", "The typing judgement forms are written as $\\Gamma \\vdash V : X$ and $\\Gamma \\vdash M : X \\mathbin {!", "}\\tau $ , where $\\tau $ specifies $M$ 's execution time and $\\Gamma $ is a temporal typing context: $\\Gamma \\mathrel {\\;{:}{:}{=}\\ }\\cdot \\mathrel {\\;\\big |\\ \\ }\\Gamma , x \\unknown.", "X \\mathrel {\\;\\big |\\ \\ }\\Gamma , \\langle \\tau \\rangle $ Here, $\\langle \\tau \\rangle $ is a temporal modality on contexts, akin to $\\raisebox {-0.1cm}{\\includegraphics {icons/unlock.pdf}}$ in Fitch-style modal type systems.", "We use it to express that when typechecking a term of the form $\\Gamma , \\langle \\tau \\rangle \\vdash V : X$ , we can safely assume that at least $\\tau $ time will have passed before the resource $V$  is accessed or executed.", "The rules defining these judgements are given in fig:typing-rules.", "Below we only comment on the temporally significant ones.", "Figure: Typing rules of λ [τ] \\lambda _{[\\tau ]}.Observe that in contrast to Fitch-style modal type systems discussed in sect:overview-modal-types, Var does not restrict the $\\Gamma ^{\\prime }$ right of $x$ to not include any temporal modalities.", "This is so because in Kripke's possible worlds reading of $\\lambda _{[\\tau ]}$ (see sect:semantics) we treat all types as being monotone with respect to time—this is not usually the case for formulae in modal logics such as S4, but in $\\lambda _{[\\tau ]}$ this models that once any value is available it will remain so (we leave resource expiration for future work).", "As in systems based on graded monads, Return specifies that returning a value takes zero time, and Let that the execution time of sequentially composed computations is the sum of the individual ones.", "Novel to $\\lambda _{[\\tau ]}$ , Let, Op, Delay, and Handle state that the continuations can safely assume that relevant amount of additional time has passed before they start executing, as discussed in sect:overview-temporal-resources.", "When typing the operation clauses $M_\\mathsf {op}$ in Handle, we universally quantify (at the meta-level) over the time variable $\\tau ^{\\prime \\prime }$ that denotes the execution time of the continuation $k$ of $M_\\mathsf {op}$ .", "We do so because we must be able to execute the operation clauses $M_\\mathsf {op}$ at any point when effect handling recursively traverses $M$ .", "Further, observe that the continuation $k$ is wrapped inside a resource type.", "This ensures that $k$ is invoked only after $\\tau _\\mathsf {op}$ amount of time has been spent in $M_\\mathsf {op}$ , thus guaranteeing that the effect handlers respect the temporal discipline.", "Note that this enforces a linear discipline for our effect handlers: for a non-zero $\\tau _\\mathsf {op}$ , $k$ must be executed exactly once for $M_\\mathsf {op}$ 's execution time to match $\\tau _\\mathsf {op}+ \\tau ^{\\prime \\prime }$ .", "Finally, Box specifies that in order to box up a value $V$ of type $X$ as a temporal resource of type $[ \\tau ]\\, X$ , we must be able to type $V$ when assuming that $\\tau $ additional time units will have passed before $V$ is accessed.", "At the same time, Unbox specifies that we can 
unbox a temporal resource $V$ of type $[ \\tau ]\\, X$ only if at least $\\tau $ time units have passed since its creation: the time captured by $\\Gamma $ must be at least $\\tau $ , and we must be able to type $V$ in a $\\tau $ time units earlier context $\\Gamma \\mathbin {-}\\tau $ .", "The time captured by a context, $\\mathsf {time}\\; \\Gamma $ , is calculated recursively as $\\begin{array}{c}\\mathsf {time}\\; \\cdot \\mathrel {\\overset{\\text{\\tiny def}}{=}}0\\qquad \\mathsf {time}\\; (\\Gamma , x : X) \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\mathsf {time}\\; \\Gamma \\qquad \\mathsf {time}\\; (\\Gamma , \\langle \\tau \\rangle ) \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\mathsf {time}\\; \\Gamma + \\tau \\end{array}$ and the “time-travelling” operation $\\Gamma \\mathbin {-}\\tau $ as (where $\\tau _+ \\equiv 1 + \\tau ^{\\prime \\prime }$ for some $\\tau ^{\\prime \\prime }$ ) $\\begin{array}{c}\\Gamma \\mathbin {-}0 \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\Gamma \\qquad \\cdot \\mathbin {-}\\, \\tau _+ \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\cdot \\qquad (\\Gamma , x : X) \\mathbin {-}\\tau _+ \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\Gamma \\mathbin {-}\\tau _+\\\\[1ex](\\Gamma , \\langle \\tau ^{\\prime } \\rangle ) \\mathbin {-}\\tau _+ \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\mathsf {if}\\;\\tau _+ \\le \\tau ^{\\prime }\\;\\mathsf {then}\\;\\Gamma , \\langle \\tau ^{\\prime } \\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}\\tau _+ \\rangle \\;\\mathsf {else}\\;\\Gamma \\mathbin {-}(\\tau _+ \\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}\\tau ^{\\prime })\\end{array}$ taking $\\Gamma $ to an earlier state by removing $\\tau $ worth of modalities and variables, and where we write $\\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}$ for the truncated subtraction operation on natural numbers." 
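These two context operations translate directly into executable form. The following Python transcription mirrors the defining equations, including the truncated subtraction; the list-based representation of contexts, with ('var', name) entries and ('tick', tau) entries for the modality, is our own assumption:

def ctx_time(ctx):
    """time Gamma: the total amount of time captured by the modalities."""
    return sum(payload for (kind, payload) in ctx if kind == 'tick')

def ctx_minus(ctx, tau):
    """Gamma - tau: travel tau time units back by peeling off modalities
    and the variables bound after them, following the recursive definition."""
    if tau == 0:
        return list(ctx)
    if not ctx:
        return []
    *rest, (kind, payload) = ctx
    if kind == 'var':                      # (Gamma, x : X) - tau = Gamma - tau
        return ctx_minus(rest, tau)
    if tau <= payload:                     # (Gamma, <t'>) - tau = Gamma, <t' - tau>
        return rest + [('tick', payload - tau)]
    return ctx_minus(rest, tau - payload)  # else Gamma - (tau - t')

# Example: Gamma = x : X, <3>, y : Y, <2>
ctx = [('var', 'x'), ('tick', 3), ('var', 'y'), ('tick', 2)]
assert ctx_time(ctx) == 5
assert ctx_minus(ctx, 4) == [('var', 'x'), ('tick', 1)]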
], [ "Admissibility of renamings and substitutions", "As contexts involve temporal modalities, proving that expected structural and substitution rules [7] are admissible for well-typed terms is somewhat involved.", "The typing relations $\\Gamma \\vdash V : X$ and $\\Gamma \\vdash M : X \\mathbin {!", "}\\tau $ are closed under standard structural rules of weakening, exchange of consecutive variables, and contraction (omitted here).", "Furthermore, both typing relations are also closed under rules making $\\langle - \\rangle $ into a strong monoidal functor (with a co-strength) [43]: $\\begin{prooftree}[rule style=double]{{\\Gamma , \\langle 0 \\rangle } \\vdash J}1{\\Gamma \\vdash J}\\end{prooftree}\\qquad \\begin{prooftree}[rule style=double]{\\Gamma , \\langle \\tau _1 + \\tau _2 \\rangle \\vdash J}1{\\Gamma , \\langle \\tau _1 \\rangle , \\langle \\tau _2 \\rangle \\vdash J}\\end{prooftree}\\qquad \\begin{prooftree}{{\\Gamma , \\langle \\tau \\rangle } \\vdash J \\quad \\tau \\le \\tau ^{\\prime }}1{\\Gamma , \\langle \\tau ^{\\prime } \\rangle \\vdash J}\\end{prooftree}\\qquad \\begin{prooftree}{{\\Gamma , \\langle \\tau \\rangle , x \\unknown.", "X} \\vdash J}1{\\Gamma , x \\unknown.", "X, \\langle \\tau \\rangle \\vdash J}\\end{prooftree}$ where $\\Gamma \\vdash J$ ranges over both typing relations, where the first two rules hold in both directions, and the last rule expresses that if we can type $J$ using a variable “now”, we can also type $J$ if that variable was brought into scope “earlier”.", "We prove the admissibility of these rules by first inductively defining a renaming relation $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ , and then proving by induction that if $\\Gamma \\vdash J$ and $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ then also $\\Gamma ^{\\prime } \\vdash J[\\rho ]$ , where $J[\\rho ]$ is the action of renaming $J$ with $\\rho $ .", "The $\\Gamma \\leadsto \\Gamma ^{\\prime }$ relation is defined as the reflexive-transitive-congruent closure of rules corresponding to the desired structural rules, e.g., $\\mathsf {var}^r_{x \\unknown.", "X \\in \\Gamma } : \\Gamma , y \\unknown.", "X \\leadsto \\Gamma $ and $\\mu ^r : \\Gamma , \\langle \\tau _1 + \\tau _2 \\rangle \\leadsto \\Gamma , \\langle \\tau _1 \\rangle , \\langle \\tau _2 \\rangle $ .", "The full list is given in sect:appendix-renamings.", "For the Var and Unbox cases of the proof, we show that if $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ and $x \\in _\\tau \\Gamma $ , then $\\rho \\, x \\in _{\\tau ^{\\prime }} \\Gamma ^{\\prime }$ for some $\\tau ^{\\prime }$ with $\\tau \\le \\tau ^{\\prime }$ , where $x \\in _\\tau \\Gamma $ means that $x \\in \\Gamma $ and there is $\\tau $ worth of modalities right of $x$ in $\\Gamma $ , and $\\rho \\, x$ is the variable that $\\rho $ maps $x$ to.", "For Unbox, we further prove that if $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ , then for any $\\tau $ we can build $\\rho \\mathbin {-}\\tau : \\Gamma \\mathbin {-}\\tau \\leadsto \\Gamma ^{\\prime } \\mathbin {-}\\tau $ , using the result about $\\in _{\\tau }$ to ensure that $\\rho $ does not map any $x \\in \\Gamma \\mathbin {-}\\tau $ outside of $\\Gamma ^{\\prime } \\mathbin {-}\\tau $ .", "We also establish that if $\\Gamma \\leadsto \\Gamma ^{\\prime }$ , then $\\mathsf {time}\\; \\Gamma \\le \\mathsf {time}\\; \\Gamma ^{\\prime }$ , allowing us to deduce $\\tau \\le \\mathsf {time}\\; \\Gamma ^{\\prime }$ from $\\tau \\le \\mathsf {time}\\; \\Gamma $ .", "The admissibility of the rules 
corresponding to $\\mu ^r$ (and its inverse) relies on us having defined context splitting in Unbox using $\\Gamma \\mathbin {-}\\tau $ , as opposed to more rigidly as $\\Gamma ,\\Gamma ^{\\prime }$ , as in [18], since then it would be problematic if the split happens between $\\langle \\tau _1 \\rangle ,\\langle \\tau _2 \\rangle $ .", "The inverses of the last two rules in thm:renaming are not valid—they would allow unboxing temporal resources without enough time having passed.", "The typing relations $\\Gamma \\vdash V : X$ and $\\Gamma \\vdash M : X \\mathbin {!} \\tau $ are closed under substitution, i.e., if $\\Gamma , x : X, \\Gamma ^{\\prime } \\vdash J$ and $\\Gamma \\vdash W : X$ , then $\\Gamma , \\Gamma ^{\\prime } \\vdash J[W/x]$ , where $J[W/x]$ is standard, recursively defined capture-avoiding substitution [7].", "The proof proceeds by induction on the derivation of $\\Gamma , x : X, \\Gamma ^{\\prime } \\vdash J$ .", "The most involved case is Unbox, where we construct the derivation of $\\Gamma ,\\Gamma ^{\\prime } \\vdash \\mathsf {\\color {keywordColor}unbox}_{\\tau }\\; V[W/x] \\;\\mathsf {\\color {keywordColor}as}\\; y \\;\\mathsf {\\color {keywordColor}in}\\; N[W/x] : Y \\mathbin {!} \\tau ^{\\prime }$ by first analysing whether $\\tau \\le \\mathsf {time}\\; \\Gamma ^{\\prime }$ , which tells us whether $x$ is in the context $(\\Gamma , x : X, \\Gamma ^{\\prime }) \\mathbin {-}\\tau $ of $V$ , based on which we learn whether $W$ continues to be substituted for $x$ in $V$ or whether $V[W/x] = V$ ." ], [ "Equational theory", "We conclude the definition of $\\lambda _{[\\tau ]}$ by equipping it with an equational theory to reason about program equivalence, defined using judgements $\\Gamma \\vdash V \\equiv W : X$ and $\\Gamma \\vdash M \\equiv N : X \\mathbin {!} \\tau $ , where we presuppose that the terms are well-typed for the given contexts and types.", "The rules defining these relations are given in fig:equations.", "We omit standard equivalence, congruence, and substitutivity rules.", "Figure: Equational theory (omitting contexts and types where no confusion arises).", "The equational theory consists of standard $\\beta /\\eta $ -equations for the unit, product, and function types.", "We also include monadic equations for $\\mathsf {\\color {keywordColor}return}$ and $\\mathsf {\\color {keywordColor}let}$  [49].", "For $\\mathsf {op}$ and $\\mathsf {\\color {keywordColor}delay}$ , we include algebraicity equations allowing us to pull them out of $\\mathsf {\\color {keywordColor}let}$  [9].", "For $\\mathsf {\\color {keywordColor}handle}$ , we include equations expressing that effect handling recursively traverses a term, replacing each $\\mathsf {op}$ -occurrence with the operation clause $M_\\mathsf {op}$ , leaving $\\mathsf {\\color {keywordColor}delay}$ s untouched, and finally executes the continuation $N$ when reaching return values [58].", "Finally, we include $\\beta $ /$\\eta $ -equations for $\\mathsf {\\color {keywordColor}box}$ and $\\mathsf {\\color {keywordColor}unbox}$ , which are similar to corresponding equations in Fitch-style modal $\\lambda $ -calculi [18], and express that $\\mathsf {\\color {keywordColor}unbox}$ behaves as a pattern-matching elimination form for $\\mathsf {\\color {keywordColor}box}$ ." 
], [ "Denotational semantics", "We justify the design of $\\lambda _{[\\tau ]}$ by giving it a natural semantics based on adjunctions between strong monoidal functors [43] (modelling modalities) and a strong graded monad [32] (modelling computations).To be precise, graded monads are can be viewed as lax monoidal functors [32], meaning that we are in fact only using variants of monoidal functors to model $\\lambda _{[\\tau ]}$ .", "More specifically, we use a temporal notion of $[ - ]$ -strength for the graded monad, see below.", "We assume general knowledge of category theory, only spelling out details specific to modelling $\\lambda _{[\\tau ]}$ .", "To optimise for space, we discuss the abstract model structure simultaneously with a concrete example using presheaves [40], but note that the interpretation is defined, and its soundness proved, with respect to the abstract structure.", "When referring to the abstract model structure, we denote the underlying category with $\\mathbb {C}$ .", "Meanwhile, the concrete presheaf example is given in $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ , consisting of functors from $(\\mathbb {N},\\le )$ to the category $\\mathsf {Set}$ of sets and functions.", "The model in $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ is similar to Kripke's possible worlds semantics, with the exception that in $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ all objects are monotone for $\\le $ , i.e., for any $A \\in \\mathsf {Set}^{(\\mathbb {N},\\le )}$ , we have functions $A(t_1 \\le t_2) : A(t_1) \\rightarrow A(t_2)$ respecting $\\le $ (i.e., reflexivity and transitivity), whereas Kripke models are commonly given by discretely indexed presheaves, i.e., in $\\mathsf {Set}^{\\mathbb {W}}$ for a set of worlds $\\mathbb {W}$ , and it is only the modalities that move between worlds.", "For $\\lambda _{[\\tau ]}$ , working in $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ gives us that when a resource becomes available, it will remain so without need for reboxing, leading to a more natural system for temporal resources and a simpler Var rule." 
], [ "Interpretation of types", "Value types and contexts  To interpret value types, we require the category $\\mathbb {C}$ to have finite products $(\\mathbb {1}, A \\times B)$ and exponentials $A \\Rightarrow B$ , so as to model the unit, product, and function types.", "In $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ , the former are given point-wise using the finite products in $\\mathsf {Set}$ , and the latter are given as $(A \\Rightarrow B)(t) \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\mathsf {Set}^{(\\mathbb {N},\\le )}(\\mathsf {hom}\\, t \\times A, B)$ , where $\\mathsf {hom}\\, t : (\\mathbb {N},\\le ) \\rightarrow \\mathsf {Set}$ is the covariant hom-functor for $(\\mathbb {N},\\le )$ , given by $\\mathsf {hom}\\, t \\mathrel {\\overset{\\text{\\tiny def}}{=}}t \\le (-)$  [40].", "When unfolding it further, the above means that $(A \\Rightarrow B)(t)$ is the set of functions $(f_{t^{\\prime }} : A(t^{\\prime }) \\rightarrow B(t^{\\prime }))_{t^{\\prime } \\in \\lbrace t^{\\prime } \\in \\mathbb {N} \\vert t \\le t^{\\prime }\\rbrace }$ that are natural in $t^{\\prime }$ , capturing the intuition that in $\\lambda _{[\\tau ]}$ functions can be applied in any future context.", "For base types, we require an object $[\\!", "[\\mathsf {b}]\\!", "]$ of $\\mathbb {C}$ for each $\\mathsf {b}$ .", "To interpret the temporal resource type, we require a strong monoidal functor $[ - ] : (\\mathbb {N},\\le ) \\rightarrow [ \\mathbb {C}, \\mathbb {C}]$ , where $[ \\mathbb {C}, \\mathbb {C}]$ is the category of endofunctors on $\\mathbb {C}$ .", "This means that we have functors $[ \\tau ]: \\mathbb {C}\\rightarrow \\mathbb {C}$ , for all $\\tau \\in \\mathbb {N}$ , together with morphisms $[ \\tau _1 \\le \\tau _2 ]_A : [ \\tau _1 ] A \\rightarrow [ \\tau _2 ] A$ , natural in $A$ and respecting $\\le $ .", "Strong monoidality of $[ - ]$ means that we have natural isomorphisms ${\\varepsilon _A : [ 0 ] A \\overset{\\cong }{\\rightarrow } A}$ and $\\delta _{A,\\tau _1,\\tau _2} :[ \\tau _1 + \\tau _2 ] A \\overset{\\cong }{\\rightarrow } [ \\tau _1 ] ([ \\tau _2 ] A)$ , satisfying time-graded variants of comonad laws [10]: $\\small \\begin{array}{c}\\varepsilon \\circ \\delta _{A,0,\\tau } \\equiv \\mathsf {id}\\qquad [ \\tau ](\\varepsilon ) \\circ \\delta _{A,\\tau ,0} \\equiv \\mathsf {id}\\qquad \\delta _{[ \\tau _3 ] A,\\tau _1,\\tau _2} \\circ \\delta _{A,\\tau _1 + \\tau _2,\\tau _3}\\equiv [ \\tau _1 ] (\\delta _{A,\\tau _2,\\tau _3}) \\circ \\delta \\end{array}$ We also require $(\\delta _{A,\\tau _1,\\tau _2},\\delta ^{-1}_{A,\\tau _1,\\tau _2}) $ to be monotone in $\\tau _1, \\tau _2$ , i.e., if $\\tau _1 \\le \\tau ^{\\prime }_1$ and $\\tau _2 \\le \\tau ^{\\prime }_2$ , then $[ \\tau ^{\\prime }_1 ] ([ \\tau _2 \\le \\tau ^{\\prime }_2 ]) \\circ [ \\tau _1 \\le \\tau ^{\\prime }_1 ] \\circ \\delta \\equiv \\delta \\circ [ \\tau _1 + \\tau _2 \\le \\tau ^{\\prime }_1 + \\tau ^{\\prime }_2 ]_A$ .", "We omit the indices of the components of natural transformations when convenient.", "In $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ , we define $([ \\tau ]A)(t) \\mathrel {\\overset{\\text{\\tiny def}}{=}}A(t + \\tau )$ , with $[ \\tau ]A$ -values given by future $A$ -values, and with $(\\varepsilon _A,\\varepsilon _A^{-1},\\delta _A,\\delta _A^{-1})$ given by identities on $A$ -values, combined with the laws of $(0,+)$ , e.g., as $(\\varepsilon _A)_t\\, \\big (a \\in ([ 0 ]A)(t) \\equiv A(t + 0)\\big ) \\mathrel {\\overset{\\text{\\tiny def}}{=}}a \\in A(t)$ .", "Using the above, we interpret 
a value type $X$ as an object $[\\![X]\\!]$ of $\\mathbb {C}$ , as $\\begin{array}{c}[\\![A]\\!] \\mathrel {\\overset{\\text{\\tiny def}}{=}}[\\![A]\\!]^g\\qquad [\\![\\mathsf {unit}]\\!] \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\mathbb {1}\\qquad [\\![X \\times Y]\\!] \\mathrel {\\overset{\\text{\\tiny def}}{=}}[\\![X]\\!] \\times [\\![Y]\\!]\\\\[0.5ex][\\![X \\rightarrow Y \\mathbin {!}\\tau ]\\!] \\mathrel {\\overset{\\text{\\tiny def}}{=}}[\\![X]\\!] \\Rightarrow T\\,\\tau \\, [\\![Y]\\!]\\qquad [\\![[ \\tau ]\\, X]\\!] \\mathrel {\\overset{\\text{\\tiny def}}{=}}[ \\tau ][\\![X]\\!]\\end{array}$ where $T$ is a graded monad for modelling computations---we return to it below.", "The interpretation $[\\![A]\\!]^g$ of ground types is defined similarly, so we omit it here.", "Next, we define the interpretation of contexts, for which we require another strong monoidal functor, $\\langle - \\rangle : (\\mathbb {N},\\le )^{\\text{op}} \\rightarrow [ \\mathbb {C}, \\mathbb {C}]$ .", "Note that $\\langle - \\rangle $ is contravariant—this enables us to model the structural rules that allow terms typed in an earlier context to be used in future ones (see thm:renaming).", "We denote the strong monoidal structure of $\\langle - \\rangle $ by $\\eta _A : A \\overset{\\cong }{\\rightarrow } \\langle 0 \\rangle A$ and $\\mu _{A, \\tau _1, \\tau _2} : \\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle A)\\overset{\\cong }{\\rightarrow } \\langle \\tau _1 + \\tau _2 \\rangle A$ , required to satisfy time-graded variants of monad laws [43], given by $\\mu _{\\!A,0,\\tau } \\circ \\eta _{\\langle \\tau \\rangle A} \\equiv \\mathsf {id}\\quad \\mu _{\\!A,\\tau ,0} \\circ \\langle \\tau \\rangle (\\eta ) \\equiv \\mathsf {id}\\quad \\mu \\circ \\mu _{\\langle \\tau _3 \\rangle A,\\tau _1,\\tau _2} \\equiv \\mu \\circ \\langle \\tau _1 \\rangle (\\mu _{\\!A,\\tau _2,\\tau _3})$ , and $(\\mu _{A, \\tau _1, \\tau _2},\\mu ^{-1}_{A, \\tau _1, \\tau _2})$ have to be monotone in $\\tau _1,\\tau _2$ , similarly to $(\\delta ,\\delta ^{-1})$ above.", "In $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ , we define $(\\langle \\tau \\rangle A)(t) \\mathrel {\\overset{\\text{\\tiny def}}{=}}(\\tau \\le t) \\times A (t \\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}\\tau )$ , as past $A$ -values, with the side-condition $\\tau \\le t$ crucial for the existence of the adjunctions $\\langle \\tau \\rangle \\dashv [ \\tau ]$ we require below.", "We define $(\\eta _A,\\eta _A^{-1},\\mu _A,\\mu _A^{-1})$ similarly to earlier, as identities on $A$ -values, combined with the laws of $(0,+,\\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}})$ , so as to satisfy the side-conditions.", "With this, we can interpret contexts $\\Gamma $ as functors $[\\![\\Gamma ]\\!] : \\mathbb {C}\\rightarrow \\mathbb {C}$ , given by: $\\begin{array}{c}[\\![\\cdot ]\\!] A \\mathrel {\\overset{\\text{\\tiny def}}{=}}A\\qquad [\\![\\Gamma , x : X]\\!] A \\mathrel {\\overset{\\text{\\tiny def}}{=}}[\\![\\Gamma ]\\!] A \\times [\\![X]\\!]\\qquad [\\![\\Gamma , \\langle \\tau \\rangle ]\\!] A \\mathrel {\\overset{\\text{\\tiny def}}{=}}\\langle \\tau \\rangle ([\\![\\Gamma ]\\!] A)\\end{array}$", "We interpret contexts as functors so as to easily manipulate denotations of composite contexts, e.g., we then have $\\iota _{\\Gamma ;\\Gamma ^{\\prime };A} : [\\![\\Gamma ,\\Gamma ^{\\prime }]\\!]A \\overset{\\cong }{\\rightarrow } [\\![\\Gamma ^{\\prime }]\\!]([\\![\\Gamma ]\\!]A)$ , natural in $A$ .
"[\\Gamma ^{\\prime }]\\!]([\\!", "[\\Gamma ]\\!", "]A)$ , natural in $A$ .", "Finally, to formulate the semantics of computation types and terms, we require there to be a family of adjunctions $\\langle \\tau \\rangle \\dashv [ \\tau ]$ , i.e., natural transformations $\\eta ^{\\dashv }_{A,\\tau } : A \\rightarrow [ \\tau ] (\\langle \\tau \\rangle A)$ (the unit) and $\\varepsilon ^{\\dashv }_{A,\\tau } : \\langle \\tau \\rangle ([ \\tau ] A) \\rightarrow A$ (the counit), for all $\\tau \\in \\mathbb {N}$ , satisfying time-graded variants of standard adjunction laws [43], given by $\\small \\begin{array}{c}\\varepsilon ^{\\dashv }_{\\langle A \\rangle ,\\tau } \\circ \\langle \\tau \\rangle (\\eta ^{\\dashv }_{A,\\tau }) \\equiv \\mathsf {id}\\qquad [ \\tau ](\\varepsilon ^{\\dashv }_{A,\\tau }) \\circ \\eta ^{\\dashv }_{[ \\tau ]A,\\tau } \\equiv \\mathsf {id}\\end{array}$ We also require $(\\eta ^{\\dashv },\\varepsilon ^{\\dashv })$ to interact well with the strong monoidal structures: $\\hspace{-2.84544pt}\\small \\begin{array}{c}[ \\tau ] (\\langle 0 \\le \\tau \\rangle ) \\circ \\eta ^{\\dashv }_{A,\\tau } \\circ \\eta ^{-1} \\circ \\varepsilon \\equiv [ 0 \\le \\tau ]\\qquad [ \\tau _1 ] ([ \\tau _2 ] (\\mu )) \\circ [ \\tau _1 ](\\eta ^{\\dashv }_{\\langle \\tau _1 \\rangle A,\\tau _2}) \\circ \\eta ^{\\dashv }_{A,\\tau _1}\\equiv \\delta \\circ \\eta ^{\\dashv }\\\\[1ex]\\langle 0 \\rangle ([ 0 \\le \\tau ]) \\circ \\eta \\circ \\varepsilon ^{-1} \\circ \\varepsilon ^{\\dashv }_{A,\\tau } \\equiv \\langle 0 \\le \\tau \\rangle \\quad \\varepsilon ^{\\dashv }_{A,\\tau _1} \\circ \\langle \\tau _1 \\rangle (\\varepsilon ^{\\dashv }_{[ \\tau _1 ] A, \\tau _2}) \\circ \\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle (\\delta ))\\equiv \\varepsilon ^{\\dashv }\\circ \\mu \\end{array}$ It then follows that $\\eta ^{\\dashv }_{A,0} \\equiv \\varepsilon ^{-1}_{\\langle 0 \\rangle A} \\circ \\eta _A$ and $\\varepsilon ^{\\dashv }_{A,0} \\equiv \\varepsilon _A \\circ \\eta ^{-1}_{[ 0 ]A}$ .", "In $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ , $\\eta ^{\\dashv }_{A,\\tau }$ and $\\varepsilon ^{\\dashv }_{A,\\tau }$ are given by identities on $A$ -values, respectively combined with $\\tau \\le t + \\tau $ and monotonicity for $t \\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}\\tau + \\tau \\equiv t$ .", "For the latter, we crucially know $\\tau \\le t$ due to the side-condition included in the definition of $\\langle - \\rangle $ .", "We note that modulo the time-gradings, the above structure is analogous to the models of the Fitch-style presentation of S4 [18], where $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}$ is modelled by an idempotent comonad, Figure: NO_CAPTION by an idempotent monad, and boxing/unboxing by $\\raisebox {-0.1cm}{\\includegraphics {icons/unlock.pdf}}\\!", "\\dashv \\raisebox {-0.75mm}{\\scalebox {2}{\\square }}$ .", "This is also why we present $[ - ]$ and $\\langle - \\rangle $ as comonad- and monad-like.", "Computation types  For computation types, we require a $[ - ]$ -strong graded monad $(\\eta ^{,\\mu ^{,\\mathsf {str}^{) on\\mathbb {C}, with grades in\\mathbb {N}.\\footnote {As \\lambda _{[\\tau ]} does not includesub-effecting (see {sect:future-work}), a discretely graded monad T suffices.", "}In detail, this means a functor \\mathbb {N} \\rightarrow [ \\mathbb {C}, \\mathbb {C}],together with natural transformations \\eta ^{_A : A \\rightarrow 0\\, A (the \\emph {unit}),\\mu ^{_{A,\\tau _1,\\tau _2} : \\tau _1 ( \\tau _2\\, A) \\rightarrow (\\tau 
$\\mu ^T_{A,\\tau _1,\\tau _2} : T\\,\\tau _1\\, (T\\,\\tau _2\\, A) \\rightarrow T\\,(\\tau _1 + \\tau _2)\\, A$ (the multiplication), and $\\mathsf {str}^T_{A,B,\\tau } : [ \\tau ]A \\times T\\,\\tau \\, B \\rightarrow T\\,\\tau \\,(A \\times B)$ (the strength), with the first two satisfying the standard graded monad laws (see \\cite {Katsumata:GradedMonads} or the laws of $(\\eta ,\\mu )$ of $\\langle - \\rangle $ ).", "Below we only present the laws for $\\mathsf {str}^T$ , because it has a novel temporal aspect to it---its first argument appears under $[ \\tau ]$ .", "As such, $\\mathsf {str}^T$ expresses that if we know an $A$ -value will be available after $\\tau $ time units, we can push it into computations taking $\\tau $ time to execute.", "We say that $T$ is a $[ - ]$ -strong graded monad following the parlance of Bierman and de Paiva \\cite {Bierman:ModalLogic}---in their work they model the possibility modality ${\\raisebox {-0.2mm}{\\scalebox {1.5}{\\diamond }}} A$ as a $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}$ -strong monad.", "While the laws governing $\\mathsf {str}^T$ are not overly different from standard graded strength laws \\cite {Katsumata:GradedMonads}, we have to correctly account for $[ - ]$ in them: $\\small \\begin{array}{c}\\mathsf {str}^T_{A,B,0} \\circ (\\varepsilon ^{-1}_A \\times \\eta ^T_B) \\equiv \\eta ^T_{A \\times B}\\qquad \\mu ^T_{A \\times B,\\tau _1,\\tau _2} \\circ T\\,\\tau _1\\,(\\mathsf {str}^T_{A,B,\\tau _2}) \\circ \\mathsf {str}^T_{[ \\tau _2 ]A,T\\,\\tau _2\\,B,\\tau _1}\\equiv \\mathsf {str}^T_{A,B,\\tau _1 + \\tau _2}\\circ (\\delta ^{-1} \\times \\mu ^T)\\\\[1ex]T\\,\\tau \\,(\\mathsf {snd}) \\circ \\mathsf {str}^T_{A,B,\\tau } \\equiv \\mathsf {snd}\\qquad T\\,\\tau \\,(\\alpha ) \\circ \\mathsf {str}^T_{A \\times B,C,\\tau }\\circ (\\mathsf {m}\\times \\mathsf {id}) \\circ \\alpha ^{-1} \\equiv \\mathsf {str}^T_{A,B \\times C,\\tau } \\circ (\\mathsf {id}\\times \\mathsf {str}^T_{B,C,\\tau })\\end{array}$ where $\\alpha _{A,B,C} : (A \\times B) \\times C \\overset{\\cong }{\\rightarrow } A \\times (B \\times C)$ , and $\\mathsf {m}_{A,B,\\tau }: [ \\tau ] A \\times [ \\tau ] B \\rightarrow [ \\tau ] (A \\times B)$ witnesses that $[ \\tau ]$ is monoidal for $\\times $ , which follows from $[ \\tau ]$ being a right adjoint \\cite {MacLane:CatWM}.", "Observe that it is the $[ - ]$ -strength that gives $T$ its temporal flavour---the rest is standard \\cite {Katsumata:GradedMonads}.", "Next, we reaffirm that $\\mathsf {str}^T$ is mathematically natural.", "Proposition: analogously to ordinary strong and enriched monads \\cite {Kock:StrongFunctors}, having $[ - ]$ -strength is equivalent to $[ - ]$ -enrichment of $T$ , given by morphisms $[ \\tau ](A \\Rightarrow B) \\rightarrow (T\\,\\tau \\,A \\Rightarrow T\\,\\tau \\,B)$ respecting $\\mathbb {C}$ 's self-enrichment \\cite {Kelly:EnrichedCats} and $(\\eta ^T,\\mu ^T)$ .", "In order to model the operations $\\mathsf {op}$ and $\\mathsf {\\color {keywordColor}delay}$ in sect:interpretation-terms, we require $T$ to be equipped with algebraic operations: we ask there to be families of natural transformations $\\mathsf {op}^T_{A,\\tau } : [\\![A_\\mathsf {op}]\\!]^g \\times [ \\tau _\\mathsf {op} ]([\\![B_\\mathsf {op}]\\!]^g \\Rightarrow T\\,\\tau \\, A) \\rightarrow T\\,(\\tau _\\mathsf {op}+ \\tau )\\, A$ , for all $\\mathsf {op}: A_\\mathsf {op} \\leadsto B_\\mathsf {op} \\mathbin {!}\\tau _\\mathsf {op} \\in \\mathcal {O}$ , and $\\mathsf {delay}^T_{A,\\tau ^{\\prime }}\\, \\tau : [ \\tau ] (T\\,\\tau ^{\\prime }\\, A) \\rightarrow T\\,(\\tau + \\tau ^{\\prime })\\, A$ , for all $\\tau \\in \\mathbb {N}$ , satisfying algebraicity laws \\cite {Plotkin:HandlingEffects}, which state that both commute with $\\mu ^T$ and $\\mathsf {str}^T$ , e.g., $\\small \\begin{array}{c}\\mathsf {str}^T_{A, B, \\tau + \\tau ^{\\prime }} \\circ (\\mathsf {id}\\times \\mathsf {delay}^T\\, \\tau )\\equiv \\mathsf {delay}^T_{A \\times B,\\tau ^{\\prime }}\\, \\tau \\circ [ \\tau ](\\mathsf {str}^T) \\circ \\mathsf {m}\\circ (\\delta _{A,\\tau ,\\tau ^{\\prime }} \\times \\mathsf {id})\\end{array}$
{id}\\times \\mathsf {delay}^{\\, \\tau )\\equiv \\mathsf {delay}^{_{A \\times B,\\tau ^{\\prime }}\\, \\tau \\circ [ \\tau ](\\mathsf {str}^{) \\circ \\mathsf {m}\\circ (\\delta _{A,\\tau ,\\tau ^{\\prime }} \\times \\mathsf {id})}}In \\mathsf {Set}^{(\\mathbb {N},\\le )}, we can define as the initial algebraof a corresponding signature functor for operations \\mathsf {op} and \\mathsf {\\color {keywordColor}delay},analogously to the usual treatment of algebraic effects~\\cite {Bauer:WhatIsAlgebraic}.Concretely, such is determined inductively by three cases\\small *[Lab={\\color {rulenameColor}}]{a \\in A(t)}{\\mathsf {ret}\\, a \\in ( 0\\, A)(t)}\\quad *[Lab={\\color {rulenameColor}}]{a \\in [\\!", "[A_\\mathsf {op}]\\!", "]^g(t) \\\\\\\\ k \\in ([ \\tau _\\mathsf {op} ]([\\!", "[B_\\mathsf {op}]\\!", "]^g \\Rightarrow \\tau \\, A))(t)}{\\mathsf {op}\\, a\\, k \\in ( (\\tau _\\mathsf {op}+ \\tau )\\, A)(t)}\\quad *[Lab={\\color {rulenameColor}}]{k \\in [ \\tau ] ( \\tau ^{\\prime }\\, A)(t)}{\\mathsf {delay}\\, \\tau \\, k \\in ( (\\tau + \\tau ^{\\prime })\\, A)(t)}with (\\eta ^{,\\mu ^{,\\mathsf {str}^{,\\mathsf {op}^{,\\mathsf {delay}^{) defined in the expected way, e.g.,\\mathsf {str}^{ is given by recursively traversing a computation of type \\tau \\, Band moving the argument of type [ \\tau ] A under \\mathsf {ret} cases,modifying \\tau when going under the \\mathsf {op} and \\mathsf {delay} cases.", "}\\textit {Note:} In Agda, we use induction-recursion~\\cite {Dybjer:IR}to define the set ( \\tau \\, A)(t) mutually witha proof of its monotonicity in t, so as to show \\tau \\, A is a presheaf.Therefore, (\\eta ^{,\\mu ^{,\\mathsf {str}^{,\\mathsf {op}^{,\\mathsf {delay}^{) are all definedmutually with proofs of their t-naturality.", "A smallcaveat in these definitions is that in clauses of the form\\mu ^{\\, (\\mathsf {op}\\, a\\, k), types involve recursive calls ofthe form \\mu ^{\\, ( (t_1 \\le t_2)\\, (k\\, y)), where Agda^{\\prime }stermination-checker does not see that (t_1 \\le t_2)\\, (k\\, y)is smaller than k, which it is because (t_1 \\le t_2)does not change tree-height.", "To keep the development cleaner, the terminationof these definitions is currently postulated---in the future we expect that one canuse additional height indices to internalise it.", "}\\subsection {Interpretation of value and computation terms}}The interpretations of values and computations are definedsimultaneously.", "We only present the temporally interesting cases---full details are in{sect:appendix-semantics}.", "}As \\lambda _{[\\tau ]} does not have sub-effecting and includesenough type annotations for typing derivations to be unique, thisinterpretation is \\emph {coherent} by construction.", "}}}\\vspace{8.5359pt}}\\textbf {Values}~We assume a morphism[\\!", "[\\mathsf {f}]\\!]", ": [\\![A_1]\\!", "]^g \\times \\ldots \\times [\\![A_n]\\!", "]^g \\rightarrow [\\![B]\\!", "]^g forevery \\mathsf {f} : (A_1,\\ldots ,A_n) \\rightarrow B. 
, "We interpret a well-typed value $\\Gamma \\vdash V : X$ as a morphism $[\\![\\Gamma \\vdash V : X]\\!] : [\\![\\Gamma ]\\!]\\mathbb {1}\\rightarrow [\\![X]\\!]$ in $\\mathbb {C}$ by induction on the given typing derivation.", "Most of the value cases are standard, and analogous to other calculi based on fine-grain call-by-value \\cite {Levy:FGCBV} and graded monads \\cite {Katsumata:GradedMonads}, using the Cartesian-closed structure of $\\mathbb {C}$ .", "The temporally interesting cases are Var and Box, given by $\\small \\begin{array}{l}[\\![\\Gamma ,x : X,\\Gamma ^{\\prime } \\vdash x : X]\\!]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\![\\Gamma ,x : X,\\Gamma ^{\\prime }]\\!]\\mathbb {1}\\xrightarrow {\\iota }[\\![\\Gamma ^{\\prime }]\\!]\\big ([\\![\\Gamma ]\\!]\\mathbb {1}\\times [\\![X]\\!]\\big )\\xrightarrow {\\mathsf {e}}\\langle \\mathsf {time}\\; \\Gamma ^{\\prime } \\rangle \\big ([\\![\\Gamma ]\\!]\\mathbb {1}\\times [\\![X]\\!]\\big )\\xrightarrow {\\varepsilon ^{\\langle \\rangle }}[\\![\\Gamma ]\\!]\\mathbb {1}\\times [\\![X]\\!]\\xrightarrow {\\mathsf {snd}}[\\![X]\\!]\\\\[1ex][\\![\\Gamma \\vdash \\mathsf {\\color {keywordColor}box}_{\\tau }\\,V : [ \\tau ]\\, X]\\!]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\![\\Gamma ]\\!]\\mathbb {1}\\xrightarrow {\\eta ^{\\dashv }}[ \\tau ] \\big (\\langle \\tau \\rangle ([\\![\\Gamma ]\\!]\\mathbb {1})\\big )\\xrightarrow {[ \\tau ]([\\![V]\\!])}[ \\tau ] [\\![X]\\!]\\end{array}$ where $\\mathsf {e}_{A,\\Gamma } : [\\![\\Gamma ]\\!]A \\rightarrow \\langle \\mathsf {time}\\; \\Gamma \\rangle A$ extracts and collapses all temporal modalities in $\\Gamma $ , and the counit-like $\\varepsilon ^{\\langle \\rangle }_{A,\\tau }$ is given by the composite $\\langle \\tau \\rangle A \\xrightarrow {\\langle 0 \\le \\tau \\rangle _A}\\langle 0 \\rangle A \\xrightarrow {\\eta ^{-1}_A} A$ .", "Computations  We interpret a well-typed computation $\\Gamma \\vdash M : X \\mathbin {!}\\tau $ as a morphism $[\\![\\Gamma \\vdash M : X \\mathbin {!}\\tau ]\\!] : [\\![\\Gamma ]\\!]{\\mathbb {1}} \\rightarrow T\\,\\tau \\,[\\![X]\\!]$ in $\\mathbb {C}$ by induction on the typing derivation.", "The definition is largely unsurprising and follows a pattern similar to \\cite {Katsumata:GradedMonads,Levy:FGCBV}---the novelty lies in controlling the occurrences of $\\langle - \\rangle $ and $[ - ]$ .", "In Let, we use $\\langle \\tau \\rangle \\dashv [ \\tau ]$ to push the environment `into the future', and then follow the standard monadic strength-followed-by-multiplication pattern \\cite {Katsumata:GradedMonads,Moggi:ComputationalLambdaCalculus}: $\\small \\begin{array}{l}[\\![\\Gamma \\vdash \\mathsf {\\color {keywordColor}let}_{}\\; x = M \\;\\mathsf {\\color {keywordColor}in}\\; N : Y \\mathbin {!}\\tau + \\tau ^{\\prime }]\\!]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\![\\Gamma ]\\!] \\mathbb {1}\\xrightarrow {\\langle \\eta ^{\\dashv }, [\\![M]\\!] \\rangle }[ \\tau ]\\big (\\langle \\tau \\rangle ([\\![\\Gamma ]\\!] \\mathbb {1})\\big ) \\times T\\,\\tau \\,[\\![X]\\!]\\\\[1ex]\\hspace{56.9055pt}\\xrightarrow {\\mathsf {str}^T} T\\,\\tau \\, \\big (\\langle \\tau \\rangle ([\\![\\Gamma ]\\!] \\mathbb {1}) \\times [\\![X]\\!]\\big )\\xrightarrow {T\\,\\tau \\,([\\![N]\\!])} T\\,\\tau \\, (T\\,\\tau ^{\\prime }\\, [\\![Y]\\!])\\xrightarrow {\\mu ^T} T\\,(\\tau + \\tau ^{\\prime })\\, [\\![Y]\\!]\\end{array}$
", "An analogous use of $\\langle \\tau \\rangle \\dashv [ \\tau ]$ also appears in the cases for operations, e.g., in $\\small \\begin{array}{l}[\\![\\Gamma \\vdash \\mathsf {op}\\;V\\; (x \\,.\\, M) : X \\mathbin {!}\\tau _\\mathsf {op}+ \\tau ]\\!]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\![\\Gamma ]\\!] \\mathbb {1}\\xrightarrow {\\langle [\\![V]\\!] , \\eta ^{\\dashv }\\rangle }[\\![A_\\mathsf {op}]\\!] \\times [ \\tau _\\mathsf {op} ]\\big (\\langle \\tau _\\mathsf {op} \\rangle ([\\![\\Gamma ]\\!] \\mathbb {1})\\big )\\\\[1ex]\\hspace{56.9055pt}\\xrightarrow {\\mathsf {id}\\times [ \\tau _\\mathsf {op} ](\\mathsf {curry}([\\![M]\\!]))}[\\![A_\\mathsf {op}]\\!] \\times [ \\tau _\\mathsf {op} ]\\big ([\\![B_\\mathsf {op}]\\!] \\Rightarrow T\\,\\tau \\, [\\![X]\\!]\\big )\\xrightarrow {\\mathsf {op}^T} T\\,(\\tau _\\mathsf {op}+ \\tau )\\, [\\![X]\\!]\\end{array}$", "Next, the Unbox case of the interpretation is defined as $\\small \\begin{array}{l}[\\![\\Gamma \\vdash \\mathsf {\\color {keywordColor}unbox}_{\\tau }\\; V \\;\\mathsf {\\color {keywordColor}as}\\; x \\;\\mathsf {\\color {keywordColor}in}\\; N : Y \\mathbin {!}\\tau ^{\\prime }]\\!]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\![\\Gamma ]\\!]\\mathbb {1}\\xrightarrow {\\langle \\mathsf {id}, \\mathsf {e}^{\\prime } \\rangle }[\\![\\Gamma ]\\!]\\mathbb {1}\\times \\langle \\tau \\rangle \\big ([\\![\\Gamma \\mathbin {-}\\tau ]\\!] \\mathbb {1}\\big )\\\\[1ex]\\hspace{56.9055pt}\\xrightarrow {\\mathsf {id}\\times \\langle \\tau \\rangle ([\\![V]\\!])}[\\![\\Gamma ]\\!]\\mathbb {1}\\times \\langle \\tau \\rangle \\big ([ \\tau ] [\\![X]\\!]\\big )\\xrightarrow {\\mathsf {id}\\times \\varepsilon ^{\\dashv }}[\\![\\Gamma ]\\!]\\mathbb {1}\\times [\\![X]\\!]\\xrightarrow {[\\![N]\\!]}T\\,\\tau ^{\\prime }\\, [\\![Y]\\!]\\end{array}$ showing that temporal resources follow the common pattern in which elimination forms are modelled by the counits of adjunctions, whereas units model introduction forms (akin to functions).", "The morphism $\\mathsf {e}^{\\prime }_{A,\\Gamma ,\\tau } : [\\![\\Gamma ]\\!]A \\rightarrow \\langle \\tau \\rangle ([\\![\\Gamma \\mathbin {-}\\tau ]\\!] A)$ extracts and collapses $\\tau $ worth of context modalities in $\\Gamma $ , as long as $\\tau \\le \\mathsf {time}\\; \\Gamma $ .", "Finally, we discuss the interpretation of effect handling.", "For this, we additionally require $\\mathbb {C}$ to have
set-indexed products $\\Pi _{i \\in I} A_i$ and handling morphisms $\\small \\begin{array}{l}\\chi _{A,\\tau ,\\tau ^{\\prime }} : \\Pi _{\\mathsf {op}\\in \\mathcal {O}} \\Pi _{\\tau ^{\\prime \\prime } \\in \\mathbb {N}} \\big (([\\![A_\\mathsf {op}]\\!] \\times [ \\tau _\\mathsf {op} ]([\\![B_\\mathsf {op}]\\!] \\Rightarrow T\\,\\tau ^{\\prime \\prime }\\, A)) \\Rightarrow T\\,(\\tau _\\mathsf {op}+ \\tau ^{\\prime \\prime })\\, A \\big )\\\\[0.5ex]\\hspace{199.16928pt}\\rightarrow T\\,\\tau \\, (T\\,\\tau ^{\\prime }\\, A) \\Rightarrow T\\,(\\tau + \\tau ^{\\prime })\\, A\\end{array}$ satisfying laws which state that $\\chi $ returns a graded $T$ -algebra \\cite {Fuji:GradedMonads,McDermott:GradedAlgebras}, e.g., we require $\\mathsf {uncurry}(\\chi _{A,0,\\tau ^{\\prime }}) \\circ (\\mathsf {id}\\times \\eta ^T) \\equiv \\mathsf {snd}$ , where $\\mathsf {uncurry}$ (and $\\mathsf {curry}$ earlier) is part of the universal property of $A \\Rightarrow B$ .", "We also require similar laws for $\\chi $ 's interaction with $\\mathsf {op}^T$ and $\\mathsf {delay}^T$ .", "In $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ , $\\chi $ is defined by recursively traversing a given tree, replacing all occurrences of $\\mathsf {op}\\, a\\, k$ with the respective operation clauses.", "Writing $\\mathcal {H}$ for the domain of $\\chi _{[\\![Y]\\!],\\tau ,\\tau ^{\\prime }}$ , the Handle case is then defined as $\\small \\begin{array}{l}[\\![\\Gamma \\vdash \\mathsf {\\color {keywordColor}handle}\\; M \\;\\mathsf {\\color {keywordColor}with}\\; H \\;\\mathsf {\\color {keywordColor}to}\\; x\\;\\mathsf {\\color {keywordColor}in}\\; N : Y \\mathbin {!}\\tau + \\tau ^{\\prime }]\\!]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~\\\\[1ex]\\hspace{14.22636pt}[\\![\\Gamma ]\\!] \\mathbb {1}\\xrightarrow {\\langle \\mathsf {id}, \\langle \\eta ^{\\dashv }, [\\![M]\\!] \\rangle \\rangle }[\\![\\Gamma ]\\!] \\mathbb {1}\\times \\Big ([ \\tau ]\\big (\\langle \\tau \\rangle ([\\![\\Gamma ]\\!] \\mathbb {1})\\big ) \\times T\\,\\tau \\, [\\![X]\\!]\\Big )\\\\[1ex]\\hspace{28.45274pt}\\xrightarrow {\\mathsf {id}\\times \\mathsf {str}^T}[\\![\\Gamma ]\\!] \\mathbb {1}\\times T\\,\\tau \\, \\big (\\langle \\tau \\rangle ([\\![\\Gamma ]\\!] \\mathbb {1}) \\times [\\![X]\\!] \\big )\\xrightarrow {\\mathsf {id}\\times T\\,\\tau \\, ([\\![N]\\!])}[\\![\\Gamma ]\\!] \\mathbb {1}\\times T\\,\\tau \\, \\big (T\\,\\tau ^{\\prime }\\, [\\![Y]\\!]\\big )\\\\[1ex]\\hspace{113.81102pt}\\xrightarrow {[\\![H]\\!] \\times \\mathsf {id}}\\mathcal {H} \\times T\\,\\tau \\, \\big (T\\,\\tau ^{\\prime }\\, [\\![Y]\\!]\\big )\\xrightarrow {\\mathsf {uncurry}(\\chi _{[\\![Y]\\!],\\tau ,\\tau ^{\\prime }})} T\\,(\\tau + \\tau ^{\\prime })\\, [\\![Y]\\!]\\end{array}$ where we write $[\\![H]\\!]$ for the point-wise interpretation of operation clauses $\\small \\begin{array}{c}[\\![\\Gamma ]\\!]\\mathbb {1}\\xrightarrow {\\langle \\langle \\mathsf {id}\\rangle _{\\tau ^{\\prime \\prime } \\in \\mathbb {N}} \\rangle _{\\mathsf {op}\\in \\mathcal {O}}}\\Pi _{\\mathsf {op}\\in \\mathcal {O}} \\Pi _{\\tau ^{\\prime
\\prime } \\in \\mathbb {N}} \\Big ([\\![\\Gamma ]\\!]\\mathbb {1}\\Big )\\xrightarrow {\\Pi _{\\mathsf {op}\\in \\mathcal {O}} \\Pi _{\\tau ^{\\prime \\prime } \\in \\mathbb {N}} \\big ( \\mathsf {curry}([\\![M_\\mathsf {op}]\\!] \\,\\circ \\, \\alpha ^{-1}) \\big )}\\mathcal {H}\\end{array}$" ], [ "Renamings, substitutions, and soundness", "We now show how syntactic renamings and substitutions relate to semantic morphism composition, using which we then prove the interpretation to be sound.", "Proposition: given $\\rho : \\Gamma \\,{\\leadsto }\\, \\Gamma ^{\\prime }$ and $\\Gamma \\vdash J$ , then $[\\![J[\\rho ]]\\!] \\equiv [\\![J]\\!] \\circ [\\![\\rho ]\\!]_{\\mathbb {1}}$ , where the interpretation of renamings $[\\![\\rho ]\\!]_A : [\\![\\Gamma ^{\\prime }]\\!]A \\rightarrow [\\![\\Gamma ]\\!]A$ is defined by straightforward induction on the derivation of $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ , with $[\\![\\rho ]\\!]_A$ also natural in $A$ .", "Proposition: given $\\Gamma , x : X, \\Gamma ^{\\prime } \\vdash J$ and $\\Gamma \\vdash W : X$ , we have $[\\![J[W/x]]\\!] \\equiv [\\![J]\\!] \\circ \\iota _{\\Gamma ,x : X;\\Gamma ^{\\prime };\\mathbb {1}}^{-1} \\circ [\\![\\Gamma ^{\\prime }]\\!]\\big (\\langle \\mathsf {id}, [\\![W]\\!] \\rangle \\big ) \\circ \\iota _{\\Gamma ;\\Gamma ^{\\prime };\\mathbb {1}}$ , where $(\\iota ,\\iota ^{-1})$ are discussed in sect:interpretation-types.", "Proof: we prove both results by induction on the derivation of $\\Gamma \\vdash J$ .", "The proofs are unsurprising but require us to prove auxiliary lemmas about recursively defined renamings and semantic morphisms.", "For example, for prop:renaming-semantic, we show $\\mathsf {e}^{\\prime } \\circ [\\![\\rho ]\\!] \\equiv \\langle \\tau \\rangle ([\\![\\rho \\mathbin {-}\\tau ]\\!]) \\circ \\mathsf {e}^{\\prime }: [\\![\\Gamma ^{\\prime }]\\!] A \\rightarrow \\langle \\tau \\rangle ([\\![\\Gamma \\mathbin {-}\\tau ]\\!] A)$ , and for prop:substitution-semantic, we show $\\mathsf {e}^{\\prime } \\circ \\iota \\equiv \\langle \\tau \\rangle \\big (\\iota \\big ) \\circ \\mathsf {e}^{\\prime }: [\\![\\Gamma , \\Gamma ^{\\prime }]\\!] A \\rightarrow \\langle \\tau \\rangle \\big ([\\![\\Gamma ^{\\prime } \\mathbin {-}\\tau ]\\!]([\\![\\Gamma ]\\!] A)\\big )$ , when $\\tau \\le \\mathsf {time}\\; \\Gamma ^{\\prime }$ .", "Theorem: given $\\Gamma \\vdash I \\equiv J$ derived using the rules in sect:equational-theory, then $[\\![I]\\!] \\equiv [\\![J]\\!]$ .", "Proof: the proof proceeds by induction on the derivation of $\\Gamma \\vdash I \\equiv J$ , using prop:renaming-semantic and prop:substitution-semantic to unfold the renamings and substitutions in the equations of sect:equational-theory, and using the properties of the abstract structure we required $\\mathbb {C}$ to have." ], [ "Quotienting delays", "Observe that in $\\lambda _{[\\tau ]}$ the computations $\\mathsf {\\color {keywordColor}delay}\\;\\tau \\; (\\mathsf {\\color {keywordColor}delay}\\;\\tau ^{\\prime }\\; M)$ and
$\\mathsf {\\color {keywordColor}delay}\\;(\\tau + \\tau ^{\\prime })\\; M$ cannot be proved equivalent, neither in the equational theory of sect:equational-theory nor in the concrete presheaf model of sect:semantics, though in some situations this might be desired.", "In order to deem the above two programs (and others alike) equivalent, we extend $\\lambda _{[\\tau ]}$ 's equational theory with the following natural equations for $\\mathsf {\\color {keywordColor}delay}$ s: $\\begin{array}{c}\\mathsf {\\color {keywordColor}delay}\\;0\\; M \\equiv M\\qquad \\mathsf {\\color {keywordColor}delay}\\;\\tau \\; (\\mathsf {\\color {keywordColor}delay}\\;\\tau ^{\\prime }\\; M) \\equiv \\mathsf {\\color {keywordColor}delay}\\;(\\tau + \\tau ^{\\prime })\\; M\\end{array}$", "Theorem: if the algebraic operations $\\mathsf {delay}^T$ of $T$ satisfy the analogous two equations, the interpretation of sect:semantics is sound for this extended equational theory.", "For the concrete model on $\\mathsf {Set}^{(\\mathbb {N},\\le )}$ , we have to quotient $T$  \\cite {Katsumata:FlexiblePresentations} by these two equations---the resulting graded monad $T$ is determined inductively by the cases $\\small \\dfrac{k \\in (S\\, \\tau \\, A)(t)}{\\mathsf {comp}\\, k \\in (T\\,\\tau \\, A)(t)}\\quad \\dfrac{\\tau > 0 \\quad k \\in ([ \\tau ] (S\\, \\tau ^{\\prime }\\, A))(t)}{\\mathsf {delay}\\, \\tau \\, k \\in (T\\,(\\tau + \\tau ^{\\prime })\\, A)(t)}\\quad \\dfrac{a \\in A(t)}{\\mathsf {ret}\\, a \\in (S\\, 0\\, A)(t)}\\quad \\dfrac{a \\in [\\![A_\\mathsf {op}]\\!]^g(t) \\quad k \\in ([ \\tau _\\mathsf {op} ]([\\![B_\\mathsf {op}]\\!]^g \\Rightarrow T\\,\\tau \\, A))(t)}{\\mathsf {op}\\, a\\, k \\in (S\\, (\\tau _\\mathsf {op}+ \\tau )\\, A)(t)}$ where $(T\\,\\tau \\, A)(t)$ and $(S\\, \\tau \\, A)(t)$ are defined simultaneously in such a way that only non-zero, non-consecutive $\\mathsf {delay}$ s can appear in the tree structure." ], [ "Related and future work", "We contribute to two prominent areas: (i) modal types and (ii) graded monads.", "As noted in sect:overview-modal-types, modal types provide a mathematically natural means for capturing many aspects of programming.", "Adding to sect:overview-modal-types, types corresponding to the eventually and always modalities of temporal logics capture functional reactive programming (FRP) \\cite {Cave:FairFRP,Jeltsch:FRP,Krishnaswami:FRP}, including a combination with linearity and time-annotations to model resources \\cite {Jeltsch:Resources}, where all values are annotated with inhabitation times.", "Recently, FRP has also been studied in Fitch-style \\cite {Bahr:SimplyRaTT}.", "Starting with Nakano \\cite {Nakano:Guarded}, modal types have also been used for guarded recursion, even in the dependently typed setting \\cite {Bahr:ClocksTicking,Bizjak:GuardedTT,Mannaa:ClockSemantics}, including in Fitch-style \\cite {Birkedal:DepRightAdjoints}.", "Graded monads provide a uniform framework for different effect systems and effect-based analyses \\cite {Fuji:GradedMonads,Katsumata:GradedMonads,Katsumata:FlexiblePresentations,McDermott:GradedAlgebras,Mellies:GradedMonads}.", "A major contribution of ours is showing that modalities on contexts can inform continuations of preceding computations' effects.", "While the theory of graded monads can be instantiated with any ordered monoid, we focus
on natural numbers to model time, but we do not expect complications in generalising $\\lambda _{[\\tau ]}$ to other structures with the same properties as $(\\mathbb {N}, 0, +, \\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}, \\le )$ , and perhaps even in grading $[ - ]$ and $\\langle - \\rangle $ with different structures, akin to \\cite {Gaboardi:EffectsCoefects}.", "Our use of the $[ \\tau ]\\, X$ type to restrict when certain values are available is somewhat reminiscent of coeffects \\cite {Brunel:Coeffects,Ghica:ResourceSemiring,Petricek:Coeffects,Petricek:CoeffectCalculus} and quantitative type systems \\cite {Atkey:QTT,McBride:QTT,Orchard:GrQTT}.", "In these works, variables in context are graded by (semi)ring-valued $r$ s, as $x :_r X$ , counting how many times and in which ways $x$ is used, enabling applications such as liveness and dataflow analyses \\cite {Petricek:Coeffects}.", "Semantically, these systems often interpret $x :_r X$ using a graded comonad, as $\\raisebox {-0.75mm}{\\scalebox {2}{\\square }}_r X$ , where one can access $X$ only if $r \\equiv 1$ .", "Of such works, the closest to ours is that of Gaboardi et al. \\cite {Gaboardi:EffectsCoefects}, who combine coeffects with effectful programs via distributive laws between the grades of coeffects and effects, allowing coeffectful analyses to be propagated through effectful computations.", "We leave a detailed comparison with $\\lambda _{[\\tau ]}$ for future work.", "We also note that the type $[ \\tau ]\\, X$ can intuitively be viewed as a temporally-graded variant of promise types \\cite {Haller:Futures,Schwinghammer:Thesis}, in that it expresses that a value of type $X$ will be available in the future, but with additional time guarantees." ], [ "Future work", "Currently, $\\lambda _{[\\tau ]}$ does not support sub-effecting: we cannot deduce from $\\tau \\le \\tau ^{\\prime }$ and $\\Gamma \\vdash M : X \\mathbin {!}\\tau $ that $\\Gamma \\vdash M : X \\mathbin {!}\\tau ^{\\prime }$ .", "Of course, we can simulate this by inserting $\\tau ^{\\prime } \\!\\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}\\tau $ worth of explicit $\\mathsf {\\color {keywordColor}delay}$ s into $M$ , but this is extremely intensional, fixing where the $\\mathsf {\\color {keywordColor}delay}$ s happen.", "In particular, we cannot type equations such as $\\mathsf {\\color {keywordColor}let}_{}\\; x = (\\mathsf {\\color {keywordColor}return}_{}\\, V) \\;\\mathsf {\\color {keywordColor}in}\\; N \\equiv N[V/x]$ if $\\mathsf {\\color {keywordColor}return}_{}\\, V$ was sub-effected to $\\tau > 0$ , with the $\\langle \\tau \\rangle $ in $N$ 's context the culprit.", "However, when considering sub-effecting as a coercion $\\mathsf {coerce}_{\\tau \\le \\tau ^{\\prime }}\\, M$ , we believe we can add it by considering equations stating that it produces all the possible ways in which $\\tau ^{\\prime } \\!\\mathbin {\\scriptstyle \\dot{\\smash{\\textstyle -}}}\\tau $ worth of $\\mathsf {\\color {keywordColor}delay}$ s could be inserted into $M$ ."
, "Of course, this will require a more complex non-deterministic semantics.", "It would also be beneficial if $\\lambda _{[\\tau ]}$ included recursion in a way that programs could still make use of and respect the temporal discipline.", "This is likely unattainable for general recursion, but we are hopeful that primitive recursion (say, on natural numbers) can be added via lightweight type-dependency (akin to \\cite {Xi:DependentML}) of time annotations on the values being recursed on---\\cite {Birkedal:DepRightAdjoints} could be useful here.", "It would be interesting to combine $\\lambda _{[\\tau ]}$ with linear [25] and separation logics [31], [61], so as to additionally model linearity and spatial properties of temporal resources.", "Related to this, it would also be interesting to extend $\\lambda _{[\\tau ]}$ with concurrency, e.g., using (multi)handlers [8], [19], [20].", "We also plan to look into how to naturally capture expiring and available-for-an-interval style resources in $\\lambda _{[\\tau ]}$ .", "We also plan to study the completeness of the denotational semantics of $\\lambda _{[\\tau ]}$ .", "For this, we need to restrict the semantics from arbitrary exponentials to Kleisli exponentials, and to develop a notion of context morphism that respects the temporal discipline.", "It would be interesting to also study $\\lambda _{[\\tau ]}$ 's operational semantics, especially one that takes time seriously and does not model $\\mathsf {\\color {keywordColor}delay}$ s simply as uninterpreted operations [8], together with developing a prototype, and proving normalisation akin to [66].", "For future semantic investigations, it would be beneficial to also study the general theory of the kinds of temporally-aware graded algebraic effects used in this paper, by investigating their algebras and equational presentations [33], [47]." ], [ "Conclusion", "We have shown how a temporal, time-graded variant of Fitch-style modal type systems, when combined with an effect system based on graded monads, provides a natural framework for safe programming with temporal resources.", "To this end, we developed a modally typed, effectful, equationally-presented core calculus, and equipped it with a sound denotational semantics based on strong monoidal functors (for modelling modalities) and graded monads (for modelling effects).", "The calculus also includes temporally-aware graded algebraic effects and effect handlers, with the continuations of the former knowing that an operation's worth of additional time has passed before they start executing, and where the user-defined effect handlers are guaranteed to respect this temporal discipline.", "Acknowledgements  We thank Juhan-Peep Ernits and Andrej Bauer for many useful discussions.", "This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-21-1-0024."
], [ "Renamings", "We expand here on the definitions and results that we use in the proof of thm:renaming.", "First, we present the full definition of the renaming relation $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ .", "As noted in the paper, it is given by the reflexive-transitive-congruent closure of the desired structural rules.", "In detail, it is given by the following cases: original-shortintertext=false,below-shortintertext-sep=0pt,above-shortintertext-sep=0pt $\\mathsf {id}^r & : \\Gamma \\leadsto \\Gamma \\\\\\mathsf {\\circ }^r & : (\\Gamma ^{\\prime } \\leadsto \\Gamma ^{\\prime \\prime }) \\longrightarrow (\\Gamma \\leadsto \\Gamma ^{\\prime }) \\longrightarrow (\\Gamma \\leadsto \\Gamma ^{\\prime \\prime })\\\\[1ex]\\mathsf {wk}^r & : \\Gamma \\leadsto (\\Gamma , x \\unknown.", "X)\\\\[1ex]\\mathsf {var}^r_{x \\unknown.", "X \\in \\Gamma } & : (\\Gamma , y \\unknown.", "X) \\leadsto \\Gamma \\\\[1ex]\\eta ^r & : (\\Gamma , \\langle 0 \\rangle ) \\leadsto \\Gamma \\\\(\\eta ^r)^{-1} & : \\Gamma \\leadsto (\\Gamma , \\langle 0 \\rangle )\\\\[1ex]\\mu ^r & : (\\Gamma , \\langle \\tau + \\tau ^{\\prime } \\rangle ) \\leadsto (\\Gamma , \\langle \\tau \\rangle , \\langle \\tau ^{\\prime } \\rangle )\\\\(\\mu ^r)^{-1} & : (\\Gamma , \\langle \\tau \\rangle , \\langle \\tau ^{\\prime } \\rangle ) \\leadsto (\\Gamma , \\langle \\tau + \\tau ^{\\prime } \\rangle )\\\\[1ex]\\mathsf {mon}^r_{\\tau \\le \\tau ^{\\prime }} & : (\\Gamma , \\langle \\tau \\rangle ) \\leadsto (\\Gamma , \\langle \\tau ^{\\prime } \\rangle )\\\\[1ex]\\mathsf {cong\\text{-}var}^r & : (\\Gamma \\leadsto \\Gamma ^{\\prime }) \\longrightarrow ((\\Gamma , x \\unknown.", "X) \\leadsto (\\Gamma ^{\\prime }, x \\unknown.", "X))\\\\\\mathsf {cong\\text{-}mod}^r & : (\\Gamma \\leadsto \\Gamma ^{\\prime }) \\longrightarrow ((\\Gamma , \\langle \\tau \\rangle ) \\leadsto (\\Gamma ^{\\prime }, \\langle \\tau \\rangle ))\\\\$ As in the rest of the paper, we assume that all contexts involved in the definition are well-formed (i.e., consist of distinct variables), and all mentioned variables are fresh for the contexts that they extend (e.g., $y$ for $\\Gamma $ in $\\Gamma , y \\unknown.", "X$ ).", "This renaming relation then has many desirable properties, some of which we use in the proof of thm:renaming.", "These results are all proved by either one-line arguments or by fairly straightforward induction on the derivations of the given judgements.", "Full details of these proofs can be found in the Agda formalisation.", "[Basic properties of the renaming relation] If $\\Gamma \\leadsto \\Gamma ^{\\prime }$ and $x \\in \\Gamma ^{\\prime }$ , then we have $(\\Gamma , y \\unknown.", "X) \\leadsto \\Gamma ^{\\prime }$ .", "If $\\Gamma \\leadsto \\Gamma ^{\\prime }$ , then we have $(\\Gamma , \\Gamma ^{\\prime \\prime }) \\leadsto (\\Gamma ^{\\prime }, \\Gamma ^{\\prime \\prime })$ .", "$\\Gamma \\leadsto (\\Gamma ,\\Gamma ^{\\prime })$ $\\Gamma \\leadsto (\\Gamma , \\langle \\tau \\rangle )$ $(\\Gamma , x \\unknown.", "X, y \\unknown.", "Y) \\leadsto (\\Gamma , y \\unknown.", "Y, x \\unknown.", "X)$ $(\\Gamma , \\langle \\tau \\rangle , x \\unknown.", "X) \\leadsto (\\Gamma , x \\unknown.", "X, \\langle \\tau \\rangle )$ $(\\Gamma , x \\unknown.", "X, y \\unknown.", "X) \\leadsto (\\Gamma , x \\unknown.", "X)$ If $\\Gamma \\leadsto \\Gamma ^{\\prime }$ , then we have $\\mathsf {time}\\; \\Gamma \\le \\mathsf {time}\\; \\Gamma ^{\\prime }$ .", "If $\\tau \\le \\mathsf {time}\\; \\Gamma $ , then we have $((\\Gamma \\mathbin 
", "(10) $(\\Gamma \\mathbin {-}\\tau ) \\leadsto \\Gamma $ .", "(11) If $\\tau _1 \\le \\tau _2$ , then we have $(\\Gamma \\mathbin {-}\\tau _2) \\leadsto (\\Gamma \\mathbin {-}\\tau _1)$ .", "(12) If $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ and $x : X \\in _\\tau \\Gamma $ , then $\\rho \\, x : X \\in _{\\tau ^{\\prime }} \\Gamma ^{\\prime }$ for some $\\tau ^{\\prime }$ with $\\tau \\le \\tau ^{\\prime }$ .", "(13) If $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ , then we have $\\rho \\mathbin {-}\\tau : (\\Gamma \\mathbin {-}\\tau ) \\leadsto (\\Gamma ^{\\prime } \\mathbin {-}\\tau )$ .", "In (12), the relation $x : X \\in _\\tau \\Gamma $ is defined inductively by the following cases: $\\dfrac{\\phantom{x : X}}{x : X \\in _0 \\Gamma , x : X}\\qquad \\dfrac{x : X \\in _\\tau \\Gamma \\quad x \\ne y}{x : X \\in _\\tau \\Gamma , y : Y}\\qquad \\dfrac{x : X \\in _\\tau \\Gamma }{x : X \\in _{\\tau + \\tau ^{\\prime }} \\Gamma , \\langle \\tau ^{\\prime } \\rangle }$", "Intuitively, this relation captures that if $x : X \\in _\\tau \\Gamma $ , then $x : X \\in \\Gamma $ and there is $\\tau $ worth of context modalities to the right of $x$ in $\\Gamma $ , i.e., $x$ was brought into scope $\\tau $ time units ago.", "For example, $x : X \\in _5 (\\Gamma , x : X, \\langle 2 \\rangle , y : Y, \\langle 3 \\rangle )$ and $y : Y \\in _3 (\\Gamma , x : X, \\langle 2 \\rangle , y : Y, \\langle 3 \\rangle )$ .", "In (13), the operation $\\rho \\mathbin {-}\\tau : (\\Gamma \\mathbin {-}\\tau ) \\leadsto (\\Gamma ^{\\prime } \\mathbin {-}\\tau )$ on renamings is defined by induction on the derivation of $\\rho : \\Gamma \\leadsto \\Gamma ^{\\prime }$ .", "The definition is somewhat long but fairly straightforward—it proceeds by removing variables and context modalities from the right-hand sides of $\\Gamma $ and $\\Gamma ^{\\prime }$ until $\\tau $ becomes 0; see the definition of the corresponding operation $\\Gamma \\mathbin {-}\\tau $ on contexts for intuition.", "Properties such as (12) then ensure that variables in $\\Gamma \\mathbin {-}\\tau $ are never mapped by $\\rho $ to outside of $\\Gamma ^{\\prime } \\mathbin {-}\\tau $ .", "Full details of this definition can be found in the Agda formalisation." ], [ "Denotational semantics", "In this appendix we spell out the full details of the more important semantic definitions that were abbreviated or left out in sect:semantics due to space constraints."
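, "As a small complement to the renamings appendix above, the relation $x : X \\in _\\tau \\Gamma $ can likewise be computed as a function; this is again an illustrative sketch of ours, reusing the assumed Entry/Ctx representation from the earlier context-operations sketch:"

```haskell
-- x : X ∈_t G, computed: returns the time t that has passed since x was
-- bound (the sum of the modalities to the right of x), if x is in scope.
age :: Ctx -> String -> Maybe Int
age g x = go (reverse g) 0
  where
    go []             _   = Nothing
    go (Var y : rest) acc
      | y == x    = Just acc
      | otherwise = go rest acc
    go (Mod t : rest) acc = go rest (acc + t)

-- e.g. age [Var "x", Mod 2, Var "y", Mod 3] "x" == Just 5
--      age [Var "x", Mod 2, Var "y", Mod 3] "y" == Just 3
```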
], [ "Strong monoidal functor for temporal resource types", "We require a functor $[ - ] : (\\mathbb {N},\\le ) \\rightarrow [ \\mathbb {C}, \\mathbb {C}]$ and families of natural isomorphisms (counit and comultiplication) $\\varepsilon _A : [ 0 ] A \\overset{\\cong }{\\rightarrow } A\\qquad \\delta _{A,\\tau _1,\\tau _2} : [ \\tau _1 + \\tau _2 ] A \\overset{\\cong }{\\rightarrow } [ \\tau _1 ] ([ \\tau _2 ] A)$ satisfying time-graded variants of the laws of a comonad $\\small @C=3em@R=3.5em@M=0.5em{[ 0 + \\tau ] A [r]^-{\\delta _{A,0,\\tau }} [dr]_{\\equiv } & [ 0 ]([ \\tau ] A) [d]^{\\varepsilon _{[ \\tau ]A}}&[ \\tau + 0 ] A [r]^-{\\delta _{A,\\tau ,0}} [dr]_{\\equiv } & [ \\tau ]([ 0 ]A) [d]^{[ \\tau ](\\varepsilon _A)}\\\\& [ \\tau ] A&& [ \\tau ] A}$ $\\small @C=5em@R=3.5em@M=0.5em{[ (\\tau _1 + \\tau _2) + \\tau _3 ] A [d]_{\\equiv } [r]^{\\delta _{A,\\tau _1+\\tau _2,\\tau _3}} & [ \\tau _1 + \\tau _2 ]([ \\tau _3 ] A) [r]^{\\delta _{[ \\tau _3 ]A, \\tau _1, \\tau _2}} & [ \\tau _1 ]([ \\tau _2 ]([ \\tau _3 ] A)) [d]^{\\mathsf {id}}\\\\[ \\tau _1 + (\\tau _2 + \\tau _3) ] A [r]_{\\delta _{A,\\tau _1,\\tau _2 + \\tau _3}} & [ \\tau _1 ]([ \\tau _2 + \\tau _3 ] A) [r]_{[ \\tau _1 ](\\delta _{A,\\tau _2,\\tau _3})} & [ \\tau _1 ]([ \\tau _2 ]([ \\tau _3 ] A))}$ and $(\\delta _{A,\\tau _1,\\tau _2},\\delta ^{-1}_{A,\\tau _1,\\tau _2})$ have to be monotone in $\\tau _1, \\tau _2$ : if $\\tau _1 \\le \\tau ^{\\prime }_1$ and $\\tau _2 \\le \\tau ^{\\prime }_2$ , then $\\small @C=5em@R=3.5em@M=0.5em{[ \\tau _1 + \\tau _2 ] A [rr]^{[ \\tau _1 + \\tau _2 \\le \\tau ^{\\prime }_1 + \\tau ^{\\prime }_2 ]_A} [d]_{\\delta _{A,\\tau _1,\\tau _2}} & & [ \\tau ^{\\prime }_1 + \\tau ^{\\prime }_2 ] A [d]^{\\delta _{A,\\tau ^{\\prime }_1,\\tau ^{\\prime }_2}}\\\\[ \\tau _1 ]([ \\tau _2 ] A) [r]_{[ \\tau _1 \\le \\tau ^{\\prime }_1 ]_{[ \\tau _2 ] A}} & [ \\tau ^{\\prime }_1 ]([ \\tau _2 ] A) [r]_{[ \\tau ^{\\prime }_1 ]([ \\tau _2 \\le \\tau ^{\\prime }_2 ]_A)} & [ \\tau ^{\\prime }_1 ]([ \\tau ^{\\prime }_2 ] A)}\\vspace{5.69046pt}$ A similar diagram follows for $\\delta ^{-1}_{A}$ from $(\\delta _{A,\\tau _1,\\tau _2},\\delta ^{-1}_{A,\\tau _1,\\tau _2})$ forming isomorphisms." 
], [ "Strong monoidal functor for context modalities", "We require a contravariant functor $\\langle - \\rangle : (\\mathbb {N},\\le )^{\\text{op}} \\rightarrow [ \\mathbb {C}, \\mathbb {C}]$ and families of natural isomorphisms (unit and multiplication) $\\eta _A : A \\overset{\\cong }{\\rightarrow } \\langle 0 \\rangle A\\qquad \\mu _{A, \\tau _1, \\tau _2} : \\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle A) \\overset{\\cong }{\\rightarrow } \\langle \\tau _1 + \\tau _2 \\rangle A$ satisfying time-graded variants of the laws of a monad $\\small @C=3em@R=3.5em@M=0.5em{\\langle \\tau \\rangle A [r]^-{\\eta _{\\langle \\tau \\rangle A}} [dr]_{\\equiv } & \\langle 0 \\rangle (\\langle \\tau \\rangle A) [d]^{\\mu _{A,0,\\tau }}&\\langle \\tau \\rangle A [r]^-{\\langle \\tau \\rangle (\\eta _A)} [dr]_{\\equiv } & \\langle \\tau \\rangle (\\langle 0 \\rangle A) [d]^{\\mu _{A,\\tau ,0}}\\\\& \\langle 0 + \\tau \\rangle A&& \\langle \\tau + 0 \\rangle A}$ $\\small @C=5em@R=3.5em@M=0.5em{\\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle (\\langle \\tau _3 \\rangle A)) [r]^{\\mu _{\\langle \\tau _3 \\rangle A, \\tau _1, \\tau _2}} [d]_{\\mathsf {id}} & \\langle \\tau _1 + \\tau _2 \\rangle (\\langle \\tau _3 \\rangle A) [r]^{\\mu _{A,\\tau _1 + \\tau _2,\\tau _3}} & \\langle (\\tau _1 + \\tau _2) + \\tau _3 \\rangle A [d]^{\\equiv }\\\\\\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle (\\langle \\tau _3 \\rangle A)) [r]_{\\langle \\tau _1 \\rangle (\\mu _{A,\\tau _2,\\tau _3})} & \\langle \\tau _1 \\rangle (\\langle \\tau _2 + \\tau _3 \\rangle A) [r]_{\\mu _{A,\\tau _1,\\tau _2 + \\tau _3}} & \\langle \\tau _1 + (\\tau _2 + \\tau _3) \\rangle A}\\vspace{5.69046pt}$ and $(\\mu _{A,\\tau _1,\\tau _2},\\mu ^{-1}_{A,\\tau _1,\\tau _2})$ have to be monotone in $\\tau _1, \\tau _2$ : if $\\tau _1 \\!\\le \\!", "\\tau ^{\\prime }_1$ and $\\tau _2 \\!\\le \\!", "\\tau ^{\\prime }_2$ , then $\\small @C=5em@R=3.5em@M=0.5em{\\langle \\tau ^{\\prime }_1 \\rangle (\\langle \\tau ^{\\prime }_2 \\rangle A) [d]_{\\mu _{A,\\tau ^{\\prime }_1,\\tau ^{\\prime }_2}} [r]^{\\langle \\tau _1 \\le \\tau ^{\\prime }_1 \\rangle _{\\langle \\tau ^{\\prime }_2 \\rangle A}} & \\langle \\tau _1 \\rangle (\\langle \\tau ^{\\prime }_2 \\rangle A) [r]^{\\langle \\tau _1 \\rangle (\\langle \\tau _2 \\le \\tau ^{\\prime }_2 \\rangle _A)} & \\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle A) [d]^{\\mu _{A,\\tau _1,\\tau _2}}\\\\\\langle \\tau ^{\\prime }_1 + \\tau ^{\\prime }_2 \\rangle A [rr]_{\\langle \\tau _1 + \\tau _2 \\le \\tau ^{\\prime }_1 + \\tau ^{\\prime }_2 \\rangle _A} && \\langle \\tau _1 + \\tau _2 \\rangle A}$ A similar diagram follows for $\\mu ^{-1}_{A}$ from $(\\mu _{A,\\tau _1,\\tau _2},\\mu ^{-1}_{A,\\tau _1,\\tau _2})$ forming isomorphisms." 
], [ "Adjunctions for boxing and unboxing resources", "We require adjunctions $\\langle \\tau \\rangle \\dashv [ \\tau ]$ for all $\\tau \\in \\mathbb {N}$ , i.e., families of natural transformations (unit and counit) $\\eta ^{\\dashv }_{A,\\tau } : A \\rightarrow [ \\tau ] (\\langle \\tau \\rangle A)\\qquad \\varepsilon ^{\\dashv }_{A,\\tau } : \\langle \\tau \\rangle ([ \\tau ] A) \\rightarrow A$ satisfying the two standard adjunction laws $\\small @C=4em@R=3.5em@M=0.5em{\\langle \\tau \\rangle A [r]^-{\\langle \\tau \\rangle (\\eta ^{\\dashv }_{A,\\tau })} [dr]_{\\mathsf {id}} & \\langle \\tau \\rangle ([ \\tau ](\\langle \\tau \\rangle A)) [d]^{\\varepsilon ^{\\dashv }_{\\langle \\tau \\rangle A,\\tau }}&[ \\tau ]A [r]^-{\\eta ^{\\dashv }_{[ \\tau ]A,\\tau }} [dr]_{\\mathsf {id}} & [ \\tau ](\\langle \\tau \\rangle ([ \\tau ] A)) [d]^{[ \\tau ](\\varepsilon ^{\\dashv }_{A,\\tau })}\\\\& \\langle \\tau \\rangle A&& [ \\tau ]A}$ and interacting well with the strong-monoidal structure of $\\langle - \\rangle $ and $[ - ]$ $\\small @C=3em@R=3.5em@M=0.5em{[ 0 ](\\langle 0 \\rangle A) [rr]^-{[ 0 \\le \\tau ]_{\\langle 0 \\rangle A}} [d]_{\\varepsilon _{\\langle 0 \\rangle A}} & & [ \\tau ](\\langle 0 \\rangle A)\\\\\\langle 0 \\rangle A [r]_-{\\eta ^{-1}} & A [r]_-{\\eta ^{\\dashv }_{A,\\tau }} & [ \\tau ](\\langle \\tau \\rangle A) [u]_{[ \\tau ](\\langle 0 \\le \\tau \\rangle _A)}}$ $\\small @C=3em@R=3.5em@M=0.5em{\\langle \\tau \\rangle ([ \\tau ] A) [d]_{\\varepsilon ^{\\dashv }_{A,\\tau }} [rr]^{\\langle 0 \\le \\tau \\rangle _{[ \\tau ] A}} && \\langle 0 \\rangle ([ \\tau ] A)\\\\A [r]_-{\\varepsilon ^{-1}_A} & [ 0 ] A [r]_-{\\eta _{[ 0 ]A}} & \\langle 0 \\rangle ([ 0 ] A) [u]_{\\langle 0 \\rangle ([ 0 \\le \\tau ]_A)}}$ $\\small @C=4em@R=3.5em@M=0.5em{A [r]^-{\\eta ^{\\dashv }_{A,\\tau _1}} [d]_{\\eta ^{\\dashv }_{A,\\tau _1 + \\tau _2}} & [ \\tau _1 ](\\langle \\tau _2 \\rangle A) [r]^-{[ \\tau _1 ](\\eta ^{\\dashv }_{\\langle \\tau _1 \\rangle A,\\tau _2})} & [ \\tau _1 ]([ \\tau _2 ](\\langle \\tau _2 \\rangle (\\langle \\tau _1 \\rangle A))) [d]^{[ \\tau _1 ]([ \\tau _2 ](\\mu _{A,\\tau _2,\\tau _1}))}\\\\[ \\tau _1 + \\tau _2 ](\\langle \\tau _1 + \\tau _2 \\rangle A) [rr]_-{\\delta _{\\langle \\tau _1 + \\tau _2 \\rangle A,\\tau _1,\\tau _2}} && [ \\tau _1 ]([ \\tau _2 ](\\langle \\tau _1 + \\tau _2 \\rangle A))}\\vspace{5.69046pt}$ $\\small @C=4em@R=3.5em@M=0.5em{\\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle ([ \\tau _1 + \\tau _2 ] A)) [r]^-{\\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle (\\delta _{A,\\tau _2,\\tau _1}))} [d]_{\\mu _{[ \\tau _1 + \\tau _2 ] A,\\tau _1, \\tau _2}} & \\langle \\tau _1 \\rangle (\\langle \\tau _2 \\rangle ([ \\tau _2 ]([ \\tau _1 ] A))) [r]^-{\\langle \\tau _1 \\rangle (\\varepsilon ^{\\dashv }_{[ \\tau _2 ]A,\\tau _1})} & \\langle \\tau _1 \\rangle ([ \\tau _1 ] A) [d]^{\\varepsilon ^{\\dashv }_{A,\\tau _1}}\\\\\\langle \\tau _1 + \\tau _2 \\rangle ([ \\tau _1 + \\tau _2 ]A) [rr]_-{\\varepsilon ^{\\dashv }_{A,\\tau _1 + \\tau _2}} && A}$" ], [ "$[ - ]$ -strong graded monad for modelling computations", "We require a functor $ \\mathbb {N} \\rightarrow [ \\mathbb {C}, \\mathbb {C}]$ together with natural transformations (unit, multiplication, and strength) $\\eta ^{_A : A \\rightarrow 0\\, A\\qquad \\mu ^{_{A,\\tau _1,\\tau _2} : \\tau _1 ( \\tau _2\\, A) \\rightarrow (\\tau _1 + \\tau _2)\\, A\\mathsf {str}^{_{A,B,\\tau } : [ \\tau ]A \\times B\\, \\tau \\rightarrow (A \\times B)\\, \\tau satisfying standard graded monad 
laws $\\small \\xymatrix@C=3em@R=3.5em@M=0.5em{T\\,\\tau \\, A \\ar[dr]_{\\equiv } \\ar[r]^-{\\eta ^T_{T\\,\\tau \\, A}} & T\\,0\\, (T\\,\\tau \\, A) \\ar[d]^{\\mu ^T_{A,0,\\tau }}&T\\,\\tau \\, A \\ar[r]^-{T\\,\\tau \\, (\\eta ^T_{A})} \\ar[dr]_{\\equiv } & T\\,\\tau \\, (T\\,0\\, A) \\ar[d]^{\\mu ^T_{A,\\tau ,0}}\\\\& T\\,(0 + \\tau )\\, A&& T\\,(\\tau + 0)\\, A}$ $\\small \\xymatrix@C=4.5em@R=3.5em@M=0.5em{T\\,\\tau _1\\, (T\\,\\tau _2\\, (T\\,\\tau _3\\, A)) \\ar[r]^-{\\mu ^T_{T\\,\\tau _3\\, A,\\tau _1,\\tau _2}} \\ar[d]_{\\mathsf {id}} & T\\,(\\tau _1 + \\tau _2)\\, (T\\,\\tau _3\\, A) \\ar[r]^-{\\mu ^T_{A,\\tau _1 + \\tau _2,\\tau _3}} & T\\,((\\tau _1 + \\tau _2) + \\tau _3)\\, A \\ar[d]^{\\equiv }\\\\T\\,\\tau _1\\, (T\\,\\tau _2\\, (T\\,\\tau _3\\, A)) \\ar[r]_-{T\\,\\tau _1\\, (\\mu ^T_{A,\\tau _2,\\tau _3})} & T\\,\\tau _1\\, (T\\,(\\tau _2 + \\tau _3)\\, A) \\ar[r]_-{\\mu ^T_{A,\\tau _1,\\tau _2 + \\tau _3}} & T\\,(\\tau _1 + (\\tau _2 + \\tau _3))\\, A}$ and $[ - ]$ -strength laws $\\small \\xymatrix@C=3em@R=3.5em@M=0.5em{A \\times B \\ar[r]^-{\\varepsilon ^{-1}_A \\times \\eta ^T_B} \\ar[dr]_{\\eta ^T_{A \\times B}} & [ 0 ] A \\times T\\,0\\, B \\ar[d]^{\\mathsf {str}^T_{A,B,0}}&[ \\tau ]A \\times T\\,\\tau \\, B \\ar[r]^-{\\mathsf {str}^T_{A,B,\\tau }} \\ar[dr]_{\\mathsf {snd}} & T\\,\\tau \\, (A \\times B) \\ar[d]^{T\\,\\tau \\, (\\mathsf {snd})}\\\\& T\\,0\\, (A \\times B)&& T\\,\\tau \\, B}$ $\\small \\xymatrix@C=2.5em@R=3.5em@M=0.5em{[ \\tau _1 ]([ \\tau _2 ]A) \\times T\\,\\tau _1\\, (T\\,\\tau _2\\, B) \\ar[r]^-{\\mathsf {str}^T_{[ \\tau _2 ]A,T\\,\\tau _2\\, B,\\tau _1}} \\ar[d]_{\\delta ^{-1}_{A,\\tau _1,\\tau _2} \\times \\mu ^T_{B,\\tau _1,\\tau _2}} & T\\,\\tau _1\\, ([ \\tau _2 ] A \\times T\\,\\tau _2\\, B) \\ar[r]^-{T\\,\\tau _1\\, (\\mathsf {str}^T_{A,B,\\tau _2})} & T\\,\\tau _1\\, (T\\,\\tau _2\\, (A \\times B)) \\ar[d]^{\\mu ^T_{A \\times B,\\tau _1,\\tau _2}}\\\\[ \\tau _1 + \\tau _2 ] A \\times T\\,(\\tau _1 + \\tau _2)\\, B \\ar[rr]_-{\\mathsf {str}^T_{A,B,\\tau _1 + \\tau _2}} && T\\,(\\tau _1 + \\tau _2)\\, (A \\times B)}$ $\\small \\xymatrix@C=3em@R=3.5em@M=0.5em{[ \\tau ] A \\times ([ \\tau ] B \\times T\\,\\tau \\, C) \\ar[r]^-{\\mathsf {id}\\times \\mathsf {str}^T_{B,C,\\tau }} \\ar[d]_{\\alpha ^{-1}} & [ \\tau ] A \\times T\\,\\tau \\, (B \\times C) \\ar[r]^-{\\mathsf {str}^T_{A,B \\times C,\\tau }} & T\\,\\tau \\, (A \\times (B \\times C))\\\\([ \\tau ] A \\times [ \\tau ] B) \\times T\\,\\tau \\, C \\ar[r]_-{\\mathsf {m}_{A,B,\\tau } \\times \\mathsf {id}} & [ \\tau ](A \\times B) \\times T\\,\\tau \\, C \\ar[r]_-{\\mathsf {str}^T_{A \\times B,C,\\tau }} & T\\,\\tau \\, ((A \\times B) \\times C) \\ar[u]_{T\\,\\tau \\, (\\alpha )}}$" ], [ "$[ - ]$ -enrichment of graded monads", "We require morphisms $\\mathsf {enr}_{A,B,\\tau } : [ \\tau ](A \\Rightarrow B) \\rightarrow (T\\,\\tau \\,A \\Rightarrow T\\,\\tau \\,B)$ satisfying time-graded analogues of the enriched functor laws $\\small \\xymatrix@C=5em@R=4.5em@M=0.5em{\\mathbb {1}\\times T\\,\\tau \\, A \\ar[d]_{\\mathsf {snd}} \\ar[r]^-{\\mathsf {curry}(\\mathsf {snd}) \\times \\mathsf {id}} & (A \\Rightarrow A) \\times T\\,\\tau \\, A \\ar[d]^{\\eta ^{[]}_{\\!A \\Rightarrow A,\\tau } \\times \\mathsf {id}}\\\\T\\,\\tau \\, A & [ \\tau ](A \\Rightarrow A) \\times T\\,\\tau \\, A \\ar[l]^-{\\mathsf {uncurry}(\\mathsf {enr}_{\\!A,A,\\tau })}}$ $\\small \\xymatrix@C=6em@R=4.5em@M=0.5em{[ \\tau ](B \\Rightarrow C) \\times \\big ([ \\tau ](A \\Rightarrow B) \\times T\\,\\tau \\, A \\big ) \\ar[r]^-{\\mathsf {id}\\times \\mathsf {uncurry}(\\mathsf {enr}_{\\!A,B,\\tau })} \\ar[d]_{\\alpha ^{-1}} & [ \\tau ](B \\Rightarrow C) \\times T\\,\\tau \\, B \\ar[ddd]^{\\mathsf {uncurry}(\\mathsf {enr}_{\\!B,C,\\tau })}\\\\\\big ([ \\tau ](B \\Rightarrow C) \\times [ \\tau ](A \\Rightarrow B)\\big ) \\times T\\,\\tau \\, A \\ar[d]_{\\mathsf {m}_{B \\Rightarrow C, A \\Rightarrow B, \\tau } \\times \\mathsf {id}}\\\\[ \\tau ]((B \\Rightarrow C) \\times (A
\\Rightarrow B)) \\times T\\, \\tau \\, A [d]_{[ \\tau ](\\mathsf {comp}) \\times \\mathsf {id}}\\\\[ \\tau ](A \\Rightarrow C) \\times T\\, \\tau \\, A [r]_-{\\mathsf {uncurry}(\\mathsf {enr}_{\\!A,C,\\tau })} & T\\, \\tau \\, C}and interacting well with the unit and multiplication of \\small @C=5em@R=4.5em@M=0.5em{(A \\Rightarrow B) \\times A [r]^-{\\varepsilon ^{-1}_{A \\Rightarrow B} \\times \\eta ^{_{A}} [d]_{\\mathsf {uncurry}(\\mathsf {id})} & [ 0 ](A \\Rightarrow B) \\times 0\\, A [d]^{\\mathsf {uncurry}(\\mathsf {enr}_{\\!A,B,0})}\\\\B [r]_-{\\eta ^{_{B}} & 0\\, B}\\scriptsize @C=3em@R=4.5em@M=0.5em{[ \\tau _1 ]([ \\tau _2 ](A \\Rightarrow B)) \\times \\tau _1\\, ( \\tau _2\\, A) [r]^-{\\delta ^{-1}_{A \\Rightarrow B,\\tau _1,\\tau _2} \\times \\mu ^{_{A,\\tau _1,\\tau _2}} [d]_{[ \\tau ](\\mathsf {curry}(\\mathsf {id})) \\times \\mathsf {id}} & [ \\tau _1 + \\tau _2 ](A \\Rightarrow B) \\times (\\tau _1 + \\tau _2)\\, A [ddd]^{\\mathsf {uncurry}(\\mathsf {enr}_{\\!A,B,\\tau _1 + \\tau _2})}\\\\[ \\tau _1 ]\\big ( \\tau _2\\, A \\Rightarrow ([ \\tau _2 ](A \\Rightarrow B) \\times \\tau _2\\, A)\\big ) \\times \\tau _1\\, ( \\tau _2\\, A) [d]_{\\mathsf {uncurry}(\\mathsf {enr})}\\\\ \\tau _1\\, ([ \\tau _2 ](A \\Rightarrow B) \\times \\tau _2\\, A) [d]_{ \\tau _1\\, (\\mathsf {uncurry}(\\mathsf {enr}_{\\!A,B,\\tau _2}))}\\\\ \\tau _1\\, ( \\tau _2\\, A) [r]_{\\mu ^{_{A,\\tau _1,\\tau _2}} & (\\tau _1 + \\tau _2)\\, B}where the unit-like \\eta ^{[]}_{A,\\tau } : A is given by the compositeA \\overset{\\varepsilon ^{-1}_{A}}{-\\!\\!\\!-\\!\\!\\!\\rightarrow } [ 0 ] A \\overset{[ 0 \\le \\tau ]_A}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\rightarrow } [ \\tau ]A,and where the composition morphism \\mathsf {comp}_{A,B,C} : (B \\Rightarrow C) \\times (A \\Rightarrow B) \\rightarrow A \\Rightarrow Cis derived from the universal property of exponentials in the standard way~\\cite {Kelly:EnrichedCats}.", "}Given [ - ]-strength, [ - ]-enrichment is given as\\mathsf {enr}_{A,B,\\tau } \\mathrel {\\overset{\\text{\\tiny def}}{=}}[ \\tau ](A \\Rightarrow B) \\overset{\\mathsf {curry}\\big ( \\tau \\, (\\mathsf {uncurry}(\\mathsf {id})) \\,\\circ \\, \\mathsf {str}^{_{A \\Rightarrow B, A, \\tau }\\big )}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\rightarrow } (\\tau \\,A \\Rightarrow \\tau \\,B)Conversely, given [ - ]-enrichment, [ - ]-strength is defined as\\mathsf {str}^{_{A,B,\\tau } \\mathrel {\\overset{\\text{\\tiny def}}{=}}[ \\tau ]A \\times B\\, \\tau \\overset{\\mathsf {uncurry}(\\mathsf {enr}_{\\!B,A \\times B,\\tau }) \\,\\circ \\, [ \\tau ](\\mathsf {curry}(\\mathsf {id})) \\times \\mathsf {id}}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\rightarrow } (A \\times B)\\, \\tau }\\newpage }{\\subsection }{Algebraic operations for algebraic effects}}We require natural transformations\\mathsf {op}^{_{A,\\tau } : [\\!", "[A_\\mathsf {op}]\\!", "]^g \\times [ \\tau _\\mathsf {op} ]([\\!", "[B_\\mathsf {op}]\\!", "]^g \\Rightarrow \\tau \\, A) \\rightarrow (\\tau _\\mathsf {op}+ \\tau )\\, Afor all \\mathsf {op}: A_\\mathsf {op} \\leadsto B_\\mathsf {op} \\mathbin {!}", "\\tau _\\mathsf {op} \\in \\mathcal {O}, 
and\\mathsf {delay}^{_{\\!A,\\tau ^{\\prime }}\\, \\tau : [ \\tau ] ( \\tau ^{\\prime }\\, A) \\rightarrow (\\tau + \\tau ^{\\prime })\\, Afor all \\tau \\in \\mathbb {B}, satisfying algebraicity laws with respect to \\mu ^{ and \\mathsf {str}^{\\small @C=4em@R=4.5em@M=0.5em{A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, ( \\tau ^{\\prime } \\, A)) [d]_{\\mathsf {id}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\mu ^{_{A,\\tau ,\\tau ^{\\prime }})} [r]^-{\\mathsf {op}^{_{^{\\prime } A,\\tau }} & (\\tau _\\mathsf {op}+ \\tau )\\, ( \\tau ^{\\prime } \\, A) [dd]^{\\mu ^{_{A,\\tau _\\mathsf {op}+ \\tau ,\\tau ^{\\prime }}}\\\\A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow (\\tau + \\tau ^{\\prime })\\, A) [d]_{\\mathsf {op}^{_{A,\\tau + \\tau ^{\\prime }}}\\\\ (\\tau _\\mathsf {op}+ (\\tau + \\tau ^{\\prime }))\\, A [r]_-{\\equiv }& ((\\tau _\\mathsf {op}+ \\tau ) + \\tau ^{\\prime })\\, A}\\small @C=6em@R=4.5em@M=0.5em{[ \\tau ]( \\tau ^{\\prime }\\, ( \\tau ^{\\prime \\prime }\\, A)) [r]^{\\mathsf {delay}^{_{( \\tau ^{\\prime \\prime }\\, A),\\tau ^{\\prime }}\\, \\tau } [d]_{[ \\tau ](\\mu ^{_{A,\\tau ^{\\prime },\\tau ^{\\prime \\prime }})} & (\\tau + \\tau ^{\\prime })\\, ( \\tau ^{\\prime \\prime }\\, A) [dd]^{\\mu ^{_{A,\\tau + \\tau ^{\\prime }, \\tau ^{\\prime \\prime }}}\\\\[ \\tau ]( (\\tau ^{\\prime } + \\tau ^{\\prime \\prime }) \\, A) [d]_{\\mathsf {delay}^{_{\\!A,\\tau ^{\\prime } + \\tau ^{\\prime \\prime }}\\, \\tau }\\\\ (\\tau + (\\tau ^{\\prime } + \\tau ^{\\prime \\prime }))\\, A [r]_-{\\equiv }& ((\\tau + \\tau ^{\\prime }) + \\tau ^{\\prime \\prime })\\, A}\\small @C=5em@R=4em@M=0.5em{[ \\tau + \\tau ^{\\prime } ] A \\times [ \\tau ]( \\tau ^{\\prime }\\, B) [r]^-{\\mathsf {id}\\times \\mathsf {delay}^{_{\\!B,\\tau ^{\\prime }}\\, \\tau } [d]_{\\delta _{A,\\tau ,\\tau ^{\\prime }} \\times \\mathsf {id}} & [ \\tau + \\tau ^{\\prime } ] A \\times (\\tau + \\tau ^{\\prime })\\, B [ddd]^{\\mathsf {str}^{_{A, B, \\tau + \\tau ^{\\prime }}}\\\\[ \\tau ]([ \\tau ^{\\prime } ]A) \\times [ \\tau ]( \\tau ^{\\prime }\\, B) [d]_{\\mathsf {m}_{[ \\tau ^{\\prime } ]A, \\tau ^{\\prime }\\, B,\\tau }}\\\\[ \\tau ]([ \\tau ^{\\prime } ]A \\times \\tau ^{\\prime }\\, B) [d]_{[ \\tau ](\\mathsf {str}^{_{A,B,\\tau ^{\\prime }})}\\\\[ \\tau ](T\\, \\tau ^{\\prime }\\, (A \\times B)) [r]_-{\\mathsf {delay}^{_{\\!A \\times B,\\tau ^{\\prime }}\\, \\tau } & (\\tau + \\tau ^{\\prime })\\, (A \\times B)}\\small @C=4em@R=4em@M=0.5em{[ \\tau _\\mathsf {op}+ \\tau ] A \\times (A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, B)) [r]^-{\\mathsf {id}\\times \\mathsf {op}^{_{B,\\tau }} [d]_{\\alpha \\,\\circ \\, (\\mathsf {swap} \\times \\mathsf {id}) \\,\\circ \\, \\alpha ^{-1}} & [ \\tau _\\mathsf {op}+ \\tau ] A \\times (\\tau _\\mathsf {op}+ \\tau )\\, B [ddddd]^{\\mathsf {str}^{_{A,B,\\tau _\\mathsf {op}+ \\tau }}\\\\A_\\mathsf {op}\\times ([ \\tau _\\mathsf {op}+ \\tau ]A \\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, B)) [d]_{\\mathsf {id}\\times (\\delta _{A,\\tau _\\mathsf {op},\\tau } \\times \\mathsf {id})}\\\\A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ]([ \\tau ]A) \\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, B) [d]_{\\mathsf {id}\\times \\mathsf {m}_{[ \\tau ]A, B_\\mathsf {op}\\Rightarrow B ,\\tau _\\mathsf {op}}}\\\\A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ]([ \\tau ]A \\times (B_\\mathsf {op}\\Rightarrow 
\\tau \\, B)) [d]_{\\mathsf {id}\\times [ \\tau _\\mathsf {op} ](\\mathsf {push\\text{-}under\\text{-}}\\Rightarrow )}\\\\A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow ([ \\tau ]A \\times \\tau \\, B)) [d]_{\\mathsf {id}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\mathsf {str}^{_{A,B,\\tau })}\\\\A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, (A \\times B)) [r]_-{\\mathsf {op}^{_{A \\times B,\\tau }}& (\\tau _\\mathsf {op}+ \\tau )\\, (A \\times B)}where \\mathsf {push\\text{-}under\\text{-}}\\!\\!\\Rightarrow is a composite morphism that pushes the givenargument [ \\tau ]A under the exponential---it is defined using the universal property of \\Rightarrow .", "}\\newpage }\\subsection {-algebras for effect handling}}We require morphisms\\begin{array}{l}{A,\\tau ,\\tau ^{\\prime }} : \\Pi _{\\mathsf {op}\\in \\mathcal {O}} \\Pi _{\\tau ^{\\prime \\prime } \\in \\mathbb {N}} \\big (([\\!", "[A_\\mathsf {op}]\\!]", "\\times [ \\tau _\\mathsf {op} ]([\\!", "[B_\\mathsf {op}]\\!]", "\\Rightarrow \\tau ^{\\prime \\prime }\\, A)) \\Rightarrow (\\tau _\\mathsf {op}+ \\tau ^{\\prime \\prime })\\, A \\big )\\\\[0.5ex]\\hspace{199.16928pt}\\rightarrow \\tau \\, ( \\tau ^{\\prime }\\, A) \\Rightarrow (\\tau + \\tau ^{\\prime })\\, A\\end{array}satisfying laws stating that A returns a graded -algebra for \\mathsf {op}^{ and \\mathsf {delay}^{\\small @C=4em@R=4.5em@M=0.5em{\\mathcal {H} \\times \\tau \\, A [r]^-{\\mathsf {id}\\times \\eta ^{_{ \\tau \\, A}} [d]_{\\mathsf {snd}} & \\mathcal {H} \\times 0\\, ( \\tau \\, A) [d]^{\\mathsf {uncurry}({A,0,\\tau })}\\\\ \\tau \\, A [r]_-{\\equiv } & (0 + \\tau )\\, A}\\small @C=4.8em@R=4.5em@M=0.5em{\\mathcal {H} \\times [ \\tau ]( \\tau ^{\\prime }\\, ( \\tau ^{\\prime \\prime }\\, A)) [r]^-{\\mathsf {id}\\times \\mathsf {delay}^{_{^{\\prime \\prime } A,\\tau ^{\\prime }}\\, \\tau } [d]_{\\eta ^{\\dashv }_{A,\\tau } \\times \\mathsf {id}} & \\mathcal {H} \\times (\\tau + \\tau ^{\\prime })\\, ( \\tau ^{\\prime \\prime }\\, A) [ddddd]^{\\mathsf {uncurry}({\\!A,\\tau + \\tau ^{\\prime }\\!,\\tau ^{\\prime \\prime }})}\\\\[ \\tau ](\\langle \\tau \\rangle \\mathcal {H}) \\times [ \\tau ]( \\tau ^{\\prime }\\, ( \\tau ^{\\prime \\prime }\\, A)) [d]_{\\mathsf {m}_{\\langle \\tau \\rangle \\mathcal {H}, \\tau ^{\\prime }\\, ( \\tau ^{\\prime \\prime }\\, A), \\tau }}\\\\[ \\tau ](\\langle \\tau \\rangle \\mathcal {H} \\times \\tau ^{\\prime }\\, ( \\tau ^{\\prime \\prime }\\, A)) [d]_{[ \\tau ](\\varepsilon ^{\\langle \\rangle }_{\\mathcal {H},\\tau } \\times \\mathsf {id})}\\\\[ \\tau ](\\mathcal {H} \\times \\tau ^{\\prime }\\, ( \\tau ^{\\prime \\prime }\\, A)) [d]_{[ \\tau ](\\mathsf {uncurry}({A,\\tau ^{\\prime },\\tau ^{\\prime \\prime }}))}\\\\[ \\tau ]( (\\tau ^{\\prime } + \\tau ^{\\prime \\prime })\\, A) [d]_{\\mathsf {delay}^{_{\\!A,\\tau ^{\\prime } + \\tau ^{\\prime \\prime }}\\, \\tau }\\\\ (\\tau + (\\tau ^{\\prime } + \\tau ^{\\prime \\prime }))\\, A [r]_{\\equiv }& ((\\tau + \\tau ^{\\prime }) + \\tau ^{\\prime \\prime })\\, A}\\hspace{-5.69046pt}\\scriptsize @C=1em@R=4em@M=0.5em{\\mathcal {H} \\times (A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, ( \\tau ^{\\prime }\\, A))) [r]^-{\\mathsf {id}\\times \\mathsf {op}^{_{^{\\prime } A,\\tau }} [d]_{\\langle \\mathsf {fst}, \\mathsf {id}\\rangle } & \\mathcal {H} \\times (\\tau _\\mathsf {op}+ \\tau )\\, ( \\tau ^{\\prime }\\, A) [ddddddddd]^{\\mathsf {uncurry}({A,\\tau 
_\\mathsf {op}+ \\tau ,\\tau ^{\\prime }})}\\\\\\mathcal {H} \\times (\\mathcal {H} \\times (A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, ( \\tau ^{\\prime }\\, A)))) [d]_{(\\mathsf {proj}_{\\tau + \\tau ^{\\prime }} \\,\\circ \\, \\mathsf {proj}_{\\mathsf {op}}) \\times \\mathsf {id}}\\\\\\mathcal {H}_{\\mathsf {op},\\tau + \\tau ^{\\prime }} \\times (\\mathcal {H} \\times (A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, ( \\tau ^{\\prime }\\, A)))) [d]_{\\mathsf {id}\\times (\\alpha \\,\\circ \\, (\\mathsf {swap} \\times \\mathsf {id}) \\,\\circ \\, \\alpha ^{-1})}\\\\\\mathcal {H}_{\\mathsf {op},\\tau + \\tau ^{\\prime }} \\times (A_\\mathsf {op}\\times (\\mathcal {H} \\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, ( \\tau ^{\\prime }\\, A)))) [d]_{\\mathsf {id}\\times (\\mathsf {id}\\times (\\eta ^{\\dashv }_{\\mathcal {H},\\tau _\\mathsf {op}} \\times \\mathsf {id}))}\\\\\\mathcal {H}_{\\mathsf {op},\\tau + \\tau ^{\\prime }} \\times (A_\\mathsf {op}\\times ([ \\tau _\\mathsf {op} ](\\langle \\tau _\\mathsf {op} \\rangle \\mathcal {H}) \\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\tau \\, ( \\tau ^{\\prime }\\, A))))) [d]_{\\mathsf {id}\\times (\\mathsf {id}\\times \\mathsf {m}_{\\langle \\tau _\\mathsf {op} \\rangle \\mathcal {H}, B_\\mathsf {op}\\Rightarrow (^{\\prime } A), \\tau _\\mathsf {op}})}\\\\\\mathcal {H}_{\\mathsf {op},\\tau + \\tau ^{\\prime }} \\times (A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](\\langle \\tau _\\mathsf {op} \\rangle \\mathcal {H} \\times (B_\\mathsf {op}\\Rightarrow \\tau \\, ( \\tau ^{\\prime }\\, A)))) [d]_{\\mathsf {id}\\times (\\mathsf {id}\\times [ \\tau _\\mathsf {op} ](\\varepsilon ^{\\langle \\rangle }_{\\mathcal {H},\\tau } \\times \\mathsf {id}))}\\\\\\mathcal {H}_{\\mathsf {op},\\tau + \\tau ^{\\prime }} \\times (A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](\\mathcal {H} \\times (B_\\mathsf {op}\\Rightarrow \\tau \\, ( \\tau ^{\\prime }\\, A)))) [d]_{\\mathsf {id}\\times (\\mathsf {id}\\times [ \\tau _\\mathsf {op} ](\\mathsf {push\\text{-}under\\text{-}}\\Rightarrow ))}\\\\\\mathcal {H}_{\\mathsf {op},\\tau + \\tau ^{\\prime }} \\times (A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow (\\mathcal {H} \\times \\tau \\, ( \\tau ^{\\prime }\\, A)))) [d]_{\\mathsf {id}\\times (\\mathsf {id}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow \\mathsf {uncurry}({A,\\tau ,\\tau ^{\\prime }})))}\\\\\\mathcal {H}_{\\mathsf {op},\\tau + \\tau ^{\\prime }} \\times (A_\\mathsf {op}\\times [ \\tau _\\mathsf {op} ](B_\\mathsf {op}\\Rightarrow (\\tau + \\tau ^{\\prime })\\, A)) [d]_{\\mathsf {uncurry}\\, \\mathsf {id}}\\\\ (\\tau _\\mathsf {op}+ (\\tau + \\tau ^{\\prime }))\\, A [r]_{\\equiv }& ((\\tau _\\mathsf {op}+ \\tau ) + \\tau ^{\\prime })\\, A}\\vspace{14.22636pt}where we write \\mathcal {H} for the domain of the morphisms {A,\\tau ,\\tau ^{\\prime }},\\mathcal {H}_{\\mathsf {op},\\tau + \\tau ^{\\prime }} for the operation case([\\!", "[A_\\mathsf {op}]\\!]", "\\times [ \\tau _\\mathsf {op} ]([\\!", "[B_\\mathsf {op}]\\!]", "\\Rightarrow (\\tau + \\tau ^{\\prime })\\, A)) \\Rightarrow (\\tau _\\mathsf {op}+ (\\tau + \\tau ^{\\prime }))\\, A,and where the counit-like \\varepsilon ^{\\langle \\rangle }_{A,\\tau } is given by the composite\\langle \\tau \\rangle A \\overset{\\langle 0 \\le \\tau \\rangle _A}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\rightarrow }\\langle 0 \\rangle A 
\\overset{\\eta ^{-1}_A}{-\\!\\!\\!\\rightarrow } A.", "}\\newpage }\\subsection {Interpretation of values and computations}}Well-typed values are interpreted as follows\\small \\begin{array}{l}[\\!", "[\\Gamma ,x \\unknown.", "X,\\Gamma ^{\\prime } \\vdash x : X]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ,x \\unknown.", "X,\\Gamma ^{\\prime }]\\!", "]\\mathbb {1}\\overset{\\cong }{\\rightarrow }[\\!", "[\\Gamma ^{\\prime }]\\!", "]\\big ([\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\times [\\![X]\\!", "]\\big )\\\\[0.5ex]\\hspace{106.69783pt}\\overset{\\mathsf {e}}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }\\langle \\mathsf {time}\\; \\Gamma ^{\\prime } \\rangle \\big ([\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\times [\\![X]\\!", "]\\big )\\overset{\\varepsilon ^{\\langle \\rangle }}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\times [\\![X]\\!", "]\\overset{\\mathsf {snd}}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }[\\![X]\\!]\\\\[3ex][\\!", "[\\Gamma \\vdash \\mathsf {f}(V_1, \\ldots , V_n) : B]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\overset{\\langle [\\![V_1]\\!]", ", \\ldots , [\\![V_n]\\!]", "\\rangle }{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\![A_1]\\!]", "\\times \\ldots [\\![A_n]\\!]\\overset{[\\!", "[\\mathsf {f}]\\!", "]}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }[\\![B]\\!]\\\\[3ex][\\!", "[\\Gamma \\vdash ( V , W ) : X \\times Y]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\overset{\\langle [\\![V]\\!]", ", [\\![W]\\!]", "\\rangle }{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\![X]\\!]", "\\times [\\![Y]\\!]\\\\[3ex][\\!", "[\\Gamma \\vdash (): \\mathsf {unit}]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\overset{!", "}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }\\mathbb {1}\\\\[3ex][\\!", "[\\Gamma \\vdash {\\mathop {\\mathsf {\\color {keywordColor}fun}}}\\; (x : X) \\mapsto M : X \\rightarrow Y \\mathbin {!", "}\\tau ]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\overset{\\mathsf {curry}([\\![M]\\!", "])}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\![X]\\!]", "\\Rightarrow \\tau \\, [\\![Y]\\!]\\\\[3ex][\\!", "[\\Gamma \\vdash \\mathsf {\\color {keywordColor}box}_{\\tau }\\,V : [ \\tau ]\\, X]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\overset{\\eta ^{\\dashv }}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }[ \\tau ] \\big (\\langle \\tau \\rangle ([\\!", "[\\Gamma ]\\!", "]\\mathbb {1})\\big )\\overset{[ \\tau ]([\\![V]\\!", "])}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[ \\tau ] [\\![X]\\!", "]\\end{array}\\vspace{14.22636pt}}Well-typed computations are interpreted as follows\\small \\begin{array}{l}[\\!", "[\\Gamma \\vdash \\mathsf {\\color {keywordColor}return}_{}\\, V : X \\mathbin {!}0]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\overset{[\\![V]\\!", "]}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }[\\![X]\\!", "]\\overset{\\eta ^{}{-\\!\\!\\!\\!\\!\\!\\longrightarrow } 0\\, [\\![X]\\!]\\\\[4ex][\\!", "[\\Gamma \\vdash \\mathsf {\\color {keywordColor}let}_{}\\; x = M \\;\\mathsf {\\color {keywordColor}in}\\; N : Y \\mathbin {!", "}\\tau + \\tau ^{\\prime }]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma 
]\\!]", "\\mathbb {1}\\overset{\\langle \\eta ^{\\dashv }, [\\![M]\\!]", "\\rangle }{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[ \\tau ]\\big (\\langle \\tau \\rangle ([\\!", "[\\Gamma ]\\!]", "\\mathbb {1})\\big ) \\times \\tau \\,[\\![X]\\!", "]\\\\[1ex]\\hspace{56.9055pt}\\overset{\\mathsf {str}^{}{-\\!\\!\\!\\!\\!\\!\\longrightarrow } \\tau \\, \\big (\\langle \\tau \\rangle ([\\!", "[\\Gamma ]\\!]", "\\mathbb {1}) \\times [\\![X]\\!", "]\\big )\\overset{([\\![N]\\!", "])}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow } \\tau \\, ( \\tau ^{\\prime }\\, [\\![Y]\\!", "])\\overset{\\mu ^{}{-\\!\\!\\!\\!\\!\\!\\longrightarrow } (\\tau + \\tau ^{\\prime })\\, [\\![Y]\\!]\\\\[4ex][\\!", "[\\Gamma \\vdash V\\,W : Y \\mathbin {!", "}\\tau ]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\overset{\\langle [\\![V]\\!]", ", [\\![W]\\!]", "\\rangle }{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }([\\![X]\\!]", "\\Rightarrow \\tau \\, [\\![Y]\\!])", "\\times [\\![X]\\!", "]\\overset{\\mathsf {uncurry}(\\mathsf {id})}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow } \\tau \\, [\\![Y]\\!]\\\\[4ex][\\!", "[\\Gamma \\vdash \\mathsf {\\color {keywordColor}match}\\;V\\;\\mathsf {\\color {keywordColor}with}\\;\\lbrace ( x , y ) \\mapsto M\\rbrace _{} : Z \\mathbin {!", "}\\tau ]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\overset{\\langle \\mathsf {id}, [\\![V]\\!]", "\\rangle }{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\times ([\\![X]\\!]", "\\times [\\![Y]\\!", "])\\\\[1ex]\\hfill \\overset{\\alpha ^{-1}}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }([\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\times [\\![X]\\!])", "\\times [\\![Y]\\!]\\overset{[\\![M]\\!", "]}{-\\!\\!\\!\\!\\!\\!\\longrightarrow } \\tau \\, [\\![Z]\\!]\\\\[4ex][\\!", "[\\Gamma \\vdash \\mathsf {op}\\;V\\; (x \\,.\\, M) : X \\mathbin {!", "}\\tau _\\mathsf {op}+ \\tau ]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\overset{\\langle [\\![V]\\!]", ", \\eta ^{\\dashv }\\rangle }{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\!", "[A_\\mathsf {op}]\\!]", "\\times [ \\tau _\\mathsf {op} ]\\big (\\langle \\tau _\\mathsf {op} \\rangle ([\\!", "[\\Gamma ]\\!]", "\\mathbb {1})\\big )\\\\[1ex]\\hfill \\overset{\\!\\!\\mathsf {id}\\times [ \\tau _\\mathsf {op} ](\\mathsf {curry}([\\![M]\\!", "]))}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow \\,\\,}[\\!", "[A_\\mathsf {op}]\\!]", "\\times [ \\tau _\\mathsf {op} ]\\big ([\\!", "[B_\\mathsf {op}]\\!]", "\\Rightarrow \\tau \\, [\\![X]\\!", "]\\big )\\overset{\\mathsf {op}^{}{-\\!\\!\\!\\!\\!\\!\\longrightarrow } (\\tau _\\mathsf {op}+ \\tau )\\, [\\![X]\\!]\\\\[4ex][\\!", "[\\Gamma \\vdash \\mathsf {\\color {keywordColor}delay}\\;\\tau \\; M : X \\mathbin {!", "}\\tau + \\tau ^{\\prime }]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\overset{\\eta ^{\\dashv }}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }[ \\tau ]\\big (\\langle \\tau \\rangle ([\\!", "[\\Gamma ]\\!]", "\\mathbb {1})\\big )\\\\[1ex]\\hfill \\overset{[\\![M]\\!", "]}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }[ \\tau ]( \\tau ^{\\prime }\\, [\\![X]\\!", "])\\overset{\\mathsf {delay}^{\\, \\tau }{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow 
} (\\tau + \\tau ^{\\prime })\\, [\\![X]\\!]}{}}{}\\begin{array}{l}[\\!", "[\\Gamma \\vdash \\mathsf {\\color {keywordColor}handle}\\; M \\;\\mathsf {\\color {keywordColor}with}\\; H \\;\\mathsf {\\color {keywordColor}to}\\; x\\;\\mathsf {\\color {keywordColor}in}\\; N : Y \\mathbin {!", "}\\tau + \\tau ^{\\prime }]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~\\\\[1ex]\\hspace{8.5359pt}[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\overset{\\langle \\mathsf {id}, \\langle \\eta ^{\\dashv }, [\\![M]\\!]", "\\rangle \\rangle }{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\times \\Big ([ \\tau ]\\big (\\langle \\tau \\rangle ([\\!", "[\\Gamma ]\\!]", "\\mathbb {1})\\big ) \\times \\tau \\, [\\![X]\\!", "]\\Big )\\\\[1ex]\\hspace{19.91684pt}\\overset{\\!\\!\\mathsf {id}\\times \\mathsf {str}^{}{-\\!\\!\\!-\\!\\!\\!\\longrightarrow \\,\\,}[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\times \\tau \\, \\big (\\langle \\tau \\rangle ([\\!", "[\\Gamma ]\\!]", "\\mathbb {1}) \\times [\\![X]\\!]", "\\big )\\overset{\\mathsf {id}\\times \\tau \\, ([\\![N]\\!", "])}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\!", "[\\Gamma ]\\!]", "\\mathbb {1}\\times \\tau \\, \\big ( \\tau ^{\\prime }\\, [\\![Y]\\!", "]\\big )\\\\[1ex]\\hspace{96.73918pt}\\overset{[\\![H]\\!]", "\\times \\mathsf {id}}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }\\mathcal {H} \\times \\tau \\, \\big ( \\tau ^{\\prime }\\, [\\![Y]\\!", "]\\big )\\overset{\\mathsf {uncurry}({[\\![Y]\\!", "],\\tau ,\\tau ^{\\prime }})}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow } (\\tau + \\tau ^{\\prime })\\, [\\![Y]\\!]\\\\[4ex][\\!", "[\\Gamma \\vdash \\mathsf {\\color {keywordColor}unbox}_{\\tau }\\; V \\;\\mathsf {\\color {keywordColor}as}\\; x \\;\\mathsf {\\color {keywordColor}in}\\; N : Y \\mathbin {!", "}\\tau ^{\\prime }]\\!", "]~\\mathrel {\\overset{\\text{\\tiny def}}{=}}~[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\overset{\\langle \\mathsf {id}, \\mathsf {e}_\\tau \\rangle }{-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\times \\langle \\tau \\rangle \\big ([\\!", "[\\Gamma \\mathbin {-}\\tau ]\\!]", "\\mathbb {1}\\big )\\\\[1ex]\\hfill \\overset{\\!\\!\\mathsf {id}\\times \\langle \\tau \\rangle ([\\![V]\\!", "])}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow \\,\\,}[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\times \\langle \\tau \\rangle \\big ([ \\tau ] [\\![X]\\!", "]\\big )\\overset{\\mathsf {id}\\times \\varepsilon ^{\\dashv }}{-\\!\\!\\!-\\!\\!\\!\\longrightarrow }[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\times [\\![X]\\!]\\overset{[\\![N]\\!", "]}{-\\!\\!\\!\\!\\!\\!\\longrightarrow }\\tau ^{\\prime }\\, [\\![Y]\\!", "]}{}where in the \\mathsf {\\color {keywordColor}handle}\\; M \\;\\mathsf {\\color {keywordColor}with}\\; H \\;\\mathsf {\\color {keywordColor}to}\\; x\\;\\mathsf {\\color {keywordColor}in}\\; N case we write \\mathcal {H} for\\Pi _{\\mathsf {op}\\in \\mathcal {O}} \\Pi _{\\tau ^{\\prime \\prime } \\in \\mathbb {N}} \\big (([\\!", "[A_\\mathsf {op}]\\!]", "\\times [ \\tau _\\mathsf {op} ]([\\!", "[B_\\mathsf {op}]\\!]", "\\Rightarrow \\tau ^{\\prime \\prime }\\, A)) \\Rightarrow (\\tau _\\mathsf {op}+ \\tau ^{\\prime \\prime })\\, A \\big )and [\\![H]\\!]", "for[\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\overset{\\langle \\langle \\mathsf {id}\\rangle _{\\tau ^{\\prime \\prime } \\in 
\\mathbb {N}} \\rangle _{\\mathsf {op}\\in \\mathcal {O}}}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }\\Pi _{\\mathsf {op}\\in \\mathcal {O}} \\Pi _{\\tau ^{\\prime \\prime } \\in \\mathbb {N}} \\Big ([\\!", "[\\Gamma ]\\!", "]\\mathbb {1}\\Big )\\overset{\\Pi _{\\mathsf {op}\\in \\mathcal {O}} \\Pi _{\\tau ^{\\prime \\prime } \\in \\mathbb {N}} \\big ( \\mathsf {curry}([\\!", "[M_\\mathsf {op}]\\!]", "\\,\\circ \\, \\alpha ^{-1}) \\big )}{-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!-\\!\\!\\!\\longrightarrow }\\mathcal {H}and where we recall from {sect:core-calculus} that we write H for {(x \\,.\\, k \\,.\\, M_{\\mathsf {op}})}_{\\mathsf {op}\\in \\mathcal {O}} .\\end{array}}{}}{}}{}\\end{array}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}$" ] ]
2210.07738
[ [ "A real-time GP based MPC for quadcopters with unknown disturbances" ], [ "Abstract Gaussian Process (GP) regressions have proven to be a valuable tool to predict disturbances and model mismatches and incorporate this information into a Model Predictive Control (MPC) prediction.", "Unfortunately, the computational complexity of inference and learning on classical GPs scales cubically, which is intractable for real-time applications.", "Thus GPs are commonly trained offline, which is not suited for learning disturbances as their dynamics may vary with time.", "Recently, state-space formulation of GPs has been introduced, allowing inference and learning with linear computational complexity.", "This paper presents a framework that enables online learning of disturbance dynamics on quadcopters, which can be executed within milliseconds using a state-space formulation of GPs.", "The obtained disturbance predictions are combined with MPC leading to a significant performance increase in simulations with jMAVSim.", "The computational burden is evaluated on a Raspberry Pi 4 B to prove the real-time applicability." ], [ "INTRODUCTION", "In recent years, unmanned aerial vehicles (UAVs) have increased in popularity due to their various possible applications, low cost, and variability.", "A common challenge in many UAV applications includes limitations in flight time due to a constrained battery capacity.", "A tethering system is a possible means to overcome this issue.", "However, such a tether creates an additional load and makes the UAV more susceptible to disturbances due to wind.", "The commonly used quadcopter setup for a UAV has 6 degrees of freedom (DoF).", "With four rotors, it is, therefore, an underactuated system, making it a good test platform for modern control approaches.", "A quadcopter is often controlled by a cascaded PID; the use of MPC for quadcopters is also well established (cf.", "[1]).", "However, most of these approaches cannot cope efficiently with exogenous disturbances due to the wind or a tether.", "There are different approaches to include disturbances in the control loop, such as disturbance rejection and backstepping (cf.", "[2]).", "This work proposes a more general approach by modeling disturbances as latent forces and moments acting on the quadcopter, which can be predicted using advanced machine learning techniques.", "In recent years, Gaussian process (GP) regression models, which are nonparametric kernel-based probabilistic models [3], have demonstrated significant potential for learning control-oriented models, [4], with the ability to incorporate prior knowledge about the data through the choice of a kernel.", "As a Bayesian framework, the strength of a GP regression is its ability to estimate the prediction uncertainty, which can be used for robustifying the control systems.", "In the context of MPC, it has been demonstrated in, e.g., [5], [6], [7], that GPs can approximate disturbances, nonlinearities, and model mismatches, where their information provided by the GPs can be exploited to enhance the MPC performance.", "Moreover, the GPs probabilistic descriptions have been leveraged to propagate uncertainties of the estimates over the MPC prediction horizon, yielding a low conservative mechanism for robustification in terms of chance constraints, which casts the problem in a stochastic MPC framework [5].", "This paper's contributions can be summarized as follows: We prove the feasibility and instruct the implementation of a real-time capable GP-based MPC for 
a quadcopter under disturbances using simulation.", "It is therefore assumed that the disturbances underlie some unknown dynamics.", "The idea is that if one uses GPs to mimic the underlying disturbance dynamics, then present and anticipated future disturbances can be inferred based on recorded past data.", "The predictions can then be used by the MPC to counteract these unknown disturbances.", "Furthermore, a cautious control is achieved by formulating a chance-constrained MPC problem using the uncertainties provided by the GP model, as proposed in [5]." ], [ "BACKGROUND", "This section covers the basics required for this work.", "These include Model Predictive Control (cf.", "[1]), Gaussian processes (cf.", "[3]) as well as the LTI representation of GPs with stationary kernels (cf.", "[8])." ], [ "Model Predictive Control", "Assume a discrete-time nonlinear system given by $x_{k+1} = f(x_k, u_k)$ where $x_k\\in \\mathbb {R}^{n_x}$ is the state and $u_k\\in \\mathbb {R}^{n_u}$ the input of the system at time step $k$ .", "Model Predictive Control builds upon a model $\\hat{f}(x_k, u_k)$ of the system dynamics (REF ).", "Based on this model, the trajectory of the system's state can be predicted in terms of the input and initial state.", "Thus, the goal is to find $u^\\ast (x_{t|t})= && & \\underset{u_0,\\cdots ,u_{N-1}}{\\mathrm {argmin}} l_f(x_{t+N|t}) + \\sum _{k=0}^{N-1} l(x_{t+k|t},u_{t+k|t}) \\nonumber \\\\\\text{s.t.", "}&& & x_{t+k+1|t}=\\hat{f}(x_{t+k|t},u_{t+k|t})\\nonumber \\\\&& & x_{t+k|t} \\in \\mathcal {X},\\forall k=1,\\cdots , N\\\\&& & u_{t+k|t} \\in \\mathcal {U},\\forall k=0,1,\\cdots , N-1 \\nonumber $ where $x_{t+k+1|t}$ is the prediction of $x_{t+k+1}$ at time $t$ , $l(x_{t+k|t},u_{t+k|t})$ a stage cost function penalizing the state and input over the prediction horizon $N$ , $l_f(x_{t+N|t})$ approximates the cost of regulating the state $x_{t+N|t}$ from time step $N$ to infinity and $\\begin{aligned}\\hspace{-5.69054pt} \\mathcal {X} = \\lbrace x\\in \\mathbb {R}^{n_x}\\,|\\,H^xx\\le h^x\\rbrace ,\\ \\mathcal {U} = \\lbrace u\\in \\mathbb {R}^{n_u}\\,|\\,H^uu\\le h^u\\rbrace \\end{aligned}$ are the polytopic constraint sets for the state and input, respectively, where $H^x\\in \\mathbb {R}^{n_{xc}\\times n_x}$ , $h^x\\in \\mathbb {R}^{n_{xc}}$ , $H^u\\in \\mathbb {R}^{n_{uc}\\times n_u}$ , $h^u\\in \\mathbb {R}^{n_{uc}}$ for $n_{xc}$ state and $n_{uc}$ input constraints."
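To make the receding-horizon scheme above concrete, here is a minimal sketch of one solver call for an LTI model $x_{k+1} = Ax_k + Bu_k$ with box constraints (our illustration, not the authors' code; the function name and arguments are placeholders, and we use the CVXPY modeling layer with the OSQP backend):

```python
# Minimal nominal-MPC sketch: solve the finite-horizon QP and return the
# first input of the optimal sequence (receding-horizon principle).
import cvxpy as cp

def mpc_step(A, B, Q, R, P, x0, N, x_min, x_max, u_min, u_max):
    nx, nu = B.shape
    x = cp.Variable((nx, N + 1))
    u = cp.Variable((nu, N))
    cost = cp.quad_form(x[:, N], P)                 # terminal cost l_f
    constr = [x[:, 0] == x0]                        # measured initial state
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],    # model
                   x_min <= x[:, k + 1], x[:, k + 1] <= x_max,  # x in X
                   u_min <= u[:, k], u[:, k] <= u_max]          # u in U
    cp.Problem(cp.Minimize(cost), constr).solve(solver=cp.OSQP)
    return u[:, 0].value
```

Only the first element $u^{\\ast }_{t|t}$ of the optimal sequence is applied; the problem is then re-solved at the next sampling instant with the newly measured state.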
], [ "Gaussian Processes", "Assume an unknown continuous-time function $g(t_i)$ with inputs $t_i$ and noisy outputs $y_i$ where $v_i$ denotes white Gaussian measurement noise such that $y_i = g(t_i) + v_i , \\quad v_i\\sim \\mathcal {N}(0, \\sigma _n^2),$ where $t$ in this work will denote time.", "Gaussian Process Regression is concerned with the task of learning the function $g(t_i)$ such that predictions of the output $y_{n+1}$ can be made based on an input $t_{n+1}$ and a training set $(t_1, y_1),\\dots ,(t_n,y_n)$ .", "Therefore it is assumed that the output data $y_{1:n+1}$ forms a multivariate normal distribution such that $y_{1:n+1}\\sim \\mathcal {N}(\\mu ,K)$ , where $\\mu \\in \\mathbb {R}^{n+1}$ and the entries of $K\\in \\mathbb {R}^{n+1,n+1}$ at any element $K_{i,j}$ are given by the kernel function $k(t_i,t_j)$ ; the sequence $y_{1:n+1}$ here denotes a column vector of length $n+1$ .", "Thus the correlation of any two $y_i$ and $y_j$ is given by the kernel function and the input data $t_i, t_j$ .", "A prediction of the unknown output $y_{n+1}$ is obtained from the multivariate normal distribution by marginalizing over $y_{1:n}$ .", "Since normally distributed predictions $y_{n+1}$ are obtained for arbitrary input values $t_{n+1}$ this procedure yields a distributed function $y_{n+1}=\\hat{g}(t_{n+1})$ called a Gaussian Process.", "The mean $\\mu $ defines a bias and is often assumed zero.", "The kernel which determines the data points' correlation is usually parameterized by a set of hyperparameters.", "These hyperparameters are fitted to the data such that the kernel function yields a most likely description of the data points true correlation.", "More formally, the optimal hyperparameters are determined by $\\max _{\\theta } p(\\theta |t_{1:n}, y_{1:n}),$ where $\\theta $ is a vector of the sought hyperparameters.", "In practice, one may use gradient descent methods to maximise the (log) likelihood of the optimization problem (REF ) and update the GP hyperparameters iteratively as $\\theta _{n+1} = \\theta _n + \\eta \\frac{dp(\\theta |t_{1:n}, y_{1:n})}{d\\theta }$ where $\\eta $ denotes a learning rate.", "The most widely used kernel, the squared exponential kernel $k(t,t^{\\prime }) = \\sigma _m^2\\exp (-(t-t^{\\prime })^2/l^2)$ with hyperparameters $l$ and $\\sigma _m^2$ , assumes a high correlation when the data points $t$ and $t^\\prime $ are close, thus, the corresponding output data points are likely to be similar.", "Therefore, it is often used for learning smooth functions.", "By varying the width of the squared exponential kernel, i.e., $l$ , one can adjust the input range over which the output data is assumed to be similar.", "The inference, or more precisely, marginalization in GPs, can be done in closed form as well as the calculation of the gradient in (REF ).", "However, both require the inversion of the covariance matrix $K$ , which yields a computational complexity of $\\mathcal {O}(n^3)$ .", "Thus, with increasing data, GPs become intractable for real-time applications.", "A solution to this problem was proposed in [8] by reformulating various GPs with stationary kernels $k(t_j,t_i)=k(\\tau )$ , where $\\tau =t_j-t_i$ , as LTI-systems on which inference and learning are carried out using Kalman-Filtering and Rauch-Tung-Striebel-Smoothing with a computational complexity of $\\mathcal {O}(n)$ .", "The transformation is done as follows: According to the Wiener–Khinchin theorem the spectral density $S(i\\omega )$ of $\\hat{g}(t)$ can be obtained by the 
Fourier transform of the stationary kernel function: $S(i\\omega )=\\mathcal {F}\\lbrace k(\\tau )\\rbrace $ .", "Assuming that $S(i\\omega )$ has (or is approximated by) a rational form, a subsequent spectral factorization $S(i\\omega ) = H(i\\omega )\\,q\\,H(-i\\omega )$ yields a system with the transfer function $H(i\\omega )$ driven by white Gaussian noise with spectral density $q$ , whose output has the same spectral density as $\\hat{g}(t)$ .", "Thus, the system driven by the noise can be written as a continuous-time LTI system with a discrete output $\\begin{aligned}\\dot{z}(t) &= Fz(t) + Lw(t), \\quad \\hat{y}(t_i) = Hz(t_i) + \\hat{v}(t_i)\\end{aligned}$ where $z\\in \\mathbb {R}^{n_z}$ denotes the state vector of the GP state-space model, $w(t)\\sim \\mathcal {N}(0,q)$ , $\\hat{v}(t_i)\\sim \\mathcal {N}(0,\\hat{\\sigma }^2_n)$ represents measurement noise, and the matrices $F,L,H$ are of appropriate dimension.", "The system yields a prediction $\\hat{y}(t_i)$ for arbitrary inputs $t_i$ based on the state $z(t_i)$ .", "For the inference, the system is discretized at a sampling distance $\\Delta T$ .", "Given a record of past outputs $y_{1:n}$ , the state $z_{n+1}\\sim \\mathcal {N}(m^z_{n+1},\\Sigma ^z_{n+1})$ can be inferred by Kalman filtering, which yields a prediction of the output $y_{n+1} \\sim \\mathcal {N}(Hm^z_{n+1},H\\Sigma ^z_{n+1}H^\\top + \\hat{\\sigma }^2_n)$ .", "For the learning, the gradient in (REF ) can be obtained efficiently by refactoring results of the inference [8].", "Finally, it is worth mentioning that $\\mathcal {F}\\lbrace k(\\tau )\\rbrace $ does not always yield the proposed rational form for arbitrary kernels, e.g., the squared exponential kernel.", "In such cases, it can be approximated to a desired order using a Taylor series or a Padé approximation, see [4] for more details."
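As an illustration of this linear-time machinery, the sketch below (our own, not the paper's implementation) runs state-space GP inference with a Matérn-3/2 kernel, whose LTI representation is exact, in place of the Taylor-approximated squared exponential kernel the paper uses later; all names are ours:

```python
# State-space GP regression in O(n): Kalman filtering on the exact LTI
# representation of a Matern-3/2 kernel, then m-step-ahead extrapolation.
import numpy as np
from scipy.linalg import expm

def matern32_ss(sigma2, ell):
    lam = np.sqrt(3.0) / ell
    F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])   # drift matrix
    H = np.array([[1.0, 0.0]])                          # output matrix
    Pinf = np.diag([sigma2, lam**2 * sigma2])           # stationary covariance
    return F, H, Pinf

def ss_gp_predict(ys, dt, sigma2, ell, r, m_steps):
    F, H, Pinf = matern32_ss(sigma2, ell)
    Ad = expm(F * dt)                                   # exact discretization
    Qd = Pinf - Ad @ Pinf @ Ad.T                        # discrete process noise
    m, P = np.zeros((2, 1)), Pinf.copy()
    for y in ys:                                        # O(1) per sample
        m, P = Ad @ m, Ad @ P @ Ad.T + Qd               # predict
        S = (H @ P @ H.T).item() + r                    # innovation variance
        K = P @ H.T / S                                 # Kalman gain
        m, P = m + K * (y - (H @ m).item()), P - K @ (S * K.T)
    means, variances = [], []
    for _ in range(m_steps):                            # extrapolate forward
        m, P = Ad @ m, Ad @ P @ Ad.T + Qd
        means.append((H @ m).item())
        variances.append((H @ P @ H.T).item() + r)
    return np.array(means), np.array(variances)
```

Filtering over $n$ samples costs $\\mathcal {O}(n)$ , and the forward extrapolation at the end produces exactly the kind of multi-step disturbance predictions (means and variances) that the MPC below consumes.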
], [ "Modeling", "The quadcopter position is denoted as $p\\in \\mathbb {R}^3$ and velocity $\\dot{p}\\in \\mathbb {R}^3$ in an inertial frame which is spanned by unit vectors $e_{\\hat{x}}, e_{\\hat{y}}, e_{\\hat{z}}\\in \\mathbb {R}^3$ .", "Furthermore, the attitude between the inertial frame and the body frame, which is aligned with the quadcopter axes along the unit vectors $e_{x}, e_{y}, e_{z}\\in \\mathbb {R}^3$ , is given by the Euler angles $\\Phi \\in \\mathbb {R}^3$ and attitude rate $\\dot{\\Phi }\\in \\mathbb {R}^3$ in the roll, pitch and yaw directions.", "The matrix $R_{rot}\\in \\mathbb {R}^{3\\times 3}$ denotes the rotation matrix from the body frame to the inertial frame.", "The quadcopter system is modeled as a rigid body with three rotational and three translational DoF and dynamics $\\ddot{p} = F/ m,$ $M= J\\dot{\\omega } + \\omega \\times J\\omega $ where $m\\in \\mathbb {R}$ is the quadcopter mass, $g={9.81}{}$ , $\\omega \\in \\mathbb {R}^3$ denotes the angular rate of the quadcopter in the body frame, $F\\in \\mathbb {R}^3$ the sum of all forces and $M\\in \\mathbb {R}^3$ the sum of all moments which act on the quadcopter body.", "The force $F$ is a summed effect of gravity, the propeller actuation, and aerodynamics, which, for simplicity, are assumed to scale linearly with the quadcopter velocity such that $F = R_{rot} \\sum _i F_i e_{z} - mge_{\\hat{z}} - k_D\\dot{p},$ where $F_i\\in \\mathbb {R}$ denotes the thrust generated by a single propeller $i$ and $k_D\\in \\mathbb {R}$ the drag force coefficient.", "The moment $M=\\begin{pmatrix}\\tau _x & \\tau _y & \\tau _z\\end{pmatrix}\\in \\mathbb {R}^3$ also stems from the actuation of the quadcopter's propellers and allows the quadcopter to roll, pitch and yaw.", "Moments caused by blade flapping (cf.", "[9]) are neglected in this model for simplicity.", "In order to derive a linear model for the quadcopter a state $x = \\begin{pmatrix} p^\\top & \\dot{p}^\\top & \\Phi ^\\top & \\dot{\\Phi }^\\top \\end{pmatrix}^\\top \\in \\mathbb {R}^{12}$ and normalized inputs $u=\\begin{pmatrix} \\frac{T}{T_{max}} & \\frac{\\tau _x}{\\tau _{x,max}} & \\frac{\\tau _x}{\\tau _{y,max}} & \\frac{\\tau _x}{\\tau _{z,max}} \\end{pmatrix}^\\top \\in \\mathbb {R}^4$ are defined, where $T[{}{}]$ is the thrust, $\\tau _x[{}{}],\\tau _y[{}{}],\\tau _z[{}{}]$ the longitudinal, lateral and vertical torques and $T_{max},\\tau _{x,max},\\tau _{y,max},\\tau _{z,max}$ the respective maximum values, such that $\\frac{T}{T_{max}}\\in [0,1],\\frac{\\tau _x}{\\tau _{x,max}}\\in [-1,1],\\frac{\\tau _y}{\\tau _{y,max}}\\in [-1,1],\\frac{\\tau _z}{\\tau _{z,max}}\\in [-1,1]$ .", "The state and input dimensionality are denoted with $n_x=12$ and $n_u=4$ , respectively.", "Linearizing the nonlinear dynamics around the hovering condition $\\dot{x}_s =0$ and $u_s = \\begin{pmatrix} \\frac{mg}{T_{max}} & 0 & 0 & 0 \\end{pmatrix}^{\\top }$ yields the linear model $\\dot{x} &\\approx \\begin{pmatrix}0 & I & 0 & 0\\\\0 & -\\frac{k_D}{m}I & A_1 & 0 \\\\0 & 0 & 0 & I \\\\0 & 0 & 0 & 0\\end{pmatrix}x+ \\begin{pmatrix}0 \\\\B_1 \\\\0 \\\\B_2\\end{pmatrix}(u-u_s),\\multicolumn{2}{l}{\\text{where}}\\\\A_1 &= \\begin{pmatrix}0 & g & 0 \\\\-g & 0 & 0 \\\\0 & 0 & 0\\end{pmatrix}, \\ B_1 = \\begin{pmatrix}0 & 0 & 0 & 0\\\\0 & 0 & 0 & 0\\\\\\frac{1}{m}T_{max} & 0 & 0 & 0\\end{pmatrix}, \\nonumber \\\\B_2 &= \\begin{pmatrix}0 & \\frac{1}{I_{xx}}\\tau _{x,max} & 0 & 0\\\\0 & 0 & \\frac{1}{I_{yy}}\\tau _{y,max} & 0\\\\0 & 0 & 0 & \\frac{1}{I_{zz}}\\tau _{z,max} \\nonumber 
\\end{pmatrix}.$ The wind and the tether will act as latent forces and moments on the quadcopter body.", "Assuming the tether is attached to the center of mass, we will only focus on latent forces.", "However, the proposed procedure can easily be extended to include latent moments." ], [ "Controller Design", "GPs are used here to predict the latent forces caused by the disturbances over the prediction horizon of the MPC.", "This information is then used to improve the MPC performance and guarantee constraint satisfaction up to probability levels $p_x$ and $p_u$ for the state and input, respectively.", "The control policy is thus to find an input sequence $u_{1:N}$ for the stochastic optimization problem $\\min _{u_0,\\cdots ,u_{N-1}} && & E\\left[\\Vert x_{N|t}\\Vert _P + \\sum _{k=0}^{N-1} \\Vert x_{t+k|t}\\Vert _Q + \\Vert u_{t+k|t}\\Vert _R\\right] \\nonumber \\\\\\text{s.t.", "}&& & x_{t+k+1|t}=\\hat{f}(x_{t+k|t},u_{t+k|t},d_{t+k|t}) \\nonumber \\\\&& & {\\rm Pr}(x_{t+k|t} \\in \\mathcal {X})\\le p_x,\\forall k= 1,\\dots ,N\\\\&& & {\\rm Pr}(u_{t+k|t} \\in \\mathcal {U})\\le p_u,\\forall k= 0,\\dots ,N-1 \\nonumber $ where $E$ denotes the expected value, $Q=Q^\\top \\succeq 0$ , $P=P^\\top \\succeq 0$ and $R=R^\\top \\succ 0$ , the norm $\\Vert x\\Vert _A = x^\\top Ax$ denotes the matrix norm and $d_{t+k|t}$ represents a prediction at time $t$ of the disturbance at time step $t+k$ .", "While $Q$ and $R$ are used for tuning the MPC behavior, the matrix $P$ weights the terminal cost.", "Note that computing the above problem is intractable, therefore, simplifications will be introduced." ], [ "Disturbance extraction", "The latent forces in all DoFs produce accelerations in the different directions of the quadcopter body.", "In order to train GPs to learn the dynamics of these accelerations, it is first necessary to extract and record data of past accelerations due to the disturbances.", "This can be performed by comparing measured velocities $\\dot{p}_{t|t}$ of the quadcopter with simulated velocities $\\dot{\\hat{p}}_{t|t}$ based on the proposed nonlinear quadcopter dynamics.", "Therefore, the accelerations in the different directions can be estimated by $\\ddot{p}_{d,i,t-1|t} = \\frac{\\dot{p}_{i,t|t}-\\dot{\\hat{p}}_{i,t|t}}{\\Delta T}$ where $i\\in \\lbrace \\hat{x},\\hat{y},\\hat{z}\\rbrace $ denotes the direction of the acceleration and $\\Delta T$ the discretization length of the state prediction.", "The recorded accelerations $\\ddot{p}_{d,i,0|t},\\dots ,\\ddot{p}_{d,i,t-1|t}$ in all DoFs are assumed to be uncorrelated since they stem from unknown disturbance dynamics and since the translational dynamics of the quadcopter can be separated into the single DoFs.", "Thus, the inference and learning in each DoF can be handled independently by a single one-dimensional GP such that three GPs are assigned to predict the accelerations associated with the disturbances.", "The three GPs to learn and predict $\\ddot{p}_{d,i}(t)=g_i(t)$ with $i\\in \\lbrace \\hat{x},\\hat{y},\\hat{z}\\rbrace $ will be used in their continuous-time state-space representation with the deterministic formulation $\\begin{aligned}\\dot{z}_i(t) &= F_{i}z_i(t), \\quad \\hat{y}_i(t) &= H_{i}z_i(t).\\end{aligned}$" ], [ "Predictor", "To define the predictor of the MPC, we combine the model of the quadcopter system and the deterministic state-space models of the GPs (REF ), which results in the augment model $\\begin{pmatrix}\\dot{x} \\\\\\dot{z}_{\\hat{x}}\\\\\\dot{z}_{\\hat{y}}\\\\\\dot{z}_{\\hat{z}}\\end{pmatrix} &= 
\\begin{pmatrix}A & C_{\\hat{x}}H_{\\hat{x}} & C_{\\hat{y}}H_{\\hat{y}} & C_{\\hat{z}}H_{\\hat{z}} \\\\0 & F_{\\hat{x}} & 0 & 0 \\\\0 & 0 & F_{\\hat{y}} & 0 \\\\0 & 0 & 0 & F_{\\hat{z}}\\end{pmatrix} \\begin{pmatrix}x \\\\z_{\\hat{x}}\\\\z_{\\hat{y}}\\\\z_{\\hat{z}}\\end{pmatrix}+ \\begin{pmatrix}B \\\\ 0 \\\\ 0 \\\\ 0\\end{pmatrix}u$ where $A, B$ are obtained from the quadcopter model in (REF ) and the matrices $C_{\\hat{x}}, C_{\\hat{y}}, C_{\\hat{z}}\\in \\mathbb {R}^{n_x}$ are column vectors mapping the accelerations predicted by the three GPs into the respective dimension of the quadcopter's state.", "Then, discretizing the augmented model above yields the predictor model of the MPC, which directly incorporates the GPs' disturbance predictions during state propagation.", "In this work, we adopt the exact discretization approach [6].", "Note that the discrete-time form of (REF ) represents $\\hat{f}(x_{t+k|t},u_{t+k|t},d_{t+k|t})$ in (REF )." ], [ "Uncertainty propagation and constraints tightening", "To incorporate the uncertainty of the GP predictions in the MPC formulation, we make use of some formulations presented in [5].", "We consider the discrete-time representation of (REF ) as $z_{i,t+k|t} &=\\bar{F}_iz_{i,t+k-1|t} + \\bar{L}_iw_{i,t+k-1|t} \\\\\\hat{y}_{i,t+k|t} &=\\bar{H}_iz_{i,t+k|t} + \\hat{v}_{i,t+k|t},$ where the variables $\\bar{F}_i$ , $\\bar{H}_i$ , $\\bar{L}_i$ , $w_{i,t+k|t}\\sim \\mathcal {N}(0,Q_i)$ , $\\hat{v}_{i,t+k|t}\\sim \\mathcal {N}(0,\\hat{\\sigma }_{n,i}^2)$ are the discrete-time representatives of the continuous-time variables in (REF ) and $i\\in \\lbrace \\hat{x},\\hat{y},\\hat{z}\\rbrace $ .", "Therefore, the mean value and the covariance of the state of the GPs can be propagated as follows: $\\begin{aligned}\\bar{z}_{i,t+k|t} &= \\bar{F}_i\\bar{z}_{i,t+k-1|t} \\\\\\Sigma ^{z}_{i,t+k|t}&=\\bar{F}_i\\Sigma ^{z}_{i,t+k-1|t}\\bar{F}^\\top _i + \\bar{L}_iQ_i\\bar{L}^\\top _i\\end{aligned}$ and for the output of the GPs, they are $\\begin{aligned}\\bar{y}_{i,t+k|t} &= \\bar{H}_i\\bar{z}_{i,t+k|t} \\\\\\Sigma ^d_{i,t+k|t}&=\\bar{H}_i\\Sigma ^{z}_{i,t+k|t}\\bar{H}^\\top _i + \\hat{\\sigma }_{n,i}^2,\\end{aligned}$ with $i\\in \\lbrace \\hat{x},\\hat{y},\\hat{z}\\rbrace $ .", "The normal distribution of the GP predictions renders the quadcopter state stochastic such that $x_{t+k|t}\\sim \\mathcal {N}(\\bar{x}_{t+k|t}, \\Sigma ^{x}_{t+k|t})$ , where the quadcopter state evolves according to the nominal system in (REF ) plus the additive disturbances $\\Sigma ^d_{i,t+k|t}$ .", "Therefore, using (REF ) and (REF ), the covariance of the quadcopter state propagates as $\\Sigma ^x_{t+k+1|t} = A\\Sigma ^x_{t+k|t}A^\\top + \\sum _{i\\in \\lbrace \\hat{x},\\hat{y},\\hat{z}\\rbrace }C_i\\Sigma ^d_{i,t+k|t}C_i^\\top .$ Since the quadcopter system is unstable and the MPC only stabilizes the nominal system based on its deterministic predictor model, the evolution of the covariance of the quadcopter state may diverge over the prediction horizon according to (REF ).", "To account for this effect, an ancillary LQR feedback gain $K_{\\infty }$ is used.", "The input then reads as $u_{t+k|t} = \\bar{u}_{t+k|t} + K_{\\infty }(\\bar{x}_{t+k|t} - x_{t+k|t}),$ where $\\bar{u}_{t+k|t}$ is computed by the MPC.", "Then, the growth of the uncertainty will be restricted as $\\Sigma ^x_{t+k+1|t} = (A-BK_{\\infty })\\Sigma ^x_{t+k|t}(A-BK_{\\infty })^\\top + \\sum _{i\\in \\lbrace \\hat{x},\\hat{y},\\hat{z}\\rbrace } C_i\\Sigma ^d_{i,t+k|t}C_i^\\top .$ The feedback of the stochastic state in (REF ) renders the control input stochastic with the distribution $u_{t+k|t} \\sim \\mathcal {N}(\\bar{u}_{t+k|t}, K_{\\infty }\\Sigma ^x_{t+k|t}K_{\\infty }^\\top ).$ The stochastic nature of the state and input at any time step can be described by defining uncertainty regions $S^x_{t+k|t}$ and $S^u_{t+k|t}$ , respectively, around their expected values within which their uncertain values lie up to some probability level.", "These regions can then be subtracted from the original constraint sets $\\mathcal {X}$ and $\\mathcal {U}$ to obtain tightened sets $\\tilde{\\mathcal {X}}_{t+k|t} =\\mathcal {X} \\ominus S^x_{t+k|t}$ and $\\tilde{\\mathcal {U}}_{t+k|t}=\\mathcal {U} \\ominus S^u_{t+k|t}$ , where $\\ominus $ denotes the Pontryagin set difference, which are used to define guarantees on the satisfaction of the chance constraints in (REF ) [5].", "If the state/input lies within these tightened sets, then, up to the probability level, the quadcopter state/input will not exceed the original constraints despite the unknown disturbances.", "For the applied half-space constraints (REF ), the tightened constraint sets can be computed as [5] $\\begin{aligned}\\tilde{\\mathcal {X}}_{t+k|t} &= \\left\\lbrace x_{t+k|t}\\,|\\, H^xx_{t+k|t} \\le h^x - |H^x|\\Phi ^{-1}\\left( \\bar{p}\\right)\\sqrt{{\\rm diag}(\\Sigma ^x_{t+k|t})} \\right\\rbrace \\\\\\tilde{\\mathcal {U}}_{t+k|t} &= \\left\\lbrace u_{t+k|t}\\,|\\, H^uu_{t+k|t} \\le h^u - |H^u|\\Phi ^{-1}\\left( \\bar{p}\\right)\\sqrt{{\\rm diag}(\\Sigma ^u_{t+k|t})} \\right\\rbrace \\end{aligned}$ where ${\\rm diag}$ indicates a vector of the matrix diagonal elements, $\\Phi ^{-1}$ is the quantile function of a standard normal distribution, $\\bar{p}=1-\\left(\\frac{1}{n_x}-\\frac{p_x+1}{2n_x}\\right)$ where $0\\le p_x\\le 1$ is some probability level and $\\Sigma ^u_{t+k|t}=K_{\\infty }\\Sigma ^x_{t+k|t}K_{\\infty }^\\top $ , see [5] for more details.", "Finally, the MPC optimization problem is given as follows: $\\min _{\\bar{u}_0,\\cdots ,\\bar{u}_{N-1}} && & \\Vert \\bar{x}_{t+N|t}\\Vert _P + \\sum _{k=0}^{N-1} \\Vert \\bar{x}_{t+k|t}\\Vert _Q + \\Vert \\bar{u}_{t+k|t}\\Vert _R \\nonumber \\\\\\text{s.t.", "}&& & \\bar{x}_{t+k+1|t}=\\tilde{A}\\bar{x}_{t+k|t} + \\tilde{B}\\bar{u}_{t+k|t} \\nonumber \\\\&& & \\bar{x}_{t+k|t} \\in \\tilde{\\mathcal {X}}_{t+k|t},\\forall k= 1,\\dots ,N\\\\&& & \\bar{u}_{t+k|t} \\in \\tilde{\\mathcal {U}}_{t+k|t},\\forall k= 0,\\dots ,N-1 \\nonumber $ where the predictor model with $\\tilde{A}$ , $\\tilde{B}$ is obtained from discretizing (REF )."
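The covariance recursion and the constraint tightening above reduce to a few lines of linear algebra; the following sketch (our illustration; the names and data layout are assumptions, with one scalar disturbance variance per axis and time step) computes the tightened right-hand sides of the state constraints over the horizon:

```python
# Sketch of (REF): closed-loop covariance propagation and half-space
# back-off terms |H^x| * Phi^{-1}(p_bar) * sqrt(diag(Sigma^x)).
import numpy as np
from scipy.stats import norm

def tightened_state_bounds(A, B, K_inf, C_list, var_d, Hx, hx, p_bar, N):
    nx = A.shape[0]
    Acl = A - B @ K_inf                    # ancillary LQR closed loop
    Sigma = np.zeros((nx, nx))             # measured state: zero covariance
    h_tilde = []
    for k in range(N):
        Sigma = Acl @ Sigma @ Acl.T + sum(
            v[k] * np.outer(C, C) for C, v in zip(C_list, var_d))
        back_off = norm.ppf(p_bar) * (np.abs(Hx) @ np.sqrt(np.diag(Sigma)))
        h_tilde.append(hx - back_off)      # tightened RHS for stage k+1
    return h_tilde
```

The input tightening is analogous, with $\\Sigma ^u = K_{\\infty }\\Sigma ^x K_{\\infty }^{\\top }$ in place of $\\Sigma ^x$ .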
], [ "Algorithm", "The proposed GP-based MPC for quadcopters is finally implemented as follows and sketched in Algorithm REF .", "The quadcopter first flies based on a nominal MPC while the disturbance data are collected for the GPs as described in section REF .", "The MPC is updated at intervals of $T={100}{}$ ; between the MPC updates, the GP hyperparameters are trained based on the available data as described in section REF .", "At the end of each time interval, it is not guaranteed that the GP hyperparameters will converge to local optima.", "However, assuming that the disturbance dynamics do not change arbitrarily fast, they likely converge after certain number of iterations as was demonstrated in [10].", "Once the GPs experienced \"sufficient\" training, the algorithm switches from the nominal MPC to the GP-based MPC, thus including the respective disturbance predictions.", "The term \"sufficient\" is still an open research question.", "In the current implementation, the switching simply happens after the GP hyperparameters have been updated at least 50 times using (REF ).", "In our experience, the hyperparameters typically converged in simulation flights at this point.", "KwInputInput Input: $P$ ,$Q$ ,$R$ ,$\\mathcal {X}$ ,$\\mathcal {U}$ ,$p_x$ ,$p_u$ ,$N$ ,$\\Delta T$ $t=0$ Perform a takeoff true $x_{t|t} \\leftarrow $ Telemetry GP hyperparameters updated 50 times $\\tilde{\\mathcal {X}}_{t+1:N|t}, \\tilde{\\mathcal {U}}_{t+1:N|t} \\leftarrow $ eq.", "REF $u^*_{t:(t+N)|t} \\leftarrow $ eq.", "REF $u^*_{t:(t+N)|t} \\leftarrow $ eq.", "REF Quadcopter $\\leftarrow u^*_{t|t} + \\begin{pmatrix}u_{hover} & 0 & 0 & 0\\end{pmatrix}^{\\top }$ $t>0$ $\\hat{x}_{t|t-1} \\leftarrow \\hat{f}(x_{t-1}, u_{t-1})$ $d_{t-1|t} \\leftarrow \\frac{x_{t} - \\hat{x}_{t|t-1}}{\\Delta T}$ Within MPC update interval $\\Delta T$ Update GPs with collected $d_{0:t}$ $t\\leftarrow t+1$ GP-based MPC with online disturbance prediction." 
], [ "IMPLEMENTATION AND RESULTS", "In this section, we discuss the implementation of Algorithm REF and we demonstrate the results.", "The algorithm has been implemented in C++ on a Raspberry Pi 4 B in order to evaluate the computation times.", "The same implementation has then been tested in simulation as an offboard controller for the PX4 autopilot using the jMAVSim simulation environment.", "The used quadcopter parameters are depicted in table REF , the thrust-input necessary during hovering $u_{hover}=0.3$ and the MPC parameters are given in Table REF .", "Table: MPC ParametersThe LQR in (REF ) uses the same matrices $Q, R$ for its cost function as the MPC.", "The MPC's predictor model has been discretized at ${10}{}$ while it has been recomputed with a rate of ${30}{}$ , which rendered a better flight performance during the simulation.", "The optimization problem has been solved using the C++-OSQP library [11] and the osqp-eigen interfacehttps://github.com/robotology/osqp-eigen, http://eigen.tuxfamily.org.", "In order to connect to the PX4 Autopilot firmware and send offboard commands from the C++ script, the MAVSDK-libraryhttps://mavsdk.mavlink.io/main/en/index.html has been used.", "The squared exponential kernel has been considered for all the GPs, which are computed as LTI systems.", "Therefore, Fourier-Transform of the squared exponential function has been brought into rational form via a sixth-order Taylor approximation as proposed in [8].", "The GP hyperparameters are continuously updated as described in section REF based on the last 50 recorded data samples and a learning rate of $\\eta = \\begin{pmatrix} 0.03&0.01&0.005 \\end{pmatrix}^\\top $ , see (REF ).", "Next, the implementation has been tested on a potent host machine in order to control a quadcopter within the jMAVSim simulation environment [12] while the respective computation times that a Raspberry Pi 4 B would have required to compute the algorithm have been imitated via sleep commands.", "The goal was to track the reference trajectory depicted in Fig.", "REF .", "The resulting trajectories of the simulated flights under heavy wind conditions using a nominal and the GP-based MPC are depicted in Fig.", "REF .", "The jMavSim environment models wind by filtering white gaussian noise (variance set to set to ${24}{}$ ) and adding a mean wind speed vector (length set to to ${22}{}$ ) on top.", "However, various wind speed settings have been tested with similar results.", "Table: Parameters of the quadcopter.Figure: Trajectory of a quadcopter using a redconventional MPC (red, dashed) and blueGP-based MPC (blue, solid) under strong wind disturbances in jMAVSim.", "The black, dotted line represents the reference trajectory.The takeoff is performed by an automated sequence of the PX4 autopilot.", "The nominal and GP-based MPC take over midflight and try to track the depicted trajectory.", "Finally, the quadcopter lands, again using an automated sequence of the PX4 autopilot.", "Notably, the GP-based MPC performs significantly better in this task with a root-mean-square (RMS) euclidean distance of ${0.82}{}$ to the reference trajectory compared to the nominal MPC with an RMS euclidean distance of ${2.83}{}$ to the reference trajectory.", "Furthermore, it can be seen that the GPs tend to introduce oscillations, especially in the $z$ -coordinate.", "Interestingly, oscillations occur in the z-coordinate when the quadcopter changes its y-position.", "This is likely the cause of a mismatch between the nonlinear model and the true 
system dynamics, which is superimposed on the disturbance estimates and may lead to undesired feedback dynamics between the GPs and the MPC.", "This indicates that the proposed method relies on precise model predictions.", "Figure: Normalized inputs during the manoeuvre in Fig.", "REF .", "Left plots: conventional MPC; right plots: GP-based MPC.", "These oscillations, as well as a more aggressive behaviour of the GP-based MPC compared to a conventional MPC, can also be seen when analyzing the computed input signals.", "During the flight, the GP-based MPC computed oscillating actuation signals with a higher amplitude (see Fig.", "REF ).", "The C++ implementation in this work proved the feasibility of the proposed control approach for real-time applications: on a Raspberry Pi 4 Model B, computing the nominal MPC took about ${10}\\,\\mathrm {ms}$ while the computation of the MPC with GPs took about ${20}\\,\\mathrm {ms}$ .", "Updating the hyperparameters of a single GP with a batch size of 50 input-output data pairs required up to ${2}\\,\\mathrm {ms}$ ." ], [ "CONCLUSIONS", "In this paper, the applicability of a GP-based MPC for predicting disturbances acting on a quadcopter and reducing their effects has been demonstrated in simulation, and a practical implementation framework has been presented.", "It has been shown that this approach yields superior performance over a conventional MPC when disturbances affect the system and that the algorithm scheme is computable in real time.", "In order to verify the potential that this approach shows in simulation and to prove its robustness, the algorithm will be evaluated in indoor flight tests with a focus on reproducibility.", "As future research directions: the computational complexity of the GP-based MPC could be further reduced by applying Infinite Horizon Gaussian Processes [10], which may also remove artefacts introduced by the truncation of the batch data.", "An interesting research issue is to define an appropriate classifier that allows determining when one should switch from the nominal to the GP-based MPC.", "Moreover, to deal with model mismatch effects, the proposed GP-based MPC could be combined with a preceding learning of such mismatches as shown in [13].", "A more complex quadcopter model for the MPC and disturbance estimation will be used in future investigations." ], [ "ACKNOWLEDGMENT", "Finally, we would like to thank David Pysik, Tavia Plattenteich, and Ievgen Zhavzharov for their valuable input and support with the PX4 autopilot." ] ]
2210.07836
[ [ "DroneARchery: Human-Drone Interaction through Augmented Reality with\n Haptic Feedback and Multi-UAV Collision Avoidance Driven by Deep\n Reinforcement Learning" ], [ "Abstract We propose a novel concept of augmented reality (AR) human-drone interaction driven by RL-based swarm behavior to achieve intuitive and immersive control of a swarm formation of unmanned aerial vehicles.", "The DroneARchery system developed by us allows the user to quickly deploy a swarm of drones, generating flight paths simulating archery.", "The haptic interface LinkGlide delivers a tactile stimulus of the bowstring tension to the forearm to increase the precision of aiming.", "The swarm of released drones dynamically avoids collisions between each other, the drone following the user, and external obstacles with behavior control based on deep reinforcement learning.", "The developed concept was tested in the scenario with a human, where the user shoots from a virtual bow with a real drone to hit the target.", "The human operator observes the ballistic trajectory of the drone in an AR and achieves a realistic and highly recognizable experience of the bowstring tension through the haptic display.", "The experimental results revealed that the system improves trajectory prediction accuracy by 63.3% through applying AR technology and conveying haptic feedback of pulling force.", "DroneARchery users highlighted the naturalness (4.3 out of 5 point Likert scale) and increased confidence (4.7 out of 5) when controlling the drone.", "We have designed the tactile patterns to present four sliding distances (tension) and three applied force levels (stiffness) of the haptic display.", "Users demonstrated the ability to distinguish tactile patterns produced by the haptic display representing varying bowstring tension(average recognition rate is of 72.8%) and stiffness (average recognition rate is of 94.2%).", "The novelty of the research is the development of an AR-based approach for drone control that does not require special skills and training from the operator." 
], [ "Haptic Displays in Human-Robot Interaction Scenarios", "The need for an intuitive and efficient human-robot interface has led to several applications based on haptic interfaces, allowing to inform the operator of the robot state and its surroundings through tactile cues [13], [5].", "Many state-of-the-art approaches to control a single drone or a swarm rely on a wearable tactile interface, such as an intuitive system by Robeiro et al.", "[24] that is composed of two small and lightweight wearable displays to control UAV for racing competitions.", "The wearable displays preserving the high mobility of the human operator were proposed by Byun et al.", "[2], where epidermal tactile sensor arrays were applied to achieve the direct teleoperation of the swarm by human hand.", "Previously developed systems have achieved high precision in the haptic rendering of contact forces, however, they do not cover many scenarios involving UAV control.", "In this research, we propose an HDI concept, in which the haptic display is rendering the string tension of the bow for drone ballistic trajectory generation.", "In contrast to the previously suggested tactile interfaces, this research explores the capabilities of a multimodal display to intuitively and accurately convey a sense of elastic tension.", "The effectiveness of pressure and displacement feedback on the user's perception of tension in an elastic body is also investigated in this work (Sec.", "REF ).", "In recent research, there is an increasing interest in delivering haptic feedback to the user's forearm to decrease the restriction of their motion.", "Moriyama et al.", "[21] proposed a multi-contact interface delivering a haptic interaction experience to the forearm by two inverted five-bar linkage mechanisms inspired by the LinkTouch design introduced by Tsetserukou et al.", "[26].", "The combination of two contact points allowed the researchers to render shear and normal forces with a high recognition rate.", "The application of this device in archery games, however, is estimated to be less effective as the direction of linkage motion does not correlate with a sling or bow tension.", "Another approach is explored by Shim et al.", "[25] with the QuadStretch wearable display, where skin stretching of the forearm is achieved by the relative displacement of two bracelets.", "One of the applications suggested for QuadStretch includes a VR archery game, where the bow tension is transmitted as the magnitude of stretching force.", "In DroneARchery, we relied on a combination of linear displacement of a single contact point and normal forces applied by the LinkTouch display placed collinearly to the bow string displacement.", "Our hypothesis was that the linear displacement may provide highly distinguishable feedback about the bow tension, which was later supported by the experimental results." 
], [ "Virtual and Augmented Reality Interfaces", "VR and AR interfaces have been utilized extensively for immersive human-robot interaction and game scenarios.", "Ibrahimov et al.", "[9] developed a VR-based teleoperation system DronePick that performs delivery of objects by a human-controlled quadrocopter.", "The virtual interface of DronePick supports the selection of target objects and real-time control over the drone position.", "VR interfaces are capable of solving complex problems of operator presence in many work processes.", "For example, Kalinov et al.", "[11] proposed an interface to supervise an autonomous robot remotely from a secluded workstation in a warehouse.", "With the high improvement in AR hardware, several projects have investigated the ability of AR to improve the control over a swarm of drones [16], [19].", "A multi-channel robotic system for human-swarm interaction in augmented reality is presented by Chen et al.", "[3].", "While drone in-flight control through AR is being extensively explored, the deployment and docking of large-scale swarms remain to be addressed." ], [ "Swarm Collision Avoidance", "Obstacle avoidance is a fundamental part of the operation of unmanned vehicles.", "Several probabilistic algorithms are widely implemented in multi-agent systems, e.g., Rapidly-exploring Random Tree [15] algorithm for calculating an obstacle-free path for the swarm of several UAVs.", "DRL improves problem-solving with non-linear function approximation [23].", "The DRL approach in the shooting was explored by Nikonova and Gemrot [22], in which the trained agent made the most successful shot that scores the most points, while we oppositely train the agents to avoid obstacles by dodging the shot.", "This paper suggests the DRL algorithm for collision avoidance in the drone swarm during the “arrow\" drone deployment, when the high number of “target\" drones leads to higher risks of collision.", "Therefore, we implemented the DRL algorithm to mitigate this problem and evaluated the system performance.", "The contribution to the DRL scenarios is that the developed system switches between the real and virtual swarms of drones depending on the user's task.", "Thus, users in AR can release real drones one by one, which will be launched on site.", "If the user is working with the swarm remotely or testing the system performance, the rest of the swarm will operate virtually, handled by the same algorithm.", "Therefore, the same collision avoidance approach is required to work both with real drones and in simulation.", "Moreover, the simulation serves as an important part of system verification before real launches.", "In the user study, a virtual swarm of “target\" drones interacted with a real “arrow\" drone to evaluate the scenario of the remote swarm deployment for operation in hazardous areas." 
], [ "System Architecture", "The system architecture in Fig.", "REF presents how the game works through the integration of all system elements.", "Figure: Overview of the DroneARchery system.", "The user interacts with real (arrow) and virtual (target) drones in an augmented environment and generates real drone trajectory via a gesture interface processed by the CV algorithm.", "To improve the user aiming we propose to deliver the haptic stimuli representing bow tension to the forearm.", "The approach of haptic rendering is similar to the sliding bar of computer-based scenarios.The VICON motion capture system tracks the user's palms with attached spherical markers.", "The acquired hand position is then cued to the ROS framework with the Vicon-bridge package.", "The user initiates the interaction by holding the virtual bow with one hand and pulling the arrow with the other.", "The relative hand positions are used to calculate the drone trajectory, based on the virtual bow model described in Sec.", "REF .", "Further, the trajectory points are provided to the AR environment and the haptic display described in Sec.", "REF and Sec.", "REF .", "The drone follows the user's bow-holding hand and hovers at 15 $cm$ above it, providing a more realistic sense of aiming.", "We use Crazyflie 2.0 quadcopter (weight is 27 grams, flight time is of 7 minutes).", "Users move their hands, aiming at the different targets.", "At the same time, the VR headset provides visual trajectory, and the haptic display renders tactile patterns corresponding to the bow tension calculated from the palms' positions.", "The hand tracking approach described in Sec.", "REF detects the user unclenching the fingers holding the arrow and sends a signal to the drone to follow the generated parabolic trajectory.", "Virtual drones in the Unity environment react to the approach of a real drone and avoid the collision thanks to DRL Sec.", "REF ." ], [ "AR Game Environment", "Several options of environment rendering are being extensively implemented in the AR solutions: video see-through (VST) displays, optical see-through (OST) displays, waveguides, and laser beam scanning.", "In the DroneARchery scenario, the former rendering method was applied.", "Thus, the user wears a VR headset Oculus Quest 2, and the cameras capture the world around the user and transmit the processed video to the eyes.", "AR environment is developed through the Passthrough API Experimental technology in Quest 2 Software Development Kit (SDK), which the OpenXR backend can access in Unity.", "In the AR environment, the user can observe the trajectory of the archery before the shot (Fig.", "REF (b)).", "The AR gives the user an immersive experience and helps to hit the target more accurately." 
], [ "Haptic Display", "While users can aim naturally by directing their hands at the target, they have no natural means to estimate the pulling force as it is calculated entirely virtually, and the goal becomes unattainable.", "Delivering the experience of the tension in the bow to the user poses a challenging task, as the bowstring changes its position and force magnitude at the contact point dynamically.", "Thus, for our research, we applied a haptic display based on the LinkTouch design, that is attached to the user's forearm.", "To provide haptic feedback of virtual elastic string motion, the LinkTouch display was modified and adapted to the user's forearm area (Fig.", "REF ).", "LinkTouch is a wearable haptic display with inverted five-bar linkages.", "This display includes two servomotors that determine the position of the endpoint that touches the user's arm.", "Thus, the display is capable of both executing a linear motion along the forearm and increasing the pressing force by its motion in the normal direction to the arm.", "The maximum normal force at the contact point is equal to 2 N. LinkTouch is controlled by an ESP32 microcontroller, which defines the angles of servomotors and handles the Bluetooth communication with the computer.", "In addition, the low weight of the display (135 grams including the battery) allows the user not to experience physical fatigue, which is confirmed in Sec.", "REF .", "Figure: LinkTouch display with M-shaped linkage for delivering multi-modal (force and position) tactile stimuli.By locating the haptic display at the forearm, we considered the sensitivity of the forearm.", "The directional sensitivity perception at the forearm is six times lower than at the fingertips [6], which is acceptable because of the larger dimension of the forearm.", "Moreover, it was considered that the user’s hands have to be free to emulate the movements of the archer and to be tracked by the CV algorithm.", "The known distance between the user's hands is transformed into the coordinates of the contact point generated by the haptic display.", "We assume that the maximum tension of the arrow corresponds to one meter between palms (this value is averaged and depends on the individual characteristics of each bow model).", "The operating range of LinkTouch is 75 mm.", "Based on the data obtained, we convert the distance between the user's palms into the coordinates of the contact point of the mechanism.", "Thus, clenched palms and palms spread one meter apart correspond to the two extreme positions of the haptic display.", "In addition to moving along the forearm, the contact point can also go up and down perpendicular, creating different pressure on the skin.", "In our case, the pressure represents the bowstring stiffness.", "After the shot, the display moves to the starting position, in which there is no contact with the skin." 
], [ "Ballistic Trajectory Calculation", "The drone trajectory was calculated with the position of the user's hands provided by VICON Vantage V5 motion capture system.", "The pulling force of the virtual bow was calculated assuming a uniform Hooke's law constant $K$ .", "Thus, the potential energy stored in the bow is given in Eq.", "(REF ): $U = K \\cdot x^2 ,$ where $U$ is the stored potential energy, and $x$ is the stretching distance of the bowstring.", "The projectile motion was applied along the vector connecting the user's hands.", "The projectile initial velocity $v$ from the bow was then calculated from the kinetic energy via Eq.", "(REF ) at the start of the trajectory derived from the potential energy at the point of release.", "$K_E = \\tfrac{1}{2} \\cdot m \\cdot v^2 ,$ where $K_E$ is the kinetic energy of the drone, $m$ is the drone mass.", "The resulting velocity vector is defined by: $\\vec{V} =v \\cdot \\frac{ \\vec{P}_{bow}-\\vec{P}_{arrow} }{\\left\\Vert \\vec{P}_{bow}- \\vec{P}_{arrow}\\right\\Vert },$ where $P_{arrow}$ and $ P_{bow}$ are the positions of the user's hands holding the virtual arrow and bow, $\\vec{V}$ is the initial velocity vector.", "The trajectory of the drone is calculated as projectile motion in 3D space as described in Eq.", "(REF -REF ): $x = v_0\\cdot t \\cdot cos(\\theta ) \\cdot sin (\\gamma )$ $y = v_0\\cdot t \\cdot cos(\\theta ) \\cdot cos(\\gamma )$ $z = v_0 \\cdot t \\cdot sin(\\theta ) - \\frac{1}{2} \\cdot g \\cdot t^2,$ where $\\theta $ and $\\gamma $ are the initial launch angles in the vertical and horizontal planes, $t$ is the time moment, $g =9.8$ $\\ m/s^2$ is the acceleration of gravity." ], [ "Hand Tracking Approach", "We implemented the computer vision (CV) module to track the user's hand that holds a virtual bowstring.", "During the game, the drone is switching between two behavior strategies.", "When users pull the bowstring, the drone follows their left hand with the virtual bow to indicate the launching position.", "After releasing the bowstring, the drone follows the calculated trajectory.", "The moment of switching between these two modes is activated by pinching and opening the distance between the fingers holding the bowstring (see Fig.", "REF ).", "Figure: Finger tracking by MediaPipe framework.", "(a) Clenched fingers trigger the calculation of a bowstring tension.", "(b) Unclenched fingers trigger the bowstring release.Mediapipe Holistic framework [17] was applied for the finger tracking module to Logitech HD Pro C920 webcam data for string release estimation.", "At this moment, the next drone flies up to the user's hand and the procedure is repeated." 
], [ "Deep Reinforcement Learning-based Approach for Collision Avoidance", "In the game scenario, the “arrow\" drone moves at high velocity towards the swarm hovering in formation.", "The task of the swarm is to avoid a collision with the arrow and each other.", "The state is a matrix that contains information about the positions of all agents and the position of the arrow in 3D space at each moment.", "Actions are the velocities that agents develop to reach the next state.", "Actions are sampled from 3D continuous action space, limited by the drone speed of 0.5 m/s.", "The reward system was designed for the agents as the sum of the reward for avoiding internal collision between agents given in Eq.", "(REF ) and the reward for formation control given in Eq.", "(REF ).", "$\\centering r_c ={\\left\\lbrace \\begin{array}{ll}0 & d_i > r_{emergency} \\\\- 1 & \\forall d_i \\le r_{emergency}\\end{array}\\right.", "},$ where $d_i$ is the distance between the rewarded agent and the $i$ object in the environment (other drones, an arrow, and borders), $r_{emergency}$ is the critical distance between the agent and object that indicates a high probability of collision of drones.", "$\\centering r_f = {\\left\\lbrace \\begin{array}{ll}0 & d_f < r_{formation} \\\\0.01 & d_f \\ge r_{formation}\\end{array}\\right.", "},$ where $d_f$ is the distance between the current position of the rewarded agent and its target position, $r_{formation}$ is the radius in which the agent is considered to be in the formation.", "To solve the problem of collision avoidance, the multi-agent Actor-Critic (A2C) approach was applied due to its ability to learn simultaneously the policy and value functions.", "Each agent has actor and critic networks (Fig.", "REF ).", "A2C takes observations as input, then passes them through critics' networks to evaluate how good the current state is, and the actors get a recommendation on what action to take.", "Actions sample from Gaussian Distribution for each agent.", "The optimal value was calculated with the Bellman equation for the value function.", "Figure: In the Actor-Critic architecture, the state of the environment is input, the Actor-Network calculates the optimal policy, and the Critic-Network calculates the value function.The loss of the critic network was calculated as the mean squared error of optimal values and current values.", "The log probability of an action is given in Eq.", "(REF ).", "$\\log _{\\pi \\theta }(a|s) = \\sum _{n=1}^{k}(-\\frac{(a-\\mu _i)^2}{2\\sigma _i^2} - \\log \\sqrt{2\\pi \\sigma _i^2})(V^*_i(s_t) - V_{\\pi }^{U_i}(s_t)),$ where $k$ is the dimension of action space, $\\mu _i$ and $\\sigma _i$ are the mean and the variance of policy, respectively, $V^*_i(s_t)$ is the optimal value, calculated with Bellman Equation, and $V_{\\pi }^{U_i}(s_t)$ is the current value obtained from critic network with weights $U_i$ of the $i$ agent.", "To encourage agents to explore the environment, entropy is subtracted from the actor loss function as in Eq.", "(REF ).", "Then we update the actor network by minimizing actor loss.", "$ActorLoss=\\frac{\\sum {\\log _{\\pi \\theta }(a|s)}}{batch size} + \\beta (-\\frac{\\sum {\\frac{\\log (2\\pi \\sigma _i^2) + 1}{2}}}{batch size}),$ where $\\beta $ is the hyperparameter that controls the influence of entropy loss, $batchsize$ is the number of states for training." 
], [ "User Study", "We conducted a two-stage user study to identify the desirable haptic patterns able to provide a pulling experience to users and evaluate the improvement of user performance in a drone-based game scenario with haptic feedback of a virtual bow and with haptic feedback and visualized trajectory." ], [ "Participants", "We invited 10 participants, four females and six males, aged 21 to 28 years (mean = 24.2, std = 1.83), to experiment on tactile pattern recognition.", "According to the laboratory's internal regulations, the participants were informed about both experiments and agreed to the consent form.", "To accurately deploy a swarm of drones with low delays without special training of the operator we propose to use haptic feedback.", "It delivers to the user the information about the flight path of the drone, same as an archer receives information about the flight path of an arrow from the tension of the bowstring.", "In this experiment, we explore the haptic display's ability to transmit distinguishable signals to the user.", "Twelve tactile patterns have been designed to render the distance between the user's palms (Fig.", "REF ) and the pulling force (bowstring stiffness) during the game.", "Each pattern was designed as a combination of four sliding distances (1-4) and three normal forces applied to the user's forearm (strong as S, medium as M, and light as L).", "Figure: Proposed tactile patterns for distance presentation.", "The traveled distance by the end effector of the LinkTouch with inverted five-bar linkage is represented by the color bar below.Before the test, the 12 tactile patterns were demonstrated to the user with comments indicating distance and applied linear forces.", "The sequence was demonstrated several times according to the user's requirements.", "Then visual and sound feedback from the haptic display was separated from the user by headphones with white noise and an opaque screen.", "The haptic display generated tactile patterns in a random order three times (overall 36 patterns were presented to the user) with a 10-second time delay between each pattern." 
], [ "Experimental Results", "The results of the user study of the tactile patterns applied by LinkTouch were summarized in a confusion matrix shown in Table REF .", "Table: Confusion Matrix for Recognition of Pulling Distance PatternTo evaluate the statistically significant difference between the perception of the patterns, we analyzed the results using single-factor repeated measures ANOVA (the normal force was set to be in linear proportion to the sliding distance in this experiment), with a chosen significance level of $\\alpha <0.05$ .", "According to the ANOVA results, there is a statistically significant difference in the recognition rates for the different combinations of applied forces and sliding distances, $F(11,108) = 1.936, p = 0.038$ .", "Two-way ANOVA showed no significant interaction effect between tension and sliding distance value in user recognition: $F = 2.71, p_{int} = 0.15 > 0.05$ .", "The experiment results revealed that the average pattern recognition rate is $68.9$ %.", "The highest recognition for all forces was achieved with the endpoints of display sliding ( $83.3$ % in position 4 for high force, $80.1$ % in position 4 for low force).", "However, the overall recognition of patterns paired with the applied forces is revealed to be higher, as shown in Table REF .", "Results of tactile pattern recognition paired by the distance (Table REF ) show that long-distance was the most discernible to users, and short distances were more often (in 23.4% and 28.9%) mistaken for longer ones.", "Table: Confusion Matrix of Force RecognitionTable: Confusion Matrix of Distance RecognitionThe average recognition rate of the forces is $94.2\\%$ , which supports the hypothesis behind LinkTouch that tactile feedback may be applied to improve the perception of elastic tension." 
], [ "Participants", "This experiment involved 28 users, twelve females and sixteen males, aged 22 to 27 years (mean = 23.93, std = 1.94); several used had the experience of interacting with AR interfaces and drones (13 users).", "They were formed into four groups with an equal distribution of participants by gender and experience with the drone swarm.", "Before starting the experiment, the participants were given time to familiarize themselves with the system and take test shots to minimize individual differences in the learning effect.", "The aim of the experiment was to earn the maximum number of points in the archery game.", "The task could be accomplished by deploying the drone from a virtual bow in 3 gates of different sizes.", "The scoring policy was as follows: in each of the 3 gates, it was required to fire 3 shots, if the shot was successful, the user earned 1, 3, or 5 points according to the size of the gate, thus, the maximum number of points was equal 27.", "A shot was counted as successful when the drone was hitting a gate (Fig.", "REF ).", "Figure: Four groups of participants play DroneARchery.", "(a) The users are in a real environment without the visualized trajectory and tactile feedback (R group).", "(b) The users using haptic feedback from LinkTouch display without the visualized trajectory (RH group).", "(c) The users perform with AR interface without LinkTouch display (AR group).", "(d) The users perform with both the AR interface and LinkTouch display (ARH group).Before the start of the game, a drone flew up towards the user's left hand and then followed the hand until the moment of the shot with a shift along the z-axis of 15 cm.", "The users of the first evaluation group were aiming at the gates in a real environment without the visualized drone's trajectory and tactile feedback (R group).", "The participants of the second group were using haptic feedback from LinkTouch display (RH group) and the third group was performing with Oculus Quest 2 headset (AR group).", "The last fourth group was allowed to use both visual and tactile displays (ARH group).", "The ARH and AR groups could observe the flight path of the drone.", "The haptic display used in RH and ARH groups delivered tactile stimuli to the forearm, simulating the bowstring's tension when pulled.", "After the participant had released the stretched virtual bowstring, a shot was fired: the drone stopped following the hand and flew along the parabolic trajectory calculated from the position of the palms.", "The gates were located at a distance of 2.5 m from the user; the inner dimensions of the gates are shown in Fig.", "REF .", "Figure: Game elements, i.e.", "gates, are used as the targets to evaluate the aiming precision.During the game, the number of points scored by users was calculated as evaluation criteria of which group of users was more successful.", "Additionally, users evaluated their experience of the game on a 5-point Likert scale (Fig.", "REF ).", "For this purpose, four following questions were presented to each user: Naturalness: How natural were the tension sensations of the virtual bow?", "(Artificial — Natural) Target reachability: Was the goal achievable?", "(Impossible — Achievable) Physical demand: Was it physically difficult for you to fulfill the game?", "(Exhausting — Invigorating) Trajectory predictability: Did the drone fly in the expected direction when released from the virtual bow?", "(Unpredictable — Expectable)" ], [ "Experimental Results on Conveying the Tangible Experience of 
"The experimental results confirmed that DroneARchery allows users to control the drone accurately and comfortably.", "Fig. REF shows the results of the game assessment by study group, and the ratings are generally higher for the groups using the tactile display.", "The results of the game task accomplished by the groups are presented in Table REF.", "The average score of the RH group was 12 points, while the participants without a haptic display (R group) achieved an average score of 8 points.", "The ARH group achieved the highest average score, using a haptic display and observing the drone's trajectory in real time.", "Therefore, the force estimation accuracy was higher by 22.6% in the RH group supported with haptic feedback, by 51.5% in the AR group, and by 63.3% in the ARH group, compared to the R group with respect to the maximum score (the percentage is obtained by dividing the difference in results by the maximum achievable number of points).", "The low scores in the R and RH categories are due to the difficulty of predicting the drone's ballistic trajectory to the target, whereas in the ARH group the AR gave the user a visualization of the trajectory, allowing them to aim at the target more precisely.", "Additionally, the results revealed that haptic feedback noticeably affected users' performance only when aiming at the smallest gate, with an average result three times higher in the group supported by LinkTouch.", "A two-way ANOVA showed a statistically significant effect of both the gate dimension ($F= 3.46, p=0.034 < 0.05$) and the haptic interaction scenario ($F=7.78, p=0.0006 < 0.05$) on the matching score of the players, and no significant interaction effect between these parameters: $F = 0.84, p_{int}= 0.499 > 0.05$.", "Table: Results of the virtual game experiment for 4 groups of users.", "Figure: Assessment of the system with a 5-point Likert scale based on user responses.", "Users evaluated the predictability of the final drone trajectory as higher in cases where they observed the trajectory and the haptic display rendered information about the tension of the bow (median of 4.7 in ARH vs. 4.5 in AR vs. 4.3 in RH vs. 3.3 in R).", "Aside from that, users spent less effort on aiming and setting their hands in the correct position, which affected their fatigue (median of 4.4 in ARH vs. 4.0 in AR vs. 3.7 in RH vs. 2.9 in R).", "If we consider separately the groups using and not using the Oculus headset, the average naturalness rating in the groups without the haptic display is lower than in the groups with tactile feedback (median of 4.3 in ARH vs. 3.8 in AR vs. 3.7 in RH vs. 2.4 in R).", "For users who did not use AR feedback or a haptic display, the goal was estimated as less achievable (median of 4.7 in ARH vs. 4.6 in AR vs. 4.6 in RH vs. 3.7 in R).", "Apart from the conclusions about the operation of the system as a whole, a notable observation was that all participants felt comfortable and calm sharing the environment with the drones during the game (4.7 points out of 5.0).", "The two-way ANOVA results showed a statistically significant difference in the naturalness of archery shooting between the four proposed HDI concepts ($F= 3.65, p=0.027 < 0.05$), and likewise for target reachability ($F= 4.24, p=0.015 < 0.05$), ease of physical execution ($F= 3.08, p=0.046 < 0.05$), and trajectory predictability ($F= 3.87, p=0.022 < 0.05$).", "We also confirmed, using the chi-square test of independence, that the users' prior experience of interacting with drones does not affect the evaluation, for
example, trajectory predictability ($\tilde{\chi }^2= 4.43, p=0.61 > 0.05$).", "In addition, we compared the average scores earned by users with and without experience with AR interfaces or drones.", "There were 14 participants in total in the AR and ARH groups, 8 of whom did not have any experience with AR and drone-based systems.", "The total average score was 21.0 out of 27 (AR) and 24.6 out of 27 (ARH) for people without experience, and 20.7 out of 27 (AR) and 23.8 out of 27 (ARH) for people with experience, supporting our hypothesis that experience with AR did not significantly affect users' performance.", "Drone agents were trained to avoid obstacles over 1000 epochs.", "Evaluation was carried out in simulation, where 3 agents avoided collisions with each other and with a fourth agent, the arrow.", "The initial positions of the drones at the beginning of each episode did not change.", "Figure: Episode duration before the drones collided during the learning process.", "The maximum value of 50 time steps means successful performance.", "In the DRL algorithm, the values of the hyperparameters leading to the successful completion of the task were selected empirically.", "The optimal learning rate for the problem being solved is 0.001, and the batch size is 10,000 states.", "The hyperparameters $\\gamma $ and $\\beta $ equal 0.5 and 0.001, respectively.", "The execution time and the number of points per episode were taken as metrics.", "An episode is considered successful if it lasts the full 50 time steps at a 10 Hz frequency; otherwise, the drones have collided with each other (Fig. REF).", "Policy and value losses fluctuate greatly over time, since we operate in a continuous action space and the speed can take both negative and positive values.", "In the simulation, the accuracy of the algorithm was tested over 300 episodes, of which 96% were successful.", "In addition, the Reinforcement Learning approach was compared with the Artificial Potential Fields (APF) [14] method to examine the time required to calculate a new drone position (average time of 1.35 ms for DRL vs. 1.24 ms for APF over 1000 launches in the simulation).", "We invited 10 users from the previous user study (5 with drone experience) to evaluate the DRL-based swarm behavior in simulation.", "The results revealed that users perceived the swarm motion as predictable (median 4.4 out of 5) and natural (median 4.0 out of 5)."
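The two-way ANOVA used for the score analysis above can be reproduced with statsmodels as follows; the DataFrame layout and column names ('score', 'gate', 'group') are our assumptions for illustration.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

def score_anova(df: pd.DataFrame) -> pd.DataFrame:
    """Two-way ANOVA of the matching score against gate size and HDI group
    (R, RH, AR, ARH), including the interaction term; one row per shot."""
    model = ols("score ~ C(gate) * C(group)", data=df).fit()
    return anova_lm(model, typ=2)  # F statistics and p-values per effect
```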
], [ "Conclusions and Future Work", "We developed the DRL-based collision avoidance and drone swarm control for environments with fast-moving obstacles and a new interactive human-drone interaction in AR with a wearable tactile display.", "Visualized trajectory and haptic feedback allow users to more accurately predict the drone's ballistic flight path, the proposed DRL approach makes interaction safe.", "Two user studies were conducted to evaluate users' recognition of haptic patterns, the naturalness of drone deployment in the AR game, and the accuracy of aiming with the help of an augmented environment.", "The average pattern recognition rate was found to be 68.88%, suggesting lowering the resolution of sliding distance, as a normal force was recognized at an average of 94.2%.", "The accuracy of hitting the target was increased by 63.3% among users using a haptic display and AR technology.", "Moreover, the experimental results revealed that users' experience with AR interfaces did not significantly affect their performance in drone launching (score of 24.6 out of 27 for people without experience and 23.8 out of 27 for users with experience).", "The proposed DroneARchery system can be potentially used in the entertainment industry as a novel game scenario.", "The visualized ballistic trajectory and intuitive haptic interface allow users to set the landing point of the drone at visually occluded areas in a cluttered environment.", "Additionally, this technology can be implemented in drone teleoperation and swarm teaching of collision avoidance with formation control in presence of dynamic obstacles.", "Thus, DroneARchery will potentially allow humans and drones to learn from each other through game interaction.", "The reported study was funded by RFBR and CNRS, project number 21-58-15006." ] ]
2210.07730
[ [ "A note on coCartesian fibrations" ], [ "Abstract We prove properness of (co)Cartesian fibrations as well as a straightening and unstraightening equivalence, which is compatible with cartesian products, when the base is the nerve of a small category." ], [ "Introduction", " 1.1 An essential tool in higher category theory is the theory (co)Cartesian fibrations; the higher categorical analogue of Grothendieck (op)fibrations.", "By Lurie's straightening equivalence, coCartesian fibrations correspond to functors to the $\\infty $ -category of $\\infty $ -categories and manipulating coCartesian fibrations is often the preferred way to construct such functors.", "1.2 The first proof of the straightening equivalence was given by Lurie in [4] and relies on a comparison with simplicial categories.", "Lurie then uses the straightening equivalence to prove important fundamental properties of coCartesian fibrations.", "An efficient streamlined proof has appeared in [2].", "This proof also uses the comparison with simplicial categories but proves the fundamental properties of coCartesian fibrations first and derives the equivalence from there.", "1.3 In upcoming joint work with Denis-Charles Cisinski we provide a new construction of the $\\infty $ -category of $\\infty $ -categories inspired by the construction of universes in semantic models of type theory.", "The straightening equivalence is then derived from a property of the universe of $\\infty $ -categories: directed univalence.", "1.4 The present article serves as backbone to the above mentioned work on directed univalence.", "Similar to [2], we will prove fundamental properties of coCartesian fibrations from scratch using only methods from (marked) simplicial sets.", "As such, there is some overlap with [2] and we indicate whenever this is the case.", "Since the purpose of this article is to lay the technical foundations to prove a straightening theorem, we are careful to prove everything we need without using a straightening theorem.", "1.5 The main contributions of this article are the following: A proof that pullback along (co)Cartesian fibrations preserves Joyal equivalences (Theorem REF ).", "This has been proven by Lurie [4] using the full power of his straightening equivalence.", "Although we don't need this result, a weaker assertion which is much easier to prove suffices for us (Proposition REF ), we are not aware of a proof of this result using only elementary methods, so we include here.", "A proof of the straightening equivalence for nerves of categories (Theorem REF ).", "We can't completely avoid straightening, but this version avoids simplicial categories.", "This is not the full straightening equivalence as it only applies when the base is the nerve of a small category.", "The upshot however is that the proof is relatively short and we also prove a compatibility with cartesian products.", "The author would like to thank Denis-Charles Cisinski for many useful discussions and for his encouragement.", "Most of this work has been completed when the author was a member of the SFB 1085 Higher Invariants funded by the Deutsche Forschungsgesellschaft (DFG)." 
], [ "Reminder on the coCartesian model structure", " 2.1 We denote $\\mathbf {sSet}^+$ the category of marked simplicial sets.", "It's objects are given by pairs $(K,E_K)$ where $K$ is a simplicial set and $E_K$ is a set of 1-simplices of $K$ containing all the degenerate 1-simplices, called marked edges.", "It's morphisms are maps of simplicial sets preserving the marked edges.", "In general we will denote a marked simplicial set by $K^+$ .", "2.2 The forgetful functor $\\mathbf {sSet}^+ \\rightarrow \\mathbf {sSet}$ has both a left and a right adjoint.", "We denote the left adjoint by $(\\cdot )^\\flat \\colon \\mathbf {sSet} \\rightarrow \\mathbf {sSet}^+$ Given a simplicial set $A$ , the marked simplicial set $A^\\flat $ has precisely the degenerate 1-simplices marked.", "The right adjoint will be denoted by $(\\cdot )^\\sharp \\colon \\mathbf {sSet}\\rightarrow \\mathbf {sSet}^+$ Given a simplicial set $B$ , the marked simplicial set $B^\\sharp $ has all 1-simplices marked.", "2.3 The functor $(\\cdot )^\\sharp $ has a further right adjoint, denoted by $\\mu \\colon \\mathbf {sSet}^+\\rightarrow \\mathbf {sSet}$ Given a marked simplicial set $A^+$ , the simplicial set $\\mu (A^+)$ is the simplicial subset of $A$ spanned by the marked edges.", "Definition 2.4 We define the class of marked left anodyne extensions to be the smallest saturated class containing the morphisms (A1) $(\\Lambda ^n_k)^\\flat \\rightarrow (\\Delta ^n)^\\flat $ for $n \\ge 2$ and $0<k<n$ , (A2) $J^\\flat \\rightarrow J^\\sharp $ , (B1) $(\\Delta ^1)^\\sharp \\times (\\Delta ^1)^\\flat \\cup \\lbrace 0\\rbrace \\times (\\Delta ^1)^\\sharp \\rightarrow (\\Delta ^1)^\\sharp \\times (\\Delta ^1)^\\sharp $ , (B2) $(\\Delta ^1)^\\sharp \\times (\\partial \\Delta ^n)^\\flat \\cup \\lbrace 0\\rbrace \\times (\\Delta ^n)^\\flat \\rightarrow (\\Delta ^1)^\\sharp \\times (\\Delta ^n)^\\flat $ .", "Definition 2.5 A map $X^+\\rightarrow A^+$ is called a marked left fibration if it has the right lifting property with respect to the class of marked left anodyne extensions.", "A marked simplicial set $X^+$ is called marked left fibrant if the map $X^+\\rightarrow \\Delta ^0$ is a marked left fibration.", "2.6 There are dual notions of marked right anodyne extensions and marked right fibrations.", "The class of marked right anodyne extensions has generators (A1) and (A2) of Definition REF and the classes (B1') $(\\Delta ^1)^\\sharp \\times (\\Delta ^1)^\\flat \\cup \\lbrace 1\\rbrace \\times (\\Delta ^1)^\\sharp \\rightarrow (\\Delta ^1)^\\sharp \\times (\\Delta ^1)^\\sharp $ , (B2') $(\\Delta ^1)^\\sharp \\times (\\partial \\Delta ^n)^\\flat \\cup \\lbrace 1\\rbrace \\times (\\Delta ^n)^\\flat \\rightarrow (\\Delta ^1)^\\sharp \\times (\\Delta ^n)^\\flat $ .", "The marked right fibrations have the right lifting property against the marked right anodyne extensions.", "Theorem 2.7 Let $A^+$ be a marked simplicial set.", "Then there is a unique model structure on the category $\\mathbf {sSet}^+/A^+$ with cofibrations given by maps whose underlying map of simplicial sets is a monomorphism, fibrant objects given by marked left fibrations $X^+\\rightarrow A^+$ .", "Moreover, the fibrations between fibrant objects are precisely the marked left fibrations.", "See [6] for this precise statement.", "An alternative proof of the existence of this model structure when $A^+= A^\\sharp $ is in [4].", "Definition 2.8 We will call the model structure on $\\mathbf {sSet}^+/A^+$ the coCartesian model structure.", "We denote its homotopy category 
by $\\mathbf {coCart}(A^+)$ .", "The dual model structure will be called the Cartesian model structure and its homotopy category is denoted by $\\mathbf {Cart}(A^+)$ .", "2.9 A useful property of coCartesian fibrations is that pullback along them preserves cellular marked right anodyne extensions, which are those marked right anodyne extensions lying in the saturated class generated by the sets (B1) and (B2) in Definition REF .", "Theorem 2.10 Consider a pullback square of marked simplicial sets $\\begin{tikzcd}X^+ {j} [swap]{p}& Y^+ {q}\\\\A^+{i} & B^+ \\end{tikzcd}$ where $p$ and $q$ are marked left fibrations and $i$ is a cellular marked right anodyne extension.", "Then $j$ is a marked right anodyne extension.", "The dual statement for marked right fibrations and cellular marked left anodyne extensions also holds.", "See [6].", "2.11 One of the main goals of this article is to extend this Theorem to general marked right anodyne extensions, see section .", "The main difficulty is to prove that pulling back along (co)Cartesian fibrations preserves Joyal equivalences.", "A proof of this fact using straightening/unstraightening can be found in [5].", "However, the goal of this note is to prove properties of (co)Cartesian fibrations without straightening/unstraightening and instead (eventually) derive it as a consequence.", "A proof for left/right fibrations without straightening/unstraightening has appeared in [1] and we will make use of this fact in the following Proposition.", "Proposition 2.12 In the Theorem above, if the underlying map of simplicial sets $q\\colon Y \\rightarrow B$ is a left (resp.", "right) fibration and $i$ is a marked right (resp.", "left) anodyne extension, then the map $j$ is a marked right (resp.", "left) anodyne extension.", "It suffices to show this when the map $i\\colon A^+\\rightarrow B^+$ belongs to the class of generators for marked right anodyne extensions.", "We first show this for the class (A1).", "We need to show that for any diagram of pullback squares of the form $\\begin{tikzcd}Y^+{j} & X^+ \\\\(\\Lambda ^n_k)^\\flat {i} & (\\Delta ^n)^\\flat \\end{tikzcd}$ where the vertical maps are marked left fibrations with underlying map of simplicial sets being left fibrations and $i$ is inner horn inclusion, the map $j$ is a marked right anodyne extension.", "We observe that the marked simplicial sets $Y^+$ and $X^+$ have precisely the equivalences marked.", "In particular $X^+$ is fibrant over the point.", "Since the underlying map of simplicial sets $Y\\rightarrow X$ is a Joyal trivial cofibration by [1], the map $Y^\\flat \\rightarrow X^\\flat $ is a trivial cofibration over the point.", "We have a square $\\begin{tikzcd}Y^\\flat & X^\\flat \\\\Y^+& X^+\\end{tikzcd}$ Here, the vertical maps are marked left anodyne, since they are given by marking equivalences and the upper horizontal map is a trivial cofibration.", "Thus the lower horizontal map is a trivial cofibration over the point.", "Since $X^+$ is fibrant, this map is in fact marked right anodyne by [6].", "For the class (A2) it suffices to show that for a pullback square $\\begin{tikzcd}Y^+{i} & X^\\sharp \\\\J^\\flat & J^\\sharp \\end{tikzcd}$ the map $i$ is marked left anodyne.", "Clearly, the map $i$ is the identity on underlying simplicial sets and $i$ is obtained by marking equivalences, thus is marked left anodyne.", "The classes (B1') and (B2') follow from Theorem REF since they are cellular marked right anodyne." 
], [ "Invariance properties of the (co)Cartesian model structure", " 3.1 The goal of this section is to show that for any Joyal equivalence $A\\rightarrow B$ we obtain a Quillen equivalence of coCartesian (resp.", "Cartesian) model structures $\\mathbf {sSet}^+/(A^\\sharp )\\rightarrow \\mathbf {sSet}^+/(B^\\sharp )$ .", "Theorem 3.2 Let $i\\colon A\\rightarrow B$ be inner anodyne.", "Then the induced functor $i_!\\colon \\mathbf {sSet}^+(A^\\sharp )\\rightarrow \\mathbf {sSet}^+(B^\\sharp )$ is a Quillen equivalence for the coCartesian and Cartesian model structures.", "We only prove the coCartesian case as the Cartesian case is analogous.", "It is clear that $i_!$ is left Quillen, thus it suffices to show that the left derived functor $\\mathbf {L}i_!$ is an equivalence of categories.", "Since $i$ is a bijection on objects, the right derived functor $\\mathbf {R}i^*$ is conservative, hence it suffices to show that $\\mathbf {L}i_!$ is fully faithful.", "Consider a commutative square $\\begin{tikzcd}X^\\natural {j} [swap]{p} & Y^\\natural {q}\\\\A^\\sharp {i} & B^\\sharp \\end{tikzcd}$ in which $p$ and $q$ are coCartesian fibrations and $j$ is marked left anodyne.", "We prove that the induced map on fibers is an equivalence (in the (co)Cartesian model structure over the point).", "Consider a point $a\\colon \\Delta ^0\\rightarrow A$ .", "Choose a commutative square $\\begin{tikzcd}\\Delta ^0{u} [swap] & E^\\sharp {f}\\\\A^\\sharp & B^\\sharp \\end{tikzcd}$ in which $u$ is cellular marked right anodyne and $f$ is a marked right fibration and denote $F^\\sharp := A^\\sharp \\times _{B^\\sharp }E^\\sharp $ .", "Note that the induced map $\\Delta ^0\\rightarrow D^\\sharp $ is cellular marked right anodyne by [1] and $i$ is a marked left anodyne extension.", "It follows again from [1] that the pullback square $\\begin{tikzcd}F^\\sharp & E^\\sharp \\\\A^\\sharp & B^\\sharp \\end{tikzcd}$ has horizontal arrows cellular marked right fibrations and vertical arrows marked right fibrations.", "Pulling back the maps $p$ and $q$ along this pullback square, we obtain a commutative diagram of pullback squares $\\begin{tikzcd}X^\\natural _a & Y^\\natural _a\\\\X^\\natural _F& Y^\\natural _E \\\\X^\\natural & Y^\\natural \\end{tikzcd}$ Here we define $X^\\natural _F:= F^\\sharp \\times _{A^\\sharp } X^\\natural ,\\quad Y^\\natural _E:= E^\\sharp \\times _{B^\\sharp } Y^\\natural $ The maps $X^\\natural _F\\rightarrow X^\\natural , \\quad Y^\\natural _E \\rightarrow Y^\\natural $ are marked right fibrations whose underlying map of simplicial sets are right fibrations.", "Since $X^\\natural \\rightarrow Y^\\natural $ was assumed to be marked left anodyne it follows from Proposition REF that $X^\\natural _F\\rightarrow Y^\\natural _E$ is marked left anodyne.", "Also by Theorem REF the maps $X^\\natural _a \\rightarrow X^\\natural _F,\\quad Y^\\natural _a \\rightarrow Y^\\natural _E$ are marked right anodyne extensions.", "Since marked right anodyne and marked left anodyne extensions are in particular weak equivalences in the (co)Cartesian model structure over the point, it follows by 2-out-of-3 that the map $X^\\natural _a \\rightarrow Y^\\natural _a$ is an equivalence.", "This shows that the derived unit is an isomorphism and hence $\\mathbf {L}i_!$ is fully faithful.", "Corollary 3.3 Let $\\begin{tikzcd}Y^\\natural {j} & X^\\natural \\\\A^\\sharp {i} & B^\\sharp \\end{tikzcd}$ be a pullback square of marked simplicial sets.", "Suppose $i$ is inner anodyne and $X^\\natural \\rightarrow B^\\sharp $ is a 
"We show this for marked left fibrations.", "Since marked left anodyne extensions are saturated, it suffices to show this for inner horn inclusions.", "By the previous Theorem REF , the functor $i^\ast \colon \mathbf {sSet}^+/(\Delta ^n)^\sharp \rightarrow \mathbf {sSet}^+/(\Lambda _k^n)^\sharp $ is a Quillen equivalence for the coCartesian model structures.", "Hence the map $j$ is a trivial cofibration over $B^\sharp $ with fibrant target, thus by [6] a marked left anodyne extension.", "Corollary 3.4 Let $i\colon A\rightarrow B$ be a Joyal equivalence.", "Then the induced functor $i_!\colon \mathbf {sSet}^+/A^\sharp \rightarrow \mathbf {sSet}^+/B^\sharp $ is a Quillen equivalence when both categories are endowed with the (co)Cartesian model structure.", "Let $W$ be the class of morphisms $i$ for which $i_!$ is a Quillen equivalence.", "We want to show that the class $W$ contains the Joyal equivalences.", "According to [1] it suffices to show that $W$ is closed under 2-out-of-3, $W$ contains the inner anodyne extensions, and $W$ contains the trivial fibrations.", "The first assertion follows from the fact that Quillen equivalences are closed under 2-out-of-3.", "The second assertion follows from the previous Theorem, and the third assertion is clear.", "Remark 3.5 A proof along similar lines has appeared in [2]." ], [ "Properness of (co)Cartesian fibrations", " 4.1 In this section we prove that the pullback of an inner anodyne map along a (co)Cartesian fibration is a Joyal equivalence.", "This has first appeared in the literature in [4], but the proof uses the straightening/unstraightening equivalence.", "Our proof will only use elementary properties of (locally) coCartesian fibrations.", "Together with Theorem REF , this generalizes Proposition REF .", "4.2 Recall that a morphism of simplicial sets $i\colon A\rightarrow B$ is called final if, for any morphism $f\colon B\rightarrow C$ , the induced morphism $i\colon (A,fi)\rightarrow (B,f)$ in the slice $\mathbf {sSet}/C$ is a Contravariant equivalence.", "A monomorphism is final if and only if it is right anodyne, see [1].", "Furthermore, recall that a morphism $p\colon X\rightarrow Y$ is called proper if, for any diagram $\begin{tikzcd}A^{\prime } \arrow[r, "i^{\prime }"] \arrow[d] & B^{\prime } \arrow[r] \arrow[d] & X \arrow[d, "p"]\\A \arrow[r, "i"'] & B \arrow[r] & Y\end{tikzcd}$ in which the squares are pullbacks and the map $i$ is final, it follows that $i^{\prime }$ is final.", "Examples of proper morphisms are left fibrations (see [1]) and coCartesian fibrations (see [6], or [4] using straightening).", "Theorem 4.3 Let $p\colon X\rightarrow Y$ be an inner fibration of $\infty $ -categories.", "Then $p$ is proper if and only if in any diagram $\begin{tikzcd}A^{\prime } \arrow[r, "i^{\prime }"] \arrow[d] & B^{\prime } \arrow[r] \arrow[d, "p^{\prime }"] & X \arrow[d, "p"]\\\lbrace 1\rbrace \arrow[r] & \Delta ^1 \arrow[r] & Y\end{tikzcd}$ in which the squares are pullbacks, the map $i^{\prime }$ is final.", "This is [1].", "Corollary 4.4 Locally coCartesian fibrations between $\infty $ -categories are proper.", "In the diagram of the theorem, if $p$ is locally coCartesian then the pullback $p^{\prime }$ is a coCartesian fibration.", "Since coCartesian fibrations are proper, the assertion follows.", "Lemma 4.5 Suppose we have a diagram $\begin{tikzcd}X \arrow[rr, "i"] \arrow[dr, "p"'] & & Y \arrow[dl, "q"]\\& S &\end{tikzcd}$ in which $p$ is a left fibration, $i$ is a trivial cofibration of the Joyal model structure and $q$ is a Joyal fibration.", "Then $q$ is also a left fibration."
"Choose a factorization $\begin{tikzcd}Y \arrow[rr, "j"] \arrow[dr, "q"'] & & Z \arrow[dl, "r"]\\& S &\end{tikzcd}$ where $j$ is left anodyne and $r$ is a left fibration.", "Since Joyal equivalences are cofinal [1], the composition $ji$ is cofinal.", "Since this determines a Covariant equivalence between the left fibrations $p$ and $r$ , it follows that $ji$ is in fact a Joyal equivalence.", "Since $i$ was assumed to be a Joyal equivalence, it follows that $j$ is also a Joyal equivalence.", "By the Retract Lemma, the map $q$ is a retract of $r$ and thus a left fibration.", "Lemma 4.6 Consider the commutative diagram $\begin{tikzcd}E^\natural \arrow[d, "e"'] & \\X^\natural \arrow[r, "j"] \arrow[d, "p"'] & Y^\natural \arrow[d, "q"]\\(\Lambda ^n_k)^\sharp \arrow[r, "i"'] & (\Delta ^n)^\sharp \end{tikzcd}$ in which the lower square is a pullback, each vertical arrow is a marked left fibration, and the map of simplicial sets $i$ is inner anodyne.", "Assume furthermore that the underlying map of $e$ is a left fibration of simplicial sets.", "Choose a factorization $\begin{tikzcd}E^\natural \arrow[r, "k"] \arrow[d, "e"'] & F^\natural \arrow[d, "f"]\\X^\natural \arrow[r, "j"'] & Y^\natural \end{tikzcd}$ where $k$ is marked left anodyne and $f$ is a marked left fibration.", "Then the underlying map of $f$ is a locally coCartesian fibration.", "Moreover, for each 0-simplex $x\colon \Delta ^0 \rightarrow X$ , the induced map on fibers $E_x\rightarrow F_x$ is a Joyal equivalence.", "Note that the markings on $E^\natural $ correspond to the $pe$ -coCartesian edges and that the markings on $F^\natural $ correspond to the $qf$ -coCartesian edges.", "First we observe that the base-change map $E^\natural \rightarrow \left(\Lambda ^n_k\right)^\sharp \times _{\left(\Delta ^n\right)^\sharp } F^\natural $ is marked left anodyne by Corollary REF .", "Consequently, for each object $m\in \Delta ^n$ , the induced map on fibers $E_m \rightarrow F_m$ is a Joyal equivalence.", "Next we observe that $f$ is an isofibration, since the marked edges in $Y^\natural $ are precisely the $q$ -coCartesian edges and thus in particular the equivalences are marked.", "We obtain a commutative diagram $\begin{tikzcd}E_m \arrow[rr, "k_m"] \arrow[dr, "e_m"'] & & F_m \arrow[dl, "f_m"]\\& X_m &\end{tikzcd}$ where $k_m$ is a trivial cofibration of the Joyal model structure, $e_m$ is a left fibration (by assumption) and $f_m$ is an isofibration between $\infty $ -categories, thus a Joyal fibration.", "By Lemma REF the map $f_m$ is thus a left fibration.", "In other words, the map $F\rightarrow X$ induces for each $m\in \Delta ^n$ a left fibration on fibers $F_m\rightarrow X_m$ .", "Now since $k_m$ is a Joyal equivalence, it is in particular a Covariant equivalence between the left fibrations $e_m$ and $f_m$ , and thus a fiberwise equivalence.", "It remains to show that $f\colon F\rightarrow Y$ is locally coCartesian.", "By construction, we have a commutative diagram $\begin{tikzcd}F \arrow[rr, "f"] \arrow[dr, "qf"'] & & Y \arrow[dl, "q"]\\& \Delta ^n & \end{tikzcd}$ with $qf$ and $q$ being coCartesian fibrations and $f$ sending $qf$ -coCartesian edges to $q$ -coCartesian edges.", "Thus by [4] the map $f$ is locally coCartesian, since the maps $f_m\colon F_m\rightarrow Y_m$ are in fact left fibrations.", "Theorem 4.7 Suppose we have a pullback square $\begin{tikzcd}X \arrow[r, "j"] \arrow[d, "p"'] & Y \arrow[d, "q"]\\A \arrow[r, "i"'] & B\end{tikzcd}$ in which $p$ and $q$ are coCartesian fibrations and $i$ is inner anodyne.", "Then $j$ is a Joyal equivalence.", "It suffices to show the assertion for squares of the form $\begin{tikzcd}X \arrow[r, "j"] \arrow[d, "p"'] & Y \arrow[d, "q"]\\\Lambda ^n_k \arrow[r, "i"'] & \Delta ^n\end{tikzcd}$
^n_k{i} & \\Delta ^n\\end{tikzcd}$ with $i$ an inner horn inclusion.", "According to [1] the map $j$ is a Joyal equivalence if and only if it induces an essentially surjective functor on homotopy categories and it induces a fully faithful functor $\\mathbf {L}j_!\\colon \\mathbf {LFib}(X)\\rightarrow \\mathbf {LFib}(Y)$ where $\\mathbf {LFib}(X)$ denotes the homotopy category of the covariant model structure on simplicial sets over $X$ .", "Since $j$ is a pullback of an inner anodyne map, it is a bijection on objects, thus clearly essentially surjective.", "For the second condition, we need to show that the derived counit $id \\rightarrow \\mathbf {R}j^\\ast \\mathbf {L}j_!$ is an isomorphism in $\\mathbf {LFib}(X)$ .", "Let $e\\colon E\\rightarrow X$ be a left fibration.", "To prove that the derived counit is an isomorphism, we construct a particular fibrant replacement of the composition $E\\rightarrow X\\rightarrow Y$ in the covariant model structure over $Y$ .", "To start off, we have a pullback square in marked simplicial sets $\\begin{tikzcd}X^\\natural [swap]{p}{j} & Y^\\natural {q}\\\\(\\Lambda ^n_k)^\\sharp {i} & (\\Delta ^n)^\\sharp \\end{tikzcd}$ in which $p$ and $q$ are marked left fibrations.", "Since $e$ is a left fibration, we have a marked left fibration $e\\colon E^\\sharp \\rightarrow X^\\sharp $ and pulling back along the inclusion $X^\\natural \\rightarrow X^\\sharp $ , we obtain a marked left fibration $E^\\natural \\rightarrow X^\\natural $ .", "We thus have a diagram $\\begin{tikzcd}E^\\natural [swap]{e} & \\\\X^\\natural [swap]{p}{j} & Y^\\natural {q}\\\\(\\Lambda ^n_k)^\\sharp {i} & (\\Delta ^n)^\\sharp \\end{tikzcd}$ where the markings on $E^\\natural $ correspond to the $pe$ -coCartesian edges.", "Now complete the diagram as follows, $\\begin{tikzcd}E^\\natural [swap]{e}{k} & F^\\natural {f}\\\\X^\\natural [swap]{p}{j} & Y^\\natural {q}\\\\(\\Lambda ^n_k)^\\sharp {i} & (\\Delta ^n)^\\sharp \\end{tikzcd}$ with $k$ marked left anodyne and $f$ a marked left fibration.", "The markings on $F^\\natural $ thus correspond to the $qf$ -coCartesian edges.", "The underlying map of simplicial sets $f\\colon F\\rightarrow Y$ is in general not a coCartesian fibration, but the previous Lemma REF shows that it is a locally coCartesian fibration and that for any $x\\in X$ the induced map $E_x\\rightarrow F_x$ is a Joyal equivalence (between Kan complexes).", "Now find a factorization $\\begin{tikzcd}F{rr}{l}[swap]{f} & & G{g}\\\\& Y &\\end{tikzcd}$ with $l$ left anodyne and $g$ a left fibration.", "By Lemma REF the map $f$ is proper and thus by [1] the induced map on fibers $F_y\\rightarrow G_y$ is cofinal for any $y\\in Y$ .", "To summarize, we have constructed a commutative diagram $\\begin{tikzcd}E{k}[swap]{e} & F{l}[swap]{f} & G{g}\\\\X{j} & Y \\end{tikzcd}$ in which $k$ and $l$ are left anodyne and $g$ is a left fibration, thus $g$ is a fibrant replacement for the composition $je$ .", "Now for any $x\\in X$ we have induced maps on fibers $E_x \\xrightarrow{} F_x \\xrightarrow{} G_x$ where $k_x$ is cofinal by Lemma REF and $l_x$ is cofinal by the above arguments.", "In particular this implies that the induced map $E\\rightarrow X\\times _Y G$ is cofinal which in turn implies that the derived counit is an isomorphism.", "Thus $X\\rightarrow Y$ is fully faithful and this finishes the proof.", "4.8 As a consequence we show that the coCartesian model structure is functorial with respect to coCartesian fibrations.", "Theorem 4.9 Let $p\\colon X^\\natural \\rightarrow A^\\sharp $ 
be a Cartesian fibration (i.e. a marked right fibration).", "Then the pullback functor $p^\ast \colon \mathbf {sSet}^+/A^\sharp \rightarrow \mathbf {sSet}^+/X^\natural $ is a left Quillen functor when each category is endowed with the coCartesian model structure.", "Since marked simplicial sets are locally cartesian closed, the functor $p^\ast $ is a left adjoint.", "To show that $p^\ast $ is left Quillen, it suffices to show that it preserves marked left anodyne extensions.", "In particular, it suffices to show that $p^\ast $ preserves the generating marked left anodyne extensions of Definition REF .", "We first show this for the set (A1).", "We need to show that for any diagram of pullback squares of the form $\begin{tikzcd}X^\natural _{\Lambda ^n_k}\arrow{r}{j}\arrow{d} & X^\natural _{\Delta ^n}\arrow{r}\arrow{d} & X^\natural \arrow{d}{p}\\(\Lambda ^n_k)^\flat \arrow{r}{i} & (\Delta ^n)^\flat \arrow{r} & A^\sharp ,\end{tikzcd}$ where $i$ is an inner horn inclusion, the map $j$ is a trivial cofibration in $\mathbf {sSet}^+/X^\natural $.", "We observe that the marked simplicial sets $X^\natural _{\Lambda ^n_k}$ and $X^\natural _{\Delta ^n}$ have precisely the equivalences in their fibers over $(\Lambda ^n_k)^\flat $ and $(\Delta ^n)^\flat $ marked.", "In particular $X^\natural _{\Delta ^n}$ is fibrant over the point.", "Since the underlying map of simplicial sets $X_{\Lambda ^n_k}\rightarrow X_{\Delta ^n}$ is a Joyal trivial cofibration by Theorem REF , the map $X^\flat _{\Lambda ^n_k}\rightarrow X^\flat _{\Delta ^n}$ is a trivial cofibration over the point.", "We have a square $\begin{tikzcd}X_{\Lambda ^n_k}^\flat \arrow{r}\arrow{d} & X_{\Delta ^n}^\flat \arrow{d}\\X_{\Lambda ^n_k}^\natural \arrow{r} & X^\natural _{\Delta ^n}\end{tikzcd}$", "Here, the vertical maps are marked left anodyne, since they are given by marking equivalences, and the upper horizontal map is a trivial cofibration.", "Thus the lower horizontal map is a trivial cofibration over the point.", "Since $X^\natural _{\Delta ^n}$ is fibrant, this map is in fact marked left anodyne by [6] and thus a trivial cofibration in $\mathbf {sSet}^+/X^\natural $."
], [ "Straightening and unstraightening", " 5.1 This section proves a straightening/unstraightening equivalence for coCartesian fibrations.", "For this we assume that $A$ is the nerve of a small category.", "We will prove the following Theorem.", "Theorem 5.2 For any simplicial set $B$ there is a Quillen equivalence $\\mathrm {Fun}(A,\\mathbf {sSet}^+/B^\\sharp ) \\simeq \\mathbf {sSet}^+/A^\\sharp \\times B^\\sharp $ where the right hand side is endowed with the Cartesian model structure and the left hand side is endowed with the projective Cartesian model structure.", "5.3 Contrary to the existing literature [4], [2], our proof does not involve simplicial categories.", "Our equivalence is also not the full straightening/unstraightening equivalence as our base category is assumed to be the nerve of a category.", "Our proof follows ideas from [3].", "5.4 We first define the functors involved.", "Let $X^+\\rightarrow A^\\sharp $ be a map and $B^+$ be a marked simplicial set.", "We have a functor $\\mathbf {sSet}^+/B^+\\rightarrow \\mathbf {sSet}^+/A^\\sharp \\times B^+$ given by sending a map $Y^+\\rightarrow B^+$ to the product $X^+\\times Y^+\\rightarrow A^\\sharp \\times B^+$ .", "This functor has a right adjoint $\\mathrm {Map}^B(X^+,-) : \\mathbf {sSet}^+/A^\\sharp \\times B^+\\rightarrow \\mathbf {sSet}^+/B^+$ By the universal property a map $\\begin{tikzcd}K^+{rr}& & \\mathrm {Map}^B(X^+,W^+)\\\\& B^+ &\\end{tikzcd}$ is thus given by a commutative triangle $\\begin{tikzcd}X^+ \\times K^+{rr}& & W^+\\\\& A^\\sharp \\times B^+ &\\end{tikzcd}$ Note that $\\mathrm {Map}^B(X^+,W^+)$ is (contravariantly) functorial in $X^+\\rightarrow A^+$ .", "This defines a functor $\\rho : \\mathbf {sSet}^+/A^\\sharp \\times B \\rightarrow \\mathbf {Fun}(A,\\mathbf {sSet}^+/B^+),\\quad W^+ \\mapsto (a\\mapsto \\mathrm {Map}^B((a/A)^\\sharp ,W^+)$ Given a map $p : W^+\\rightarrow A^\\sharp \\times B^+$ The marked simplicial set $\\mathrm {Map}^{B^+}(X^+,W^+)$ can be described as the pullback $\\begin{tikzcd}\\mathrm {Map}^{B^+}(X^+,W^+)& \\mathrm {Hom}^+(X^+,W^+){p_\\ast }\\\\B^+& \\mathrm {Hom}^+(X^+,A^\\sharp \\times B^+ )\\end{tikzcd}$ where the bottom map is given by the product of the fixed map $X^+\\rightarrow A^\\sharp $ and the identity on $B^+$ .", "5.5 This functor has a left adjoint $\\lambda : \\mathbf {Fun}(A,\\mathbf {sSet}^+/B^+)\\rightarrow \\mathbf {sSet}^+/A^\\sharp \\times B^+.$ Given a functor $F : A\\rightarrow \\mathbf {sSet}^+/B^+$ we obtain the functor $A^{op}\\times A \\rightarrow \\mathbf {sSet}^+/A^\\sharp \\times B^+,\\quad (a,a)\\mapsto ((a/A)^\\sharp \\times F(a)^+\\rightarrow A^\\sharp \\times B^+)$ The value of the left adjoint is then given by taking the coend $\\lambda (F)= \\int ^{A}(a/A)^\\sharp \\times F(a)^+$ Example 5.6 Let $X^+\\rightarrow B^+$ be a map of marked simplicial sets and let $a$ be an object of $A$ .", "Let $a\\otimes X^+$ be the functor given by left Kan extension along the inclusion $\\lbrace a\\rbrace \\rightarrow A$ .", "Then we have $\\lambda (a\\otimes X^+)=\\begin{tikzcd}(a/A)^\\sharp \\times X^+ \\\\A^\\sharp \\times B^+\\end{tikzcd}$ Proposition 5.7 For any coCartesian equivalence $K^+\\rightarrow L^+$ over $A^\\sharp $ and any marked left fibration $X^+$ over $A^\\sharp \\times B^+$ the induced map $\\mathrm {Map}^{B^+}(L^+,X^+) \\rightarrow \\mathrm {Map}^{B^+}(K^+,X^+)$ is a coCartesian equivalence over $B^+$ .", "We have a pullback square $\\begin{tikzcd}\\mathrm {Map}^{B^+}(L^+,X^+)& \\underline{\\mathrm {Hom}}^+(L^+,X^+)\\\\\\mathrm 
$\begin{tikzcd}\mathrm {Map}^{B^+}(L^+,X^+)\arrow{r}\arrow{d}& \underline{\mathrm {Hom}}^+(L^+,X^+)\arrow{d}\\\mathrm {Map}^{B^+}(K^+,X^+)\arrow{r}& \underline{\mathrm {Hom}}^+ (K^+,X^+)\times _{\underline{\mathrm {Hom}}^+(K^+, A^\sharp \times B^+)} \underline{\mathrm {Hom}}^+(L^+,A^\sharp \times B^+)\end{tikzcd}$", "Since the right hand vertical map is a trivial fibration whenever $K^+\rightarrow L^+$ is marked left anodyne, the functor $\mathrm {Map}_{A^\sharp }^{B^+}(-,X^+)$ sends trivial cofibrations to weak equivalences in the opposite of the coCartesian model structure on $\mathbf {sSet}^+/B^+$.", "Corollary 5.8 Suppose $X^+\rightarrow A^\sharp \times B^+$ is a marked left fibration.", "Let $X^+_a$ be the pullback $\begin{tikzcd}X^+_a\arrow{r}\arrow{d}& X^+\arrow{d}\\\lbrace a\rbrace \times B^+\arrow{r}& A^\sharp \times B^+\end{tikzcd}$", "Then there is a coCartesian equivalence $X^+_a \simeq \mathrm {Map}_{A^\sharp }^{B^+}((a/A)^\sharp ,X^+)$ over $B^+$.", "The map $\lbrace a\rbrace \rightarrow (a/A)^\sharp $ is marked left anodyne.", "Proposition 5.9 The right adjoint preserves fibrations between fibrant objects.", "Let $\begin{tikzcd}X^+\arrow{rr}\arrow[swap]{dr} & & Y^+\arrow{dl}\\& A^\sharp \times B^+ &\end{tikzcd}$ be a fibration between marked left fibrations.", "In particular, by [6] it is a marked left fibration.", "We have a pullback square $\begin{tikzcd}\mathrm {Map}_{A^\sharp }^{B^+}(K^+,X^+)\arrow{r}\arrow{d}& \underline{\mathrm {Hom}}^+(K^+,X^+)\arrow{d}\\\mathrm {Map}_{A^\sharp }^{B^+}(K^+,Y^+)\arrow{r}& \underline{\mathrm {Hom}}^+(K^+,Y^+)\end{tikzcd}$", "The right hand vertical map is a marked left fibration, hence the left hand vertical map is a marked left fibration between marked left fibrations over $B^+$.", "Corollary 5.10 The functors determine a Quillen adjunction $\lambda \colon \mathrm {Fun}(A,\mathbf {sSet}^+/B^+) \leftrightarrow \mathbf {sSet}^+/A^\sharp \times B^+ \colon \rho $", "5.11 In order to prove that this defines a Quillen equivalence, we show that this Quillen adjunction respects evaluation at a point in $A$.", "It is easy to see that the following square commutes for each object $a$ of $A$ : $\begin{tikzcd}\mathrm {Fun}(A,\mathbf {sSet}^+/B^+)\arrow{r}{\lambda }\arrow[swap]{d}{ev_a} & \mathbf {sSet}^+/A^\sharp \times B^+\arrow{d}{a^*}\\\mathbf {sSet}^+/B^+ \arrow{r}{id} & \mathbf {sSet}^+/B^+\end{tikzcd}$", "Proposition 5.12 The induced transformation $ev_a\rho \Rightarrow a^*$ is a coCartesian equivalence for each fibrant object of $\mathbf {sSet}^+/A^\sharp \times B^+$.", "Let $W^+\rightarrow A^\sharp \times B^+$ be a coCartesian fibration.", "We need to show that the counit $\lambda \rho (W^+)\rightarrow W^+$ induces a coCartesian equivalence after taking fibers at the object $a$ of $A$.", "The map $\lambda \rho (W^+)_a\rightarrow W^+_a$ can be written as $\mathrm {Map}^{B^+}((a/A)^\sharp ,W^+) \rightarrow W_a^+$ and an explicit computation shows that this coincides with the map of Corollary REF , hence is a trivial fibration.", "5.13 One of the key observations is that the pullback functor preserves homotopy colimits, see also [2].", "Proposition 5.14 Let $f\colon K\rightarrow L$ be a map of simplicial sets.", "Then the induced functor $f^\ast \colon \mathbf {sSet}^+/L^\sharp \rightarrow \mathbf {sSet}^+/K^\sharp $ preserves homotopy colimits.", "We won't give a full proof here, as we cannot improve on the proof of [2] or offer a different viewpoint.", "The idea is to reduce to the case when $K\cong \Delta ^0$ and thus $f\colon \Delta ^0 \rightarrow L$ is the specification of an object in $L$.", "Then one observes that the pullback $f^\ast $ is weakly equivalent to pulling back along a marked left fibration.",
"By Theorem REF , pulling back along a marked left fibration is left Quillen; thus it preserves homotopy colimits and the assertion follows.", "Corollary 5.15 Let $a$ be an object of $A$.", "Then the functor $\mathbf {R}a^\ast : \mathbf {sSet}^+/A^\sharp \times B^+\rightarrow \mathbf {sSet}^+/B^+$ preserves homotopy colimits.", "Corollary 5.16 The functor $ev_a$ preserves homotopy colimits.", "This follows immediately from the commutative diagram (REF ) and the previous Corollary.", "Corollary 5.17 The functor $\rho $ preserves homotopy colimits.", "We show that the map $\mathrm {hocolim}_I\, \mathbf {R}\rho \rightarrow \mathbf {R}\rho \; \mathrm {hocolim}_I$ is an equivalence.", "Since equivalences are computed pointwise, it is enough to show that $\mathbf {R}ev_a\, \mathrm {hocolim}_I\, \mathbf {R}\rho \rightarrow \mathbf {R}ev_a\, \mathbf {R}\rho \; \mathrm {hocolim}_I$ is an equivalence.", "By the previous corollary $ev_a$ commutes with homotopy colimits, and by Proposition REF we have $\mathbf {R}ev_a \mathbf {R}\rho \simeq \mathbf {R}a^\ast $.", "Thus the assertion follows from Proposition REF .", "Lemma 5.18 Let $X^\natural \rightarrow A^\sharp $ be a cartesian fibration and $Y^+\rightarrow B^\sharp $ be a map.", "Let $Z^\natural \rightarrow A^\sharp \times B^\sharp $ be a cartesian fibration.", "Then a map $\begin{tikzcd}X^\natural \times Y^+\arrow{rr}{f}\arrow[swap]{dr}& & Z^\natural \arrow{dl}\\& A^\sharp \times B^\sharp &\end{tikzcd}$ is a cartesian equivalence if and only if for all points $a$ of $A$ the map $\begin{tikzcd}X^\natural _a \times Y^+\arrow{rr}{f_a}\arrow[swap]{dr}& & Z^\natural _a \arrow{dl}\\& B^\sharp &\end{tikzcd}$ is a cartesian equivalence.", "Choose a factorization $Y^+\rightarrow W^\natural \rightarrow B^\sharp $ into a marked right anodyne extension followed by a cartesian fibration.", "We get an induced factorization $X^\natural \times Y^+\rightarrow X^\natural \times W^\natural \rightarrow A^\sharp \times B^\sharp $ into a marked right anodyne extension followed by a cartesian fibration.", "We find a solution to the lifting problem $\begin{tikzcd}X^\natural \times Y^+\arrow{r}{f}\arrow{d} & Z^\natural \arrow{d}\\X^\natural \times W^\natural \arrow{r}\arrow[dashed]{ur}{g} & A^\sharp \times B^\sharp \end{tikzcd}$ since the left hand side is marked right anodyne and the right hand side is a cartesian fibration by assumption.", "Thus, the map $f$ is a cartesian equivalence if and only if the map $g$ is.", "Now let us take fibers at the inclusion $\lbrace a\rbrace \times B^\sharp \rightarrow A^\sharp \times B^\sharp .$", "We obtain $\begin{tikzcd}X^\natural _a \times Y^+\arrow{r}{f_a}\arrow{d} & Z^\natural _a \arrow{d}\\X^\natural _a \times W^\natural \arrow{r}\arrow{ur}{g_a} & B^\sharp \end{tikzcd}$", "Again, the left hand side is marked right anodyne, hence $f_a$ is a cartesian equivalence if and only if $g_a$ is a cartesian equivalence.", "It thus suffices to show that $g$ is a cartesian equivalence if and only if $g_a$ is.", "This now follows from the fact that cartesian equivalences between cartesian fibrations are detected pointwise.", "Proposition 5.19 A map $\lambda (a\otimes X)\rightarrow W^+$ in $\mathbf {sSet}^+/A^\sharp \times B^+$ is a weak equivalence if and only if the adjoint map $a\otimes X\rightarrow \rho W^+$ is.", "It suffices to show that for an object $a$ of $A$, a map $p\colon X^+\rightarrow B^\sharp $ and a cartesian fibration $Z^\natural \rightarrow A^\sharp \times B^\sharp $, a map $\begin{tikzcd}\lambda (a\otimes p) \arrow{rr}\arrow[swap]{dr}& & Z^\natural \arrow{dl}\\& A^\sharp \times B^\sharp &\end{tikzcd}$ is a cartesian equivalence if and only if the adjoint map
$\\begin{tikzcd}a\\otimes p {rr}& & \\rho (Z^\\natural )\\end{tikzcd}$ is a pointwise cartesian equivalence.", "In the first case we have the map $\\begin{tikzcd}A/a^\\sharp \\times X^+{rr} & & Z^\\natural \\\\& A^\\sharp \\times B^\\sharp &\\end{tikzcd}$ which is a cartesian equivalence if and only if for any point $a^{\\prime }$ of $A$ the induced map $\\begin{tikzcd}\\mathrm {Hom}(a^{\\prime },a) \\times X^+{rr} & & Z_{a^{\\prime }}^\\natural \\\\& B^\\sharp &\\end{tikzcd}$ is a cartesian equivalence.", "In the second case, we have a pointwise cartesian equivalence if and only if for any point $a^{\\prime }$ of $A$ the induced map $\\begin{tikzcd}\\mathrm {Hom}(a^{\\prime },a)\\times X^+{rr}& & \\mathrm {Map}_A(A/a^{\\prime \\sharp },Z^\\natural )\\\\& B^\\sharp &\\end{tikzcd}$ is a cartesian equivalence.", "Since we have a cartesian equivalence $\\mathrm {Map}_A(A/a^{\\prime \\sharp },Z^\\natural )\\simeq Z^\\natural _{a^{\\prime }}$ over $B^\\sharp $ , this is equivalent to the first case.", "[Proof of Theorem REF ] Since weak equivalences between Cartesian fibrations are computed fiberwise, it follows from Proposition REF that $\\rho $ reflects weak equivalences between fibrant objects.", "It thus suffices to show that the derived unit is a weak equivalence.", "It follows from Proposition REF that the derived unit is a weak equivalence for each object of the form $a\\otimes X$ .", "Since any functor is a homotopy colimit of objects of this form and the functor $\\rho $ preserves homotopy colimits by Corollary REF , it follows that the derived unit is in fact a weak equivalence." ] ]
2210.07753
[ [ "Robust Preference Learning for Storytelling via Contrastive\n Reinforcement Learning" ], [ "Abstract Controlled automated story generation seeks to generate natural language stories satisfying constraints from natural language critiques or preferences.", "Existing methods to control for story preference utilize prompt engineering which is labor intensive and often inconsistent.", "They may also use logit-manipulation methods which require annotated datasets to exist for the desired attributes.", "To address these issues, we first train a contrastive bi-encoder model to align stories with corresponding human critiques, named CARP, building a general purpose preference model.", "This is subsequently used as a reward function to fine-tune a generative language model via reinforcement learning.", "However, simply fine-tuning a generative language model with a contrastive reward model does not always reliably result in a story generation system capable of generating stories that meet user preferences.", "To increase story generation robustness we further fine-tune the contrastive reward model using a prompt-learning technique.", "A human participant study is then conducted comparing generations from our full system, ablations, and two baselines.", "We show that the full fine-tuning pipeline results in a story generator preferred over a LLM 20x as large as well as logit-based methods.", "This motivates the use of contrastive learning for general purpose human preference modeling." ], [ "Introduction", "These authors contributed equally to this work.", "Order is alphabetical.", "Correspondence to [email protected].", "Contemporary, controlled automated story generation uses intelligent systems to generate text from a minimal number of inputs—often a simple prompt and some other criteria—that conveys a coherent and consistent sequence of events.", "These events adhere to a given set of preferences without requiring manual collection of datasets for particular preferences.", "Large, pre-trained neural language models possess impressive generation capabilities.", "These models, however, struggle with coherence and controllability across longer segments of text [4], [39].", "Figure: Illustration of our technique for generating story content controlled by preferences.A language model generates candidates, which are ranked by the CARP model to produce scores.", "The scores are used to fine-tune the language model to produce higher scoring—and thus more aligned with preferences—story continuations.In this paper, we examine whether the consistency of story generation by language models can be improved for a wider class of preferences.", "Consider the following: a person wishes to prompt a story generating model to produce a narrative, but also wants the resulting story to hold some subjective property.", "This could mean the plot of said story “being sad” or “having well formed characters”, or “having a sophisticated plot twist.\"", "Conventionally, language models' few-shot prompting capabilities can be prompted with instructions about the subjective quality desired alongside the first line of the story.", "For some types of preferences, large language models—such as GPT-3 [4] or GPT-NeoX-20B [3]—can consistently meet the criteria.", "However, for other preferences, such as prompting the moral alignment of a protagonist within a story, performance can be highly variable.", "Other methods guide large language models toward particular topics by post-processing the logits.", "Examples of this include 
GeDi [13] and Plug & Blend [15]—in both cases a second model modifies the output logits of the language model and increases the frequency of a topic in the narrative.", "However, logit-based methods, such as GeDi, are often less capable when presented with complex preferences.", "This in turn limits the model's expressiveness.", "Further, these methods require potentially expensive collection of annotated datasets aligned to user preferences.", "To circumvent this explicit data requirement while allowing a large class of preferences, we leverage CARP [18].", "CARP is a contrastively-trained bi-encoder which learns to align a story with a corresponding human critique of the story.", "Previous work has shown these classifications correlate strongly with a human baseline, demonstrating its effectiveness as a general purpose preference model for generation in alignment with human preferences.", "We use Proximal Policy Optimization (PPO) [31] to fine-tune GPT-2-750M [26] to generate text consistent with a given initial criterion.", "Reward is represented as the CARP similarity of the generated story and the desired preference.", "Initial attempts indicated the reward signal generated by CARP could sometimes be exploited by the generator, resulting in collapse.", "In some other cases the generator failed to learn anything at all.", "To address this, we present CARP CoOp: a robust version of CARP leveraging prompt tuning for a stronger reward signal.", "We deploy a pseudo-labeling technique on CARP's latent space.", "This allows identification of preferences resistant or susceptible to collapse.", "Further, we find CARP CoOp is extremely data-efficient, easily incorporating previously unknown preferences with only a couple hundred examples when they are available.", "This efficiency is demonstrated on a moral alignment dataset which classifies character stories into 'good', 'neutral', or 'evil'.", "We present this as a pipeline for fine-tuning a new language model that can robustly generate text consistent with a complex set of preferences expressed in natural language.", "We evaluate our approach with a human participant study; participants were asked to match sections of generated stories to a list of preference labels ranging from moral alignment to detailed subjective imagery.", "We show that our proposed technique is better at producing story segments that capture the given preference than either prompting a much larger (GPT-NeoX-20B) language model or the GeDi method utilizing logit manipulation.", "Further, we conduct an ablation study by fine-tuning GPT-2-750M with standard CARP but without CoOp, showing that CARP alone can still improve preference adherence over the NeoX baseline.", "In summary, we make the following contributions: Introduction of a contrastively trained preference model, CARP, as a reward signal for preference learning in story generation.", "A new model, Pseudo CARP CoOp, that improves the robustness of preference learning via CARP over a wide class of preferences.", "The introduction of the Alignment CARP CoOp model which signals the moral alignment of story characters.", "This demonstrates the data efficiency of CARP CoOp when annotated data is available.", "A human subject study evaluating how well existing and proposed generation methodologies satisfy desired human preferences."
], [ "Related Work", "Much of this work is based on prior work related to neural networks—recurrent and transformer-based—capable of producing stories [28], [12], [17], [5], [8], [2].", "The controllability of neural language models is a central concern in story generation.", "It remains an open problem as how best to ensure user-desired story properties.", "Story generation can be controlled by conditioning generation on high-level plot outlines [8], [22], [27], story in-filling [6], [38], fine-tuning on goals [34], [1].", "Similarly to GeDi, Plug & Blend [15] learns modifiers to apply to language model logits to control the topic of the generation by training an auxillary topic classification model.", "CARP is a contrastively-trained bi-encoder trained on paired story text and critique text.", "Like CLIP [25], a bi-encoder for images and text, it learns to align positive examples of stories adhering to a specific critique and reject negatives.", "In the case of CARP, a positive example is a story and a critique written expressly for that story.", "A negative example is a story and any critique of other stories.", "CARP was trained on the Story Critique Dataset, composed of 1.3 million story-critique pairs.", "It has been shown CARP matches human preferences more reliably than a baseline auto-regressive model fine-tuned to predict critiques given a passage and vice-versa.", "Prior work on preference learning for language models using natural language critiques include [30], [29].", "In particular [14] shows the robustness of using a contrastive learning based signal to aid in natural language generation.", "We use reinforcement learning via PPO and the contrastive CARP model as a reward signal for generation.", "Work by [23] trained a language model to generate less non-normative language using a normative text classifier [10] to modify the loss function.", "A similar strategy was used to train a language model to generate stories that ended in a given goal [35], [1].", "Finally, we leverage prompt tuning in the construction of our final CARP CoOp pipeline.", "Existing CLIP literature shows such approaches can make contrastive-based classification models more robust.", "In particular Context Optimization (CoOp) [40] is a technique originally designed to optimize prompts for multimodal contrastive models such as CLIP.", "CoOp produces a sequence of embedded terms [V$_1$ ] [V$_2$ ] ... [V$_M$ ] [CLASS] such that the embedded sequence produces greater classification accuracy.", "The sequence of embedded terms $V_{1\\le i < M}$ is shared across all classes.", "We adapt this to text and incorporate it into an end-to-end architecture." 
], [ "Data Preliminaries", "For preference learning, we use the Story Critique dataset.", "The dataset consists of more than 80,000 unique stories with 1,378,696 total critiques.", "Every critique refers to a specific passage of the story, and so we construct 1,378,696 passage-critique pairs for training.", "The dataset is anonymized—unique identifiers including comment ID, submission IDs, URLs, and proper nouns have been removed.", "This Story Critique dataset was originally used to train the CARP [18] contrastive model.", "We also test our model on the Moral Stories dataset [7], to demonstrate our approach on datasets for which CARP was not originally trained.", "The Moral Stories dataset consists of 12,000 short narratives.", "Each element of the dataset contains a context, a moral action, the consequences of that action and an immoral action and corresponding consequences.", "Whereas the Story Critique dataset does not have critiques addressing the moral attributes of characters, we use Moral Stories to create a labeled dataset of stories and the moral alignment of the main character.", "Further where the moral stories dataset has only two label alignments we generate a dataset with three labels: ”good”, ”neutral”, and ”evil”.", "Generation is done by randomly sampling a context, action and consequence and then few-shot prompting GPT-J-6B [37] to generate and then classify story segments as having a character that is acting “good”, “evil”, or “neutral”.The concept of character alignments is drawn from fantasy stories and role-playing games such as Dungeons & Dragons  [33].", "These labels are oversimplifications of the complexities that surround moral stance but allow for users to intuitively provide story critieria such as “I want the character in my story to be good”.", "The logits associated with these labels provide a score for each label.", "In total, we produce a dataset with 17,157 story-alignment pairs.", "To create a language model that generates stories that correspond with preferences we first needed a language model that reliably generates stories/narratives.", "We use the ROCStories  [21] corpus, which consists of 100,000 five-sentence stories about common, everyday occurrences.", "We fine-tune the GPT-2-750M language model on a subset of ROCStories to produce the base model from which all successive models are adapted.", "Despite the model being trained on relatively simplistic stories, subsequent fine-tuning for preferences results in more expressive stories.", "This is due to the CARP model's tendency to shift the output distribution closer to human-written stories from the Story Critique corpus and Moral Stories corpus.", "5% of the ROCStories corpus is held out as a validation set, which is used later to assist with further fine-tuning." 
], [ "Fine-Tuning for Preferences with CARP", "The base generation model—GPT-2-750M fine-tuned on ROCStories—cannot guarantee that any generated story will conform to a set of user preferences.", "We look at the use case where a user may want to generate stories that meet some preference criteria such as being a sad story, having a lot of descriptive imagery, or involving a main character that has a good alignment.", "Larger models—GPT-J-6B, GPT-NeoX-20B, or GPT-3—can accept more complicated prompts that include preferences in addition to the first sentence of the story.", "However, these models are not required to attend to all elements in the prompt and, as will be demonstrated, can vary significantly in how much they adhere to the prompt.", "Specialized story corpora for different preferences are not common.", "To tune a story generation model to meet given preference criteria, we require some means of judging generated story segments, computing loss, and back-propagating that loss back through the language model.", "The CARP model can score a story segment based on how well a natural text target criteria applies.", "Casting a language model as a policy for generating the next event in an unfolding story, we use Proximal Policy Optimization (PPO) [31]) to fine-tune the base language model using CARP as a reward function.", "PPO uses an experience replay buffer of tuples $\\langle s_t, a_t, s_{t+1}, r\\rangle $ where $s_t$ is a state at time $t$ , $a_t$ is the action taken at time $t$ , $s_{t+1}$ is the successor state, and $r$ is the reward earned.", "PPO samples batches from the experience replay and backpropagates loss through the generative model.", "For tuning the language model we can think of the action $a_t$ as the next token to be generated given the state $s_t$ which is the sequence of previously generated tokens.", "The reward $r$ is given by CARP at the end of the trajectory (story) with the distance in log-probability from the base momentum model as an additional per token regualarizing reward.", "To generate samples for the experience replay buffer, we randomly select a story from the held-out ROCStories validation set and use the first five tokens.", "For preference learning, the best practice is to prompt the language model with a sequence within the task distribution instead of the [SOS] (start-of-sequence) token.", "The successor state is the continuation generated by the story generation model.", "The continuation is truncated at 60 tokens or the end-of-text token.", "Finally, the reward component is the score generated by CARP, given the continuation and the target criteria text.", "On every step, we sample 64 records from the experience replay.", "The model is tuned for 20k steps, which takes on the range of an hour on a single A100 GPU.", "In preference learning, it is common to freeze all but a small number of the transformer blocks [9].", "We found freezing all but the last two layers of the language model provided the best result.", "Exact hyperparameters are provided in the appendix." 
], [ "Robust CARP", "CARP produces scores along a continuous range with many middling values.", "As a resultCARP provides a relatively weak classification signal as observed during preliminary experiments.", "This makes it challenging for PPO to discriminate between continuations.", "Consequently, the CARP-tuned model often fails to learn to meet some criteria and overfits for others depending on how sensitive CARP is to the target criterion.", "Discretizing the reward scale and pushing contrastive reward scores farther apart has been a successful strategy in other text fine-tuning tasks (cf.", "[16]).To make CARP more robust to generated continuations we adopt a similar approach, whereby we use a clustering-based pseudo-labeling technique to identify a discrete set of preference categories.", "Pseudo-labeling is a common technique used for image classification using ResNet [11] embeddings (cf.", "[36], [24]).", "For a given story in the CritiqueCirclce dataset, a distribution over the set of identified discrete preferences is computed as the softmaxed cosine similarity to each of the labels.", "This is used to form a new training set.", "In Section REF we overview the technique for identifying pseudo labels for criteria classes.", "In Section REF we describe how to update CARP to use generated pseudo- and alignment labels." ], [ "Pseudo Labeling", "We generate pseudo-labels from the Story Critique dataset as follows.", "We observe that CARP's critique embeddings lie on a spherical manifold.", "Thus we apply UMAP [20] to project the embeddings from their high dimension to 2 dimensions, which was determined via a hyperparameter sweep.", "The choice of 2 dimensions provides the added benefit of visualization of the CARP model's latent space.", "Then we apply hierarchical density-based clustering (HDBSCAN) [19] on CARP's critique embeddings to identify clusters of critiques.", "Since HDBSCAN works best on vectors of low dimensionality projection to a low dimension is an important preprocessing step.", "HDBSCAN resulted in 91 clusters.", "However, HDBSCAN failed to cluster half of the reviews it was given, which are subsequently added to a “noise” cluster.", "We hand-label clusters by sampling several critiques from each.", "Any cluster where an associated story feature was ambiguous was discarded.", "Clusters with identical story features were merged.", "As an example: there were two separate but nearby clusters associated with humor.", "We observe that most of these clusters correspond with reviews of distinct story features—varying from the use of imagery, character dialogue, or humor.", "The full list of pseudo labels is in the appendix.", "Finally we compute a high-dimensional centroid for every cluster by taking a sample-wise mean of the respective latent vectors.", "For a new, arbitrary latent vector, we measure its distribution over the classifiers as a softmax over its cosine similarities to each cluster centroid.", "However, taking the distance to centroid as a measure for classifying new points is often inconsistent for HDBSCAN.", "For example, a maximal centroid similarity approach would misclassify points in the central cluster that are far from its center.", "We alleviated this issue by removing samples with distance to centroid falling below a threshold.", "Figure: Left: The latent space for CARP, represented with 2 dimensional reductions of the critique embeddings.", "Each point represents a review.", "Black correspond to points labelled as noise by clustering, while other 
, "Figure: Left: The latent space for CARP, represented with 2-dimensional reductions of the critique embeddings. Each point represents a review. Black points correspond to points labelled as noise by clustering, while other colors correspond to cluster labels. Right: The hand-picked clusters." ], [ "CARP CoOp", "In this section we describe how we update CARP to use pseudo labels.", "Whereas the original CARP takes a story/critique pair and produces a cosine similarity score, we now require CARP to produce a score for each pseudo label, corresponding to each critique class.", "However, once we start using user-provided preference classes, we no longer have the text associated with the preference as an input.", "To rectify this issue, we simultaneously learn to generate a soft-critique for each critique class.", "We incorporate the CoOp prompt tuning technique into CARP in an end-to-end fashion as shown in Figure REF (right).", "Specifically, the CoOp prompt tuning layer learns a unified embedding [V$_1$ ] [V$_2$ ] ... [V$_{M/2}$ ] [CLASS] [V$_{(M/2)+1}$ ] ... [V$_M$ ] where V$_1$ ,...,V$_M$ are shared parameters learned across all classes and [CLASS] is a token embedding that maximizes the log-likelihood of a specific critique class.", "We train two new versions of CARP (the CARP CoOp models tuned on pseudo labels and alignment labels are available at url redacted).", "Pseudo CARP CoOp is trained using pseudo labels derived from the Story Critique dataset as described in Section REF .", "Pseudo CARP CoOp has six pseudo-labels derived from the Story Critique dataset, chosen for semantic dissimilarity and separation in the embedding space.", "This choice allows for a stronger reward score during preference learning.", "Alignment CARP CoOp is trained using the augmented Moral Stories corpus described in Section .", "Alignment CARP CoOp has the three alignment-critique embeddings.", "Both models start with the pre-trained original CARP with embedding layers frozen.", "To train Pseudo CARP CoOp we filter the Story Critique stories on relative distance to the centroids of the desired pseudo-label clusters, rejecting all samples with cosine similarity below twice the average cosine similarity of the dataset.", "For the story/critique pairs that pass the filter we compute the distance of the critique to each centroid in the chosen clusters.", "We say a pair belongs to a cluster if it has minimal distance to that cluster among all clusters.", "For the chosen critique pseudo labels, we select 1000 samples belonging to each class, balancing the dataset among pseudo-labels.", "This is crucial, as otherwise we observe the model overfits to overrepresented classes and fails to provide strong signals for less-represented classes.", "Softmaxing gives a distribution over which we can minimize a KL-divergence loss between the predicted label distribution and the target label distribution.", "To train the Alignment CARP CoOp model we perform a similar procedure as above, softmaxing over the logits generated by GPT-J-6B and thresholding.", "Here we observe it is sufficient to train both Pseudo CARP CoOp and Alignment CARP CoOp on 1000 examples per label to achieve competitive downstream performance.", "This demonstrates that the CoOp method is highly data-efficient, requiring a minimal number of examples per class to fine-tune when initialized with the pretrained original CARP model.", "Figure REF shows our final process for fine-tuning the story generator language model with CARP CoOp models.", "As before, the CARP CoOp model produces scores for story segments generated by the language model.", "In this configuration, however, a criteria label is used instead of a free-text criterion, and we use the negative log-likelihood loss for the corresponding label.", "Note that the KL-divergence is computed with the softmaxed distance to centroid per pseudo class as its stationary (target) distribution.", "This is done to improve data efficiency, since we found it was common for stories in the Story Critique dataset to fit under multiple critique labels."
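, "A minimal sketch of this loss (ours; the temperature and the use of cosine similarities as logits are assumptions consistent with the text):", "import torch
import torch.nn.functional as F

def pseudo_label_loss(story_embs, class_embs, centroids, tau=1.0):
    # Predicted distribution: softmaxed cosine similarity of each story to the
    # learned per-class soft-critique embeddings.
    s = F.normalize(story_embs, dim=-1)
    pred = F.log_softmax(s @ F.normalize(class_embs, dim=-1).T / tau, dim=-1)
    # Target (stationary) distribution: softmaxed similarity to cluster centroids.
    target = F.softmax(s @ F.normalize(centroids, dim=-1).T / tau, dim=-1)
    return F.kl_div(pred, target, reduction='batchmean')"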
, "Table: Sample generated story segments." ], [ "Experimental Design", "We test our pipeline in two separate regimes: first with access to labeled preference data, with which we can fine-tune CoOp and GeDi models, and second without access to labeled preference data.", "In the latter case we leverage the Pseudo CoOp model and the pretrained GeDi model to guide models toward generating text adhering to a particular topic.", "We recruit human study participants and ask them to choose which preference from a list best describes a segment of generated story.", "In the case where we have labeled preference data, we use the alignment dataset labeling main characters as being either good, neutral, or evil.", "We call these fine-tuned models Alignment CoOp and Alignment GeDi, respectively.", "Otherwise, participants select topic labels from among family, music, accidents, religion, imagery (involving a lot of descriptiveness of visual properties), or fighting.", "Stories adhering to a certain alignment often involve a specific topic, e.g. good stories often center around family and bad stories often involve accidents.", "Hence we separate the evaluation of stories generated with a desired alignment from those with a desired topic, to prevent artificial mislabeling.", "In total we evaluate four separate classes of models: GeDi LM: The GPT-2-750M language model guided via GeDi.", "NeoX: The GPT-NeoX-20B language model prompted with the preference criteria and an initial sentence.", "Default CARP LM: The base GPT-2-750M language model fine-tuned via vanilla CARP rewards (as described in Section ).", "This model is an ablation of the full model to assess the importance of CARP CoOp.", "CARP CoOp LM: The base GPT-2-750M model fine-tuned via CARP CoOp rewards.", "This results in 36 different story generators (6 topic labels $\times $ 4 models, plus 3 alignment labels from augmented Moral Stories $\times $ 4 models).", "We recruited 25 people on the Prolific (https://prolific.co) crowdsourcing site.", "Each participant read 44 story segments drawn randomly from the set of generated story segments across all models.", "Participants took on average 18 minutes to complete the task and were paid $12.00 per hour.", "Stories are generated by randomly selecting 5 prefix tokens from a held-out ROCStories validation set and generating the continuation.", "In total, we generated 20 story segments per preference per model.", "All generated stories are available in the appendix."
], [ "Results and Analysis", "Table REF shows the aggregate percentage of the time participants correctly selected the correct criteria for each story segment read, broken down according to whether the particpants were choosing among topic labels or character alignments.", "It shows overall that participants were better at identifying the correct criteria when reading story segments generated by CARP CoOp guided LMs.", "Figure REF shows how often people identified the preference criterion used to generate each story across models.", "The degree to which participants could identify the criterion used to prompt NeoX varied widely.", "In some cases, CARP guided LMs performed better than NeoX, but not universally so.", "Pseudo CARP LM beat NeoX in creating stories with recognizable criteria in all labels except “religious” stories for which it seems to be particularly sensitive.", "Pretrained-GeDi guided models do decently well on some topics despite not being included in the original GeDi training set.", "However in other topics GeDi does quite poorly, particularly with stories involving action.", "In all cases except religion CARP CoOp guided models are preferred.", "Figure REF shows results on the character alignments criteria.", "As before, NeoX has widely varying quality of response to the different criteria.", "For all three criteria, Alignment CARP LM meets or exceeds NeoX.", "CARP guided LMs only exceed NeoX on the “good” criterion.", "GeDi guided LMs perform decently well, despite the complexity of the task, when GeDi fine-tuned with the Alignment dataset.", "However on average CARP CoOp is preferred.", "Figure: Histogram of human preference for GeDi, Neox baselines, Default CARP guided LM, and Pseudo CARP guided LM over critiques.", "Note we we do not have pseudo-labels for romance, horror so do not evaluate a Pseudo CARP guided LM on these critiques.Figure: Histogram of human preference for GeDi, Neox baseline, Default CARP guided LM, and Alignment CARP guided LM over alignments.The NeoX baseline is many times larger in terms of parameters than the other models fine-tuned with different versions of CARP below.", "The NeoX baseline also has direct access to the criterion as part of the prompt, whereas other models have the criterion implicitly represented.", "Yet despite these advantages, NeoX fails to preffered to CARP and CARP CoOp guided methods a majority of the time.", "We conclude that NeoX, despite its size, is not guaranteed to attend to prompts with preference criteria in all cases.", "A generative model over 20x as small and tuned CARP rewards proves to be more reliable in its ability to provide stories that meet given criteria.", "These models are made even more robust by discretizing the criteria into distinct classes via CARP CoOp and learning a prompt that generates a stronger, less ambiguous, reward signal.", "Additionally of interest is the observation Pseudo CARP LM and Alignment CARP LM generative models both start from a base model tuned on ROCStories, which are simple stories with simple sentence structures.", "Without ever seeing an example story from the Story Critiques or Moral Stories datasets, these models learn through reinforcement by Pseudo CARP CoOp and Alignment CARP CoOp, respectively, to prefer language and story structures that are more complex than the original ROCStories dataset.", "Finally we observe an average inter-annotator agreement on questions of $0.74$ which is decent despite the complexity and potential for multi-label applicability in the 
tasks.", "Agreement is around $0.77 \\pm .01$ among model classes except GeDi guided LMs which have noticeably lower agreement at $0.62$ .", "We speculate this is due to GeDi's poor performance for some out of distribtion topics.", "Similarly agreement on questions within each preference class is $0.75 \\pm 0.04$ with the exception of the imagery preference which is low at $0.58$ .", "We speculate this is due to the complexity of the task.", "These observations are supported by an entropy test (figure REF ) which measures annotator agreement across models.", "Figure: Entropy box-whisker plot of model generations annotations.", "Lower is better.", "Entropy is computed as the binary entropy over human participant answer proportions." ], [ "Limitations", "The dataset used to train CARP primarily uses short-form creative writing samples and in-line critiques of these samples.", "One primary limitation of the model is the inability to reason over longer sequences of text.", "As such, the model also cannot reason effectively about preferences or critiques as would be applicable to longer form creative writing.", "Like other neural text generation tasks, longer stories would be generated by successively generating segments and pre-pending to the context prompt.", "In such a case it would be feasible to change the preference or criteria mid-stream.", "Since CARP is trained solely on short form stories and in-line critiques of those stories, we lack the ability to reason about sequential or long form preferences.", "In a similar vein, it is not possible to condition the model based on sequential forms of human preferences (e.g.", "\"This story would be better if it were X, and so in turn I would change Y or Z in the writing or plot structure\").", "Tuning to the preferences of an individual editor or reader is also not possible.", "The experiments focus on six of the more well-bounded and separated critique classes in CARP embedding space.", "As such, our results represent an “upper-bound” of sorts regarding downstream language model performance.", "However, all 91 clusters are distinct enough that were we to train Pseudo CARP CoOp on all pseudo labels, the embeddings would not change much from the original CARP.", "That is, Pseudo CARP CoOp generally improves performance or performance is unchanged but rarely ever hurt by additional pseudo-label training.", "The Story-Critique dataset is non-public due to licensing issues despite best efforts to make it publicly available.", "As such, exploring the preferential biases which are undoubtedly present in CARP is not feasible for these authors though we do acknowledge the complication this presents in evaluation, assessment of ethical concerns and general limitations of the model/approach." 
], [ "Conclusions", "We have demonstrated a GPT-2-750M can be fine-tuned using reinforcement learning via contrastive model trained to correlate story segments with human text critiques.", "With a novel combination of pseudo labeling and integrated prompt tuning techniques, this fine-tuning technique becomes more robust because the reward signal becomes stronger and more discriminative.", "The resulting models are, in nearly all cases, competitive with or outperform 20x larger prompted models when it comes to observed conformance to preference and character alignment guidance.", "Our results are not unique to CARP and the Story Critiques corpus on which it was trained.", "We show how the Moral Stories dataset can be augmented to provide story control guidance revolving around the moral alignment of characters.", "The models trained from this are significantly more capable than larger models that rely strictly on a model's ability to attend to instructions in the prompt.", "It points to future directions for aligning models with human value systems.", "The work presented provides a means by which to give users of generative neural language models control over certain qualities of the outputs through the use of general purpose preference models.", "This makes language models more useful for human-initiated creative activities such as fictional story writing, but also provides a means of control for other generative text process." ], [ "Ethics Statement", "As with all neural text generation systems, our work is prone to echoing the biases present in the dataset [32] and generate non-normative text (i.e.", "in violation of social norms).", "No existing automated storytelling systems is able to entirely eliminate these biases.", "Our system is susceptible to biases in the Story Critique dataset.", "In preliminary experiments the model often conflated extreme acts of violence with humor.", "However, in some cases, the ability to provide guiding preferences can reduce the amount of toxic, prejudicially biased, and non-normative output because the preferences can push the model into latent spaces where these are less prevalent.", "With large scale language models, out of distribution (OOD) inputs or preferences present a challenge.", "If exemplar stories are not present in the training set, they will not be well understood by the model nor will a prompt or critique mentioning these topics perform well.", "For example, if most training data concern cisgendered, male vampires, then the model will struggle to provide representative stories and critiques about queer, non-binary vampires.", "Fictional stories that are presented to readers as non-fictional can be used to influence or misinform.", "We, and others who work on story generation, must take care to ensure that generated content is not presented as factual real-world occurrences.", "The ability to control neural text generation output—whether for stories or otherwise—may have important downstream applicability.", "In particular, the ability to control generation makes text generators less unpredictable for writing assistance." 
], [ "Preference Learning Hyperparameters", "Below we've included an example of CARP CoOp preference learning parameters.", "{   #LM Model args   \"lm_name\": \"gpt2-large\",   \"ref_lm_name\": \"gpt2-large\",   \"tk_name\": \"gpt2-large\",   \"num_layers_unfrozen\": 2,   \"save_folder\": 'ckpts/roc_evil_coop_model/',   \"use_lm_ckpt\": True,   \"lm_ckpt_path\": \"raw-roc-gpt2-large/\",     #Carp model args   \"carp_version\": \"coop\",   \"carp_config_path\":...   \"carp_ckpt_path\": ...     #Training args   \"steps\": 20000,     #PPO Args   \"ppo_epochs\": 4,   \"txt_in_len\": 14,   \"txt_out_len\": 60,   \"lr\": .5e-6,   \"init_kl_coef\":0.2,   \"target\": 50,   #KL Divergence target   \"horizon\":10000,   \"gamma\":1,   #Discount factor   \"lam\":0.95,   \"cliprange\": .2,   \"cliprange_value\":.2,   \"vf_coef\":.15,     #Review   \"review\": \"evil\",     #Dataset   \"data_path\": \"dataset/roc_prompts.txt\",     #Minimize or maximize   'minimize': False, }" ], [ "Pseudo labels", "[36] uses a contrastive learning setup similar to SimCLR for pretraining an image embedder on an unlabelled image dataset.", "KNN is then used to cluster the embedded dataset and label new images.", "This method outperforms previous unsupervised classification approaches by a significant margin.", "For our approach, we start by clustering embedded reviews from the Story-Critique dataset.", "For the embedding process, we use CARPs review embedder, followed by dimensionality reduction from $\\mathbb {R}^{2048}$ to $\\mathbb {R}^2$ with UMAP [20].", "We chose UMAP, as opposed to PCA or tSNE , since CARPs latent space exists on a manifold: the unit $R^{2048}$ n-sphere.", "We use cosine distance as a metric since CARP was originally trained via a cosine similarity based loss.", "We then use hierarchical density-based clustering (HDBSCAN) [19] in this low dimensional space.", "One caveat with HDBSCAN is that it can deem some of the data as noise, leaving it unclustered.", "We chose $\\mathbb {R}^{2}$ as a target space for UMAP as it minimized the proportion of embeddings left unclustered by HDBSCAN.", "The resulting clusters were found, through hand labelling, to mostly correspond with high level story features (e.g.", "use of dialogue, use of humor, discussion of music/instruments).", "To generate the centroids, we took the mean of the embeddings from the most promising clusters (with respect to hand labelling) in the original $\\mathbb {R}^{2048}$ latent space, then projected it back onto the unit n-sphere.", "The centroids can then be used as classifiers by taking cosine similarity against them in CARPs latent space for stories (with its story embedder) or reviews (with its review embedder).", "In order to generate a pseudolabeled dataset from the (story, review) pairs in the Story-Critique dataset, we first prune pairs whose embedded review does not have sufficiently high cosine similarity with any one centroid (using a threshold of twice the average cosine similarity).", "We then take the index of the centroid that maximizes similarity to the review embedding, and assign this as a pseudolabel to the paired story.", "The sample used to generate the centroids and the sample used to generate the pseudolabeled dataset are both independently sampled subsets of the Story-Critique dataset.", "Captions for the clusters, as well as more details on the hand labelling process, are available within the appendix, along with plots of the most promising clusters in the latent space.", "We also plan to release a version of 
, "Captions for the clusters, as well as more details on the hand-labelling process, are available within the appendix, along with plots of the most promising clusters in the latent space.", "We also plan to release a version of CARP tuned on pseudo labels." ], [ "COOP", "To train Pseudo CoOp for classification, we follow a similar approach to [24] in that we use CARP as a teacher to generate pseudo labels for Pseudo CoOp as a student.", "For Pseudo CoOp, we only use a subset of 6 centroids (see appendix) to generate pseudolabels.", "Additionally, the pruning described previously in the pseudo-labelling section ensures a strong training signal for Pseudo CoOp.", "Once pseudolabels are generated and assigned to the passages, we minimize a negative log-likelihood loss." ], [ "Pseudo Labelling Details", "The projected latent space and extracted hand-picked clusters can be seen in REF .", "Clusters were hand-picked by sampling reviews from each HDBSCAN cluster and manually labelling them for apparent patterns.", "Several clusters overlapped in story feature, and several had no obvious underlying feature (i.e. they were ambiguous).", "To resolve this, we dropped points within ambiguous clusters, and merged nearby clusters with the same feature.", "Below is a table with captions for all the hand-picked clusters, as well as a color to match captions in the table to clusters in the figure.", "The captions for the 6 clusters whose centroids were used for training Pseudo CoOp are bolded.", "Table: Captions for the hand-picked clusters." ] ]
2210.07792
[ [ "$t{\\bar t}H$ production in NNLO QCD" ], [ "Abstract The associated production of a Higgs boson with a top-antitop quark pair is a crucial process at the LHC since it allows for a direct measurement of the top-quark Yukawa coupling.", "We present the computation of the radiative corrections to this process at the next-to-next-to-leading order (NNLO) in QCD perturbation theory.", "This is the very first computation for a $2\\to3$ process with massive coloured particles at this perturbative order.", "We develop a soft Higgs boson approximation for loop amplitudes, which enables us to reliably quantify the impact of the yet unknown two-loop contribution.", "At the centre-of-mass energy $\\sqrt{s}=13$ TeV the NNLO corrections increase the next-to-leading order result for the total cross section by about 4% and lead to a significant reduction of perturbative uncertainties." ], [ "Introduction", "About ten years ago the ATLAS and CMS collaborations announced the discovery of a scalar resonance [1], [2] whose properties closely resembled those expected for the Higgs boson predicted by the Standard Model (SM).", "By now, the experimental data have significantly sharpened this picture by assembling information from different production and decay channels, and a framework of Higgs boson interactions has emerged that is fully consistent with the SM hypothesis [3], [4].", "Since the Higgs boson couplings to SM particles are proportional to their masses, a special role is played by the coupling to the top quark, which is the heaviest particle known to date.", "The observation of Higgs boson production in association with a top–antitop quark pair was reported by the ATLAS and CMS collaborations in 2018 [5], [6].", "This production mode allows for a direct measurement of the top-quark Yukawa coupling.", "Any deviation from the SM prediction could be a signal of New Physics.", "At present ATLAS and CMS measure the signal strength in this channel to an accuracy of ${\\cal O}(20\\%)$ , but at the end of the High-Luminosity phase the uncertainties are expected to go down to the ${\\cal O}(2\\%)$ level [7].", "The first theoretical studies of the hadroproduction of a top–antitop quark pair and a Higgs boson ($t {\\bar{t}}H$ ) in the SM were carried out in Refs.", "[8], [9] at leading order (LO) in QCD perturbation theory, and in Refs.", "[10], [11], [12], [13], [14], [15] at next-to-leading order (NLO).", "NLO EW corrections were reported in Refs.", "[16], [17], [18].", "The resummation of soft-gluon contributions close to the partonic kinematical threshold was considered in Refs.", "[19], [20], [21], [22], [23], [24].", "Full off-shell calculations with decaying top quarks were presented at NLO QCD [25] and NLO QCD+EW [26].", "The current theoretical uncertainties of the $t {\\bar{t}}H $ cross section are at the ${\\cal O}(10\\%)$ level [27].", "To match the experimental precision expected at the end of the high-luminosity phase of the LHC, next-to-next-to-leading order (NNLO) predictions in QCD perturbation theory are needed.", "At the partonic level, the NNLO calculation of $t {\\bar{t}}H$ production requires the evaluation of tree-level contributions with two additional unresolved partons in the final state, of one-loop contributions with one unresolved parton, and of purely virtual contributions.", "The required tree-level and one-loop scattering amplitudes can nowadays be evaluated with automated tools.", "The two-loop amplitude for $t {\\bar{t}}H$ production is not known.", "Being a five-leg 
amplitude involving particles with different masses, its computation is at the frontier of current possibilities [28].", "Even with all the required amplitudes available, their implementation in a complete NNLO calculation at the fully differential (exclusive) level is a highly non-trivial task because of the presence of infrared (IR) divergences at intermediate stages of the calculation.", "In particular, these divergences do not permit a straightforward implementation of numerical techniques.", "Various methods have been proposed and used to overcome these difficulties at the NNLO level (see Refs.", "[29], [30], [31], [28] and references therein).", "In this work we will use the transverse-momentum ($q_T$ ) subtraction method [32].", "The $q_T$ subtraction formalism is a method to handle and cancel the IR divergences in QCD computations at NLO, NNLO and beyond.", "The method uses IR subtraction counterterms that are constructed by considering and explicitly computing the $q_T$ distribution of the produced final-state system in the limit $q_T \\rightarrow 0$.", "If the produced final-state system is composed of non-QCD (colourless) partons (such as vector bosons, Higgs bosons, and so forth), the behaviour of the $q_T$ distribution in the limit $q_T \\rightarrow 0$ has a universal (process-independent) structure that is explicitly known up to the NNLO level through the formalism of transverse-momentum resummation [33].", "The resummation formalism can, however, be extended to the production of final states containing a heavy-quark pair [34], [35], [36].", "Exploiting such extension, the NNLO computations of top-quark and bottom-quark pair production were completed in Refs.", "[37], [38], [39], [40].", "The production of a heavy-quark pair accompanied by a colourless particle (such as the Higgs boson) does not pose any additional conceptual complications in the context of the $q_T$ subtraction formalism.", "However, the implementation of the formalism requires the computation of appropriate soft-parton contributions.", "The results of this computation at NLO and, partly, at NNLO were presented in Ref.", "[41], and the evaluation of the NNLO soft terms has been subsequently completed [42].", "Using these results and following the NNLO computation of the off-diagonal partonic channels [41], in this work we present the complete NNLO result for $t {\\bar{t}}H $ production.", "The two-loop amplitudes are not yet known, and we evaluate them by developing and using a soft Higgs boson approximation.", "As we will show, this approximation allows us to obtain the NNLO corrections with very small residual uncertainties.", "The paper is organised as follows.", "In Sec.", "we discuss the factorisation formula for the emission of a soft Higgs boson.", "In Sec.", "we illustrate our NNLO calculation of $t {\\bar{t}}H $ production, in Sec.", "we present our results for the total cross section, and in Sec.", "we summarise our findings." 
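To make the counterterm mechanism described above concrete, the following toy sketch (our own illustration, not the actual Matrix implementation, with an arbitrarily chosen singular behaviour) mimics how subtracting a counterterm that reproduces the real emission in the singular limit leaves a finite difference, and how the physical result is recovered as the technical cut-off is sent to zero:

```python
import numpy as np

# Toy illustration of a subtraction-with-cutoff computation: the "real"
# contribution diverges logarithmically as r -> 0, the counterterm matches
# that singular behaviour exactly, and the finite remainder is recovered in
# the r_cut -> 0 limit.  All functions here are invented for illustration.

A, B = -2.0, 1.0

def real(r):
    # singular part (A*ln r + B)/r plus a regular remainder g(r) = 2 - r
    return (A * np.log(r) + B) / r + (2.0 - r)

def counterterm(r):
    # reproduces only the singular behaviour of real(r) as r -> 0
    return (A * np.log(r) + B) / r

def sigma(rcut, n=200000):
    # integrate the finite difference [real - counterterm] from r_cut to 1
    r = np.linspace(rcut, 1.0, n)
    return np.trapz(real(r) - counterterm(r), r)

exact = 1.5   # integral of the regular remainder (2 - r) over [0, 1]
rcuts = np.array([0.05, 0.02, 0.01, 0.005, 0.001])
vals = np.array([sigma(r) for r in rcuts])
for r, v in zip(rcuts, vals):
    print(f"r_cut = {r:6.3f}:  sigma = {v:.6f}  (exact {exact})")

# a linear fit in r_cut extrapolates to the r_cut -> 0 result
slope, intercept = np.polyfit(rcuts, vals, 1)
print("extrapolated sigma(r_cut -> 0) =", intercept)
```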
], [ "Soft Higgs boson approximation", "We consider the hard scattering process $c(p_1)+{\\bar{c}}(p_2)\\rightarrow t(p_3)+{\\bar{t}}(p_4)+H(k)\\,,\\qquad c=q,{\\bar{q}},g,$ where the collision of the massless partons of flavours $c$ and $\\bar{c}$ and momenta $p_1$ and $p_2$ produces a top-antitop quark pair of momenta $p_3$ and $p_4$ , and a Higgs boson with momentum $k$ without additional QCD radiation.", "We denote the pole masses of the top quark and the Higgs boson by $m_t$ and $m_H$ , respectively.", "The renormalised all-order scattering amplitude for the process in Eq.", "(REF ) is denoted as ${\\cal M}(\\lbrace p_i\\rbrace ,k)$ .", "We are interested in the behaviour of ${\\cal M}(\\lbrace p_i\\rbrace ,k)$ in the limit in which the Higgs boson becomes soft ($k\\rightarrow 0$ ).", "In this limit ${\\cal M}(\\lbrace p_i\\rbrace ,k)$ fulfils the following factorisation formula: ${\\cal M}(\\lbrace p_i\\rbrace ,k)\\simeq F(\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}});m_t/{\\mu _{\\rm R}})\\,\\frac{m_t}{v}\\sum _{i=3,4} \\frac{m_t}{p_i \\cdot k}\\, {\\cal M}(\\lbrace p_i\\rbrace )\\, ,$ where $v=(\\sqrt{2}G_F)^{-1/2}=246.22$  GeV and ${\\cal M}(\\lbrace p_i\\rbrace )$ is the amplitude in which the Higgs boson has been removed.", "Therefore ${\\cal M}(\\lbrace p_i\\rbrace )$ refers to the $c{\\bar{c}}\\rightarrow t{\\bar{t}}$ amplitude for $t {\\bar{t}} $ production.", "The perturbative function $F(\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}}); m_t/{\\mu _{\\rm R}})$ can be extracted from the soft limit of the scalar form factor of the heavy quark.", "Up to ${\\cal O}(\\alpha _{\\mathrm {S}}^2)$ it reads [43], [44] $F(\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}});m_t/{\\mu _{\\rm R}})=&\\;1+\\frac{\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}})}{2\\pi } \\left(-3\\,C_F\\right)\\nonumber \\\\&\\hspace*{-70.0pt}+\\left(\\frac{\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}})}{2\\pi }\\right)^2\\left(\\frac{33}{4}\\,C_F^2-\\frac{185}{12} \\,C_F C_A+\\frac{13}{6} \\,C_F (n_L+1)-6\\,C_F\\beta _0\\ln \\frac{{\\mu _{\\rm R}}^2}{m_t^2}\\right)+{\\cal O}(\\alpha _{\\mathrm {S}}^3)\\, ,$ where $C_F$ and $C_A$ are the QCD colour factors, $n_L$ is the number of massless flavours, $\\beta _0=\\beta ^{(n_L)}_0=\\frac{11}{12}C_A-\\frac{1}{6}n_L$ is the first coefficient of the QCD beta function and $\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}})=\\alpha _{\\mathrm {S}}^{(n_L)}({\\mu _{\\rm R}})$ is the QCD running coupling with $n_L$ active flavours at the renormalisation scale ${\\mu _{\\rm R}}$Note that the result of [44] is obtained in a scheme with $n_L+1$ active flavours.", "To the purpose of our calculation, it is straightforward to check that the decoupling transformation from $\\alpha _{\\mathrm {S}}^{(n_L+1)}$ to $\\alpha _{\\mathrm {S}}^{(n_L)}$ simply amounts to replacing $\\beta _0^{(n_L+1)}$ with $\\beta _0^{(n_L)}$ in Eq.", "(REF )..", "Since virtual amplitudes for the production of a $t{\\bar{t}}$ pair are available up to two-loop order [45], the factorisation formula in Eq.", "(REF ) can be used to provide an approximation of the virtual $t{\\bar{t}}H$ amplitudes up to the same order, or as a check of future multi-loop computations of such amplitudes.", "Strictly speaking, the formula holds as $k\\rightarrow 0$, and, therefore, the limit $m_H\\ll m_t$ has to be taken as well.", "We also note that the factorisation formula in Eq.", "(REF ) can straightforwardly be extended to the production of an arbitrary number of top-quark pairsWe have numerically checked up to one-loop order by using Recola  [46], [47], [48] 
that, for a very light and soft Higgs boson, the formula holds, besides $t {\bar{t}}H $ , for instance also for $t{\bar{t}}t{\bar{t}}H$ production.", "The factorisation formula in Eq.", "(REF ) can be derived by using the eikonal approximation and following the strategy used for soft-gluon factorisation [49].", "Alternatively, it can be derived by using the techniques of the low-energy theorems [50], [51], [52], [53].", "The basic observation is that at the bare level we have $\lim _{k \rightarrow 0} {\cal M}^{\rm bare}(\lbrace p_i\rbrace ,k) = \frac{m_{t,0}}{v}\sum _{i=3,4}\,\frac{m_{t,0}}{p_i \cdot k}\, {\cal M}^{\rm bare}(\lbrace p_i\rbrace )\, ,$ where $m_{t,0}$ is the bare mass of the top quark.", "The renormalisation of the heavy-quark mass and wave function induces a modification of the Higgs coupling to the top quark.", "The bare amplitude for the emission of a soft Higgs boson from a top quark with momentum $p$ can be written as $\lim _{k \rightarrow 0} {\cal M}_{t\rightarrow tH}^{\rm bare}(p,k) = \frac{m_{t,0}}{v}\, \frac{\partial }{\partial m_{t,0}}\, {\cal M}_{t\rightarrow t}^{\rm bare}(p) \biggl |_{p^2 = m_t^2}\, ,$ where ${\cal M}^{\rm bare}_{t\rightarrow t}(p)={\bar{t}}_0(p)\left(-m_{t,0}-\Sigma (p)\right)t_0(p)\, .$", "By using the results of the ${\cal O}(\alpha _{\mathrm {S}}^2)$ contribution to the heavy-quark self-energy $\Sigma (p)$ [54], [55] we can evaluate the right-hand side of Eq.", "(REF ).", "After renormalising the mass and wave function of the heavy quark in the on-shell scheme and the strong coupling in the ${\overline{\rm MS}}$ scheme, we derive the effective coupling $F(\alpha _{\mathrm {S}}({\mu _{\rm R}});m_t/{\mu _{\rm R}})$ of Eq.", "(REF ), which describes the interaction of the top quark with a soft Higgs boson.", "To the best of our knowledge, the factorisation formula in Eq.", "(REF ) has never been presented in the literature beyond tree level, and is therefore a new result of this work."
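As a quick numerical illustration, the ${\cal O}(\alpha_S^2)$ coefficient function $F$ quoted above can be evaluated directly. The minimal Python sketch below does so; the numerical value of $\alpha_S(m_t)\approx 0.108$ is our own illustrative assumption, not an input quoted in the text:

```python
import math

# Evaluation of the perturbative coefficient F(alpha_s) of the soft-Higgs
# factorisation formula given in the text.

CF, CA = 4.0 / 3.0, 3.0
nL = 5                                   # massless flavours
beta0 = 11.0 / 12.0 * CA - nL / 6.0      # first QCD beta-function coefficient

def F(alpha_s, muR, mt=173.3):
    a = alpha_s / (2.0 * math.pi)
    L = math.log(muR**2 / mt**2)
    c1 = -3.0 * CF
    c2 = (33.0 / 4.0 * CF**2 - 185.0 / 12.0 * CF * CA
          + 13.0 / 6.0 * CF * (nL + 1) - 6.0 * CF * beta0 * L)
    return 1.0 + a * c1 + a**2 * c2

alpha_s = 0.108                          # assumed alpha_s^(5)(m_t), illustrative
print(F(alpha_s, muR=173.3))             # at muR = mt the log term vanishes
```

At $\mu_R = m_t$ this gives $F \approx 0.92$, i.e. the radiative corrections reduce the effective soft-Higgs coupling by a few percent per order.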
], [ "The computation", "The cross section for $t {\\bar{t}}H$ production can be written as $\\sigma =\\sigma _{\\rm LO}+\\Delta \\sigma _{\\rm NLO}+\\Delta \\sigma _{\\rm NNLO}+\\dots $ where $\\sigma _{\\rm LO}$ is the LO cross section, $\\Delta \\sigma _{\\rm NLO}$ is the NLO QCD correction, $\\Delta \\sigma _{\\rm NNLO}$ the NNLO QCD contribution, and so forth.", "According to the $q_T$ subtraction formalism [32] the differential cross section $d\\sigma $ can be evaluated as $d\\sigma ={\\cal H}\\otimes d\\sigma _{\\rm LO}+\\left[d\\sigma _{\\rm R}-d\\sigma _{\\rm CT}\\right]\\, .$ The first term on the right-hand side of Eq.", "(REF ) corresponds to the $q_T=0$ contribution.", "It is obtained through a convolution (denoted by the symbol $\\otimes $ ), with respect to the longitudinal-momentum fractions $z_1$ and $z_2$ of the colliding partons, of the perturbatively computable function ${\\cal H}$ with the LO cross section $d\\sigma _{\\rm LO}$ .", "The real contribution $d\\sigma _{\\rm R}$ is obtained by evaluating the cross section to produce the $t {\\bar{t}}H$ system accompanied by additional QCD radiation that provides a recoil with finite transverse momentum $q_T$ .", "When $d\\sigma $ is evaluated at NLO, $d\\sigma _{\\rm R}$ can be straightforwardly obtained by integrating the corresponding tree-level matrix elements, while at NNLO it can be evaluated for example by using the dipole subtraction formalism [56], [57], [58].", "The role of the counterterm $d\\sigma _{\\rm CT}$ is to cancel the singular behaviour of $d\\sigma _{\\rm R}$ in the limit $q_T\\rightarrow 0$ , making the square-bracket term in Eq.", "(REF ) finite.", "The explicit form of $d\\sigma _{\\rm CT}$ is completely known up to NNLO: it is obtained from the perturbative expansion of the resummation formula of the logarithmically enhanced contributions to the $q_T$ distribution of the $t {\\bar{t}}H $ system [36], [41].", "Our computation is implemented within the Matrix framework [59], suitably extended to $t {\\bar{t}}H$ production, along the lines of what has been done for heavy-quark production [38], [39], [40].", "The required tree-level and one-loop amplitudes are obtained with OpenLoops  [60], [61], [62].", "In order to numerically evaluate the contribution in the square bracket of Eq.", "(REF ), a technical cut-off $r_{\\rm cut}$ is introduced on the dimensionless variable $q_T / M$ , where $M$ is the invariant mass of the $t\\bar{t} H$ system.", "The final result, which corresponds to the limit $r_{\\rm cut}\\rightarrow 0$, is extracted by computing the cross section at fixed values of $r_{\\rm cut}$ and performing the $r_{\\rm cut}\\rightarrow 0$ extrapolation.", "More details on the procedure and its uncertainties can be found in Refs.", "[59], [41].", "We now come back to the first term on the right-hand side of Eq.", "(REF ).", "The function ${\\cal H}$ can be decomposed as ${\\cal H}=H\\delta (1-z_1)\\delta (1-z_2)+\\delta {\\cal H}\\, ,$ where the hard coefficient $H$ contains purely virtual contributions and flavour indices are understood.", "More precisely, we define $H(\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}});M^2/{\\mu _{\\rm IR}}^2)=1+\\frac{\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}})}{2\\pi } H^{(1)}(M^2/{\\mu _{\\rm IR}}^2)+\\left(\\frac{\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}})}{2\\pi }\\right)^2 H^{(2)}(M^2/{\\mu _{\\rm IR}}^2)+\\ldots $ with $H^{(n)}=\\left.\\frac{2{\\rm Re}\\left({\\cal M}^{(n)}_{\\rm fin}({\\mu _{\\rm IR}},{\\mu _{\\rm R}}){\\cal M}^{(0)*}\\right)}{|{\\cal 
M}^{(0)}|^2}\\right|_{{\\mu _{\\rm R}}=M}\\, .$ In Eq.", "(REF ) ${\\cal M}^{(0)}$ is the Born level amplitude for the $c{\\bar{c}}\\rightarrow t{\\bar{t}}H$ process, while ${\\cal M}^{(n)}_{\\rm fin}$ are the perturbative coefficients of the finite part of the renormalised virtual amplitude ${\\cal M}_{\\rm fin}({\\mu _{\\rm IR}})=\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}})\\left({\\cal M}^{(0)}+\\frac{\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}})}{2\\pi }{\\cal M}^{(1)}_{\\rm fin}({\\mu _{\\rm IR}},{\\mu _{\\rm R}})+\\left(\\frac{\\alpha _{\\mathrm {S}}({\\mu _{\\rm R}})}{2\\pi }\\right)^2 {\\cal M}^{(2)}_{\\rm fin}({\\mu _{\\rm IR}},{\\mu _{\\rm R}})+\\ldots \\right)$ after subtraction of IR singularities at the scale ${\\mu _{\\rm IR}}$ .", "The IR finite amplitude ${\\cal M}_{\\rm fin}$ in Eq.", "(REF ) is obtained from the all-order renormalised virtual amplitude ${\\cal M}$ as $|{\\cal M}_{\\rm fin}({\\mu _{\\rm IR}})\\rangle ={\\mathbf {Z}}^{-1}({\\mu _{\\rm IR}})| {\\cal M}\\rangle \\, ,$ where ${\\mathbf {Z}}({\\mu _{\\rm IR}})$ is the multiplicative renormalisation factor that removes the $\\epsilon $ poles of the multiloop amplitude, defined as in Ref. [63].", "In the case of $n=2$ the definition of $H^{(n)}$ in Eq.", "(REF ) allows us to isolate the only unknown contribution to the NNLO cross section in Eq.", "(REF ).", "Indeed at NNLO the square bracket in Eq.", "(REF ) is computable for all the partonic channels, and the other contributions, embodied in the function $\\delta {\\cal H}^{(n)}$ on the right hand side of Eq.", "(REF ), are completely known (see the discussion in Sec.", "2 of Ref. [41]).", "In particular, at NNLO $\\delta {\\cal H}$ also includes the one-loop squared contribution and the soft-parton contributions [42].", "We point out that, if all the perturbative ingredients are available, the dependence on ${\\mu _{\\rm IR}}$ exactly cancels out between $H$ and $\\delta {\\cal H}$ in Eq.", "(REF ).", "In the following we will use the factorisation formula in Eq.", "(REF ) to construct approximationsWe note that our treatment differs from that used in early NLO calculations [64] of $t {\\bar{t}}H $ production and also in the recent study of Ref. 
[65].", "Our approximation is purely soft, while in Refs.", "[64], [65] the Higgs boson is treated as a collinear parton.", "Moreover, our approximation is applied only to (part of) the virtual contribution, while all the remaining contributions are treated exactly.", "of the coefficients $H^{(1)}$ and $H^{(2)}$ .", "In order to do so, we need to introduce a prescription that, from an event containing a $t{\\bar{t}}$ pair and a Higgs boson, defines a corresponding event in which the Higgs boson is removed.", "We will use the $q_T$ recoil prescription [66], in which the momenta of the top and antitop are left unchanged, and the transverse momentum of the Higgs boson is equally reabsorbed by the initial-state partons.", "This prescription is used to evaluate the $t{\\bar{t}}$ amplitudes on the right-hand side of the factorisation formula in Eq.", "(REF ).", "We also need to define the subtraction scale ${\\mu _{\\rm IR}}$ .", "In the evaluation of the $H^{(1)}$ and $H^{(2)}$ contributions the subtraction scale ${\\mu _{\\rm IR}}$ is set to the virtuality of the $t {\\bar{t}}H$ system.", "In the IR subtracted $t{\\bar{t}}$ amplitudes required to evaluate the factorisation formula, we will set ${\\mu _{\\rm IR}}$ to the virtuality of the $t{\\bar{t}}$ pair.", "At tree-level and one-loop order the $t{\\bar{t}}$ amplitudes are obtained with OpenLoops  [60], [61], [62], while at two-loop order we use the results of Ref. [45].", "Our numerical results are obtained for proton–proton collisions at the centre-of-mass energies between $\\sqrt{s}=8\\,\\mathrm {TeV}$ and $\\sqrt{s}=100\\,\\mathrm {TeV}$.", "We use the NNLO NNPDF31 [67] parton distribution functions (PDFs) throughout, with the QCD running coupling $\\alpha _{\\mathrm {S}}$ evaluated at 3-loop order.", "The pole mass of the top quark is $m_t=173.3$  GeV, the Higgs boson mass $m_H=125$  GeV, and the Fermi constant $G_F = 1.16639\\times 10^{-5}$  GeV$^{-2}$.", "The central values of the renormalisation and factorisation scales, ${\\mu _{\\rm R}}$ and ${\\mu _{\\rm F}}$ , are fixed at ${\\mu _{\\rm R}}={\\mu _{\\rm F}}=(2m_t+m_H)/2$.", "In order to validate our soft Higgs boson approximation, we first study its quality at LO.", "We compare the LO cross sections $\\sigma _{\\rm LO}$ in the $gg$ and $q{\\bar{q}}$ partonic channels with the corresponding approximated results from the soft factorisation formula in Eq.", "(REF ).", "In the $gg$ channel the result obtained in the soft approximation is a factor of 2.3 (2) larger than the exact result at $\\sqrt{s}=13\\,(100)$  TeV.", "The situation is better for the $q{\\bar{q}}$ channel, where the soft approximation is only a factor 1.11 (1.06) larger than the exact result.", "Despite the fact that the physical Higgs boson is far from the kinematical region for which Eq.", "(REF ) is derived, the soft approximation gives the right order of magnitude of the LO cross section.", "We now move to NLO.", "We compute the contribution $\\Delta \\sigma _{\\rm NLO,H}$ of the coefficient $H^{(1)}$ to the NLO correction and its soft approximation, $\\Delta \\sigma _{\\rm NLO,H}|_{\\rm soft}$ .", "Note that in the soft approximation both numerator and denominator of Eq.", "(REF ) are evaluated in the soft limit, that is, we define $H^{(n)}|_{\\rm soft}=\\left.\\frac{2{\\rm Re}\\left({\\cal M}^{(n)}_{\\rm fin}({\\mu _{\\rm IR}},{\\mu _{\\rm R}}){\\cal M}^{(0)*}\\right)_{\\rm soft}}{|{\\cal M}^{(0)}|_{\\rm soft}^2}\\right|_{{\\mu _{\\rm R}}={\\tilde{M}}}\\, ,$ where ${\\tilde{M}}$ is the virtuality of the 
$t{\\bar{t}}$ pair.", "By using this approximation we are effectively reweighting the exact LO cross section appearing in the first term in Eq.", "(REF ).", "This is expected to be a better approximation than simply computing the numerator in the soft limit, since the effect of the soft approximation largely cancels out in the ratio.", "The results are shown in Table REF for both the $gg$ and $q{\\bar{q}}$ channels: In the $gg$ channel the $\\Delta \\sigma _{\\rm NLO,H}$ contribution is underestimated in the approximation by just about $30\\%$ on the inclusive level at both collider energies.", "We find that this deviation depends only mildly on kinematic variables, which suggests that the good agreement is not due to some accidental cancellation between different phase space regions.", "The approximation works even better for the $q{\\bar{q}}$ channel, where the exact result is underestimated by only $5\\%$We note that the soft factorisation formula in Eq.", "(REF ) accounts only for Higgs boson radiation off the final-state top quarks.", "At LO this is the only Higgs boson production mechanism in the $q{\\bar{q}}$ partonic channel, whereas in the $gg$ channel radiation off a virtual top quark contributes as well.", "This can explain the worse quality of the soft approximation in the $gg$ channel.", "Starting from NLO, however, diagrams with virtual top quarks radiating a Higgs boson are present in both partonic channels..", "The observed deviation for $\\Delta \\sigma _{\\rm NLO,H}$ can be used as an estimate of the uncertainty in our approximation of $\\Delta \\sigma _{\\rm NNLO,H}$ , which is discussed in what follows.", "Table: Hard contribution to the NLO and NNLO cross sections in the soft approximation.Results are shown for the gggg and qq ¯q{\\bar{q}} partonic channels for s=13\\sqrt{s}=13 TeV and s=100\\sqrt{s}=100 TeV.", "Exact results at LO and NLO are shown for comparison.Our results for the hard-virtual contribution to the NNLO cross section in the soft approximation $\\Delta \\sigma _{\\rm NNLO,H}|_{\\rm soft}$ are reported in the last row of Table REF .", "In the $gg$ ($q{\\bar{q}}$ ) channel $\\Delta \\sigma _{\\rm NNLO,H}|_{\\rm soft}$ is about $1\\%$ ($2-3\\%$ ) of the LO cross section.", "Therefore, we can anticipate that at NNLO the uncertainties due to the soft approximation will be rather small.", "As a first cross check we have repeated our calculation by using other variants of the recoil prescription of Ref.", "[66], for example by reabsorbing the transverse momentum of the Higgs boson entirely into one of the initial state momenta: we find that the results are very close to those obtained with the symmetric prescription, leading to a negligible uncertainty compared to that derived below.", "We have also varied the infrared subtraction scale ${\\mu _{\\rm IR}}$ at which the soft approximation is applied, by repeating the computation for ${\\mu _{\\rm IR}}=M/2$ and ${\\mu _{\\rm IR}}=2M$ while adding the exact evolution terms from $M/2$ and $2M$ to $M$ .", "The hard-virtual NNLO contribution $\\Delta \\sigma _{\\rm NNLO,H}|_{\\rm soft}$ changes by ${}^{+164\\%}_{-25\\%}$ (${}^{+142\\%}_{-20\\%}$ ) in the $gg$ channel and by ${}^{+4\\%}_{-0\\%}$ (${}^{+3\\%}_{-0\\%}$ ) in the $q{\\bar{q}}$ channel at $\\sqrt{s}=$ 13(100) TeV.", "In order to provide a conservative estimate of the uncertainty, we start from the NLO results.", "As discussed above, at NLO the soft approximation underestimates $\\Delta \\sigma _{\\rm NLO,H}$ in the $gg$ channel by $30\\%$ and by $5\\%$ 
in the $q{\\bar{q}}$ channel.", "Therefore, the uncertainty on $\\Delta \\sigma _{\\rm NNLO,H}|_{\\rm soft}$ cannot be expected to be smaller than these values.", "We multiply this uncertainty by a tolerance factor that is chosen to be 3 for both the $gg$ and the $q{\\bar{q}}$ channels, taking into account the overall quality of the approximation and the effect of the ${\\mu _{\\rm IR}}$ variations discussed above.", "In order to obtain the final uncertainty on the full NNLO cross section, we linearly combine the ensuing uncertainties from the $gg$ and $q{\\bar{q}}$ channels.", "As we will see in the next Section, the overall uncertainty on the NNLO cross section estimated in this way is still significantly smaller than the residual perturbative uncertainties." ], [ "Results", "We are now ready to present our results for the inclusive $t {\\bar{t}}H $ cross section.", "In Table REF we report LO, NLO and NNLO cross sections computed with the same setup as in Section .", "The scale uncertainties are obtained through the customary procedure of independently varying the renormalisation (${\\mu _{\\rm R}}$ ) and factorisation (${\\mu _{\\rm F}}$ ) scales by a factor of two around their central value with the constraint $0.5 \\le {\\mu _{\\rm R}}/{\\mu _{\\rm F}}\\le 2$.", "Since, as can be seen from Table REF , such scale uncertainties are highly asymmetric, especially at NNLO, in the following we will conservatively consider their symmetrised version as our estimate of perturbative uncertainty.", "More precisely, we take the maximum between the upward and downward variations and leave the central value unchanged.", "Table: LO, NLO and NNLO cross sections at s=13\\sqrt{s}=13 TeV and s=100\\sqrt{s}=100 TeV.", "The errors stated in bracket at NNLO combine numerical errors with the uncertainty due to the soft Higgs boson approximation.The errors stated in brackets at NNLO are obtained by combining the uncertainty from the soft Higgs boson approximation, estimated as discussed above, with the (much smaller) systematic uncertainty from the subtraction procedure.", "Comparing NLO and LO results we see that NLO corrections increase the LO result by $25\\%$ at $\\sqrt{s}=13$  TeV and by $44\\%$ at $\\sqrt{s}=100$  TeV.", "The impact of NNLO corrections is much smaller: they increase the NLO result by $4\\%$ at $\\sqrt{s}=13$  TeV and by $2\\%$ at $\\sqrt{s}=100$  TeV.", "The NNLO contribution of the off-diagonal channels [41] is below the permille level at $\\sqrt{s}=13$  TeV, while it amounts to about half of the computed correction at $\\sqrt{s}=100$  TeV.", "Perturbative uncertainties are reduced down to the few-percent level.", "The uncertainty from the soft Higgs boson approximation amounts to about $\\pm 0.6\\%$ at both collider energies.", "We point out that this uncertainty, although not negligible, is still significantly smaller than the remaining perturbative uncertainties.", "Figure: LO, NLO and NNLO cross sections with their perturbative uncertainties as functions of the centre-of-mass energy, computed as discussed in the text.The experimental results from ATLAS  and CMS  at s=13\\sqrt{s}=13 TeV are also shown for comparison.The lower panel illustrates the impact of NNLO corrections with respect to the NLO result.", "The inner NNLO band denotes the uncertainty from the soft approximation combined with the systematic uncertainty from the subtraction procedure.In Fig.", "REF we show the LO, NLO and NNLO cross sections and their perturbative uncertainties as functions of the centre-of-mass 
energy $\\sqrt{s}$ .", "The lower panel illustrates the relative impact of the NNLO corrections with respect to the NLO result.", "The inner NNLO band denotes the combination of the uncertainty from the soft approximation with the systematic uncertainty from the subtraction procedure.", "We see that NNLO corrections range from about $+4\\%$ at low $\\sqrt{s}$ to about $+2\\%$ at $\\sqrt{s}=100$  TeV.", "The perturbative uncertainty is reduced from $\\pm 9\\%$ at NLO in the entire range of $\\sqrt{s}$ to $\\pm 3\\%$ ($\\pm 2\\%$ ) at $\\sqrt{s}=8$  TeV ($\\sqrt{s}=100$  TeV).", "We observe that the NNLO band is fully contained within the NLO band.", "The experimental results by ATLAS (Fig.", "04a in the auxiliary material of Ref.", "[3]) and CMS [4] at $\\sqrt{s}=13$  TeV are also shown for reference in Fig.", "REF .", "We point out that for a sensible comparison with experimental data NLO EW corrections should be considered as well.", "At $\\sqrt{s}=13$  TeV NLO EW corrections increase the cross section by $1.7\\%$ with respect to the NLO result [27]." ], [ "Summary", "The associated production of a Higgs boson with a top–antitop quark pair is a crucial process at hadron colliders since it allows for a direct measurement of the top-quark Yukawa coupling.", "In this Letter we have presented first NNLO QCD results for the $t{\\bar{t}}H$ cross section in proton collisions.", "The calculation is complete except for the finite part of the two-loop virtual amplitude that is computed by using a soft Higgs boson approximation.", "Such approximation is constructed by applying a soft factorisation formula that is presented here, for the first time, up to NNLO in QCD perturbation theory.", "This formula will offer strong checks of future exact computations of two-loop amplitudes for processes in which a Higgs boson is produced in association to heavy quarks.", "Since the quantitative impact of the genuine two-loop contribution in our computation is relatively small, our approximation allows us to control the NNLO $t {\\bar{t}}H $ cross section to better than $1\\%$ .", "The NNLO corrections are moderate, and range from about $+4\\%$ at $\\sqrt{s}=13$  TeV to $+2\\%$ at $\\sqrt{s}=100$  TeV, while QCD perturbative uncertainties are reduced to the few-percent level.", "When combined with NLO EW corrections, our calculation allows us to obtain the most advanced perturbative prediction to date for the $t{\\bar{t}}H$ cross section.", "Acknowledgements We would like to thank Thomas Gehrmann and Gudrun Heinrich for helpful discussions and comments on the manuscript.", "We are most grateful to Jonas Lindert and the OpenLoops collaboration for their continuous assistance during the course of this project.", "We thank the organisers of “Loops and Legs in Quantum Field Theory” (Ettal, April 2022), where this work was initiated.", "This work is supported in part by the Swiss National Science Foundation (SNF) under contract 200020$\\_$ 188464.", "The work of SD is supported by the Italian Ministero della Università e della Ricerca (grant PRIN201719AVICI 01).", "The work of SK is supported by the ERC Starting Grant 714788 REINVENT." ] ]
2210.07846
[ [ "Transversal transport of magnons in a modified Lieb lattice" ], [ "Abstract We studied a two-band magnon insulating model whose geometry is that of a modified Lieb lattice, on which one of the sites was removed.", "There is an anisotropic ferromagnetic exchange interaction between the three nearest neighbours, and the anisotropy opens a gap in the magnon energy band structure.", "A non-vanishing Berry curvature is induced by a Dzyaloshinskii-Moriya interaction (DMI).", "The topology of the bands is trivial (in the sense of a null Chern number), but the finite Berry curvature induces Hall-like transport effects whose coefficients were calculated.", "Their dependence on temperature and applied magnetic field were studied and show resemblance with other magnon insulating systems found on the literature." ], [ "Transversal transport of magnons in a modified Lieb lattice", "Pedro Gonçalves de Oliveira$^a$ (P. G. de Oliveira) E-mail address: [email protected] (corresponding author) and Antônio Sérgio Teixeira Pires$^a$ (A. S. T. Pires) E-mail address: [email protected] $^a$ Departamento de Física, Universidade Federal de Minas Gerais, Belo Horizonte, MG, CP702, 30123-970, Brazil.", "We studied a two-band magnon insulating model whose geometry is that of a modified Lieb lattice, on which one of the sites was removed.", "There is an anisotropic ferromagnetic exchange interaction between the three nearest neighbours, and the anisotropy opens a gap in the magnon energy band structure.", "A non-vanishing Berry curvature is induced by a Dzyaloshinskii-Moriya interaction (DMI).", "The topology of the bands is trivial (in the sense of a null Chern number), but the finite Berry curvature induces Hall-like transport effects whose coefficients were calculated.", "Their dependence on temperature and applied magnetic field were studied and show resemblance with other magnon insulating systems found on the literature.", "Keywords: Spin waves; magnons; Dzyaloshinskii–Moriya interaction; transport; Hall-like effects." 
], [ "Introduction", "Topological effects on condensed matter systems have been intensely studied since the discovery of the quantum Hall effect by von Klitzing et al [1].", "The attention falls naturally on the so-called topological insulators (TIs), which are electronic systems that have gapped bands on the bulk and robust conducting edge (or surface) modes.", "These systems have different insulating phases characterized by topological indices [2], and may show Hall-like effects when subjected to a field or temperature gradient [3], [4], [5], [6], [7].", "These effects arise in materials with strong spin-orbit coupling, and can be related to the Berry phase and Berry curvature of the electronic bands [8].", "In analogy to TIs in electronic systems, there has been a great interest in topological magnon insulators (TMIs) lately.", "Magnons are spin-wave excitations of the ground state of systems of localized spins, and when magnon bands have non-trivial topology (in the form of finite Berry curvature) the same Hall-like transport effects can arise [9], [10], [11], [12], [13], [14], [15], [16], [17].", "Since magnons are bosons, magnonic systems are intrinsically different from electronic ones, which motivates their study.", "A notable fact is that, because of their uncharged nature, magnons favour dissipationless transport, being of great interest to spintronics [18].", "While topological effects in magnon systems were first discovered in a three-dimensional material with the geometry of the pyrochlore lattice [19], the main theoretical interest nowadays falls on two-dimensional lattices, where the most studied geometries are the honeycomb lattice [20], [21], [22] and the kagome lattice [12], [23], [24], [25], [14].", "The latter can be layered with triangular planes to form the pyrochlore structure.", "Other lattices that were predicted to hold topological magnon effects are the Shastry-Sutherland [26], square [27], checkerboard [28], [29], [30] and Lieb [31] lattices.", "The latter is of particular interest because it's the geometry that $CuO_2$ planes assume in high-$T_c$ cuprate superconductors [32].", "In a tight-binding approach it is a three-band model that shows a flat band [33] and a single Dirac cone in the Brillouin zone.", "The energy gap can be opened with the creation of a TI phase by an intrinsic spin-orbit interaction term [34], [35].", "The Berry curvature and anomalous Hall effect of the electronic Lieb lattice were studied by He et al [36].", "A magnon model for the same lattice was studied by Cao et al [31], where it was shown that topological insulating phases can be induced by a complex hopping between next-near-neighbours.", "The insulating system presents thermal spin Hall effect, and the temperature dependence of the thermal Hall conductivity is related to its topological phases.", "Based on the necessity of finding novel magnetic systems with non-trivial topologies, we propose a study of a modified version of the ferromagnetic Lieb lattice, where one of the inequivalent sites is removed.", "The topological effects are induced by a Dzyaloshinskii-Moriya interaction (DMI) between the next-next-near neighbours, which is equivalent to the above mentioned complex hopping.", "The DMI breaks the time-reversal symmetry (TRS), and it is the most common way of inducing non-vanishing Berry curvature and topological effects.", "We present the system's geometry and Hamiltonian in section , calculate its magnon band structure in section REF , calculate and discuss its Hall-like 
transport coefficients in section and present our conclusions in section ." ], [ "Model", "We consider a lattice with two inequivalent sites (A and B) in each square unit cell, with the following Hamiltonian: $H =&-J_{1}\sum _{\left\langle i,j\right\rangle }\left( S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j}^{y}+\lambda S_{i}^{z}S_{j}^{z}\right) -J_{2}\sum _{\left\langle \left\langle i,j\right\rangle \right\rangle \in A}\left(S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j}^{y}+\lambda S_{i}^{z}S_{j}^{z}\right)\nonumber \\& -J_{3}\sum _{\left\langle \left\langle \left\langle i,j\right\rangle \right\rangle \right\rangle }\left( S_{i}^{x}S_{j}^{x}+S_{i}^{y}S_{j}^{y}+\lambda S_{i}^{z}S_{j}^{z}\right) \nonumber \\& -D\sum _{\left\langle \left\langle \left\langle i,j\right\rangle \right\rangle \right\rangle } \nu _{ij}\left( S_{i}^{x}S_{j}^{y}-S_{i}^{y}S_{j}^{x}\right) -B\sum _{i}S_{i}^{z} $ The lattice can be seen as a modified Lieb lattice in which one of the three inequivalent sites was removed.", "The unit cell is a square, whose side is taken equal to one.", "A sketch of the lattice can be seen in Figure REF .", "There are ferromagnetic exchange interactions (solid lines) between near-neighbours $A$ and $B$ (strength $J_{1}$ ) and next-near-neighbours $A$ (strength $J_{2}$ ).", "On the diagonals between $A$ and $B$ (dashed lines) there are a $J_{3}$ exchange interaction and a Dzyaloshinskii-Moriya interaction (DMI) [37], [38] of the form $-\mathbf {D}_{ij}\cdot \left( \mathbf {S}_{i}\times \mathbf {S}_{j}\right) $ .", "The latter is responsible for the finite Berry curvature and Hall-like transport effects which are the focus of this paper.", "Figure: Modified Lieb lattice studied in this paper.", "Circles (squares) represent the A (B) sites.", "Solid lines are ferromagnetic exchange bonds ($J_1$ between A and B; $J_2$ between two A sites).", "Dashed lines represent both $J_3$ and the DM interaction.", "The unit cell is represented by the dashed square.", "The DM interaction is not forbidden by Moriya's rules, for there is no center of inversion at the midpoint of the bond [38].", "Hence we can consider the interaction as a regular DMI, and there is no need to introduce an external electric field to induce the interaction, as is the case in some lattices where the regular DMI is forbidden [39].", "Moriya's rules also restrict the DM vector to the $z$ direction, as a two-dimensional lattice is symmetric with respect to a reflection in its own plane.", "We take $\mathbf {D}_{ij}=D \nu _{ij}\mathbf {\hat{z}}$ (fourth term in (REF )), where $\nu _{ij}=\pm 1$ for different bond directions, following Figure REF .", "In all the exchange interactions there is an anisotropy $\lambda >1$ in the $z$ direction, which is responsible for the stabilization of the magnetic order in an easy-axis configuration [40], [41] (otherwise the system would have no long-range order according to the Mermin-Wagner theorem).", "The last term in (REF ) is a Zeeman interaction with a constant magnetic field $\mathbf {B}=B\mathbf {\hat{z}}$ .", "Figure: Configuration of the DMI vectors $\textbf {D}$ on the diagonal bonds, departing from an $A$ site."
], [ "Magnon bands", "We are interested in the linear spin-wave regime, so we use the zeroth order expansion of the Holstein-Primakoff representation: $S_{i}^{+} & \\approx \\sqrt{2S} \\ a_{i}\\text{ ; }S_{i}^{-} \\approx \\sqrt{2S} \\ a_{i}^{\\dagger }\\text{ ; }S_{i}^{z}=S-a_{i}^{\\dagger }a_{i}\\\\S_{j}^{+} & \\approx \\sqrt{2S} \\ b_{j}\\text{ ; }S_{j}^{-} \\approx \\sqrt{2S} \\ b_{j}^{\\dagger }\\text{ ; }S_{j}^{z}=S-b_{j}^{\\dagger }b_{j}\\nonumber $ We are studying a ferromagnetic model, but since we have two inequivalent sites we need two operators $a$ and $b$ .", "Here, the index $i$ represents sites in the sublattice $A$ and $j$ in sublattice $B$ .", "This spin-wave approximation works for low temperature, and the magnon-magnon interactions (higher order therms on the expansion) are much smaller than the linear contribution and can be neglected [42], [43].", "After transforming, keeping only the quadratic terms and applying a Fourier transform, we can write the harmonic momentum-space Hamiltonian as: $H_{harm}=S\\sum _{k}\\psi _{k}^{\\dagger }\\left( h_{0}\\hat{1}_{2}+\\hat{M}_k\\right)\\psi _{k}$ where $\\psi _{k}^{\\dagger }=\\left( a_{k}^{\\dagger }\\text{ \\ \\ }b_{k}^{\\dagger }\\right) .$ Here, $h_{0}$ and the $2\\times 2$ matrix $\\hat{M}_{k}$ are $h_{0} & =J_{2}\\left( \\lambda -\\cos k_{x}\\right) +2\\lambda \\left(J_{1}+2J_{3}\\right) +\\frac{B}{S}\\\\M_{11} & =J_{2}\\left( \\lambda -\\cos k_{x}\\right) \\nonumber \\\\M_{12} & =M_{21}^{\\ast }=-2J_{1}\\cos \\frac{k_{y}}{2}-4J_{3}\\cos k_{x}\\cos \\frac{k_{y}}{2} -4iDm_{k}\\nonumber \\\\M_{22} & =-J_{2}\\left( \\lambda -\\cos k_{x}\\right) \\nonumber $ where $m_{k}= -\\sin k_{x}\\sin \\frac{k_{y}}{2}$ .", "We can write the matrix $\\hat{M}_{k}$ as a Pauli vector $\\hat{M}=h_{x}\\left( \\mathbf {k}\\right) \\hat{\\sigma }_{x}+h_{y}\\left(\\mathbf {k}\\right) \\hat{\\sigma }_{y}+h_{x}\\left( \\mathbf {k}\\right)\\hat{\\sigma }_{z}$ so that the dispersion relation is simply $\\frac{E_{\\pm }}{\\hbar }=\\omega _{\\pm }\\left( \\mathbf {k}\\right) =S\\left( h_{0}\\left(\\mathbf {k}\\right) \\pm h\\left( \\mathbf {k}\\right) \\right)$ where $h\\left( \\mathbf {k}\\right) = \\left\\Vert \\mathbf {h} \\left( \\mathbf {k}\\right) \\right\\Vert =\\sqrt{h_{x}^{2}\\left( \\mathbf {k}\\right)+h_{y}^{2}\\left( \\mathbf {k}\\right) +h_{z}^{2}\\left( \\mathbf {k}\\right) },$ and $h_{x}\\left( \\mathbf {k}\\right) & =-2\\cos \\frac{k_{y}}{2}\\left( J_{1}+2J_{3}\\cos k_{x}\\right) \\\\h_{y}\\left( \\mathbf {k}\\right) & =4Dm_{k}\\nonumber \\\\h_{z}\\left( \\mathbf {k}\\right) & =J_{2}\\left( \\lambda -\\cos k_{x}\\right)\\nonumber $ Sometimes it is useful to perform the substitution $\\tan \\phi =\\frac{D}{J_{3}},$ arriving at $h_{x}\\left( \\mathbf {k}\\right) =-2J_{1}\\cos \\frac{k_{y}}{2}-4J_{D}\\gamma _{k}\\cos \\phi \\text{ \\ , \\ }h_{y}\\left( \\mathbf {k}\\right)=4J_{D}m_{k}\\sin \\phi $ where $\\gamma _{k}=\\cos k_{x}\\cos \\frac{k_{y}}{2}$ and $J_{D}=\\sqrt{J_{3}^{2}+D^{2}}$ .", "The phase $\\phi $ can be seen as a magnetic flux generated by the DM term.", "In the case of a pure DM interaction in the diagonals (no exchange $J_{3}$ ), the phase is $\\phi =\\pi /2$ .", "The explicit dispersion relation is $\\omega _{\\pm }\\left( \\mathbf {k}\\right) & =SJ_{2}\\left( \\lambda -\\cos k_{x}\\right) +2S\\lambda \\left( J_{1}+2J_{3}\\right) + B + \\\\& \\pm S\\sqrt{4\\cos ^{2} \\frac{k_{y}}{2} \\left( J_{1}+2J_{3}\\cos k_{x}\\right) ^{2}+J_{2}^{2}\\left( \\lambda -\\cos k_{x}\\right)^{2}+16D^{2}m_{k}^{2}} \\nonumber $ The band 
structure of the system is plotted in Figure REF for $S=1/2$ , $B=0$ , $J_1=1$ , $J_2=0.5$ , $J_3=0.2$ , $D=0.1$ and $\lambda =1.5$ .", "There is a gap of width $2SJ_{2}\left( \lambda -1\right)$ at the high-symmetry point $X^{\prime }=\left( 0,\pm \pi \right) $ .", "The gap vanishes into a Dirac point in the isotropic limit ($\lambda =1$ ) independently of the value of other parameters.", "Therefore, the anisotropy, and not the DM interaction, is responsible for the gap, in contrast with other magnon insulating systems [20], [22], [31], [30].", "For a zero applied field $B=0$ and small enough values of $D$ the valence band has a single global minimum at the $\Gamma $ point, which has zero value if and only if $\lambda =1$ .", "Hence, in the isotropic limit we have a gapless system with a Goldstone mode.", "For our purposes this possibility is absent, since we defined $\lambda > 1$ for an easy-axis configuration.", "Therefore, we have an insulating system with no Goldstone mode.", "Figure: Band structure of our system for $S=1/2$, $B=0$, $J_1=1$, $J_2=0.5$, $J_3=0.2$, $D=0.1$ and $\lambda =1.5$." ], [ "Transport coefficients", "For magnon systems it is known that an external in-plane magnetic field gradient generates not only a parallel spin current, but also a spin response in the transverse direction [9]: $j_{x}^{S}=-\sigma _{xy}\left( \partial _{y}B\right)$ This is the so-called spin Hall effect of magnons, and it is responsible for a protected spin current on the edges of a 2D magnet [16], [17].", "In a semiclassical picture, this current can be explained by the effect of the borders, which act as an effective field gradient, confining the magnons inside the magnet and generating a spin current perpendicular to the gradient (along the edge).", "The transverse conductivity $\sigma _{xy}$ can be obtained from the Berry curvature of the system [10], [11]: $\sigma _{xy}=-\frac{1}{\hbar V_{BZ}}\sum _{\lambda }\int _{BZ}dk_{x}dk_{y}\text{}n_{\lambda }\left( \mathbf {k}\right) \Omega _{\lambda }\left( \mathbf {k}\right)$ where $\lambda $ runs over the magnon energy bands.", "Here, $n_{\lambda }\left(\mathbf {k}\right) =\left( e^{\hbar \omega _{\lambda }\left( \mathbf {k}\right)/k_{B}T}-1\right) ^{-1}$ is the Bose-Einstein distribution and $\Omega _{\lambda }\left( \mathbf {k}\right) $ is the off-plane component of the (vector) Berry curvature of the band, defined as [10], [11], [8]: $\mathbf {\Omega _{\lambda }}\left( \mathbf {k}\right) =i\left\langle \nabla _{\mathbf {k}}u_{\lambda }\left( \mathbf {k}\right) \right|\times \left|\nabla _{\mathbf {k}}u_{\lambda }\left( \mathbf {k}\right) \right\rangle $ where $\left|u_{\lambda }\left( \mathbf {k}\right) \right\rangle $ is the Bloch wave function of the $\lambda $ band.", "For a two-band system parametrized by $\mathbf {h}\left( \mathbf {k}\right) =\left( h_{x}\left( \mathbf {k}\right),h_{y}\left( \mathbf {k}\right),h_{z}\left( \mathbf {k}\right)\right)$ as ours, the Berry curvatures can be written as [22], [8] $\Omega _{+}\left( \mathbf {k}\right) =-\frac{1}{2h^{3}}\mathbf {h} \cdot \left( \partial _{k_{x}}\mathbf {h} \times \partial _{k_{y}}\mathbf {h}\right)$ and $\Omega _{-}\left( \mathbf {k}\right) =-\Omega _{+}\left(\mathbf {k}\right) $ .", "For our system the definition above gives: $\Omega _{+}\left( \mathbf {k}\right) =& -2\frac{DJ_{2}}{h\left( \mathbf {k}\right)^3} \Bigg 
\\lbrace J_{1}\\left[ \\sin ^{2}k_{x}-\\left( \\lambda -\\cos k_{x}\\right) \\cos k_{x}\\sin ^{2} \\frac{k_{y}}{2} \\right] + \\\\&+2J_{3}\\left[ \\left( \\lambda -\\cos k_{x}\\right)\\left( \\sin ^{2}k_{x}\\cos ^{2} \\frac{k_{y}}{2} -\\cos ^{2}k_{x}\\sin ^{2} \\frac{k_{y}}{2} \\right) +\\sin ^{2}k_{x}\\cos k_{x}\\right] \\Bigg \\rbrace \\nonumber $ The Berry curvature of the valence band through the Brillouin zone is plotted in Figure REF .", "As we could expect, the curvature is highly concentrated around the point in the Brillouin zone where the energy gap occurs [8].", "Figure: Berry curvature of the valence band of the system for S=1/2S=1/2, J 1 =1J_1=1, J 2 =0.5J_2=0.5,J 3 =0.2J_3=0.2, D=0.1D=0.1 and λ=1.5\\lambda =1.5.The Chern number is defined as proportional to the integral of the Berry curvature in the Brillouin zone and is an integer number which labels the inequivalent topological phases in Chern insulators [2].", "Despite of the finite Berry curvature, the Chern numbers of the bands are null for any combination of parameters values (provided that $\\lambda > 0$ ), which means that the system doesn't present a non-trivial topological insulating phase.", "Nevertheless, the non-vanishing Berry curvature gives rise to Hall-like effects like the spin Hall effect shown above.", "The Berry curvature is related to transverse spin and heat currents in response to an applied temperature gradient [15].", "These are the spin Nernst effect [44], $j_{x}^{N}=-\\alpha _{xy}\\left( \\partial _{y}T\\right)$ and the thermal Hall effect [10], [11], [12] $j_{x}^{Q}=-\\kappa _{xy}\\left( \\partial _{y}T\\right) .$ Here, $j_{x}^{N}$ is the spin current and $j_{x}^{Q}\\,$ the heat current.", "The transport coefficients are defined as [10], [11], [44], [21]: $\\alpha _{xy} & =-\\frac{k_{B}}{\\hbar V_{BZ}}\\sum _{\\lambda }\\int _{BZ}dk_{x}dk_{y}\\text{ \\ }c_{1}\\left( n_{\\lambda }\\left( \\mathbf {k}\\right)\\right) \\Omega _{\\lambda }\\left( \\mathbf {k}\\right) \\nonumber \\\\\\kappa _{xy} & =-\\frac{k_{B}^{2}T}{\\hbar V_{BZ}}\\sum _{\\lambda }\\int _{BZ}dk_{x}dk_{y}\\text{ \\ }c_{2}\\left( n_{\\lambda }\\left( \\mathbf {k}\\right)\\right) \\Omega _{\\lambda }\\left( \\mathbf {k}\\right)$ with $c_{1}\\left( x\\right) & =\\left( 1+x\\right) \\ln \\left( 1+x\\right) -x\\ln x \\nonumber \\\\c_{2}\\left( x\\right) & =\\left( 1+x\\right) \\ln \\left( \\frac{1+x}{x}\\right) ^{2}-\\left( \\ln x\\right) ^{2}-2Li_{2}\\left( -x\\right)$ where $Li_{2}\\left( x\\right) $ is the Spence's dilogarithm function.", "The spin Hall conductivity $\\sigma _{xy}$ is plotted as a function of temperature for three values of $D$ in Figure REF (in all transport coefficients plots the temperature and applied magnetic field are shown in units of the exchange energy $J_1$ ).", "The plot shows a monotonically rising behavior of $\\sigma _{xy}$ , similar to what could be observed in the checkerboard lattice [30], [29].", "At zero temperature $\\sigma _{xy}$ is zero due to the absence of magnon excitations.", "That is a consequence of the fact that boson numbers are not conserved and vanish in the zero temperature limit.", "However, as the temperature increases from zero, magnons are thermally excited and $\\sigma _{xy}$ becomes finite.", "At low temperatures, the lower band dominates.", "In all temperature plots the range of the temperature axis was chosen to best show the character of the curve, but we wouldn't expect the model to work at such high temperatures.", "In fact, the linear spin-wave approximation deals with 
perturbations of the ordered ground state and only works at low temperatures.", "One way to quantify the validity of the model is to calculate the expected value of the total boson number: $\Delta =\frac{1}{V_{BZ}}\sum _{\lambda }\int _{BZ}dk_{x}dk_{y} \ n_{\lambda }$ The spin-wave approach works for $\Delta \ll S$ .", "Some values of $\Delta $ for different spins and temperatures are represented in Table REF .", "As we can see, the approximation works better for low temperatures and high values of spin.", "Figure: Spin Hall conductivity versus temperature for $S=1/2$, $B=0$, $J_1=1$, $J_2=0.5$, $J_3=0$, $\lambda =1.2$ and different values of $D$.", "Table: Expectation value of the boson number for different temperatures and spin.", "The theory parameters are $B=0$, $J_1=1$, $J_2=0.5$, $J_3=0$, $\lambda =1.2$ and $D=0.2$.", "Temperature is in units of $J_1$.", "Figure: Spin Nernst coefficient versus temperature for $S=1/2$, $B=0$, $J_1=1$, $J_2=0.5$, $J_3=0$, $\lambda =1.2$ and different values of $D$.", "In Figure REF we present the spin Nernst coefficient as a function of temperature.", "The coefficient $c_1(x)$ decreases with $x$ and this leads to a flattening of $\alpha _{xy}$ at high temperature.", "We see a monotonic response to the temperature, similar to other magnon systems like the AFM checkerboard and FM Kagome lattices [28], [25], but in contrast to the FM checkerboard and both FM and AFM honeycomb lattices [30], [22], [44].", "The thermal Hall conductivity versus temperature is shown in Figure REF , and is monotonically increasing with no sign change.", "This behaviour is similar to the FM honeycomb, AFM checkerboard and AFM Kagome lattices [21], [29], [24].", "The FM Lieb and Kagome lattices also show the same behaviour for some choices of interaction parameters; for other combinations, $\kappa _{xy}(T)$ has a local minimum/maximum and can even change sign with the increase of the temperature [31], [23].", "This heterogeneous character of $\kappa _{xy}(T)$ for different parameters may be related to different topological phases of the insulating system, indexed by the Chern numbers of the bands.", "As mentioned above, our system is an insulator with a single trivial phase (the Chern number is zero for any combination of parameters), so we would not expect any change in the character of the curve for different parameters.", "The same can be said about the other transport coefficients.", "In fact, the only change we detected in the transport coefficients was of a quantitative nature (see Figure REF for an example).", "Figure: Thermal Hall conductivity versus temperature for $S=1/2$, $B=0$, $J_1=1$, $J_2=0.5$, $J_3=0$, $\lambda =1.2$ and different values of $D$.", "Figure: Spin Hall conductivity versus temperature for $S=1/2$, $B=0$, $J_1=J_2=J_3=1$, $D=0.2$ and $\lambda =1.2$.", "This plot exemplifies the fact that the transport-coefficient curves change only quantitatively for different combinations of parameters.", "Figure REF shows $\sigma _{xy}$ as a function of the applied magnetic field $B$ perpendicular to the lattice plane, for $D=0.2$ and $T=0.1$ .", "From Eq.", "REF we see that a magnetic field does not affect the Berry curvature.", "However, as $B$ increases, $\omega _+$ and $\omega _-$ increase, and so, for a given $T$ , a smaller number of magnons is excited and $\sigma _{xy}$ decreases.", "A strong magnetic field diminishes the thermal population
difference between the bands, leading to a suppression of $\sigma _{xy}$ and thereby substantially affecting the conductivity.", "The same behaviour is noted in the other transport coefficients (Figures REF and REF ).", "This behaviour was predicted for generic ferromagnetic 2D films in the dipolar regime [13], and was also observed in theoretical calculations on the checkerboard and Kagome lattices [30], [14], [24]." ], [ "Conclusions", "We studied a two-band ferromagnetic magnon model with the geometry of a modified Lieb lattice.", "An easy-axis anisotropy induces a gap at the $X^{\prime }=(0,\pm \pi )$ point.", "When a uniform Dzyaloshinskii-Moriya interaction is present between next-next-near neighbours, we find a non-vanishing Berry curvature due to time-reversal symmetry breaking.", "The Chern numbers of the bands are null, so the insulating system has only one (trivial) phase.", "Nevertheless, the finite Berry curvature induces three magnon Hall-like effects whose transport coefficients were studied: the spin Hall effect, the thermal spin Hall effect and the spin Nernst effect.", "The response of the coefficients to the temperature is monotonic without sign change, and resembles other topological magnon systems found in the literature.", "The presence of an external off-plane magnetic field through a Zeeman interaction suppresses the transport effects, as expected from thermodynamic considerations.", "As far as we know, up to now there is no material described by the lattice studied here.", "Nevertheless, there is a variety of compounds described by the Lieb lattice.", "Thus, we believe that through a modification of the Lieb lattice one could synthesize a compound where our model could be used.", "Another possibility is in the field of optical lattices, where advances in synthesizing techniques make it possible to mimic DM interactions using laser beams." ], [ "Acknowledgments", "This work was supported by CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico)." ] ]
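For readers who want to reproduce the qualitative behaviour of the model above, a minimal numerical sketch follows (our own illustration, not the authors' code). It evaluates the $\mathbf{h}(\mathbf{k})$ vector, the dispersion $\omega_\pm$, the two-band Berry curvature and the spin Hall conductivity by direct integration over the Brillouin zone. Units with $\hbar=k_B=1$ and a BZ of $[-\pi,\pi]^2$ are assumptions consistent with the gap location $X'=(0,\pm\pi)$ quoted in the text:

```python
import numpy as np

# Parameters matching the band-structure figure of the paper
S, J1, J2, J3, D, lam, B = 0.5, 1.0, 0.5, 0.2, 0.1, 1.5, 0.0

def h_vec(kx, ky):
    # h-vector of the two-band Bloch Hamiltonian (Eqs. for h_x, h_y, h_z)
    hx = -2.0 * np.cos(ky / 2) * (J1 + 2.0 * J3 * np.cos(kx))
    hy = 4.0 * D * (-np.sin(kx) * np.sin(ky / 2))    # 4 D m_k
    hz = J2 * (lam - np.cos(kx))
    return np.stack([hx, hy, hz])

def bands(kx, ky):
    # omega_± = S (h0 ± |h|), with the Zeeman term entering through h0
    h0 = J2 * (lam - np.cos(kx)) + 2.0 * lam * (J1 + 2.0 * J3) + B / S
    h = np.linalg.norm(h_vec(kx, ky), axis=0)
    return S * (h0 - h), S * (h0 + h)

def berry_plus(kx, ky, eps=1e-5):
    # Omega_+ = -(1/2h^3) h . (d_kx h x d_ky h), derivatives by central
    # finite differences; Omega_- = -Omega_+
    h = h_vec(kx, ky)
    dhx = (h_vec(kx + eps, ky) - h_vec(kx - eps, ky)) / (2 * eps)
    dhy = (h_vec(kx, ky + eps) - h_vec(kx, ky - eps)) / (2 * eps)
    triple = np.einsum('i...,i...->...', h, np.cross(dhx, dhy, axis=0))
    return -0.5 * triple / np.linalg.norm(h, axis=0)**3

def sigma_xy(T, n=301):
    # sigma_xy = -(1/V_BZ) sum_bands  ∫ n_lambda Omega_lambda  (hbar = 1)
    k = np.linspace(-np.pi, np.pi, n)
    KX, KY = np.meshgrid(k, k, indexing='ij')
    w_minus, w_plus = bands(KX, KY)
    omega = berry_plus(KX, KY)
    n_m = 1.0 / np.expm1(w_minus / T)
    n_p = 1.0 / np.expm1(w_plus / T)
    integrand = n_m * (-omega) + n_p * omega
    dk2 = (k[1] - k[0])**2
    return -np.sum(integrand) * dk2 / (2 * np.pi)**2

print("gap at X' :", 2 * S * J2 * (lam - 1))   # = 0.25 for these parameters
print("sigma_xy(T=0.5) =", sigma_xy(0.5))
```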
2210.07757
[ [ "Fine-grained Category Discovery under Coarse-grained supervision with\n Hierarchical Weighted Self-contrastive Learning" ], [ "Abstract Novel category discovery aims at adapting models trained on known categories to novel categories.", "Previous works only focus on the scenario where known and novel categories are of the same granularity.", "In this paper, we investigate a new practical scenario called Fine-grained Category Discovery under Coarse-grained supervision (FCDC).", "FCDC aims at discovering fine-grained categories with only coarse-grained labeled data, which can adapt models to categories of different granularity from known ones and reduce significant labeling cost.", "It is also a challenging task since supervised training on coarse-grained categories tends to focus on inter-class distance (distance between coarse-grained classes) but ignore intra-class distance (distance between fine-grained sub-classes) which is essential for separating fine-grained categories.", "Considering most current methods cannot transfer knowledge from coarse-grained level to fine-grained level, we propose a hierarchical weighted self-contrastive network by building a novel weighted self-contrastive module and combining it with supervised learning in a hierarchical manner.", "Extensive experiments on public datasets show both effectiveness and efficiency of our model over compared methods.", "Code and data are available at https://github.com/Lackel/Hierarchical_Weighted_SCL." ], [ "Introduction", "Discovering novel categories based on some known categories has attracted much attention in both Natural Language Processing [26], [27] and Computer Vision [28], [9].", "Previous works assume that novel categories are of the same granularity (or of the same class hierarchy level) as known categories.", "However, in real-world scenarios, novel categories can be more fine-grained sub-categories of known ones (e.g., sports and tennis).", "A typical application of this scenario is when data analysts want to perform more fine-grained analysis on data with only coarse-grained annotations, where re-labeling fine-grained categories can be time consuming and labour intensive.", "For example, in the intent detection field, discovering more fine-grained user intents can help to provide better services to customers, but labeling fine-grained intent categories is often much more difficult than labeling coarse-grained ones, since fine-grained annotation often requires higher expertise.", "Figure: An example of proposed FCDC task (fine-grained label names need to be assigned by experts).To meet this requirement, we investigate a new scenario named Fine-grained Category Discovery under Coarse-grained supervision (FCDC).", "As shown in Figure REF , FCDC needs models to discover fine-grained categories (e.g., tennis and music) based only on coarse-grained (e.g., sports and arts) labeled data which are easier and cheaper to obtain.", "In addition to being in line with above practical needs, FCDC is also a challenging task.", "Firstly, performing FCDC requires models to increase intra-class distance to ensure fine-grained separability with only coarse-grained supervision.", "However, coarse-grained classification only focuses on inter-class distance and does not care about intra-class distance [2], so samples with the same coarse-grained labels will be close to each other and hard to be separated in fine-grained feature space.", "Secondly, since fine-grained differentiation depends on correct coarse-grained classification, 
FCDC also requires models to control inter-class distance to ensure coarse-grained separability.", "Although increasing intra-class distance can contribute to the separability of fine-grained sub-classes, it will also decrease inter-class distance, which can result in overlapping between different coarse-grained classes and therefore lead to misclassification.", "So how to control and coordinate inter-class and intra-class distance to ensure both coarse-grained and fine-grained separability is the core challenge of FCDC.", "To address the above challenges and transfer knowledge from the coarse-grained level to the fine-grained level, we propose a hierarchical weighted self-contrastive network.", "By performing different experiments on each layer of BERT, [12] find that bottom layers of BERT capture more surface features and top layers capture more high-level semantic features, which means that BERT can extract features of different granularities from shallow to deep [23].", "Inspired by this phenomenon, the core motivation of our model is to learn coarse-grained knowledge with the shallow layers of BERT and to learn more fine-grained knowledge with the remaining deep layers hierarchically.", "This motivation is not only consistent with the feature extraction process of BERT, but also corresponds to the shallow-to-deep learning process of humans.", "Specifically, we use the given coarse-grained labels to train shallow layers of BERT to learn some surface knowledge, then we propose a weighted self-contrastive module to train deep layers of BERT to learn more fine-grained knowledge based on the learned surface knowledge.", "To ensure both coarse-grained and fine-grained separability, we further propose a weighted self-contrastive module to better coordinate inter-class and intra-class distance in the fine-grained feature space.", "Specifically, given a query sample, we firstly propose a weighting strategy that weights different negative samples to control both inter-class and intra-class distance.", "Then we propose a self-contrastive strategy to generate positive samples to coordinate inter-class and intra-class distance and to avoid overlapping between different coarse-grained classes.", "We further verify the effectiveness and efficiency of our model both theoretically (Section REF ) and experimentally (Section REF ).", "The main contributions of our work can be summarized as threefold: We propose to investigate a practical scenario called Fine-grained Category Discovery under Coarse-grained supervision (FCDC); we further propose a hierarchical model to learn fine-grained knowledge from shallow to deep to facilitate the FCDC task.", "To better coordinate inter-class and intra-class distance, we propose a novel weighted self-contrastive module to ensure both coarse-grained and fine-grained separability.", "Extensive experiments on public datasets show that our model significantly advances the best compared methods with a large margin and achieves twice the training efficiency of state-of-the-art contrastive learning methods.
], [ "Contrastive learning", "Contrastive Learning (CL) aims at grouping similar samples closer and separating dissimilar samples far from each other in a self-supervised way [11], which has gained popularity in both Natural Language Processing (NLP) [22] and Computer Vision (CV) [4].", "A critical point for CL is to build high-quality positive and negative samples.", "The simplest way to construct negative samples is to use other in-batch data as negatives [5].", "Further, [10] built a dynamic queue with momentum-updated encoder to keep consistency of representations of negatives.", "However, these methods consider all negatives equally important, which may lose discriminative information of different negatives.", "As for positive samples, in CV, one common way is taking two different transformations of the same image as the query and positive sample [6].", "And in NLP, augmentation techniques such as word deletion [18], adversarial attack [24] and dropout [7] were proposed to generate positives.", "Although there are some recent works [1] using outputs from different levels of a network as positives, we have totally different motivations: they aim at providing more high-quality positives for representation learning while we aim at better adjusting intra-class and inter-class distance.", "Figure: The overall architecture of our model.", "CE means Cross Entropy." ], [ "Novel Category Discovery", "With data volume increases, novel categories especially novel fine-grained categories may be introduced into datasets [17].", "To discover novel categories without human annotation, most previous work adopted clustering methods and transfer learning methods to generate pseudo labels for unlabeled data to train their models [25].", "For example, [26] proposed an alignment strategy to perform DeepCluster [3] to discover novel categories.", "[8] proposed a mutual mean teaching network to refine noisy pseudo labels to perform unsupervised person re-identification.", "Recently, Two similar tasks as FCDC are proposed.", "[2] proposed to perform fine-grained image classification under coarse-grained supervision with angular contrastive learning, and they performed this task in a few-shot learning way which needs extra fine-grained labels for each categories.", "[17] proposed to perform fine-grained text classification with coarse-grained annotations, but they need extra fine-grained label hierarchy and corresponding label names to assist in the task.", "These two tasks both rely on extra fine-grained knowledge from human annotations, which is usually unavailable when novel categories appear in real-world applications.", "Comparatively, our FCDC is a category discovery task which does not require fine-grained knowledge and is more adapted to real world scenarios." 
], [ "Problem Formulation", "Denote by $\\mathcal {Y}_{coarse}=\\lbrace \\mathcal {C}_{1}, \\mathcal {C}_{2},...,\\mathcal {C}_{M}\\rbrace $ a set of coarse-grained classes.", "The training set of FCDC is a set of texts $\\mathcal {D}_{train}=\\lbrace \\mathcal {D}_{1},\\mathcal {D}_{2},...,\\mathcal {D}_{N}\\rbrace $ with their coarse-grained labels $\\lbrace \\textit {c}_{1},\\textit {c}_{2},...,\\textit {c}_{N}\\rbrace $ , where $\\textit {c}_{i} \\in \\mathcal {Y}_{coarse}$ .", "Different from previous tasks [2], [17] where the fine-grained label set $\\mathcal {Y}_{fine} = \\lbrace \\mathcal {F}_{1}, \\mathcal {F}_{2},...,\\mathcal {F}_{K}\\rbrace $ is already known, FCDC assumes that we do not have any prior knowledge about fine-grained labels.", "So FCDC requires models to perform clustering methods (e.g., K-Means) to discover fine-grained clusters $\\mathcal {Y}_{fine}$ with $\\mathcal {D}_{train}$ .", "Since performing clustering will assign each input with a specific cluster assignment, FCDC can also classify inputs into proper fine-grained categories $\\lbrace \\textit {f}_{1},\\textit {f}_{2},...,\\textit {f}_{N}\\rbrace $ .", "Although the number of fine-grained clusters $\\textit {K}$ can be estimated with various methods from the clustering area, we assume it is known in FCDC following previous similar works [26], [2] to make a fair comparison." ], [ "Proposed Approach", "As shown in Figure REF , our model mainly contains three components: BERT, Dynamic Queue and Momentum BERT.", "BERT is used to extract both coarse-grained and fine-grained features.", "Dynamic Queue can store more negative samples grouping by their coarse-grained labels following [2].", "Momentum BERT is used to update representations of samples in Dynamic Queue.", "Inspired by the \"shallow to deep\" learning process of humankind and the ability of pre-trained models to extract features from coarse-grained to fine-grained [12], [23], a core motivation of our model is to learn fine-grained knowledge in a progressive way.", "Specifically, our model can learn coarse-grained knowledge at shallow layers under coarse-grained supervision and learn more fine-grained knowledge at deep layers with the proposed weighted self-contrastive learning." 
], [ "Supervised Learning", "We firstly perform supervised learning on Transformer layer L of BERT to learn coarse-grained knowledge.", "Given the i-th document $\\mathcal {D}_{i}$ with its coarse-grained label $\\textit {c}_{i}$ , we use all token embeddings from the L-th layer of BERT as its shallow features.", "Then we apply a mean-pooling layer to get its shallow feature representation $h_{i}^{L}$ : $h_{i}^{L} = mean\\raisebox {0mm}{-}pooling(BERT_{L}(\\mathcal {D}_{i}))$ where $h_{i}^{L} \\in \\mathbb {R}^{h}$ is the hidden state of feature representations, $h$ is the dimension of hidden representations.", "Then we perform supervised learning with cross entropy loss on coarse-grained labels to get supervised loss $\\mathcal {L}_{sup}^{L}$ at layer L: $z_{i}^{L} = \\sigma (W_{a}h_{i}^{L} + b_{a}) \\\\\\mathcal {L}_{sup}^{L} = \\mathbb {-} \\frac{1}{N} \\sum _{i=1}^{N} log \\frac{exp(({z}_{i}^{L})^{c_{i}})}{\\sum _{j=1}^{K}exp((z_{i}^{L})^{j})}$ where $z_{i}^{L} \\in \\mathbb {R}^{M}$ is the output logits, $M$ is the number of coarse-grained classes.", "$\\sigma $ is the Tanh activation function, $W_{a} \\in \\mathbb {R}^{h*M}$ and $b_{a} \\in \\mathbb {R}^{M}$ are learnable weights and bias terms, respectively.", "$(z_{i})^{j}$ is the j-th element of output logits $z_{i}$ ." ], [ "Weighted Self-contrastive Learning", "As shown in Figure REF , denote the coarse-grained inter-class and intra-class distance by $d_{coarse}$ and $d_{fine}$ , respectively.", "Supervised learning on coarse-grained labels can ensure $d_{coarse} \\gg 0$ but will also make $d_{fine} \\approx 0$ , which can bring difficulties for fine-grained categorization.", "So how to increase $d_{fine}$ to ensure separability of fine-grained sub-classes is a severe challenge.", "Meanwhile, increasing $d_{fine}$ without restraint will result in overlapping between different coarse-grained classes and therefore lead to misclassification.", "So how to constrain $d_{fine}$ to ensure the proper classification on coarse-grained classes is the other challenge.", "In summary, our total goal can be described as: $0 \\ll d_{fine} < d_{boundary} \\ll d_{coarse}$ where $d_{boundary}$ is a threshold to ensure that samples fall into proper coarse-grained classes.", "To achieve above objectives, we propose a weighted self-contrastive module by introducing a novel generation strategy for positive samples and a weighting strategy for negative samples.", "Figure: The effectiveness of our self-contrastive module, which can ensure both intra-class and inter-class distance." 
], [ "Negative Key Generation", "Given the i-th document $\\mathcal {D}_{i}$ , we use all token embeddings from the output layer of BERT as its deep features.", "Then we apply a mean-pooling layer to get its deep feature representation $h_{i}^{o} \\in \\mathbb {R}^{h}$ : $h_{i}^{o} = mean\\raisebox {0mm}{-}pooling(BERT_{o}(\\mathcal {D}_{i}))$ In-batch negative keys   Given $h_{i}^{o}$ with its coarse-grained label $c_{i}$ as a query $q$ , we treat shallow and deep features of other in-batch samples as its in-batch negative keys, where $k_{-}^{in}(i) = \\lbrace h^{L}_{j}, h^{o}_{j} \\rbrace _{j=1...N, j \\ne i}$ .", "In this way, we can increase distance between different samples so that satisfying $ d_{fine} \\gg 0$ and $d_{coarse} \\gg 0$ .", "To satisfy $d_{coarse} \\gg d_{fine}$ , we propose a weighting strategy by giving more weights to samples with different coarse-grained labels as the query $q$ to further increase their distance.", "So $k_{-}^{in}$ can be divided into two groups according to the coarse-grained labels: $k_{-}^{diff}(i) = \\lbrace k \\in k_{-}^{in}(i) : c_{k} \\ne c_{i}\\rbrace \\\\k_{-}^{same}(i) = \\lbrace k \\in k_{-}^{in}(i) : c_{k} = c_{i}\\rbrace $ Momentum negative keys   To provide more negative keys, we build a momentum BERT and a set of dynamic queues $\\lbrace \\mathcal {Q}_{i}\\rbrace _{i=1}^{M}$ to store previous samples grouped by their coarse-grained labels following [2], where M is the number of coarse-grained classes.", "Specifically, given $h_{i}^{o}$ with its coarse-grained label $c_{i}$ as a query, we treat samples from the queue $\\mathcal {Q}_{c_{i}}$ as its momentum negative keys: $k_{-}^{m}(i) = \\lbrace k \\in \\mathcal {Q}_{c_{i}} \\rbrace $ Feature representations of samples in dynamic queues are extracted by momentum BERT, and the parameters of momentum BERT are updated in a momentum way [10].", "At the end of each iteration, the dynamic queues will be updated by adding novel samples and removing the earliest samples.", "Since samples in $k_{-}^{m}(i)$ have the same coarse-grained label as the query, they are much harder to be separated and beneficial to better representation learning.", "The overall negative keys for the query $h_{i}^{o}$ is : $k_{-}(i) = \\lbrace k_{-}^{diff}(i), k_{-}^{same}(i), k_{-}^{m}(i)\\rbrace $" ], [ "Positive Key Generation", "By weighting different negative samples, we can satisfy the condition $0 \\ll d_{fine} \\ll d_{coarse}$ .", "But increasing $d_{fine}$ without restraint will violate the condition $ d_{fine} < d_{boundary}$ and make some samples fall into incorrect coarse-grained classes.", "To solve this problem, we propose a self-contrastive strategy by treating shallow features of a query as its positive key.", "Specifically, given the deep feature representation $h_{i}^{o}$ for document $\\mathcal {D}_{i}$ as a query, we treat $h_{i}^{L}$ as its positive key: $k_{+}(i) = h_{i}^{L}$ As shown in Figure REF , after supervised learning on coarse-grained labels at layer L, $h_{i}^{L}$ can be very close to the class center of $c_{i}$ , so pulling $h_{i}^{o}$ close to $h_{i}^{L}$ will also pull $h_{i}^{o}$ close to the class center of $c_{i}$ .", "In this way, we can increase $d_{fine}$ with restraint and satisfy the condition $ d_{fine} < d_{boundary}$ without computing the specific value of $d_{boundary}$ .", "Another advantage of our self-contrastive strategy is that we can get double training efficiency than traditional data augmentation-based methods [22], [7] since we only need to perform forward and 
], [ "Weighted Self-contrastive Loss", "Given the query $h_{i}^{o}$ with its positive key $k_{+}(i)$ and negative keys $k_{-}(i)$ , the overall loss of our weighted self-contrastive module is: $\\mathcal {L}_{cont} = - \\sum _{i=1}^{N} \\log \\frac{e^{sim(h_{i}^{o}, h_{i}^{L})/\\tau }}{\\sum \\limits _{l \\in k_{-}(i)}\\!\\alpha _{l}\\!\\sum \\limits _{k \\in l}e^{sim(h_{i}^{o}, h_{k})/\\tau }}$ where $\\alpha _{l} \\in \\lbrace \\alpha _{same}, \\alpha _{diff}, \\alpha _{m} \\rbrace $ are weighting factors for the different negative keys, $sim(h_{i}, h_{j})$ is the cosine similarity $\\frac{h_{i}^{T}h_{j}}{\\left\\Vert h_{i}\\right\\Vert \\cdot \\left\\Vert h_{j}\\right\\Vert }$ and $\\tau $ is a temperature hyperparameter.", "By weighting different negative keys and selecting shallow features as the positive key, our model can satisfy the goal in Inequation REF and provide conditions for subsequent fine-grained categorization." ], [ "Theoretical Analysis", "We analyze the effectiveness of our weighted self-contrastive learning, compared with traditional contrastive learning, from the gradient perspective below.", "Self-contrastive Strategy   Compared with the traditional contrastive loss, which only aims at grouping queries and their transformations closer, our self-contrastive strategy aims at pulling queries and their shallow features closer: $\\begin{aligned}sim(h_{i}^{o}, h_{i}^{L}) := sim(h_{i}^{o}, h_{i}^{L}) + \\frac{1}{\\tau }\\end{aligned}$ Since $\\tau $ is positive, the positive similarity will increase and $h_{i}^{o}$ will be grouped closer to $h_{i}^{L}$ .", "After supervised learning on coarse-grained labels at layer L, $h_{i}^{L}$ can be close to the class center of $c_{i}$ , so pulling $h_{i}^{o}$ closer to $h_{i}^{L}$ will also pull $h_{i}^{o}$ closer to the class center of $c_{i}$ .", "So our self-contrastive strategy can guarantee that queries fall into correct coarse-grained categories, and it achieves double training efficiency since we only need to perform forward and backward propagation once to get and update both queries and positive keys.", "Weighting Strategy   Since negatives with the same coarse-grained labels as queries have larger gradients [20], the traditional contrastive loss will push these negatives farther from queries than those with coarse-grained labels different from the queries, which leads to $d_{coarse} < d_{fine}$ and is the opposite of what we need to solve the FCDC task.", "To mitigate this limitation, we propose a weighting strategy that gives more weight to samples with coarse-grained labels different from that of the query, to further increase their distance: $\\begin{aligned}sim(h_{i}^{o}, h_{j}^{o}) := sim(h_{i}^{o}, h_{j}^{o}) - \\alpha _{l} \\cdot \\mathcal {P}_{i,j}\\end{aligned}$ $\\mathcal {P}_{i,j} = \\frac{1}{\\tau } \\cdot \\frac{e^{sim(h_{i}^{o}, h_{j}^{o})/\\tau }}{{\\sum \\limits _{l \\in k_{-}(i)}\\!\\alpha _{l}\\!\\sum \\limits _{k \\in l}e^{sim(h_{i}^{o}, h_{k})/\\tau }}}$ By increasing the weighting factor $\\alpha _{l}$ for negatives with coarse-grained labels different from the queries, the corresponding similarity will decrease faster.", "So negatives with coarse-grained labels different from the queries will be pushed farther away than those with the same coarse-grained labels as the queries, which guarantees $d_{fine} < d_{coarse}$ for the FCDC task.", "Table: Statistics of datasets.", "# indicates the number of samples.", "$|\\mathcal {C}|$ and $|\\mathcal {F}|$ denote the number of coarse-grained and fine-grained classes, respectively.", 
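The loss above could be computed along the following lines; this is our own sketch, which averages rather than sums over the batch, and which assumes the negative keys have already been grouped and stacked into per-group tensors.

```python
import torch
import torch.nn.functional as F

def weighted_self_contrastive_loss(h_o, h_L, neg_groups, alphas, tau=0.1):
    """h_o: (B, h) deep queries; h_L: (B, h) shallow positive keys.
    neg_groups: dict {"diff"/"same"/"m": (B, n_l, h)} negative keys per query;
    alphas: matching weights, e.g. {"diff": 1.4, "same": 1.0, "m": 1.0}."""
    q = F.normalize(h_o, dim=-1)
    pos = torch.exp((q * F.normalize(h_L, dim=-1)).sum(-1) / tau)  # numerator
    denom = torch.zeros_like(pos)
    for name, keys in neg_groups.items():
        k = F.normalize(keys, dim=-1)
        sims = torch.einsum("bh,bnh->bn", q, k)   # cosine similarities
        denom = denom + alphas[name] * torch.exp(sims / tau).sum(-1)
    return -torch.log(pos / denom).mean()
```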
"Table: Model comparison results (%) on fine-grained categories.", "Average results over 5 runs are reported.", "'+ CE' means adding coarse-grained supervision with a cross entropy loss.", "We also perform a statistical significance test and all the p-values are less than $10^{-6}$ , which means our improvement is significant." ], [ "Overall Loss", "To further guarantee that samples are classified into proper coarse-grained categories, we also add supervised learning on coarse-grained labels at the output layer.", "So the overall loss for our hierarchical weighted self-contrastive network is: $\\mathcal {L} = \\mathcal {L}_{sup}^{o} + \\gamma _{1} \\mathcal {L}_{sup}^{L} + \\gamma _{2} \\mathcal {L}_{cont}$ where $\\mathcal {L}_{sup}^{o}$ is the cross entropy loss at the output layer.", "$\\gamma _{1}$ and $\\gamma _{2}$ are weighting factors.", "After representation learning, we simply perform the non-parametric clustering method K-Means to discover fine-grained categories based on the features extracted by the output layer of BERT." ], [ "Datasets", "To evaluate the effectiveness of our model, we conduct experiments on three public datasets.", "Statistics of the three datasets can be found in Table REF .", "CLINC is an intent classification dataset released by [15].", "Web of Science (WOS) is a paper classification dataset released by [13].", "HWU64 is a personal assistant query classification dataset released by [16]." ], [ "Implementation Details", "We use the pre-trained BERT model (bert-base-uncased) implemented in Pytorch [21] as our backbone and adopt most of its suggested hyper-parameters.", "We use the cuml library [19] to perform K-Means on a GPU to speed up calculations.", "We use the AdamW optimizer with 0.01 weight decay.", "Gradient clipping is also used with the norm 1.0.", "For hyper-parameters, the temperature $\\tau $ is set to 0.1, layer L is set to 8, the weighting factors $\\alpha _{l}$ for $\\lbrace k_{-}^{diff}(i), k_{-}^{same}(i), k_{-}^{m}(i)\\rbrace $ are set to {1.4, 1.0, 1.0}, and the weighting factors $\\lbrace \\gamma _{1}, \\gamma _{2}\\rbrace $ are set to {0.001, 0.008}.", "The training batch size is set to 128, and the testing batch size is set to 64.", "The momentum queue size for each coarse-grained category is set to 128, and the momentum factor for the Momentum BERT is set to 0.9.", "The hidden dimension $h$ is 768, the learning rate is set to $5e^{-5}$ , and the dropout rate is set to 0.1.", "The number of training epochs is set to 20.", "For a fair comparison, we use the same BERT model as ours to extract features for all compared methods and adopt the hyper-parameters from their original papers." ], [ "Compared Methods", "Baselines   We perform FCDC with BERT in an unsupervised and a coarse-supervised way as baselines.", "Self-supervised Methods   DeepCluster [3] and DeepAligned [26] are self-supervised methods using self-training techniques that achieve state-of-the-art results in many category discovery tasks.", "Ancor [2] is a self-supervised method designed for few-shot fine-grained classification with coarse-grained labels.", "SimCSE [7] and Delete One Word [22] are contrastive learning methods in NLP with different data augmentation techniques.", "Self-supervised + Cross Entropy   To investigate the influence of coarse-grained supervision on the compared models, we further add the cross entropy loss on coarse-grained labels $\\mathcal {L}_{sup}^{o}$ to their loss functions." 
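Putting the pieces together, one training step and the final discovery stage could look like the following schematic fragment; the loss terms and features reuse names from the sketches above, and we substitute scikit-learn's K-Means for the GPU implementation from cuml used in the paper.

```python
from sklearn.cluster import KMeans  # stand-in for the GPU K-Means from cuml

# Overall objective; the gamma values follow the reported implementation details.
gamma1, gamma2 = 0.001, 0.008
loss = loss_sup_o + gamma1 * loss_sup_L + gamma2 * loss_cont
loss.backward()
optimizer.step()

# After representation learning: discover the K fine-grained categories
# (K assumed known, as stated in the problem formulation) from deep features.
fine_assignments = KMeans(n_clusters=K).fit_predict(h_o.detach().cpu().numpy())
```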
], [ "Evaluation Metrics", "We use fine-grained labels as ground truth to evaluate model performance on testing sets.", "Since no fine-grained knowledge is available for the FCDC task, we need to perform clustering to discover fine-grained categories.", "Clustering performance can reflect the quality of discovered fine-grained clusters (More compact clusters usually mean better discovered categories).", "And classification performance can reflect the semantic overlap between discovered clusters and real categories.", "To evaluate clustering performance, we use two broadly used external evaluation metrics.", "Adjusted Rand Index (ARI) is used to evaluate the degree of agreement between cluster assignments and ground truth.", "And Normalized Mutual Information (NMI) is used to evaluate the mutual information between cluster assignments and ground truth.", "To evaluate classification performance, we use metric Accuracy (ACC), which is obtained from Hungarian algorithm [14] to align cluster assignments and ground truth." ], [ "Main Results", "Model performance on fine-grained categories are reported in Table REF .", "From the results we can draw following conclusions.", "Our model significantly outperforms other compared methods across all datasets.", "We contribute reasons of better performance of our model to following two points.", "Firstly, we propose a hierarchical architecture to learn fine-grained knowledge from shallow to deep, which is consistent with the feature extraction process of BERT and the shallow-to-deep learning process of humans.", "Secondly, we propose a weighted self-contrastive module to coordinate inter-class and intra-class distance so that we can better learn both coarse-grained and fine-grained knowledge.", "Self-training methods perform badly on all datasets and evaluation metrics since they rely on abundant labeled data to generate high-quality pseudo labels for unlabeled data.", "Contrastive learning methods perform better than self-training methods since they do not need fine-grained labels to initialize their models.", "However, their performance is still much worse than ours since they cannot fully utilize given coarse-grained labels to control inter-class and intra-class distance between samples.", "We also find that model performance of most compared methods increases with the addition of coarse-grained supervision, which means coarse-grained supervision can boost model performance on fine-grained tasks.", "Our model performance on coarse-grained categories are reported in Table REF .", "From the table we can see that our model gets similar classification accuracy to the upper-bound coarse-supervised BERT, which means that our model can control not only intra-class distance to ensure fine-grained separability, but also inter-class distance to ensure coarse-grained variability.", "Table: Classification accuracy (%) on coarse-grained categories on test sets.Table: Results (%) of different model variants.", "'-' means that we remove the component from our model." 
], [ "Ablation Study", "To investigate contributions of different components to our model, we compare the performance of our model with its variants on the CLINC dataset.", "As shown in Table REF , removing different components will affect model performance more or less, which indicates the effectiveness of different components of our model.", "Removing Momentum Encoder has minimal impact, since our model is insensitive to the number of negative samples.", "Removing weighting strategy or cross entropy loss at shallow layers also hurt model performance since they can help to learn coarse-grained knowledge and lay foundation for learning fine-grained knowledge.", "Above all, removing self-contrastive strategy results in a significant decrease, since it is responsible for controlling intra-class and inter-class distance.", "Figure: Training efficiency comparison." ], [ "Training Efficiency", "In this section, we compare the training efficiency of our model with contrastive methods SimCSE and Delete One Word on the CLINC dataset.", "We test all methods using the BERT base model trained on the same hardware platform (an AMD EPYC CPU 7702 and a RTX 3090 GPU) with the batch size 128.", "Average results over 100 epochs are shown in Figure REF .", "Compared with SimCSE and Delete One Word, our model gets double training efficiency both when adding or removing Momentum Encoder, which benefits from our self-contrastive strategy.", "Specifically, our model utilizes shallow features of queries as positive keys, which only needs to perform forward and backward propagation once to get and update both queries and positive keys." ], [ "Visualization", "We further visualize the learned embeddings of our model and SimCSE using t-SNE on the CLINC dataset in Figure REF .", "Our model can separate different coarse-grained categories with a larger margin than SimCSE (Top in Figure REF ), which benefits from our strategy of combining supervised learning and contrastive learning in a hierarchical way.", "Furthermore, our model can also separate different fine-grained categories with a larger margin (Bottom in Figure REF ), which benefits from the weighted self-contrastive module.", "In summary, our model can better control both inter-class and intra-class distance between samples to facilitate the FCDC task than traditional contrastive learning methods.", "Figure: TSNE visualization of learned embeddings.", "Top: coarse-grained categories.", "Bottom: fine-grained categories of one arbitrary coarse-grained category.", "Left: Ours.", "Right: SimCSE + CE." 
], [ "Choices of ", "Effect of Shallow Layer L   The influence of the choice of shallow layer L on model performance is shown in Figure REF .", "Our model achieves the best performance when L=8.", "In this way, our model can learn coarse-grained knowledge at shallow layers (L<8) and provide enough model capacity to learn fine-grained knowledge at deeper layers (L>8), which is consistent with the feature extraction process of BERT [12].", "Effect of Weighting Factors   We investigate the influence of the ratio $\\beta = \\alpha _{diff} / \\alpha _{same}$ in Figure REF (We fixed $\\alpha _{m}=1$ since it has little influence).", "As analyzed in Section REF , by giving more weights to negatives with different coarse-grained labels as queries ($\\beta > 1$ ), our weighting strategy can keep these negatives further away from queries and guarantee $d_{fine} < d_{coarse}$ .", "On the contrary, when $\\beta < 1$ , negatives with the same coarse-grained labels as queries will be further away from the queries, which can hurt our model performance.", "Figure: Effect of shallow layer L.Figure: Effect of ratio β=α diff /α same \\beta = \\alpha _{diff} / \\alpha _{same}." ], [ "Conclusion", "In this paper, we investigate a novel task named Fine-grained Category Discovery under Coarse-grained supervision (FCDC), which can reduce significant labeling cost and adapt models to novel categories of different granularity from known ones.", "We further propose a hierarchical weighted self-contrastive model to approach the FCDC task by better controlling intra-class and inter-class distance.", "By performing supervised and contrastive learning on shallow and deep layers of pre-trained models, our model can learn fine-grained knowledge from shallow to deep with only coarse-grained supervision.", "Extensive experiments on public datasets show that our approach is more effective and efficient than compared methods." ], [ "Limitations", "The limitations of our method lies in two aspects.", "Firstly, following previous works, we need to know the number of fine-grained clusters K as prior knowledge, which is usually difficult to get in real-world scenarios.", "Secondly, our method cannot predict semantic meanings (e.g., label names) of discovered fine-grained categories, which is also an unexplored question in the field of novel category discovery.", "This work was supported by National Key Research and Development Program of China (2020AAA0108800), National Natural Science Foundation of China (62137002, 61721002, 61937001, 61877048, 62177038, 62277042).", "Innovation Research Team of Ministry of Education (IRT_17R86), Project of China Knowledge Centre for Engineering Science and Technology.", "MoE-CMCC “Artifical Intelligence” Project (MCM20190701), Project of Chinese academy of engineering “The Online and Offline Mixed Educational ServiceSystem for 'The Belt and Road’ Training in MOOC China”.", "“LENOVO-XJTU” Intelligent Industry Joint Laboratory Project." ] ]
2210.07733
[ [ "Bandwidth-efficient distributed neural network architectures with\n application to body sensor networks" ], [ "Abstract In this paper, we describe a conceptual design methodology to design distributed neural network architectures that can perform efficient inference within sensor networks with communication bandwidth constraints.", "The different sensor channels are distributed across multiple sensor devices, which have to exchange data over bandwidth-limited communication channels to solve, e.g., a classification task.", "Our design methodology starts from a user-defined centralized neural network and transforms it into a distributed architecture in which the channels are distributed over different nodes.", "The distributed network consists of two parallel branches of which the outputs are fused at the fusion center.", "The first branch collects classification results from local, node-specific classifiers while the second branch compresses each node's signal and then reconstructs the multi-channel time series for classification at the fusion center.", "We further improve bandwidth gains by dynamically activating the compression path when the local classifications do not suffice.", "We validate this method on a motor execution task in an emulated EEG sensor network and analyze the resulting bandwidth-accuracy trade-offs.", "Our experiments show that the proposed framework enables up to a factor 20 in bandwidth reduction with minimal loss (up to 2%) in classification accuracy compared to the centralized baseline on the demonstrated motor execution task.", "The proposed method offers a way to smoothly transform a centralized architecture to a distributed, bandwidth-efficient network amenable for low-power sensor networks.", "While the application focus of this paper is on wearable brain-computer interfaces, the proposed methodology can be applied in other sensor network-like applications as well." 
], [ "Context and contributions", "In the last few years, technological advances such as miniaturization of microprocessors and energy-efficient batteries have increasingly enabled the usage of wearable, physiological sensors for ambulant health monitoring.", "Many applications however, will require recording of different data modalities or multiple channels of the same data type at different locations to extract meaningful patterns.", "This naturally leads to the concept of a body-sensor network (BSN), where the different sensors wirelessly share their data and solve a given task in a distributed fashion.", "Well-known applications include microphone arrays to detect heart and lung body sounds [1], electroencephalography (EEG) sensor networks [2],[3] and other distributed or modular neuro-sensor platforms [4].", "A major constraint in the design of these networks is that they should be energy-efficient, enabling a maximal battery lifetime.", "In BSNs, the typical energy bottleneck will be the wireless transmission of the data between the sensors and/or a fusion center [5],[2],[6].", "Simply offloading all the recorded data to the cloud where they can be jointly processed will thus severely hamper the battery lifetime, presenting the need for different, bandwidth-efficient solutions [2],[7].", "In this paper, we will present a framework to design deep neural network (DNN) architectures that deal with such bandwidth constraints.", "The framework is generic, in the sense that we make no prior assumptions on the DNN architecture itself.", "We start from a user-defined centralized neural network model for inference from multi-sensor input, and explain how this initial model can be used to build a distributed model that solves the same inference task.", "This conceptual methodology is then illustrated and analyzed in an EEG-based brain-computer interface task, which acts as a driver application driver throughout this paper.", "EEG is a widely used, noninvasive way to measure the electrical activity of the brain.", "These signals can be harnessed for various purposes, including the monitoring and analysis of sleeping patterns [8], epileptic seizure detection [9], the study of brain disorders after injuries [10] and brain-computer interfaces (BCI), which allows for direct communication between the human brain and external machines [11],[12],[13].", "Traditional EEG requires patients to wear a bulky EEG cap with many wires that are connected to the acquisition device.", "This means that monitoring the patient's EEG can typically only be done in a hospital or laboratory environment.", "These limitations of classical EEG have led to a growing desire for ambulatory EEG, allowing for continuous neuromonitoring in daily life [14].", "A major enabler for these purposes is the development of mini-EEG devices: concealable, lightweight, miniaturized devices that are deployed behind or in the ear [15],[16],[17] or attached to the scalp [18],[19].", "A single device would only be able to record one or a few EEG channels from its local area, hampering the performance in many of the previously mentioned applications.", "To mitigate this, multiple mini-EEG devices at different locations can be organized in a so-called wireless EEG sensor network (WESN) [2],[20],[19].", "Each device can then perform some local processing on its own channels, before sharing its information with the other devices to perform the original, centralized EEG tasks in a distributed manner.", "This shift from one EEG cap towards a 
network of wireless, miniaturized devices affects the design of the machine learning models we use to perform these tasks in two major ways.", "Firstly, to guarantee a comfortable user experience, we are only able to use a limited number of devices.", "We thus first need to solve an EEG channel selection task, determining how many and where these devices should be placed, minimizing the number of devices while maximizing the performance of the desired EEG task [21],[20].", "Secondly, the recorded channels are now stored on separate devices, meaning we cannot perform multi-channel processing without sharing the recorded data across the devices first.", "Simply transmitting the full, raw channels to a fusion center would incur enormous energy costs and severely hamper the battery life of the mini-EEG devices [2],[5].", "To achieve a viable battery life, we will thus need to limit the amount of data each device in the WESN can share and take this bandwidth constraint into account during the model design.", "Recently, deep learning or DNN models have become more and more popular in the processing and analysis of physiological signals, including EEG [22].", "This trend, in combination with the shift towards low-power wearables, cultivates a need to redesign such DNNs towards distributed architectures that can operate in modular sensor platforms, such as WESNs and other body-sensor networks.", "While a generic methodology for the first problem (i.e., channel selection and sensor placement for DNN-based inference) has been proposed in [21], generic methodologies for the second problem (i.e., translating DNNs to bandwidth-efficient modular architectures) are still largely lacking.", "In this paper, we study how we can adapt existing centralized DNN architectures to make them amenable for use in distributed settings with communication bandwidth constraints such as in body-sensor networks (and WESNs in particular).", "In the resulting distributed architecture, the sensor nodes learn to locally process the data, compress them to a desired degree and transmit them to a fusion center, where the compressed data are finally fused to solve the desired task.", "To validate the applicability of this method, we study its performance on a motor execution EEG task and analyze the bandwidth-versus-performance trade-off.", "The main contributions of this paper are: We introduce a design framework that maps a given centralized neural network architecture to a distributed architecture that is able to run efficiently on a bandwidth-constrained sensor network.", "We combine this framework with the early exit mechanism of [23] to further decrease the bandwidth by deciding on a per-sample basis how much data needs to be transmitted to the fusion center.", "We demonstrate the usage of our method by taking a centralized neural network architecture solving a given motor execution EEG classification task and decentralizing it.", "We analyze the bandwidth-accuracy trade-offs of the resulting distributed architecture and demonstrate that with only small performance losses compared to the centralized baseline, substantial bandwidth gains can be achieved." 
], [ "Distributed deep learning: related work", "The literature of distributed deep learning is diverse and covers many different topics.", "A first class of methods distributes networks across multiple compute nodes, either to enable training of a single very large network that would otherwise not fit in memory on multiple standard CPU's or GPU's [24] or to accelerate training by training a network on multiple devices in parallel and aggregating the gradient updates on each device [25].", "A second line of research aims to map centralized models to a number of hardware devices to perform efficient inference.", "For instance, Bhardwaj et al.", "[26] employ multiple model compression techniques to map a single network to a number of smaller student networks with a limited memory footprint, while also minimzing the inter-device communication cost.", "Stahl et al.", "[27] have a similar goal, but instead employs layer partitioning to perform the exact same operations as in the original network, but spread these out across devices.", "All the previous work has one major factor in common, which makes them not applicable to our problem statement.", "They share the assumption that either all devices have access to all the input data or all the input data are generated in a central location and the energy of communicating this data to the worker nodes is not a constraint.", "The literature on deep learning where different channels or modalities of the input data itself is split across different devices is quite limited.", "The closest work to ours in this regard is the distributed deep neural network (DDNN) framework of Teerapittayanon et al.", "[23].", "Similarly to our setting, the input data is distributed across devices.", "The local classifications of each device are aggregated and the confidence in this prediction is estimated with the normalized entropy of the resulting class probability distribution.", "If the confidence is high enough, this result - which only required the transmission of a classification vector of each node - is taken as the final result.", "Otherwise, a processed version of the data of each local device is forwarded to the cloud, thereby requiring a larger bandwidth.", "The main idea is thus to reduce bandwidth by only forwarding difficult samples that can't be correctly classified locally.", "Designing the optimal bandwidth-performance trade-off for a given application is then done by setting the desired confidence threshold of the local classification, with higher required confidence resulting in more data streamed to the cloud, but fewer misclassifications.", "In contrast, the main focus of this work will be to reduce bandwidth by designing an efficient architecture that compresses the data on our nodes to the desired degree, as will be described in the next section.", "However, both approaches are orthogonal to each other and we will ultimately combine them to gain even greater bandwidth gains without losing too much accuracy." ], [ "Paper Outline", "The paper is organized as follows.", "In section we introduce our framework and show how to combine it with the early exiting mechanism of [23].", "Section presents the WESN use case, providing an overview of the used EEG dataset, how we emulate the environment of a WESN and design the neural network architecture for this specific use case.", "We then present our experimental results in section and finish with some conclusion in section ." 
], [ "Proposed Method", "In this section, we will conceptually describe the proposed bandwidth-efficient distributed architecture.", "We will first give a conceptual overview of the proposed architecture in Subsection REF , while in Subsection REF , we will explain in more detail how a centralized neural network can be cast to this distributed architecture and how it is trained.", "In Section and , we will then apply this architecture design framework to a specific EEG inference task." ], [ "Proposed architecture", "To build an architecture that minimizes the communication cost in a wireless sensor network, we propose a scheme where each local sensor (henceforth referred to as a node) compresses its recorded data as much as possible before sending it to a fusion center, where the final processing and inference will take place.", "The idea is to design an architecture that is able to interpolate between the two extreme cases of minimal and maximal communication, which also corresponds to minimal and maximal task accuracy.", "As the point of minimal communication, we take the setting where each node only transmits its local classification, as this would reasonably be the most condensed task-relevant information it could share.", "This setting, represented by the ClassFuse branch (orange path in Figure REF ), will serve as the basis of our architecture, with the other modules serving to trade extra bandwidth for an improved performance compared to this minimum-communication baseline.", "The point of maximal communication corresponds to each node simply transmitting its full recorded data.", "This would allow the fusion center to perform the same multi-channel processing as in the centralized case and achieve maximal accuracy.", "However, to achieve a trade-off between bandwidth and performance, we compress each signal at the local node, after which they are reconstructed at the fusion center.", "This is the task of the CompressFuse branch (blue path in Figure REF ).", "This CompressFuse branch essentially mimics the original centralized network, although it operates on data that is distorted through the compression-reconstruction scheme.", "Finally, the results of the two branches are fused to provide a final output.", "A high-level schematic of such an architecture is illustrated in Figure REF .", "Another way to look at this architecture is as a combination of early and late fusion, respectively represented by the CompressFuse and ClassFuse branches.", "We will now delve deeper into the design of these modules for our WESN case and how they are trained.", "Figure: Illustration of the distributed neural network architecture and its modules.", "The orange ClassFuse branch lets each node perform its own local classification with a single-channel neural network and combines these at the fusion center.", "The blue CompressFuse branch compresses each channel locally and reconstructs the full multi-channel signal at the fusion center.", "This reconstruction is then classified by the multi-channel neural network.", "The purple FullFuse module combines these two branches and performs the final classification." ], [ "Design of the modules", "In this subsection, we will delve deeper into how we can use this framework to transform a given centralized architecture that has access to all input channels simultaneously, into a decentralized version that performs the same task." 
], [ "ClassFuse", "The task of the ClassFuse branch (orange path in Figure REF is to let each node perform local classification and optimally fuse them together at the fusion center.", "The local classifications are performed with the original centralized architecture,where the input dimensions are reduced with respect to the number of local channels at each node.", "Each node then outputs the class scores as a log-probability vector to the fusion center.", "At the fusion center, these probability vectors are fused into a final class probability vector.", "This fusion is performed with a simple multilayer perceptron (MLP) with 1 hidden layer and Rectified Linear Unit (ReLU) nonlinearities on the concatenated outputs of the nodes.", "We take advantage of the modular nature of this network to train it in two stages.", "First, the weights of the local classifiers are pre-trained with a single-channel classification task.", "Then, the full ClassFuse branch is trained end-to-end, with the weights of the local classifiers having a lower learning rate due to previously being pre-trained." ], [ "CompressFuse", "The task of the CompressFuse branch (blue path in Figure REF is to compress each local recording, reconstruct the full multi-channel signal at the fusion center and classify this with the original, centralized neural network.", "To compress the local sensor channels at each node, we use two strided convolutional layers, with the value of the strides together determining the amount of compression (e.g.", "a stride of 2 and a stride of 3 resulting in a downsampling with a factor 6).", "We then upsample each channel separately with two transposed, strided convolutional layers, mirroring the strides of the compression layers.", "Another possibility would be to just drop the reconstruction alltogether, since this only introduces redundant information in the signal.", "However, when employing an existing neural network for classification, hyperparameters such as the length of the kernels have been tuned assuming a specific length of the time window at the input and a certain desired receptive field.", "Not employing reconstruction would break these assumptions and force us to redesign the original centralized classification network, which we aim to avoid as much as possible.", "Similary to the ClassFuse branch, it is possible to perform the training of this branch in multiple stages.", "We could, for instance, first pre-train the compression-reconstruction layers as an auto-encoder by minimizing the mean squared error (MSE) between the reconstructed output and the input and then train the full CompressFuse end-to-end.", "Whether this two-step training will be necessary, will largely depend on the size of the compression network compared to the classification network." 
], [ "FullFuse", "The FullFuse module combines the classifications of the ClassFuse and the CompressFuse branches to perform the final classification.", "Similarly to the ClassFuse, we simply use an MLP with 1 hidden layer and ReLU nonlinearity for this task.", "We train this module jointly with the previously trained ClassFuse and CompressFuse, once again employing a lower learning rate for the latter two.", "In Section , we will demonstrate that the output of FullFuse obtains a higher accuracy than both the ClassFuse and CompressFuse branch separately.", "In summary, the training of the network is thus comprised of the following steps: Train the single-channel local classifiers of each node.", "Train the full ClassFuse branch, combining the local classifications and fine-tune the local classifier weights.", "Train the CompressFuse branch, which compresses, reconstructs and classifies the node signals jointly.", "Train the entire network end-to-end, including the FullFuse module, which combines the two previous branches to perform the final classification.", "In Subsection REF , we will demonstrate the importance of using this piece-wise pre-training scheme, by comparing it to a direct end-to-end training from scratch.", "The ClassFuse branch allows us to reach a certain, basis classifcation accuracy with minimal communication, while the CompressFuse allows us to send additional information to boost this accuracy further.", "However, when the ClassFuse is able to already correctly classify a substantial fraction of the samples on its own, this implies that for many of these samples, the extra information of the CompressFuse branch is not necessary for a correct classification.", "Thus, we can save additional bandwidth by only transmitting the data for the CompressFuse when we are not confident that the ClassFuse has already successfully predicted the label of the current sample.", "This idea of allowing samples to exit the network early has previously been employed to reduce inference time [28] and in the Distributed Deep Neural Network (DDNN) framework of [23] to decide whether a sample is processed locally or in the cloud.", "A common metric for classification confidence in this line of work, which we will employ here as well, is the normalized entropy of the softmaxed classification vector, defined as $H(x) = -\\frac{1}{\\log |C|} \\sum _{i=1}^{|C|} x_{i} \\log (x_{i}).$ with $|C|$ the number of classes and $x$ a probability vector, which in this case is the softmaxed output vector of the ClassFuse branch.", "Similar to [23], we thus first perform classification using the ClassFuse branch and measure the entropy of the current sample's output.", "If the entropy is lower than a certain threshold, we keep this output.", "If the entropy threshold is exceeded, we activate the CompressFuse and combine this with the ClassFuse to perform inference on the full network.", "The value of this threshold introduces a trade-off which can easily be tuned after network training: the higher we put this threshold, the less frequently the CompressFuse branch will be activated, thereby saving more bandwidth at the cost of a reduced classification performance.", "A major advantage of combining early exiting with our bandwidth-efficient architecture design is that, in contrast to the compression factor of our CompressFuse branch, the confidence threshold is a continuous parameter, thus allowing us to perform the bandwidth-accuracy trade-off in a continuous manner rather than a discrete one.", "The efficient 
"In this section, we investigate the use of our distributed architecture in the context of a BCI task in a wireless EEG sensor network.", "We use data from a motor execution classification task, which is a well-known EEG-BCI paradigm for which large data sets as well as mature deep neural network architectures are available." ], [ "Data set", "Motor execution is a widely used paradigm in the field of BCI.", "Real or intended body movement typically goes hand in hand with neuronal activity in certain sensorimotor areas of the brain.", "The goal of motor execution is then to derive from these signals which movement was performed.", "In this work, we will employ the High Gamma Dataset [29], containing about 1000 trials of executed movement following a visual cue, for each of the 14 subjects.", "The dataset also contains a separate test set of about 180 trials per subject, which we use to validate our results.", "The movements to be decoded are divided into 4 classes: left hand, right hand, feet and rest.", "While originally 128 channels were recorded for this dataset, we follow the approach of [29] and perform our experiments using only the 44 channels covering the motor cortex.", "The rest of our preprocessing procedure also follows the work of [29]: (1) resampling to 250 Hz; (2) highpass-filtering at 4 Hz; (3) standardizing the mean and variance per channel to 0 and 1; (4) epoching in segments of 4.5 seconds, consisting of the 4 seconds after the visual cue and the 0.5 seconds before.", "The neural network architecture we employ for classification - and the one we will convert to a distributed architecture for our WESN - is the multiscale parallel filter bank convolutional neural network (MSFBCNN) proposed in [30].", "For completeness, a detailed summary of this network in table format can be found in Appendix A." ], [ "WESN node emulation", "In traditional EEG caps, a channel is usually measured as the potential between an electrode at a given location and a common reference, typically the mastoid or Cz electrode.", "However, in the case of mini-EEG devices, we can only measure a local potential between two proximate electrodes belonging to the same device.", "To emulate this setting based on a standard cap-EEG recording, we follow the approach of [20].", "In this setting, each pair of electrodes within a preset maximum inter-electrode distance from each other is a candidate electrode pair or node we could measure.", "The signal this node records is then emulated by subtracting one of the channels from the other, thus removing the common (far-distance) reference in the process.", "We applied this method with a distance threshold of 3 cm to our dataset, converting the 44 channels into 286 candidate electrode pairs or nodes.", "The resulting set of nodes had an average inter-electrode distance of 1.98 cm with a standard deviation of 0.59 cm." 
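The node emulation step could be implemented along the following lines; the array layout and the assumption that electrode positions are available as coordinates in centimeters are ours.

```python
import numpy as np
from itertools import combinations

def emulate_nodes(eeg, positions, max_dist_cm=3.0):
    """Emulate short-distance mini-EEG nodes from referenced cap-EEG data.
    eeg: (channels, time) recording; positions: (channels, 3) electrode
    coordinates in cm. Returns the candidate node signals obtained by
    subtracting each pair of channels whose electrodes lie within max_dist_cm."""
    nodes, pairs = [], []
    for i, j in combinations(range(eeg.shape[0]), 2):
        if np.linalg.norm(positions[i] - positions[j]) <= max_dist_cm:
            nodes.append(eeg[i] - eeg[j])   # the common reference cancels out
            pairs.append((i, j))
    return np.stack(nodes), pairs
```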
], [ "Node selection", "Since we are only able to use a limited number of mini-EEG devices, we will first perform a channel/node selection step to select the most relevant sensor nodes from the pool of 286 candidate nodes.", "To this end, we employ the regularized Gumbel-softmax method described in [21].", "This method allows us to learn the $M$ optimal nodes for a given task and neural network by training said network jointly with a special selection layer that is able to learn the discrete variables involved in feature selection through simple backpropagation.", "The value of $M$ will also be varied throughout our experiments.", "We jointly train this selection layer of size $M$ with the centralized MSFBCNN architecture using the data from all subjects in the data set, resulting in a subject-independent set of $M$ mini-EEG nodes that are optimally placed to solve the motor execution task.", "The $M$ selected nodes are then used to design a distributed version of the MSFBCNN network as explained next." ], [ "Distributed architecture design", "We build our distributed network by taking the MSFBCNN architecture as our centralized baseline.", "Thus, we employ a single-channel version of the MSFBCNN as our classifier on the local nodes and the multi-channel version as our classifier in the fusion center in the CompressFuse branch (this corresponds to all the blocks denoted as 'classifier' in Figure REF ).", "The MLP that fuses our local classifications in the ClassFuse branch (the first orange FC block in Figure REF ), consists of a simple MLP with one hidden layer of size 50 and ReLU nonlinearity in between.", "We use the same MLP architecture to fuse the output of the ClassFuse and CompressFuse (i.e.", "the purple FC block in Figure REF ).", "The 'Compress' block consists of two convolutional layers, each consisting of a single kernel with strides to match a desired compression factor (e.g.", "one stride of 2 and one of 3 to achieve a compression factor 6).", "The 'Recon' block is built symmetrically to the 'Compress' block, with transposed convolutions replacing the normal convolutions.", "Since the reconstruction happens at the fusion center, it would also be possible to jointly reconstruct the channels using spatiotemporal filters instead, though our experiments indicated no advantage in this approach for our application.", "The distributed network is trained using the data of all subjects jointly, using the procedure described in Section REF .", "Training is performed with the Adam optimzer [31], a learning rate of 0.001 and a batchsize of 64 for 50 epochs.", "Early stopping when the validation loss does not decrease for 5 epochs is employed to prevent overfitting.", "As soon as a layer has been trained for the first time, all subsequent fine-tuning of said layer will use a learning rate of only $10^{-4}$ , a tenth of the original learning rate.", "When the full network has been trained, subject-dependent decoders are obtained by fine-tuning the full network end-to-end with subject-specific data." 
], [ "Impact of short-distance nodes", "First, we take a look on how much using short-distance nodes instead of channels built from far-distance electrodes (with a common reference) impacts the accuracy of our motor execution task in the centralized case.", "Figure REF compares the subject-dependent accuracy of training the centralized baseline on the $M$ optimal mini-EEG nodes and the $M$ optimal Cz-referenced channels.", "Clearly, using electrodes only 2 to 3 centimeters apart from each other significantly affects the motor execution accuracy.", "These performance drops have also previously been observed in the field of auditory attention decoding [32], though only when the average distance between the electrodes becomes smaller than 3 cm.", "As observed in Figure REF , the difference between short-distance electrode pairs (nodes) and the original cap-EEG data tends to decrease when using more nodes, which is consistent with the observations in [32],[20] Figure: Comparison of the subject-dependent centralized motor execution accuracy when using MM short-distance nodes and MM Cz-referenced channels.", "Mean test accuracies across the subjects are plotted as a function the number of channels/nodes.", "The displayed boxplots are computed over 10 runs and compared with independent samples t-test (no correction for multiple comparison).", "**** indicates statistically significant difference with p<0.005p <0.005." ], [ "Distributed architecture", "Next, we compare the performance of the proposed distributed architecture using different compression factors to the centralized baseline and investigate the individual and combined contribution of both branches.", "The results are summarized in Figure REF .", "A first observation is that, while the ClassFuse is clearly less accurate than the centralized baseline, it still achieves reasonable accuracy considering it only requires the nodes to transmit a probability vector of size 4 (due to the 4-class task) compared to a full window of size 1125 (4.5 seconds sampled at 250Hz).", "A second observation is that the fusion of the ClassFuse and CompressFuse branches consistently and significantly outperforms the two separate branches, resulting in a FullFuse that is competitive with the centralized baseline despite its much lower bandwidth usage.", "When moving to higher compression factors such as 16 however, the CompressFuse has more and more trouble reconstructing the original EEG signal, especially at a lower number of nodes.", "At this point, its performance even drops below the ClassFuse performance, while consuming more bandwidth.", "Remarkably, even at this stage it is still beneficial to fuse the two branches, suggesting that the information provided by the two branches is complementary.", "Thirdly, using more nodes results in a slowly increasing gap between the centralized baseline and the distributed architecture, since the unconstrained baseline is naturally more able to exploit the spatial correlations across the nodes.", "Finally, in terms of actual bandwidth gains, the efficient architecture design allows us to reach similar performance as the original network at 11% of the original bandwidth and even with 6% bandwidth, accuracy merely drops from 87% to 82% in the worst-case scenario.", "Figure: Comparison of the distributed architecture with different compression factors in the CompressFuse branch.", "Each compression factor was obtained by two strided convolutions, with each stride equal to the square root of the compression factor.", 
"Mean test accuracies across the subjects are plotted as a function the number of nodes and averaged over 10 runs.", "Shades indicate standard error of the mean." ], [ "Impact of pre-training", "To demonstrate the importance of the proposed training scheme, we also compare the performance of our network modules with and without this pre-training.", "As illustrated in Figure REF , the network accuracy severely drops for the ClassFuse and especially for the FullFuse when training from scratch.", "This implies that the increased complexity of the distributed architecture indeed necessitates a custom training scheme taking advantage of its modular nature to train the network piece-wise.", "Though not shown Figure REF , it should be noted that the CompressFuse branch on the other hand, does not require pre-training at all, due to the small amount of parameters in the currently used compression-reconstruction layers.", "However, it stands to reason that pre-training in this branch might become necessary as well when deeper and more complex architectures are used in this branch.", "Figure: Effect of pre-training on the ClassFuse and FullFuse with compression factor 9 in the underlying CompressFuse branch.", "Mean test accuracies across the subjects are plotted as a function the number of nodes and averaged over 10 runs.", "Shades indicate standard error of the mean." ], [ "Early exiting", "Now that we have a more bandwidth-efficient architecture, we employ early exiting to let the network decide which samples are processed by the bandwidth-friendly ClassFuse only and which by the complete FullFuse network.", "By tuning the required confidence threshold between 0 (all samples are handled by the full network) and 1 (all samples are processed by ClassFuse only) we can explore the accuracy-bandwidth trade-off in a continuous manner (instead of being confined to discrete non-prime compression factors) and find Pareto-optimal points, i.e., points where we cannot improve bandwidth or accuracy without sacrificing the other.", "We perform this trade-off for our distributed architecture with varying compression factors in Figure REF .", "Each point in this plot corresponds to a network with $M$ nodes, compression factor $D$ and local exit confidence threshold $T$ (varied from 0 to 1 with a step size of 0.01), which in turn corresponds to a percentage of samples handled by the ClassFuse alone, denoted by $\\lambda (T)$ .", "We compute the per-node bandwidth of this point, relative to the bandwidth required to run the centralized network (i.e.", "continuously transmitting the full recorded data window of length $L$ at each node).", "This relative per-node bandwidth B can be computed as: $B(T) = \\frac{1}{L} \\left( |C| + (1-\\lambda (T)) \\frac{L}{D} \\right).$ with $|C|$ the amount of classes, i.e.", "the size of the class probability vector (in this case 4).", "A first observation to be made from the bandwidth-accuracy curves is that often, bandwidth can be reduced up to 50% without any loss in accuracy.", "Interesting to note is that the deflection point at which the accuracy starts decreasing tends to shift more to the left, the more nodes we employ.", "This is not surprising, since more nodes implies a higher accuracy of the ClassFuse branch, thus less samples for which the full network has to be activated.", "The advantage of using multiple nodes is thus twofold: it increases accuracy due to the higher amount of recorded data (see Figure REF ), but also allows us to save more bandwidth per node by 
requiring samples to pass through the whole network less often.", "Thus, instead of using additional nodes for increased accuracy, we can also employ them to save per-node bandwidth for the same accuracy.", "For instance, while using 3 nodes allows us to reach 80% accuracy at 11% of the original bandwidth, using 6 nodes allows us to do so at 1.3% of the original bandwidth.", "A second observation is that, when requiring low bandwidths, starting from a more bandwidth-efficient network with higher compression factors in the CompressFuse branch and applying early exiting generally outperforms applying early exiting to a network with smaller compression factors.", "This is especially salient when comparing using CompressFuse branches with compression (yellow, green and red in Figure REF ) to the version without compression (in blue), which corresponds to transmitting the raw sensor data and using the centralized baseline when the ClassFuse is not confident enough.", "It is noted that these gains tend to decrease as we increase the compression factor, yet the non-compressive version remains outperformed by the compressive versions of CompressFuse.", "Finally, we can observe that the bandwidth gains of combining the efficient architecture design and the early exiting are substantial, allowing the network to operate at only 5% of the original bandwidth, while never losing more than 2% accuracy at that point.", "This demonstrates how the gains obtained by our proposed distributed architecture and those obtained by early exiting are complementary.", "Figure: Bandwidth-accuracy trade-offs when applying early exiting to the distributed architecture with different compression rates of the CompressFuse branch and for different numbers of nodes.", "Bandwidth is measured as the average size of the data vector transmitted by each node relative to the full window size of each epoch, as determined by equation (REF ).", "Mean test accuracies across the subjects are averaged over 10 runs.", "Dashed lines indicate centralized accuracy." 
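As a concrete illustration of the bandwidth model above, the short helper below evaluates the relative per-node bandwidth $B(T)$ for a given fraction $\lambda(T)$ of locally handled samples. The default values mirror the task described in the text (a window of 1125 samples, 4 classes, compression factor 9); the function itself is only a sketch with illustrative names.

```python
def relative_bandwidth(lam, L=1125, D=9, n_classes=4):
    """Relative per-node bandwidth B(T) = (|C| + (1 - lam) * L / D) / L,
    where lam is lambda(T), L the window length, D the compression factor
    and n_classes the size of the class probability vector |C|."""
    return (n_classes + (1.0 - lam) * (L / D)) / L

# Example: with lambda(T) = 0.8 and compression factor 9, each node
# transmits on average about 2.6% of the raw window.
print(relative_bandwidth(0.8))   # ~0.0258
```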
], [ "Conclusion and future outlook", "We have proposed a novel distributed neural network architecture design framework that can straightforwardly be mapped on a wireless sensor network and perform inference in this setting in a bandwidth-efficient manner.", "While we have applied it to the specific case of BCI in WESNs, the nature of this architecture is generic and can be applied to any kind of situation where the required input data is distributed across different sensor devices.", "This architecture consists of two parallel branches.", "The ClassFuse branch lets each node in the network perform its own local classification and then aggregates these in a fusion center.", "The purpose of this late fusion procedure is to produce reasonable classifications while consuming the minimal amount of communication energy.", "To then be able to perform a trade-off between bandwidth and performance, the CompressFuse branch compresses the recorded sensor signal of each node to a desired level and then approximately reconstructs the full multi-channel signal at the fusion center, where it can then be classified by a centralized network.", "These outputs of these two branches are then fused to perform the final classification.", "To train this architecture, we have proposed a step-by-step procedure, taking advantage of the modular structure of the architecture to first pre-train every block separately.", "We have experimentally demonstrated both the need and the advantage of training the network in this way.", "We have then combined the resulting network with the early exiting mechanism of [23] to decide on a per-sample basis whether to use the full network or the very bandwidth-friendly ClassFuse to process the current input.", "We have shown that the introduction of the CompressFuse branch allows to substantially push the Pareto-front upwards, in particular in low-bandwidth regimes.", "We have validated the performance of our architecture on an emulated WESN solving a motor execution EEG task.", "We have used our architecture to obtain accuracy-bandwidth curves for this task, showing that for a realistic amount of nodes, we could save a factor 20 in bandwidth at the cost of 2% mean test accuracy proving that good motor execution performances can be reached with both a low number of channels and a high reduction in the amount of data that needs to be transmitted from the nodes.", "An important observation in our experiments is the advantage of using multiple nodes in the sensor network.", "Not only does using more nodes increase classification accuracy, it also leads to a more favorable bandwidth-vs-accuracy trade-off, which in the case of WESNs implies an increased per-node battery life.", "In the future, we will explore ways to reach even higher reductions by using more sophisticated architectures for the CompressFuse branch, which is currently a very simple model consisting of strided convolutions and transposed convolutions.", "We will also explore the generality of our findings on other EEG tasks, such as auditory attention decoding [33] and epileptic seizure detection [9] and other distributed platforms than WESNs." ] ]
2210.07750
[ [ "Every generating polytope is strongly monotypic" ], [ "Abstract We prove a conjecture by McMullen, Schneider and Shephard that every polytope with the generating property is strongly monotypic.", "The other direction is already known, which implies that strong monotypy and the generating property for polytopes are the same notion.", "A criterion for monotypic and strongly monotypic polytopes is also given." ], [ "Introduction", "Monotypic polytopes and strongly monotypic polytopes were first introduced in [1] by McMullen, Schneider and Shephard.", "Monotypic polytopes can be seen as a subclass of simple polytopes: A polytope $P$ is monotypic if every polytope having the same set of normals as $P$ is combinatorially equivalent to $P$ .", "On the other hand, strongly monotypic polytopes is a subclass of monotypic polytopes: A polytope $P$ is strongly monotypic if every polytope $Q$ having the same set of normals as $P$ satisfies the arrangements of the hyperplanes containing the facets of $P$ and $Q$ are combinatorially equivalent.", "In [1] it was shown that monotypy and strong monotypy have relations to the intersection properties.", "For monotypy, the intersection of every two translates of a monotypic polytope $P$ is either empty or homothetic to a Minkowski summand of $P$ .", "For strong monotypy, we have a stronger conclusion: The intersection of any two translates of a strongly monotypic polytope $P$ is either empty or a Minkowski summand of $P$ .", "We remind that the Minkowski sum of two sets $X,Y$ is $\\lbrace x+y: x\\in X, y\\in Y\\rbrace $ .", "Note that the above property for strongly monotypic polytopes is actually the generating property for polytopes.", "The original form of the generating property is that: The intersection of every family of translates of a convex body $K$ is either empty or a Minkowski summand of $K$ .", "In [2], the property was shown to be equivalent to the form where a family of translates is replaced by a pair of translates.", "Therefore, a strongly monotypic polytope is always a generating set.", "Other sets that are not necessarily polytopes were shown to posses this property in [3], [4].", "In the other direction, it is natural to ask whether every polytope with the generating property is always a strongly monotypic polytope.", "It was shown to be the case for $\\mathbb {R}^3$ in [1].", "In fact, the authors of [1] conjectured that it holds for every dimension.", "In this article, we show that it is indeed the case.", "Theorem 1 Every polytope with the generating property is strongly monotypic.", "It follows that strong monotypy and the generating property are the same notion for polytopes.", "In contrast to strong monotypy, the condition for monotypy is already shown to be necessary and sufficient in [1]: A polytope $P$ is monotypy if and only if the intersection of every two translates of $P$ is either empty or homothetic to a Minkowski summand of $P$ .", "It means that we only need to prove the following equivalent form of Theorem REF .", "Theorem 2 Every polytope that is monotypic but not strongly monotypic does not have the generating property.", "The proof of Theorem REF in Section is done by proving Theorem REF .", "In order to do that, we need a more convenient description of monotypic and strongly monotypic polytopes.", "Monotypic polytopes and strongly monotopic polytopes $P$ can be recognized by the set of normals $N(P)$ .", "In [1], several equivalent versions of the necessary and sufficient conditions are given.", "The following 
theorems are among them.", "Theorem 3 (Condition $M3^{\prime }$ of [1]) Monotypy of a polytope $P$ is equivalent to: If $V_1$ and $V_2$ are disjoint primitive subsets of $N(P)$ then $\operatorname{pos}V_1\cap \operatorname{pos}V_2=\lbrace 0\rbrace $ .", "Here, $V$ is a primitive subset of $N(P)$ if $V$ is linearly independent and $\operatorname{pos}V\cap N(P)=V$ .", "The latter condition can be understood as saying that the positive hull of $V$ does not contain any normal other than those in $V$ .", "Such a situation is encountered multiple times throughout the text.", "Theorem 4 (Condition $S4^{\prime }$ of [1]) Strong monotypy of a polytope $P$ is equivalent to: If $Q$ is any polytope with $N(Q)\subseteq N(P)$ then $Q$ is monotypic.", "We give other equivalent conditions $D$ and $DD$ as follows.", "Theorem 5 (Condition $D$ ) Monotypy of an $n$ -dimensional polytope $P$ is equivalent to: If some $n+1$ normals of $P$ are in conical position, then their positive hull contains another normal of $P$ .", "In this text, a set of points is said to be separated from 0 if there is a hyperplane strictly separating the set from 0.", "Also, some points are said to be in conical position if they are separated from 0 and none of the points is in the positive hull of the others.", "Theorem 6 (Condition $DD$ ) Strong monotypy of an $n$ -dimensional polytope $P$ is equivalent to: Every $n+1$ normals of $P$ are not in conical position.", "The equivalences in Theorem REF and Theorem REF are verified in Section REF by using Theorem REF and Theorem REF .", "Theorem REF is proved in Section REF using Theorem REF and Theorem REF .", "Before closing the introduction, we illustrate in Figure REF a section of a polytope $P$ in $\mathbb {R}^3$ that is monotypic but not strongly monotypic (in black color) and a translate (in blue color) so that their intersection has a topmost face that is an edge (in red color) longer than the corresponding face of $P$ .", "It means their intersection is not a Minkowski summand of $P$ .", "It follows that $P$ does not have the generating property.", "The example actually provides an idea of a proof for Theorem REF in $\mathbb {R}^3$ , where a polytope that is monotypic but not strongly monotypic must have four normals in conical position with only one other normal in the positive hull of the four normals.", "The facet of this only other normal is a parallelogram.", "We here only sketch the approach, while the proof in Section REF provides more details with a generalization for higher dimensions, where there are some unpleasant situations that are not as nice as two parallelograms intersecting at an edge.", "Figure: Part of a monotypic but not strongly monotypic polytope without the generating property" ], [ "Equivalence of the characterizations", "In order to prove Theorem REF , we will show that the two conditions $M3^{\prime }$ and $D$ are equivalent by verifying both directions.", "Proposition 1 $M3^{\prime }$ implies $D$ .", "Suppose we do not have $D$ , which means there exist $n+1$ normals in conical position with the positive hull not containing any other normal.", "Let $H$ be a hyperplane not through 0 such that for each normal $a$ among the $n+1$ normals, the ray $\overrightarrow{0a}$ cuts the hyperplane at exactly one point.", "Replacing every normal $a$ by the intersection of the ray $\overrightarrow{0a}$ and the hyperplane, and applying Radon's theorem to this set in the affine space, there will be two disjoint subsets $A_1,A_2$ whose convex hulls have a nonempty intersection.", "Consider 
a point $p\\in \\operatorname{conv}A_1\\cap \\operatorname{conv}A_2$ and let $A^{\\prime }_i\\subseteq A_i$ for $i=1,2$ be the set of vertices of a simplex whose relative interior contains $p$ .", "In other words, the points in $A^{\\prime }_i$ are linearly independent.", "Since $A^{\\prime }_i$ can be seen as a subset of the normals of the polytope and its positive hull is empty of other normals, $A^{\\prime }_i$ is primitive.", "However, the positive hulls of $A^{\\prime }_1,A^{\\prime }_2$ both contain $p$ , which is another point than 0.", "That is we do not have $M3^{\\prime }$ , the conclusion follows.", "To show that $D$ implies $M3^{\\prime }$ , we need the following lemmas.", "Lemma 1 Given two sets of normals $V_1, V_2$ .", "If $\\operatorname{pos}V_1$ and $\\operatorname{pos}V_2$ intersect at a point other than 0, then there are $V^{\\prime }_1\\subseteq V_1, V^{\\prime }_2\\subseteq V_2$ such that $(\\operatorname{pos}V^{\\prime }_1\\cap \\operatorname{pos}V^{\\prime }_2)\\setminus \\lbrace 0\\rbrace $ is precisely a ray in the relative interior of both the positive hulls.", "Moreover, the union $V^{\\prime }_1\\cup V^{\\prime }_2$ is separated from 0.", "Let $U_1\\subseteq V_1$ and $U_2\\subseteq V_2$ be minimal subsets so that $\\operatorname{pos}U_1\\cap \\operatorname{pos}U_2\\ne \\lbrace 0\\rbrace $ .", "We show that $(\\operatorname{pos}U_1\\cap \\operatorname{pos}U_2)\\setminus \\lbrace 0\\rbrace $ is a ray in the relative interior of both the positive hulls.", "Suppose there are two rays $\\overrightarrow{0p},\\overrightarrow{0q}$ in the intersection.", "Let $p=\\sum _{x_i\\in U_1} \\lambda _i x_i=\\sum _{x_i\\in U_2}\\lambda _i x_i,$ and $q=\\sum _{x_i\\in U_1} \\theta _i x_i=\\sum _{x_i\\in U_2}\\theta _i x_i.$ (Note that there is no confusion of $\\lambda _i$ for $x_i\\in U_1$ and $x_i\\in U_2$ since $U_1$ and $U_2$ are disjoint.", "The same is for $\\theta _i$ .)", "Due to the minimality of $U_1, U_2$ , all the coefficients $\\lambda _i, \\theta _i$ are positive, i.e.", "the points $p,q$ are in the relative interior of $\\operatorname{pos}U_1$ and $\\operatorname{pos}U_2$ .", "(We can even conclude that the points in each of $U_1,U_2$ are linearly independent by Carathéodory's theorem for positive hulls.)", "Consider the point $p-\\alpha q = \\sum _{x_i\\in U_1} (\\lambda _i-\\alpha \\theta _i) x_i = \\sum _{x_i\\in U_2} (\\lambda _i-\\alpha \\theta _i) x_i,$ where $\\alpha =\\min \\lbrace \\lambda _i/\\theta _i: x_i\\in U_1\\cup U_2\\rbrace $ .", "The choice of $\\alpha $ ensures that the coefficients $\\lambda _i-\\alpha \\theta _i$ for $x_i\\in U_1\\cup U_2$ are all nonnegative with at least one zero.", "Note that $p-\\alpha q$ is not zero as $p,q$ are two different normals.", "This contradicts with the minimality of $U_1, U_2$ .", "It follows that $p$ and $q$ are the same normal.", "It remains to prove that $U_1\\cup U_2$ is separated from 0.", "Suppose $0\\in \\operatorname{conv}(U_1\\cup U_2)$ , that is $-\\sum _{x_i\\in U_1}\\lambda _i x_i = \\sum _{x_i\\in U_2}\\lambda _i x_i$ for some nonnegative coefficients $\\lambda _i$ .", "Since $(\\operatorname{pos}U_1\\cap \\operatorname{pos}U_2)\\setminus \\lbrace 0\\rbrace $ is a ray in the relative interior of both positive hulls, we have $\\sum _{x_i\\in U_1} \\theta _i x_i = \\sum _{x_i\\in U_2}\\theta _i x_i$ for some positive coefficients $\\theta _i$ .", "Let $\\alpha =\\max \\lbrace \\lambda _i/\\theta _i: x_i\\in U_1\\rbrace $ , the equation $\\sum _{x_i\\in U_1} (-\\lambda _i + \\alpha \\theta _i) x_i 
=\\sum _{x_i\\in U_2} (\\lambda _i + \\alpha \\theta _i) x_i,$ has all nonnegative coefficients on both sides with at least a zero coefficient on the left.", "It means that the positive hull of a proper subset of $U_1$ intersects the positive hull of $U_2$ at a point other than 0, contradicting the minimality of $U_1, U_2$ .", "The sets $U_1,U_2$ confirm the conclusion.", "Lemma 2 The following claims are equivalent for a point set $X$ that spans $\\mathbb {R}^n$ : (i) There exist $n+1$ points of $X$ in conical position with the positive hull empty of other points of $X$ .", "(ii) For some $d\\le n$ , there exist $d+1$ points of $X$ that span a $d$ -dimensional space and are in conical position with the positive hull empty of other points of $X$ .", "The direction (i)$\\Rightarrow $ (ii) is trivial, just take some $(\\dim \\operatorname{span}X) + 1$ points among them that span a $(\\dim \\operatorname{span}X)$ -dimensional space.", "We show the other direction, that is (ii)$\\Rightarrow $ (i).", "At first, we start with the given $d+1$ points $X^{\\prime }$ of dimension $d$ , and increase their dimension one by one, by adding one point in each step, until we have $n+1$ points.", "If the current number of points is less than $n+1$ , the dimension is therefore less than $n$ , which implies the existence of another point $p$ of $X$ not in the span of $X^{\\prime }$ .", "If the positive hull of $X^{\\prime }\\cup \\lbrace p\\rbrace $ is not empty of other points of $X$ , we replace $p$ by any point of $X$ in $\\operatorname{pos}(X^{\\prime }\\cup \\lbrace p\\rbrace )\\setminus (X^{\\prime }\\cup \\lbrace p\\rbrace )$ , and recursively repeat it until the positive hull is empty of other points of $X$ .", "The process will eventually terminate as the positive hull contains fewer points of $X$ after each step.", "So, for any $X^{\\prime }$ , we can increase the dimension by one, by adding one point $p$ but still keeping the positive hull empty of other points in $X$ .", "Therefore, we can come up with $n+1$ points satisfying the hypothesis, thus achieving (i).", "Remark 1 If we have (ii), we also have the same (ii) for every higher $d$ , including $n$ , which is itself a stronger statement than (i).", "Also, if we remove the phrase “with the positive hull empty of other points of $X$ ” in both claims, the lemma still holds with an even simpler argument.", "Now comes the verification of $D$ implies $M3^{\\prime }$ .", "Proposition 2 $D$ implies $M3^{\\prime }$ .", "Suppose we do not have $M3^{\\prime }$ , which means there are two disjoint primitive subsets $V_1, V_2$ of $N(P)$ such that $(\\operatorname{pos}V_1 \\cap \\operatorname{pos}V_2)\\setminus \\lbrace 0\\rbrace $ is nonempty.", "Over all such pairs $V_1,V_2$ , we consider a pair so that $\\operatorname{pos}(V_1\\cup V_2)$ is minimal (up to inclusion).", "Due to Lemma REF , we have $(\\operatorname{pos}V_1\\cap \\operatorname{pos}V_2)\\setminus \\lbrace 0\\rbrace $ is a precisely a ray in the relative interior of both $\\operatorname{pos}V_1, \\operatorname{pos}V_2$ .", "Moreover, $V_1\\cup V_2$ is separated from 0.", "Consider a hyperplane that cuts every ray $\\overrightarrow{0p}$ for each $p\\in V_1\\cup V_2$ at exactly a point $p^{\\prime }$ .", "Let $U_1,U_2$ be the corresponding sets of those points $p^{\\prime }$ .", "Let $U$ be the intersection of all the normals in $\\operatorname{pos}(V_1\\cup V_2)$ with the hyperplane.", "Note that $\\operatorname{conv}U_1$ and $\\operatorname{conv}U_2$ are empty of other points in $U$ (as 
$V_1,V_2$ are primitive).", "Assuming we have $D$ , we can see that $U\setminus (U_1\cup U_2)$ is nonempty.", "Indeed, suppose $U\setminus (U_1\cup U_2)=\emptyset $ ; it follows that the points in $U$ span a $(|U|-1)$ -dimensional space and are in conical position with the positive hull empty of other normals, a contradiction to Lemma REF .", "Although $U_1,U_2$ are contained in a hyperplane, it is more convenient to see this affine space as a linear space by a translation that takes $\operatorname{conv}U_1\cap \operatorname{conv}U_2$ to the origin.", "Moreover, we assume the spaces spanned by $U_1$ and $U_2$ are orthogonal (otherwise, we transform the space).", "Let $p$ be a point in $U\setminus (U_1\cup U_2)$ that has the smallest distance to $\operatorname{conv}U_1$ .", "It follows that $\operatorname{conv}(\lbrace p\rbrace \cup U_1)$ is empty of other points in $U$ since otherwise another point would be closer to $\operatorname{conv}U_1$ than $p$ .", "Also note that the points $\lbrace p\rbrace \cup U_1$ are in convex position since $p$ is not in the space spanned by $U_1$ .", "Let $p^{\prime }$ be the projection of $p$ onto the space of $U_1$ .", "Since $U_1$ is the set of vertices of a simplex containing 0, there is a proper subset $U^{\prime }_1\subset U_1$ so that $0\in \operatorname{conv}(\lbrace p^{\prime }\rbrace \cup U^{\prime }_1)$ .", "We then have $\operatorname{conv}(\lbrace p\rbrace \cup U^{\prime }_1)$ and $\operatorname{conv}U_2$ intersecting.", "Indeed, letting $\theta p^{\prime }+\sum _{x_i\in U^{\prime }_1} \lambda _i x_i = 0,$ where $\theta $ and $\lambda _i$ for $x_i\in U^{\prime }_1$ are nonnegative numbers summing to 1, we have that $\theta p + \sum _{x_i\in U^{\prime }_1} \lambda _i x_i = \theta p^{\prime \prime }$ is in the convex hull of $U_2$ , where $p^{\prime \prime }$ is the projection of $p$ onto the space of $U_2$ .", "The corresponding normals of $\lbrace p\rbrace \cup U^{\prime }_1$ and $U_2$ in the original linear space are two sets satisfying the condition on $V_1,V_2$ but having a smaller positive hull of the union, a contradiction.", "It follows that we do not have $D$ .", "We now have verified Theorem REF .", "Theorem REF follows from Propositions REF and REF .", "We show the equivalence of the conditions $S4^{\prime }$ and $DD$ in order to prove Theorem REF .", "Let us recall Condition $S4^{\prime }$ for a polytope $P$ to be strongly monotypic: If $Q$ is any polytope with $N(Q)\subseteq N(P)$ then $Q$ is monotypic.", "In one direction, if a monotypic polytope $P$ is also strongly monotypic, it should not have $n+1$ normals in conical position (i.e.", "Condition $DD$ ).", "Indeed, suppose $V$ is the set of such $n+1$ normals; we consider the subset of $N(P)$ after removing every normal in the positive hull of $V$ except the normals in $V$ themselves, which is $N(P)\setminus ((\operatorname{pos}V)\setminus V)$ .", "Any polytope taking this subset as the set of normals is not monotypic, due to the existence of the $n+1$ normals of $V$ in conical position with the positive hull not containing any other normal.", "In the other direction, if a polytope $P$ has Condition $DD$ , then every polytope $Q$ with $N(Q)\subseteq N(P)$ should be monotypic.", "Indeed, Condition $DD$ means we do not have $n+1$ normals of $N(P)$ in conical position.", "The same situation also applies to the subset $N(Q)\subseteq N(P)$ .", "We then have Condition $D$ for $Q$ , that is, $Q$ is monotypic." 
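Conditions $D$ and $DD$ also lend themselves to a direct computational test: deciding whether given normals are in conical position reduces to two linear-programming feasibility problems, one for strict separation from 0 and one, per point, for membership in the positive hull of the other points. The sketch below (assuming NumPy and SciPy, and ignoring numerical tolerance issues) is an illustration of this reduction, not part of the paper; Condition $DD$ then amounts to verifying that no $n+1$ normals of $N(P)$ pass this test.

```python
import numpy as np
from scipy.optimize import linprog

def separated_from_zero(points):
    # Feasibility of <h, x_i> >= 1 for all i; any such h gives a
    # hyperplane strictly separating the points from 0.
    n = points.shape[1]
    res = linprog(c=np.zeros(n), A_ub=-points, b_ub=-np.ones(len(points)),
                  bounds=[(None, None)] * n)
    return res.status == 0

def in_positive_hull(x, others):
    # Feasibility of x = sum_i lambda_i * others_i with lambda_i >= 0.
    res = linprog(c=np.zeros(len(others)), A_eq=others.T, b_eq=x,
                  bounds=[(0, None)] * len(others))
    return res.status == 0

def in_conical_position(points):
    points = np.asarray(points, dtype=float)
    if not separated_from_zero(points):
        return False
    return not any(in_positive_hull(points[i], np.delete(points, i, axis=0))
                   for i in range(len(points)))
```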
], [ "Equivalence of strong monotypy and the generating property", "By the equivalence of Theorem REF and Theorem REF , to prove either theorem, it remains to prove that every polytope that is monotypic but not strongly monotypic does not have the generating property.", "Consider such a polytope $P$ , we have some $n+1$ normals of $P$ in conical position for $P$ to be not strongly monotypic (by Condition $DD$ of Theorem REF ).", "By Remark REF , there are some $n+1$ normals in conical position that span the whole $\\mathbb {R}^n$ .", "Among such collections of normals, consider a set $X$ of $n+1$ normals so that its positive hull is minimal (up to inclusion).", "Note that the positive hull must contain another normal of $P$ for $P$ to be monotypic (by Condition $D$ of Theorem REF ).", "Let $Y$ be the intersection of the rays of the normals in $X$ with a hyperplane not through 0 so that each ray cuts the hyperplane at only one point.", "Let $Z$ be the intersection of the rays of the normals in $N(P)$ with the hyperplane.", "Proposition REF below shows that there is only one point $p$ of $(Z\\setminus Y) \\cap \\operatorname{conv}Y$ .", "Proposition 3 There is only one point $p$ of $(Z\\setminus Y) \\cap \\operatorname{conv}Y$ .", "Furthermore, there is a partition $Y=Y_0\\sqcup Y_1\\sqcup Y_2$ with a possibly empty $Y_0$ so that: If we treat the affine space of $Y$ as a linear space that takes $p$ as the origin by considering the sets $Y^{\\prime }=Y-p$ and $Y^{\\prime }_i=Y_i-p$ for $i=0,1,2$ , then (i) The spans of $Y^{\\prime }_0,Y^{\\prime }_1,Y^{\\prime }_2$ are linearly independent.", "(ii) The points in $Y^{\\prime }_0$ are linearly independent.", "(iii) Each of $Y^{\\prime }_1,Y^{\\prime }_2$ is the set of vertices of a simplex whose relative interior contains 0.", "Since the $n+1$ points in $Y$ affinely span an $(n-1)$ -dimensional affine space, we have a partition $Y=A_1\\sqcup A_2$ so that $\\operatorname{conv}A_1\\cap \\operatorname{conv}A_2\\ne \\emptyset $ .", "Let $p$ be a point in the intersection.", "We consider the linear space spanned by $Y^{\\prime }=Y-p$ .", "Also denote $A^{\\prime }_1=A_1-p$ and $A^{\\prime }_2=A_2-p$ .", "For each $i=1,2$ , there exists a subset $Y^{\\prime }_i\\subseteq A^{\\prime }_i$ so that $Y^{\\prime }_i$ is the set of vertices of a simplex whose relative interior contains the origin $p^{\\prime }=p-p=0$ .", "Denote $Y^{\\prime }_0=Y^{\\prime }\\setminus (Y^{\\prime }_1\\cup Y^{\\prime }_2)$ .", "The dimension of the span of $Y^{\\prime }$ is at most $|Y^{\\prime }_0|+(|Y^{\\prime }_1|-1)+(|Y^{\\prime }_2|-1)=n-1.$ To make the dimension precisely $n-1$ , the spans of $Y^{\\prime }_0,Y^{\\prime }_1,Y^{\\prime }_2$ must be linearly independent and the points in $Y^{\\prime }_0$ are also linearly independent.", "It remains to prove the intersection $(Z\\setminus Y)\\cap \\operatorname{conv}Y$ is precisely $p$ .", "Suppose there are two points of $Z\\setminus Y$ in the convex hull of $Y$ , one of the two points, say $q$ , is not $p$ .", "Suppose that the spans of $Y^{\\prime }_0,Y^{\\prime }_1,Y^{\\prime }_2$ are orthogonal, otherwise we apply an affine transformation.", "Consider a representation of $q^{\\prime }=q-p$ in the span of $Y^{\\prime }$ : $q^{\\prime }=\\sum _{x_i\\in Y^{\\prime }} \\lambda _i x_i,$ where $\\lambda _i$ are all nonnegative and their sum is 1.", "Since $q\\ne p$ , the point $q^{\\prime }$ is not the origin, hence, $\\lambda _i$ is positive for some $x_i\\in Y^{\\prime }$ .", "We show that the $n+1$ points in $\\lbrace 
q^{\\prime }\\rbrace \\cup Y^{\\prime }\\setminus \\lbrace x_i\\rbrace $ are in convex position and their convex hull is $(n-1)$ -dimensional.", "Indeed, if $x_i\\in Y^{\\prime }_0$ , then $q^{\\prime }$ is not in the span of the rest.", "In the other case, say $x_i\\in Y^{\\prime }_1$ , the projection of $q^{\\prime }$ onto the span of $Y^{\\prime }_1$ cannot lie in the convex hull of $Y^{\\prime }_1\\setminus \\lbrace x_i\\rbrace $ , since otherwise the points of $Y^{\\prime }_1$ would be no longer affinely independent.", "In either case, $q^{\\prime }$ is not in the convex hull of the rest, thus, they are in convex position.", "The dimension is also preserved by a similar argument.", "The corresponding $n+1$ points in the affine hull of $Y$ contradict the minimality of $X$ .", "By the nature of the normals of $Y$ in Proposition REF , we can assume that every normal in $Y$ forms an acute angle with $p$ , otherwise we can apply an affine transformation to obtain it.", "Denote by $\\pi (x)$ the projection of a point $x$ to the hyperplane $\\langle p,x\\rangle = 0$ .", "We show that Proposition REF still holds if we replace $Y^{\\prime }$ and $Y^{\\prime }_i$ for $i=0,1,2$ by the projections $\\pi (Y)$ and $\\pi (Y_i)$ for $i=0,1,2$ .", "Indeed, in the beginning of this section we consider a hyperplane so that the ray of each normal in $X$ cuts the hyperplane at precisely one point.", "Now instead of this hyperplane, we consider the hyperplane through $p$ and perpendicular to $p$ .", "Since the intersection of the ray of each normal $x\\in X$ to the new hyperplane is still one point due to the acuteness of the angle that each $y\\in Y$ forms with $p$ , the conclusion of Propostion REF still holds.", "However, this time each point $y^{\\prime }=y-p$ turns out to coincide with $\\pi (y)$ .", "For convenience, we additionally assume that the spans of $\\pi (Y_0), \\pi (Y_1), \\pi (Y_2)$ are orthogonal, also by an affine transformation.", "Let us treat the facet $F$ of normal $p$ as a polytope in one lower dimension by translating the polytope $P$ so that $F$ lies on the hyperplane $\\langle p,x\\rangle =0$ through the origin.", "Due to the orthogonality of the spans of $\\pi (Y_0),\\pi (Y_1),\\pi (Y_2)$ , the hyperplanes of the facets in $\\pi (Y_1)$ bound a set $\\mathbb {R}^{|Y_0|}\\times S_1\\times \\mathbb {R}^{|Y_2|-1}$ where $S_1$ is a simplex, since $\\pi (Y_1)$ is the set of vertices of a simplex whose relative interior contains 0.", "Likewise, the hyperplanes corresponding to $\\pi (Y_2)$ bound a set $\\mathbb {R}^{|Y_0|}\\times \\mathbb {R}^{|Y_1|-1}\\times S_2$ where $S_2$ is a simplex.", "On the other hand, the intersection of the hyperplanes of the facets in $\\pi (Y_0)$ is $\\lbrace x^*\\rbrace \\times \\mathbb {R}^{|Y_1|-1}\\times \\mathbb {R}^{|Y_2|-1}$ for some point $x^*$ .", "It means the hyperplanes for both $\\pi (Y_1),\\pi (Y_2)$ bound the intersection of the hyperplanes for $\\pi (Y_0)$ at $G=\\lbrace x^*\\rbrace \\times S_1\\times S_2,$ which is a translate of the Cartesian product of two simplices.", "For later usage, we note here that $S_1$ is decided completely by the hyperplanes $\\lbrace x:\\langle \\pi (y), x\\rangle = h_F(\\pi (y))\\rbrace $ for $y\\in Y_1$ .", "It is the simplex bounded by these hyperplanes in the span of $Y_1$ .", "The same situation also applies to $S_2$ .", "(Here we denote $h_K(\\vec{n})=\\sup _{x\\in K} \\langle \\vec{n}, x\\rangle $ for a set $K$ and a vector $\\vec{n}$ .)", "We can show that $G$ is actually a face of $P$ by the following 
fact about monotypic polytopes from [1].", "Theorem 7 (Property $M4$ of [1]) For every primitive subset of $k$ normals in $N(P)$ for a monotypic polytope $P\subset \mathbb {R}^n$ , the corresponding facets intersect at an $(n-k)$ -face of $P$ .", "Indeed, each vertex of $G$ is the intersection of the hyperplanes of the facets of the normals in $\pi (Y_0)\cup \pi (Y_1\setminus \lbrace y_1\rbrace )\cup \pi (Y_2\setminus \lbrace y_2\rbrace )$ for some $y_1\in Y_1,y_2\in Y_2$ .", "In the original space $\mathbb {R}^n$ , it is the intersection of the hyperplanes of the facets of the normals in $U=\lbrace p\rbrace \cup Y_0\cup (Y_1\setminus \lbrace y_1\rbrace )\cup (Y_2\setminus \lbrace y_2\rbrace )$ .", "The set $U$ is actually primitive by Proposition REF .", "By Theorem REF , the facets of the normals in $U$ intersect at a face of dimension $n-|U|=0$ , i.e.", "a vertex of $P$ .", "It follows that each vertex of $G$ is also a vertex of $P$ .", "In other words, $G$ is a face of $P$ (and also of $F$ ).", "A corollary of $G$ being a face is: $S_1,S_2$ are the projections of $F$ onto the spans of $\pi (Y_1),\pi (Y_2)$ , respectively.", "The representation $G=\lbrace x^*\rbrace \times S_1\times S_2$ allows $G$ to have some nice intersection properties.", "If we translate $G$ to $G^{\prime }$ by a vector in the span of $\pi (Y_2)$ so that the projections of $G$ and $G^{\prime }$ into the span of $\pi (Y_2)$ are two simplices $S_2,S^{\prime }_2$ intersecting at precisely a vertex $v^*$ of both simplices, then $G$ and $G^{\prime }$ intersect precisely at a face $H=\lbrace x^*\rbrace \times S_1\times \lbrace v^*\rbrace $ of both $G$ and $G^{\prime }$ , which is a translate of $S_1$ .", "Let $F^{\prime }$ be the translate of $F$ by the same translation that takes $G$ to $G^{\prime }$ ; the face $H$ is not necessarily $F\cap F^{\prime }$ but it must be a face of $F\cap F^{\prime }$ , since it is the intersection of two faces $G, G^{\prime }$ .", "Let us call $G$ the good face of $F$ and such a face $H$ the horizontal face of $G$ (and also of $F$ and $P$ ).", "The name “horizontal” is inspired by the example in Figure REF , where one kind of intersection is horizontal while the other is vertical.", "The choice does not matter though, as the roles of $Y_1$ and $Y_2$ are interchangeable.", "We draw only the face $G$ but not the polytope $P$ , whose dimension is at least 4.", "Figure: Translates of a good face intersecting at a face in two ways.", "Let $F_\epsilon $ be the intersection of $P$ and the hyperplane $\langle p,x\rangle = -\epsilon $ for some small $\epsilon > 0$ .", "For convenience, we also denote $F_0=F$ , which is on the hyperplane $\langle p,x\rangle = 0$ .", "By the definition of a monotypic polytope, $F_\epsilon $ for a small enough $\epsilon $ is combinatorially equivalent to $F$ with the same set of normals.", "That is, $F_\epsilon $ also has a good face and its horizontal faces, with similar representations to those of $F$ when being projected to the hyperplane of $F$ by $\pi $ .", "To further utilize the acuteness of the angles, we first give an observation.", "Consider any hyperplane $A_q=\lbrace x:\langle q,x\rangle =c\rbrace $ for any normal $q$ that forms an acute angle with $p$ and some real $c$ .", "For any two reals $d,d^{\prime }$ with $d>d^{\prime }$ , denote $A_p=\lbrace x:\langle p,x\rangle =d\rbrace $ and $A^{\prime }_p=\lbrace x:\langle p,x\rangle =d^{\prime }\rbrace $ ; then we have $h_{\pi (A_p\cap 
A_q)} (\\pi (q)) < h_{\\pi (A^{\\prime }_p\\cap A_q)} (\\pi (q)).$ This can be observed in the plane picture of the 2-dimensional span of $\\lbrace p,q,\\pi (q)\\rbrace $ .", "(Note that the projection $\\pi (q)$ onto the hyperplane $\\langle p,x\\rangle =0$ lies in the span of $\\lbrace p,q\\rbrace $ .)", "For any $\\epsilon $ , denote the good face of $F_\\epsilon $ by $G_\\epsilon $ , we have a similar representation $\\pi (G_\\epsilon )=\\lbrace x^*_\\epsilon \\rbrace \\times S_1(\\epsilon )\\times S_2(\\epsilon )$ to the representation $G=\\lbrace x^*\\rbrace \\times S_1\\times S_2$ .", "Let us compare $S_1(\\epsilon )$ (and also $S_2(\\epsilon )$ ) for different values of $\\epsilon $ .", "Observation 1 For any two small enough $\\epsilon ,\\epsilon ^{\\prime }$ with $\\epsilon >\\epsilon ^{\\prime }$ , the simplex $S_1(\\epsilon ^{\\prime })$ lies in the relative interior of the simplex $S_1(\\epsilon )$ .", "The same applies for $S_2(\\epsilon ^{\\prime })$ and $S_2(\\epsilon )$ .", "We prove for $S_1(\\epsilon ^{\\prime })$ and $S_1(\\epsilon )$ only.", "The other situation is similar.", "By the representation of $S_1$ in $G=\\lbrace x^*\\rbrace \\times S_1\\times S_2$ , we have $S_1(\\epsilon )$ is the simplex that the hyperplanes $\\lbrace x:\\langle \\pi (y),x\\rangle = h_{\\pi (F_\\epsilon )} (\\pi (y))\\rbrace $ for $y\\in Y_1$ bound in the span of $Y_1$ .", "Likewise, the corresponding hyperplanes for $S_1(\\epsilon ^{\\prime })$ are $\\lbrace x:\\langle \\pi (y),x\\rangle = h_{\\pi (F_{\\epsilon ^{\\prime }})} (\\pi (y))\\rbrace $ for $y\\in Y_1$ .", "For each $y\\in Y_1$ , let $A_y$ denote that hyperplane of the facet of $y$ , and $A_p, A^{\\prime }_p$ denote the hyperplanes of $F_\\epsilon ,F_{\\epsilon ^{\\prime }}$ , respectively.", "Since the facet of $y$ cuts $F$ at an $(n-2)$ -face by the primitivity of $\\lbrace y,p\\rbrace $ (and Theorem REF ), we have $h_{\\pi (F_\\epsilon )}(\\pi (y)) = h_{\\pi (A_p\\cap A_y)}(\\pi (y)), \\qquad h_{\\pi (F_{\\epsilon ^{\\prime }})}(\\pi (y)) = h_{\\pi (A^{\\prime }_p\\cap A_y)}(\\pi (y)).$ The conclusion follows, since the acuteness of the angle that every $y\\in Y_1$ forms with $p$ gives $h_{\\pi (A_p\\cap A_y)}(\\pi (y)) > h_{\\pi (A^{\\prime }_p\\cap A_y)}(\\pi (y)).$ Pick a small enough $\\epsilon _0$ .", "By Observation REF with $F=F_0$ , each horizontal face $H_{\\epsilon _0}$ can contain a translate of the corresponding horizontal face $H$ of $F$ (which is also a face of $P$ ) in the relative interior, since $H_{\\epsilon _0}$ (resp.", "$H$ ) is a translate of $S_1(\\epsilon _0)$ (resp.", "$S_1$ ).", "As $H_{\\epsilon _0}$ and $H$ are simplices, the former cannot be a Minkowski summand of the latter.", "Let $F^{\\prime }_{\\epsilon _0}$ be a translate of $F_{\\epsilon _0}$ so that a face of their intersection is a horizontal face $H_{\\epsilon _0}$ of both $F_{\\epsilon _0}$ and $F^{\\prime }_{\\epsilon _0}$ .", "In other words, the translate vector lies in the span of $\\pi (Y_2)$ so that the projections of $F_{\\epsilon _0}$ and $F^{\\prime }_{\\epsilon _0}$ onto the span of $\\pi (Y_2)$ are two simplices $S_2(\\epsilon _0),S^{\\prime }_2(\\epsilon _0)$ intersecting precisely at a vertex of both the simplices.", "For any $\\epsilon <\\epsilon _0$ , the corresponding simplices $S_2(\\epsilon ), S^{\\prime }_2(\\epsilon )$ satisfy $S_2(\\epsilon )$ (resp.", "$S^{\\prime }_2(\\epsilon )$ ) lying in the relative interior of $S_2(\\epsilon _0)$ (resp.", "$S^{\\prime }_2(\\epsilon _0)$ ), by Observation REF .", "It follows 
that $S_2(\\epsilon ), S^{\\prime }_2(\\epsilon )$ , which are the projection of $F_\\epsilon $ and the corresponding translate $F^{\\prime }_\\epsilon $ (by the same translation that takes $F_{\\epsilon _0}$ to $F^{\\prime }_{\\epsilon _0}$ ) onto the span of $Y_2$ , are disjoint, i.e.", "$F_\\epsilon , F^{\\prime }_\\epsilon $ are also disjoint.", "That is the face of normal $p$ of $P\\cap P^{\\prime }$ , where $P^{\\prime }$ is the corresponding translate of $P$ , is the intersection $F_{\\epsilon _0}\\cap F^{\\prime }_{\\epsilon _0}$ , which means the horizontal face $H_{\\epsilon _0}$ is also a face of $P\\cap P^{\\prime }$ .", "It follows that $P\\cap P^{\\prime }$ cannot be a Minkowski summand of $P$ , since $H_{\\epsilon _0}$ is not a Minkowski summand of the corresponding face $H$ of $P$ .", "In other words, $P$ does not have the generating property." ], [ "Acknowledgement", "The author would like to thank Roman Karasev for his patient reading and commenting on various pieces in early drafts." ] ]
2210.07690
[ [ "Flattened Graph Convolutional Networks For Recommendation" ], [ "Abstract Graph Convolutional Networks (GCNs) and their variants have achieved significant performances on various recommendation tasks.", "However, many existing GCN models tend to perform recursive aggregations among all related nodes, which can arise severe computational burden to hinder their application to large-scale recommendation tasks.", "To this end, this paper proposes the flattened GCN~(FlatGCN) model, which is able to achieve superior performance with remarkably less complexity compared with existing models.", "Our main contribution is three-fold.", "First, we propose a simplified but powerful GCN architecture which aggregates the neighborhood information using one flattened GCN layer, instead of recursively.", "The aggregation step in FlatGCN is parameter-free such that it can be pre-computed with parallel computation to save memory and computational cost.", "Second, we propose an informative neighbor-infomax sampling method to select the most valuable neighbors by measuring the correlation among neighboring nodes based on a principled metric.", "Third, we propose a layer ensemble technique which improves the expressiveness of the learned representations by assembling the layer-wise neighborhood representations at the final layer.", "Extensive experiments on three datasets verify that our proposed model outperforms existing GCN models considerably and yields up to a few orders of magnitude speedup in training efficiency." ], [ "Introduction", "Graph Convolutional Networks (GCNs), which generalize the Convolutional Neural Networks (CNNs) on graph-structured data [1], have achieved impressive performance on various graph-based learning tasks [2], [3], [4], including recommendation [5].", "The core idea behind GCNs is to iteratively aggregate information from locally nearby neighbors in a graph using neural networks [6].", "Each node at one GCN layer performs graph convolution operations to aggregate information from its nearby neighbors at the previous layer.", "By stacking multiple GCN layers, the information can be propagated across the far reaches of a graph, which makes GCNs capable of learning from both content information as well as the graph structure.", "As such, GCN-based models are widely adopted in recommendation tasks [7], [5], [8], [9], [10], [11], [12], [13], [14] which require learning from relational datasets.", "However, although existing GCN-based recommendation models have set new standards on various benchmark tasks [7], [5], [8], [15], [11], [16], [17], [18], [19], many of them suffer from two pitfalls.", "First, many existing GCN models suffer from high computational complexity due to the use of multi-layer architectures and complicated modeling techniques.", "This may largely hinder their application to large-scale real-world recommender systems.", "For example, the metapath-guided GCN models [20], [8] propose to construct manifold metapaths for neighborhood aggregation, which introduces high complexity on data pre-processing and information aggregation.", "Meanwhile, the attention-based GCN (GAT) models propose to generalize graph convolution with the attention mechanism [2], [15], [9], which, however, introduces an excessive amount of additional parameters to fit.", "On the other hand, the recent advances on simplified GCNs such as [21], [18], indicate that it is feasible to remove certain components from existing architectures while still maintaining comparable performances.", "This 
motivates us to rethink the essential components of building an expressive GCN model for recommendations.", "Second, the recursive neighborhood aggregation among all nodes imposes a severe computational burden yet may contribute little in recommendation tasks.", "Specifically, as pointed out in [22], the convolution in the GCN model is indeed a special form of Laplacian smoothing, which makes the features of nodes within the same cluster similar and thereby greatly eases the classification or regression task.", "Therefore, it is critical for GCN models to ensure that similar nodes have been grouped into the same cluster before performing neighborhood aggregations.", "In homogeneous networks, it is highly likely for two similar nodes to form a direct edge [23].", "However, the networks are heterogeneous in the context of recommendation.", "As such, the difficulty of recognizing similar nodes arises since we need to measure the correlation between any user-item pair based on their indirect relationships.", "Existing models usually measure the correlation by counting the historical interactions [8], [12], [9], [11].", "However, such intuitive measurements can easily be dominated by popular users/items.", "Besides, the correlations are unlikely to scale linearly with the number of historical interactions.", "Therefore, it is worth exploring the definition of a principled metric to quantitatively evaluate the importance of neighbors in heterogeneous networks.", "This would also pave the way for the design of an informative and efficient neighbor sampling method for GCN models.", "In this paper, we propose the flattened GCN (FlatGCN) model which has a much lower complexity compared to existing GCN-based recommendation models but is able to achieve superior performance.", "The main contributions are summarized as follows.", "First, we propose FlatGCN to simplify the recursive neighborhood aggregation in standard GCNs into flattened layer aggregations which require performing propagation only once over a single GCN layer.", "Moreover, the aggregation step in FlatGCN is a parameter-free operation such that it can be pre-computed to save vast memory and computational cost, thereby easing the training on large-scale graphs when applied to real-world recommender systems.", "Second, we propose an informative and efficient sampling method named neighbor-infomax to select the top informative neighbors for neighborhood aggregation.", "Specifically, we propose a principled metric to explicitly measure the correlation among neighboring nodes in a user-item bipartite graph and then select the most informative neighbors according to the evaluation results so as to improve the quality of neighborhood aggregation.", "Third, we propose a layer ensemble technique which improves the expressiveness of the learned representations by assembling the layer-wise neighborhood representations at the final layer.", "Extensive experiments on two benchmarks and one commercial dataset verify that our proposed model outperforms existing GCN models considerably and yields up to a few orders of magnitude speedup in training on recommendation tasks." 
], [ "METHODOLOGY", "In this section, we first introduce the neighbor information measurement to rank the neighbor set and select the most informative subset.", "Then we present the efficient and powerful flattened layer aggregation to represent the subgraph of a given user (item).", "Finally, we propose a novel layer ensemble architecture to predict the relevance score.", "We assume that we have learned users (items)' embeddings with given collaborative filtering models such as Meta2Vec [24] or LightGCN [18] which are set as the node features for the users (items).", "Figure: Sketch of FlatGCN.On the left, we present the flattened layer aggregation, utilizing the neighbor information measurement to rank the neighbors and select the top informative neighbors from each layer's neighbor set.For example, the sampling size of the middle layer is 2, and we select the top-2 informative neighbors, while the sampling size for the bottom layer is 4, and we select top-4 informative neighbors.The layer ensemble architecture is shown on the right." ], [ "Neighbor Information Measurement", "Starting from GraphSAGE [4] and FastGCN [6], GCN models downsample the neighbors to accelerate the recursive message passing.", "Since this downsampling process would inevitably lose information, it is critical to preserve as much information as possible in the preserved neighbor, which could maximally represent the root node.", "To this end, we introduce the mutual information measurement to evaluate to what extent a neighbor can describe the given root node.", "Besides, we also propose a simplified version of neighbor-root-node information measurement for efficient computation.", "Neighbor-Root-Node Mutual Information.", "Let $u$ denote the root node.", "We first select an arbitrary node $v\\in {\\mathcal {N}}_u$ .", "Let the random variable ${\\mathbf {v}}$ be the feature of a randomly picked node in ${\\mathcal {N}}_u$ , then the distribution of ${\\mathbf {v}}$ is $P_{\\mathbf {v}}= P({\\mathbf {v}}= {\\mathbf {e}}_v)$ , where ${\\mathbf {e}}_v$ is the outcome feature of node $v$ .", "Similarly, we assume ${\\mathbf {u}}$ be the feature associated with the root node $u$ , then the distribution of ${\\mathbf {u}}$ is $P_{\\mathbf {u}}= P({\\mathbf {u}}= {\\mathbf {e}}_u)$ , where ${\\mathbf {e}}_u$ is the outcome feature of node $u$ .", "After defining these notations, their mutual information $I({\\mathbf {v}},{\\mathbf {u}})$ is the KL-divergence between the joint distribution $P_{{\\mathbf {v}},{\\mathbf {u}}} = P({\\mathbf {v}}= {\\mathbf {e}}_v, {\\mathbf {u}}= {\\mathbf {e}}_u)$ and the product of marginal distributions $P_{\\mathbf {v}}\\otimes P_{\\mathbf {u}}$ : $\\begin{split}& I({\\mathbf {v}},{\\mathbf {u}}) = D_\\text{KL}(P_{{\\mathbf {v}},{\\mathbf {u}}}\\parallel P_{\\mathbf {v}}\\otimes P_{\\mathbf {u}})\\\\\\overset{(a)}{\\ge } &\\sup \\limits _{T\\in {\\mathcal {T}}} \\lbrace \\mathbb {E}_{{\\mathbf {e}}_v,{\\mathbf {e}}_u\\sim P_{{\\mathbf {v}},{\\mathbf {u}}}}[T({\\mathbf {e}}_v,{\\mathbf {e}}_u)] \\\\& - \\mathbb {E}_{{\\mathbf {e}}_v\\sim P_{{\\mathbf {v}},{\\mathbf {u}}^{\\prime }}\\sim P_{{\\mathbf {u}}^{\\prime }}}[e^{T({\\mathbf {e}}_v, {\\mathbf {e}}_{u^{\\prime }})-1}]\\rbrace ,\\end{split}$ where $(a)$ follows from $f$ -divergence representation based on KL-divergence [25]; random variable ${\\mathbf {u}}^{\\prime }$ denotes the feature associate with an arbitrary node $u^{\\prime }\\in {\\mathbf {U}}\\cup {\\mathbf {I}}$ ; $T\\in {\\mathcal {T}}$ is an arbitrary function that maps a 
pair of features to a real value, which reflects the correlation between the two features.", "After replacing the $f$ -divergence in Eq.", "(REF ) with a GAN-like divergence, Eq.", "(REF ) is written as follows, $\begin{split}I_{\text{GAN}}({\mathbf {v}},{\mathbf {u}}) \ge &\sup \limits _{T\in {\mathcal {T}}}\lbrace \mathbb {E}_{P_{{\mathbf {v}},{\mathbf {u}}}}[\text{log}\sigma (T({\mathbf {e}}_v,{\mathbf {e}}_u))] \\&+ \mathbb {E}_{P_{\mathbf {v}},P_{{\mathbf {u}}^{\prime }}}[\text{log}(1-\sigma (T({\mathbf {e}}_v,{\mathbf {e}}_{u^{\prime }})))]\rbrace ,\end{split}$ where $\sigma (\cdot )$ is the sigmoid function.", "Practical Simplification.", "In practice, we cannot go over the entire functional space to evaluate the exact value of $I_{\text{GAN}}({\mathbf {v}},{\mathbf {u}})$ .", "Since the node features are learned collaborative filtering embeddings, we parameterize $T({\mathbf {a}},{\mathbf {b}})$ by computing their inner product as $T({\mathbf {a}},{\mathbf {b}}) = {\mathbf {a}}^\top \cdot {\mathbf {b}}$ .", "Through the trained CF embeddings, we obtain a lower bound of the GAN-based mutual information as: $ \begin{split}I_{CF}({\mathbf {v}},{\mathbf {u}}) = & \log \sigma ({\mathbf {e}}_v^\top \cdot {\mathbf {e}}_u) \\ & +\frac{1}{N+M}\sum _{u^{\prime }\in {\mathbf {U}}\cup {\mathbf {I}}}\log (1-\sigma ({\mathbf {e}}_v^\top \cdot {\mathbf {e}}_{u^{\prime }})),\end{split}$ where $N=|{\mathbf {U}}|$ and $M=|{\mathbf {I}}|$ denote the numbers of users and items, respectively.", "In $I_{CF}({\mathbf {v}},{\mathbf {u}})$ , the first term reflects the correlation between node $v$ and the root node $u$ , i.e., to what extent node $v$ describes the root node $u$ .", "The second term evaluates the difference between the sampling set and the whole set of user-item embeddings, which estimates the particularity of node $v$ .", "Since $\log (1-\sigma (\cdot ))$ is a concave function, we can further simplify the second term of Eq.", "(REF ) with Jensen's inequality to obtain the following neighbor information evaluation function, $C(v,u) = \log \sigma ({\mathbf {e}}_v^\top \cdot {\mathbf {e}}_u) + \log (1-\sigma ({\mathbf {e}}_v^\top \cdot \overline{{\mathbf {e}}})),$ where $\overline{{\mathbf {e}}} = 1/(N+M)\sum _{u^{\prime }\in {{\mathbf {U}}\cup {\mathbf {I}}}} {\mathbf {e}}_{u^{\prime }}$ denotes the average embedding of all the user-item embeddings.", "With this neighbor information evaluation function in Eq.", "(REF ), we propose the neighbor infomax sampling and the flattened layer aggregation in the next subsection." 
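The evaluation function $C(v,u)$ above is cheap to compute once the CF embeddings are available. A direct NumPy transcription might look as follows; the variable names are illustrative, and `emb` is assumed to hold the pre-trained embeddings of all $N+M$ users and items.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def neighbor_info_score(e_v, e_u, e_bar):
    """C(v, u) = log sigma(e_v . e_u) + log(1 - sigma(e_v . e_bar))."""
    return np.log(sigmoid(e_v @ e_u)) + np.log(1.0 - sigmoid(e_v @ e_bar))

# e_bar is the average of all user and item embeddings:
# e_bar = emb.mean(axis=0)
```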
], [ "Flattened Layer Aggregation", "Mathematically, let ${\\mathcal {G}}({\\mathbf {U}},{\\mathbf {I}},{\\mathbf {R}})$ denote the user-item interaction graph.", "${\\mathbf {U}}=\\lbrace u_1,\\cdots ,u_N\\rbrace $ denotes the user set, and ${\\mathbf {I}}= \\lbrace i_1,\\cdots ,i_M\\rbrace $ denotes the item set, while ${\\mathbf {R}}\\in \\mathbb {R}^{N\\times M}$ describes the user-item interaction matrix.", "After introducing the notations, the argument adjacent matrix for users and items can be defined as follows, ${\\mathbf {A}}=\\left[\\begin{matrix}\\textbf {0} & {\\mathbf {R}}\\\\{\\mathbf {R}}^\\top & \\textbf {0}\\end{matrix}\\right] \\in \\mathbb {R}^{(N+M)\\times (N+M)}.$ Therefore, we can define the multi-hop neighbors of a given node $u$ according to the adjacent matrix ${\\mathbf {A}}$ .", "Specially, ${\\mathcal {N}}_u^0=\\lbrace u\\rbrace $ .", "The first-order neighbor set is given by ${\\mathcal {N}}_u^1=\\lbrace v|{\\mathbf {A}}_{uv} =1\\rbrace $ , while the second-order neighbor set is denoted by ${\\mathcal {N}}_u^2=\\lbrace v|{\\mathbf {A}}_{uv}^2 \\ge 1\\rbrace $ , so on and so forth.", "As shown in Fig.", "REF , for a given root node $u$ , we select the most informative subset ${\\mathcal {S}}_u^k$ from a given neighbor set ${\\mathcal {N}}_u^k$ by greedily selecting the top-$S^k$ informative neighbors, namely, ${\\mathcal {S}}_u^k = \\text{top-rank}\\lbrace C(v,u)\\ \\text{for\\ } v \\in {\\mathcal {N}}_u^k, \\ S^k\\rbrace ,$ where top-rank is the function that returns the indices of the top-$S^k$ value and $S^k$ denotes the sampling size of the $k$ -th layer.", "Based on the definition of the neighbor infomax selection in Eq.", "(REF ), we can utilize flat infomax aggregation to extract the information of different layers, ${\\mathbf {h}}_u^k = \\frac{1}{S^k}\\sum _{v \\in {\\mathcal {S}}_u^k} {\\mathbf {e}}_v.$ Noted that the aggregation step Eq.", "(REF ) and Eq.", "(REF ) is a parameter-free operation such that it can be done in a pre-processing manner only once, thereby saving vast memory and computational cost to ease the training on large-scale graphs and application to real-world recommender systems." 
], [ "Layer Ensemble Architecture", "With Eq.", "(REF ), FlatGCN aggregates the information of a $K$ -layer subgraph into $K$ vectors.", "Thus for any give user-item pair $(u,i)$ , we have $\\lbrace {\\mathbf {h}}_u^0,\\cdots ,{\\mathbf {h}}_u^K\\rbrace $ to represent the user subgraph and $\\lbrace {\\mathbf {h}}_i^0,\\cdots ,{\\mathbf {h}}_i^K\\rbrace $ to represent the item subgraph.", "The success of FM [26], [27] inspires that a hand-crafted cross-interaction function may be better than brutally learning with Multi-Layer Perception (MLP) function.", "As shown in Fig.", "REF , we first compute the interactions between different layers combinations for every layer of user's and the item's subgraphs.", "In such a way, we can easily take full advantage of the aggregation embedding of each subgraph layer for both user and item, and thus we model the complex interaction from user $u$ to item $v$ as, $\\hat{y}_{uv} = \\text{MLP}[\\parallel _{i=0}^K\\parallel _{j=0}^K({\\mathbf {h}}_u^i)^\\top \\cdot {\\mathbf {h}}_v^j],$ where $\\parallel $ indicates the concatenation operation, $({\\mathbf {h}}_u^i)^\\top \\cdot {\\mathbf {h}}_v^j$ denotes the relevance score between the $i$ th-layer representation of the user subgraph and $j$ th-layer of the item subgraph.", "It is noteworthy that Eq.", "(REF ) actually summarize the complicated relations between the user-item pair $(u,i)$ into a $K^2$ -dimension vectors, that greatly reduce the number of the parameters in the followed MLP function.", "Table: Overall Performance Comparison" ], [ "Experiment Setup", "Datasets.", "We utilize two benchmark datasets and a commercial dataset to evaluate the recommendation performance: AmazonBook and Yelp2018 are two benchmark datasets used in [17], [18].", "The WeChat dataset contains users' clicks on different articles recorded by the WeChat platform.", "We use Meta2Vec [24] and the state-of-the-art (SOTA) LightGCN [18] to produce the pre-trained embeddings of users and items in datasets to feed into the GCN model as the raw features.", "Evaluation Protocols.", "We randomly split the entire user-item recommendation records of each dataset into an embedding training set, GCN model training (validation) set, and a test set to evaluate the performance, where each of them contains 65%, 15%, and 20% of the full records, respectively.", "Besides, we apply the wildly used full-rank evaluation protocols [18] to evaluate the recommendation performance.", "Comparison Methods.", "We compare the following SOTA baselines: I. 
GCN [4] applies a 2-layer GCN to process input embeddings following PinSAGE [7], followed by an inner product function.", "II.", "IntentGC [5] introduces a fast architecture named IntentNet to avoid unnecessary feature interactions for efficient training.", "III.", "NGCF* [17] is an embedding learning method.", "We fix its inputs as the given user-item embeddings and only train its propagation layers.", "We denote it as NGCF*.", "IV.", "LightGCN* [18] is the SOTA recommendation model.", "Similar to NGCF*, we fix its inputs as the given user-item embeddings and only train its propagation layers.", "Parameter Settings.", "The optimal parameter settings for all comparison methods are obtained either by empirical study or from the settings suggested in the original papers.", "We consider 3-layer subgraphs and set the sampling size $S^k=25$ for $k=1,2$ .", "In particular, we adopt Adam [28] as the optimizer with a learning rate of $0.001$ and an $L_2$ regularization weight of $10^{-5}$ .", "Table: Comparison of sampling policies." ], [ "Result Analysis", "Performance Comparison.", "Table REF reports the performance on the three datasets w.r.t.", "the precision (PRE@20), recall (REC@20), and ndcg (NDCG@20) metrics.", "The improvement gives the relative enhancement of FlatGCN over the best baseline (underlined).", "The superscript * denotes statistical significance against the best baseline with $p<0.05$ .", "Overall, our proposed FlatGCN statistically significantly outperforms the best baselines on all three datasets w.r.t.", "all evaluation metrics and for both Meta2Vec and LightGCN embeddings.", "These results verify the consistent superiority of FlatGCN over SOTA GCN-based recommendation methods in characterizing the relevance between users and items.", "Particularly for the commercial WeChat dataset, FlatGCN leads to an enhanced performance by more than 30% on Meta2Vec embeddings.", "With LightGCN embeddings, FlatGCN outperforms the best baselines by over 60%.", "Figure: Comparison of the training time.", "All methods are run on the same GPU device.", "Learning Efficiency.", "One main advantage of FlatGCN is its low training complexity.", "In Fig.", "REF , we present the average training time of FlatGCN and the other baselines.", "We also present the preprocessing time of FlatGCN.", "The results in Fig.", "REF show that FlatGCN achieves superior performance with a one to two orders of magnitude speedup in training time.", "Notably, FlatGCN is 10 times faster than the fastest baseline: NGCF.", "Moreover, the preprocessing time of FlatGCN is negligible compared with the training time of the baselines.", "Sampling Comparison.", "In Table REF , we compare three different neighbor sampling methods on the WeChat dataset: 1) Random: Random walk-based sampling [7], which simulates random walks starting from each node and computes the L1-normalized visit count of the neighbors visited by the random walk.", "2) Intuitive: First-order proximity-based sampling [8], [12], [9], examining the neighborhood similarity based on the edge weights (e.g., number of clicks).", "3) Infomax: Our proposed neighbor infomax selection.", "Overall, the random sampling method generates the worst performance.", "Intuitive sampling outperforms random sampling since the edge weights can represent the importance of an edge.", "The Infomax sampling method achieves the best performance, due to its ability to select the most informative subset from the neighbor set." 
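As promised above, here is a minimal sketch of the layer-ensemble scoring function from the Layer Ensemble Architecture section. It is illustrative rather than the authors' code: the `mlp` module is a hypothetical stand-in for whatever small network maps the cross-layer features to a score.

```python
import torch

def layer_ensemble_score(h_u, h_v, mlp):
    """h_u, h_v: lists [h^0, ..., h^K] of per-layer aggregated embeddings
    for the user-side and item-side subgraphs. One dot product per layer
    pair (i, j) is computed, concatenated, and mapped to a score by the MLP."""
    pairwise = [torch.dot(hu_i, hv_j) for hu_i in h_u for hv_j in h_v]
    return mlp(torch.stack(pairwise))  # one cross-layer feature per pair (i, j)
```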
], [ "Conclusion", "In this paper, we introduced the FlatGCN model which is able to achieve superior performance along with a few orders of magnitude speedup in training compared with existing models.", "Specifically, we proposed a flattened GCN architecture which is able to perform neighborhood propagation only once over a single GCN layer instead of recursively.", "Moreover, we proposed an informative sampling method named neighbor-infomax to select the top informative neighbors for neighborhood aggregation.", "Finally, we proposed a layer ensemble technique to improve the model expressiveness.", "Extensive experiments verified the superiority of the proposed FlatGCN on both learning performance and training speed." ] ]
2210.07769
[ [ "Limit laws in the lattice problem. III. Return to the case of boxes" ], [ "Abstract We study the error of the number of points of a lattice $L$ that belong to a rectangle, centred at $0$, whose axes are parallel to the coordinate axes, dilated by a factor $t$ and then translated by a vector $X \\in \\mathbb{R}^{2}$.", "When we consider the second order moment of the error relatively to $X \\in \\mathbb{R}^{2}/L$, one shows that, when $t$ is random and becomes large and when the error is normalized by a quantity which behaves, in the admissible case, as $\\sqrt{\\log(t)}$, it converges in distribution to an explicit positive constant.", "In the case of a typical lattice $L$, we show that this result still holds but the normalisation is more important, around $\\log(t)$.", "We also show that when $L=\\mathbb{Z}^{2}$, the error, when normalized by $t$, converges in distribution when $t$ is random and becomes large and we compute the moments of the limit distribution." ], [ "french (Résumé en français)Nous étudions l'erreur du nombre de points d'un réseau $L$ qui appartiennent à un rectangle, centré en 0, dont les axes sont parallèles aux axes de coordonnées, dilaté d'un facteur $t$ puis translaté d'un vecteur $X \\in \\mathbb {R}^{2}$ .", "Quand nous considérons le moment d'ordre 2 de l'erreur relativement à $X \\in \\mathbb {R}^{2}/L$ , on montre que, quand $t$ est aléatoire et devient grand et quand l'erreur est normalisée par une quantité qui se comporte, dans le cas admissible, comme $\\sqrt{\\log (t)}$ , elle converge en loi vers une constante positive explicite.", "Dans le cas d'un réseau $L$ typique, on montre que ce résultat tient toujours mais la normalisation est toutefois plus importante, autour de $\\log (t)$ .", "On montre aussi que quand $L = \\mathbb {Z}^{2}$ , l'erreur, quand elle est normalisée par $t$ , converge en loi quand $t$ est aléatoire et devient grand et on calcule les moments de la loi limite.", "english (Engish abstract)We study the error of the number of points of a lattice $L$ that belong to a rectangle, centred at 0, whose axes are parallel to the coordinate axes, dilated by a factor $t$ and then translated by a vector $X \\in \\mathbb {R}^{2}$ .", "When we consider the second order moment of the error relatively to $X \\in \\mathbb {R}^{2}/L$ , one shows that, when $t$ is random and becomes large and when the error is normalized by a quantity which behaves, in the admissible case, as $\\sqrt{\\log (t)}$ , it converges in distribution to an explicit positive constant.", "In the case of a typical lattice $L$ , we show that this result still holds but the normalisation is more important, around $\\log (t)$ .", "We also show that when $L=\\mathbb {Z}^{2}$ , the error, when normalized by $t$ , converges in distribution when $t$ is random and becomes large and we compute the moments of the limit distribution.", "english" ], [ "Introduction", "Let $P$ be a measurable subset of $\\mathbb {R}^{d}$ of non-zero finite Lebesgue measure.", "We want to evaluate the following cardinal number when $t \\rightarrow \\infty $ : $ N(tP + X, L) = | (tP + X) \\cap L|$ where $X \\in \\mathbb {R}^{d}$ , $L$ is a lattice of $\\mathbb {R}^{d}$ and $t P + X$ denotes the set $P$ dilated by a factor $t$ relatively to 0 and then translated by the vector $X$ .", "Under mild regularity conditions on the set $P$ , one can show that: $ N(tP + X, L) = t^{d}\\frac{\\text{Vol}(P)}{\\text{Covol}(L)} + o(t^{d}) $ where $o(f(t))$ denotes a quantity such that, when divided by $f(t)$ , it goes to 
0 when $t \\rightarrow \\infty $ and where $\\text{Covol}(L)$ is the volume of a fundamental set of the lattice $L$ .", "We are interested in the error term $\\mathcal {R}(tP + X,L) = N(tP + X, L) - t^{d}\\frac{\\text{Vol}(P)}{\\text{Covol}(L)} \\textit {.", "}$ In the case where $d=2$ and where $P$ is the unit disk $\\mathbb {D}^{2}$ , Hardy's conjecture in $\\cite {hardy1917average}$ stipulates that we should have for all $\\epsilon > 0$ , $\\mathcal {R}(t \\mathbb {D}^{2}, \\mathbb {Z}^{2}) = O(t^{\\frac{1}{2}+\\epsilon }) $ where $Y = O(X)$ means that there exists $D > 0$ such that $ |Y| \\leqslant D |X|$ .", "One of the results in this direction has been established by Iwaniec and Mozzochi in $\\cite {iwaniec1988divisor}$ .", "They have proven that for all $\\epsilon > 0$ , $\\mathcal {R}(t \\mathbb {D}^{2}, \\mathbb {Z}^{2}) = O(t^{\\frac{7}{11}+\\epsilon }) \\textit {.", "}$ This result has recently been improved by Huxley in [7].", "Indeed, he has proven that: $\\mathcal {R}(t \\mathbb {D}^{2}, \\mathbb {Z}^{2}) = O(t^{K} \\log (t)^{\\Lambda }) $ where $K = \\frac{131}{208} $ and $\\Lambda = \\frac{18627}{8320}$ .", "In dimension 3, Heath-Brown has proven in $\\cite {heath2012lattice}$ that: $\\mathcal {R}(t \\mathbb {D}^{3}, \\mathbb {Z}^{3}) = O(t^{\\frac{21}{16}+\\epsilon }) \\textit {.", "}$ These last two results are both based on estimating what are called $\\textit {exponential sums}$ .", "Furthermore, in both cases, the error is considered in a deterministic way.", "Another approach was followed first by Heath-Brown in $\\cite {heath1992distribution}$ and then by Bleher, Cheng, Dyson and Lebowitz in $\\cite {bleher1993distribution}$ .", "They took interest in the case where the dilation parameter $t$ is random.", "More precisely, they assumed that $t$ was distributed according to the measure $\\rho (\\frac{t}{T}) dt$ (which is absolutely continuous with respect to the Lebesgue measure), where $\\rho $ is a probability density on $[0,1]$ and $T$ is a parameter that goes to infinity.", "In that case, Bleher, Cheng, Dyson and Lebowitz showed the following result (which generalizes the result of Heath-Brown): Theorem ($\\cite {bleher1993distribution}$ ) Let $\\alpha \\in [0,1[^{2}$ .", "There exists a probability density $p_{\\alpha }$ on $\\mathbb {R}$ such that for every piecewise continuous and bounded function $g: \\mathbb {R} \\longrightarrow \\mathbb {R}$ , $ \\lim _{T \\rightarrow \\infty } \\frac{1}{T} \\int _{0}^{T} g \\big ( \\frac{\\mathcal {R}(t \\mathbb {D}^{2} + \\alpha , \\mathbb {Z}^{2})}{\\sqrt{t}} \\big ) \\rho (\\frac{t}{T}) dt = \\int _{\\mathbb {R}} g(x) p_{\\alpha }(x) dx \\textit {.", "}$ Furthermore $p_{\\alpha }$ can be extended as an analytic function over $\\mathbb {C}$ and satisfies that for every $\\epsilon > 0$ , $p_{\\alpha }(x) = O(e^{-|x|^{4- \\epsilon }})$ when $x \\in \\mathbb {R}$ and when $|x| \\rightarrow \\infty $ .", "We want to follow this approach on another problem.", "Namely, let us take $a > 0$ and $b > 0$ and let us define $\\text{Rect}(a,b)$ as the rectangle centred around $(0,0)$ whose vertices are $(a,b)$ , $(-a,b)$ , $(-a,-b)$ and $(a,-b)$ .", "Let us recall the following definitions: Definition 1 For a lattice $L$ of $\\mathbb {R}^{d}$ , its dual lattice $L^{\\perp }$ is defined by $L^{\\perp } = \\lbrace x \\in \\mathbb {R}^{d} \\text{ } | \\text{ } \\forall l \\in L \\text{, } <l,x> \\in \\mathbb {Z} \\rbrace $ where $< , >$ is the usual euclidean scalar product over $\\mathbb {R}^{d}$ .", "Definition 2 A lattice $L$ of $\\mathbb 
{R}^{d}$ is called admissible if there exists $C > 0$ such that for all $l=(l_{1},\\cdots ,l_{d}) \\in L-\\lbrace 0 \\rbrace $ , $| \\text{Num}(l)| \\geqslant C $ where $\\text{Num}(l) = l_{1} \\cdots l_{d}$ .", "Throughout the rest of this article, we are going to use the following notation: $\\text{Num}(L) = \\inf \\lbrace | \\text{Num}(l) | \\text{ } | \\text{ } l \\in L-\\lbrace 0 \\rbrace \\rbrace \\textit {.}", "$ With this notation, saying that $L$ is admissible is equivalent to saying that $\\text{Num}(L) > 0$ .", "We recall that if $L$ is an admissible lattice, $L^{\\perp }$ is also admissible (see, for example, $\\cite {skriganov1994constructions}$ ).", "Let us also define the following quantity: Definition 3 For every lattice $L$ of $\\mathbb {R}^{2}$ , for every $t > 0$ , one sets: $V(L,t) = \\sum _{\\begin{array}{c} l \\in L \\\\ 0 < \\Vert l \\Vert \\leqslant t \\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} $ when for all $l \\in L$ such that $ 0 < \\Vert l \\Vert \\leqslant t $ , $\\text{Num}(l) \\ne 0$ and where $\\Vert \\cdot \\Vert $ is the usual euclidean norm over $\\mathbb {R}^{2}$ .", "We are going to use the following notation: for $f(t,X)$ a function of $\\mathbb {R}^{+} \\times \\mathbb {R}^{2}$ such that $f(t,\\cdot )$ is $L$ -periodic, $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}(f)(t) = \\frac{1}{\\textit {covol}(L)} \\int _{X \\in \\mathbb {R}^{2} / L } f(t,X) dX \\textit {.}", "$ Let us take a probability density $\\rho $ over $[0,1]$ .", "One of the goals of this article is to prove the following theorem: Theorem 1 Let $L$ be an admissible lattice of $\\mathbb {R}^{2}$ .", "Then, first, there exists $C > 0$ such that for all $t$ large enough: $ \\frac{1}{C} \\log (t) \\leqslant V(L,t) \\leqslant C \\log (t) $ and we are going to write this $V(L,t) = \\Theta (\\log (t))$ .", "Second, if $P=\\text{Rect}(a,b)$ , $ \\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( (\\frac{\\mathcal {R}(t P + X ,L)}{\\sqrt{V(L^{\\perp },t)}})^{2} \\right) \\textit { converges in distribution and in probability towards } \\frac{1}{4 \\pi ^{4} \\text{Covol}(L)^{2}} $ when $t \\in [0, T]$ is distributed according to $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ and when $T \\rightarrow \\infty $ .", "Theorem $\\ref {chap4:thm1}$ can be interpreted as follows: after averaging over $X$ , when $t$ is large and random, the error committed by making the approximation $N(t \\text{Rect}(a,b) + X, L) \\approx t^{2} \\frac{\\text{Area}(\\text{Rect}(a,b))}{\\text{Covol}(L)}$ is of order $\\sqrt{\\log (t)}$ .", "This normalization is in fact suggested by $\\cite {trevisan2021}$ .", "This theorem is in fact also suggested by Theorem $\\ref {chap4:thm2}$ , which we will prove (see the next section for some heuristic explanations).", "We are going to state it.", "Let us designate the space of unimodular lattices of $\\mathbb {R}^{d}$ by the notation ${S}_{d}$ .", "We call $\\Delta _{r}$ the following set: $\\Delta _{r} = \\lbrace \\text{Diag}(e^{t_{1}}, \\cdots , e^{t_{d}}) \\text{ } | \\text{ } (t_{1},\\cdots ,t_{d}) \\in \\mathbb {Z}^{d} \\text{, } t_{1}+ \\cdots + t_{d} = 0 \\textit { and } \\Vert (t_{1},\\cdots ,t_{d}) \\Vert \\leqslant r \\rbrace \\textit {.}", "$ We have that $\\Delta _{r} \\subset {S}_{d}$ and $|\\Delta _{r}| = n_{r}r^{d-1} + o(r^{d-1})$ when $r \\rightarrow \\infty $ .", "We also call $\\Delta = \\lbrace \\text{Diag}(e^{t_{1}}, \\cdots , e^{t_{d}}) \\text{ } | \\text{ } (t_{1},\\cdots ,t_{d}) \\in \\mathbb {Z}^{d} 
\\text{ and } t_{1}+ \\cdots + t_{d} = 0 \\rbrace \\textit { and}$ for every lattice $L$ of $\\mathbb {R}^{d}$ , we call $\\Vert L \\Vert = \\min \\lbrace \\Vert l \\Vert \\text{ } | \\text{ } l \\in L-\\lbrace 0 \\rbrace \\rbrace \\textit {.}", "$ With these notations, Theorem $\\ref {chap4:thm2}$ states as follows: Theorem 2 Let $L \\in {S}_{d}$ be an admissible lattice.", "Let us take $(\\theta _{\\delta })_{\\delta \\in \\Delta }$ a sequence of independent identically distributed real random variables that are symmetric and admit a moment of order 3, and whose underlying randomness $\\omega $ belongs to a probability space $\\Omega $ .", "Let us set: $ \\tilde{V}(L,r) = \\sum _{\\delta \\in \\Delta _{r}} \\frac{1}{\\Vert \\delta L \\Vert ^{2 d}} $ and $\\tilde{S}(L,\\omega ,r) = \\sum _{\\delta \\in \\Delta _{r}} \\frac{\\theta _{\\delta }(\\omega )}{\\Vert \\delta L \\Vert ^{d}} \\textit {.", "}$ Then one has: $ \\tilde{V}(L,r) = \\Theta (r^{d-1}) $ and $ \\frac{\\tilde{S}(L,\\omega ,r)}{\\sqrt{\\tilde{V}(L,r)}}$ converges in distribution towards the standard normal distribution when $r \\rightarrow \\infty $ .", "Theorem $\\ref {chap4:thm2}$ says that, for an admissible lattice, $\\tilde{S}(L,\\omega ,r)$ , normalized by a quantity of order $r^{\\frac{d-1}{2}}$ , converges in distribution towards a centred normal distribution.", "When $L$ is typical, this need no longer be true: the normalization must be stronger because the orbit $\\delta L$ goes repeatedly into the cusp of the space ${S}_{d}$ , $\\textit {id est}$ the zone where $\\Vert \\delta L \\Vert $ is small.", "In the typical case (typical in the sense of the unique Haar probability measure $\\mu _{d}$ over ${S}_{d}$ ), we are going to prove the following result: Theorem 3 For every $\\epsilon > 0$ , for a typical $L \\in {S}_{d}$ , one has that $\\tilde{V}(L,r) = O(r^{2 d - 2 + \\epsilon })$ and $\\frac{\\tilde{V}(L,r)}{r^{ d - 1}} \\underset{r \\rightarrow \\infty }{\\rightarrow } \\infty \\textit {.", "}$ In the case where $d=2$ , one also has that for every $\\epsilon > 0$ , for $L$ a typical lattice, $V(L,t) = O(\\log (t)^{2+\\epsilon })$ and $\\frac{V(L,t)}{\\log (t)^{2}} \\underset{t \\rightarrow \\infty }{\\rightarrow } \\infty \\textit {.", "}$ Furthermore, in this case, if $P=\\text{Rect}(a,b)$ , $ \\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( (\\frac{\\mathcal {R}(t P + X ,L)}{\\sqrt{V(L^{\\perp },t)}})^{2} \\right) \\textit { converges in distribution and in probability towards } \\frac{1}{4 \\pi ^{4} \\text{Covol}(L)^{2}} $ when $t \\in [0, T]$ is distributed according to $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ and when $T \\rightarrow \\infty $ .", "In particular, in the typical case, the convergence in distribution and in probability of Theorem $\\ref {chap4:thm1}$ still holds.", "Yet, the normalization is larger in this case: \"around\" $\\log (t)$ whereas before it was of order $\\sqrt{\\log (t)}$ .", "Finally, we are also going to tackle another extreme case: the case where $L = \\mathbb {Z}^{2}$ .", "$\\mathbb {Z}^{2}$ is a unimodular lattice such that there exists $l \\in L-\\lbrace 0 \\rbrace $ such that $Num(l) = 0$ (for example $(1,0)$ ).", "Typically, for $L \\in {S}_{d}$ , $\\text{Num}(L)$ is zero but there does not exist a non-zero $l \\in L$ such that $Num(l) = 0$ .", "In that case, with $\\rho $ being a probability density over $[0,1]$ , we are going to prove the following theorem: Theorem 4 For all $x \\in \\mathbb {R}$ , when $t \\in [0,T]$ is distributed according to the probability 
measure $\\frac{1}{T} \\rho ( \\frac{t}{T}) dt$ on $[0,T]$ then, when $T \\rightarrow \\infty $ , $\\frac{\\mathcal {R}(t \\text{Rect}(a,a) + (x,x) ,\\mathbb {Z}^{2})}{t}$ converges in distribution.", "Furthermore, the limit distribution $\\beta $ has a compact support included in $[-4,4]$ and for every $k \\in \\mathbb {N}$ , one has that $\\int _{x \\in \\mathbb {R}} x^{k} d \\beta (x) = a_{k} $ where $a_{k} = \\frac{4^{k}(1 + (-1)^{k})(y^{k+1} + (1-y)^{k+1})}{2(k+1)}$ with $y = |t_{2,0} - t_{1,0}|$ where $t_{2,0}$ is the first $t \\geqslant 0$ such that $ -t + x \\in \\mathbb {Z}$ and $t_{1,0}$ is the first $t \\geqslant 0$ such that $ t + x \\in \\mathbb {Z}$ .", "In particular, we see that the normalization in this case is much more important than before and that the error $\\mathcal {R}(t \\text{Rect}(a,a) + (x,x), \\mathbb {Z}^{2})$ in this case is of order $t$ .", "In the next section, we are going to give some heuristic ideas about all these results and then give the plan of the rest of the paper." ], [ "Calculation of a Fourier transform, heuristic and plan of the rest of the paper", "To give some heuristic explanations, we will apply the Poisson formula which states that for a smooth and compact supported function $f: \\mathbb {R}^{2} \\longrightarrow \\mathbb {R}$ and for $L$ a lattice of $\\mathbb {R}^{2}$ , one has that for every $X \\in \\mathbb {R}^{2}$ $\\sum _{l \\in L} f(l+X) = \\frac{1}{\\text{Covol}(L)} \\sum _{l \\in L^{\\perp }} \\hat{f}(l) e^{-2 i \\pi <l,X>}$ where the Fourier transform $\\hat{f}$ is defined by: for every $\\xi \\in \\mathbb {R}^{2}$ $\\hat{f}(\\xi ) = \\int _{x \\in \\mathbb {R}^{2}} f(x) e^{2 i \\pi <\\xi ,x>} dx \\textit {.", "}$ In our case, we are interested into the following quantity: $N(t P + X,L) = \\sum _{l \\in L} \\mathbf {1}_{t P + X}(l)$ where $\\mathbf {1}_{t P + X}$ is the indicator function of the set $t P + X $ with $P = \\text{Rect}(a,b)$ .", "Yet, the function $\\mathbf {1}_{t P + X}$ is not smooth enough.", "But the Poisson formula gives us a good idea of the phenomena that are at play and that is because the Poisson formula applies after having realized a $\\textit {smoothing}$ of the studied problem (see Section 4).", "So, we are going to calculate the Fourier transform of $\\mathbf {1}_{t P + X}$ in the next subsection.", "Then we will give the heuristic and announce the plan of the rest of the paper." 
], [ "Calculation of the Fourier transform of $\\mathbf {1}_{t P + X}$", "The main objective of this subsection is to prove the following proposition: Proposition 1 For $P = \\text{Rect}(a,b)$ , for $l \\in \\mathbb {R}^{2}$ , for $t > 0$ , one has: $ \\widehat{\\mathbf {1}_{t P + X}}(l) = \\frac{1}{\\pi ^{2}}\\frac{\\sin (2 \\pi tl_{1}a)}{ l_{1}} \\frac{\\sin (2 \\pi t l_{2}b)}{ l_{2}}e^{2 i \\pi <l,X>} $ where we convey that $\\frac{\\sin (0)}{0} = 1$ .", "Let $l \\in \\mathbb {R}^{2}$ and $t > 0$ .", "By making the change of variable $x= t y + X$ , one has that: $\\widehat{\\mathbf {1}_{t P + X}}(l) = e^{2 i \\pi <l,X>} t^{2} \\int _{y \\in P} e^{2 i \\pi t <l,y>} dy \\textit {.", "}$ Yet, we recall that $P = \\text{Rect}(a,b)$ and so one has: $\\int _{y \\in P} e^{2 i \\pi t <l,y>} = \\frac{\\sin (2 \\pi t l_{1} a)}{ \\pi t l_{1}}\\frac{\\sin (2 \\pi t l_{2} b)}{ \\pi t l_{2}} \\textit {.", "}$ So, with Equation ($\\ref {chap4:eq20}$ ) and Equation ($\\ref {chap4:eq21}$ ), one gets that: $\\widehat{\\mathbf {1}_{t P + X}}(l) = \\frac{e^{2 i \\pi <l,X>}}{\\pi ^{2}} \\frac{\\sin (2 \\pi t l_{1} a)}{l_{1}}\\frac{\\sin (2 \\pi t l_{2} b)}{l_{2}} \\textit {.", "}$ Now, we are going to give some heuristic explanations about the main results of this paper and the plan of the rest of the paper." ], [ "Elements of heuristic and plan of the rest of the paper", "Heuristically, the Poisson formula (see Equation $(\\ref {chap4:eq1100})$ ) gives us that: $N(t P + X,L) = \\sum _{l \\in L} \\mathbf {1}_{t P + X}(l) = \\frac{1}{\\text{Covol}(L)} \\sum _{l \\in L^{\\perp }} \\frac{1}{\\pi ^{2}}\\frac{\\sin (2 \\pi tl_{1}a)}{ l_{1}} \\frac{\\sin (2 \\pi t l_{2}b)}{ l_{2}}e^{2 i \\pi <l,X>}$ with $P = \\text{Rect}(a,b)$ .", "Yet, and this link was used a lot in $\\cite {trevisan2021}$ , in the typical case, the smallest $|l_{1} l_{2}|$ can be seen as a $\\Vert \\begin{pmatrix} e^{t} & 0 \\\\ 0 & e^{-t} \\end{pmatrix} L \\Vert $ .", "This link can also be seen in the following proposition (that will be useful for us later): Proposition 2 ([9]) $\\text{Num}(L) = d^{- \\frac{d}{2}} \\inf \\lbrace \\Vert \\delta L \\Vert ^{d} \\text{ } | \\text{ } \\delta \\in \\Delta \\rbrace $ where $\\Delta $ was defined by Equation $\\ref {chap4:ajout}$ .", "So it suggests that the term $\\frac{\\sin (2 \\pi tl_{1}a)}{ l_{1}} \\frac{\\sin (2 \\pi t l_{2}b)}{ l_{2}}e^{2 i \\pi <l,X>}$ can be as seen as $\\frac{\\theta _{\\delta }(\\omega )}{\\Vert \\delta L \\Vert ^{d}}$ and so there exists a link between Theorem $\\ref {chap4:thm1}$ and Theorem $\\ref {chap4:thm2}$ .", "Yet, to prove Theorem $\\ref {chap4:thm1}$ from Theorem $\\ref {chap4:thm2}$ , there are two difficulties.", "The first one is that Theorem $\\ref {chap4:thm1}$ and Theorem $\\ref {chap4:thm2}$ are about admissible unimodular lattices, which form a negligible set with respect to $\\mu _{2}$ and the relation about $|l_{1} l_{2}|$ and $\\Vert \\begin{pmatrix} e^{t} & 0 \\\\ 0 & e^{-t} \\end{pmatrix} L \\Vert $ is not certain in this case.", "The second one is the fact that the $\\sin (2 \\pi tl_{1}a)$ and $\\sin (2 \\pi t l_{2}a)$ do not behave like independent random variable when $T \\rightarrow \\infty $ and with $t$ being distributed according to the probability measure $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ .", "Sure we can, like we have done in $\\cite {trevisan2021}$ and in $\\cite {trevisan2021limit}$ , reduce the study to the $l \\in L^{\\perp }$ that are prime, which is a notion defined by: Definition 4 For a lattice $L$ , we say that a vector of 
$L$ is prime if it is not a non-trivial integer multiple of another vector of $L$ .", "Yet, even in that case, it is not true that the $\\sin (2 \\pi tl_{1}a)$ and the $\\sin (2 \\pi t l_{2}b)$ behave asymptotically like independent random variables.", "Indeed, let $\\alpha \\ne \\alpha ^{\\prime }$ be real irrational numbers with bounded partial quotients in their continued fractions and let us define: $ \\Gamma _{\\alpha ,\\alpha ^{\\prime }} = \\lbrace (n + m \\alpha , n+ m \\alpha ^{\\prime } ) \\text{ } | \\text{ } n,m \\in \\mathbb {Z}^{2} \\rbrace \\textit {.", "}$ Then, we know from $\\cite {skriganov1994constructions}$ that $\\Gamma _{\\alpha ,\\alpha ^{\\prime }}$ is admissible in the sense of Definition $\\ref {chap4:def2}$ .", "So, it is also the case of $\\Gamma _{\\alpha ,\\alpha ^{\\prime }}^{\\perp } = \\frac{1}{\\alpha ^{\\prime } - \\alpha } \\lbrace (n + m \\alpha , n+ m \\alpha ^{\\prime } ) \\text{ } | \\text{ } n,m \\in \\mathbb {Z}^{2} \\rbrace \\textit {.", "}$ Then, if we consider, for $k \\geqslant 2$ , $v_{1}(k) = \\frac{1}{\\alpha ^{\\prime } - \\alpha } (k + (k+1) \\alpha , k + (k+1) \\alpha ^{\\prime } ) \\textit {, }$ $v_{2}(k) = \\frac{1}{\\alpha ^{\\prime } - \\alpha } (k+1 + (k+2) \\alpha , k+1 + (k+2) \\alpha ^{\\prime } ) \\textit {, }$ $v_{3}(k) = \\frac{1}{\\alpha ^{\\prime } - \\alpha } (k+2 + (k+3) \\alpha , k+2 + (k+3) \\alpha ^{\\prime } ) \\textit {, }$ we have that: $v_{1}(k),v_{2}(k)$ and $v_{3}(k)$ are prime vectors of $ \\Gamma _{\\alpha ,\\alpha ^{\\prime }}^{\\perp }$ and $-v_{1}(k) + 2 v_{2}(k) = v_{3}(k)$ , which prevents us from getting the wanted asymptotic independence.", "That is why, to get rid of this problem, we consider $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( (\\frac{\\mathcal {R}(t P + X ,L)}{\\sqrt{V(L^{\\perp },t)}})^{2} \\right)$ instead of considering $\\frac{\\mathcal {R}(t P + X ,L)}{\\sqrt{V(L^{\\perp },t)}}$ .", "By doing so, the Parseval formula gives us that, if we want to prove Theorem $\\ref {chap4:thm1}$ , heuristically we have to study the convergence in distribution of $\\frac{G(L^{\\perp },t)}{V(L^{\\perp },t)} = \\frac{2}{\\pi ^{4} \\text{Covol}(L)^{2} V(L^{\\perp },t) } \\sum _{l \\in J_{2}(L^{\\perp },t)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{(\\sin (2 \\pi k t l_{1}) \\sin (2 \\pi k t l_{2}))^{2} }{k^{4}}$ where $J_{2}(L^{\\perp },t) = \\lbrace l \\in L^{\\perp } \\text{ } | \\text{ } 0 < \\Vert l \\Vert \\leqslant t \\text{ , } l \\text{ prime} \\text{ and } l_{1} > 0 \\rbrace $ and where $t \\in [0, T]$ is distributed according to the probability measure $\\frac{1}{T} \\rho (\\frac{\\cdot }{T}) dt$ and where $T \\rightarrow \\infty $ .", "We have implicitly used Equation $(\\ref {chap4:eq1054})$ and centred the sum of the right-hand side on the prime vectors whose norms are smaller than $t$ .", "We have to truncate the sum because of a convergence issue.", "The final ideas that we use to prove Theorem $\\ref {chap4:thm1}$ are, first, to use the well-known identity $\\sin ^{2}(\\cdot ) = \\frac{1-\\cos (2 \\cdot )}{2}$ .", "Then there are two different types of quantities that must be dealt with.", "The first one is of the type: $\\frac{1}{V(L^{\\perp },t)} \\sum _{\\begin{array}{c} l \\in L^{\\perp }-\\lbrace 0 \\rbrace \\\\ \\Vert l \\Vert \\leqslant t \\end{array}} \\frac{1}{ l_{1}^{2} l_{2}^{2} }$ and we show quickly that such a quantity converges almost surely when $T \\rightarrow \\infty $ .", "The other type of quantity has a term of the form $\\cos (t f(l))$ or a 
product of two $\\cos (t f(l))$ (with $f$ being a function of $l$ ) in the numerator of its terms.", "In that case, we show that the moment of order 2 of the quantities of this type converges to 0 when $T \\rightarrow \\infty $ .", "We use the moment of order 2 because it is quite convenient, since we are dealing with numerators that have a term of the form $\\cos (t f(l))$ .", "Yet, this last part is a computational one.", "Furthermore, we have to underline the fact that these calculations still work for the typical $L$ considered in Theorem $\\ref {chap4:thm1000}$ , not only for an admissible lattice $L$ .", "They are, in that sense, intrinsic.", "The estimate of $V(L,t)$ contained in Theorem $\\ref {chap4:thm1}$ is basically derived from the estimate of the integral $ \\int _{\\begin{array}{c} l \\in \\mathbb {R}^{2} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t \\\\ |\\text{Num}(l)| \\geqslant C\\end{array}} \\frac{1}{(l_{1}l_{2})^{2}} $ where $A > 0$ , $C > 0$ and $t \\rightarrow \\infty $ .", "This concludes the heuristic explanations that we wanted to give about Theorem $\\ref {chap4:thm1}$ .", "The proof of Theorem $\\ref {chap4:thm2}$ is quicker and is basically an application of the central limit theorem with error term (see Theorem $\\ref {chap4:thm4}$ ).", "We have already said a few words about the last part of Theorem $\\ref {chap4:thm1000}$ .", "We will now give some explanations about the estimates of $V$ and $\\tilde{V}$ presented in this theorem.", "The estimates of $\\tilde{V}$ are applications of the ergodic theorem, whereas the estimate of $V$ is deduced from an upper estimate of $\\int _{\\begin{array}{c} l \\in \\mathbb {R}^{2} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t \\\\ |\\text{Num}(l)| \\geqslant C | \\log (\\Vert l \\Vert ) |^{-1-\\alpha }\\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2} $ where $C > 0$ , $A > 0$ and $\\alpha > 0$ with $t \\rightarrow \\infty $ , and from results of $\\cite {sprindzhuk1979metric}$ and of $\\cite {Skriganov}$ (see Theorem $\\ref {chap4:thm12}$ ).", "Finally, concerning Theorem $\\ref {chap4:thm3}$ , the appropriate normalization of the error term $\\mathcal {R}$ is $t$ because, to within a multiplicative factor, it is the perimeter of $t \\text{Rect}(a,a) + X$ .", "Indeed, in this case, we can easily compute $\\mathcal {R}$ .", "The wanted convergence of Theorem $\\ref {chap4:thm3}$ is then deduced from it and from the application of the method of moments.", "$\\textbf {Plan of the paper.", "}$ In Section 3, we prove Theorem $\\ref {chap4:thm2}$ .", "In Section 4, we prove Theorem $\\ref {chap4:thm1}$ .", "The first subsection is dedicated to obtaining the upper and lower estimates on $V(L,t)$ in the case where $L$ is admissible.", "Let us set: $ S(L,X,t) = \\frac{2}{\\pi ^{2} \\textit {covol}(L)} \\sum _{l \\in J_{2}(L^{\\perp },t)} \\frac{1}{l_{1}l_{2}} \\sum _{k=1}^{\\infty } \\frac{\\sin (2 \\pi k t l_{1}) \\sin (2 \\pi k t l_{2}) \\cos (2 \\pi k <l,X> ) }{k^{2}} $ where we recall that $J_{2}(L^{\\perp },t) = \\lbrace l \\in L^{\\perp } \\text{ } | \\text{ } 0 < \\Vert l \\Vert \\leqslant t \\text{ , } l \\text{ prime} \\text{ and } l_{1} > 0 \\rbrace $ .", "The second subsection is dedicated to showing that we can reduce the study of the convergence in distribution of $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( (\\frac{\\mathcal {R}(t P + X ,L)}{\\sqrt{V(L^{\\perp },t)}})^{2} \\right)$ to the study of the convergence in distribution of $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( \\left( \\frac{S(L,X,t)}{\\sqrt{V(L^{\\perp
},t)}} \\right)^{2} \\right)$ when $t$ is distributed according to $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ and when $T \\rightarrow \\infty $ (see Proposition $\\ref {chap4:prop3}$ ).", "The third subsection is dedicated to showing that $ \\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( \\left( \\frac{S(L,X,t)}{\\sqrt{V(L^{\\perp },t)}} \\right)^{2} \\right)$ (see Proposition $\\ref {chap4:prop8}$ ) converges in distribution when $t$ is distributed according to $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ and when $T \\rightarrow \\infty $ .", "The fourth subsection concludes the proof of Theorem $\\ref {chap4:thm1}$ .", "In Section 5, we give the proof of Theorem $\\ref {chap4:thm1000}$ .", "The first subsection is dedicated to the estimates of $\\tilde{V}(L,r)$ .", "The second subsection is dedicated to the estimates of $V(L,t)$ .", "The third subsection concludes the proof of Theorem $\\ref {chap4:thm1000}$ , using, in particular, the third subsection of Section 4.", "In Section 6, we give the proof of Theorem $\\ref {chap4:thm3}$ .", "In the first subsection, we give a simple expression of $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t}$ (and beforehand we observe that the $a$ in the statement of Theorem $\\ref {chap4:thm3}$ can be chosen equal to 1).", "In the second subsection, we reduce the study of the convergence in distribution of $ \\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t}$ , when $T \\rightarrow \\infty $ , to that of a simpler quantity.", "In the third subsection, we apply the method of moments to get the convergence of this simpler quantity.", "In the fourth subsection, we conclude the proof of Theorem $\\ref {chap4:thm3}$ ." ], [ "Proof of Theorem 2", "To prove Theorem $\\ref {chap4:thm2}$ , we need to recall two important theorems.", "The first one is the central limit theorem with error term (see for example $\\cite {beck2010randomness}$ ): Theorem 5 Let $(Z_{i})_{1 \\leqslant i \\leqslant n}$ be a sequence of independent real random variables such that for all $1 \\leqslant i \\leqslant n $ , $\\mathbb {E}(Z_{i}) = 0$ and $Z_{i}$ admits a moment of order 3.", "Let us call: $ T = \\sum _{i=1}^{n} \\mathbb {E}(|Z_{i}|^{3}) \\textit { and } V = \\sum _{i=1}^{n} \\mathbb {E}(Z_{i}^{2}) \\textit {.}", "$ Then for every real $\\lambda $ , one has $| \\mathbb {P}( \\frac{\\sum _{i=1}^{n}Z_{i}}{\\sqrt{V}} \\geqslant \\lambda ) - \\frac{1}{\\sqrt{2 \\pi }} \\int _{\\lambda }^{\\infty } e^{- \\frac{u^{2}}{2}} du| < \\frac{40 T}{V^{\\frac{3}{2}}} \\textit {.", "}$ The second one is a theorem due to Minkowski: Theorem 6 There exists $K > 0 $ such that for all $L \\in {S}_{d}$ , $\\Vert L \\Vert \\leqslant K \\textit {.", "}$ We can now prove Theorem $\\ref {chap4:thm2}$ .", "[Proof of Theorem $\\ref {chap4:thm2}$ ] Let $L$ be an admissible lattice.", "So, according to Proposition $\\ref {chap4:prop1}$ and Equation $\\ref {chap4:ajout}$ , there exists $C > 0$ such that for every $\\delta \\in \\Delta $ , one has $\\Vert \\delta L \\Vert \\geqslant C \\textit {.", "}$ Theorem $\\ref {chap4:thm5}$ gives us that: $\\Vert \\delta L \\Vert \\leqslant K \\textit {.", "}$ Because of Equation ($\\ref {chap4:eq2}$ ) and Equation ($\\ref {chap4:eq3}$ ), one has $| \\Delta _{r} | K^{-2d} \\leqslant \\tilde{V}(L,r) \\leqslant | \\Delta _{r} | C^{-2d} \\textit {.", "}$ Yet, the cardinality of $\\Delta _{r}$ satisfies $|\\Delta _{r}| = n_{r}r^{d-1} + o_{r}(r^{d-1}) \\textit {.", "}$ So we get the first wanted result.", "According to Theorem $\\ref 
{chap4:thm4}$ , one has: $| \\mathbb {P}( \\frac{\\tilde{S}(L,\\omega ,r)}{\\sqrt{ \\tilde{V}(L,r)}} \\geqslant \\lambda ) - \\frac{1}{\\sqrt{2 \\pi }} \\int _{\\lambda }^{\\infty } e^{- \\frac{u^{2}}{2}} du| < \\frac{40 T_{1}(r)}{V_{1}(r)^{\\frac{3}{2}}}$ where $V_{1}(r) = \\sum _{\\delta \\in \\Delta _{r}} \\mathbb {E}((\\frac{\\theta _{\\delta }}{\\Vert \\delta L \\Vert ^{d}})^{2})$ and $T_{1}(r) = \\sum _{\\delta \\in \\Delta _{r}} \\mathbb {E}(|\\frac{\\theta _{\\delta }}{\\Vert \\delta L \\Vert ^{d}}|^{3}) \\textit {.", "}$ Yet, one has: $V_{1}(r) = \\Theta (r^{d-1})$ and $T_{1}(r) = \\Theta (r^{d-1})$ because the $\\theta _{\\delta }$ are independent, symmetric, identically distributed and their common distribution admits a moment of order 3, and because of Equation ($\\ref {chap4:eq2}$ ) and of Equation ($\\ref {chap4:eq3}$ ).", "Thus, one gets that: $| \\mathbb {P}( \\frac{\\tilde{S}(L,\\omega ,r)}{\\sqrt{ \\tilde{V}(L,r)}} \\geqslant \\lambda ) - \\frac{1}{\\sqrt{2 \\pi }} \\int _{\\lambda }^{\\infty } e^{- \\frac{u^{2}}{2}} du| = O(\\frac{1}{r^{\\frac{d-1}{2}}}) \\textit {.", "}$ From this last equation, letting $r \\rightarrow \\infty $ , one gets the wanted result." ], [ "Estimates of $V(L,t)$ when $L$ is admissible", "In this subsection, we are going to prove the following proposition, which is the first assertion of Theorem $\\ref {chap4:thm1}$ : Proposition 3 Let $L$ be an admissible lattice of $\\mathbb {R}^{2}$ .", "One has that there exists $C > 0$ (large enough) such that for all $t$ large enough: $\\frac{1}{C} \\log (t) \\leqslant V(L,t) \\leqslant C \\log (t) \\textit {.}", "$ The main idea is that $V(L,t)$ can be compared to $ \\int _{\\begin{array}{c} l \\in \\mathbb {R}^{2} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t+B \\\\ |\\text{Num}(l)| \\geqslant C\\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} $ when $L$ is admissible and when $C,A$ and $B$ are well-chosen positive constants.", "And this last integral behaves like $\\log (t)$ up to a multiplicative constant.", "This last fact is the object of the following lemma.", "In this lemma, we use the notation $f(t) \\sim _{t \\rightarrow \\infty } g(t) $ to express the fact that, for $t$ large enough, $g(t) \\ne 0$ and that $\\frac{f(t)}{g(t)} \\rightarrow 1$ when $t \\rightarrow \\infty $ .", "Lemma 1 For all $C > 0$ , for all $A > 0$ , for all $t$ large enough, one has: $\\int _{\\begin{array}{c} l \\in \\mathbb {R}^{2} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t \\\\ |\\text{Num}(l)| \\geqslant C\\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2} \\sim _{t \\rightarrow \\infty } \\frac{8 \\log (t)}{ C } \\textit {.", "}$ Let us set: $J(t) = \\int _{\\begin{array}{c} l \\in \\mathbb {R}^{2} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t \\\\ |\\text{Num}(l)| \\geqslant C\\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2}$ and let us note in passing that it is enough to prove the result for $A$ large enough.", "By passing to polar coordinates $(r,\\theta )$ and by using the symmetries, one has: $J(t) = 8 \\int _{\\begin{array}{c}A \\leqslant r \\leqslant t \\\\ 0 \\leqslant \\theta \\leqslant \\frac{\\pi }{4} \\\\ \\sin (2 \\theta ) \\geqslant \\frac{2 C }{r^{2}} \\end{array}} \\frac{1}{r^{3}} \\frac{4}{\\sin (2 \\theta )^{2}} dr d\\theta \\textit {.", "}$ By making the changes of variable $\\theta ^{\\prime } = 2 \\theta $ and, then, $u = \\tan (\\theta ^{\\prime })$ , one gets from Equation $(\\ref {chap4:eq11})$ and by 
taking $A$ large enough: $J(t) = 16 \\int _{A \\leqslant r \\leqslant t} \\frac{1}{r^{3}} (-1 + \\frac{r^{2}}{2 C} \\sqrt{1 - (\\frac{2 C}{r^{2}})^{2}}) dr \\textit {.", "}$ From Equation ($\\ref {chap4:eq12}$ ) and because of the fact that $ \\frac{1}{r^{3}} (-1 + \\frac{r^{2}}{2 C} \\sqrt{1 - (\\frac{2 C}{r^{2}})^{2}}) = \\frac{1}{2 r C } + O(\\frac{1}{r^{2}})$ (when $r \\rightarrow \\infty $ ), we finally get that: $J(t) \\sim _{t \\rightarrow \\infty } \\frac{8}{ C } \\log (t) \\textit {.", "}$ So we get the wanted result.", "We can now prove Proposition $\\ref {chap4:prop2}$ : [Proof of Proposition $\\ref {chap4:prop2}$ ] First, as $L$ is admissible, there exists $C >0$ such that for all $l \\in L-\\lbrace 0 \\rbrace $ , $|l_{1}l_{2}| \\geqslant C \\textit {.}", "$ So, according to Definition $\\ref {chap4:def10}$ and by integration by parts, there exists $A > 0$ , $B > 0$ and $D > 0$ such that: $\\frac{1}{D} \\int _{\\begin{array}{c} l \\in \\mathbb {R}^{2} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t+B \\\\ |\\text{Num}(l)| \\geqslant C\\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2} \\leqslant V(L,t) \\leqslant D \\int _{A \\leqslant \\Vert l \\Vert \\leqslant t+B } \\min (\\frac{1}{C^{2}}, \\frac{1}{(l_{1}l_{2})^{2}}) dl_{1}dl_{2} \\textit {.", "}$ Lemma $\\ref {chap4:lemme2}$ gives us in particular that: $\\int _{\\begin{array}{c} l \\in \\mathbb {R}^{2} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t+B \\\\ |\\text{Num}(l)| \\geqslant C\\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2} = \\Theta (\\log (t)) \\textit {.", "}$ and $\\int _{A \\leqslant \\Vert l \\Vert \\leqslant t+B } \\min (\\frac{1}{C^{2}}, \\frac{1}{(l_{1}l_{2})^{2}}) dl_{1}dl_{2} = \\Theta (\\log (t)) + \\frac{1}{C^{2}} \\int _{\\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant t+B \\\\ |Num(l)| \\leqslant C\\end{array}} dl_{1} dl_{2} \\textit {.", "}$ Yet, one has that: $\\int _{\\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant t+B \\\\ |Num(l)| \\leqslant C\\end{array}} dl_{1} dl_{2} = \\Theta (\\log (t)) \\textit {.", "}$ Equation ($\\ref {chap4:eq1057}$ ), Equation ($\\ref {chap4:eq17}$ ), Equation ($\\ref {chap4:eq1058}$ ) and Equation ($\\ref {chap4:eq1059}$ ) then give us the wanted result.", "$\\textit {Remark.", "}$ We think that in dimension $d \\geqslant 3$ , we must have that for all $C > 0$ , for all $A > 0$ : $\\int _{\\begin{array}{c} l \\in \\mathbb {R}^{d} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t \\\\ |\\text{Num}(l)| \\geqslant C\\end{array}} \\frac{1}{l_{1}^{2} \\cdots l_{d}^{2}} dl_{1} \\cdots dl_{d} = \\Theta (\\log (t)^{d-1}) \\textit {.", "}$ In fact, the upper part of Equation ($\\ref {chap4:eq14}$ ) can be proven as follows: By symmetry, we are brought back to the case where $l_{i} > 0$ for all $i$ .", "We set $ \\phi : (l_{1}, \\cdots , l_{d}) \\longmapsto (l_{1}, l_{1}l_{2}, \\cdots , l_{1} \\cdots l_{d}) \\textit {.", "}$ Then $\\phi $ is a $C^{\\infty }$ -diffeomorphism from $(\\mathbb {R}_{+}-\\lbrace 0 \\rbrace )^{d}$ to itself and the Jacobian determinant evaluated at $(l_{1}, \\cdots , l_{d})$ , denoted by $\\text{Jac}(l_{1}, \\cdots , l_{d})$ , satisfies: $\\text{Jac}(l_{1}, \\cdots , l_{d}) = \\prod _{i=1}^{d-1} \\phi _{i}(l_{1},\\cdots ,l_{d}) \\textit {.", "}$ Furthermore, because $l$ belongs to the domain of the integral, we see that for all $i \\in \\lbrace 1, \\cdots , d-1 \\rbrace $ , $ b_{i} = \\frac{1}{C t^{(1+\\epsilon _{1})(d-i)}} \\leqslant \\phi _{i}(l) \\leqslant t^{(1+\\epsilon _{1})i}=h_{i} \\textit { and}$ $\\phi 
_{d}(l) \\geqslant C =b_{d} \\textit {.}", "$ Hence, from ($\\ref {chap4:eq153}$ ), we get that: $\\int _{\\begin{array}{c}\\forall 1 \\leqslant i \\leqslant d \\textit {, } l_{i} > 0 \\ \\\\ A \\leqslant \\Vert l \\Vert _{2} \\leqslant t \\\\ \\text{Num}(l) \\geqslant C\\end{array}} \\frac{1}{l_{1}^{2} \\cdots l_{d}^{2}} dl_{1} \\cdots dl_{d} \\leqslant \\int _{ \\begin{array}{c} \\forall 1 \\leqslant i \\leqslant d-1 \\textit {, } b_{i} \\leqslant u_{i} \\leqslant h_{i} \\\\ b_{d} \\leqslant u_{d} \\end{array}} \\frac{1}{u_{d}^{2} \\prod _{i=1}^{d-1} u_{i}} du \\textit {.", "}$ The right hand side can be easily calculated and gives the wanted result.", "To prove the lower part of the Equation $(\\ref {chap4:eq14})$ , we think that one way is to use hyperspherical coordinates.", "We are now going to tackle the proof of the second part of Theorem $\\ref {chap4:thm1}$ .", "It should be noted that, by density, it is enough to treat the case where the support of $\\rho $ is included in $[\\alpha ,1]$ with $0 < \\alpha < 1$ ." ], [ "Smoothing and reduction to the study of a Fourier series", "In this subsection, we are going to prove the following proposition: Proposition 4 Let $L$ be an admissible lattice of $\\mathbb {R}^{2}$ .", "Then, for all $T$ large enough, we have that: $ \\mathbb {E}_{X \\in \\mathbb {R}^{2}/L} \\left( (\\frac{\\mathcal {R}( t \\mathbb {D}^{2}+X,L)}{\\sqrt{V(L^{\\perp },t)})}-\\frac{S(L^{\\perp },X,t)}{\\sqrt{V(L^{\\perp },t)}})^{2} \\right) \\rightarrow 0$ where the convergence towards 0 is uniform in $\\alpha T \\leqslant t \\leqslant T$ when $T \\rightarrow \\infty $ , where $S(L,X,t) = \\frac{2}{\\pi ^{2} \\textit {covol}(L)} \\sum _{l \\in J_{2}(L,t)} \\frac{1}{l_{1}l_{2}} \\sum _{k=1}^{\\infty } \\frac{\\sin (2 \\pi k t l_{1}) \\sin (2 \\pi k t l_{2}) \\cos (2 \\pi k <l,X> ) }{k^{2}}$ and where $J_{2}(L,t) = \\lbrace l \\in L \\text{ } | \\text{ } 0 < \\Vert l \\Vert \\leqslant t \\text{, } l \\text{ prime} \\text{ and } l_{1} > 0 \\rbrace \\textit {.", "}$ Proposition $\\ref {chap4:prop3}$ basically tells us that the asymptotical study of $ \\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( (\\frac{\\mathcal {R}(t P + X ,L)}{\\sqrt{V(L^{\\perp },t)}})^{2} \\right)$ , when $T \\rightarrow \\infty $ , can be reduced to the study of $ \\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( \\frac{S(L^{\\perp },X,t,T)}{\\sqrt{V(L^{\\perp },t)}} \\right)$ (it is in fact stronger).", "It is suggested by the Poisson formula.", "Yet, we can not use it directly to prove this fact because the indicator function $\\mathbf {1}_{t P +X}$ , with $P = \\textit {Rect}(a,b)$ is not regular enough.", "So, the first thing we are going to do to prove Proposition $\\ref {chap4:prop3}$ is to $\\textit {smooth}$ the studied problem.", "It is the object of the next subsubsection.", "The study of $\\frac{\\mathcal {R}(t P + X ,L)}{\\sqrt{V(L^{\\perp },t)}}$ is going to be reduced to the study of two Fourier series.", "Then, we will do some calculations to simplify this study and finally the study of $\\frac{\\mathcal {R}(t P + X ,L)}{\\sqrt{V(L^{\\perp },t)}}$ is going to be reduced to the study of $\\frac{S(L^{\\perp },X,t)}{\\sqrt{V(L^{\\perp },t)}}$ ." 
], [ "Reduction to the study of two Fourier series", "This subsubsection is dedicated to the $\\textit {smoothing}$ of the problem.", "We are going to do it like in $\\cite {Skriganov}$ .", "Let us take $\\omega $ a $C^{\\infty }$ -function, of compact support included in $B_{f}(0,1)$ , such that $\\int _{\\mathbb {R}^{2}} \\omega (x) dx = 1 \\textit {, }$ $\\omega (x) \\geqslant 0$ and such that $\\omega $ is spherically symmetric (so will be its Fourier transform).", "Let us recall the following definition: Definition 5 Let $\\mathbf {O} \\subset \\mathbb {R}^{2}$ be a connected compact set of $\\mathbb {R}^{2}$ with a boundary that is regularly piecewise.", "Let $1 > \\tau > 0$ .", "A couple of compact region $(\\mathbf {O}_{\\tau }^{+},\\mathbf {O}_{\\tau }^{-})$ is called a $\\tau $ -co-approximation if $ \\mathbf {O}_{\\tau }^{-} \\subset \\mathbf {O} \\subset \\mathbf {O}_{\\tau }^{+} $ and if the points at the frontiers $\\partial \\mathbf {O}_{\\tau }^{\\pm }$ are at least distant from $\\tau $ of the frontier $\\partial \\mathbf {O}$ .", "For example, we can think that $\\mathbf {O} = P$ and, in that case, a $\\tau $ -co-approximation is given by $(1 \\pm \\beta \\tau ) P$ , with $\\beta > 0$ well-chosen.", "Let us define: $\\mathfrak {R}_{\\tau }^{\\pm }(\\mathbf {O},X) = \\frac{1}{covol(L)} \\sum _{l \\in L^{\\perp }-\\lbrace 0\\rbrace } \\hat{\\mathbf {1}}_{\\mathbf {O}_{\\tau }^{\\pm }}(l) \\hat{\\omega }(\\tau l) e^{ 2 i \\pi <l,X> } \\textit {.", "}$ For $\\mathbf {O} = t P = t \\text{Rect}(a,b)$ , there exists $\\beta > 0$ (independent from $t$ ) such that $\\mathbf {O}_{\\tau }^{\\pm } = (t \\pm \\beta \\tau ) P$ is a $\\tau $ -co-approximation.", "Let us take: $\\tau = \\tau (t) = \\frac{\\log (t)^{\\gamma }}{t}$ where $ \\frac{1}{2} > \\gamma > 0$ .", "The main objective of this subsubsection is to prove the following proposition: Proposition 5 For this choice of $\\tau $ and of $\\tau $ -co-approximation $(t \\pm \\beta \\tau )P$ , there exists $B > 0$ (independent from X) such that for all $T$ large enough $\\frac{\\mathfrak {R}_{\\tau }^{-}(tP,X)}{\\sqrt{V(L^{\\perp },t)}} - B \\log (T)^{\\gamma - \\frac{1}{2}} \\leqslant \\frac{\\mathcal {R}(tP+X,L)}{\\sqrt{V(L^{\\perp },t)}} \\leqslant \\frac{\\mathfrak {R}_{\\tau }^{+}(tP,X)}{\\sqrt{V(L^{\\perp },t)}} + B\\log (T)^{\\gamma - \\frac{1}{2}} \\textit {.", "}$ Basically, it says that one can brought back the study of $\\frac{\\mathcal {R}(tP+X,L)}{\\sqrt{V(L^{\\perp },t)}}$ to the study of the two quantities $\\frac{\\mathfrak {R}_{\\tau }^{\\pm }(tP,X)}{\\sqrt{V(L^{\\perp },t)}}$ , which represent the $\\textit {smoothed}$ Fourier series of $\\frac{\\mathcal {R}(tP+X,L)}{\\sqrt{V(L^{\\perp },t)}}$ .", "To prove this proposition, the following lemma is going to be useful: Lemma 2 Let us take $0 < \\tau < 1$ .", "Then, one has: $\\mathfrak {R}_{\\tau }^{-}(\\mathbf {O},X) + \\frac{\\text{Area}(\\mathbf {O}_{\\tau }^{-}) - \\text{Area}(\\mathbf {O})}{covol(L)} \\leqslant \\mathcal {R}(\\mathbf {O}+X,L) \\leqslant \\mathfrak {R}_{\\tau }^{+}(\\mathbf {O},X) + \\frac{\\text{Area}(\\mathbf {O}_{\\tau }^{+}) - \\text{Area}(\\mathbf {O})}{covol(L)} \\textit {.", "}$ Let us set $\\omega _{\\tau }(x) = \\tau ^{-2} \\omega (\\tau ^{-1} x)$ for all $x \\in \\mathbb {R}^{2}$ .", "Then, one has: $\\hat{\\omega }_{\\tau }(\\xi ) = \\hat{\\omega }(\\tau \\xi )$ for all $\\xi \\in \\mathbb {R}^{2}$ .", "Let us consider the following convolution products: $h^{\\pm }(x) = (\\omega _{\\tau } * \\mathbf {1}_{\\mathbf {O}_{\\tau }^{\\pm 
}})(x) = \\int _{\\mathbb {R}^{2}} \\omega _{\\tau }(x-y) \\mathbf {1}_{\\mathbf {O}_{\\tau }^{\\pm }}(y) dy \\textit {.", "}$ The functions $h^{\\pm }$ are $C^{\\infty }$ over $\\mathbb {R}^{2}$ and have a compact support.", "By using Definition $\\ref {chap4:def4}$ , one has, for all $x \\in \\mathbb {R}^{2}$ : $h^{-}(x) \\leqslant \\mathbf {1}_{\\mathbf {O}}(x) \\leqslant h^{+}(x) \\textit {.", "}$ By replacing $x \\in \\mathbb {R}^{2}$ by $l-x$ with $l \\in L$ and by summing over $l \\in L$ , one gets with Equation ($\\ref {chap4:eq26}$ ): $\\sum _{l \\in L} h^{-}(l-X) \\leqslant N( \\mathbf {O} + X, L) \\leqslant \\sum _{l \\in L} h^{+}(l-X) \\textit {.", "}$ By using the Poisson formula and the fact that Fourier transform a convolution product into an usual product, one gets the wanted result thanks to Equation ($\\ref {chap4:eq24}$ ).", "We can now tackle the proof of Proposition $\\ref {chap4:prop5}$ : [Proof of Proposition $\\ref {chap4:prop5}$ ] According to Lemma $\\ref {chap4:lemme3}$ , one has that, for every $t>0$ and $X$ , $\\mathfrak {R}_{\\tau }^{-}(tP,X) + \\frac{\\text{Area}((t - \\beta \\tau ) P) - \\text{Area}(t P)}{covol(L)} \\leqslant \\mathcal {R}(tP+X,L) \\leqslant \\mathfrak {R}_{\\tau }^{+}(tP,X) + \\frac{\\text{Area}((t + \\beta \\tau ) P) - \\text{Area}(t P)}{covol(L)} \\textit {.", "}$ Yet, one has the following two equations: $\\text{Area}((t - \\beta \\tau ) P) - \\text{Area}(t P) = \\text{Area}(P)(- 2 \\beta \\tau t + \\beta ^{2} \\tau ^{2})$ and $\\text{Area}((t + \\beta \\tau ) P) - \\text{Area}(t P) = \\text{Area}(P)( 2 \\beta \\tau t + \\beta ^{2} \\tau ^{2}) \\textit {.", "}$ Yet here $\\tau = \\frac{\\log (t)^{\\gamma }}{t}$ and so one gets that for every $T$ large enough, for every $ T \\geqslant t \\geqslant \\alpha T$ , $| \\frac{\\pm 2 \\beta \\tau t + \\beta ^{2} \\tau ^{2}}{\\sqrt{V(L^{\\perp },t)}} | \\leqslant M \\log (T)^{\\gamma - \\frac{1}{2}}$ where $M> 0$ .", "To obtain this last equation we have used the fact that $\\sqrt{V(L^{\\perp },t)} = \\Theta (\\sqrt{\\log (t)})$ that is given by proposition $\\ref {chap4:prop2}$ .", "By using Equation $\\ref {chap4:eq29}$ , Equation $\\ref {chap4:eq30}$ , Equation $\\ref {chap4:eq31}$ and Equation $\\ref {chap4:eq32}$ , one gets the wanted result.", "The study is now reduced to the study, when $T \\rightarrow \\infty $ , of the two quantities $\\frac{\\mathfrak {R}_{\\tau }^{\\pm }(tP,X)}{\\sqrt{V(L^{\\perp },t)}}$ .", "In the next section, we are going to reduce the study of $\\frac{\\mathfrak {R}_{\\tau }^{+}(tP,X)}{\\sqrt{V(L^{\\perp },t)}}$ to the study of the Fourier series $\\frac{S(L^{\\perp },X,t,T)}{\\sqrt{V(L^{\\perp },t)}}$ and the results are also going to hold for $\\frac{\\mathfrak {R}_{\\tau }^{-}(tP,X)}{\\sqrt{V(L^{\\perp },t)}}$ ." 
], [ "Simplification of the Fourier series $\\frac{\\mathfrak {R}_{\\tau }^{+}(tP,X)}{\\sqrt{V(L^{\\perp },t)}}$", "The main objective of this subsubsection is to prove the following proposition: Proposition 6 Let us suppose that $L$ is admissible.", "Then, one has, for all $\\gamma > 0$ , $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L}\\left( (\\frac{\\mathfrak {R}_{\\tau }^{+}(tP,X)-S(L^{\\perp },t,X)}{\\sqrt{V(t,L^{\\perp })}})^{2} \\right) \\rightarrow 0$ where $\\tau = \\frac{\\log (t)^{\\gamma }}{t}$ and where the convergence towards 0 is uniform in $t$ such that $\\alpha T \\leqslant t \\leqslant T$ and when $T \\rightarrow \\infty $ .", "To do this, we will need several intermediate lemmas.", "We will use tools from Fourier Analysis, integration calculus and Geometry of numbers.", "$\\emph {In the rest of this section, we are going to suppose that L is admissible.", "}$ $\\emph {Under this hypothesis, the dual lattice of L is also going to be admissible.", "}$" ], [ "Reduction of $\\mathfrak {R}_{\\tau }^{+}$ to a finite sum.", "Let us introduce the following sum: $S_{1}(L^{\\perp },t,X) = \\frac{1}{\\pi ^{2} \\textit {covol}(L)} \\sum _{\\begin{array}{c}l \\in L^{\\perp } \\\\ 0 < \\Vert l \\Vert \\leqslant t \\end{array}} \\frac{\\hat{\\omega }(\\tau l) \\sin (2 \\pi l_{1} t^{+} ) \\sin (2 \\pi l_{2} t^{+})e^{2 i \\pi <l,X> }}{l_{1} l_{2}}$ where $t^{+} = t + \\beta \\tau $ .", "Then the main objective of this paragraph is to prove the following lemma: Lemma 3 One has for all $T$ large enough, $| \\frac{|\\mathfrak {R}_{\\tau }^{+}(tP,X)-S_{1}(L^{\\perp },t,X)|}{\\sqrt{V(t,L^{\\perp })}} | = O(\\log (T)^{\\gamma - \\frac{1}{2}}) \\textit {.", "}$ where $O$ is uniform in $X \\in \\mathbb {R}^{2}$ and in $t$ when $\\alpha T \\leqslant t \\leqslant T$ .", "Recall that $\\gamma $ is a fixed parameter such that $ 0 < \\gamma < \\frac{1}{2}$ .", "To prove this lemma, we are going to need a lemma that gives an upper estimate of certain integrals: Lemma 4 Let us set for $N \\geqslant 2 $ and for $C > 0$ , $D_{N,C} = \\lbrace x=(x_{1},x_{2}) \\in (\\mathbb {R}^{+})^{2} \\text{ } | \\text{ } \\Vert x \\Vert \\geqslant N \\text{, } |\\text{Num}(x)| \\geqslant C \\rbrace \\textit {.", "}$ Then one has, for all $g: \\mathbb {R}^{+} \\longrightarrow \\mathbb {R}^{+} $ measurable, there exists $M > 0$ such that for every $N$ large enough, $\\int _{x \\in D_{N,C}} \\frac{g(\\Vert x \\Vert )}{|x_{1}x_{2}|}dx_{1}dx_{2} \\leqslant M \\int _{r=N}^{\\infty } g(r) \\frac{\\log (r)}{r} dr \\textit {.}", "$ Let us call: $J(N,C) = \\int _{x \\in D_{N,C}} \\frac{g(\\Vert x \\Vert )}{|x_{1}x_{2}|}dx_{1}dx_{2} \\textit {.", "}$ By passing to polar coordinates, we get that for every $N$ large enough: $J(N,C) \\leqslant K_{1} \\int _{r \\geqslant N} \\frac{g(r)}{r} (\\int _{\\arcsin (2 \\frac{C}{r^{2}})}^{\\frac{\\pi }{2}} \\frac{d \\theta }{\\sin (\\theta )}) dr$ where $K_{1} > 0$ .", "Yet one calculates that: $(\\int _{\\arcsin (2 \\frac{C}{r^{2}})}^{\\frac{\\pi }{2}} \\frac{d \\theta }{\\sin (\\theta )}) = \\frac{1}{2}\\log \\big ( \\frac{(1+\\sqrt{1-\\frac{2 C}{r^{2}}})^{2}}{4 \\frac{C}{r^{2}}} \\big ) \\textit {.", "}$ With the Inequality ($\\ref {chap4:eq37}$ ), we get that: $J(N,C) \\leqslant K_{2} \\int _{r \\geqslant N} \\frac{g(r) \\log (r) }{r} dr$ where $K_{2} > 0$ and so we get the wanted result.", "We can now prove Lemma $\\ref {chap4:lemme4}$ : [Proof of Lemma $\\ref {chap4:lemme4}$ ] $L$ is supposed to be admissible and so $L^{\\perp }$ must also be admissible.", "So there exists $C > 0$ such that 
for all $x \\in L^{\\perp }-\\lbrace 0 \\rbrace $ , $| \\text{Num}(x) | \\geqslant C \\textit {.", "}$ Furthermore, $\\omega $ is supposed to be regular and of compact support.", "In consequence, $\\hat{\\omega }$ belongs to the Schwartz space and so for all $A > 2$ , there exists $M > 0$ such that for all $x \\in \\mathbb {R}^{2}$ : $| \\tilde{\\omega }(x) | \\leqslant \\frac{M}{1 + \\Vert x \\Vert ^{A}} \\textit {.", "}$ Proposition $\\ref {chap4:prop4}$ and the fact that $\\mathbf {O}_{\\tau }^{+} = (t+ \\beta \\tau ) P$ with $P = \\text{Rect}(a,b)$ give us that: $|\\mathfrak {R}_{\\tau }^{+}(tP,X)-S_{1}(L^{\\perp },t,X)| \\leqslant K_{1} \\sum _{\\begin{array}{c}l \\in L^{\\perp } \\\\ \\Vert l \\Vert > t\\end{array}} \\frac{1}{| l_{1} l_{2} | (1 + \\Vert \\tau l \\Vert ^{A})}$ where $K_{1} > 0$ .", "We have also used implicitly Equation $(\\ref {chap4:eq23})$ .", "Yet, one has also: $\\sum _{\\begin{array}{c}l \\in L^{\\perp } \\\\ \\Vert l \\Vert > t\\end{array}} \\frac{1}{| l_{1} l_{2} | (1 + \\Vert \\tau l \\Vert ^{A})} \\leqslant K_{2} \\left( \\int _{x \\in D_{t,C}} \\frac{1}{|x_{1}x_{2}|(1 + \\Vert \\tau x \\Vert ^{A})}dx_{1}dx_{2} + \\int _{\\begin{array}{c} \\Vert x \\Vert > t \\\\ |Num(x)| < C \\end{array}} \\frac{1}{C(1+\\Vert \\tau x \\Vert ^{A})} dx \\right)$ where $K_{2} > 0$ .", "Lemma $\\ref {chap4:lemme5}$ and a quick calculation give us then that for all $T$ large enough (we have to keep in mind that $\\alpha T \\leqslant t \\leqslant T$ ): $\\sum _{\\begin{array}{c}l \\in L^{\\perp } \\\\ \\Vert l \\Vert \\leqslant t\\end{array}} \\frac{1}{| l_{1} l_{2} | (1 + \\Vert \\tau l \\Vert ^{A})} \\leqslant K_{3} \\left( \\int _{r=t}^{\\infty } \\frac{\\log (r)}{r (1 + (\\tau r)^{A})} dr + \\int _{r=t}^{\\infty } \\frac{1}{r^{2}(1 + (\\tau r)^{A})} dr \\right)$ where $K_{3} > 0$ .", "However, one has also that: $\\int _{r=t}^{\\infty } \\frac{\\log (r)}{r (1 + (\\tau r)^{A})} dr \\leqslant \\frac{K_{4} \\log (t)}{ \\tau ^{A} t^{A}}$ where $K_{4} > 0$ (and depends on $A$ ).", "Yet one has that: $\\tau = \\frac{\\log (t)^{\\gamma }}{t}$ with $ \\frac{1}{2}> \\gamma > 0$ .", "By using Equation ($\\ref {chap4:eq42}$ ), Equation ($\\ref {chap4:eq44}$ ) and Equation ($\\ref {chap4:eq45}$ ) and by using the fact that $V(L^{\\perp },t) = \\Theta (\\log (t))$ , one gets that: $\\frac{|\\mathfrak {R}_{\\tau }^{+}(tP,X)-S_{1}(L^{\\perp },t,X)|}{\\sqrt{V(t,L^{\\perp })}} \\leqslant K_{5} \\left( \\frac{\\log (t)^{\\frac{1}{2}}}{\\log (t)^{ A \\gamma }} + \\frac{1}{\\sqrt{\\log (t)}} \\right)$ where $K_{5} > 0$ and depends on $A$ .", "Then, we can take $A$ large enough so one has that: $\\frac{\\log (t)^{\\frac{1}{2}}}{\\log (t)^{ A \\gamma }} \\leqslant \\frac{K_{6}}{\\log (t)}$ with $K_{6} > 0$ .", "By using Equation ($\\ref {chap4:eq46}$ ) and Equation ($\\ref {chap4:eq47}$ ), one has that: $\\frac{|\\mathfrak {R}_{\\tau }^{+}(tP,X)-S_{1}(L^{\\perp },t,X)|}{\\sqrt{V(t,L^{\\perp })}} \\leqslant \\frac{K_{7}}{\\sqrt{\\log (T)}}$ with $K_{7} > 0$ and for $T$ large enough.", "Thus the wanted result.", "We are now brought back to the study of $S_{1}(L^{\\perp },t,X)$ and now we are going, mainly, to \"center\" it on the prime vectors of $L^{\\perp }$ ." 
], [ "Centring of $S_{1}(L^{\\perp },t,X)$ over prime vectors.", "Before stating the main lemma of this paragraph, let us make a small remark.", "Because $L^{\\perp }$ is admissible and because $\\tilde{\\omega }$ is spherically symmetric, by parity, one has that: $S_{1}(L^{\\perp },t,X) = \\frac{2}{\\pi ^{2} \\textit {covol}(L)} \\sum _{l \\in J_{1}(L^{\\perp },t) } \\frac{\\tilde{\\omega }(\\tau l) \\sin (2 \\pi l_{1} t^{+} ) \\sin (2 \\pi l_{2} t^{+}) \\cos (2 \\pi <l,X>) }{l_{1} l_{2}}$ where $J_{1}(L^{\\perp },t) = \\lbrace l \\in L^{\\perp } \\text{ | } 0 < \\Vert l \\Vert \\leqslant t \\text{ and } l_{1} > 0 \\rbrace \\textit {.", "}$ Let us define: $S_{2}(L^{\\perp },t,X) = \\frac{2}{\\pi ^{2} \\textit {covol}(L)} \\sum _{l \\in J_{2}(L^{\\perp },t) } \\frac{Z_{l}(X,t)}{l_{1} l_{2}}$ where $Z_{l}(X,t) = \\sum _{k=1}^{\\infty } \\frac{\\tilde{\\omega }(k \\tau l) \\sin (2 k \\pi l_{1} t^{+} ) \\sin (2 k \\pi l_{2} t^{+}) \\cos (2 k \\pi <l,X>)}{k^{2}}$ and we recall that $J_{2}(L^{\\perp },t) = \\lbrace l \\in L^{\\perp } \\text{ | } 0 < \\Vert l \\Vert \\leqslant t \\text{ , } l \\text{ prime} \\text{ and } l_{1} > 0 \\rbrace \\textit {.", "}$ Then the main statement of this subsection is the following lemma: Lemma 5 For every $T$ large enough, one has: $|\\frac{|S_{2}(L^{\\perp },t,X)-S_{1}(L^{\\perp },t,X)|}{\\sqrt{V(L^{\\perp },t)}}| = O(\\log (T)^{\\gamma - \\frac{1}{2}})$ with $O$ being uniform in $X \\in \\mathbb {R}^{2}$ and in $t$ such that $ \\alpha T \\leqslant t \\leqslant T$ .", "It basically says that the essential information of $S_{1}$ is contained in its prime terms.", "One has: $|S_{2}(L^{\\perp },t,X)-S_{1}(L^{\\perp },t,X)| \\leqslant K_{1} \\sum _{ \\begin{array}{c} l \\in L^{\\perp } \\\\ \\Vert l \\Vert \\geqslant t \\end{array}} \\frac{| \\tilde{\\omega }(\\tau l) |}{| l_{1} l_{2} |}$ where $K_{1} > 0$ .", "So, we can apply the same argument as before in the proof of Lemma $\\ref {chap4:lemme4}$ and one gets that: $|\\frac{|S_{2}(L^{\\perp },t,X)-S_{1}(L^{\\perp },t,X)|}{\\sqrt{V(L^{\\perp },t)}}| = O(\\log (T)^{\\gamma - \\frac{1}{2}}) \\textit {.", "}$ So, the study of $S_{1}$ , when $T \\rightarrow \\infty $ , is now reduced to the study of $S_{2}$ , when $T \\rightarrow \\infty $ .", "In the next paragraph, we are going to simplify the sum $S_{2}$ ." 
], [ "Replacing $\\tilde{\\omega }$ by 1 in {{formula:71b85d12-02b9-4948-967f-f150448a46cf}} .", "Let us define: $S_{3}(L^{\\perp },t,X) = \\frac{2}{\\pi ^{2} \\textit {covol}(L)} \\sum _{l \\in J_{2}(L^{\\perp },t) } \\frac{\\tilde{Z}_{l}(X,t)}{l_{1} l_{2}}$ where $\\tilde{Z}_{l}(X,t) = \\sum _{k=1}^{\\infty } \\frac{\\sin (2 k \\pi l_{1} t^{+} ) \\sin (2 k \\pi l_{2} t^{+}) \\cos (2 k \\pi <l,X>)}{k^{2}}$ (we have replaced $\\tilde{\\omega }$ by its value at 0, which is 1 because $\\int _{\\mathbb {R}^{2}} \\omega = 1$ ).", "The main statement of this subsection is the following lemma: Lemma 6 Let us set: $\\Delta (L^{\\perp },t) = \\int _{X \\in \\mathbb {R}^{2}/L^{\\perp }} (\\frac{|S_{2}(L^{\\perp },t,X)-S_{3}(L^{\\perp },t,X)|}{\\sqrt{V(L^{\\perp },t)}})^{2} d\\tilde{\\lambda }_{2}(X) \\textit {.", "}$ Then $\\Delta (L^{\\perp },t)$ goes to 0 uniformly in $t$ that are such that $ \\alpha T \\leqslant t \\leqslant T$ with $T \\rightarrow \\infty $ .", "The Parseval formula applies here and gives us that: $\\Delta (L^{\\perp },t) \\leqslant \\frac{K_{1}}{\\log (t)} \\sum _{l \\in J_{1}(L^{\\perp },t)} (\\frac{\\tilde{\\omega }(\\tau l) - 1}{l_{1}l_{2}})^{2}$ where $K_{1} > 0$ and we have used the fact that $V(L^{\\perp },t) = \\Theta (\\log (t))$ .", "Let us take $\\tilde{\\gamma }$ such that $ \\gamma < \\tilde{\\gamma } < \\frac{1}{2}$ and let us set $t_{1} = \\frac{t}{\\log (t)^{\\tilde{\\gamma }}}$ .", "Then, with this notation, one has, thanks to Equation $(\\ref {chap4:eq60})$ : $\\Delta (L^{\\perp },t) \\leqslant K_{2}( \\Delta _{1} + \\Delta _{2})$ where $K_{2} > 0$ , $\\Delta _{1} = \\frac{1}{\\log (t)} \\sum _{l \\in J_{1}(L^{\\perp },t_{1})} (\\frac{\\tau \\Vert l \\Vert }{l_{1}l_{2}})^{2} \\textit { and}$ $\\Delta _{2} = \\frac{1}{\\log (t)} \\sum _{l \\in J_{1}(L^{\\perp },t) - J_{1}(L^{\\perp },t_{1})} \\frac{1}{l_{1}^{2} l_{2}^{2}} \\textit {.", "}$ Then Lemma $\\ref {chap4:lemme2}$ , the fact $\\tau = \\frac{\\log (t)^{\\gamma }}{t}$ and the definition of $t_{1}$ give us that for $T$ large enough (and for $t$ such that $ \\alpha T \\leqslant t \\leqslant T $ ): $\\Delta _{1} \\leqslant K_{3} \\log (t)^{2(\\gamma - \\tilde{\\gamma })}$ with $K_{3}> 0$ and $\\Delta _{2} \\leqslant \\frac{K_{4}}{\\log (t)} \\int _{ t \\log (t)^{- \\tilde{\\gamma }}}^{t} \\frac{1}{r}dr = O( \\frac{1}{\\sqrt{\\log (t)}}) \\textit {.", "}$ with $K_{4} > 0$ .", "So, one gets finally that, when $T \\rightarrow \\infty $ , $\\Delta _{1} \\rightarrow 0 \\textit { and } \\Delta _{2} \\rightarrow 0 $ uniformly in $t$ for $\\alpha T \\leqslant t \\leqslant T$ .", "So, now, we are brought back to the study of $S_{3}$ when $T \\rightarrow \\infty $ .", "In the next paragraph, we are going to simply $S_{3}$ and its study will be reduced to the study of $S$ when $T \\rightarrow \\infty $ as wanted." 
], [ "Replacing $t^{+}$ by {{formula:308440b4-6e08-411a-9693-9c44cc8b40c0}} and proof of proposition {{formula:cf3d6956-a230-4bf8-bdb6-01f8a514832a}} .", "We recall that we have defined $S$ by the Equation ($\\ref {chap4:eq19}$ ).", "The main objective of this paragraph is to prove the following lemma: Lemma 7 Let us call (here) $\\Delta $ the following quantity: $\\Delta (L^{\\perp },t) = \\sqrt{\\int _{X \\in \\mathbb {R}^{2}/L^{\\perp }} (\\frac{|S_{3}(L^{\\perp },t,X)-S(L^{\\perp },t,X)|}{\\sqrt{V(L^{\\perp },t)}})^{2} d\\tilde{\\lambda }_{2}(X)} \\textit {.", "}$ Then $\\Delta (L^{\\perp },t)$ goes to 0 uniformly in $t$ that are such that $ \\alpha T \\leqslant t \\leqslant T$ with $T \\rightarrow \\infty $ .", "Mainly, this lemma says that we can replace $t^{+}$ in the two $\\sin $ terms by $t$ .", "It is reasonable insofar as $t^{+}$ is very close to $t$ .", "Let us set: $S_{4}(L^{\\perp },t,X) = \\frac{2}{\\pi ^{2} \\textit {covol}(L)} \\sum _{l \\in J_{2}(L^{\\perp },t) } \\frac{W_{l}(X,t)}{l_{1} l_{2}}$ where $W_{l}(X,t) = \\sum _{k=1}^{\\infty } \\frac{\\sin (2 k \\pi l_{1} t ) \\sin (2 k \\pi l_{2} t^{+}) \\cos (2 k \\pi <l,X>)}{k^{2}} \\textit {.", "}$ Then the triangle inequality gives us that: $\\Delta \\leqslant \\sqrt{\\Delta _{1}} + \\sqrt{\\Delta _{2}}$ where $\\Delta _{1} = \\int _{X \\in \\mathbb {R}^{2}/L^{\\perp }} (\\frac{|S_{3}(L^{\\perp },t,X)-S_{4}(L^{\\perp },t,X)|}{\\sqrt{V(L^{\\perp },t)}})^{2} d\\tilde{\\lambda }_{2}(X)$ and $\\Delta _{2} = \\int _{X \\in \\mathbb {R}^{2}/L^{\\perp }} (\\frac{|S_{4}(L^{\\perp },t,X)-S(L^{\\perp },t,X)|}{\\sqrt{V(L^{\\perp },t)}})^{2} d\\tilde{\\lambda }_{2}(X) \\textit {.", "}$ Let us take $\\tilde{\\gamma }$ such that $ \\gamma < \\tilde{\\gamma } < \\frac{1}{2}$ and let us set $t_{1} = \\frac{t}{\\log (t)^{\\tilde{\\gamma }}}$ .", "The Parseval formula and the mean value theorem apply here and give us that: $\\Delta _{1} \\leqslant K_{2}( \\Delta _{1,1} + \\Delta _{1,2})$ where $K_{2} > 0$ , $\\Delta _{1,1} = \\frac{1}{\\log (t)} \\sum _{l \\in J_{1}(L^{\\perp },t_{1})} (\\frac{\\tau \\Vert l \\Vert }{l_{1}l_{2}})^{2} \\textit {, }$ $\\Delta _{1,2} = \\frac{1}{\\log (t)} \\sum _{l \\in J_{1}(L^{\\perp },t) - J_{1}(L^{\\perp },t_{1})} \\frac{1}{l_{1}^{2} l_{2}^{2}}$ and $\\Delta _{2} \\leqslant K_{2}( \\Delta _{1,1} + \\Delta _{1,2}) \\textit {.", "}$ We have used the fact that $V(L^{\\perp },) = \\Theta (\\log (t))$ .", "Then Equation $(\\ref {chap4:eq64})$ , Equation $(\\ref {chap4:eq65})$ , Equation ($\\ref {chap4:eq69}$ ), Equation $(\\ref {chap4:eq75})$ and Equation ($\\ref {chap4:eq72}$ ) give that $\\Delta $ goes uniformly to 0 in $t$ for $t$ such that $\\alpha T \\leqslant t \\leqslant T$ We can now prove Proposition $\\ref {chap4:prop6}$ .", "[Proof of proposition $\\ref {chap4:prop6}$ ] Proposition $\\ref {chap4:prop6}$ is a direct consequence of the lemmas $\\ref {chap4:lemme4}$ , $\\ref {chap4:lemme6}$ , $\\ref {chap4:lemme7}$ and $\\ref {chap4:lemme8}$ ." 
], [ "Case of $t^{-}$ and proof of Proposition {{formula:2024540a-73f5-47d2-9084-7749c887dcec}}", "By following exactly the same approach that has been used to prove proposition $\\ref {chap4:prop6}$ , we prove the following proposition: Proposition 7 Let us suppose that $L$ is admissible.", "Then, one has, for all $\\gamma > 0$ , $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L} \\left( ( \\frac{|\\mathfrak {R}_{\\tau }^{-}(tP,X)-S(L^{\\perp },t,X)|}{\\sqrt{V(t,L^{\\perp })}})^{2} \\right) \\rightarrow 0 \\textit {.", "}$ where $\\tau = \\frac{\\log (t)^{\\gamma }}{t}$ and where the convergence towards 0 is uniform in $t$ such that $\\alpha T \\leqslant t \\leqslant T$ and when $T \\rightarrow \\infty $ .", "We are now able to prove Proposition $\\ref {chap4:prop3}$ : [Proof of Proposition $\\ref {chap4:prop3}$ ] It is a direct consequence of Proposition $\\ref {chap4:prop5}$ , Proposition $\\ref {chap4:prop6}$ and Proposition $\\ref {chap4:prop7}$ and of the triangle inequality.", "So, finally, it is enough to study the behaviour, when $T \\rightarrow \\infty $ , of $S$ .", "It is the object of the next subsection." ], [ "Asymptotic behaviour of $S$", "According to the proof of proposition $\\ref {chap4:prop3}$ , by replacing $L^{\\perp }$ by $L$ (if one is admissible, the other also is and conversely), to show the second assertion of Theorem $\\ref {chap4:thm1}$ , one only needs to prove that $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L^{\\perp }}\\left( (\\frac{S(L,X,t)}{\\sqrt{V(L,t)}})^{2} \\right)$ converges in distribution and in probability towards $\\frac{1}{4 \\pi ^{4} \\text{Covol}(L^{\\perp })^{2}}$ .", "Furthermore, this limit constant will be the same for $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L^{\\perp }}\\left( (\\frac{S(L,X,t)}{\\sqrt{V(L,t)}})^{2} \\right)$ and $\\mathbb {E}_{X \\in \\mathbb {R}^{2}/L^{\\perp }}\\left( (\\frac{S(L,X,t)}{\\sqrt{V(L,t)}})^{2} \\right)$ .", "With the Parseval formula, we see that we only need to prove the following proposition: Proposition 8 $ G(L,t) = \\frac{2}{\\pi ^{4} \\text{Covol}(L^{\\perp })^{2} } \\sum _{l \\in J_{2}(L,t)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{(\\sin (2 \\pi k t l_{1}) \\sin (2 \\pi k t l_{2}))^{2} }{k^{4}} $ normalized by $V(L,t)$ converges in distribution and in probability towards $\\frac{1}{4 \\pi ^{4} \\text{Covol}(L^{\\perp })^{2}}$ .", "The rest of this subsection is dedicated to proving this last proposition." 
], [ "A small remark and approach", "By using the fact that one has for every $x \\in \\mathbb {R}$ , $\\sin (x)^{2} = \\frac{1 - \\cos (2 x) }{2} \\textit {, } $ one gets that: $G(L,t) = G_{1}(L,t) - G_{2}(L,t) - G_{3}(L,t) + G_{4}(L,t)$ where $G_{1}(L,t) = \\frac{1}{2 \\pi ^{4}\\text{Covol}(L^{\\perp })^{2}} \\sum _{l \\in J_{2}(L,t)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{1 }{k^{4}} \\textit {, }$ $G_{2}(L,t) = \\frac{1}{2 \\pi ^{4}\\text{Covol}(L^{\\perp })^{2}} \\sum _{l \\in J_{2}(L,t)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t l_{1}) }{k^{4}} \\textit {, }$ $G_{3}(L,t) = \\frac{1}{2 \\pi ^{4}\\text{Covol}(L^{\\perp })^{2}} \\sum _{l \\in J_{2}(L,t)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t l_{2}) }{k^{4}} \\textit { and }$ $G_{4}(L,t) = \\frac{1}{2 \\pi ^{4}\\text{Covol}(L^{\\perp })^{2}} \\sum _{l \\in J_{2}(L,t)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t l_{1})\\cos (4 \\pi k t l_{2}) }{k^{4}} \\textit {.", "}$ To get the validity of Proposition $\\ref {chap4:prop8}$ , we only need to show the three following propositions: Proposition 9 $\\frac{G_{1}(L,t)}{V(L,t)} \\rightarrow \\frac{1}{4 \\pi ^{4}\\text{Covol}(L^{\\perp })^{2}} $ when $t \\rightarrow \\infty $ .", "Proposition 10 $ \\frac{G_{2}(L,t)}{V(L,t)} $ and $ \\frac{G_{3}(L,t)}{V(L,t)} $ converge in distribution and in probability towards 0 when $t$ is distributed according to the probability measure $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ and when $T \\rightarrow \\infty $ .", "Proposition 11 $ \\frac{G_{4}(L,t)}{V(t,T)} $ converge in distribution and in probability towards 0 when $t$ is distributed according to the probability measure $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ and when $T \\rightarrow \\infty $ .", "Proposition $\\ref {chap4:prop9}$ is just a use of different definitions whereas Proposition $\\ref {chap4:prop10}$ and Proposition $\\ref {chap4:prop11}$ work because, basically, there are, relatively to $G_{1}$ , additional oscillatory terms that make the normalized sum go to 0.", "Now we are going to prove in this order these three propositions." ], [ "Proof of Proposition $\\ref {chap4:prop9}$", "The proof of this proposition is straightforward: [Proof of Proposition $\\ref {chap4:prop9}$ ] One has: $G_{1}(L,t) = \\frac{1}{2 \\pi ^{4} \\text{Covol}(L^{\\perp })^{2}} \\sum _{l \\in J_{2}(L,t)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{1 }{k^{4}}$ where $ J_{2}(L,t) = \\lbrace l \\in L \\text{ } | \\text{ } 0 < \\Vert l \\Vert \\leqslant t \\text{ , } l \\text{ prime} \\text{ and } l_{1} > 0 \\rbrace $ and $ V(L,t)= \\sum _{\\begin{array}{c} l \\in L \\\\ 0 < \\Vert l \\Vert \\leqslant t \\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} \\textit {.}", "$ By making $t$ goes to infinity, one gets the wanted result.", "Next, we are going to prove Proposition $\\ref {chap4:prop10}$ ." 
], [ "Proof of Proposition $\\ref {chap4:prop10}$", "Before giving the proof of Proposition $\\ref {chap4:prop10}$ , we need several lemmas.", "Let us introduce: $ \\tilde{G}_{2}(L,t,T) = \\frac{1}{2 \\pi ^{4}\\text{Covol}(L^{\\perp })^{2}} \\sum _{l \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t l_{1}) }{k^{4}} $ and $ \\tilde{G}_{3}(L,t,T) = \\frac{1}{2 \\pi ^{4}\\text{Covol}(L^{\\perp })^{2}} \\sum _{l \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t l_{2}) }{k^{4}} \\textit {.}", "$ The first lemma basically says that we can study $\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}$ and $\\frac{\\tilde{G}_{3}(L,t,T)}{V(L,t)}$ instead of studying, respectively, $\\frac{G_{2}(L,t)}{V(L,t)}$ and $\\frac{G_{3}(L,t)}{V(L,t)}$ : Lemma 8 One has, when $T \\rightarrow \\infty $ , $\\mathbb {E} \\left( |\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}-\\frac{G_{2}(L,t)}{V(L,t)}| \\right) \\rightarrow 0$ and $\\mathbb {E} \\left( |\\frac{\\tilde{G}_{3}(L,t,T)}{V(L,t)}-\\frac{G_{3}(L,t)}{V(L,t)}| \\right) \\rightarrow 0 $ (Note that the expectation are calculated relatively to $t$ with $t$ being distributed according to the probability measure $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ ).", "We are only going to prove the first fact because the proof is going to be symmetrical in $(l_{1},l_{2})$ .", "One has that: $\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}-\\frac{G_{2}(L,t)}{V(L,t)} = \\frac{D}{V(L,t)} \\sum _{l \\in J_{2}(L,T) -J_{2}(L,t) } \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t l_{1}) }{k^{4}}$ where $D$ is a positive constant.", "By integrating and because $t$ belongs to $[\\alpha T, T]$ , one gets that: $\\mathbb {E}\\left( |\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}-\\frac{G_{2}(L,t)}{V(L,t)}| \\right) \\leqslant \\frac{D}{\\log (T)} \\mathbb {E} \\left( \\sum _{l \\in J_{2}(L,T) -J_{2}(L,t) } \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{1}{k^{4}} \\right)$ where $D$ is a positive constant, possibly different from the previous one.", "We have also used Proposition $\\ref {chap4:prop2}$ .", "Yet, one has that (see Lemma $\\ref {chap4:lemme2}$ ): $\\sum _{l \\in J_{2}(L,T) -J_{2}(L,t) } \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{1}{k^{4}} \\leqslant D \\log (\\frac{T}{t}) \\textit {.", "}$ So, with Equation ($\\ref {chap4:eq102}$ ), one has that: $\\mathbb {E}\\left( |\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}-\\frac{G_{2}(L,t)}{V(L,t)}| \\right) \\leqslant \\frac{D}{\\log (T)} \\mathbb {E}( \\log (\\frac{T}{t})) \\textit {.", "}$ Yet, a quick calculation gives us that: $\\mathbb {E}( \\log (\\frac{T}{t})) = O(1)$ when $T \\rightarrow \\infty $ .", "So, thanks to Equation $(\\ref {chap4:eq104})$ and thanks to Equation $(\\ref {chap4:eq105})$ , one gets the first wanted result.", "The second lemma is an estimating one.", "Lemma 9 For every $A > 0$ , for every $C > 0$ , one has: $ \\int _{ \\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant T \\\\ l_{1} > 0,\\textit { } l_{2} > 0 \\\\ l_{1}l_{2} \\geqslant C\\end{array}} \\frac{1}{l_{1}^{3} l_{2}^{2}} dl_{1} dl_{2} = O(T) \\textit { and } $ $ \\int _{ \\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant T \\\\ l_{1} > 0,\\textit { } l_{2} > 0 \\\\ l_{1}l_{2} \\geqslant C \\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{3}} dl_{1} dl_{2} = O(T) \\textit {.}", "$ Remark.", "Thanks to this lemma, we can show very quickly that the expectation of $\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}$ and of $\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}$ 
tends to 0 when $T \\rightarrow \\infty $ .", "By symmetry, one has: $\\int _{ \\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant T \\\\ l_{1} > 0,\\textit { } l_{2} > 0 \\\\ l_{1}l_{2} \\geqslant C\\end{array}} \\frac{1}{l_{1}^{3} l_{2}^{2}} dl_{1} dl_{2} = \\int _{ \\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant T \\\\ l_{1} > 0,\\textit { } l_{2} > 0 \\\\ l_{1}l_{2} \\geqslant C \\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{3}} dl_{1} dl_{2}$ So, we only have to prove the first equality of lemma $\\ref {chap4:lemme9}$ .", "By passing into polar coordinates $(r, \\theta )$ , one has that: $\\int _{ \\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant T \\\\ l_{1} > 0,\\textit { } l_{2} > 0 \\\\ l_{1}l_{2} \\geqslant C\\end{array}} \\frac{1}{l_{1}^{3} l_{2}^{2}} dl_{1} dl_{2} = \\int _{ \\begin{array}{c} A \\leqslant r \\leqslant T \\\\ \\frac{\\pi }{2} > \\theta > 0 \\\\ \\sin (2 \\theta ) \\geqslant \\frac{2 C}{r^{2}} \\end{array}} \\frac{1}{r^{4}} \\frac{1}{\\cos ^{3}(\\theta ) \\sin ^{2}(\\theta )} dr d\\theta \\textit {.", "}$ We see that, when $t \\rightarrow \\infty $ , there are $\\textit {a priori}$ two essential parts of this last integral: the first one is when $r$ is large and $\\theta $ is closed to $\\frac{1}{2} \\arcsin ( \\frac{2 C}{r^{2}})$ ; the second is when $r$ is large and $\\theta $ is closed to $\\frac{\\pi }{2} - \\frac{1}{2} \\arcsin ( \\frac{2 C}{r^{2}})$ .", "By using the facts that when $\\theta \\rightarrow 0$ , $\\sin (\\theta ) \\sim \\theta $ and when $\\theta \\rightarrow \\frac{\\pi }{2}$ , $\\cos (\\theta ) \\sim \\frac{\\pi }{2}-\\theta $ and by calculating, one gets that the first essential part, let us call it $A_{1}(T)$ , is estimated as followed $A_{1}(T) = O(1)$ whereas the second essential part, let us call it $A_{2}(T)$ , is estimated as followed $A_{2}(T) = O(T) \\textit {.", "}$ By using Equation ($\\ref {chap4:eq96}$ ), one gets that: $\\int _{ \\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant T \\\\ l_{1} > 0,\\textit { } l_{2} > 0 \\\\ l_{1}l_{2} \\geqslant C\\end{array}} \\frac{1}{l_{1}^{3} l_{2}^{2}} dl_{1} dl_{2} = O(T) \\textit {.", "}$ We can now tackle the proof of Proposition $\\ref {chap4:prop10}$ .", "[Proof of Proposition $\\ref {chap4:prop10}$ ] The proof will be symmetrical relatively to the transformation $l_{1} \\leftarrow l_{2}$ .", "So, we only need to give the proof of the result that concerns $G_{2}(L,t)$ .", "According to Lemma $\\ref {chap4:lemme10}$ and Markov's inequality, we only need to see that $\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}$ converges in distribution and in probability towards 0.", "To obtain the fact that $\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)} \\rightarrow 0$ in probability, we are going to show that its second moment goes to 0.", "One has, according to the definition of $\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)}$ , that: $\\mathbb {E} \\big ( (\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t)})^{2} \\big ) \\leqslant D \\mathbb {E}\\left( \\frac{1}{V(L,t)^{2}} \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}}\\frac{1}{(l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{k,k^{\\prime } \\geqslant 1} \\frac{\\cos (4 \\pi k t l_{1}) }{k^{4}} \\frac{\\cos (4 \\pi k^{\\prime } t l^{\\prime }_{1}) }{k^{\\prime 4}} \\right)$ where $D > 0$ .", "By integrating, by using a usual trigonometric formula and by using Proposition $\\ref {chap4:prop2}$ , one gets that: $\\mathbb {E} \\big ( (\\frac{\\tilde{G}_{2}(L,t,T)}{V(L,t))})^{2} \\big ) \\leqslant O(A_{1}(T) + A_{2}(T)) \\textit {.", "}$ where 
$A_{1}(T) = \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}}\\frac{1}{(l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{k,k^{\\prime } \\geqslant 1} \\frac{1}{(k k^{\\prime })^{4}} \\max (\\frac{1}{\\log (T)^{2}}, \\frac{1}{\\log (T)^{2}T|kl_{1}+k^{\\prime }l^{\\prime }_{1}| })$ and $A_{2}(T) = \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}}\\frac{1}{(l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{k,k^{\\prime } \\geqslant 1} \\frac{1}{(k k^{\\prime })^{4}} \\max (\\frac{1}{\\log (T)^{2}}, \\frac{1}{\\log (T)^{2}T|kl_{1}-k^{\\prime }l^{\\prime }_{1}| }) \\textit {.", "}$ To get the wanted result, it is enough to show that $A_{1}(T)$ and $A_{2}(T)$ tend to 0 when $T \\rightarrow \\infty $ .", "Furthermore, we are only going to show that $A_{2}(T)$ tend to 0 when $T \\rightarrow \\infty $ , the proof for $A_{1}(T)$ being symmetrical.", "Let us remark that one has: $A_{2}(T) \\leqslant A_{2,1}(T) + A_{2,2}(T)$ where $A_{2,1}(T) = \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}}\\frac{1}{(l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{\\begin{array}{c} k,k^{\\prime } \\geqslant 1 \\\\ \\min (k,k^{\\prime }) \\geqslant \\lceil \\log (T) \\rceil \\end{array}} \\frac{1}{(k k^{\\prime })^{4}} \\max (\\frac{1}{\\log (T)^{2}}, \\frac{1}{\\log (T)^{2}T|kl_{1}-k^{\\prime }l^{\\prime }_{1}| })$ and $A_{2,2}(T) = \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}}\\frac{1}{(l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{\\lceil \\log (T) \\rceil \\geqslant k,k^{\\prime } \\geqslant 1 } \\frac{1}{(k k^{\\prime })^{4}} \\max (\\frac{1}{\\log (T)^{2}}, \\frac{1}{\\log (T)^{2}T|kl_{1}-k^{\\prime }l^{\\prime }_{1}| }) \\textit {.", "}$ From Lemma $\\ref {chap4:lemme2}$ and by using a usual equivalent, one gets that: $A_{2,1}(T) \\rightarrow 0$ when $T \\rightarrow \\infty $ .", "As a consequence, we only need to look at the behaviour of $A_{2,2}(T)$ .", "There are two types of terms in $A_{2,2}(T)$ : those such that $k l_{1}$ is close to $k^{\\prime } l^{\\prime }_{1}$ , for example at a distance less than $\\frac{1}{T}$ and the others.", "In the first case, it forms a sum that is estimated by $\\frac{D}{\\log (T)^{2}} \\int _{\\begin{array}{c}l_{1} > 0 \\textit {, } l_{2} > 0 \\\\ l_{1} l_{2} \\geqslant C \\\\ D \\leqslant \\Vert l \\Vert \\leqslant T \\log (T)\\end{array}} \\frac{1}{l_{1}^{2}l_{2}^{2}} \\frac{1}{T l_{1}}dl_{1}dl_{2}$ with $D > 0$ and this last quantity goes to zero according to Lemma $\\ref {chap4:lemme9}$ (we have first integrated over $l^{\\prime }_{2}$ and then used the fact $k l_{1}$ is close to $k^{\\prime } l^{\\prime }_{1}$ ).", "So, finally, we only need to show that the following quantity goes to 0 when $T$ goes to infinity: $J(T) = \\sum _{\\begin{array}{c} \\lceil \\log (T) \\rceil \\geqslant k,k^{\\prime } \\geqslant 1 \\\\ |kl_{1} - k^{\\prime }l^{\\prime }_{1}|\\geqslant \\frac{1}{T} \\end{array}} \\frac{1}{(l_{1}l_{2})^{2}}\\frac{1}{(l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\frac{1}{(k k^{\\prime })^{4}} \\max (\\frac{1}{\\log (T)^{2}}, \\frac{1}{\\log (T)^{2}T|kl_{1}-k^{\\prime }l^{\\prime }_{1}| }) \\textit {.", "}$ Yet, one has that: $J(T) \\leqslant J_{1}(T) + J_{2}(T)$ where $J_{1}(T)= \\sum _{\\begin{array}{c} \\lceil \\log (T) \\rceil \\geqslant k,k^{\\prime } \\geqslant 1 \\\\ |kl_{1} - k^{\\prime }l^{\\prime }_{1}|\\geqslant 1 \\end{array}} \\frac{1}{(l_{1}l_{2})^{2}}\\frac{1}{(l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\frac{1}{(k k^{\\prime })^{4}} \\max (\\frac{1}{\\log (T)^{2}}, \\frac{1}{\\log 
(T)^{2}T|kl_{1}-k^{\\prime }l^{\\prime }_{1}| })$ and $J_{2}(T) = \\sum _{\\begin{array}{c} \\lceil \\log (T) \\rceil \\geqslant k,k^{\\prime } \\geqslant 1 \\\\ 1 \\geqslant |kl_{1} - k^{\\prime }l^{\\prime }_{1}|\\geqslant \\frac{1}{T} \\end{array}} \\frac{1}{(l_{1}l_{2})^{2}}\\frac{1}{(l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\frac{1}{(k k^{\\prime })^{4}} \\max (\\frac{1}{\\log (T)^{2}}, \\frac{1}{\\log (T)^{2}T|kl_{1}-k^{\\prime }l^{\\prime }_{1}| }) \\textit {.", "}$ Yet, according to Lemma $\\ref {chap4:lemme2}$ , one has that: $J_{1}(T) \\leqslant \\frac{K}{T \\log (T)^{2}} O(\\log (T)^{2})$ and so $J_{1}(T) \\rightarrow 0$ when $T \\rightarrow \\infty $ .", "Furthermore, with $P$ being a constant that can be chosen as large as one wants, one has, for $T$ large: $J_{2}(T) \\leqslant \\frac{K}{T \\log (T)^{2}} \\int _{(l_{1},l_{2},l^{\\prime }_{1},l^{\\prime }_{2}) \\in Dom(T)} \\frac{1}{(l_{1}l_{2}l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\frac{1}{l_{1} - l^{\\prime }_{1}} dl_{1} dl^{\\prime }_{1} dl_{2} dl^{\\prime }_{2} + o(1)$ where $Dom(T) = \\lbrace (l_{1},l_{2},l^{\\prime }_{1},l^{\\prime }_{2}) \\in \\mathbb {R}^{4} \\text{ } | \\text{ } T \\log (T) \\geqslant l_{1} \\geqslant P \\text{, } \\\\T \\log (T) \\geqslant l^{\\prime }_{1} \\geqslant P-1 \\text{, } T \\log (T) \\geqslant l^{\\prime }_{2},l_{2} > 0 \\text{, } \\\\l^{\\prime }_{1} +1 \\geqslant l_{1} \\geqslant l^{\\prime }_{1} + \\frac{1}{T} \\text{ and } l^{\\prime }_{1}l^{\\prime }_{2}\\text{, } l_{1}l_{2} \\geqslant C \\rbrace $ and $o(1)$ is a quantity that goes to 0 when $T \\rightarrow \\infty $ .", "By integrating in $l_{2}$ and in $l^{\\prime }_{2}$ , one gets from Equation ($\\ref {chap4:eq120}$ ) that: $J_{2}(T) \\leqslant \\frac{D}{T \\log (T)^{2}} \\int _{\\begin{array}{c} T \\log (T) \\geqslant l_{1} \\geqslant P \\\\ T \\log (T) \\geqslant l^{\\prime }_{1} \\geqslant P-1 \\\\ l^{\\prime }_{1} +1 \\geqslant l_{1} \\geqslant l^{\\prime }_{1} + \\frac{1}{T} \\end{array}} \\frac{1}{l_{1}l^{\\prime }_{1}} \\frac{1}{l_{1}-l^{\\prime }_{1}}dl_{1}dl^{\\prime }_{1} + o(1) \\textit {.", "}$ By using the fact that $0 < \\frac{1}{l_{1}(l_{1}-l^{\\prime }_{1})} \\leqslant \\frac{1}{l^{\\prime }_{1}(l_{1}-l^{\\prime }_{1})}$ and by integrating on $l_{1}$ , one gets with Equation $(\\ref {chap4:eq121})$ that: $J_{2}(T) \\leqslant \\frac{K}{T \\log (T)^{2}} \\int _{ T \\geqslant l^{\\prime }_{1} \\geqslant P-1} \\frac{\\log (T)}{(l^{\\prime }_{1})^{2}} dl^{\\prime }_{1} = O(\\frac{1}{T \\log (T)}) + o(1) \\textit {.", "}$ So, we have finally that $J_{2}(T) \\rightarrow 0$ , when $T \\rightarrow \\infty $ , and so does $J(T)$ , $A_{2,2}(T)$ and $A_{2}(T)$ .", "By exchanging $l_{1}$ and $l_{2}$ , we obtain the fact that $A_{1}(T) \\rightarrow 0$ when $T \\rightarrow \\infty $ .", "Finally, with Equation ($\\ref {chap4:eq108}$ ), one gets the wanted result." 
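, "The mechanism at work in this proof is simply that averaging an oscillatory term over $t \in [\alpha T, T]$ produces a factor $O(\frac{1}{T \omega })$ , where $\omega $ stands for a frequency such as $|kl_{1}-k^{\prime }l^{\prime }_{1}|$ ; for $\rho $ uniform this can be seen exactly, as in the following illustrative sketch (the values of $\alpha $ and $\omega $ are arbitrary):", "
```python
import math

# With t uniformly distributed on [alpha*T, T], the mean of cos(4*pi*omega*t) is
# exactly (sin(4*pi*omega*T) - sin(4*pi*omega*alpha*T)) / (4*pi*omega*(1-alpha)*T),
# hence O(1/(T*omega)) uniformly in the phase.
alpha, omega = 0.5, 0.37
for T in (1e2, 1e3, 1e4, 1e5):
    a = 4 * math.pi * omega
    avg = (math.sin(a * T) - math.sin(a * alpha * T)) / (a * (1 - alpha) * T)
    print(T, abs(avg), 1.0 / (T * omega))   # |avg| and the bound both decay like 1/T
```
"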
], [ "Proof of Proposition $\\ref {chap4:prop11}$", "To prove Proposition $\\ref {chap4:prop11}$ , we are going to follow the same approach as was used just before.", "Before entering into the proof of Proposition $\\ref {chap4:prop11}$ , we need the following preparatory lemma.", "Let us introduce: $\\tilde{G}_{4}(L,t,T) = \\frac{1}{2 \\pi ^{4} \\text{Covol}(L^{\\perp })^{2}} \\sum _{l \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t l_{1})\\cos (4 \\pi k t l_{2}) }{k^{4}} \\textit {.", "}$ Then we have the following lemma that in particular says that $\\frac{G_{4}(L,t)}{V(L,t)}-\\frac{\\tilde{G}_{4}(L,t,T)}{V(L,t)}$ tends, in probability, towards 0.", "Lemma 10 One has, when $T \\rightarrow \\infty $ , $\\mathbb {E} \\left( |\\frac{\\tilde{G}_{4}(L,t,T)}{V(L,t)}-\\frac{G_{4}(L,t)}{V(L,t)}| \\right) \\rightarrow 0 \\textit {.}", "$ We use the same estimates as in the proof of Lemma $\\ref {chap4:lemme10}$ .", "We can now tackle the proof of Proposition $\\ref {chap4:prop11}$ .", "[Proof of Proposition $\\ref {chap4:prop11}$ ] According to Lemma $\\ref {chap4:lemme11}$ and the Markov's inequality, we only need to prove that $\\frac{\\tilde{G}_{4}(L,t,T)}{V(L,t)}$ tends in probability towards 0.", "To do so, by using a usual trigonometric formula, we have that: $\\frac{\\tilde{G}_{4}(L,t,T)}{V(L,t)} = D(U_{1}(L,t,T) + U_{2}(L,t,T))$ with $D > 0$ and where $U_{1}(L,t,T) = \\frac{1}{V(L,t)} \\sum _{l \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t (l_{1} - l_{2})) }{k^{4}}$ and $U_{2}(L,t,T) = \\frac{1}{V(L,t)} \\sum _{l \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2})^{2}} \\sum _{k=1}^{\\infty } \\frac{\\cos (4 \\pi k t (l_{1} + l_{2})) }{k^{4}} \\textit {.", "}$ So, to prove that $\\frac{\\tilde{G}_{4}(L,t,T)}{V(L,t)}$ tends to 0 in probability, we only need to show that the moments of order 2 of $U_{1}(L,t,T)$ and $U_{2}(L,t,T)$ tend to 0 in probability.", "We are going to prove this fact for $U_{1}(L,t,T)$ and the proof will be valid by symmetry for $U_{2}$ and we will so get the wanted result.", "We have that: $\\mathbb {E}(U_{1}(L,t,T)^{2}) \\leqslant D \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2}l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{ k,k^{\\prime } \\geqslant 1 } \\frac{1}{(k k^{\\prime })^{4}} \\min \\left( \\frac{1}{\\log (T)^{2}}, \\frac{h(k l,k l^{\\prime }) + h(k l,-k l^{\\prime })}{T \\log (T)^{2}} \\right)$ where $D > 0$ is a constant and $h(l,l^{\\prime }) = \\frac{1}{|l_{1}-l_{2}-(l^{\\prime }_{1}-l^{\\prime }_{2})|} \\textit {.", "}$ We are going to show that: $Z(L,T) = \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2}l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{ k,k^{\\prime } \\geqslant 1 } \\frac{1}{(k k^{\\prime })^{4}} \\min \\left( \\frac{1}{\\log (T)^{2}}, \\frac{h(k l,k l^{\\prime }) }{T \\log (T)^{2}} \\right) \\rightarrow 0$ when $T \\rightarrow \\infty $ .", "Furthermore, the proof will still be valid if we exchange $l^{\\prime }$ and $-l^{\\prime }$ .", "So we will get the wanted result due to Equation ($\\ref {chap4:eq127}$ ).", "One has that: $Z(L,T) = \\Delta _{1}(L,T) + \\Delta _{2}(L,T)$ where $\\Delta _{1}(L,T) = \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2}l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{ \\begin{array}{c} k,k^{\\prime } \\geqslant 1 \\\\ k \\textit { or } k^{\\prime } > \\log (T) \\end{array}} \\frac{1}{(k k^{\\prime })^{4}} \\min \\left( \\frac{1}{\\log (T)^{2}}, \\frac{h(k l,k l^{\\prime })}{T \\log 
(T)^{2}} \\right)$ and $\\Delta _{2}(L,T) = \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2}l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{ \\begin{array}{c} k,k^{\\prime } \\geqslant 1 \\\\ k \\textit { and } k^{\\prime } \\leqslant \\log (T) \\end{array}} \\frac{1}{(k k^{\\prime })^{4}} \\min \\left( \\frac{1}{\\log (T)^{2}}, \\frac{h(k l,k l^{\\prime }) }{T \\log (T)^{2}} \\right) \\textit {.", "}$ Yet, according to Lemma $\\ref {chap4:lemme2}$ , one has that: $\\Delta _{1}(L,T) \\leqslant \\frac{D}{\\log (T)^{8}} (\\int _{ \\begin{array}{c} A \\leqslant \\Vert l \\Vert \\leqslant T \\\\ |l_{1} l_{2} | \\geqslant C \\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2})^{2} + o(1) \\underset{T \\rightarrow \\infty }{\\rightarrow } 0$ where $D > 0$ is a constant.", "So, we only need to prove that $\\Delta _{2}(L,T) \\rightarrow 0$ when $T \\rightarrow \\infty $ .", "Yet, one has also: $\\Delta _{2}(L,T) = \\Delta _{2,1}(L,T) + \\Delta _{2,2}(L,T)$ where $\\Delta _{2,1}(L,T)= \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2}l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{ \\begin{array}{c} k,k^{\\prime } \\geqslant 1 \\\\ k \\textit { and } k^{\\prime } \\leqslant \\log (T) \\\\h(k l,k l^{\\prime }) \\leqslant 1 \\end{array}} \\frac{1}{(k k^{\\prime })^{4}} \\frac{h(k l,k l^{\\prime }) }{T \\log (T)^{2}} \\textit { and }$ $\\Delta _{2,2}(L,T) = \\sum _{l,l^{\\prime } \\in J_{2}(L,T)} \\frac{1}{(l_{1}l_{2}l^{\\prime }_{1}l^{\\prime }_{2})^{2}} \\sum _{ \\begin{array}{c} k,k^{\\prime } \\geqslant 1 \\\\ k \\textit { and } k^{\\prime } \\leqslant \\log (T) \\\\ h(k l,k l^{\\prime }) > 1 \\end{array}} \\frac{1}{(k k^{\\prime })^{4}} \\frac{h(k l,k l^{\\prime }) }{T \\log (T)^{2}} \\textit {.", "}$ Yet, there exists $A > 0$ such that: $\\Delta _{2,1}(L,T) \\leqslant \\frac{K}{T \\log (T)^{2}} \\left( \\int _{A \\leqslant \\Vert l \\Vert \\leqslant T \\log (T)} \\frac{1}{l_{1}^{2} l_{2}^{2}} \\right)^{2} \\rightarrow 0$ when $T \\rightarrow \\infty $ and where $K$ is a positive constant.", "The right-hand side converges towards 0 because of Lemma $\\ref {chap4:lemme2}$ .", "Because of Equation $(\\ref {chap4:eq133})$ , it only remains to prove that $\\Delta _{2,2}(L,T)$ converges towards 0 when $T \\rightarrow \\infty $ .", "Yet, one has that: $\\Delta _{2,2}(L,T) \\leqslant \\frac{K}{\\log (T)^{2}} \\int _{\\begin{array}{c}A \\leqslant \\Vert l \\Vert , \\Vert l^{\\prime } \\Vert \\leqslant T \\log (T) \\\\ |l_{1}l_{2}|,|l^{\\prime }_{1}l^{\\prime }_{2}| \\geqslant C \\\\ |l_{1}-l_{2}-(l^{\\prime }_{1}-l^{\\prime }_{2})| < 1 \\end{array}} f(l,l^{\\prime })dl dl^{\\prime }$ where $K > 0$ and $f(l,l^{\\prime }) = \\left( \\frac{1}{l_{1}l_{2}l^{\\prime }_{1}l^{\\prime }_{2}} \\right)^{2}$ .", "Because of Equation ($\\ref {chap4:eq138}$ ) and for symmetry reasons, it is enough to show that the following quantity converges towards 0 when $T \\rightarrow \\infty $ : $J(T) = \\frac{1}{\\log (T)^{2}} \\int _{(l,l^{\\prime }) \\in I(T) } f(l,l^{\\prime }) dl dl^{\\prime }$ where $I(T) = \\lbrace (l,l^{\\prime }) \\in \\mathbb {R}^{4} \\textit { } | & \\textit { } l_{1},l_{2},l^{\\prime }_{1},l^{\\prime }_{2} > 0 \\textit { , } \\nonumber \\\\& l_{1} > l_{2} \\textit { , } l^{\\prime }_{1} > l^{\\prime }_{2} \\textit { , } 1 > l_{1}-l_{2}-(l^{\\prime }_{1}-l^{\\prime }_{2}) > 0 \\textit { , } \\nonumber \\\\& A \\leqslant \\Vert l \\Vert , \\Vert l^{\\prime } \\Vert \\leqslant T \\log (T) \\textit { and } l_{1}l_{2},l^{\\prime }_{1}l^{\\prime }_{2} \\geqslant C \\rbrace 
\\textit {.", "}$ Let us call $I_{1}(T)$ the set of all $(l,l^{\\prime })$ that belong to $I(T)$ and that verify $l_{1} - l_{2} \\leqslant 2$ .", "Let us call also $I_{2}(T) = I(T) - I_{1}(T)$ .", "Let us note that if $(l,l^{\\prime }) \\in I_{1}(T)$ then $l^{\\prime }_{1} - l^{\\prime }_{2} \\leqslant 2$ .", "So, one has: $J_{1}(T) = \\frac{1}{\\log (T)^{2}} \\int _{(l,l^{\\prime }) \\in I_{1}(T) } f(l,l^{\\prime }) dl dl^{\\prime } \\underset{T \\rightarrow \\infty }{\\rightarrow } 0$ because $\\int _{(l,l^{\\prime }) \\in I_{1}(T) } f(l,l^{\\prime }) dl dl^{\\prime }$ is bounded ($l$ and $l^{\\prime }$ are close to the axis $y=x$ ).", "Let us set: $J_{2}(T) = \\frac{1}{\\log (T)^{2}} \\int _{(l,l^{\\prime }) \\in I_{2}(T) } f(l,l^{\\prime }) dl dl^{\\prime }$ so that $J(T) = J_{1}(T) + J_{2}(T) \\textit {.", "}$ As a consequence from Equation ($\\ref {chap4:eq141}$ ), it is enough to prove that $J_{2}(T) \\rightarrow 0$ when $T \\rightarrow \\infty $ in order to prove that $\\Delta _{2,2}(L,T)$ converges towards 0 when $T \\rightarrow \\infty $ .", "It is easy to see that an $\\textit {a priori}$ important part of the integral $\\int _{(l,l^{\\prime }) \\in I_{2}(T) } f(l,l^{\\prime }) dl dl^{\\prime }$ is the $l$ and $l^{\\prime }$ such that $l_{1}$ and $l^{\\prime }_{1}$ are large, for example larger than $\\log (\\log (T))$ , and $l_{2}$ and $l^{\\prime }_{2}$ are small, for example smaller than $ \\frac{1}{ \\log (\\log (T))}$ .", "The rest of the integral, when divided by $\\log (T)^{2}$ , goes to 0 when $T \\rightarrow \\infty $ .", "But we have also that $1 > l_{1}-l_{2}-(l^{\\prime }_{1}-l^{\\prime }_{2}) > 0$ because $(l,l^{\\prime }) \\in I(T)$ .", "So, it implies in particular that, for $T$ large enough, $ \\frac{3}{2} > l_{1} - l^{\\prime }_{1} > - \\frac{1}{2} \\textit {.}", "$ Hence, by integrating first in $l_{2}$ and in $l^{\\prime }_{2}$ and by using the fact that the lattice is admissible, we have that $J_{2}(T)$ is estimated as followed: $J_{2}(T) \\leqslant \\frac{K_{1}}{\\log (T)^{2}} \\int _{\\begin{array}{c} T \\log (T) \\geqslant l_{1},l^{\\prime }_{1} \\geqslant \\log (\\log (T)) \\\\ \\frac{3}{2} > l_{1} - l^{\\prime }_{1} > - \\frac{1}{2}\\end{array}} \\frac{1}{l_{1} l^{\\prime }_{1} } dl_{1}dl^{\\prime }_{1} +o(1)$ where $K_{1}$ is a positive constant (that depends on $C$ ).", "From Equation ($\\ref {chap4:eq143}$ ), by integrating, first, in $l_{1}$ , second, in $l^{\\prime }_{1}$ , one gets that: $J_{2}(T) \\leqslant \\frac{K_{2}}{\\log (T)} +o_{T}(1)$ where $K_{2} > 0$ , which concludes the proof." ], [ "Conclusion", "We can now give the full proof of Theorem $\\ref {chap4:thm1}$ .", "[Proof of Theorem $\\ref {chap4:thm1}$ ] Proposition $\\ref {chap4:prop2}$ gives us the first part of Theorem $\\ref {chap4:thm1}$ .", "The second part of Theorem $\\ref {chap4:thm1}$ is the consequence of Proposition $\\ref {chap4:prop3}$ and of Proposition $\\ref {chap4:prop8}$ (with $L$ being replaced by $L^{\\perp }$ ; the validity of this last Proposition is a consequence of Proposition $\\ref {chap4:prop9}$ , Proposition $\\ref {chap4:prop10}$ and Proposition $\\ref {chap4:prop11}$ )." 
], [ "Proof of Theorem 15", "The goal of this section is to establish Theorem $\\ref {chap4:thm1000}$ .", "It is a natural extension of Theorem $\\ref {chap4:thm1}$ and of a part of $\\ref {chap4:thm2}$ .", "We see that, in that case, the normalization is larger than before: at least $\\log (t)$ whereas the normalization before was in $\\sqrt{\\log (t)}$ .", "To show this last theorem, we are going to proceed in three steps: first, we will show that the lower and upper estimates about $\\tilde{V}(L,r)$ hold, second we will show that the lower and upper estimates about $\\tilde{V}(L,r)$ hold and third we will conclude the proof of Theorem $\\ref {chap4:thm1000}$ by using the third subsection of Section 4." ], [ "Estimation of $\\tilde{V}(L,r)$ in the typical case", "The goal of this subsection is to show the following proposition: Proposition 12 For every $\\epsilon > 0$ , for a typical $L \\in {S}_{d}$ , one has that $\\tilde{V}(L,r) = O(r^{2 d - 2 + \\epsilon })$ and $\\frac{\\tilde{V}(L,r)}{r^{ d - 1}} \\underset{r \\rightarrow \\infty }{\\rightarrow } \\infty \\textit {.", "}$ The proof of this proposition relies heavily on $\\cite {Skriganov}$ and we need to recall a fundamental theorem.", "Definition 6 A subgroup $G \\subset SL_{d}(\\mathbb {R})$ is called ergodic on the homogeneous space ${S}_{d}$ if for every $G$ -invariant measurable subset $A \\subset {S}_{d}$ , $\\mu _{d}(A) = 0$ or $\\mu _{d}(A) = 1$ where $\\mu _{d}$ is the unique Haar and probability measure on ${S}_{d}$ .", "The Moore's ergodic theorem gives us in fact that $G$ is ergodic if, and only if, $G$ is not contained in any compact subgroup of $SL_{d}(\\mathbb {R})$ .", "As a consequence, $\\Delta $ is ergodic and thus we have the following fundamental theorem Theorem 7 Let $\\psi $ a function integrable over $({S}_{d},\\mu _{d})$ .", "Then, for almost all $L \\in {S}_{d}$ (in the sense of the measure $\\mu _{d}$ ), one has that $\\lim _{r \\rightarrow \\infty } \\frac{1}{|\\Delta _{r}|} \\sum _{\\delta \\in \\Delta _{r}} \\psi (\\delta L) = \\int _{{S}_{d}} \\psi (L) d\\mu _{d}(L) \\textit {.}", "$ We can now give the proof of Proposition $\\ref {chap4:prop106}$ .", "[Proposition $\\ref {chap4:prop106}$ ] First, one has that: $\\tilde{V}(L,r) \\leqslant \\left( \\sum _{\\delta \\in \\Delta _{r}} \\frac{1}{\\Vert \\delta L \\Vert ^{d}} \\right)^{2} \\textit {.", "}$ Yet, Lemma 3.2 from $\\cite {Skriganov}$ gives us that for every $\\epsilon > 0$ , for a typical $L \\in {S}_{d}$ , $\\left( \\sum _{\\delta \\in \\Delta _{r}} \\frac{1}{\\Vert \\delta L \\Vert ^{d}} \\right) = O(r^{d-1+ \\epsilon }) \\textit {.", "}$ So, Equation ($\\ref {chap4:eq1034}$ ) and Equation ($\\ref {chap4:eq1035}$ ) give us the wanted first result of Proposition $\\ref {chap4:prop106}$ .", "Second, one has that: $\\sqrt{\\frac{\\tilde{V}(r,L)}{r^{ d - 1}}} \\geqslant \\frac{K_{1}}{r^{d-1}} \\sum _{\\delta \\in \\Delta _{r}} \\frac{1}{\\Vert \\delta L \\Vert ^{d}}$ where $K_{1} > 0$ .", "We have obtained this last equation by using the concavity of the square root and because the cardinal number of $\\Delta _{r}$ is of order $r^{d-1}$ (see Equation $(\\ref {chap4:eq1300})$ ).", "Let then $m \\geqslant 1$ .", "One has that: $\\frac{1}{r^{d-1}} \\sum _{\\delta \\in \\Delta _{r}} \\frac{1}{\\Vert \\delta L \\Vert ^{d}} \\geqslant \\frac{1}{r^{d-1}} \\sum _{\\delta \\in \\Delta _{r}} \\min \\left(m,\\frac{1}{\\Vert \\delta L \\Vert ^{d}} \\right) \\textit {.", "}$ Yet, $ L \\longmapsto \\min \\left(m,\\frac{1}{\\Vert L \\Vert ^{d}} \\right)$ is 
integrable over ${S}_{d}$ .", "So, thanks to the previous theorem, one has, for a typical $L \in {S}_{d}$ : $\frac{1}{r^{d-1}} \sum _{\delta \in \Delta _{r}} \min \left(m,\frac{1}{\Vert \delta L \Vert ^{d}} \right) \underset{r \rightarrow \infty }{\rightarrow } K_{2} \int _{{S}_{d}} \min (m, \frac{1}{\Vert L \Vert ^{d}}) d\mu _{d}$ where $K_{2} > 0$ does not depend on $m$ .", "By using Equation $(\ref {chap4:eq1036})$ , Equation $(\ref {chap4:eq1037})$ and Equation $(\ref {chap4:eq1038})$ , one gets that for every $m \geqslant 1$ , for a typical $L \in {S}_{d}$ : $\liminf _{r \rightarrow \infty } \sqrt{\frac{\tilde{V}(L,r)}{r^{ d - 1}}} \geqslant K \int _{{S}_{d}} \min (m, \frac{1}{\Vert L \Vert ^{d}}) d\mu _{d}$ where $K > 0$ does not depend on $m$ .", "By making $m \rightarrow \infty $ , by using Fatou's lemma and by using the fact that $\int _{{S}_{d}}\frac{1}{\Vert L \Vert ^{d}} d\mu _{d} = \infty \textit {, }$ one gets the wanted result." ], [ "Result about $V(L,t)$ in the typical case", "The goal of this subsection is to prove the following proposition: Proposition 13 For every $\epsilon > 0$ , for $L$ a typical lattice, one has that $V(L,t) = O(\log (t)^{2+\epsilon })$ and $\frac{V(L,t)}{\log (t)^{2}} \underset{t \rightarrow \infty }{\rightarrow } \infty \textit {.", "}$ In fact, the most important part of $V(L,t)$ comes from the terms for which $l_{1}^{2} l_{2}^{2}$ is the smallest possible.", "So, we need to know more about how small $|l_{1} l_{2}|$ can be.", "In fact, we have the following result: Theorem 8 ([11], [9]) For a typical $L \in {S}_{2}$ , there exists a sequence $(l_{n})_{n \in \mathbb {N}}$ such that $ \Vert l_{n} \Vert \underset{n \rightarrow \infty }{\rightarrow } \infty \textit { and } \log (\Vert l_{n} \Vert ) |(l_{n})_{1} (l_{n})_{2}| \underset{n \rightarrow \infty }{\rightarrow } 0 \textit {.}", "$ Furthermore, for all $\alpha > 0$ , for a typical $L \in {S}_{2}$ , there exists $C > 0$ such that for all $l \in L-\lbrace 0 \rbrace $ , $ |l_{1}l_{2}| \geqslant C | \log (\Vert l \Vert )|^{-1 - \alpha } \textit {.", "}$ As a consequence of Theorem $\ref {chap4:thm12}$ , we see that, to establish Proposition $\ref {chap4:prop107}$ , the following lemma will be convenient: Lemma 11 For all $C > 0$ , for all $A > 0$ , for all $\alpha > 0$ , one has that: $\int _{\begin{array}{c} l \in \mathbb {R}^{2} \\ A \leqslant \Vert l \Vert \leqslant t \\ |\text{Num}(l)| \geqslant C | \log (\Vert l \Vert ) |^{-1-\alpha }\end{array}} \frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2} = O (\log (t)^{2+ \alpha }) \textit {.", "}$ The proof of this lemma is basically the same as the proof of Lemma $\ref {chap4:lemme2}$ .", "First, let us say that it is enough to prove the result for $A$ fixed and large enough.", "Then, let us set: $J(t) = \int _{\begin{array}{c} l \in \mathbb {R}^{2} \\ A \leqslant \Vert l \Vert \leqslant t \\ |\text{Num}(l)| \geqslant C | \log (\Vert l \Vert ) |^{-1-\alpha }\end{array}} \frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2} \textit {.", "}$ By passing into polar coordinates $(r,\theta )$ and by using the symmetries, one has: $J(t) = 8 \int _{\begin{array}{c}A \leqslant r \leqslant t \\ 0 \leqslant \theta \leqslant \frac{\pi }{4} \\ \sin (2 \theta ) \geqslant \frac{2 C \log (r)^{-1-\alpha } }{r^{2}} \end{array}} \frac{1}{r^{3}} \frac{4}{\sin (2 \theta )^{2}} dr d\theta \textit {.", "}$ By making the changes of variables $\theta ^{\prime } = 2 \theta $ and,
then, $u = \\tan (\\theta ^{\\prime })$ , one gets from Equation $(\\ref {chap4:eq11})$ and by taking $A$ large enough: $J(t) = 16 \\int _{A \\leqslant r \\leqslant t^{1+ \\epsilon _{1}}} \\frac{1}{r^{3}} (-1 + \\frac{r^{2} \\log (r)^{1+\\alpha }}{2 C} \\sqrt{1 - (\\frac{2 C}{\\log (r)^{1+\\alpha } r^{2}})^{2}}) dr \\textit {.", "}$ From Equation ($\\ref {chap4:eq1042}$ ), for all $A$ large enough, one has that: $J(t) = O (\\log (t)^{2+\\alpha })$ when $t \\rightarrow \\infty $ .", "So we get the wanted result.", "We can now prove Proposition $\\ref {chap4:prop107}$ .", "[Proof of Proposition $\\ref {chap4:prop107}$ ] Let $\\epsilon > 0$ .", "For a typical $L \\in {S}_{2}$ , there exists $C > 0 $ such that for all $l \\in L-\\lbrace 0 \\rbrace $ $|l_{1}l_{2}| \\geqslant C | \\log (\\Vert l \\Vert )|^{-1 - \\epsilon }$ and there exists a sequence $(l_{n})_{n \\in \\mathbb {N}} \\in L^{\\mathbb {N}}$ such that $\\Vert l_{n} \\Vert \\underset{n \\rightarrow \\infty }{\\rightarrow } \\infty \\textit { and }$ $ \\log (\\Vert l_{n} \\Vert ) |(l_{n})_{1} (l_{n})_{2}| \\underset{n \\rightarrow \\infty }{\\rightarrow } 0 \\textit {.", "}$ So, first, one has that there exist $A,D > 0$ such that: $V(L,t) \\leqslant D \\int _{\\begin{array}{c} l \\in \\mathbb {R}^{2} \\\\ A \\leqslant \\Vert l \\Vert \\leqslant t \\\\ |\\text{Num}(l)| \\geqslant C | \\log (\\Vert l \\Vert ) |^{-1-\\epsilon }\\end{array}} \\frac{1}{l_{1}^{2} l_{2}^{2}} dl_{1} dl_{2} = O (\\log (t)^{2+ \\epsilon })$ according to the definition of $V(L,t)$ and because of Equation $(\\ref {chap4:eq1044})$ .", "So, because of Lemma $\\ref {chap4:lemme13}$ , one gets that: $V(L,t) = O (\\log (t)^{2+ \\epsilon }) \\textit {.", "}$ Furthermore, a consequence of Equation $(\\ref {chap4:eq1045})$ and of Equation $(\\ref {chap4:eq1046})$ is the fact that: $\\liminf _{t \\rightarrow \\infty } \\frac{V(L,t)}{\\log (t)^{2}} = \\infty $ also due to the definition of $V(L,t)$ .", "We can now conclude the proof of Theorem $\\ref {chap4:thm1000}$ ." ], [ "Conclusion", "[Proof of Theorem $\\ref {chap4:thm1000}$ ] The first assertion of Theorem $\\ref {chap4:thm1000}$ is proven by Proposition $\\ref {chap4:prop106}$ .", "The second assertion of Theorem $\\ref {chap4:thm1000}$ is proven by Proposition $\\ref {chap4:prop107}$ .", "The third part of Theorem $\\ref {chap4:thm1000}$ , that concerns the convergence in distribution, is shown as the second part of Theorem $\\ref {chap4:thm1}$ (see 4.2 and 4.3)." ], [ "Proof of Theorem 16", "In this section, we are going to show Theorem $\\ref {chap4:thm3}$ .", "Let $x \\in \\mathbb {R}$ and $a > 0$ .", "Instead of considering $t$ , we can consider $ \\frac{t}{a}$ .", "So, we can suppose, and we are going to make this assumption in the rest of this section, that $a=1$ in the study of $\\frac{\\mathcal {R}(t \\text{Rect}(a,a) + (x,x) ,\\mathbb {Z}^{2})}{t}$ .", "Now we are going to give a simple expression of $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t}$ ." 
], [ "An expression of $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t}$", "The main objective of this subsection is to prove the following proposition: Proposition 14 We have for every $x \\in \\mathbb {R}^{2}$ , for every $t > 0$ , that $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t} = \\frac{(\\lfloor t + x \\rfloor - \\lceil -t + x \\rceil +1)^{2} - 4 t^{2}}{t} \\textit {.", "}$ The proof is quite straightforward.", "Let $x \\in \\mathbb {R}^{2}$ .", "Let $t > 0$ .", "One has that: $N(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})& = \\left( \\sum _{ \\begin{array}{c} (n_{1},n_{2}) \\in \\mathbb {Z}^{2} \\\\ -t+x \\leqslant n_{1} \\leqslant t+x \\\\ -t+x \\leqslant n_{2} \\leqslant t+x \\end{array}} 1 \\right) \\nonumber \\\\& = (\\lfloor t + x \\rfloor - \\lceil -t + x \\rceil +1)^{2} \\textit {.", "}$ Furthermore, one has that: $\\text{Area}(t \\text{Rect}(1,1))= 4 t^{2} \\textit {.", "}$ So, Equation $(\\ref {chap4:eq1002})$ and Equation $(\\ref {chap4:eq1003})$ and the definition of $\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})$ give us Equation $(\\ref {chap4:eq1001})$ .", "With Equation $(\\ref {chap4:eq1001})$ , one has that: $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t} = (\\lfloor t + x \\rfloor - \\lceil -t + x \\rceil +1 - 2 t )\\frac{(\\lfloor t + x \\rfloor - \\lceil -t + x \\rceil +1 + 2 t )}{t} \\textit {.", "}$ Thanks to this last remark, the asymptotical study of $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t}$ with $t$ distributed on $[0,T]$ according to the probability measure $\\frac{1}{T} \\rho (\\frac{t}{T}) dt$ is going to be reduced to the study of a simpler quantity.", "It is the object of the next subsection." 
], [ "Reduction of the study of $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t}$", "The main objective of this subsection is to prove the following proposition: Proposition 15 For every $g \\in C_{c}(\\mathbb {R})$ , $\\int _{t=0}^{T} (g\\left( \\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t} \\right) - g\\left( \\Delta (t,x) \\right)) \\frac{1}{T} \\rho (\\frac{t}{T}) dt \\underset{T \\rightarrow \\infty }{\\rightarrow } 0$ where $\\Delta (t,x)$ is defined by $\\Delta (t,x) = 4 (\\lfloor t + x \\rfloor - \\lceil -t + x \\rceil +1 - 2 t ) \\textit {.", "}$ The proof is quite straightforward and lie on the definitions of $\\lfloor \\cdot \\rfloor $ and of $\\lceil \\cdot \\rceil $ .", "For every $t > 0$ , for every $x \\in \\mathbb {R}$ , $t+x - 1< \\lfloor t + x \\rfloor \\leqslant t+x$ and $-t+x \\leqslant \\lceil -t + x \\rceil < -t+x +1 \\textit {.", "}$ From Equation $(\\ref {chap4:eq1006})$ and Equation $(\\ref {chap4:eq1007})$ , one gets that: $\\frac{4 t - 1}{t}< \\frac{(\\lfloor t + x \\rfloor - \\lceil -t + x \\rceil +1 + 2 t )}{t} \\leqslant \\frac{4 t + 1}{t}$ So, from this last equation, one has that, when $t \\rightarrow \\infty $ , $\\frac{(\\lfloor t + x \\rfloor - \\lceil -t + x \\rceil +1 + 2 t )}{t} \\rightarrow 4 \\textit {.", "}$ So, one has that: $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t} - \\Delta (t,x) \\rightarrow 0$ when $t \\rightarrow \\infty $ because $(\\lfloor t + x \\rfloor - \\lceil -t + x \\rceil +1 - 2 t )$ is bounded.", "Now, the end of this proof is quite straightforward.", "Indeed, one has, for every $0 < \\kappa < \\frac{1}{2}$ : $&| \\int _{t=0}^{T} (g\\left( \\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t} \\right) - g\\left( \\Delta (t,x) \\right)) \\frac{1}{T} \\rho (\\frac{t}{T}) dt | \\leqslant 2 \\Vert g \\Vert _{\\infty } \\int _{0}^{\\kappa } \\rho (t) dt \\nonumber \\\\& + \\int _{\\kappa T}^{T} |g\\left( \\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t} \\right) - g\\left( \\Delta (t,x) \\right)| \\frac{1}{T} \\rho (\\frac{t}{T}) dt$ and, because $g \\in C_{c}(\\mathbb {R})$ , it is a uniformly continuous function and one has, from Equation $(\\ref {chap4:eq1011})$ , that $\\limsup _{T \\rightarrow \\infty } \\int _{\\kappa T}^{T} |g\\left( \\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t} \\right) - g\\left( \\Delta (t,x) \\right)| \\frac{1}{T} \\rho (\\frac{t}{T}) dt = 0$ because, also, of Equation $(\\ref {chap4:eq1010})$ .", "So, Equation $(\\ref {chap4:eq1011})$ and Equation $(\\ref {chap4:eq1012})$ give us that for every $0 < \\kappa \\leqslant \\frac{1}{2}$ : $\\limsup _{T \\rightarrow \\infty } | \\int _{t=0}^{T} (g\\left( \\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t} \\right) - g\\left( \\Delta (t,x) \\right)) \\frac{1}{T} \\rho (\\frac{t}{T}) dt | \\leqslant 2 \\Vert g \\Vert _{\\infty } \\int _{0}^{\\kappa } \\rho (t) dt \\textit {.", "}$ By making $\\kappa $ go to 0, one gets the wanted result.", "Proposition $\\ref {chap4:prop101}$ enables us to reduce the asymptotic study of $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x) ,\\mathbb {Z}^{2})}{t}$ to the asymptotic study of $\\Delta (t,x)$ .", "In the next subsection, we are going to show that $\\Delta (t,x)$ converges in distribution and exhibit the limit distribution and its moments." 
], [ "Convergence in distribution of $\\Delta (t,x)$", "The goal of this subsection is to prove the following proposition: Proposition 16 For all $x \\in \\mathbb {R}$ , when $t \\in [0,T]$ is distributed according to the probability density $\\frac{1}{T} \\rho (T \\cdot )$ on $[0,T]$ then, when $T \\rightarrow \\infty $ , $\\Delta (t,x)$ converges in distribution.", "Furthermore, the limit distribution $\\beta $ has a compact support included in $[-2,4]$ and for every $k \\in \\mathbb {N}$ , one has that $\\int _{x \\in \\mathbb {R}} x^{k} d \\beta (x) = a_{k} $ where $a_{k} = \\frac{4^{k}(1 + (-1)^{k})(y^{k+1} + (1-y)^{k+1})}{2(k+1)}$ with $y = |t_{2,0} - t_{1,0}|$ where $t_{2,0}$ is the first $t \\geqslant 0$ such that $ -t + x \\in \\mathbb {Z}$ and $t_{1,0}$ is the first $t \\geqslant 0$ such that $ t + x \\in \\mathbb {Z}$ .", "We are going to show this proposition in three steps.", "The first time, and the next subsubsection is dedicated to it, consists in calculating the limit, when $T \\rightarrow \\infty $ , of all entire moments of $\\Delta (t,x)$ when $\\rho = \\mathbf {1}_{[0,1]}$ .", "The second time consists in showing that these limits define a unique probability distribution over $\\mathbb {R}$ .", "The last subsection is dedicated to the conclusion of the proof.", "So, basically, we are going to use the method of moments to show Proposition $\\ref {chap4:prop105}$ ." ], [ "Calculation of limits of moments of $\\Delta (t,x)$", "Before stating the main proposition of this section, we need to make some observations and put in place some notations.", "Let us call $t_{1,0} < \\cdots < t_{1,l}$ the different steps $t \\in [0,T]$ such that $t + x \\in \\mathbb {Z}$ .", "In the same way, let us call $t_{2,0} < \\cdots < t_{2,h}$ the different steps $t \\in [0,T]$ such that $-t + x \\in \\mathbb {Z}$ .", "Let us observe that for every $i \\in \\lbrace 0, \\cdots , l-1 \\rbrace $ , $t_{1,i+1} - t_{1,i} = 1$ and that for every $j \\in \\lbrace 0, \\cdots , h-1 \\rbrace $ , $t_{2,j+1} - t_{2,j} = 1$ .", "As a consequence, one has necessarily that $t_{2,0} \\in [t_{1,0},t_{1,1}[$ or $t_{1,0} \\in [t_{2,0},t_{2,1}[$ and $h=l$ or $h=l-1$ or $h=l+1$ .", "Let us set: $y = |t_{1,0} - t_{2,0}| \\textit {.", "}$ Then, the main proposition of this section is the following proposition: Proposition 17 For every $k \\in \\mathbb {N}$ , when $\\rho = \\mathbf {1}_ {[0,1]}$ , one has that: $\\lim _{T \\rightarrow \\infty } \\left( \\mathbb {E}((\\Delta (t,x))^{k}) \\right) = a_{k}$ where $a_{k} = \\frac{4^{k}(1 + (-1)^{k})(y^{k+1} + (1-y)^{k+1})}{2(k+1)} \\textit {.", "}$ The proof consists basically in cutting the interval $[0,T]$ into subintervals where all the quantities that intervene in the calculus can be expressed simply.", "Let $k \\geqslant 0$ and let us suppose that $\\rho = \\mathbf {1}_{[0,1]}$ .", "By symmetry, we can, and we will, also suppose that $t_{2,0} \\in [t_{1,0},t_{1,1}[$ .", "One has that: $-4 \\leqslant \\Delta (t,x) \\leqslant 4$ for all $t \\in \\mathbb {R}$ and $x \\in \\mathbb {R}$ .", "Consequently, we can suppose that $\\mathbb {E}((\\Delta (t,x))^{k}) = \\sum _{i=0}^{h-1} \\int _{t_{1,i}}^{t_{2,i}} \\Delta (t,x)^{k} \\frac{1}{T} dt + \\int _{t_{2,i}}^{t_{1,i+1}} \\Delta (t,x)^{k} \\frac{1}{T} dt$ even if it means neglecting the rest of the integral that is calculated on a union of two intervals of respective lengths at most 2 and so the corresponding term, because of Equation $(\\ref {chap4:eq1110})$ , is a $O(\\frac{1}{T})$ .", "Let $i \\in \\lbrace 0, \\cdots , 
h-1\\rbrace $ .", "Then one has: $\\int _{t_{1,i}}^{t_{2,i}} \\Delta (t,x)^{k} \\frac{1}{T} dt = \\int _{t_{1,i}}^{t_{2,i}} 4^{k} (t_{1,i} + t_{2,i} -2 t)^{k} \\frac{1}{T} dt$ according to the Equation $(\\ref {chap4:eq1111})$ .", "So, one gets that: $\\int _{t_{1,i}}^{t_{2,i}} \\Delta (t,x)^{k} \\frac{1}{T} dt = \\frac{4^{k}}{2 T(k+1)}( y^{k+1} - (-y)^{k+1} )$ because for all $i \\in \\lbrace 0,\\cdots ,h-1 \\rbrace $ , $ y= t_{2,0}-t_{1,0} = t_{2,i} - t_{1,i} $ .", "So, one gets that: $\\int _{t_{1,i}}^{t_{2,i}} \\Delta (t,x)^{k} \\frac{1}{T} dt = \\frac{4^{k}}{2 T(k+1)} y^{k+1} (1 + (-1)^{k}) \\textit {.", "}$ In a similar way, one gets that: $\\int _{t_{2,i}}^{t_{1,i+1}} \\Delta (t,x)^{k} \\frac{1}{T} dt = \\frac{4^{k}}{2 T(k+1)} (1-y)^{k+1} (1 + (-1)^{k})$ because $1-y = t_{1,i+1} - t_{2,i}$ .", "So, with Equation $(\\ref {chap4:eq1016})$ , Equation $(\\ref {chap4:eq1019})$ and Equation $(\\ref {chap4:eq1020})$ , one gets that: $\\mathbb {E}((\\Delta (t,x))^{k}) = \\sum _{i=0}^{h-1} \\frac{4^{k}}{2 T(k+1)}(y^{k+1}+(1-y)^{k+1}) (1 + (-1)^{k}) \\textit {.", "}$ By using the fact that $ \\lim _{T \\rightarrow \\infty } \\frac{h}{T} = 1$ , one gets from Equation $(\\ref {chap4:eq1021})$ that: $\\mathbb {E}((\\Delta (t,x))^{k}) = \\frac{4^{k} h}{2 T(k+1)}(y^{k+1}+(1-y)^{k+1}) (1 + (-1)^{k}) \\underset{T \\rightarrow \\infty }{\\rightarrow } \\frac{4^{k} }{2(k+1)}(y^{k+1}+(1-y)^{k+1}) (1 + (-1)^{k}) \\textit {.", "}$ In the next subsubsection, we are going to see that there exists a unique probability distribution over $\\mathbb {R}$ such that the entire moments are given by the $a_{k}$ ." ], [ "Existence and uniqueness of the distribution whose moments are given by the $a_{k}$", "The main objective of this subsubsection is to prove the following proposition: Proposition 18 There exists a unique probability distribution $\\beta $ over $\\mathbb {R}$ such that for all $k \\in \\mathbb {N}$ , $\\int _{x \\in \\mathbb {R}} x^{k} d\\beta (x) = a_{k} \\textit {.}", "$ To prove this proposition, we need to recall the following theorem.", "Theorem 9 Let $(\\alpha _{k})_{k \\in \\mathbb {N}}$ be a sequence of real numbers such that the power series $\\sum _{k \\geqslant 0} \\frac{\\alpha _{k}}{k!}", "z^{k}$ has a positive radius of convergence.", "Then, there exists at most one probability measure $\\beta $ over $\\mathbb {R}$ such that for all $ k\\in \\mathbb {N}$ , $ \\alpha _{k} = \\int _{x \\in \\mathbb {R}} x^{k} d\\beta (x) \\textit {.", "}$ We can now prove the proposition $\\ref {chap4:prop103}$ .", "[Proof of Proposition $\\ref {chap4:prop103}$ ] One has: $\\frac{\\frac{a_{k+1}}{(k+1)!}}{\\frac{a_{k}}{k!}}", "\\underset{k \\rightarrow \\infty }{ \\rightarrow } 0$ according to Equation ($\\ref {chap4:eq1023}$ ).", "So, the ratio test gives us that $\\sum _{k \\geqslant 0} \\frac{a_{k}}{k!}", "z^{k}$ has a radius of convergence that is infinite.", "As a consequence, Theorem $\\ref {chap4:thm7}$ applies here and gives us that there is at most one probability measure $\\beta $ over $\\mathbb {R}$ whose entire moments are given by the $(a_{k})_{k \\in \\mathbb {N}}$ .", "Furthermore, $-4 \\leqslant \\Delta (t,x) \\leqslant 4$ for all $t \\geqslant 0$ and for all $x \\in \\mathbb {R}$ .", "So, we have that for every $x \\in \\mathbb {R}$ , $(\\mathbb {P}_{\\Delta (\\cdot ,x)})_{T > 0}$ is tight (recall that the distribution of $t$ depends on $T$ ), which means here that for every $\\epsilon > 0$ , there exists a compact set $W_{\\epsilon }$ of $\\mathbb {R}$ such that for every $T >0$ , 
$\\mathbb {P}_{\\Delta (\\cdot ,x)}\\big (W_{\\epsilon } \\big ) \\geqslant 1 - \\epsilon \\textit {.", "}$ As a consequence, since the moments converge (Proposition $\\ref {chap4:prop102}$ ) and $\\Delta $ is uniformly bounded, Prokhorov's theorem gives us the existence of the probability measure $\\beta $ whose integer moments are given by the $(a_{k})_{k \\in \\mathbb {N}}$ ." ], [ "Conclusion of the proof of Proposition $\\ref {chap4:prop105}$", "To conclude the proof of Proposition $\\ref {chap4:prop105}$ , we need to recall one theorem and to prove one lemma.", "Theorem 10 Let $X$ be a real random variable characterized by its integer moments and $(X_{n})_{n \\in \\mathbb {N}}$ be a sequence of real random variables such that for all $k \\in \\mathbb {N}$ , $E(X_{n}^{k}) \\underset{n \\rightarrow \\infty }{\\rightarrow } E(X^{k}) \\textit {.", "}$ Then the sequence $(X_{n})$ converges in distribution towards $X$ .", "The following lemma is in fact taken from $\\cite {bleher1992distribution}$ (although it was not formulated as such there; see the proof of Theorem 4.3).", "We are going to give the proof of this lemma here for completeness.", "Lemma 12 Let $F$ be a real measurable function on $\\mathbb {R}_{+}$ .", "Assume that there exists a probability measure $\\mu $ over $\\mathbb {R}$ such that for every $g \\in C_{b}(\\mathbb {R})$ , $\\lim _{T \\rightarrow \\infty } \\frac{1}{T} \\int _{0}^{T} g(F(t)) dt = \\int _{x \\in \\mathbb {R}} g(x) d\\mu (x) \\textit {.", "}$ Then, for every probability density $\\rho $ over $[0,1]$ , for every $g \\in C_{b}(\\mathbb {R})$ , one has $\\lim _{T \\rightarrow \\infty } \\frac{1}{T} \\int _{0}^{T} g(F(t)) \\rho (\\frac{t}{T}) dt = \\int _{x \\in \\mathbb {R}} g(x) d\\mu (x) \\textit {.", "}$ Assume first that $\\rho $ is a step-wise function consisting of a finite number of steps.", "By linearity of Equation $(\\ref {chap4:eq1025})$ , it is enough, in this case, to prove ($\\ref {chap4:eq1025}$ ) for the function $ \\rho (x) = \\frac{1}{b-a} \\mathbf {1}_{[a,b]}(x) $ where $0 \\leqslant a < b \\leqslant 1$ , $\\textit {id est}$ a one-step function.", "Let $g \\in C_{b}(\\mathbb {R})$ .", "Because of the assumption of Lemma $\\ref {chap4:lemme12}$ , one has: $\\frac{1}{T} \\int _{0}^{T} g(F(t)) \\rho (\\frac{t}{T}) dt & = \\frac{1}{T(b-a)} \\int _{aT}^{bT} g(F(t)) dt \\nonumber \\\\& = \\frac{1}{b-a}( b \\frac{1}{b T} \\int _{0}^{bT} g(F(t)) dt - a \\frac{1}{a T} \\int _{0}^{aT} g(F(t)) dt ) \\nonumber \\\\& \\underset{T \\rightarrow \\infty }{ \\rightarrow } \\frac{1}{b-a} (b \\int _{x \\in \\mathbb {R}} g(x) d\\mu (x) - a \\int _{x \\in \\mathbb {R}} g(x) d\\mu (x)) = \\int _{x \\in \\mathbb {R}} g(x) d\\mu (x)$ So we have Equation $(\\ref {chap4:eq1025})$ in this case.", "The general case follows now by using the fact that for every probability density $\\rho $ over $[0,1]$ , for every $\\epsilon > 0$ , there exists $\\rho _{\\epsilon }$ , a step-wise function consisting of a finite number of steps, such that $\\int _{x \\in [0,1]} |\\rho (x) - \\rho _{\\epsilon }(x)| dx \\leqslant \\epsilon \\textit {.}", "$ We can now prove Proposition $\\ref {chap4:prop105}$ .", "[Proof of Proposition $\\ref {chap4:prop105}$ ] Thanks to Lemma $\\ref {chap4:lemme12}$ , we only need to prove Proposition $\\ref {chap4:prop105}$ in the case where $\\rho = \\mathbf {1}_{[0,1]}$ .", "Furthermore, Proposition $\\ref {chap4:prop102}$ , Proposition $\\ref {chap4:prop103}$ and Theorem $\\ref {chap4:thm8}$ give us that $\\Delta (\\cdot ,x)$ converges in distribution when $T \\rightarrow \\infty $ and that the moments of the limit distribution are given by the $a_{k}$ .",
"Finally, we recall that $-4 \\leqslant \\Delta (t,x) \\leqslant 4$ for all $t \\geqslant 0$ , for all $x \\in \\mathbb {R}$ and so $\\beta $ has its support compact and included in $[-4,4]$ ." ], [ "Conclusion", "We can now prove Theorem $\\ref {chap4:thm3}$ .", "[Proof of Theorem $\\ref {chap4:thm3}$ ] Thanks to Lemma $\\ref {chap4:lemme12}$ , we only need to prove Theorem $\\ref {chap4:thm3}$ in the case where $\\rho = \\mathbf {1}_{[0,1]}$ and $a=1$ .", "Proposition $\\ref {chap4:prop101}$ gives us that, if $\\Delta (\\cdot ,x)$ converges in distribution, then it is also the case of $\\frac{\\mathcal {R}(t \\text{Rect}(1,1) + (x,x), \\mathbb {Z}^{2})}{t}$ and the limit distribution is the same.", "Finally, Proposition $\\ref {chap4:prop105}$ gives the wanted result." ] ]
2210.07847
[ [ "Jordan-Wigner fermionization of quantum spin systems on arbitrary 2D\n lattices: A mutual Chern-Simons approach" ], [ "Abstract A variety of analytical approaches have been developed for the study of quantum spin systems in two dimensions, the notable ones being spin-waves, slave boson/fermion parton constructions, and for lattices with one-to-one local correspondence of faces and vertices, the 2D Jordan-Wigner (JW) fermionization.", "Field-theoretically, JW fermionization is implemented through Chern-Simons (CS) flux attachment.", "For a correct fermionization of lattice quantum spin-$1/2$ magnets, it is necessary that the fermions obey mutual bosonic (anyonic) statistics under exchange - this is not possible to implement on arbitrary 2D lattices if fermionic matter couples only to the lattice gauge fields.", "Enlarging the gauge degrees of freedom to include the dual lattice allows the construction of consistent mutual Chern-Simons field theories.", "Here we propose a mutual CS theory where the microscopic (spin) degrees of freedom are represented as lattice fermionic matter additionally coupled to specific combinations of dual lattice gauge fields that depend on the local geometry.", "We illustrate the use of this method for understanding the properties of a honeycomb Kitaev model subjected to a strong Zeeman field in the $z$-direction." ], [ "Introduction", "The Jordan-Wigner approach is very attractive for studying quantum spin-$1/2$ systems in two dimensions.", "Unlike the Holstein-Primakoff [1] (or interacting spin-wave) approaches, it does not generate highly nonlinear many-body interactions.", "Compared to parton-based approaches, it does not require enlarging the Hilbert space which then must be projected to the physical space [2], [3], [4], [5].", "Moreover, the JW fermions naturally interpolate between magnons and spinons, as is evident in the study of a simpler 1D system - the transverse field Ising model - and readily describe fractionalized quasiparticles in different phases [6], [7].", "JW fermionization, from the outset, gives a topological (CS) field theory [8], [9], [10], where CS flux attachment generates the topological interaction of the JW fermions.", "Chern-Simons field theories provide a natural language [11] for describing topological phases and their emergent excitations [12], [13], [14], [15].", "CS flux attachment is easy to implement on lattices where a local association of every lattice site (vertex) with a unique face [16] exists.", "In arbitrary 2D lattices $L$ where local face-vertex correspondence may not always be there, consistent CS field theories may still be constructed by including gauge fields on dual lattice ($L^{*}$ ) sites and links.", "The result is a mutual Chern-Simons theory [17], [12], [18], where every lattice site is locally associated with the (unique) face dual to the site.", "A complicating factor is that the mutual CS theory describes the mutual anyonic statistics of particles respectively living on $L$ and $L^{*}.$ For the JW fermions to describe quantum spins, we require an implementation of anyonic (bosonic) statistics for the exchange of fermionic matter on $L.$ Here we propose a mutual CS theory where the microscopic spin degrees of freedom on $L$ are represented as lattice fermions living on $L$ attached to a certain local combination of dual lattice gauge fields living on $L^{*}$ such that the desired anyonic statistics is realized.", "The Chern-Simons formulation for spin lattices with face-vertex correspondence 
nevertheless suffers from some limitations.", "Even in the absence of, say, an external magnetic field, these CS theories are not parity and time-reversal invariant, unlike the original microscopic models [19].", "Besides, quantizing the field theory requires a careful handling of the lattice analogue of the Levi-Civita symbol [16].", "Likewise, there is ambiguity in the commutation relation of two Wilson loops, one of which ends on the path of the other, unless one introduces a dual curve [16].", "Mutual CS theories do not suffer from these shortcomings.", "They can even be formulated for arbitrary 2D lattices.", "For the special case of lattices with face-vertex correspondence, it was shown in Ref.", "[19] that anyons may be represented in the mutual CS theory as extended (dumb-bell) fermionic fields whose ends live respectively on $L$ and nearest dual lattice $L^{*}$ sites.", "The continuum limit of this lattice theory describes point-like anyons.", "An alternate proposal made recently [20], [21] involves starting with a mutual CS theory but imposing an additional constraint on the lattice (link) gauge fields that they should be equal to the average of gauge fields on the nearby dual links.", "The idea is to avoid the problems in the formulation of Ref.", "[16], although when applied to hexagonal or triangular lattices, the formalism gives unphysical fractional values for the linking numbers of Wilson loops.", "In contrast to field-theory approaches based on Chern-Simons flux attachment, Hamiltonian approaches using the 2D Jordan-Wigner transformation have also been used for lattices including those lacking face-vertex correspondence [22], [23]; however long-range interactions are generated in this process, and it is also unclear if large gauge fluctuations necessary for charge quantization are accounted for.", "As an illustration of our proposed technique, we study the honeycomb Kitaev model in a strong Zeeman field ($h$ ) in the $z$ -direction.", "At low fields, the model is known to describe a deconfined phase with long-range topological order characterized by a four-fold degenerate ground state on the torus, and fractionalized excitations in the form of free Majorana fermions and gapped $Z_2$ visons.", "For the ferromagnetic sign of the Kitaev interaction, the topological order is quite fragile, vanishing at Zeeman fields a few per cent of the Kitaev interaction [24].", "For antiferromagnetic Kitaev interactions, topological order persists to larger Zeeman fields, around a fifth of the Kitaev interaction.", "Recently, there is great interest in understanding if fractionalization and other signatures of topological order such as a half-quantized thermal Hall conductivity can re-emerge at sufficiently high fields in Kitaev materials whose ground state otherwise has long-range magnetic order [25], [26].", "Even if topological order may be strictly speaking not present at such fields, it is important to understand how much of the properties could be understood from the point of view of gauge field fluctuations coupling to fractionalized matter.", "At very high fields, the ground state is a fully polarized paramagnet, and it would normally make sense to approach this regime using the Holstein-Primakoff transformation (spin-wave theory).", "However as the field is decreased, it is known that interactions of the spin waves become rapidly very important, and spin waves do not provide a good description at lower fields where topological order is about to get restored.", "This encourages us 
to take the CS approach and check its advantages and limitations.", "We obtain an effective mutual Maxwell-Chern-Simons field theory coupled to a superfluid order parameter field - i.e.", "a gauged superfluid.", "We show how the parameters in this theory can be systematically obtained from the underlying microscopic ones.", "Starting from the high-field side, which in our formalism corresponds to a confined phase, we progressively decrease the field, identifying the onset of local superfluidity and eventually the establishment of a global superfluid phase through the suppression of vison fluctuations.", "This represents the transition to the topologically ordered phase.", "Our perturbative approach in inverse of the field strength prevents us from accessing the low-field Kitaev dominated regime.", "Near to the topological transition, we also study the possibility of vison dispersion [27] and make a comparison with understanding obtained from perturbative studies from the low-field side [28].", "The rest of the paper is organized as follows.", "In Section we introduce the lattice version of mutual CS gauge theory for lattices lacking local face-vertex correspondence, focusing on the example of a honeycomb lattice.", "We propose in Sec.", "a way of realizing the required anyonic exchange statistics of the JW fermions by attaching a certain combination of dual lattice gauge fields to fermionic matter on the lattice sites.", "Section illustrates an application of this mutual CS formulation for the honeycomb Kitaev model in a finite Zeeman field in the $z$ -direction.", "Here we describe phases of the Kitaev model in the lattice gauge theory language, starting from the high field limit.", "We develop an understanding of the evolution of parameters in the effective field theory in terms of the original microscopic parameters.", "We conclude with a summary of our findings and a discussion in Sec.", "." 
], [ "Mutual CS Gauge Theory on Lattices Lacking Face-Vertex Correspondence", "Consistent formulation of CS theories on the lattice requires that every vertex is attached to the flux through a unique plaquette, which is evidently possible when there is a local face-vertex correspondence [16].", "In such cases, the Euclidean time CS action has the form $S=-\\frac{i \\kappa }{2\\pi }\\int d\\tau [A_{v}M_{v,f}\\phi _{f}-\\cfrac{1}{2} A_{e}K_{e,e^{\\prime }} \\dot{A_{e^{\\prime }}}].$ Here the repeated indices are summed over.", "The indices $v,f,e$ run over all vertices, faces and edges respectively.", "$A_{v}$ and $A_{e}$ are respectively the temporal and spatial components of the gauge fields, with the former associated with the sites and the latter with the links.", "$\\phi _{f}$ is the flux through the face $f$ associated with the vertex $v$ via face-vertex correspondence.", "The detailed description of the matrices $M_{v,f}$ and $K_{e,e^{\\prime }}$ are not important for the purposes of this paper and can be found in Ref.", "[16].", "$M_{v,f}$ dictates the flux attachment and $K_{e,e^{\\prime }}$ is the lattice analog of the Levi-Civita symbol.", "The canonical commutation relation is $[A_{e},A_{e^{\\prime }}]=-\\frac{2\\pi i}{\\kappa } K^{-1}_{e,e^{\\prime }}.$ The $K_{e,e^{\\prime }}$ matrix in Eq.", "(REF ) is in general quite complicated and involves both forward and backward (spatial) differences [29].", "It is also not very local in the sense that $e$ and $e^{\\prime }$ merely need to be associated with the same face In case of lattices without face-vertex correspondence this $K_{e,e^{\\prime }}$ matrix is singular and the CS theory is no longer consistent [16].", "These difficulties are not due to some fundamental obstruction to defining lattice CS theories on arbitrary cellulations, since it should be possible to recover the continuum CS theory as a limiting case of any lattice.", "For quantum spin-$1/2$ lattice systems that we are ultimately interested in, we note that the Hamiltonian equivalent of CS theory - the 2D Jordan-Wigner transformations - do not have any requirement that the lattice must have local face-vertex correspondence.", "Such an approach has been taken, for example, for the XY model on the honeycomb lattice [23].", "Figure: Honeycomb lattice (LL) and its dual triangular lattice (L * L^{*}).", "The faces of the triangular lattice are dual to the honeycomb lattice vertices, and likewise the hexagonal plaquettes are dual to the vertices of the triangular lattice.", "The dual of a link on LL is the link on L * L^{*} crossing perpendicularly the relevant link on L.L.Later in this paper, we will study as an example the honeycomb Kitaev model whose lattice evidently does not satisfy local face-vertex correspondence.", "The dual lattice is triangular, so the combined system has an equal number of vertices and faces, and moreover has local face-vertex correspondence.", "It is easily seen that such local face vertex correspondence exists for arbitrary polygonal cellulations of 2D space.", "Figure REF shows a honeycomb lattice (green) and its dual triangular (blue) lattice.", "We now describe a mutual CS theory consisting of gauge fields on both honeycomb $L$ and triangular $L^{*}$ .", "Denote the temporal and spatial components of the gauge field on $L$ by $A_{v}$ and $A_{e}$ respectively, and on $L^{*}$ by $a^{*}_{v^{*}}$ and $a^{*}_{e^{*}}$ respectively.", "Analogously to Eq.", "(REF ), we associate the scalar potential at any vertex (whether on $L$ or $L^{*}$ ) with 
the flux through the dual plaquette corresponding to the vertex.", "The $U(1)$ gauge-invariant Lagrangian [16] satisfying the above flux attachment rules is given by $\\mathcal {L}_{\\text{CS}} & =\\frac{\\kappa }{4\\pi }[\\xi ^{*}_{f^{*}e^{*}} a^{*}_{e^{*}}A_{v}\\delta _{f^{*}v}+D^{*}_{v^{*}e^{*}}a^{*}_{v^{*}}A_{e} \\delta _{ee^{*}}-\\partial _{0}a^{*}_{e^{*}}A_{e}\\delta _{ee^{*}}] \\nonumber \\\\& -\\frac{\\kappa }{4\\pi }[\\xi _{fe} A_{e} a^{*}_{v^{*}}\\delta _{fv^{*}}+D_{ve}A_{v}a^{*}_{e^{*}} \\delta _{ee^{*}}-\\partial _{0}A_{e}a^{*}_{e^{*}}\\delta _{ee^{*}}].$ The $\\xi _{fe}$ and $D_{ve}$ are respectively the lattice analogs of the curl and gradient operations [16].", "The $\\delta $ -functions are defined as follows: $\\delta _{e,e^{*}} = 1$ if $e$ and $e^{*}$ are links dual to each other, and zero otherwise, and similarly for $\\delta _{fv^{*}}$ etc.", "The canonical equal time commutation relations are $[A_{e},a^{*}_{e^{*}}] & = i \\frac{2\\pi }{\\kappa } \\delta _{e,e^{*}} \\times {\\rm sgn}(\\vec{n_{e}} \\times \\vec{n_{e^{*}}}),\\nonumber \\\\[A_{e}, A_{e^{\\prime }}] & = [a^{*}_{e^{*}}, a^{*}_{e^{\\prime *}}] = 0.$ Unlike the earlier formulation, there are no difficulties with the lattice version of the Levi-Civita term since $K_{e,e^{\\prime *}}=\\delta _{e,e^{\\prime *}} \\times {\\rm sgn}(\\vec{n_{e}} \\times \\vec{n_{e^{*}}}),$ and $A_{e}$ , $a^{*}_{e^{*}}$ are perpendicular to each other, just as in the continuum case.", "Furthermore, it can be shown that such a mutual Chern-Simons gauge theory has parity and time reversal symmetry [30].", "Commutation relations of chains follow from the canonical commutation relations in Eq.", "(REF ) and the Baker-Campbell-Hausdorff formula: $\\left[\\int _{\\mathcal {C}} A_e, \\int _{\\mathcal {C}^{*}}a^{*}_{e^{*}}\\right] = i\\frac{2\\pi }{\\kappa }\\nu [\\mathcal {C},\\mathcal {C}^{*}],$ where $\\nu [\\mathcal {C},\\mathcal {C}^{*}]$ is the difference of right-handed and left-handed intersections of the chains $\\mathcal {C}$ and $\\mathcal {C}^{*}.$ Correspondingly, the relation between the respective Wilson lines will be $W_{\\mathcal {C}}W_{\\mathcal {C}^{*}} = e^{-i\\frac{2\\pi }{\\kappa }\\nu [\\mathcal {C},\\mathcal {C}^{*}]}W_{\\mathcal {C}^{*}}W_{\\mathcal {C}}.$ If source terms coupling the gauge fields to charge and current are now introduced, varying the action with respect to the temporal components of the gauge fields gives us the flux attachment constraints for physical states: $\\xi ^{*}_{f^{*}e^{*}} a^{*}_{e^{*}} \\equiv \\Phi _{f^{*}} & = \\frac{4\\pi }{\\kappa }Q_{v}, \\nonumber \\\\\\xi _{fe} A_{e} \\equiv \\Phi _{f} & = \\frac{4\\pi }{\\kappa }Q_{v^{*}}.$ Here $\\Phi _{f(f^{*})}$ is the flux associated with the face $f(f^{*})$ and $Q_{{v}(v^{*})}$ is the vertex charge in $L(L^{*})$ .", "If the system is subjected to toroidal boundary conditions, the spatial manifold has two holes that can be enclosed by non-contractible loops.", "The only nontrivial commutators are between pairs of (dual) non-contractible loops, drawn along two independent polar directions of the torus.", "In particular, for $\\kappa =2,$ the zero energy state can be labelled by the eigenvalues ($W=\\pm 1$ ), one for each independent non-contractible loop along the two polar directions, i.e., this state has a nontrivial four-fold degeneracy associated with these nonlocal string operators."
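For $\kappa = 2$ , the loop algebra above is small enough to check in a few lines; the sketch below (our illustration, with a representation choice that is not part of the original construction) realises the two anticommuting pairs of non-contractible Wilson loops with Pauli matrices. A lattice loop along one polar direction pairs with the dual loop along the other, since those intersect once, and the counting recovers the four-fold degeneracy:

```python
# Minimal check of the Wilson loop algebra for kappa = 2: a non-contractible
# loop W on L along one polar direction of the torus intersects the dual
# non-contractible loop W* along the other direction once (nu = 1), so
# W W* = exp(-i 2pi/2) W* W = -W* W. Pauli matrices realise this algebra.
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

Wx  = np.kron(X, I2)   # lattice loop along direction 1
Wsy = np.kron(Z, I2)   # dual loop along direction 2 (intersects Wx once)
Wy  = np.kron(I2, X)   # lattice loop along direction 2
Wsx = np.kron(I2, Z)   # dual loop along direction 1 (intersects Wy once)

assert np.allclose(Wx @ Wsy, -Wsy @ Wx)   # nu = 1: anticommuting pair
assert np.allclose(Wy @ Wsx, -Wsx @ Wy)   # nu = 1: anticommuting pair
assert np.allclose(Wx @ Wsx, Wsx @ Wx)    # nu = 0: commute

# Two mutually anticommuting pairs force a representation of dimension at
# least 2 x 2 = 4: the four-fold ground state degeneracy on the torus.
print("dimension of the joint representation:", Wx.shape[0])
```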
], [ "Quantum spin-1/2 particles on the honeycomb and triangular lattice", "The 2D Jordan-Wigner (JW) transformation expresses the spin raising (lowering) operators in terms of fermion creation (annihilation) operators attached to an infinite string, essentially a disorder operator, that implements bosonic commutation relations between spins at different spatial sites by ensuring odd values of the paths' linking number $\\nu [\\mathcal {C},\\mathcal {C}^{*}]$ when they are exchanged using arbitrary paths.", "The 2D disorder operator, unlike its 1D counterpart, is not unique [8], [9], [10], [31], and the only purpose is to implement the spin statistics.", "However, it is readily constructed for arbitrary polygonal cellulations of the 2D space.", "For a path integral (CS) formulation, one needs suitable disorder operators defined in terms of the gauge fields on the links, which are attached to the fermion (matter) fields, and it is also very desirable for computational simplicity that the Hamiltonian involves only local combinations of the link fields.", "For example, in the widely studied $XY$ spin models on lattices with face-vertex correspondence (e.g.", "square or Kagome), the gauge fields coupling to the hopping fermions are simply the Wilson lines associated with the corresponding links [32], [33].", "Consider now spin-$1/2$ models on a lattice lacking face vertex correspondence - we take the honeycomb lattice first for concreteness.", "For simplicity, we are interested in models with only local couplings of fermions and gauge fields.", "One way to ensure this is by restricting ourselves to local Hamiltonians and preserving fermion number parity.", "In Fig.", "REF , we show a schematic of a fermion bilinear sharing a link $(i_A, j_B)$ and attached to a local combination of lattice gauge fields, $\\Psi _{i_A}^{\\dagger } \\Psi _{j_B} e^{iB_e} e^{iA_e},$ where $B_e$ is a suitable local combination of the dual lattice gauge fields that we want to obtain.", "Such a term arises, for example, in two-body interaction of spins sharing a link.", "The choice of sign of the link gauge fields is chosen such that the holonomy $e^{iA_e}$ is associated for a hopping from site $j_B$ to site $i_A.$ We also need to give an orientation to our (directed) links - for our hexagonal lattice, the links are oriented from the $A$ to $B$ sublattice.", "The dual triangular lattice is not bipartite, but here the orientation of the dual link is chosen such that the sign of $(\\vec{n_{e}} \\times \\vec{n_{e^{*}}})$ is positive.", "Since the process in Eq.", "(REF ) conserves fermion number, $A_e$ can be in general $U(1).$ It is important to note that since we have a mutual CS theory, the link fields $A_e$ are not responsible for anyonic (bosonic) statistics of exchange of spins on different lattice sites, and such statistics comes entirely from attaching our lattice fermion fields to the dual lattice gauge fields.", "The choice of $B_e$ is not unique.", "We propose (see Fig.", "REF ) $B_{e} & = N (a^{*}_{e1^{*}}+a^{*}_{e2^{*}}+a^{*}_{e3^{*}}+a^{*}_{e4^{*}}),$ to be the sum of the four fields in the rhombus enclosing the link up to some normalisation constant $N.$ For our lattice, we argue that the normalization $N=\\frac{\\kappa }{4}$ ensures the desired statistics.", "Figure: Anyonic (bosonic) exchange statistics of JW fermions on LL sharing link e=i A →j B e=i_{A}\\rightarrow j_{B} is implemented in the mutual CS formulation (see text for details) by attaching the fermionic bilinear to a holonomy e iB e ,e^{iB_e}, 
where $B_e$ is proportional to the sum of the four dual link potentials on $L^{*}$ .", "Here $i,j$ are unit cells of the honeycomb lattice.", "Figure: Schematic of lattice and dual loops traversed upon taking a spin-$1/2$ particle around two different loops on the honeycomb lattice, denoted by the boundaries of the shaded regions.", "The arrows represent the number of times the corresponding edge is traversed.", "Note that each dual loop makes two windings, which in turn determines the normalization of $B_e$ .", "Let us first take a spin-$1/2$ particle around the elementary hexagonal plaquette on the honeycomb lattice (see Fig.", "REF a), which results in one $2\\pi $ winding of the hexagonal void, and a double winding of the dual lattice links enclosing the vertices of the hexagon.", "On the dual lattice path, the accumulated phase is $2 N \\xi ^{*}_{f^{*}e^{*}} a^{*}_{e^{*}} =\\frac{8 N \\pi }{\\kappa } \\sum _{v} Q_{v}.$ Since there are no lattice charges enclosed by the dual lattice path in this case, the phase is zero, and $N$ cannot be fixed here.", "Now let us take the spin-$1/2$ particle around the simplest (3-hexagon) path that encloses a vertex (see Fig.", "REF b).", "Here the phase accumulated by the dual curve is $3 \\times 2 N \\xi ^{*}_{f^{*}e^{*}} a^{*}_{e^{*}} = \\frac{24 N \\pi Q_{v}}{\\kappa },$ with $Q_v=1.$ Correct spin statistics requires an odd multiple of $2\\pi $ accumulated by the dual loops; the simplest choice at first sight appears to be $N=\\kappa /12.$ However, with such a choice, it is known [20] that intersecting Wilson loops have unphysical fractional linking numbers of $1/3.$ This is also evident from the dual path in Fig.", "REF a, where $\\sum _v Q_v = 2$ (which is equivalent to $\\sum _v Q_v=0$ ) results in a fractional phase of $2\\pi /3$ per particle, instead of $2\\pi .$ The problem is that the dual curve in the left figure has one $4\\pi $ winding, while the one on the right has three $4\\pi $ windings.", "Thus the correct values for $N$ are $N=\\frac{(2m+1)}{4}\\kappa ,$ where $m$ is an integer.", "Without loss of generality, we choose $m=0,$ i.e., $N=\\kappa /4.$ The vortex charges $Q_v^{*}$ do not depend on the normalization.", "Note that the lattice Wilson loops square to unity; accordingly, the vortex charges are integer multiples of $\\kappa /4.$ Figure: Arrangement of dual lattice gauge fields for a triangular lattice for implementing bosonic exchange statistics for the JW fermions sharing the link $e=r_{2}\\rightarrow r_{1}$ .", "This lattice also lacks one-to-one face-vertex correspondence.", "Now we use the same procedure for the triangular lattice, where $A_{e}$ and $a^*_{e^*}$ live on the direct triangular lattice edges and the dual honeycomb lattice edges respectively.", "Here the choice of $B_{e}$ is (see Fig.", "REF ) $B_{e} = N (a^{*}_{e^{*}_a}+a^{*}_{e^{*}_b}+a^{*}_{e^{*}_c}+a^{*}_{e^{*}_d}).$ Figure: Schematic of lattice (blue) and dual (green) loops traversed upon taking a spin-$1/2$ particle around two different loops (boundary of the shaded regions) on the triangular lattice.", "Taking a spin-$\\frac{1}{2}$ particle around a lattice loop in Fig.", "REF c or in Fig.", "REF d, in both cases the phase accumulated by the dual curve is $2 N \\xi ^{*}_{f^{*}e^{*}} a^{*}_{e^{*}} =\\frac{8 N \\pi }{\\kappa } \\sum _{v} Q_{v},$ with $Q_{v}=1$ .", "This fixes the normalisation $N$ for the triangular lattice to be $N=\\frac{(2m+1)}{4}\\kappa ,$ the same as for the honeycomb lattice.", "Note that for the case of the honeycomb lattice the above choice of $B_{e}$
is not unique.", "If we take all eight contributing links (instead of only the four links making the rhombus in Fig.", "(REF )) that connect the two ends of the dual link ($e^{*}$ ) of any given link ($e$ ), the normalisation can be shown to be $\\frac{\\kappa }{2}$ .", "For a lattice such as Kagome, we found that the choice is unique.", "As we have discussed above, the role of the $B_e$ fields on the dual links is to implement bosonic statistics for the exchange of spins.", "From Eq.", "(REF ) it is clear that the dual fluxes can only take values 0 or $4\\pi $ since $Q_v$ can take values 0 or $1.$ We choose these dual gauge fields to satisfy $U(1)$ symmetry, although other choices such as $Z_2$ can also be made.", "We now discuss gauging another kind of fermion bilinear, corresponding to a link Cooper pair $(\\Psi _{i_A}^{\\dagger } \\Psi _{j_B}^{\\dagger })$ , that also appears in numerous spin models such as Ising or Kitaev, where $S_{z}$ is not a conserved quantity.", "This process creates a fermion pair sharing a link.", "Such terms couple to a pair of lattice Wilson lines that terminate at the end points of the links, i.e., $\\Psi _{i_A}^{\\dagger } \\Psi _{j_B}^{\\dagger }W_{C_{i_A}}W_{C_{j_{B}}} e^{-iB_e}.$ The Wilson line $W_{C_{i_{A}}}= \\exp [-i\\sum _{e^{\\prime }\\in C_{i_{A}}} A_{e^{\\prime }}]$ on the lattice transports a fermion from the boundary at infinity to site $i_A$ along the string $C_{i_A}$ , and similarly for $W_{C_{j_{B}}}.$ For periodic boundary conditions, the lines emanate from a fermion pair annihilation on a link, and end at the pair creation link.", "These gauge fields are $U(1)$ in general.", "If the lattice fermions are strongly gapped, such as when there is a large Zeeman field, the effective gauge theory obtained after integrating out the fermions will not involve long strings, in which case we will get a local effective $U(1)$ gauge theory.", "If, however, the $A_e$ are $Z_2,$ the product $ W_{C_{i_A}}W_{C_{j_{B}}}$ further reduces to the holonomy $e^{-iA_e}$ on the link, $W_{C_{i_A}}W_{C_{j_{B}}} \\equiv e^{-iA_e},\\, A_e \\in Z_2.$" ], [ "Application to Kitaev model in a large Zeeman field", "Having described the construction of our CS theory (with fermionic matter) for quantum spin systems on lattices that lack face-vertex correspondence, we apply our ideas to the ferromagnetic Kitaev model on the honeycomb lattice subjected to a large magnetic field along the $z$ -direction, such that the Kitaev interactions can be regarded as a perturbation.", "The Hamiltonian is given by [7] $\\mathcal {H} & = \\mathcal {H}_{0} + \\mathcal {H}_{1} \\nonumber \\\\& = h\\sum _{p}\\sigma ^{z}_{p} - \\sum _{\\langle pq\\rangle \\in \\gamma -\\rm {links}} J_{\\gamma }\\sigma _{p}^{\\gamma }\\sigma _{q}^{\\gamma },$ where $\\mathcal {H}_{0,1}$ respectively refer to the Zeeman and Kitaev terms, the $\\sigma $ are Pauli matrices, $p,q$ are the vertices associated with the corresponding link $\\gamma ,$ with $\\gamma = x,\\,y,\\,\\mbox{or }z,$ and $h$ is the strength of the applied field.", "The ground state of the unperturbed (purely Zeeman) model is trivially a fully polarized paramagnet, and we set $\\kappa =1.$ As the Zeeman field is progressively decreased, the system ultimately transitions into a deconfined state with fractionalized excitations and long-range topological order.", "The fact that relatively small field values ($h/J < 1$ ) suffice to degrade the topological order motivates us to approach the problem from the high field side.", "Our approach is also an
alternative to the spin-wave approximation often employed for studies of the Kitaev model at high fields [1], [34], [35].", "We fermionize our model using the dual CS formalism described in Sec..", "Specifically, we will obtain an effective field theory in the limit of large Zeeman field, $h \\gg J_{\\gamma }$ treating the Kitaev interactions as a perturbation.", "In this high field limit, the effective field theory is $U(1)$ gauge-invariant.", "In the opposite limit of large Kitaev interactions, the fermion number is not conserved but the fermion number parity is - implying that $U(1)$ gauge symmetry will break down to $Z_2.$ After fermionization, the Kitaev Hamiltonian takes the form $\\mathcal {H}=\\mathcal {H}_{x}+\\mathcal {H}_{y}+\\mathcal {H}_{z}+h\\sum _{p}[2\\Psi ^{\\dagger }_{p}\\Psi _{p}-1],$ where $\\mathcal {H}_{x} & =-J_{x}\\!\\!\\!\\!\\!\\!\\sum _{x-\\text{links(e)}}\\!\\!\\!\\!\\!", "[\\Psi ^{\\dagger }_{p} e^{-i(A_{e}+B_{e})}\\Psi ^{\\dagger }_{q}+\\Psi ^{\\dagger }_{p}e^{i(A_{e}+B_{e})}\\Psi _{q}+ \\rm {h.c.}], \\nonumber \\\\\\mathcal {H}_{y} & =-J_{y}\\!\\!\\!\\!\\!\\!\\sum _{y-\\text{links(e)}}\\!\\!\\!\\!\\!", "[-\\Psi ^{\\dagger }_{p}e^{-i(A_{e}+B_{e})}\\Psi ^{\\dagger }_{q}+\\Psi ^{\\dagger }_{p}e^{i(A_{e}+B_{e})}\\Psi _{q}+ \\rm {h.c.}],\\nonumber \\\\\\mathcal {H}_{z} & =-J_{z}\\!\\!\\!\\!\\!\\!\\sum _{z-\\text{links(e)}}\\!\\!\\!\\!\\!", "[2\\Psi ^{\\dagger }_{p}\\Psi _{p}-1][2\\Psi ^{\\dagger }_{q}\\Psi _{q}-1].$ Since the fermionic matter is gauged only under the lattice gauge fields, and there is no vortex matter at the dual sites, $\\mathcal {H}_x$ and $\\mathcal {H}_y$ are not invariant under $U(1)$ gauge transformations of the dual lattice gauge fields $B_{e}.$ However, we may regard the above coupling of $B_e$ to the fermionic matter as an interaction in which the dynamics of $B_e$ is governed by the CS term - in this way, $B_e$ is an external dynamical field coupling to the fermions for the purpose of satisfying correct spin statistics and gauge invariance is not necessary.", "Unlike the magnetization (or fermion number), which is a conserved quantity at high fields, the vortex charge is not.", "However in the Kitaev limit $h/J\\rightarrow 0,$ the vortex charge is conserved.", "The role of a small Zeeman perturbation is to create pairs of vortices along the $z$ -bonds in each order of the perturbation - thus at high fields where vortex number is ill-defined, the vortex number parity is still conserved.", "This makes us choose $A_e$ in the rest of the paper to be $Z_2$ and not $U(1),$ although the analysis can be performed equally well with $A_e \\in U(1).$ For our bipartite lattice, the vertices carry two labels, namely the unit cell $(i,j..),$ and the sub-lattice $(A,B).$ Motivated by the fact that in the Kitaev limit, the model is equivalent to a topological superconductor, we choose to decouple the four fermion interaction in the Cooper channel, $\\mathcal {H}_{z} & =-J_{z}\\!\\!\\!\\!\\sum _{z-\\text{links}}\\!\\Bigg \\lbrace 1-2\\Psi ^{\\dagger }_{i_A}\\Psi _{i_A}-2\\Psi ^{\\dagger }_{i_B}\\Psi _{i_B}-4|\\Delta _{i}|^2 \\nonumber \\\\& +4\\Delta ^{*}_{i} \\Psi _{i_A}e^{i(A_{e}+B_{e})}\\Psi _{i_B}-4\\Delta _{i} \\Psi ^{\\dagger }_{i_A}e^{-i(A_{e}+B_{e})}\\Psi ^{\\dagger }_{i_B}\\Bigg \\rbrace ,$ where $\\Delta _{i}=\\langle \\Psi _{i_A}\\Psi _{i_B}\\rangle e^{i(A_{e}+B_{e})}$ is the order parameter.", "The factor $ e^{i(A_{e}+B_{e})}$ makes for a slightly unusual definition of $\\Delta ,$ which is guided by a desire to retain a similar structure of 
$\\mathcal {H}_z$ as in $\\mathcal {H}_x$ and $\\mathcal {H}_y.$ An alternate choice of decoupling in the density channel was not pursued guided by the fact that the order parameter will be large in the presence of large Zeeman fields which is undesirable in a perturbative expansion for the free energy.", "We consider the Euclidean time action for this model, $S[\\Psi _p,\\Delta ] =\\sum _{p}\\int ^{\\beta }_{0} d\\tau [\\Psi _{p}^{\\dagger }(\\partial _{\\tau }+iA_{p})\\Psi _{p}-i\\mathcal {L}_{\\text{CS}}+\\mathcal {H}],$ where $\\mathcal {H}$ is the Hamiltonian after mean field decoupling of the $z$ -link interactions.", "Now at a high magnetic field, charges $n_{p}$ have only small fluctuations, and consequently, $A_{p}$ can have large fluctuations.", "It is convenient to perform a gauge transformation to eliminate the strongly fluctuating potential fields that appear in the fermionic determinant through a gauge transformation of the fermionic fields, $\\Psi _{i_A}\\rightarrow \\Psi _{i_A}e^{i\\chi _{i_A}},$ and choosing $A_{i_A}=-\\partial _{\\tau } \\chi _{i_A}.$ The fermionic partition function is given by $Z & =\\!\\!\\!\\int D{\\text{(fields)}} e^{-S};\\,\\,S=S_{0}+S_{c}+S^{\\prime }+S^{*}.\\text{ Here}\\nonumber \\\\& S_{0}=\\sum _{p}\\int _{\\tau }\\Psi ^{\\dagger }_{p}(\\partial _{\\tau }+\\xi _{p})\\Psi _{p},\\hspace{14.45377pt} \\xi _{p}=2h+2J_{z},\\nonumber \\\\& S_{c}=-\\int _{\\tau } [iL_{\\text{CS}}-4J_{z}\\sum _{i}|\\Delta _{i}|^2].$ The remaining terms are Nambu off-diagonal, number nonconserving, $& S^{*}=\\int _{\\tau } T_{0} =\\sum _{i}\\int _{\\tau } T^{i}_{0}\\hspace{10.84006pt}\\text{and}\\hspace{10.84006pt}S^{\\prime }=\\int _{\\tau } T,$ where $T_{0}$ and $T$ are defined below.", "$T & =-J_{x}\\sum _{x-\\text{links}}[\\Psi ^{\\dagger }_{i_A} e^{-i\\varphi ^{x}_{1}}\\Psi ^{\\dagger }_{j_B}+\\Psi ^{\\dagger }_{i_A} e^{i\\varphi ^{x}_{2}}\\Psi _{j_B}+h.c.", "]\\nonumber \\\\&-J_{y}\\sum _{y-\\text{links}}[-\\Psi ^{\\dagger }_{i_A} e^{-i\\varphi ^{y}_{1}}\\Psi ^{\\dagger }_{j_B}+\\Psi ^{\\dagger }_{i_A} e^{i\\varphi ^{y}_{2}}\\Psi _{j_B}+h.c.", "]$ where $i_A\\rightarrow j_B$ is a $x$ bond or a $y$ bond of the honeycomb Kitaev model and $\\varphi _{1} & =A_{e}+B_{e}+\\chi _{i_A}+\\chi _{j_B},\\,\\text{and} \\nonumber \\\\\\varphi _{2} & =A_{e}+B_{e}-\\chi _{i_A}+\\chi _{j_B}.$ The superscripts ($x$ , $y$ ) on $\\varphi _{1,2}$ in Eq.", "(REF ) refer to the type of link ($x$ or $y.$ ).", "The fermionic part of the action takes the form $S_{F}=S_{0}+S^{*}+S^{\\prime }=\\Psi ^{\\dagger }G^{-1}\\Psi ,$ where the inverse Green function is $G^{-1}=G^{-1}_{0}+T_{0}+T.$ Here $\\Psi $ is $4N$ component spinor in the Nambu notation ( $N$ is the number of unit cells in the honeycomb lattice): $\\Psi _{i}\\!=\\!\\!\\begin{bmatrix}\\Psi _{i_A}\\\\\\Psi ^{\\dagger }_{i_A}\\\\\\Psi _{i_B}\\\\\\Psi ^{\\dagger }_{i_B}\\\\\\end{bmatrix} ,{G^{i}_{0}}^{-1}\\!\\!\\!\\!=\\frac{1}{2}\\!\\!\\begin{bmatrix}\\partial _{\\tau }+\\xi _{i_A} & 0 & 0 & 0 \\\\ \\!0 & \\partial _{\\tau }-\\xi _{i_A} & 0 & 0 \\\\ \\!0 & 0 & \\partial _{\\tau }+\\xi _{i_B} & 0 \\\\ \\!0 & 0 & 0 & \\partial _{\\tau }-\\xi _{i_B} \\\\ \\!\\end{bmatrix},$ $T^{i}_{0}=2J_{z}\\begin{bmatrix}0 & 0 & 0 & \\Delta _{i}e^{-i\\varphi ^{i}_{1}} \\\\0 & 0 & -\\Delta ^{*}_{i}e^{i\\varphi ^{i}_{1}} & 0 \\\\0 & -\\Delta _{i}e^{-i\\varphi ^{i}_{1}} & 0 & 0 \\\\\\Delta ^{*}_{i}e^{i\\varphi ^{i}_{1}} & 0 & 0 & 0 \\\\\\end{bmatrix}$ with $\\varphi ^{i}_{1}=A_{e}+B_{e}+\\chi _{i_A}+\\chi _{i_B}$ .", "The superscript $i$ in $\\varphi 
^{i}_{1}$ runs over the unit cells, i.e., the $z$ -bonds.", "We now formally integrate out the fermions (which are gapped in the presence of the strong Zeeman field), $S=S_{c}-\\text{Tr}\\ln (G^{-1}),\\hspace{7.22743pt} \\ln G^{-1}=\\ln G^{-1}_{0} +\\ln [1+G_{0}(T_{0}+T)],$ and expand the logarithm in the small parameter $J/h.$ We also drop $\\ln G^{-1}_{0}$ as it does not involve any dynamical fields.", "In the expansion, the leading terms $\\text{tr}(G_{0} T)$ and $\\text{tr}(G_{0}T_{0})$ vanish because $G_{0}$ is site-diagonal and $T$ , $T_{0}$ are site off-diagonal.", "There are two types of terms that appear in the resulting effective field theory: link and loop terms, the leading contributions respectively appearing at the second and sixth order.", "Using these results we present our effective action, $S =S_{c}+\\int _{\\tau }\\!\\left[\\sum _{x-links} \\!\\!\\mathcal {L}^x_2+\\!\\!\\!\\!\\sum _{y-links}\\mathcal {L}^y_2+\\!\\!\\!\\!\\sum _{z-links}\\mathcal {L}^z_2+\\sum \\mathcal {L}_6\\right].$ The leading contributions to the link terms at low temperatures ($\\beta h \\gg 1$ ) are: $\\mathcal {L}^x_2 & =\\frac{J^{2}_{x}}{64h^3} \\left(\\frac{\\partial \\varphi ^{x}_{1}}{\\partial \\tau }-2ih\\right)^2, \\nonumber \\\\\\mathcal {L}^y_2 & =\\frac{J^{2}_{y}}{64h^3} \\left(\\frac{\\partial \\varphi ^{y}_{1}}{\\partial \\tau }-2ih\\right)^2, \\nonumber \\\\\\mathcal {L}^z_{2} &=\\!\\!\\frac{-J^2_{z}}{2h^3} \\!\\!\\left[ 16h^2|\\Delta _i|^2\\!\\!-4h\\Delta ^{*}_i(\\partial _{\\tau }\\!-i\\dot{\\varphi ^i_1})\\Delta _i\\!-\\!|(\\partial _{\\tau }-i\\dot{\\varphi ^i_1})\\Delta _i|^{2}\\!\\right].$ The terms corresponding to $\\varphi ^{x(y)}_{2}$ contain $n_{F}(-\\xi _{iA})n_{F}(\\xi _{jB})$ , which is negligible at high field, whereas the terms attached to $\\varphi ^{x(y)}_{1}$ involve $n_{F}(-\\xi _{iA})n_{F}(-\\xi _{jB})$ .", "Here $n_{F}(\\xi _{iA}/{\\xi _{jB}})$ is the Fermi-Dirac distribution.", "Consider now the definition of the phases $\\varphi ^{x(y)}_{1}$ in Eq.", "REF .", "Since the phases $\\chi $ appearing in the definition of the $\\varphi ^{x(y)}_1$ are $Z_2$ (i.e.", "taking values only 0 or $\\pi $ ), we can absorb them in a redefinition of the $A_e,$ which is equivalent to choosing a gauge where $A_v = 0.$ Thus the first two terms in Eq.", "REF are reminiscent of the electric field terms in a Maxwell theory.", "The $i2h$ terms appearing with the electric field correspond to the Zeeman cost of flipping a spin.", "The loop terms first appear only at the sixth order, $\\mathcal {L}_6= -\\frac{16}{3} \\int ^{\\beta }_{0} d\\tau \\frac{J^{2}_{x}J^{2}_{y}J^{2}_{z}}{(2h)^{5}}\\Bigg \\lbrace \\Delta _{j}\\Delta ^{*}_{l}e^{i \\left[ \\int _{\\text{C}} \\vec{A}.\\vec{dl}+\\int _{\\text{C'}}{\\frac{\\vec{a^*}}{2}}.\\vec{dl}\\right]}+h.c.\\Bigg \\rbrace ,$ where $C$ and $C^{\\prime }$ shown in Fig.", "REF are respectively the hexagonal loop (with a single winding) on the lattice links, and the dual loop (with a double winding) that encloses this hexagon - both generated while taking a spin around the elementary hexagon.", "Since there is no fermion on the hexagon ($Q_v = 0$ ), the dual flux is zero, and without loss of generality, hereinafter we take the order parameter fields to be real, with their phase (i.e.", "sign) fluctuations shifted to the gauge fields.", "As we have discussed earlier, the flux in $C$ can take values 0 or $\\pi .$ Fixing the signs of the order parameter fields to be the same, the energy associated with the plaquette terms is evidently minimized for $\\int
_{\\text{C}} \\vec{A}.\\vec{dl} = \\Phi _f =0.$ Equation (REF ) describes a $Z_2$ gauged superfluid in which the dynamics of the gauge fields is governed by a mutual Maxwell-Chern-Simons theory.", "The Wilson loop term, Eq.", "(REF ), determines the cost of a $\\pi $ -flux change in a plaquette (vison gap) - the cost clearly vanishes in the absence of superfluid order (i.e.", "when $\\Delta =0$ ).", "To simplify our further discussion, we limit ourselves to the isotropic Kitaev case, i.e., $J_x = J_y = J_z = J.$ Upon reducing the field, the sign of the coefficient of the quadratic term ultimately turns negative, i.e., $4J - 8 \\frac{J^{2}}{h} < 0,\\,\\mbox{or } h < 2J,$ resulting in nonzero expectation values for the local order parameter fields $\\Delta _i.$ For the ferromagnetic Kitaev couplings ($J > 0$ in our model), a sufficiently small magnetic field is required for the $\\Delta _i$ to develop a nonzero expectation.", "The situation is very different for the antiferromagnetic counterpart ($J < 0$ ), where clearly $\\Delta _i \\ne 0 $ even at large Zeeman fields.", "This does not necessarily mean a superfluid phase, for which the establishment of global phase coherence is needed.", "Returning to our effective model, we note that the coupling constants for the Maxwell terms are $E_{C} = 16 h^3/J^2$ for the electric part and $E_{J} = J^6 |\\Delta |^{2}/6 h^5$ for the magnetic part.", "The model has a dimensionless coupling constant, $g = E_{J}/E_{C} \\sim (J/h)^8 \\times (|\\Delta |^{2}/ 96).$ When $g \\gtrsim 1,$ the phase fluctuations of the $\\Delta _i$ are suppressed, resulting in the superfluid state.", "This is also an appropriate place to point out that since $A_e \\in Z_2,$ $\\Delta $ essentially fluctuates between two values ($\\pm |\\Delta |$ ), and the superfluid phase here is associated with a broken $Z_2$ symmetry and not $U(1).$ Let us understand this change of behaviour in a little more detail.", "Consider first a large Wilson loop $W_{L}$ of perimeter $L$ that encloses a number $M \\gg 1$ of elementary hexagonal plaquettes.", "At large magnetic field ($g\\ll 1$ ), we perturbatively expand the exponential with the magnetic term and perform the average over the gauge field configurations.", "By Elitzur's theorem, only gauge-invariant terms survive the averaging, and we get $\\langle W_{L} \\rangle \\sim \\left(\\frac{E_{J}}{E_{C}}\\right)^{M},$ where the angular brackets denote averaging over the order parameter field.", "Since $|\\Delta |^2 = 0,$ we have $\\langle W_{L}\\rangle \\equiv 0,$ essentially a vortex superfluid which strongly confines the charges.", "Physically, the large magnetic field suppresses spin flips or fermion number fluctuations.", "Conversely, large Wilson loops $W_{L^{*}}$ on the dual lattice are $O(1).$ Next, we reduce the magnetic field until $|\\Delta |^2 \\ne 0$ develops locally.", "If the local order parameter is small, we can still expand the exponential with the magnetic term.", "Clearly, the Wilson loop now follows a “volume” law, $\\langle W_{L} \\rangle \\sim \\exp \\left(-M \\ln \\left[\\frac{E_{C}}{E_{J}}\\right]\\right),$ still indicating a confined phase.", "As our perturbative approach is valid only for $(h/J)^{8} > |\\Delta |^{2}/96,$ we are unable to provide a complete description of the low field phase in the Kitaev limit.", "We conclude with a count of the degrees of freedom in our model.", "The original spin model has $2^{N}$ states, where $N$ is the number of spins.", "There are $N/2$ hexagons, each associated with a plaquette Wilson loop of value $\\pm 1.$ Additionally, the matter fields
$\\Delta _i$ are $Z_2$ degrees of freedom on every $z$ -link, which accounts for the remaining $2^{N/2}$ degrees of freedom." ], [ "Discussion", "In summary, we have developed a mutual CS formalism for anyons on lattices lacking face-vertex correspondence.", "The spin degrees of freedom were expressed in terms of JW fermions coupled to the lattice $Z_2$ gauge fields and a certain local combination of the dual ($U(1)$ or $Z_2$ ) gauge fields, which ensured correct spin exchange statistics on arbitrary 2D lattices.", "In the presence of fermionic matter, the theory is not invariant under gauge transformations of the dual fields because of the absence of vortex matter on the dual lattice sites.", "As an illustration, the formulation was used to obtain an effective CS field theory of the honeycomb Kitaev model subjected to a strong Zeeman field in the $z$ -direction.", "The effective theory is that of a superfluid coupled to fluctuating gauge fields whose dynamics is governed by a mutual Maxwell-Chern-Simons theory.", "The field-tuned topological transition of the Kitaev model appears as a normal-to-superfluid phase transition in our description.", "We briefly discuss the excitations in the normal phases going up to the normal-superfluid transition.", "At high fields, $E_J/E_C \\equiv 0$ for the FM Kitaev case (since $\\Delta =0$ at high fields here) and small for AFM Kitaev ($\\Delta \\ne 0$ but small), which means only the electric part of the Maxwell term is important, and the gauge fields do not propagate.", "At low fields such that $E_J/E_C > 1,$ the cosine “Josephson” term can be expanded in increasing powers of the plaquette flux, and to quadratic order in the gauge fields, the result is a Maxwell-Chern-Simons-like theory with a massive photon with group velocity $c \\sim \\sqrt{E_J E_C}\\sim J^{2} |\\Delta | /h.$ The photon band is Zeeman shifted by an amount $2h,$ around which the “mass” of the propagating photon mode is $\\kappa c.$ In the vicinity of the transition to the topologically ordered phase (i.e.", "$g \\sim 1$ ), the photon group velocity scales as $h \\sqrt{|\\Delta |},$ which agrees with estimates of the vison hopping scale $t_{\\text{vison}}\\sim h$ obtained from perturbative expansion in the Kitaev limit [28].", "Since, for any value of the field, $|\\Delta |$ is generally larger for the AFM Kitaev model, the vison dispersion persists to higher Zeeman fields in the AFM Kitaev case.", "In either case (FM or AFM Kitaev), vison propagation requires us to be in the superfluid phase (i.e.", "$g\\gtrsim 1$ ).", "Although the confined phase ($E_J/E_{C}\\lesssim 1$ ) and deconfined phase ($E_J/E_{C}>1$ ) resemble the toric code in a Zeeman field [36], [37], [38], the $E_{J}/E_{C}$ contains an additional factor $|\\Delta |^2$ which is zero for the FM case for sufficiently high fields $h > 2J.$ The presence of the Higgs field $\\Delta $ makes our model different from a pure gauge theory such as the toric code.", "There is no Coulomb phase either, due to the presence of the CS term.", "Owing to the perturbative nature of our treatment, we were unable to study the properties of the deconfined phase.", "For this, we would need to do a perturbative study from the low-field regime, which is a work in progress.", "If the Zeeman field has finite components in other directions (apart from $z$ ) or the interactions involve an odd number of fermions, long string-like excitations cannot be avoided.", "This would be taken up in a future study.", "The phase diagram was interpreted
using this CS language.", "Our method relied on identifying a small parameter (here $J/h \\ll 1$ ).", "The advantage of this method lies in the possibility of generalizing away from Kitaev-type integrability.", "However, our mutual CS formalism can also be used to phenomenologically build local gauge-invariant models that respect lattice symmetries.", "This allows us to avoid the continuum limit at the outset, and also works for lattices lacking face-vertex correspondence.", "The authors acknowledge support of the Department of Atomic Energy, Government of India, under Project Identification No.", "RTI 4002.", "JD and VT thank Shiraz Minwalla and Subir Sachdev for discussions on aspects of Chern-Simons theory, and Kedar Damle for pointing out relevant literature and reading the manuscript." ] ]
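The parameter flow of the Kitaev application can be condensed into a few lines. The sketch below (ours) evaluates the dimensionless coupling $g=E_{J}/E_{C}$ from the microscopic parameters in the isotropic case; $|\Delta|$ is treated as an input, since the text fixes only the onset condition $h < 2J$ and not the saturated value of the order parameter:

```python
# Evaluate g = E_J / E_C for the effective mutual Maxwell-CS theory
# (isotropic case J_x = J_y = J_z = J), using the scales quoted in the text:
# E_C = 16 h^3 / J^2 (electric) and E_J = J^6 |Delta|^2 / (6 h^5) (magnetic).
import numpy as np

def coupling_g(J, h, delta_abs):
    E_C = 16.0 * h**3 / J**2
    E_J = J**6 * delta_abs**2 / (6.0 * h**5)
    return E_J / E_C                 # equals (J/h)**8 * delta_abs**2 / 96

J, delta_abs = 1.0, 1.0              # delta_abs is an assumed input, not derived
for h in np.linspace(0.4, 2.0, 9):
    g = coupling_g(J, h, delta_abs)
    phase = "superfluid (deconfined)" if g >= 1.0 else "confined"
    print(f"h/J = {h:.2f}   g = {g:9.3g}   {phase}")
```

The printout simply locates where $g$ crosses unity for a given $|\Delta|$ ; it does not by itself decide the transition point, which also requires the local onset $h < 2J$ discussed in the text.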
2210.07718
[ [ "Nobody Wants to Work Anymore: An Analysis of r/antiwork and the\n Interplay between Social and Mainstream Media during the Great Resignation" ], [ "Abstract r/antiwork is a Reddit community that focuses on the discussion of worker exploitation, labour rights and related left-wing political ideas (e.g.", "universal basic income).", "In late 2021, r/antiwork became the fastest growing community on Reddit, coinciding with what the mainstream media began referring to as the Great Resignation.", "This same media coverage was attributed with popularising the subreddit and, therefore, accelerating its growth.", "In this article, we explore how the r/antiwork community was affected by the exponential increase in subscribers and the media coverage that chronicled its rise.", "We investigate how subreddit activity changed over time, the behaviour of heavy and light users, and how the topical nature of the discourse evolved with the influx of new subscribers.", "We report that, despite the continuing rise of subscribers well into 2022, activity on the subreddit collapsed after January 25th 2022, when a moderator's Fox news interview was widely criticised.", "While many users never commented again, longer running trends of users' posting and commenting behaviour did not change.", "Finally, while many users expressed their discontent at the changing nature of the subreddit as it became more popular, we found no evidence of major shifts in the topical content of discussion over the period studied, with the exception of the introduction of topics related to seasonal events (e.g.", "holidays, such as Thanksgiving) and ongoing developments in the news (e.g.", "working from home and the curtailing of reproductive rights in the United States)." ], [ "Introduction", "In 2021, the attrition rate of employees in the global workforce reached record highs in an economic trend that became known as the Great Resignationhttps://www.bloomberg.com/news/articles/2021-05-10/quit-your-job-how-to-resign-after-covid-pandemic.", "The COVID-19 pandemic had caused many workers to leave the labour force because of problems related to child and social care arrangements, early retirement and even death [10].", "The resulting labour shortages led to wage growth, encouraging workers to quit their jobs and seek opportunities elsewhere [25].", "More broadly, the trauma inflicted by the pandemic led many to question their relationship with work and to demand better working conditions [28], [29].", "The Great Resignation was widely reported on in the mainstream media, with coverage often linking to social media, e.g.", "“Man Quits His Job With Epic 'Have a Good Life' Text and People Are Impressed”https://www.newsweek.com/1639419, “Quitting Your Job Never Looked So Fun”https://www.nytimes.com/2021/10/29/style/quit-your-job.html and “Scroll through TikTok to see the real stars of the workplace”https://www.ft.com/content/c7f8fb0e-8f1a-4829-b818-cb9fe90352fa.", "Indeed, media articles often presented the growing popularity of r/antiworkhttps://www.reddit.com/r/antiwork/, a Reddit community, as emblematic of the significance of the Great Resignationhttps://www.ft.com/content/1270ee18-3ee0-4939-98a8-c4f40940e644 (see Figure REF ).", "Figure: The number of subscribers to the r/antiwork subreddit from 2019 onwards.", "The grey box highlights the period from October 15 2021-January 25 2022.r/antiwork is a subreddit created to discuss worker exploitation, labour rights and the antiwork movement, irreverently encapsulated by the subreddit's 
slogan of “Unemployment for all, not just the rich!”.", "Throughout the pandemic, r/antiwork enjoyed continuous subscriber growth, increasing from  80,000 members at the start of 2020 to over 200,000 in less than a year.", "However, after becoming the subject of mainstream media coverage in mid-October 2021, the number of subscribers increased by over 330,000 within a two week period – an increase of 57% – making it the fastest growing subreddit at the time (see grey region from Figure REF ).", "Interactions with the media continued to shape r/antiwork: Doreen Ford, a longtime moderator of the subreddit, was interviewed by Fox News on January 25 2022.", "The interview was controversial, resulting in the subreddit briefly going private, many members unsubscribing and a reduction in the rate of subscriber growth throughout 2022.", "Numerous redditors observed that there exists a tension between the moderators, who tend to hold more radical political views, and newer members of the subreddit who are more concerned with organised labour and reforming the current economic systemhttps://www.reddit.com/r/SubredditDrama/comments/sdesxw/comment/huc9wf9/.", "Indeed, there are numerous posts from long-term members lamenting how the subreddit has changed over time, from discussing how “society would/could function without unnecessary labor” to users “posting real and fake text messages of quitting their job”https://www.reddit.com/r/antiwork/comments/qfi56h/.", "Reddit has been the subject of numerous studies on social media behaviour.", "These studies have shown that large numbers of new users can be disruptive to an online community [20].", "They can impact communication norms [15] and behave in ways that are harmful to the community [21].", "However, even in extreme cases, like when a subreddit gets defaulted (made a default subreddit for newly registered Reddit accounts), the community can still remain high-quality and retain its core character [23].", "Other studies have highlighted how the mainstream media can influence social media and the general public.", "For example, public attention of COVID-19 on Reddit was mainly driven by media coverage [11] and negative media articles led to numerous hateful subreddits being banned by Reddit, including r/TheFappening, r/CoonTown and r/jailbait.", "Media coverage has also been shown to have negative consequences on social media: it can increase problematic online behaviour [13] and banning subreddits has increased hate speech elsewhere on Reddit [18].", "To our knowledge, however, there are no studies where a subreddit's rapid rise was so intertwined with media coverage and, moreover, where a media event was the catalyst in its decline.", "Furthermore, we are unaware of any other studies specifically related to r/antiwork.", "To understand how r/antiwork was impacted by media events, we performed a quantitative analysis of over 300,000 posts and 12 million comments from January 2019 to July 2022.", "We performed a time series analysis of users posting and commenting behaviour, and investigated how user activity on r/antiwork was affected by the initial media articles in October 2021 and the Fox News interview in January 2022.", "Next, we categorised users as light and heavy users to understand how different types of user contribute to the subreddit.", "Lastly, we used topic modelling to understand whether the influx of new users had changed the discourse on r/antiwork, e.g.", "focusing more on the topic of quitting their jobs rather than more serious 
topics related to the antiwork movement.", "In summary, we ask the following research questions: RQ1 Subreddit Activity: How did subreddit activity change after the increase in subscribers that coincided with coverage in the mainstream media?", "RQ2 User Types: How was the posting and commenting behaviour of heavy and light users impacted by the growth in subscribers?", "RQ3 Content Analysis: Did the influx of new users change the discourse in terms of the distribution of topics discussed?", "In the remainder of this paper, we will answer these research questions and discuss how our results relate to existing work on social media analysis." ], [ "Related Work", "In this section, we briefly review the work in three research areas related to this article: the impact of mainstream media on social media activity, massive growth in social media users, and previous analyses of Reddit data." ], [ "Impact of Mainstream Media on Social Media Activity", "Mainstream media coverage of political or social topics, such as newspaper articles, television programmes and radio broadcasts, can lead to increased awareness of those topics on social media platforms such as Reddit.", "Moreover, elevated interest in an event by mainstream media can impact the number of users as well as their activity on social media platforms.", "For example, Chew et al.", "[7] and Tausczik et al.", "[31] examined the trajectories of activities on social media (Twitter and web blogs) during the H1N1 pandemic and noticed that peaks in user activity coincided with major news stories.", "Similarly, Gozzi et al.", "[11] showed that during the COVID-19 pandemic, user activity on Reddit and searches on Wikipedia were mainly driven by mainstream media coverage.", "The popularity of a topic in mainstream media can also lead to an increase in moderation activities on social media platforms, particularly on Reddit.", "For example, Reddit’s administrative interventions caused by violations of their content policy for toxic content occurred more frequently as a result of media pressure [13].", "Moreover, mainstream media attention on subreddits with toxic content further exacerbated the toxicity of their content [13].", "Horta Ribeiro et al.", "[18] further studied user activity and content toxicity after r/The_Donald and r/incels were banned due to media-driven moderation.", "They found a significant decrease in users' posting activity, but an increase in activities associated with toxicity and radicalization.", "Our work is unique in that, through the analysis of r/antiwork, we study two simultaneous mainstream media-driven impacts on social media: 1) the massive growth in subscribers to r/antiwork coinciding with increased coverage of the Great Resignation by mainstream media, and 2) a spontaneous decrease in user activity triggered by a heavily criticised interview of an r/antiwork moderator on Fox News.", "To the best of our knowledge, this is the first quantitative study of the impact of such spontaneous decreases in user activity on Reddit.", "Although previous studies examined the decreases in user activity after moderation [17], [13], these decreases were due to platform bans rather than spontaneous user behaviour."
], [ "Massive Growth in Social Media Users", "A topic often studied in social media is the growth in new users [21].", "Previous studies suggest that an influx of newcomers can cause online community disruption due to new users failing to adhere to community norms [21] or cause an information overload in a given online community [19].", "In recent years, several studies analyzed the impact of a massive growth of users on social media.", "Kiene et al.", "[20] present a qualitative study of the massive growth of the subreddit r/NoSleep, demonstrating that the massive growth of the subreddit did not cause any major disruptions.", "Lin et al.", "further showed that communities can remain high-quality and similar to their previous selves after the influx of new members [23].", "The work of Chan et al.", "illustrates that a sudden spike in the number of users is a source of potential disruptions for an online community, however large communities are less impacted than smaller ones [6].", "Additionally, Haq et al.", "[15] examine linguistic patterns on r/WallStreetBets, suggesting that writing style differs significantly between long-term users and new users resulting from a period of sudden growth.", "Our work studies the impact of the massive growth in the number of users on r/antiwork.", "Our analysis provides a new perspective on how the behaviour of different types of users (i.e.", "heavy and light posters and commenters) are affected by subscriber growth." ], [ "Social Meda Analysis of Reddit", "Reddit, as one of the most popular social media platforms, is widely used to study online communities and social phenomena.", "Many studies focus on the analysis of specific subreddits.", "Ammari et al.", "[2] analysed gender stereotypes on r/Daddit and r/Mommit, and Sepahpour et al.", "[27] compared audience effects of r/Daddit and r/Mommit with r/Parenting.", "Leavitta et al.", "[22] studied how the content of different topics on r/sandy, a subreddit dedicated to hurricane Sandy, changed over time.", "Horta Ribeiro et al.", "[18] explored the impact of the ban of the subreddit r/The_Donald on user activity and content toxicity.", "Haq et al.", "[15] focused on the impact of sudden community growth in r/WallStreetBets during the GameStop short squeeze in January 2021.", "Our work is somewhat similar to [15] in the sense that both works study the influence of massive growth in users caused by specific external events.", "However, we analyse changes in user behaviour and discussion topics, whereas Haq et al.", "focus on the writing style of long-term and new users.", "Other studies of Reddit communities investigated community loyalty and successes [12], [14], [8], topic popularity prediction [1], and multi-community engagement [30], [16]." 
], [ "Data", "We downloaded all posts and comments on the r/antiwork subreddit from January 1 2019 to July 31 2022 using the PushShift APIhttps://pushshift.io/ [3].", "We only considered posts with at least one associated comment as a proxy for duplicate posts referencing the same event, off-topic and spam posts, as well as posts that received no user engagement for other reasons.", "The data set contained 304,096 posts and 12,141,548 comments.", "These posts were made by 119,746 users (posters) and the comments were made by 1,298,451 users (commenters).", "We preprocessed the data set to remove comments that could potentially bias our analysis.", "We filtered out comments that: [label=()] were removed by users or moderators, but remain in the data set as placeholders (comments are typically removed for violating community guidelines), or were comments from bots (e.g.", "the AutoModerator bot, or where the body of the comment began “I am a bot...”, as many do by convention).", "After filtering, 11,665,342 comments remained in the data set (96.1%).", "We removed posts that had zero comments after filtering, leaving 284,449 posts (93.5%) Definitions User Types In our analysis, we compare the behaviour of two groups of users that we refer to as “light” and “heavy” users of r/antiwork.", "We define light posters or commenters as those with only a single post or comment in the data set, respectively.", "A majority of posters are light posters (75.1%) and a high percentage of commenters are light commenters (42.5%).", "We define heavy posters or commenters as the top 1% of users ranked in descending order by number of posts or comments, respectively.", "Overall, heavy posters made 10.1% of posts and heavy commenters were responsible for 29.8% of comments.", "Figure: Total number of daily posts submitted to r/antiwork that received at least one comment.A large proportion of posts (29.6%) were made by light posters.", "Red dashed lines are results from change point detection.Figure: Total number of daily comments on r/antiwork.A large proportion of comments (29.8%) were made by heavy commenters.", "Red dashed lines are results from change point detection.", "Time Periods For our topic modeling analysis, we divided the data set into three time periods: Period 1: January 1 2019–October 14 2021 Period 2: October 15 2021–January 24 2022 Period 3: January 25 2022–July 31 2022 These periods are delineated by two events in the mainstream media: the publication of a Newsweek articlehttps://www.newsweek.com/1639419, which was the first example of a mainstream media article linking to a viral posthttps://www.reddit.com/r/antiwork/comments/q82vqk/ on r/antiwork (October 15 2021) and the Fox News interview with Doreen Ford (January 25 2022).", "Period 2 is highlighted as a grey box in all figures where the $x$ -axis represents time.", "Change Point Detection We use Classification And Regression Trees (CART) for change point detection [5].", "CART is a non-parametric method that uses a decision tree to recursively segment the predictor space into purer, more homogeneous intervals (often called “splitting”).", "This segmentation process is terminated by a complexity parameter that regularises the cost of growing the tree by adding a penalty for adding additional partitions (“pruning”).", "In our case, we fit a regression tree with the dependent variable as the number of posts or comments, and the predictor space as each day from January 1 2019–July 31 2022.", "We used the rpart R package to create regression 
], [ "Definitions: User Types", "In our analysis, we compare the behaviour of two groups of users that we refer to as “light” and “heavy” users of r/antiwork.", "We define light posters or commenters as those with only a single post or comment in the data set, respectively.", "A majority of posters are light posters (75.1%) and a high percentage of commenters are light commenters (42.5%).", "We define heavy posters or commenters as the top 1% of users ranked in descending order by number of posts or comments, respectively.", "Overall, heavy posters made 10.1% of posts and heavy commenters were responsible for 29.8% of comments.", "Figure: Total number of daily posts submitted to r/antiwork that received at least one comment. A large proportion of posts (29.6%) were made by light posters. Red dashed lines are results from change point detection.", "Figure: Total number of daily comments on r/antiwork. A large proportion of comments (29.8%) were made by heavy commenters. Red dashed lines are results from change point detection." ], [ "Definitions: Time Periods", "For our topic modelling analysis, we divided the data set into three time periods: Period 1: January 1 2019–October 14 2021; Period 2: October 15 2021–January 24 2022; Period 3: January 25 2022–July 31 2022.", "These periods are delineated by two events in the mainstream media: the publication of a Newsweek article (https://www.newsweek.com/1639419), which was the first example of a mainstream media article linking to a viral post (https://www.reddit.com/r/antiwork/comments/q82vqk/) on r/antiwork (October 15 2021), and the Fox News interview with Doreen Ford (January 25 2022).", "Period 2 is highlighted as a grey box in all figures where the $x$ -axis represents time." ], [ "Change Point Detection", "We use Classification And Regression Trees (CART) for change point detection [5].", "CART is a non-parametric method that uses a decision tree to recursively segment the predictor space into purer, more homogeneous intervals (often called “splitting”).", "This segmentation process is terminated by a complexity parameter that regularises the cost of growing the tree by adding a penalty for additional partitions (“pruning”).", "In our case, we fit a regression tree with the number of posts or comments as the dependent variable, and each day from January 1 2019–July 31 2022 as the predictor space.", "We used the rpart R package to create the regression models [32], with the Gini index for splitting and a complexity parameter of 0.01 for pruning." ], [ "Topic Modelling", "We use Latent Dirichlet Allocation (LDA) for topic modelling [4].", "LDA is a generative model that defines a set of latent topics by estimating the document-topic and topic-word distributions for a predefined number of topics.", "In our case, we consider each post to be a document and the contents of that document to be the concatenation of all comments for that post.", "We do not include the post text as part of the document because a large proportion of post bodies are composed of images.", "We preprocessed comments for topic modelling by removing URLs and stop words, replacing accented characters with their ASCII equivalents, replacing contractions with their constituent words, and lemmatising all words.", "Finally, we filtered out posts with fewer than 50 comments, leaving 11,368,863 comments (97.5%) across 181,913 posts (64.0%) for topic modelling.", "LDA was applied to each of the three time periods separately (see Section REF ).", "Periods 1, 2 and 3 contained 40,794; 71,470 and 69,649 posts, respectively.", "We evaluate the quality of topic models using the $C_{uci}$ coherence score [24] to select the optimal number of topics.", "Each topic was labelled by a human annotator with knowledge of r/antiwork, and topics were aligned between models using those labels and the Jensen-Shannon distance between topic-word distributions.", "Topic modelling was performed using the Gensim Python library [26]."
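To make the topic-modelling pipeline concrete, the following sketch fits an LDA model and computes the $C_{uci}$ coherence score with Gensim; the tokenised inputs and the sweep range are illustrative assumptions, not the authors' pipeline.

```python
# A minimal sketch of LDA fitting and C_uci coherence scoring with Gensim;
# `tokenised_docs` (one token list per post's concatenated comments) is assumed.
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

def fit_lda(tokenised_docs, num_topics, seed=0):
    dictionary = Dictionary(tokenised_docs)
    corpus = [dictionary.doc2bow(doc) for doc in tokenised_docs]
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=num_topics, random_state=seed)
    coherence = CoherenceModel(model=lda, texts=tokenised_docs,
                               dictionary=dictionary,
                               coherence="c_uci").get_coherence()
    return lda, coherence

# Sweep 5-100 topics in steps of 5 and keep the most coherent model per period:
# best = max((fit_lda(docs, k) for k in range(5, 101, 5)), key=lambda t: t[1])
```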
], [ "Results", "In the following section, we characterise how posting and commenting activity changed during the period of increased media coverage (RQ1), then we investigate trends in the behaviour of heavy and light users (RQ2) and, lastly, we examine how the distribution of topics changed between the three time periods (RQ3).", "Unless stated otherwise, all analyses refer to the time period between January 1 2019 and July 31 2022.", "Figures are limited to the period between May 1 2021 and July 31 2022 for the sake of clarity."
], [ "RQ1: Subreddit Activity", "The mainstream media usually points to the number of r/antiwork subscribers to illustrate its growth and popularity (see Figure REF ).", "However, in addition to subscribing, users can interact with a subreddit by posting, commenting and voting.", "As Reddit no longer provides the number of up- and down-votes to third parties, we focused on users' posting and commenting behaviour.", "Figure: Proportion of posts from r/antiwork that received at least one commentmade by heavy and light posters.Figure: Average comments received by each post on r/antiwork.", "Average comments per post from heavy and light users remained relatively constant over time.Figure REF shows the daily number of posts submitted to r/antiwork that received at least one comment.", "Up until mid-2021, the average number of posts per day grew steadily, for example, increasing from 46.4 in January 2020, just prior to the start of the Coronavirus pandemic, to 76.8 in April 2021.", "From May 2021, the rate of posting started to accelerate, consistently breaching 200 posts per day by September, before growing exponentially from October 9 to the weekend of October 23-24.", "From late October 2021, posting behaviour settled into a pattern of heightened activity during weekdays that dips during the weekends.", "At its peak, 2,658 posts were made on January 26, the day after Doreen Ford's Fox News interview, before collapsing to less than half the posting volume of the preceding month.", "On January 27 2022, r/antiwork lost 38,228 subscribers (2.2%) (see the right hand edge of the grey region in Figure REF ).", "For comparison, the second biggest dip in subscribers was on February 24 2019 when the number of subscribers decreased by 7.", "Figure REF shows similar trends in commenting behaviour: an exponential increase in mid-October 2021 followed by a sudden collapse in late January 2022.", "Unlike posting, however, there is no obvious differences between commenting volume on weekdays versus weekends.", "As with the posts to r/antiwork, the number of comments peaked during January 26-28, before falling 46.2% on January 29 2022.", "The dashed lines on Figures REF and REF show the results from change point detection.", "In Figure REF , the first change on October 14 follows a viral post by u/hestolemysmile (the single most commented on post on r/antiworkhttps://www.reddit.com/r/antiwork/comments/q82vqk/).", "In Figure REF , the first two changes on October 15 and 22 coincide with the publication of widely-circulated articles by Newsweek and the New York Times, respectively.", "In both Figures REF and REF , January 29 was identified as the number of posts and comments fell after the Fox News interview.", "The remaining events appear to be around seasonal holidays: posts appear to increase following Thanksgiving (November 30), while comments increase on the first working day after Christmas (December 27).", "The last events related to posting (May 13 2022) and commenting (February 11 and May 14 2022) do not appear to be related to specific events, but is the model acknowledging more gradual downward shifts in activity.", "Figure: Proportion of users whose last comment to r/antiwork fell on each day.Data from the last month in the data set was excluded as many of these users will continue commenting.Figure: Topic coherence (C uci C_{uci}) for different numbers of topics for three partitions of the data set." 
], [ "RQ2: Behaviour of Heavy and Light Users", "The results from RQ1 showed that consistently growing subscriber counts do not necessarily lead to ever-increasing numbers of posts and comments, but are contingent on external events.", "Here, we investigate the behaviour of heavy and light users (defined in Section REF ) to understand who is driving the changes in the volume of posts and comments.", "We also look at when users made their last comment to the subreddit to assess whether users stopped engaging with r/antiwork or simply comment less frequently after the interview on Fox News.", "Figure REF shows that posting behaviour is mostly driven by light posters, who were responsible for 29.6% of posts, compared to 10.1% for heavy posters.", "Figure REF shows that the proportion of posts made by light and heavy posters were approximately equal prior to October 2021, but then start to diverge with almost half of posts coming from light posters by the end of July 2022.", "Conversely, Figure REF shows that heavy commenters make more comments in aggregate than light commenters (29.8% vs. 4.7%).", "Unlike users' posting behaviour, however, the average number of comments per post remained relatively constant over time for both types of commenters, a trend that appears to be unaffected by the surge in subscribers (see Figure REF ).", "Lastly, in Figure REF we investigated when users made their last comment to r/antiwork (we omitted the last month's data for clarity as many of these users will continue commenting in the future).", "Between October 2021 and January 2022, a majority of users commenting for the last time were light commenters, i.e.", "their last comment is their first and only comment.", "The proportion of heavy commenters making their last comment remained low until January 26-28 2022 when 4.4% of heavy commenters made their final comment.", "After January 2022, it was equally likely that heavy and light commenters stopped commenting until May 2022 when it became more likely for heavy commenters to stop commenting than light commenters of r/antiwork." 
], [ "RQ3: Content Analysis", "In RQ1, we showed that the volume of posts and comments increased dramatically in October 2021 before collapsing in January 2022.", "In RQ2, however, we saw that an increasing proportion of posts came from light users, i.e.", "users who only post once.", "We want to understand how these two phenomena affected what was discussed on r/antiwork using topic modelling.", "We investigate the optimal number of topics and contrast the topic distributions for the three time periods defined in Section REF .", "We used topic coherence to identify the optimal number of topics.", "Figure REF shows the coherence scores for topic models with 5-100 topics in increments of 5.", "We performed either 5 or 10 replicates for each number of topics for each time period (more replicates were run for 15-75 topics where the coherence score was maximised).", "The optimal number of topics was 25, 30 and 40 for periods 1, 2 and 3, respectively.", "The different number of topics in each time period appears to confirm our decision to split the data set for topic modelling and is suggestive that the topics discussed broadened over time.", "We note, however, that while periods 2 and 3 have a similar number of documents (comments aggregated by parent post), period 1 is considerably smaller (see Section REF ).", "Table REF shows which topics were present, their proportion and the topic ranking for each time period.", "In periods 1 and 3, the top-ranking topic was Quitting, whereas in period 2, when r/antiwork itself was being featured in numerous news stories, the top-ranking topic was Reddit.", "The top-3 topics for all time periods were the same: Quitting, Reddit and Mental Health and accounted for 22.5-27.5% of the content on r/antiwork.", "In total, 17 topics appeared in all three time periods, accounting for 60.6-74.1% of content.", "Each time period had unique topics, many of which were based on seasonal events and major stories in the news media.", "Period 1 included Leisure (i.e.", "hobbies and free time) and Social Security (disability, welfare).", "Period 2 included Holidays (period 2 covered both Thanksgiving and Christmas), Corporations (related to, for example, Kellogg's union busting activities) and Pandemic (in particular, stories of working during the pandemic).", "Lastly, period 3 included topics for the Fox News Interview, Working from Home (in opposition to companies' post-pandemic return to office policies) and Reproductive Rights (related to the leaked U.S. Supreme Court draft decision to overturn Roe v. Wade).", "Topics confined to a single time period, however, tended to be relatively minor and were generally present in the long-tail of the topic distribution." 
], [ "Discussion", "Our study aimed to explore how user activity on r/antiwork was impacted by a gradual, sustained increase of subscribers, followed by a period of accelerated growth coinciding with mainstream media coverage of the Great Resignation.", "Instead, we found an online community where the parallel surge in posts and comments became decoupled from subscriber growth and collapsed after Doreen Ford's Fox News interview even as the number of subscribers continued to rise (RQ1).", "Change point detection provided suggestive evidence that user activity was driven by mainstream media events in mid-October 2021 and late January 2022, as the dates of these events were independently identified in both posting and commenting data (Figures REF and REF , respectively).", "We found that different types of users had a disproportionate influence on overall activity, with light posters and heavy commenters being responsible for almost a third of posts and comments, respectively (Figures REF and REF ) (RQ2).", "Light and heavy posters were responsible for similar proportions of posts prior to October 2021, but then gradually diverged until light posters were responsible for almost half of all posts by the end of July 2022 where our data set ends (Figure REF ).", "Commenting trends, on the other hand, appeared to be undisturbed by October's surge in new subscribers and January's collapse in activity (Figure REF ).", "While there was a spike in heavy commenters making their last comment immediately following the Fox News interview, it does not appear to have been a sufficient reduction to affect broader trends.", "Lastly, despite anecdotal observations that the quality of discussion on r/antiwork had declined due to subscriber growth, we found no evidence to support this claim (RQ3).", "In general, the main topics of discussion were the same in all three time periods studied: the top-ranked topics were always Quitting, Reddit and Mental Health, and the topics shared by all time periods accounted for 60.6-74.1% of content (Table REF ).", "Furthermore, we believe that we underestimated the degree of similarity between topic distributions, because a topic in one time period would sometimes correspond to two topics in another time period (e.g.", "Food/Drugs in period 1 versus separate Food and Drugs topics found in both period 2 and 3).", "Many studies use topic modelling to identify what is being discussed in online communities and to identify changes over time.", "We identified Mental Health and Quitting as two of the most prevalent topics on r/antiwork.", "This finding is in agreement with a study by del Rio-Chanona et al.", "that identified mental health issues as one of the main reasons for members of the r/jobs subreddit to quit their jobs, especially since the onset of the pandemic [9].", "We also saw the introduction of a Pandemic topic that was unique to the period from October 25 2021-January 25 2022 (period 2), suggesting that the surge of new members shared their work-related experiences from during the COVID-19 pandemic.", "We found no evidence of a change in the topic distribution between time periods, in accordance with other studies of massive growth in online communities [23].", "This does not, however, discount the fact that an influx of newcomers could subtly change the feel of an online community.", "For example, Haq et al.", "identify significant differences in writing style between new and long-term users, with new users writing shorter comments with more emojis [15].", 
"Numerous studies also point to the temporary nature of long-term member grievances: Lin et al.", "observed a dip in upvotes in newly defaulted subreddits that recovered quickly afterwards and, moreover, that complaints about low quality posts did not increase in frequency after defaulting [23].", "In an interview-based study, Kiene et al.", "showed how r/nosleep attributed the subreddit's resilience in the face of sustained growth to active moderators and a shared sense of community [20].", "It seems likely that this was the case with r/antiwork as well: several of the moderators have been involved since the subreddit's inception and have publicly championed the political objectives of the antiwork movement.", "Furthermore, there is a consistent hard core of heavy commenters, implying a sense of community at least among long-term members." ], [ "Limitations", "In our study, we observed how mainstream media events coincided with changes in activity on r/antiwork (subscriber count, posting and commenting), but it is unclear to what extent these events were causal.", "In October 2021, the topics discussed on r/antiwork happened to align with the broader zeitgeist of worker dissatisfaction following the COVID-19 pandemic, so it seems likely that members would have found the subreddit through other means, such as the Reddit front page.", "Our findings related to the Fox News interview, however, do not appear to suffer from this limitation as the fallout had such wide and direct consequences, including the drop in posting and commenting activity, the loss of members, and was even captured by the topic model as a distinct topic.", "Another limitation was the lack of data available from before 2019, when r/antiwork was a smaller and more focused community.", "Had we been able to include the earliest data, we might have seen greater differences in the topic distribution between then and 2022 than what we observed in our study.", "However, being a small sample, it would have had significant variance, leading to issues with the interpretation of results." ], [ "Future Research", "In this study, we focused on characterising the development of r/antiwork and looked at how the surge of new members impacted user behaviour and what topics were being discussed.", "In future research, we plan to further investigate the aftermath of the Fox News interview.", "On January 26 2022, a subreddit called r/WorkReform was founded by disgruntled members of r/antiwork, gaining over 400,000 subscribers within 24 hours.", "We want to investigate the differences between the two subreddits in terms of users, topics discussed and reactions to major events, such as the overturning of Roe v. Wade in June 2022.", "Second, we want to take a deeper look at the users of r/antiwork, using their other activity on Reddit to understand why they behave the way they do.", "We believe that users who want to post about quitting their job could be more dissimilar to one another than heavy commenters whose interests are more likely to be focused on topics in r/antiwork." 
], [ "Conclusion", "In this paper, we presented an study of how subscribers, posts and comments on r/antiwork were impacted by events in the media, how heavy and light user behaviour differed from one another, and a content analysis based on topic modelling to show how the discourse on the subreddit evolved.", "We have shown that, despite the continuing rise of subscribers, activity on r/antiwork collapsed after the Fox News interview on January 25 2022.", "We showed that heavy commenters and light posters have a disproportionate influence on subreddit activity, making almost a third of overall comments and posts, respectively.", "Over time, light posters have become responsible for an increasing proportion of posts, reaching almost 50% of posts by the end of July 2022.", "Heavy and light commenters, however, appeared unaffected by the surge of users, being responsible for approximately the same number of comments per post throughout the period studied.", "Commenting trends were not even impacted when 4.4% of heavy commenters made their last comment on r/antiwork between January 26-28 2022 after the broadcast of the Fox News interview.", "Lastly, the influx of new users did not appear to change the topical content of discussion: all three time periods had the same top-3 topics: Quitting, Reddit and Mental Health.", "Each time period had distinct topics, but they tended to be related to seasonal events and ongoing developments in the news.", "Overall, we found no evidence of major shifts in the topical content of discussion over the period studied." ] ]
2210.07796
[ [ "Thermal Transitions in Dense Two-Colour QCD" ], [ "Abstract The infamous sign problem makes it impossible to probe dense (baryon density $\\mu_B>0$) QCD at temperatures near or below the deconfinement threshold.", "As a workaround, one can explore QCD-like theories such as two-colour QCD (QC2D) which don't suffer from this sign problem but are qualitively similar to real QCD.", "Previous studies on smaller lattice volumes have investigated deconfinement and colour superfluid to normal matter transitions.", "In this study we look at a larger lattice volume $N_s=24$ in an attempt to disentangle finite volume and finite temperature effects.", "We also fit to a larger number of diquark sources to better allow for extrapolation to zero diquark source." ], [ "Introduction", "The objective of lattice QCD is to solve the path integral for an operator ${O}[\\Phi ]$ $\\langle {O}\\rangle =\\frac{1}{Z}\\int {D}[\\Phi ]{O}[\\Phi ]e^{-S[\\Phi ]}$ where $Z$ is the partition function, $\\Phi $ is shorthand for all the fields and $S$ the action.", "$S$ plays the role of a probability distribution function allowing us to apply Monte Carlo techniques to solve this integral.", "Gauge configurations $U$ are produced with probability weight $e^{-S[U]}=\\det {M[U]e^{-S_G[U]}}$ where $M[U]$ is the fermion matrix.", "For the $\\text{SU}(3)$ gauge group with a non-zero baryon chemical potential $\\mu _B$ , $\\det M[U]$ can take on complex values, yielding a complex probability density.", "This means that dense QCD systems such as neutron stars and colour superconducting states are beyond the scope of current lattice simulation techniques.", "Alternatives such as reweighting, analytic continuation, the complex Langevin method, effective field theories and studies of other gauge groups are currently being employed.", "An overview of the first four approaches can be found in [1] with this study using the latter approach.", "For an $\\text{SU}(2)$ gauge group and an even number of flavours the determinant and Pfaffian are both real and non-negative for $\\mu _B>0$ , thus allowing Monte Carlo techniques to be used.", "This two-colour version of QCD has some interesting properties, such as the role of baryons being played by bosonic diquarks.", "But qualitatively it is similar to three-colour QCD, sharing noticeable features such as confinement and chiral symmetry breaking.", "Figure: Phase diagram of QC2D for m π m ρ =0.80(1)\\frac{m_\\pi }{m_\\rho }=0.80(1) .", "The orange hatchedarea is the deconfinement crossover region, black circles correspond to the superfluid transition and the bluetriangles the inflexion point of the Polyakov loop from ." 
], [ "Simulation details", "For this simulation we are using an unimproved Wilson action with two quark flavours, with simulation parameters from table REF .", "The scale was set by taking the string tension $\\sqrt{\\sigma }={440}{}$ and fitting the static quark potential to the Cornell form $ V(r)=C+\\frac{\\alpha }{r}+\\sigma r$ at zero chemical potential [2], [3].", "We look at a fixed chemical potential $\\mu ={443}{}$ and conduct a temperature sweep.", "These lattice parameters were previously used for a chemical potential scan on a spatial extent of $N_s=12$ in [4] and both temperature and chemical potential scans on a spatial extent of $N_s=16$ in [5].", "The code used for this simulation was first used in [6] and the version used for this run can be found here [7] on Zenod on Zenodo.", "At low temperatures we expect to find a colour superfluid phase.", "As the temperature increases we then expect to see a dense hadronic phase, akin to neutron stars.", "Beyond that we expect to undergo crossover transition to the deconfined quark-gluon plasma.", "The temperature is given by $T=\\frac{1}{a_\\tau N_\\tau }$ There are two methods of controlling the temperature.", "By changing the lattice spacing in the temporal extent $a_\\tau $ it is possible to continuously vary the temperature.", "However this requires setting the scale for every value of $a_\\tau $ which is time and resource intensive.", "Instead we vary the number of sites along the time direction and use a fixed $a_\\tau $ .", "This allows us to complete a temperature scan without setting the scale for each temperature, but at the cost of only being able to look at a discrete set of temperatures.", "Temperatures were scanned at $N_\\tau =3\\text{--}20$ , giving temperatures in the range 55–365.", "As is standard practice we take the largest time extent where $N_\\tau >N_s$ to be\"zero-temperature\", thus use the $N_\\tau =20$ value of an observable for zero temperature subtraction.", "At non-zero baryon density, the fermion matrix acquires a non-zero density of very small eigenvalues, slowing down the computation significantly.", "Introducing a diquark source $j$ lifts these eigenvalues, with \"physical\" results recovered by an extrapolation of $j$ to zero as seen in figure REF .", "Whereas previous studies have used either two or three diquark sources, this is the first time we have done a full temperature scan with four sources and have a full temperature scan with $aj=0.010$ .", "Table: Configurations were generated using the above parameters for a coarse lattice as described in." 
], [ "Diquark Condensate", "The observables being measured are described in [6], but as a recap we'll be looking at the diquark condensate $\\langle qq\\rangle =\\frac{\\kappa }{2}\\langle \\bar{\\psi _1}C\\gamma _5\\tau _2\\bar{\\psi _2^{tr}}-\\psi _2^{tr}C\\gamma _5\\tau _2\\psi _1\\rangle $ where $C$ is the charge conjugation matrix.", "The index of the $\\psi $ term denotes flavour.", "This is the order parameter for the superfluid phase transition (predicted to be a second order transition).", "Figure REF shows the diquark condensate as a function of the diquark source for various temperatures.", "The fit $\\langle qq\\rangle =A+Bj^\\alpha $ was used, with the $y$ -intercept $A$ being used as the $j=0$ value of the diquark condensate.", "This fit has previously been used in [3], [5], [8] on $N_s=12$ and $N_s=16$ lattices using the lattice parameters from this simulation and with a finer lattice spacing $a={0.138(6)}{}$ .", "The $aj=0.01$ values and the larger lattice volume have given us much improved control of the $j=0$ extrapolation.", "For low and high temperatures $\\langle qq\\rangle $ was found to be linear in $j$ .", "But around the superfluid phase transition the fit is non-linear, with the exponent reaching its minimum around the transition.", "Figure: Unrenormalised diquark condensate 〈qq〉\\langle qq\\rangle as a function of diquark source jj and temperature TT.We extrapolated jj to zero in figure to obtain the hollow points in figure .Figure REF shows the diquark condensate as a function of temperature for various diquark sources.", "The hollow points correspond to the zero diquark extrapolation.", "We used a cubic spline for the interpolation.", "A linear fit of the inflexion points of the three lowest diquark sources suggests the superfluid phase transition occurs around $T\\sim {84}{}$ , significantly lower than the deconfinement crossover.", "This indicates that the superfluid phase transition is indeed distinct from the deconfinement crossover as seen in figure REF .", "However an improved action and more data are required to nail down the transition temperature more precisely, in addition to an error analysis to give proper bounds on the transition temperature." ], [ "Thermodynamics", "We are also interested in thermodynamic observables such as the quark number density $n_q$ , the quark energy $\\varepsilon _q$ and the trace anomaly $T_{\\mu \\mu }=\\varepsilon -3P$ We evaluated the pressure and energy density the using derivative the method, with the Karsch coefficients calculated in [3].", "These results are qualitatively consistent with recent results from [9], [10].", "The energy density exhibits similar behaviour.", "The dip in the density in figure REF is in the same region as the superfluid phase transition.", "This is remarkable as $\\frac{n_q}{n_{SB}}$ is not an order parameter for the phase transition.", "The trace anomaly itself is smaller than expected.", "Also interesting is how the quark and gluon contributions nearly cancel each other out at most temperatures.", "This may be due to the choice of regularisation.", "The trace anomaly is constant and consistently zero up until $T\\sim {150}{}$ , suggesting that we are in a conformal régime.", "Is this a sign of a second transition from conformal to non-conformal physics?" 
], [ "Chiral and Deconfinement Transition", "Another quantity of interest is the Polyakov loop expectation value, which is the order parameter for the deconfinement crossover in pure gauge theory.", "The Polyakov loop was renormalised using the procedure described in [11] with the renormalisation constants previously calculated on a smaller volume in [3].", "The Polyakov loop renormalisation is temperature dependent, unlike that of the diquark condensate.", "$ L_R\\left(T,\\mu \\right)=Z_L^{N_\\tau } L_0\\left(\\frac{1}{a_\\tau N_\\tau },\\mu \\right)$ We consider two renormalisation schemes to determine $Z_L$ Table: NO_CAPTIONThe chiral condensate with Wilson fermions has both multiplicative and additive renormalisations.", "The additive and multiplicative renormalisations are a constant shift and constant factor respectively.", "Figure REF has undergone additive renormalisation by subtracting the zero temperature value of $\\langle qq \\rangle $ , but multiplicative renormalisation has not been carried out.", "This can be done using the proceedure found in [12], [13], [14] but would merely amount to an overall rescaling of the data in figure REF .", "The change in behaviour from constant to decreasing at $T\\sim {150}{}$ suggests that the crossover coincides with the deconfinement crossover, not the superfluid transition." ], [ "Outlook", "The larger lattice volume has shown us that finite volume errors are small for these parameters.", "The superfluid phase transition and deconfinement crossover are distinct.", "Further investigation is needed into the unexpectedly small trace anomaly.", "Work is underway to implement a Symanzik improved fermion action.", "We have also started tuning for a finer lattice with lighter quarks.", "Combined with the improved action these \"light-fine\" quarks will give us better control over errors in addition to providing more physically realistic measurements." ], [ "Acknowledgements", "D. Lawlor would like to acknowledge support from the National University of Ireland, Maynooth's John and Pat Hume Scholarship.", "The authors wish to acknowledge the Irish Centre for High-End Computing (ICHEC) for the provision of computational facilities and support.", "This work was performed using the DiRAC Data Intensive service at Leicester, operated by the University of Leicester IT Services, which forms part of the STFC DiRAC HPC Facility (www.dirac.ac.uk).", "The equipment was funded by BEIS capital funding via STFC capital grant ST/K000373/1 and ST/R002363/1 and STFC DiRAC Operations grant ST/R001014/1.", "DiRAC is part of the National e-Infrastructure." ] ]
2210.07731
[ [ "Model-Based Imitation Learning for Urban Driving" ], [ "Abstract An accurate model of the environment and the dynamic agents acting in it offers great potential for improving motion planning.", "We present MILE: a Model-based Imitation LEarning approach to jointly learn a model of the world and a policy for autonomous driving.", "Our method leverages 3D geometry as an inductive bias and learns a highly compact latent space directly from high-resolution videos of expert demonstrations.", "Our model is trained on an offline corpus of urban driving data, without any online interaction with the environment.", "MILE improves upon prior state-of-the-art by 31% in driving score on the CARLA simulator when deployed in a completely new town and new weather conditions.", "Our model can predict diverse and plausible states and actions, that can be interpretably decoded to bird's-eye view semantic segmentation.", "Further, we demonstrate that it can execute complex driving manoeuvres from plans entirely predicted in imagination.", "Our approach is the first camera-only method that models static scene, dynamic scene, and ego-behaviour in an urban driving environment.", "The code and model weights are available at https://github.com/wayveai/mile." ], [ "Introduction", "From an early age we start building internal representations of the world through observation and interaction [1].", "Our ability to estimate scene geometry and dynamics is paramount to generating complex and adaptable movements.", "This accumulated knowledge of the world, part of what we often refer to as common sense, allows us to navigate effectively in unfamiliar situations [2].", "In this work, we present MILE, a Model-based Imitation LEarning approach to jointly learn a model of the world and a driving policy.", "We demonstrate the effectiveness of our approach in the autonomous driving domain, operating on complex visual inputs labelled only with expert action and semantic segmentation.", "Unlike prior work on world models [3], [4], [5], our method does not assume access to a ground truth reward, nor does it need any online interaction with the environment.", "Further, previous environments in OpenAI Gym [3], MuJoCo [4], and Atari [5] were characterised by simplified visual inputs as small as $64\\times 64$ images.", "In contrast, MILE operates on high-resolution camera observations of urban driving scenes.", "Driving inherently requires a geometric understanding of the environment, and MILE exploits 3D geometry as an important inductive bias by first lifting image features to 3D and pooling them into a bird's-eye view (BeV) representation.", "The evolution of the world is modelled by a latent dynamics model that infers compact latent states from observations and expert actions.", "The learned latent state is the input to a driving policy that outputs vehicle control, and can additionally be decoded to BeV segmentation for visualisation and as a supervision signal.", "Our method also relaxes the assumption made in some recent work [6], [7] that neither the agent nor its actions influence the environment.", "This assumption rarely holds in urban driving, and therefore MILE is action-conditioned, allowing us to model how other agents respond to ego-actions.", "We show that our model can predict plausible and diverse futures from latent states and actions over long time horizons.", "It can even predict entire driving plans in imagination to successfully execute complex driving manoeuvres, such as negotiating a roundabout, or 
swerving to avoid a motorcyclist (see videos in the supplementary material).", "We showcase the performance of our model on the driving simulator CARLA [8], and demonstrate a new state-of-the-art.", "MILE achieves a 31% improvement in driving score with respect to previous methods [9], [10] when tested in a new town and new weather conditions.", "Finally, during inference, because we model time with a recurrent neural network, we can maintain a single state that summarises all the past observations and then efficiently update the state when a new observation is available.", "We demonstrate that this design decision has important benefits for deployment in terms of latency, with negligible impact on the driving performance.", "To summarise the main contributions of this paper: We introduce a novel model-based imitation learning architecture that scales to the visual complexity of autonomous driving in urban environments by leveraging 3D geometry as an inductive bias.", "Our method is trained solely using an offline corpus of expert driving data, and does not require any interaction with an online environment or access to a reward, offering strong potential for real-world application.", "Our camera-only model sets a new state-of-the-art on the CARLA simulator, surpassing other approaches, including those requiring LiDAR inputs.", "Our model predicts a distribution of diverse and plausible future states and actions.", "We demonstrate that it can execute complex driving manoeuvres from plans entirely predicted in imagination." ], [ "Related Work", "Our work is at the intersection of imitation learning, 3D scene representation, and world modelling." ], [ "Imitation learning.", "Although the first end-to-end method for autonomous driving was envisioned more than 30 years ago [11], early autonomous driving approaches were dominated by modular frameworks, where each module solves a specific task [12], [13], [14].", "Recent years have seen the development of several end-to-end self-driving systems that show strong potential to improve driving performance by predicting driving commands from high-dimensional observations alone.", "Conditional imitation learning has proven to be one successful method to learn end-to-end driving policies that can be deployed in simulation [15] and real-world urban driving scenarios [16].", "Nevertheless, the difficulty of learning end-to-end policies from high-dimensional visual observations and expert trajectories alone has been highlighted [17].", "Several works have attempted to overcome such difficulties by moving past pure imitation learning.", "DAgger [18] proposes iterative dataset aggregation to collect data from trajectories that are likely to be experienced by the policy during deployment.", "NEAT [19] additionally supervises the model with BeV semantic segmentation.", "ChauffeurNet [20] exposes the learner to synthesised perturbations of the expert data in order to produce more robust driving policies.", "Learning from All Vehicles (LAV) [10] boosts sample efficiency by learning behaviours not only from the ego vehicle, but from all vehicles in the scene.", "Roach [9] presents an agent trained with supervision from a reinforcement learning coach that was trained on-policy and with access to privileged information." 
], [ "3D scene representation.", "Successful planning for autonomous driving requires being able to understand and reason about the 3D scene, and this can be challenging from monocular cameras.", "One common solution is to condense the information from multiple cameras to a single bird's-eye representation of the scene.", "This can be achieved by lifting each image in 3D (by learning a depth distribution of features) and then splatting all frustums into a common rasterised BeV grid [21], [22], [23].", "An alternative approach is to rely on transformers to learn the direct mapping from image to bird's-eye view [24], [25], [26], without explicitly modelling depth." ], [ "World models.", "Model-based methods have mostly been explored in a reinforcement learning setting and have been shown to be extremely successful [3], [27], [5].", "These methods assume access to a reward, and online interaction with the environment, although progress has been made on fully offline reinforcement learning [28], [29].", "Model-based imitation learning has emerged as an alternative to reinforcement learning in robotic manipulation [30] and OpenAI Gym [31].", "Even though these methods do not require access to a reward, they still require online interaction with the environment to achieve good performance.", "Learning the latent dynamics of a world model from image observations was first introduced in video prediction [32], [33], [34].", "Most similar to our approach, [4], [5] additionally modelled the reward function and optimised a policy inside their world model.", "Contrarily to prior work, our method does not assume access to a reward function, and directly learns a policy from an offline dataset.", "Additionally, previous methods operate on simple visual inputs, mostly of size $64\\times 64$ .", "In contrast, MILE is able to learn the latent dynamics of complex urban driving scenes from high resolution $600\\times 960$ input observations, which is important to ensure small details such as traffic lights can be perceived reliably." 
], [ "Trajectory forecasting.", "The goal of trajectory forecasting is to estimate the future trajectories of dynamic agents using past physical states (e.g.", "position, velocity), and scene context (e.g.", "as an offline HD map) [35], [36], [37], [38].", "World models build a latent representation of the environment that explains the observations from the sensory inputs of the ego-agent (e.g.", "camera images) conditioned on their actions.", "While trajectory forecasting methods only model the dynamic scene, world models jointly reason on static and dynamic scenes.", "The future trajectories of moving agents is implicitly encoded in the learned latent representation of the world model, and could be explicitly decoded given we have access to future trajectory labels.", "[35], [37], [38] forecast the future trajectory of moving agents, but did not control the ego-agent.", "They focused on the prediction problem and not on learning expert behaviour from demonstrations.", "[39] inferred future trajectories of the ego-agent from expert demonstrations, and conditioned on some specified goal to perform new tasks.", "[36] extended their work to jointly model the future trajectories of moving agents as well as of the ego-agent.", "Our proposed model jointly models the motion of other dynamics agents, the behaviour of the ego-agent, as well as the static scene.", "Contrary to prior work, we do not assume access to ground truth physical states (position, velocity) or to an offline HD map for scene context.", "Our approach is the first camera-only method that models static scene, dynamic scene, and ego-behaviour in an urban driving environment." ], [ "MILE: Model-based Imitation LEarning", "In this section, we present MILE: our method that learns to jointly control an autonomous vehicle and model the world and its dynamics.", "An overview of the architecture is presented in mile-fig:architecture and the full description of the network can be found in mile-appendix:model-description.", "We begin by defining the generative model (subsection:probabilisticmodel), and then derive the inference model (subsection:variationalinference).", "subsection:inferencemodel and subsection:generativemodel describe the neural networks that parametrise the inference and generative models respectively.", "Finally, in subsection:imagine we show how our model can predict future states and actions to drive in imagination." 
], [ "Probabilistic Generative Model", "Let ${\\mathbf {o}}_{1:T}$ be a sequence of $T$ video frames with associated expert actions ${\\mathbf {a}}_{1:T}$ and ground truth BeV semantic segmentation labels ${\\mathbf {y}}_{1:T}$ .", "We model their evolution by introducing latent variables ${\\mathbf {s}}_{1:T}$ that govern the temporal dynamics.", "The initial distribution is parameterised as ${\\mathbf {s}}_1 \\sim \\mathcal {N}({\\mathbf {0}}, \\mathbf {I})$ , and we additionally introduce a variable ${\\mathbf {h}}_1 \\sim \\delta ({\\mathbf {0}})$ that serves as a deterministic history.", "The transition consists of (i) a deterministic update ${\\mathbf {h}}_{t+1} = f_{\\theta }({\\mathbf {h}}_t, {\\mathbf {s}}_t)$ that depends on the past history ${\\mathbf {h}}_t$ and past state ${\\mathbf {s}}_t$ , followed by (ii) a stochastic update ${\\mathbf {s}}_{t+1} \\sim \\mathcal {N}(\\mu _{\\theta }({\\mathbf {h}}_{t+1}, {\\mathbf {a}}_t), \\sigma _{\\theta }({\\mathbf {h}}_{t+1}, {\\mathbf {a}}_t)\\mathbf {I})$ , where we parameterised ${\\mathbf {s}}_t$ as a normal distribution with diagonal covariance.", "We model these transitions with neural networks: $f_{\\theta }$ is a gated recurrent cell, and $(\\mu _{\\theta }, \\sigma _{\\theta })$ are multi-layer perceptrons.", "The full probabilistic model is given by equation:full-model.", "${\\left\\lbrace \\begin{array}{ll}{\\mathbf {h}}_1 &\\sim \\delta ({\\mathbf {0}})\\\\{\\mathbf {s}}_1 &\\sim \\mathcal {N}({\\mathbf {0}}, \\mathbf {I})\\\\{\\mathbf {h}}_{t+1} &= f_{\\theta }({\\mathbf {h}}_t, {\\mathbf {s}}_t) \\\\{\\mathbf {s}}_{t+1} & \\sim \\mathcal {N}(\\mu _{\\theta }({\\mathbf {h}}_{t+1}, {\\mathbf {a}}_t), \\sigma _{\\theta }({\\mathbf {h}}_{t+1}, {\\mathbf {a}}_t)\\mathbf {I}) \\\\{\\mathbf {o}}_t &\\sim \\mathcal {N}(g_{\\theta }({\\mathbf {h}}_t, {\\mathbf {s}}_t), \\mathbf {I}) \\\\{\\mathbf {y}}_t &\\sim \\mathrm {Categorical}(l_{\\theta }({\\mathbf {h}}_t, {\\mathbf {s}}_t)) \\\\{\\mathbf {a}}_t &\\sim \\mathrm {Laplace}(\\pi _{\\theta }({\\mathbf {h}}_t, {\\mathbf {s}}_t), \\mathbf {1})\\end{array}\\right.", "}$ with $\\delta $ the Dirac delta function, $g_{\\theta }$ the image decoder, $l_{\\theta }$ the BeV decoder, and $\\pi _{\\theta }$ the policy, which will be described in subsection:generativemodel." 
], [ "Variational Inference", "Following the generative model described in equation:full-model, we can factorise the joint probability as: $\\begin{split}p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T}, {\\mathbf {a}}_{1:T}, {\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}) =\\prod _{t=1}^T p({\\mathbf {h}}_t, {\\mathbf {s}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1},{\\mathbf {a}}_{t-1})p({\\mathbf {o}}_t,{\\mathbf {y}}_t,{\\mathbf {a}}_t|{\\mathbf {h}}_t, {\\mathbf {s}}_t)\\end{split}$ with $p({\\mathbf {h}}_t, {\\mathbf {s}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1},{\\mathbf {a}}_{t-1}) &= p({\\mathbf {h}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1})p({\\mathbf {s}}_t|{\\mathbf {h}}_t,{\\mathbf {a}}_{t-1}) \\\\p({\\mathbf {o}}_t,{\\mathbf {y}}_t,{\\mathbf {a}}_t|{\\mathbf {h}}_t, {\\mathbf {s}}_t) &= p({\\mathbf {o}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t)p({\\mathbf {y}}_t|{\\mathbf {h}}_t, {\\mathbf {s}}_t)p({\\mathbf {a}}_t|{\\mathbf {h}}_t, {\\mathbf {s}}_t)$ Given that ${\\mathbf {h}}_t$ is deterministic according to equation:full-model, we have $p({\\mathbf {h}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1})=\\delta ({\\mathbf {h}}_t -f_{\\theta }({\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1}))$ .", "Therefore, in order to maximise the marginal likelihood of the observed data $p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T}, {\\mathbf {a}}_{1:T})$ , we need to infer the latent variables ${\\mathbf {s}}_{1:T}$ .", "We do this through deep variational inference by introducing a variational distribution $q_{H,S}$ defined and factorised as follows: $q_{H,S} \\triangleq q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T}, {\\mathbf {a}}_{1:T-1}) = \\prod _{t=1}^T q({\\mathbf {h}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1})q({\\mathbf {s}}_t|{\\mathbf {o}}_{\\le t},{\\mathbf {a}}_{<t})$ with $q({\\mathbf {h}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1})=p({\\mathbf {h}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1})$ , the Delta dirac function defined above, and $q({\\mathbf {h}}_1)=\\delta ({\\mathbf {0}})$ .", "We parameterise this variational distribution with a neural network with weights $\\phi $ .", "By applying Jensen's inequality, we can obtain a variational lower bound on the log evidence: $\\log p(&{\\mathbf {o}}_{1:T} &&, {\\mathbf {y}}_{1:T}, {\\mathbf {a}}_{1:T}) \\ge ~\\mathcal {L}({\\mathbf {o}}_{1:T}, {\\mathbf {y}}_{1:T}, {\\mathbf {a}}_{1:T} ; \\theta , \\phi ) \\\\&\\triangleq && \\sum _{t=1}^T \\operatorname{\\mathbb {E}}_{q({\\mathbf {h}}_{1:t},{\\mathbf {s}}_{1:t}|{\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})}\\bigg [\\underbrace{\\log p({\\mathbf {o}}_t|{\\mathbf {h}}_t, {\\mathbf {s}}_t)}_{\\text{image reconstruction}} +~ \\underbrace{\\log p({\\mathbf {y}}_t|{\\mathbf {h}}_t, {\\mathbf {s}}_t)}_{\\text{bird's-eye segmentation}} ~+~ \\underbrace{\\log p({\\mathbf {a}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t)}_{\\text{action}}\\bigg ] \\\\& && - \\sum _{t=1}^T \\operatorname{\\mathbb {E}}_{q({\\mathbf {h}}_{1:t-1},{\\mathbf {s}}_{1:t-1}|{\\mathbf {o}}_{\\le t-1}, {\\mathbf {a}}_{<t-1})}\\bigg [\\underbrace{D_{\\mathrm {KL}}\\Big ( q({\\mathbf {s}}_t | {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t}) ~||~ p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1}) \\Big )}_{\\text{posterior and prior matching}}\\bigg ]$ Please refer to mile-appendix:lower-bound for the full derivation.", "We model $q({\\mathbf {s}}_t | {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})$ as a Gaussian distribution so that the Kullback-Leibler (KL) divergence can be computed in closed-form.", 
"Given that the image observations ${\\mathbf {o}}_t$ are modelled as Gaussian distributions with unit variance, the resulting loss is the mean-squared error.", "Similarly, the action being modelled as a Laplace distribution and the BeV labels as a categorical distribution, the resulting losses are, respectively, $L_1$ and cross-entropy.", "The expectations over the variational distribution can be efficiently approximated with a single sequence sample from $q_{H,S}$ , and backpropagating gradients with the reparametrisation trick [40]." ], [ "Inference Network $\\phi $", "The inference network, parameterised by $\\phi $ , models $q({\\mathbf {s}}_t | {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})$ , which approximates the true posterior $p({\\mathbf {s}}_t | {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})$ .", "It is constituted of two elements: the observation encoder $e_{\\phi }$ , that embeds input images, route map and vehicle control sensor data to a low-dimensional vector, and the posterior network $(\\mu _{\\phi },\\sigma _{\\phi })$ , that estimates the probability distribution of the Gaussian posterior." ], [ "Observation Encoder", "The state of our model should be compact and low-dimensional in order to effectively learn dynamics.", "Therefore, we need to embed the high resolution input images to a low-dimensional vector.", "Naively encoding this image to a 1D vector similarly to an image classification task results in poor performance as shown in mile-section:ablation-studies.", "Instead, we explicitly encode 3D geometric inductive biases in the model.", "Lifting image features to 3D.", "Since autonomous driving is a geometric problem where it is necessary to reason on the static scene and dynamic agents in 3D, we first lift the image features to 3D.", "More precisely, we encode the image inputs ${\\mathbf {o}}_t \\in \\mathbb {R}^{3\\times H \\times W}$ with an image encoder to extract features ${\\mathbf {u}}_t \\in \\mathbb {R}^{C_e\\times H_e \\times W_e}$ .", "Then similarly to [21], we predict a depth probability distribution for each image feature along a predefined grid of depth bins ${\\mathbf {d}}_t \\in \\mathbb {R}^{D\\times H_e, \\times W_e}$ .", "Using the depth probability distribution, the camera intrinsics $K$ and extrinsics $M$ , we can lift the image features to 3D: $\\mathrm {Lift}({\\mathbf {u}}_t, {\\mathbf {d}}_t, K^{-1}, M)) \\in \\mathbb {R}^{C_e\\times D \\times H_e \\times D_e \\times 3}$ .", "Pooling to BeV.", "The 3D feature voxels are then sum-pooled to BeV space using a predefined grid with spatial extent $H_b\\times W_b$ and spatial resolution $b_{\\mathrm {res}}$ .", "The resulting feature is ${\\mathbf {b}}_t \\in \\mathbb {R}^{C_e\\times H_b \\times W_b}$ .", "Mapping to a 1D vector.", "In traditional computer vision tasks (e.g.", "semantic segmentation [41], depth prediction [42]), the bottleneck feature is usually a spatial tensor, in the order of $10^5-10^6$ features.", "Such high dimensionality is prohibitive for a world model that has to match the distribution of the priors (what it thinks will happen given the executed action) to the posteriors (what actually happened by observing the image input).", "Therefore, using a convolutional backbone, we compress the BeV feature ${\\mathbf {b}}_t$ to a single vector ${\\mathbf {x}}^{\\prime }_t \\in \\mathbb {R}^{C^{\\prime }}$ .", "As shown in mile-section:ablation-studies, we found it critical to compress in BeV space rather than directly in image space.", "Route map and speed.", "We provide the 
agent with a goal in the form of a route map [9], which is a small grayscale image indicating to the agent where to navigate at intersections.", "The route map is encoded using a convolutional module, resulting in a 1D feature ${\\mathbf {r}}_t$ .", "The current speed is encoded with fully connected layers as ${\\mathbf {m}}_t$ .", "At each timestep $t$ , the observation embedding ${\\mathbf {x}}_t$ is the concatenation of the image feature, route map feature and speed feature: ${\\mathbf {x}}_t = [{\\mathbf {x}}^{\\prime }_t, {\\mathbf {r}}_t, {\\mathbf {m}}_t] \\in \\mathbb {R}^C$ , with $C=512$." ], [ "Posterior Network", "The posterior network $(\\mu _{\\phi },\\sigma _{\\phi })$ estimates the parameters of the variational distribution $q({\\mathbf {s}}_t | {\\mathbf {o}}_{\\le t},{\\mathbf {a}}_{<t}) \\sim \\mathcal {N}\\left( \\mu _{\\phi }({\\mathbf {h}}_t, {\\mathbf {a}}_{t-1}, e_{\\phi }({\\mathbf {o}}_t)), \\sigma _{\\phi }({\\mathbf {h}}_t, {\\mathbf {a}}_{t-1}, e_{\\phi }({\\mathbf {o}}_t)){\\mathbf {I}}\\right)$ with ${\\mathbf {h}}_{t} = f_{\\theta }({\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})$ .", "Note that ${\\mathbf {h}}_t$ is inferred using $f_{\\theta }$ because we have assumed that ${\\mathbf {h}}_t$ is deterministic, meaning that $q({\\mathbf {h}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1})=p({\\mathbf {h}}_t|{\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1})=\\delta ({\\mathbf {h}}_t -f_{\\theta }({\\mathbf {h}}_{t-1},{\\mathbf {s}}_{t-1}))$ .", "The dimension of the Gaussian distribution is equal to 512." ], [ "Generative Network $\\theta $", "The generative network, parameterised by $\\theta $ , models the latent dynamics $({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T})$ as well as the generative process of $({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T})$ .", "It comprises a gated recurrent cell $f_{\\theta }$ , a prior network $(\\mu _{\\theta }, \\sigma _{\\theta })$ , an image decoder $g_{\\theta }$ , a BeV decoder $l_{\\theta }$ , and a policy $\\pi _{\\theta }$ .", "The prior network estimates the parameters of the Gaussian distribution $p({\\mathbf {s}}_t|{\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1}) \\sim \\mathcal {N}\\left( \\mu _{\\theta }({\\mathbf {h}}_t,\\hat{{\\mathbf {a}}}_{t-1}), \\sigma _{\\theta }({\\mathbf {h}}_t,\\hat{{\\mathbf {a}}}_{t-1}){\\mathbf {I}}\\right)$ with ${\\mathbf {h}}_{t} = f_{\\theta }({\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})$ and $\\hat{{\\mathbf {a}}}_{t-1}=\\pi _{\\theta }({\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})$ .", "Since the prior does not have access to the ground truth action ${\\mathbf {a}}_{t-1}$ , the latter is estimated with the learned policy $\\hat{{\\mathbf {a}}}_{t-1} = \\pi _{\\theta }({\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})$ .", "The Kullback-Leibler divergence loss between the prior and posterior distributions can be interpreted as follows.", "Given the past state $({\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})$ , the objective is to predict the distribution of the next state ${\\mathbf {s}}_t$ .", "As we model an active agent, this transition is decomposed into (i) action prediction and (ii) next state prediction.", "This transition estimation is compared to the posterior distribution that has access to the ground truth action ${\\mathbf {a}}_{t-1}$ , and the image observation ${\\mathbf {o}}_t$ .", "The prior distribution tries to match the posterior distribution.", "This divergence matching framework ensures the model predicts actions and future states that explain the observed data.", "The 
divergence of the posterior from the prior measures how many nats of information were missing from the prior when observing the posterior.", "At training convergence, the prior distribution should be able to model all action-state transitions from the expert dataset.", "The image and BeV decoders have an architecture similar to StyleGAN [43].", "The prediction starts as a learned constant tensor, and is progressively upsampled to the final resolution.", "At each resolution, the latent state is injected into the network with adaptive instance normalisation.", "This allows the latent states to modulate the predictions at different resolutions.", "The policy is a multi-layer perceptron.", "Please refer to mile-appendix:model-description for a full description of the neural networks." ], [ "Imagining Future States and Actions", "Our model can imagine future latent states by using the learned policy to infer actions $\\hat{{\\mathbf {a}}}_{T+i} = \\pi _{\\theta }({\\mathbf {h}}_{T+i}, {\\mathbf {s}}_{T+i})$ , predicting the next deterministic state ${\\mathbf {h}}_{{T+i+1}} = f_{\\theta }({\\mathbf {h}}_{T+i}, {\\mathbf {s}}_{T+i})$ and sampling from the prior distribution ${\\mathbf {s}}_{T+i+1} \\sim \\mathcal {N}(\\mu _{\\theta }({\\mathbf {h}}_{T+i+1},\\hat{{\\mathbf {a}}}_{T+i}), \\sigma _{\\theta }({\\mathbf {h}}_{T+i+1},\\hat{{\\mathbf {a}}}_{T+i}){\\mathbf {I}})$ , for $i\\ge 0$ .", "This process can be iteratively applied to generate sequences of longer futures in latent space, and the predicted futures can be visualised through the decoders." ], [ "Dataset.", "The training data was collected in the CARLA simulator with an expert reinforcement learning (RL) agent [9] that was trained using privileged information as input (BeV semantic segmentations and vehicle measurements).", "This RL agent generates more diverse runs and has greater driving performance than CARLA's in-built autopilot [9].", "We collect data at $25\\mathrm {Hz}$ in four different training towns (Town01, Town03, Town04, Town06) and four weather conditions (ClearNoon, WetNoon, HardRainNoon, ClearSunset) for a total of $2.9\\mathrm {M}$ frames, or 32 hours of driving data.", "At each timestep, we save a tuple $({\\mathbf {o}}_t, \\mathbf {route}_t, \\mathbf {speed}_t, {\\mathbf {a}}_t, {\\mathbf {y}}_t)$ , with ${\\mathbf {o}}_t \\in \\mathbb {R}^{3\\times 600\\times 960}$ the forward camera RGB image, $\\mathbf {route}_t \\in \\mathbb {R}^{1\\times 64\\times 64}$ the route map (visualised as an inset on the top right of the RGB images in Figure REF ), $\\mathbf {speed}_t \\in \\mathbb {R}$ the current velocity of the vehicle, ${\\mathbf {a}}_t \\in \\mathbb {R}^2$ the action executed by the expert (acceleration and steering), and ${\\mathbf {y}}_t \\in \\mathbb {R}^{C_b \\times 192 \\times 192}$ the BeV semantic segmentation.", "There are $C_b = 8$ semantic classes: background, road, lane marking, vehicles, pedestrians, and traffic light states (red, yellow, green).", "In urban driving environments, the dynamics of the scene do not contain high-frequency components, which allows us to subsample frames at $5\\mathrm {Hz}$ in our sequence model." ], [ "Training.", "Our model was trained for $50,000$ iterations with a batch size of 64 on 8 V100 GPUs, with training sequence length $T=12$ .", "We used the AdamW optimiser [44] with learning rate $10^{-4}$ and weight decay $0.01$ ." 
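As a concrete illustration of how the likelihood terms of the lower bound translate into the per-timestep losses used during these training iterations (Gaussian image likelihood gives a mean-squared error, the categorical BeV likelihood a cross-entropy, and the Laplace action likelihood an $L_1$ loss), here is a minimal PyTorch sketch; the function and argument names are our own, and loss weights are omitted (the appendix notes the image-reconstruction weight is set to 0 in the final model).

```python
import torch.nn.functional as F

def timestep_losses(o_hat, o, y_logits, y, a_hat, a, kl):
    """Per-timestep terms of the (negative) lower bound.
    o_hat/o: reconstructed/observed images; y_logits: BeV class logits;
    y: BeV class indices; a_hat/a: predicted/expert actions;
    kl: prior/posterior KL term computed in closed form."""
    image_loss = F.mse_loss(o_hat, o)          # Gaussian with unit variance
    bev_loss = F.cross_entropy(y_logits, y)    # categorical BeV likelihood
    action_loss = F.l1_loss(a_hat, a)          # Laplace action likelihood
    return image_loss + bev_loss + action_loss + kl
```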
], [ "Metrics.", "We report metrics from the CARLA challenge [45] to measure on-road performance: route completion, infraction penalty, and driving score.", "These metrics are however very coarse, as they only give a sense of how well the agent performs with hard penalties (such as hitting virtual pedestrians).", "Core driving competencies such as lane keeping and driving at an appropriate speed are obscured.", "Therefore we also report the cumulative reward of the agent.", "At each timestep the reward [46] penalises the agent for deviating from the lane center, for driving too slowly/fast, or for causing infractions.", "It measures how well the agent drives at the timestep level.", "In order to account for the length of the simulation (due to various stochastic events, it can be longer or shorter), we also report the normalised cumulative reward.", "More details on the experimental setting is given in mile-appendix:experimental-setting." ], [ "Driving Performance", "We evaluate our model inside the CARLA simulator on a town and weather conditions never seen during training.", "We picked Town05 as it is the most complex testing town, and use the 10 routes of Town05 as specified in the CARLA challenge [45], in four different weather conditions.", "mile-table:carla-challenge shows the comparison against prior state-of-the-art methods: CILRS [17], LBC [47], TransFuser [48], Roach [9], and LAV [10].", "We evaluate these methods using their publicly available pre-trained weights.", "Table: Driving performance on a new town and new weather conditions in CARLA.", "Metrics are averaged across three runs.", "We include reward signals from past work where available.MILE outperforms previous works on all metrics, with a 31% relative improvement in driving score with respect to LAV.", "Even though some methods have access to additional sensor information such as LiDAR (TransFuser [48], LAV [10]), our approach demonstates superior performance while only using RGB images from the front camera.", "Moreover, we observe that our method almost doubles the cumulative reward of Roach (which was trained on the same dataset) and approaches the performance of the privileged expert." ], [ "Ablation Studies", "We next examine the effect of various design decisions in our approach." ], [ "3D geometry.", "We compare our model to the following baselines.", "Single frame that predicts the action and BeV segmentation from a single image observation.", "Single frame, no 3D which is the same model but without the 3D lifting step.", "And finally, No 3D which is MILE without 3D lifting.", "As shown in mile-table:ablations, in both cases, there is a significant drop in performance when not modelling 3D geometry.", "For the single frame model, the cumulative reward drops from 6084 to 1878.", "For MILE, the reward goes from 7621 to 4564.", "These results highlights the importance of the 3D geometry inductive bias." 
], [ "Probabilistic modelling.", "At any given time while driving, there exist multiple possible valid behaviours.", "For example, the driver can slightly adjust its speed, decide to change lane, or decide what is a safe distance to follow behind a vehicle.", "A deterministic driving policy cannot model these subtleties.", "In ambiguous situations where multiple choices are possible, it will often learn the mean behaviour, which is valid in certain situations (e.g.", "the mean safety distance and mean cruising speed are reasonable choices), but unsafe in others (e.g.", "in lane changing: the expert can change lane early, or late; the mean behaviour is to drive on the lane marking).", "We compare MILE with a No prior/post.", "matching baseline that does not have a Kullback-Leibler divergence loss between the prior and posterior distributions, and observe this results in a drop in cumulative reward from 7621 to 6084.", "Table: Ablation studies.", "We report driving performance on a new town and new weather conditions in CARLA.", "Results are averaged across three runs." ], [ "Fully Recurrent Inference in Closed-Loop Driving", "We compare the closed-loop performance of our model with two different strategies: Reset state: for every new observation, we re-initialise the latent state and recompute the new state $[h_T, s_T]$ , with $T$ matching the training sequence length.", "Fully recurrent: the latent state is initialised at the beginning of the evaluation, and is recursively updated with new observations.", "It is never reset, and instead, the model must have learned a representation that generalises to integrating information for orders of magnitude more steps than the $T$ used during training.", "mile-table:online-deployment shows that our model can be deployed with recurrent updates, matching the performance of the Reset state approach, while being much more computationally efficient ($7\\times $ faster from $6.2\\mathrm {Hz}$ with $T=12$ of fixed context to $43.0\\mathrm {Hz}$ with a fully recurrent approach).", "A hypothesis that could explain why the Fully recurrent deployment method works well is because the world model has learned to always discard all past information and rely solely on the present input.", "To test this hypothesis, we add Gaussian noise to the past latent state during deployment.", "If the recurrent network is simply discarding all past information, its performance should not be affected.", "However in mile-table:online-deployment, we see that the cumulative reward significantly decreases, showing our model does not simply discard all past context, but actively makes use of it.", "Table: Comparison of two deployment methods.", "(i) Reset state: for each new observation a fresh state is computed from a zero-initialised latent state using the last TT observations, and (ii) Fully recurrent: the latent state is recurrently updated with new observations.", "We report driving performance on an unseen town and unseen weather conditions in CARLA.", "Frequency is in Hertz." 
], [ "Long Horizon, Diverse Future Predictions", "Our model can imagine diverse futures in the latent space, which can be decoded to BeV semantic segmentation for interpretability.", "mile-fig:multimodal-predictions shows examples of multi-modal futures predicted by MILE.", "Figure: Qualitative example of multi-modal predictions, for 8 seconds in the future.", "BeV segmentation legend: black = ego-vehicle, white = background, gray = road, dark gray=lane marking, blue = vehicles, cyan = pedestrians, green/yellow/red = traffic lights.", "Ground truth labels (GT) outside the field-of-view of the front camera are masked out.", "In this example, we visualise two distinct futures predicted by the model: 1) (top row) driving through the green light, 2) (bottom row) stopping because the model imagines the traffic light turning red.", "Note the light transition from green, to yellow, to red, and also at the last frame t+8.0st+8.0\\mathrm {s} how the traffic light in the left lane turns green." ], [ "Latent State Dimension", "In our model, we have set the latent state to be a low-dimensional 1D vector of size 512.", "In dense image reconstruction however, the bottleneck feature is often a 3D spatial tensor of dimension (channel, height, width).", "We test whether it is possible to have a 3D tensor as a latent probabilistic state instead of a 1D vector.", "We change the latent state to have dimension $256\\times 12 \\times 12$ (40k distributions), $128\\times 24 \\times 24$ (80k distributions), and $64\\times 48 \\times 48$ (160k distributions, which is the typical bottleneck size in dense image prediction).", "Since the latent state is now a spatial tensor, we adapt the recurrent network to be convolutional by switching the fully-connected operations with convolutions.", "We evaluate the model in the reset state and fully recurrent setting and report the results in mile-fig:latent-state-dimensionality.", "[row sep= ,col sep=] dimension reset recurrent 512x1x1 7621 7532 256x12x12 7465 6998 128x24x24 6407 4596 64x48x48 5637 3794 Figure: Analysis on the latent state dimension.", "We report closed-loop driving performance in a new town and new weather in CARLA.In the reset state setting, performance decreases as the dimensionality of the latent state increases.", "Surprisingly, even though the latent space is larger and has more capacity, driving performance is negatively impacted.", "This seems to indicate that optimising the prior and posterior distributions in the latent space is difficult, and especially more so as dimensionality increases.", "The prior, which is a multivariate Gaussian distribution needs to match the posterior, another multivariate Gaussian distribution.", "What makes this optimisation tricky is that the two distributions are non-stationary and change over time during the course of training.", "The posterior needs to extract the relevant information from the high-resolution images and incorporate it in the latent state in order to reconstruct BeV segmentation and regress the expert action.", "The prior has to predict the transition that matches the distribution of the posterior.", "Even more intriguing is when we look at the results in the fully recurrent deployment setting.", "When deployed in a fully recurrent manner in the simulator, without resetting the latent state, the model needs to discard information that is no longer relevant and continuously update its internal state with new knowledge coming from image observations.", "In our original latent state dimension of 512, 
there is almost no difference in driving performance between the two deployment modes.", "The picture is dramatically different when using a higher-dimensional spatial latent state.", "For all the tested dimensions, there is a large gap between the two deployment settings.", "This result seems to indicate that the world model operating on high-dimensional spatial states has not optimally learned this behaviour, unlike the one operating on low-dimensional vector states." ], [ "Driving in Imagination", "Humans are believed to build an internal model of the world in order to navigate in it [49], [50], [51].", "Since the stream of information they perceive is often incomplete and noisy, their brains fill missing information through imagination.", "This explains why it is possible for them to continue driving when blinded by sunlight, for example.", "Even if no visual observations are available for a brief moment, they can still reliably predict their next states and actions to exhibit a safe driving behaviour.", "We demonstrate that, similarly, MILE can execute accurate driving plans entirely predicted from imagination, without having access to image observations.", "We qualitatively show that it can perform complex driving manoeuvres such as navigating a roundabout, pausing at a stop sign, or swerving to avoid a motorcyclist, using an imagined plan from the model (see supplementary material).", "Quantitatively, we measure how accurate the predicted plans are by operating in the fully recurrent setting.", "We alternate between the observing mode where the model can see image observations, and the imagining mode where the model has to imagine the next states and actions, similarly to a driver that temporarily loses sight due to sun glare.", "In mile-appendix:driving-imagination we show that our model can retain the same driving performance with up to 30% of the drive in imagining mode.", "This demonstrates that the model can imagine driving plans that are accurate enough for closed-loop driving.", "Further, it shows that the latent state of the world model can seamlessly switch between the observing and imagining modes.", "The evolution of the latent state is predicted from imagination when observations are not available, and updated with image observations when they become accessible." ], [ "Conclusion", "We presented MILE: a Model-based Imitation LEarning approach for urban driving that jointly learns a driving policy and a world model from offline expert demonstrations alone.", "Our approach exploits geometric inductive biases, operates on high-dimensional visual inputs, and sets a new state-of-the-art on the CARLA simulator.", "MILE can predict diverse and plausible future states and actions, allowing the model to drive from a plan entirely predicted from imagination.", "An open problem is how to infer the driving reward function from expert data, as this would enable explicit planning in the world model.", "Another exciting avenue is self-supervision in order to relax the dependency on the bird's-eye view segmentation labels.", "Self-supervision could fully unlock the potential of world models for real-world driving and other robotics tasks." ], [ "Acknowledgements.", "We would like to thank Vijay Badrinarayanan, Przemyslaw Mazur, and Oleg Sinavski for insightful research discussions.", "We are also grateful to Lorenzo Bertoni, Lloyd Russell, Juba Nait Saada, Thomas Uriot, and the anonymous reviewers for their helpful feedback and comments on the paper." 
], [ "Driving in Imagination", "We deploy the model in the fully recurrent setting, and at fixed intervals: (i) we let the model imagine future states and actions, without observing new images, and execute those actions in the simulator.", "(ii) We then let the model update its knowledge of the world by observing new image frames.", "More precisely, we set the fixed interval to a two-second window, and set a ratio of imagining vs. observing.", "If for example that ratio is set to 0.5, we make the model imagine by sampling from the prior distribution for 1.0s, then sample from the posterior distribution for 1.0s, and alternate between these two settings during the whole evaluation run.", "We make the ratio of imagining vs. observing vary from 0 (always observing each image frame, which is the default behaviour) to $0.6$ (imagining for $60\\%$ of the time).", "We report both the driving performance and perception accuracy in mile-fig:dreaming-driving.", "The driving performance is measured with the driving score, and the perception accuracy using the intersection-over-union with the ground truth BeV semantic segmentation.", "We compare MILE with a one-frame baseline which has no memory and only uses a single image frame for inference.", "mile-fig:dreaming-driving-performance shows that our model can imagine for up to $30\\%$ of the time without any significant drop in driving performance.", "After this point, the driving score starts decreasing but remains much higher than its one-frame counterpart.", "In mile-fig:dreaming-driving-perception, we see that the predicted states remain fairly accurate (by decoding to BeV segmentation), even with an important amount of imagining.", "These results demonstrate that our model can predict plausible future states and actions, accurate enough to control a vehicle in closed-loop.", "fig:dreaming-driving-traffic illustrates an example of the model driving in imagination and successfully negotiating a roundabout.", "Figure: Driving in imagination.", "We report the closed-loop driving performance and perception accuracy in CARLA when the model imagines future states and actions and does not observe a proportion of the images.Figure: An example of the model imagining and accurately predicting future states and actions to negotiate a roundabout.", "When imagining, the model does not observe the image frames, but predicts the future states and actions from its current latent state." 
], [ "Image Resolution", "In urban driving, small elements in the scene can have an important role in decision making.", "One typical example is traffic lights, which only occupy a small portion of the image, but dictate whether a vehicle can continue driving forward or needs to stop at a red light.", "fig:image-resolution-traffic-light and fig:image-resolution-pedestrian illustrate how traffic lights and pedestrians become much harder to distinguish in lower image resolutions.", "We evaluate the importance of image resolution by training MILE at different resolutions: $75\\times 120$ , $150\\times 240$ , $300\\times 480$ , and $600\\times 960$ (our proposed resolution).", "We report the results in mile-table:resolution and observe a significant decrease in both driving score and cumulative reward.", "The performance drop is most severe in the infraction penalty metric.", "To get a better understanding of what is happening, we detail in mile-table:resolution-breakdown the breakdown of the infractions.", "We report the number of red lights run, the number of vehicle collisions, and the number of pedestrian collisions, all per kilometre driven.", "As the resolution of the image lowers, the number of infractions increases across all modalities (red lights, vehicles, and pedestrians).", "These results highlight the importance of high resolution images to reliably detect traffic lights, vehicles, and pedestrians.", "Table: Analysis on the image resolution.", "We report driving performance on a new town and new weather conditions in CARLA.Table: Analysis on the image resolution.", "We report the breakdown of infraction penalties on a new town and new weather conditions in CARLA.", "The metrics are: number of red lights run, number of vehicle collisions, and number of pedestrian collisions.", "They are normalised per kilometre driven.", "Lower is better.Figure: Input image observation at different resolutions.", "The red traffic light becomes almost indistinguishable in lower resolutions.Figure: Input image observation at different resolutions.", "It becomes increasingly harder to see the pedestrian as the resolution decreases." ], [ "Training Town Evaluation", "We also evaluate our method on towns and weather conditions seen during training.", "As reported in mile-table:carla-challenge-train, our model shows a 21% relative improvement in driving score with respect to Roach.", "Note that the RL expert has a lower performance than in test town Town05, because Town03 was designed as the most complex town [52].", "Table: Driving performance in CARLA on a town and weather conditions seen during training.", "Metrics are averaged across three runs." ], [ "Evaluation in the Settings of Past Works", "We also evaluated our model in the evaluation settings of: TransFuser [48]: the full 10 test routes of Town05 in ClearNoon weather and no scenarios (mile-table:transfuser-setting); LAV [10]: 2 test routes from Town02 and 2 test routes in Town05 in weathers [SoftRainSunset, WetSunset, CloudyNoon, MidRainSunset] and no scenarios (mile-table:lav-setting).", "Table: Driving performance in CARLA in the TransFuser evaluation setting.Table: Driving performance in CARLA in the LAV evaluation setting." 
], [ "Lower Bound Derivation", "appendix]mile-appendix:lower-bound Let $q_{H,S} \\triangleq q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}) = q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {a}}_{1:T-1})$ be the variational distribution (where we have assumed independence of $({\\mathbf {y}}_{1:T}, {\\mathbf {a}}_T)$ given $({\\mathbf {o}}_{1:T}, {\\mathbf {a}}_{1:T-1}$ )), and $p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T})$ be the posterior distribution.", "The Kullback-Leibler divergence between these two distributions writes as: $&D_{\\mathrm {KL}}\\left(q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}) ~||~ p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}) \\right) \\\\=& \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T} \\sim q_{H,S}} \\left[ \\log \\frac{q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T})}{p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T})}\\right] \\\\=& \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T} \\sim q_{H,S}} \\left[ \\log \\frac{q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T})p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T})}{p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}) p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}|{\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T})} \\right] \\\\=& \\log p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}) - \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T} \\sim q_{H,S}} \\left[ \\log p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}|{\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}) \\right] \\\\&+ D_{\\mathrm {KL}}(q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T})~||~p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T})) $ Since $D_{\\mathrm {KL}}\\left(q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}) ~||~ p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}) \\right) \\ge 0$ , we obtain the following evidence lower bound: $\\log p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}) \\ge &\\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T} \\sim q_{H,S}} \\left[ \\log p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}|{\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}) \\right] \\\\&- D_{\\mathrm {KL}}(q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {a}}_{1:T-1})~||~p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T})) $ Let us now calculate the two terms of this lower bound separately.", "On the one hand: $& \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T} \\sim q_{H,S}} \\left[ \\log p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T}|{\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}) \\right] \\\\=&~ \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T} \\sim q_{H,S}} \\left[ \\log \\prod _{t=1}^T p({\\mathbf {o}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) p({\\mathbf {y}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) 
p({\\mathbf {a}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) \\right] \\\\=&~\\sum _{t=1}^T \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:t}, {\\mathbf {s}}_{1:t} \\sim q({\\mathbf {h}}_{1:t}, {\\mathbf {s}}_{1:t}| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})} \\left[ \\log p({\\mathbf {o}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) + \\log p({\\mathbf {y}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) + \\log p({\\mathbf {a}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) \\right] $ where eq:expectation1 follows from eq:generative-model, and eq:last-expectation was obtained by integrating over the remaining latent variables $({\\mathbf {h}}_{t+1:T}, {\\mathbf {s}}_{t+1:T})$ .", "On the other hand: $&D_{\\mathrm {KL}}(q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {a}}_{1:T-1})~||~p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T})) \\\\=&~\\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T} \\sim q_{H,S}} \\left[\\log \\frac{q({\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}| {\\mathbf {o}}_{1:T}, {\\mathbf {a}}_{1:T-1})}{p({\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T})} \\right] \\\\=&~\\int _{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}} q({\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}| {\\mathbf {o}}_{1:T}, {\\mathbf {a}}_{1:T-1}) \\log \\frac{q({\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}| {\\mathbf {o}}_{1:T}, {\\mathbf {a}}_{1:T-1})}{p({\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T})} \\mathop {}\\!\\mathrm {d}{\\mathbf {h}}_{1:T} \\mathop {}\\!\\mathrm {d}{\\mathbf {s}}_{1:T} \\\\=&~\\int _{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}} q({\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}| {\\mathbf {o}}_{1:T}, {\\mathbf {a}}_{1:T-1}) \\log \\left[ \\prod _{t=1}^T \\frac{q({\\mathbf {h}}_t| {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})q({\\mathbf {s}}_t| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})}{p({\\mathbf {h}}_{t} | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1}) p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})} \\right] \\mathop {}\\!\\mathrm {d}{\\mathbf {h}}_{1:T} \\mathop {}\\!\\mathrm {d}{\\mathbf {s}}_{1:T} \\\\=&~\\int _{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}} q({\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}| {\\mathbf {o}}_{1:T}, {\\mathbf {a}}_{1:T-1}) \\log \\left[ \\prod _{t=1}^T \\frac{q({\\mathbf {s}}_t| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})}{p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})} \\right] \\mathop {}\\!\\mathrm {d}{\\mathbf {h}}_{1:T} \\mathop {}\\!\\mathrm {d}{\\mathbf {s}}_{1:T} \\\\$ where eq:kl-step-1 follows from the factorisations defined in eq:generative-model and eq:inference-model.", "The simplification in eq:kl-step-2 results from $q({\\mathbf {h}}_t| {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1}) = p({\\mathbf {h}}_{t} | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})$ .", "Thus: $&D_{\\mathrm {KL}}(q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {a}}_{1:T-1})~||~p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T})) \\\\=&~\\int _{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}} \\left(\\prod _{t=1}^T q({\\mathbf {h}}_{t}| {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})q({\\mathbf {s}}_{t}| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t}) \\right)\\left( \\sum _{t=1}^T \\log \\frac{q({\\mathbf {s}}_t| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})}{p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})} \\right) \\mathop {}\\!\\mathrm {d}{\\mathbf {h}}_{1:T} \\mathop {}\\!\\mathrm {d}{\\mathbf {s}}_{1:T}\\\\=&~\\int _{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}} \\left(\\prod _{t=1}^T q({\\mathbf {h}}_{t}| {\\mathbf 
{h}}_{t-1}, {\\mathbf {s}}_{t-1})q({\\mathbf {s}}_{t}| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t}) \\right)\\Bigg ( \\log \\frac{q({\\mathbf {s}}_1| {\\mathbf {o}}_1)}{p({\\mathbf {s}}_1)}\\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\quad + \\sum _{t=2}^T \\log \\frac{q({\\mathbf {s}}_t| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})}{p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})} \\Bigg ) \\mathop {}\\!\\mathrm {d}{\\mathbf {h}}_{1:T} \\mathop {}\\!\\mathrm {d}{\\mathbf {s}}_{1:T} \\\\=&~\\operatorname{\\mathbb {E}}_{{\\mathbf {s}}_1 \\sim q({\\mathbf {s}}_1|{\\mathbf {o}}_1)} \\left[ \\log \\frac{q({\\mathbf {s}}_1|{\\mathbf {o}}_1)}{p({\\mathbf {s}}_1)}\\right] \\\\&+~\\int _{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}} \\left( \\prod _{t=1}^T q({\\mathbf {h}}_{t}| {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})q({\\mathbf {s}}_{t}| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t}) \\right) \\left(\\sum _{t=2}^T \\log \\frac{q({\\mathbf {s}}_t| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})}{p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})} \\right) \\mathop {}\\!\\mathrm {d}{\\mathbf {h}}_{1:T} \\mathop {}\\!\\mathrm {d}{\\mathbf {s}}_{1:T} \\\\=&~D_{\\mathrm {KL}}( q({\\mathbf {s}}_1|{\\mathbf {o}}_1)~||~p({\\mathbf {s}}_1)) \\\\&+~\\int _{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}} \\left( \\prod _{t=1}^T q({\\mathbf {h}}_{t}| {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})q({\\mathbf {s}}_{t}| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t}) \\right) \\Bigg (\\log \\frac{q({\\mathbf {s}}_2| {\\mathbf {o}}_{1:2}, {\\mathbf {a}}_1)}{p({\\mathbf {s}}_2 | {\\mathbf {h}}_1, {\\mathbf {s}}_1)}\\\\&\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\quad \\hspace{5.0pt}\\; +\\sum _{t=3}^T \\log \\frac{q({\\mathbf {s}}_t| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})}{p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})} \\Bigg ) \\mathop {}\\!\\mathrm {d}{\\mathbf {h}}_{1:T} \\mathop {}\\!\\mathrm {d}{\\mathbf {s}}_{1:T} \\\\=&~D_{\\mathrm {KL}}( q({\\mathbf {s}}_1|{\\mathbf {o}}_1)~||~p({\\mathbf {s}}_1)) + ~\\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_1,{\\mathbf {s}}_1 \\sim q({\\mathbf {h}}_1,{\\mathbf {s}}_1|{\\mathbf {o}}_1)} \\left[ D_{\\mathrm {KL}}( q({\\mathbf {s}}_2|{\\mathbf {o}}_{1:2}, {\\mathbf {a}}_1)~||~p({\\mathbf {s}}_2|{\\mathbf {h}}_1, {\\mathbf {s}}_1)) \\right]\\\\&+~\\int _{{\\mathbf {h}}_{1:T}, {\\mathbf {s}}_{1:T}} \\left( \\prod _{t=1}^T q({\\mathbf {h}}_{t}| {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})q({\\mathbf {s}}_{t}| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t}) \\right) \\left(\\sum _{t=3}^T \\log \\frac{q({\\mathbf {s}}_t| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})}{p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})} \\right) \\mathop {}\\!\\mathrm {d}{\\mathbf {h}}_{1:T} \\mathop {}\\!\\mathrm {d}{\\mathbf {s}}_{1:T} $ where eq:kl-first-term and eq:kl-second-term were obtained by splitting the integral in two and integrating over the remaining latent variables.", "By recursively applying this process to the sum of logarithms indexed by $t$ , we get: $&D_{\\mathrm {KL}}(q({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T}|{\\mathbf {o}}_{1:T},{\\mathbf {a}}_{1:T-1})~||~p({\\mathbf {h}}_{1:T},{\\mathbf {s}}_{1:T})) \\\\=&~ \\sum _{t=1}^T \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:t-1},{\\mathbf {s}}_{1:t-1} \\sim q({\\mathbf {h}}_{1:t-1},{\\mathbf {s}}_{1:t-1}|{\\mathbf {o}}_{\\le t-1},{\\mathbf {a}}_{<t-1})} \\left[ D_{\\mathrm {KL}}( q({\\mathbf 
{s}}_t|{\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})~||~p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})) \\right] $ Finally, we inject eq:last-expectation and eq:last-kl into eq:lower-bound to obtain the desired lower bound: $&\\log p({\\mathbf {o}}_{1:T},{\\mathbf {y}}_{1:T},{\\mathbf {a}}_{1:T})\\\\\\ge &~\\sum _{t=1}^T \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:t}, {\\mathbf {s}}_{1:t} \\sim q({\\mathbf {h}}_{1:t}, {\\mathbf {s}}_{1:t}| {\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})} \\left[ \\log p({\\mathbf {o}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) + \\log p({\\mathbf {y}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) + \\log p({\\mathbf {a}}_t|{\\mathbf {h}}_t,{\\mathbf {s}}_t) \\right]\\\\& - \\sum _{t=1}^T \\operatorname{\\mathbb {E}}_{{\\mathbf {h}}_{1:t-1},{\\mathbf {s}}_{1:t-1} \\sim q({\\mathbf {h}}_{1:t-1},{\\mathbf {s}}_{1:t-1}|{\\mathbf {o}}_{\\le t-1},{\\mathbf {a}}_{<t-1})} \\left[ D_{\\mathrm {KL}}( q({\\mathbf {s}}_t|{\\mathbf {o}}_{\\le t}, {\\mathbf {a}}_{<t})~||~p({\\mathbf {s}}_t | {\\mathbf {h}}_{t-1}, {\\mathbf {s}}_{t-1})) \\right] $" ], [ "Model Description", "We give a full description of MILE.", "The graphical models of the generative and inference models are depicted in mile-fig:probabilistic-model.", "mile-appendix:parameters shows the number of parameters of each component of the model, and mile-appendix:hyperparameters contains all the hyperparameters used during training.", "mile-appendix:inference-model describes the inference network, and mile-appendix:generative-model the generative network." ], [ "Graphical Models", "Figure: Graphical models representing the conditional dependence between states.", "Deterministic and stochastic states are represented by, respectively, squares and circles.", "Observed states are in gray." ], [ "Lifting to 3D.", "The $\\mathrm {Lift}$ operation can be detailed as follows: (i) Using the inverse intrinsics $K^{-1}$ and predicted depth, the features in the pixel image space are lifted to 3D in camera coordinates with a pinhole camera model, (ii) the rigid body motion $M$ transforms the 3D camera coordinates to 3D vehicle coordinates (center of inertia of the ego-vehicle)." 
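A minimal sketch of this two-step $\mathrm{Lift}$ operation for a single camera follows; the tensor shapes and function name are our assumptions, and splatting/pooling into the BeV grid is omitted.

```python
import torch

def lift(u, d, K_inv, M, depth_bins):
    """u: (C, He, We) image features; d: (D, He, We) depth probabilities over
    `depth_bins` (D,); K_inv: (3, 3) inverse intrinsics; M: (4, 4) rigid
    camera-to-vehicle transform.  Returns depth-weighted features and the
    3D vehicle-frame coordinates of every lifted feature."""
    C, He, We = u.shape
    D = depth_bins.shape[0]
    # (i) pinhole back-projection: pixel grid -> rays in camera coordinates
    ys, xs = torch.meshgrid(torch.arange(He), torch.arange(We), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).float()    # (3, He, We)
    rays = torch.einsum("ij,jhw->ihw", K_inv, pix)              # (3, He, We)
    pts_cam = depth_bins.view(D, 1, 1, 1) * rays.unsqueeze(0)   # (D, 3, He, We)
    # (ii) rigid body motion to vehicle coordinates (homogeneous points)
    ones = torch.ones(D, 1, He, We)
    pts_veh = torch.einsum("ij,djhw->dihw", M,
                           torch.cat([pts_cam, ones], dim=1))[:, :3]
    feats = u.unsqueeze(1) * d.unsqueeze(0)                     # (C, D, He, We)
    return feats, pts_veh
```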
], [ "Observation dropout.", "At training time the priors are trained to match posteriors through the KL divergence, however they are not necessarily optimised for robust long term future prediction.", "[4] optimised states for robust multi-step predictions by iteratively applying the transition model and integrating out intermediate states.", "In our case, we supervise priors unrolled with random temporal horizons (i.e.", "predict states at $t+k$ with $k \\ge 1$ ).", "More precisely, during training, with probability $p_{\\text{drop}}$ we sample the stochastic state ${\\mathbf {s}}_t$ from the prior instead of the posterior.", "We call this observation dropout.", "If we denote $X$ the random variable representing the $k$ number of times a prior is unrolled, $X$ follows a geometric distribution with probability of success $(1 - p_{\\text{drop}})$ .", "Observation dropout resembles $z$ -dropout from [53], where the posterior distribution is modelled as a mixture of two Gaussians, one of which comes from the prior.", "During training, some posterior variables are randomly dropped out, forcing other posterior variables to maximise their information extraction from input images.", "Observation dropout can be seen as a global variant of $z$ -dropout since it drops out all posterior variables together." ], [ "Additional details", "The action space is in $\\mathbb {R}^2$ with the first component being the acceleration in $[-1,1]$ .", "Negative values correspond to braking, and positive values to throttle.", "The second component is steering in $[-1, 1]$ , with negative values corresponding to turning left, and positive values to turning right.", "For simplicity, we have set the weight parameter of the image reconstruction to 0.", "In order to improve reconstruction of the bird's-eye view vehicles and pedestrians, we also include an instance segmentation loss [54].", "Finally, we use the KL balancing technique from [5].", "Table: Inference model φ\\phi .Table: Generative model θ\\theta ." ], [ "Experimental Setting", "appendix]mile-appendix:experimental-setting" ], [ "Dataset", "Each run was randomised with a different start and end position, as well as with traffic agents [9].", "A random number of vehicles and pedestrians were spawned in the environment as specified in mile-table:data-collection.", "Table: Uniform sampling intervals of spawned vehicles and pedestrians in each town during training." 
], [ "Metrics", "We report metrics from the CARLA challenge [45] to measure on-road performance: route completion, infraction penalty, and driving score.", "[itemsep=0.3mm, parsep=0pt] Route completion $R_{\\mathrm {completion}} \\in [0, 1]$: for a given simulation scenario, the percentage of route completed by the driving agent.", "The simulation can end early if the agent deviates from the desired route by more than $30\\mathrm {m}$ , or does not take any action for $180\\mathrm {s}$ .", "Infraction penalty $I_{\\mathrm {penalty}}$: multiplicative penalty due to various infractions from the agent (collision with pedestrians/vehicles/static objects, running red lights etc.).", "$I_{\\mathrm {penalty}} \\in [0, 1]$ , with $I_{\\mathrm {penalty}}=1$ meaning no infraction was observed.", "Driving score $D$: measures both how far the agent drives on the given route, but also how well it drives.", "$D$ is defined as $D = R_{\\mathrm {completion}}\\times I_{\\mathrm {penalty}} \\in [0,1]$ , with $D=1$ corresponding perfect driving.", "For a full description of these metrics, please refer to [45].", "We now define how the normalised cumulative reward is defined.", "At every timestep, the environment computes a reward $r\\in [R_{\\mathrm {min}}, 1]$ [46] for the driving agent.", "If $N$ is the number of timesteps the agent was deployed for without hitting a termination criteria, then the cumulative reward $R \\in [N\\times R_{\\mathrm {min}}, N]$ .", "In order to account for the length of the simulation (due to various stochastic events, it can be longer or shorter), we also report the normalised cumulative reward $\\overline{R} =R/N$ .", "We also wanted to highlight the limitations of the driving score as it is obtained by multiplying the route completion with the infraction penalty.", "The route completion (in $[0,1]$ ) can be understood as the recall: how far the agent has travelled along the specified route.", "The infraction penalty (also in $[0, 1]$ ) starts at $1.0$ and decreases with each infraction with multiplicative penalties.", "It can be understood as the precision: how many infractions has the agent successfully avoided.", "Therefore, two models are only comparable at a given recall (or route completion), as the more miles are driven, the more likely the agent risks causing infractions.", "We instead suggest reporting the cumulative reward in future, that overcomes the limitations of the driving score by being measured at the timestep level.", "The more route is driven, the more rewards are accumulated along the way.", "This reward is however modulated by the driving abilities of the model (and can be negative when encountering hard penalties)." ], [ "Evaluation Settings", "We measure the performance of our model on two settings.", "Each evaluation is repeated three times.", "New town, new weathers: the 10 test scenarios in Town05 [45], on 4 unseen weather conditions: SoftRainSunset, WetSunset, CloudyNoon, MidRainSunset.", "Train town, train weathers: the 20 train scenarios in Town03 [45], on 4 train weather conditions: ClearNoon, WetNoon, HardRainNoon, ClearSunset." ] ]
2210.07729
[ [ "Privacy-Preserving and Lossless Distributed Estimation of\n High-Dimensional Generalized Additive Mixed Models" ], [ "Abstract Various privacy-preserving frameworks that respect the individual's privacy in the analysis of data have been developed in recent years.", "However, available model classes such as simple statistics or generalized linear models lack the flexibility required for a good approximation of the underlying data-generating process in practice.", "In this paper, we propose an algorithm for a distributed, privacy-preserving, and lossless estimation of generalized additive mixed models (GAMM) using component-wise gradient boosting (CWB).", "Making use of CWB allows us to reframe the GAMM estimation as a distributed fitting of base learners using the $L_2$-loss.", "In order to account for the heterogeneity of different data location sites, we propose a distributed version of a row-wise tensor product that allows the computation of site-specific (smooth) effects.", "Our adaption of CWB preserves all the important properties of the original algorithm, such as an unbiased feature selection and the feasibility to fit models in high-dimensional feature spaces, and yields equivalent model estimates as CWB on pooled data.", "Next to a derivation of the equivalence of both algorithms, we also showcase the efficacy of our algorithm on a distributed heart disease data set and compare it with state-of-the-art methods." ], [ "Introduction", "More than ever, data is collected to record the ubiquitous information in our everyday life.", "However, on many occasions, the physical location of data points is not confined to one place (one global site) but distributed over different locations (sites).", "This is the case for, e.g., patient records that are gathered at different hospitals but usually not shared between hospitals or other facilities due to the sensitive information they contain.", "This makes data analysis challenging, particularly if methods require or notably benefit from incorporating all available (but distributed) information.", "For example, personal patient information is typically distributed over several hospitals, while sharing or merging different data sets in a central location is prohibited.", "To overcome this limitation, different approaches have been developed to directly operate at different sites and unite information without having to share sensitive parts of the data to allow privacy-preserving data analysis." 
], [ "Distributed Data", "Distributed data can be partitioned vertically or horizontally across different sites.", "Horizontally partitioned data means that observations are spread across different sites with access to all existing features of the available data point, while for vertically partitioned data, different sites have access to all observations but different features (covariates) for each of these observations.", "In this work, we focus on horizontally partitioned data.", "Existing approaches for horizontally partitioned data vary from fitting regression models such as generalized linear models [46], [28], [21], [12], to conducting distributed evaluations [6], [44], [42], to fitting artificial neural networks [31].", "Furthermore, various software frameworks are available to run a comprehensive analysis of distributed data.", "One example is the collection of R [36] packages DataSHIELD [18], which enables data management and descriptive data analysis as well as securely fitting of simple statistical models in a distributed setup without leaking information from one site to the others." ], [ "Interpretability and Data Heterogeneity", "In many research areas that involve critical decision-making, especially in medicine, methods should not only excel in predictive performance but also be interpretable.", "Models should provide information about the decision-making process, the feature effects, and the feature importance as well as intrinsically select important features.", "Generalized additive models [45] are one of the most flexible approaches in this respect, providing an interpretable yet complex models that also allow for non-linearity in the data.", "As longitudinal studies are often the most practical way to gather information in many research fields, methods should also be able to account for subject-specific effects and account for the correlation of repeated measurements.", "Furthermore, when analyzing data originating from different sites, the assumption of having identically distributed observations across all sites often does not hold.", "In this case, a reasonable assumption for the data-generating process is a site-specific deviation from the general population mean.", "Adjusting models to this situation is called interoperability [27], while ignoring it may lead to biased or wrong predictions." 
], [ "Related Literature", "Various approaches for distributed and privacy-preserving analysis have been proposed in recent years.", "In the context of statistical models, [22] describe how to calculate a linear model (LM) in a distributed and privacy-preserving fashion by sharing data summaries.", "[21] propose a similar approach for GLMs by communicating the Fisher information and score vector to conduct a distributed Fisher scoring algorithm.", "The site information is then globally aggregated to estimate the model parameters.", "Other privacy-preserving techniques include ridge regression [12], logistic regression, and neural networks [33].", "In machine learning, methods such as the naive Bayes classifier, trees, support vector machines, and random forests [24] exist with specific encryption techniques [35] to conduct model updates.", "In these setups, a trusted third party is usually required.", "However, this is often unrealistic and difficult to implement, especially in a medical or clinical setup.", "Furthermore, as encryption is an expensive operation, its application is infeasible for complex algorithms that require many encryption calls [34].", "Existing privacy-preserving boosting techniques often focus on the AdaBoost algorithm by using aggregation techniques of the base classifier [23], [17].", "A different approach to boosting decision trees in a federated learning setup was introduced by [25] using a locality-sensitive hashing to obtain similarities between data sets without sharing private information.", "These algorithms focus on aggregating tree-based base components, making them difficult to interpret, and come with no inferential guarantees.", "In order to account for repeated measurements, [29] propose a privacy-preserving and lossless way to fit linear mixed models (LMMs) to correct for heterogeneous site-specific random effects.", "Their concept of only sharing aggregated values is similar to our approach, but is limited in the complexity of the model and only allows normally distributed outcomes.", "Other methods to estimate LMMs in a secure and distributed fashion are [48], [1], or [47].", "Besides privacy-preserving and distributed approaches, integrative analysis is another technique based on pooling the data sets into one and analyzing this pooled data set while considering challenges such as heterogeneity or the curse of dimensionality [13], [4], [32].", "While advanced from a technical perspective by, e.g., outsourcing computational demanding tasks such as the analysis of multi-omics data to cloud services [3], the existing statistical cloud-based methods only deal with basic statistics.", "The challenges of integrative analysis are similar to the ones tackled in this work, our approach, however, does not allow merging the data sets in order to preserve privacy." 
], [ "Our Contribution", "This work presents a method to fit generalized additive mixed models (GAMMs) in a privacy-preserving and lossless mannerIn this article, we define a distributed fitting procedure as lossless if the model parameters of the algorithm are the same as the ones computed on the pooled data.", "to horizontally distributed data.", "This not only allows the incorporation of site-specific random effects and accounts for repeated measurements in LMMs, but also facilitates the estimation of mixed models with responses following any distribution from the exponential family and provides the possibility to estimate complex non-linear relationships between covariates and the response.", "To the best of our knowledge, we are the first to provide an algorithm to fit the class of GAMMs in a privacy-preserving and lossless fashion on distributed data.", "Our approach is based on component-wise gradient boosting [10].", "CWB can be used to estimate additive models, account for repeated measurements, compute feature importance, and conduct feature selection.", "Furthermore, CWB is suited for high-dimensional data situations $(n\\ll p)$ .", "CWB is therefore often used in practice for, e.g., predicting the development of oral cancer [39], classifying individuals with and without patellofemoral pain syndrome [26], or detecting synchronization in bioelectrical signals [37].", "However, there have so far not been any attempts to allow for a distributed, privacy-preserving, and lossless computation of the CWB algorithm.", "In this paper, we propose a distributed version of CWB that yields the identical model produced by the original algorithm on pooled data and that accounts for site heterogeneity by including interactions between features and a site variable.", "This is achieved by adjusting the fitting process using 1) a distributed estimation procedure, 2) a distributed version of row-wise tensor product base learners, and 3) an adaption of the algorithm to conduct feature selection in the distributed setup.", "We implement our method in R using the DataSHIELD framework and demonstrate its application in an exemplary medical data analysis.", "The remainder of this paper is structured as follows: First, we introduce the basic notation, terminology, and setup of GAMMs in Section .", "We then describe the original CWB algorithm in Section REF and its link to GAMMs.", "In Section , we present the distributed setup and our novel extension of the CWB algorithm.", "Finally, Section  demonstrates both how our distributed CWB algorithm can be used in practice and how to interpret the obtained results." ], [ "Implementation", "We implement our approach as an R package using the DataSHIELD framework and make it available on GitHubgithub.com/schalkdaniel/dsCWB.", "The code for the analysis can also be found in the repositorygithub.com/schalkdaniel/dsCWB/blob/main/usecase/analyse.R." 
], [ "Notation and Terminology", "Our proposed approach uses the CWB algorithm as fitting engine.", "Since this method was initially developed in machine learning, we introduce here both the statistical notation used for GAMMs as well as the respective machine learning terminology and explain how to relate the two concepts.", "We assume a $p$ -dimensional covariate or feature space $\\mathcal {X}= (\\mathcal {X}_1 \\times \\hdots \\times \\mathcal {X}_p) \\subseteq \\mathbb {R}^p$ and response or outcome values from a target space $\\mathcal {Y}$ .", "The goal of boosting is to find the unknown relationship $f$ between $\\mathcal {X}$ and $\\mathcal {Y}$ .", "In turn, GAMMs (as presented in Section REF ) model the conditional distribution of an outcome variable $Y$ with realizations $y\\in \\mathcal {Y}$ , given features ${x}= (x_1, \\dots , x_p) \\in \\mathcal {X}$ .", "Given a data set $ \\left\\lbrace \\left({x}^{(1)}, y^{(1)}\\right), \\ldots , \\left({x}^{(n)}, y^{(n)}\\right)\\right\\rbrace $ with $n$ observations drawn (conditionally) independently from an unknown probability distribution $\\mathbb {P}_{xy}$ on the joint space $\\mathcal {X}\\times \\mathcal {Y}$ , we aim to estimate this functional relationship in CWB with $\\hat{f}$ .", "The goodness-of-fit of a given model $\\hat{f}$ is assessed by calculating the empirical risk $\\mathcal {R}_{\\text{emp}}(\\hat{f}) = n^{-1}\\sum _{({x}, y)\\in \\mathcal {D}}L(y, \\hat{f}({x}))$ based on a loss function $L: \\mathcal {Y}\\times {R}\\rightarrow {R}$ and the data set $.", "Minimizing $ Remp$ using this loss function is equivalent to estimating $ f$ using maximum likelihood by defining $ L(y,f(x)) = -(y,h(f(x)))$ with log-likelihood $$, response function $ h$ and minimizing the sum of log-likelihood contributions.$ In the following, we also require the vector ${x}_j= (x_j^{(1)}, \\ldots , x_j^{(n)})^{\\hspace{-0.83328pt}\\mathsf {T}}\\in \\mathcal {X}_j$ , which refers to the $j\\vphantom{x}^{\\text{th}}$ feature.", "Furthermore, let ${x}= (x_1, \\dots , x_p)$ and $y$ denote arbitrary members of $\\mathcal {X}$ and $\\mathcal {Y}$ , respectively.", "A special role is further given to a subset $\\mathbf {u} = (u_1,\\ldots ,u_q)^\\top $ , $q\\le p$ , of features ${x}$ , which will be used to model the heterogeneity in the data." 
], [ "Generalized Additive Mixed Models", "A very flexible class of regression models to model the relationship between covariates and the response are GAMMs [45].", "In GAMMs, the response $Y^{(i)}$ for observation $i=1,\\ldots ,n_s$ of measurement unit (or site) $s$ is assumed to follow some exponential family distribution such as the Poisson, binomial, or normal distributions [30], conditional on features $\\mathbf {x}^{(i)}$ and the realization of some random effects.", "The expectation $\\mu := \\mathbb {E}(Y^{(i)} | \\mathbf {x}^{(i)}, \\mathbf {u}^{(i)})$ in GAMMs is modeled as $ h^{-1}(\\mu ^{(i)}) = f^{(i)} = \\sum _{j\\in \\mathcal {J}_{1}} x^{(i)}_j \\beta _j + \\sum _{j\\in \\mathcal {J}_{2}} u^{(i)}_j \\gamma _{j,s} + \\sum _{j\\in \\mathcal {J}_{3}} \\phi _j(x^{(i)}_j).$ In (REF ), $h$ is a smooth monotonic response function, $f$ corresponds to the additive predictor, $\\gamma _{j,s} \\sim \\mathcal {N}(0, \\psi )$ are random effects accounting for heterogeneity in the data, and $\\phi _j$ are non-linear effects of pre-specified covariates.", "The different index sets $\\mathcal {J}_1, \\mathcal {J}_2, \\mathcal {J}_3 \\subseteq \\lbrace 1,\\ldots ,p\\rbrace \\cup \\emptyset $ indicate which features are modeled as fixed effects, random effects, or non-linear (smooth) effects, respectively.", "The modeler usually defines these sets.", "However, as we will also explain later, the use of CWB as a fitting engine allows for automatic feature selection and therefore does not require explicitly defining these sets.", "In GAMMs, smooth effects are usually represented by (spline) basis functions, i.e., $\\phi _j(x_j) \\approx (B_{j,1}(x_j), \\ldots , B_{j,d_j}(x_j))^\\top \\mathbf {\\theta }_j$ , where $\\mathbf {\\theta }_j \\in \\mathbb {R}^{d_j}$ are the basis coefficients corresponding to each basis function $B_{j,d_j}$ .", "The coefficients are typically constrained in their flexibility by adding a quadratic (difference) penalty for (neighboring) coefficients to the objective function to enforce smoothness.", "GAMMs, as in (REF ), are not limited to univariate smooth effects $\\phi _j$ , but allow for higher-dimensional non-linear effects $\\phi (x_{j_1}, x_{j_2}, \\ldots , x_{j_k})$ .", "The most common higher-dimensional smooth interaction effects are bivariate effects ($k=2$ ) and can be represented using a bivariate or a tensor product spline basis (see Section REF for more details).", "Although higher-order splines with $k>2$ are possible, models are often restricted to bivariate interactions for the sake of interpretability and computational feasibility.", "In Section , we will further introduce varying coefficient terms $\\phi _{j,s}(x_j)$ in the model (REF ), i.e., smooth effects $f$ varying with a second variable $s$ .", "Analogous to random slopes, $s$ can also be the index set defining observation units of random effects $\\mathcal {J}_2$ .", "Using an appropriate distribution assumption for the basis coefficients $\\mathbf {\\theta }_j$ , these varying coefficients can then be considered as random smooth effects." 
], [ "Component-Wise Boosting", "Component-wise (gradient) boosting [10], [9] is a an iterative algorithm that performs block-coordinate descent steps with blocks (or base learners) corresponding to the additive terms in (REF ).", "With a suitable choice of base learners and objective function, CWB allows efficient optimization of GAMMs, even in high-dimensional settings with $p \\gg n$ .", "We will first introduce the concept of base learners that embed additive terms of the GAMM into boosting and subsequently describe the actual fitting routine of CWB.", "Lastly, we will describe the properties of the algorithm and explain its connection to model (REF )." ], [ "Base Learners", "In CWB, the $l\\vphantom{x}^{\\text{th}}$ base learner $b_l: \\mathcal {X}\\rightarrow {R}$ is used to model the contribution of one or multiple features in the model.", "In this work, we investigate parametrized base learners $b_l({x}, \\mathbf {\\theta }_l)$ with parameters $\\mathbf {\\theta }_l\\in {R}^{d_l}$ .", "For simplicity, we will use $\\mathbf {\\theta }$ as a wildcard for the coefficients of either fixed effects, random effects, or spline bases in the following.", "We assume that each base learner can be represented by a generic basis representation $g_l : \\mathcal {X}\\rightarrow {R}^{d_l},\\ {x}\\mapsto g_l({x}) = (g_{l,1}({x}), \\dots , g_{l, d_l}({x}))^{\\hspace{-0.83328pt}\\mathsf {T}}$ and is linear in the parameters, i.e., $b_l({x}, \\mathbf {\\theta }_l) = g_l({x})^{\\hspace{-0.83328pt}\\mathsf {T}}\\mathbf {\\theta }_l$ .", "For $n$ observations, we define the design matrix of a base learner $b_l$ as ${Z}_l := (g_l({x}^{(1)}), \\ldots , g_l({x}^{(n)}))^{\\hspace{-0.83328pt}\\mathsf {T}}\\in {R}^{n\\times d_l}$ .", "Note that base learners are typically not defined on the whole feature space but on a subset $\\mathcal {X}_l\\subseteq \\mathcal {X}$ .", "For example, a common choice for CWB is to define one base learner for every feature $x_l\\in \\mathcal {X}_l$ to model the univariate contributions of that feature.", "A base learner $b_l({x}, \\mathbf {\\theta }_l)$ can depend on hyperparameters (HPs) $\\mathbf {\\alpha }_l$ that are set prior to the fitting process.", "For example, choosing a base learner using a P-spline [16] representation requires setting the degree of the basis functions, the order of the difference penalty term, and a parameter $\\lambda _l$ determining the smoothness of the spline.", "In order to represent GAMMs in CWB, the following four base learner types are used." 
], [ "(Regularized) linear base learners", "A linear base learner is used to include linear effects of a features $x_{j_1}, \\dots , x_{j_{d_l}}$ into the model.", "The basis transformation is given by $g_l({x}) = (g_{l,1}({x}), \\dots , g_{l,d_l + 1}({x}))^{\\hspace{-0.83328pt}\\mathsf {T}}= (1, x_{j_1}, \\dots , x_{j_{d_l}})^{\\hspace{-0.83328pt}\\mathsf {T}}$ .", "Linear base learners can be regularized by incorporating a ridge penalization [19] with tunable penalty parameter $\\lambda _l$ as an HP $\\mathbf {\\alpha }_l$ .", "Fitting a ridge penalized linear base learner to a response vector ${y}\\in \\mathbb {R}^n$ results in the penalized least squares estimator $\\hat{\\mathbf {\\theta }}_l = ({Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_l + \\mathbf {K}_l)^{-1}{Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{y}$ with penalty matrix $\\mathbf {K}_l = \\lambda _l\\mathbf {I}_{d_l + 1}$ , where $\\mathbf {I}_d$ denotes the $d$ -dimensional identity matrix.", "Often, an unregularized linear base learner is also included to model the contribution of one feature ${x}_j$ as a linear base learner without penalization.", "The basis transformation is then given by $g_l({x}) = (1, x_j)^{\\hspace{-0.83328pt}\\mathsf {T}}$ and $\\lambda _l = 0$ ." ], [ "Spline base learners", "These base learners model smooth effects using univariate splines.", "A common choice is penalized B-splines [16], where the feature $x_j$ is transformed using a B-spline basis transformation $g_l({x}) = (B_{l,1}(x_j), \\dots , B_{l,d_l}(x_j))^{\\hspace{-0.83328pt}\\mathsf {T}}$ with $d_l$ basis functions $g_{l,m} = B_{l,m},\\ m = 1, \\dots , d_l$ .", "In this case, the choice of the spline order $B$ , the number of basis functions $d_l$ , the penalization term $\\lambda _l$ , and the order $v$ of the difference penalty (represented by a matrix $\\mathbf {D}_l\\in {R}^{d_{l-v}\\times d_l}$ ) are considered HPs $\\mathbf {\\alpha }_l$ of the base learner.", "The base learner's parameter estimator in general is given by the penalized least squares solution $\\hat{\\mathbf {\\theta }}_l = ({Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_l + \\mathbf {K}_l)^{-1}{Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{y}$ , with penalization matrix $\\mathbf {K}_l = \\lambda _l\\mathbf {D}_l^\\top \\mathbf {D}_l$ in the case of P-splines." 
], [ "Categorical and random effect base learners", "Categorical features $x_j\\in \\lbrace 1, \\dots , G\\rbrace $ with $G\\in {N}, G\\ge 2$ classes are handled by a binary encoding $g_l({x}) = ({1}_{\\lbrace 1\\rbrace }(x_j), \\dots , {1}_{\\lbrace G\\rbrace }(x_j))^{\\hspace{-0.83328pt}\\mathsf {T}}$ with the indicator function ${1}_A(x) = 1$ if $x\\in A$ and ${1}_A(x) = 0$ if $x\\notin A$ .", "A possible alternative encoding is the dummy encoding with $\\breve{g}_l({x}) = (1, {1}_{\\lbrace 1\\rbrace }(x_j), \\dots , {1}_{\\lbrace G-1\\rbrace }(x_j))^{\\hspace{-0.83328pt}\\mathsf {T}}$ with reference group $G$ .", "Similar to linear and spline base learners, it is possible incorporate a ridge penalization with HP $\\mathbf {\\alpha }_l=\\lambda _l$ .", "This results in the base learner's penalized least squared estimator $\\hat{\\mathbf {\\theta }}_l = ({Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_l + \\mathbf {K}_l)^{-1}{Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{y}$ with penalization matrix $\\mathbf {K}_l = \\lambda _l\\mathbf {I}_{G}$ .", "Due to the mathematical equivalence of ridge penalized linear effects and random effects with normal prior [8], this base learner can further be used to estimate random effect predictions $\\hat{\\gamma }_j$ when using categorical features $u_j$ and thereby account for heterogeneity in the data." ], [ "Row-wise tensor product base learners", "This type of base learner is used to model a pairwise interaction between two features $x_j$ and $x_k$ .", "Given two base learners $b_j$ and $b_k$ with basis representations $g_j({x}) = (g_{j,1}(x_j), \\dots , g_{j,d_j}(x_j))^{\\hspace{-0.83328pt}\\mathsf {T}}$ and $g_k({x}) = (g_{k,1}(x_k), \\dots , g_{k,d_k}(x_k))^{\\hspace{-0.83328pt}\\mathsf {T}}$ , the basis representation of the row-wise tensor product base learner $b_l = b_j \\times b_k$ is defined as $g_l({x}) = (g_j({x})^{\\hspace{-0.83328pt}\\mathsf {T}}\\otimes g_k({x})^{\\hspace{-0.83328pt}\\mathsf {T}})^{\\hspace{-0.83328pt}\\mathsf {T}}= (g_{j,1}(x_j) g_k({x})^{\\hspace{-0.83328pt}\\mathsf {T}}, \\dots , g_{j,d_j}(x_j) g_k({x})^{\\hspace{-0.83328pt}\\mathsf {T}})^{\\hspace{-0.83328pt}\\mathsf {T}}\\in {R}^{d_l}$ with $d_l = d_j d_k$ .", "The HPs $\\mathbf {\\alpha }_l = \\lbrace \\mathbf {\\alpha }_j, \\mathbf {\\alpha }_k\\rbrace $ of a row-wise tensor product base learner are induced by the HPs $\\mathbf {\\alpha }_j$ and $\\mathbf {\\alpha }_k$ of the respective individual base learners.", "Analogously to other base learners, the penalized least squared estimator in this case is $\\hat{\\mathbf {\\theta }}_l = ({Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_l + \\mathbf {K}_l)^{-1}{Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{y}$ with penalization matrix $\\mathbf {K}_l = \\tau _j \\mathbf {K}_j \\otimes \\mathbf {I}_{d_k} + \\mathbf {I}_{d_j} \\otimes \\tau _k \\mathbf {K}_k \\in {R}^{d_l \\times d_l}$ .", "This Kronecker sum penalty, in particular, allows for anisotropic smoothing with penalties $\\tau _j$ and $\\tau _k$ when using two spline bases for $g_j$ and $g_k$ , and varying coefficients or random splines when combining a (penalized) categorical base learner and a spline base learner." 
], [ "Fitting Algorithm", "CWB first initializes an estimate $\\hat{f}$ of the additive predictor with a loss-optimal constant value $\\hat{f}^{[0]} = \\operatorname{arg\\,min}_{c\\in {R}}\\mathcal {R}_{\\text{emp}}(c)$ .", "It then proceeds and estimates Eq.", "(REF ) using an iterative steepest descent minimization in function space by fitting the previously defined base learners to the model's functional gradient $\\nabla _f L(y,f)$ evaluated at the current model estimate $\\hat{f}$ .", "Let $\\hat{f}^{[m]}$ denote the model estimation after $m\\in \\mathbb {N}$ iterations.", "In each step in CWB, the pseudo residuals $\\tilde{r}^{[m](i)}= -\\nabla _f L(y^{(i)}, f({x}^{(i)}))|_{f = \\hat{f}^{[m-1]}}$ for $\\ i \\in \\lbrace 1, \\dots , n\\rbrace $ are first computed.", "CWB then selects the best-fitting base learner from a pre-defined pool of base-learners denoted by $\\mathcal {B} = \\lbrace b_l\\rbrace _{l \\in \\lbrace 1, \\ldots , |\\mathcal {B}|\\rbrace }$ and adds the base learner's contribution to the previous model $\\hat{f}^{[m]}$ .", "The selected base learner is chosen based on its sum of squared errors (SSE) when regressing the pseudo residuals $\\tilde{{r}}^{[m]}= (\\mathbf {r}^{[m](1)}, \\dots , \\mathbf {r}^{[m](n)})^{\\hspace{-0.83328pt}\\mathsf {T}}$ onto the base learner's features using the $L_2$ -loss.", "This procedure is repeated $M$ times or until a convergence criterion is met.", "The learning rate $\\nu $ controls the speed of model updates.", "A value $\\nu \\in [0.01, 0.1]$ was shown to be sufficiently small to yield fast convergence [9].", "Further details of CWB are given in Algorithm REF [41].", "To enforce a fair selection of model terms, regularization parameters are set such that all base-learners have the same degrees-of-freedom [20].", "[H] Vanilla CWB algorithm Input Train data $, learning rate $$, number of boosting iterations $ M$, loss\\\\\\hspace*{0.0pt} \\phantom{\\textbf {Input} }function $ L$, set of base learner $ B$\\\\\\hspace*{0.0pt} \\textbf {Output} Model $ f[M]$ defined by fitted parameters $ [1], ..., [M]$\\vspace{4.26773pt}\\hrule $ [1] $\\operatorname{CWB}$$\\nu ,L,\\mathcal {B}$ Initialize: $\\hat{f}^{[0]}({x}) = \\operatorname{arg\\,min}_{c\\in {R}}\\mathcal {R}_{\\text{emp}}(c)$ $m \\in \\lbrace 1, \\dots , M\\rbrace $ $\\tilde{r}^{[m](i)}= -\\left.\\frac{\\partial {L\\left(y^{(i)}, f\\left({x}^{(i)}\\right)\\right)}}{\\partial f({x}^{(i)})}\\right|_{f = \\hat{f}^{[m-1]}},\\ \\ \\forall i \\in \\lbrace 1, \\dots , n\\rbrace $ $l \\in \\lbrace 1, \\dots , |\\mathcal {B}|\\rbrace $ $\\hat{\\mathbf {\\theta }}^{[m]}_l = \\left({Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_l + \\mathbf {K}_l\\right)^{-1} {Z}^{\\hspace{-0.83328pt}\\mathsf {T}}_l \\tilde{{r}}^{[m]}$ $\\operatorname{SSE}_l = \\sum _{i=1}^n(\\tilde{r}^{[m](i)}- b_l({x}^{(i)}, \\hat{\\mathbf {\\theta }}^{[m]}_l))^2$ $l^{[m]} = \\operatorname{arg\\,min}_{l\\in \\lbrace 1, \\dots , |\\mathcal {B}|\\rbrace } \\operatorname{SSE}_l$ $\\hat{f}^{[m]}({x}) = \\hat{f}^{[m-1]}({x}) + \\nu b_{l^{[m]}}({x},\\hat{\\mathbf {\\theta }}^{[m]}_{l^{[m]}})$ return $\\hat{f}= \\hat{f}^{[M]}$" ], [ "Properties and Link to Generalized Additive Mixed Models", "The estimated coefficients $\\hat{\\mathbf {\\theta }}$ resulting from running the CWB algorithm are known to converge to the maximum likelihood solution [43] for $M\\rightarrow \\infty $ under certain conditions.", "This is due to the fact that CWB performs a coordinate gradient descent update of a model defined by its additive base learners 
that exactly represent the structure of an additive mixed model (when defining the base learners according to Section REF ) and by the objective function that corresponds to the negative (penalized) log-likelihood.", "Two important properties of this algorithm are 1) its coordinate-wise update routine, and 2) the nature of model updates using the $L_2$ -loss.", "Due to the first property, CWB can be used in settings with $p \\gg n$ , as only a single additive term is fitted onto the pseudo-residuals in every iteration.", "This not only reduces the computational complexity of the algorithm for an increasing number of additive predictors (linear instead of quadratic) but also allows variable selection when stopping the routine early (e.g., based on a validation data set), as not all the additive components might have been selected into the model.", "In particular, this allows users to specify the full GAMM model without manual specification of the type of feature effect (fixed or random, linear or non-linear) and then automatically sparsify this model by an objective and data-driven feature selection.", "The second property allows fitting models of the class of generalized linear/additive (mixed) models using only the $L_2$ -loss instead of having to work with some iterative weighted least squares routine.", "In particular, this allows performing the proposed lossless distributed computations described in this paper, as we will discuss in Section ." ], [ "Distributed Computing Setup and Privacy Protection", "Before presenting our main results, we now introduce the distributed data setup we will work with throughout the remainder of this paper.", "The data set $\\mathcal {D}$ is horizontally partitioned into $S$ data sets $\\mathcal {D}_s= \\lbrace ({x}^{(1)}_s, y^{(1)}_s), \\dots , ({x}^{(n_s)}_s, y^{(n_s)}_s)\\rbrace $ , $s= 1, \\dots , S$ , with $n_s$ observations each.", "Each data set $\\mathcal {D}_s$ is located at a different site $s$ and potentially follows a different data distribution $\\mathbb {P}_{xy,s}$ .", "The union of all data sets yields the whole data set $\\mathcal {D}= \\cup _{s=1}^S\\mathcal {D}_s$ with mutually exclusive data sets $\\mathcal {D}_s\\cap \\mathcal {D}_l = \\emptyset \\ \\forall \\ l,s\\in \\lbrace 1,\\dots ,S\\rbrace , l\\ne s$ .", "The vector of realizations per site is denoted by ${y}_s\\in \\mathcal {Y}^{n_s}$ .", "In this distributed setup, multiple ways exist to communicate information without revealing individual information.", "More complex methods such as differential privacy [15], homomorphic encryption [35], or k-anonymity [40] allow sharing information without violating an individual's privacy.", "An alternative option is to only communicate aggregated statistics.", "This is one of the most common approaches and is also used by DataSHIELD [18] for GLMs or by [29] for LMMs.", "DataSHIELD, for example, uses a privacy level that indicates how many individual values must be aggregated to allow the communication of aggregated values.", "For example, setting the privacy level to a value of 5 enables sharing of summary statistics such as sums, means, variances, etc.", "if these are computed on at least 5 elements (observations)."
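Before moving on to the distributed adaptation, the following self-contained R sketch condenses the vanilla CWB routine of Algorithm 1 for the $L_2$-loss; the base learners, penalties, and data are toy assumptions rather than the paper's actual implementation:

```r
# Minimal sketch of vanilla CWB (Algorithm 1) under the L2-loss.
# Z: list of design matrices, K: list of penalty matrices (toy assumptions).
cwb <- function(Z, K, y, nu = 0.1, M = 200) {
  f_hat <- rep(mean(y), length(y))        # loss-optimal constant for L2
  theta <- lapply(Z, function(z) numeric(ncol(z)))
  for (m in seq_len(M)) {
    r <- y - f_hat                        # pseudo residuals (up to a constant)
    fits <- lapply(seq_along(Z), function(l)
      solve(crossprod(Z[[l]]) + K[[l]], crossprod(Z[[l]], r)))
    sse <- vapply(seq_along(Z), function(l)
      sum((r - Z[[l]] %*% fits[[l]])^2), numeric(1))
    lb <- which.min(sse)                  # best base learner in iteration m
    theta[[lb]] <- theta[[lb]] + nu * fits[[lb]]
    f_hat <- f_hat + nu * as.vector(Z[[lb]] %*% fits[[lb]])
  }
  list(intercept = mean(y), theta = theta)
}
# Toy usage: two ridge-penalized linear base learners
set.seed(1)
n <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1.5 * x1 - 0.5 * x2 + rnorm(n, sd = 0.1)
Z  <- list(cbind(1, x1), cbind(1, x2))
K  <- list(0.1 * diag(2), 0.1 * diag(2))
fit <- cwb(Z, K, y)
```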
], [ "Host and Site Setup", "Throughout this article, we assume the $1, \\dots , S$ sites or servers to have access to their respective data set $s$ .", "Each server is allowed to communicate with a host server that is also the analyst's machine.", "In this setting, the analyst can potentially see intermediate data used when running the algorithms, and hence each message communicated from the servers to the host must not allow any reconstruction of the original data.", "The host server is responsible for aggregating intermediate results and communicating these results back to the servers." ], [ "Distributed Component-Wise Boosting", "We now present our distributed version of the CWB algorithm to fit privacy-preserving and lossless GAMMs.", "In the following, we first describe further specifications of our setup in Section REF , elaborate on the changes made to the set of base learners in Section REF , and then show how to adapt CWB's fitting routine in Section REF ." ], [ "Setup", "In the following, we distinguish between site-specific and shared effects.", "As effects estimated across sites typically correspond to fixed effects and effects modeled for each site separately are usually represented using random effects, we use the terms as synonyms in the following, i.e., shared effects and fixed effects are treated interchangeably and the same holds for site-specific effects and random effects.", "We note that this is only for ease of presentation and our approach also allows for site-specific fixed effects and random shared effects.", "As the data is not only located at different sites but also potentially follows different data distributions $\\mathbb {P}_{xy,s}$ at each site $s$ , we extend Eq.", "(REF ) to not only include random effects per site, but also site-specific smooth (random) effects $\\phi _{j,s}(x_j)$ , $s= 1, \\dots , S$ for all features $x_j$ with $j\\in \\mathcal {J}_3$ .", "For every of these smooth effects $\\phi _{j,s}$ we assume an existing shared effect $f_{j,\\text{shared}}$ that is equal for all sites.", "These assumptions – particularly the choice of site-specific effects – are made for demonstration purposes.", "In a real-world application, the model structure can be defined individually to match the given data situation.", "However, note again that CWB intrinsically performs variable selection, and there is thus no need to manually define the model structure in practice.", "In order to incorporate the site information into the model, we add a variable $x_0^{(i)} \\in \\lbrace 1, \\dots , S\\rbrace $ for the site to the data by setting $\\tilde{{x}}^{(i)} = (x_0^{(i)}, {x}^{(i)})$ .", "The site variable is a categorical feature with $S$ classes." 
], [ "Base Learners", "For shared effects, we keep the original structure of CWB with base learners chosen from a set of possible learners $\\mathcal {B}$ .", "Section REF explains how these shared effects are estimated in the distributed setup.", "We further define a regularized categorical base learner $b_0$ with basis transformation $g_0(x_0) = ({1}_{\\lbrace 1\\rbrace }(x_0), \\dots , {1}_{\\lbrace S\\rbrace }(x_0))^{\\hspace{-0.83328pt}\\mathsf {T}}$ and design matrix ${Z}_0\\in {R}^{n\\times S}$ .", "We use $b_0$ to extend $\\mathcal {B}$ with a second set of base learners $\\mathcal {B}_\\times = \\lbrace b_0 \\times b\\ |\\ b \\in \\mathcal {B}\\rbrace $ to model site-specific random effects.", "All base learners in $\\mathcal {B}_\\times $ are row-wise tensor product base learners $b_{l_\\times }= b_0 \\times b_l$ of the regularized categorical base learner $b_0$ dummy-encoding every site and all other existing base learners $b_l\\in \\mathcal {B}$ .", "This allows for potential inclusion of random effects for every fixed effect in the model.", "More specifically, the $l\\vphantom{x}^{\\text{th}}$ site-specific effect given by the row-wise tensor product base learner $b_{l_\\times }$ uses the basis transformation $g_{l_\\times } = g_0 \\otimes g_l$ $ g_{l_\\times }(\\tilde{{x}}) = g_0(x_0)^{\\hspace{-0.83328pt}\\mathsf {T}}\\otimes g_l({x})^{\\hspace{-0.83328pt}\\mathsf {T}}= (\\underbrace{{1}_{\\lbrace 1\\rbrace }(x_0) g_l({x})^{\\hspace{-0.83328pt}\\mathsf {T}}}_{=g_{{l_\\times }, 1}}, \\dots , \\underbrace{{1}_{\\lbrace S\\rbrace }(x_0) g_l({x})^{\\hspace{-0.83328pt}\\mathsf {T}}}_{=g_{{l_\\times }, S}})^{\\hspace{-0.83328pt}\\mathsf {T}},$ where the basis transformation $g_l$ is equal for all $S$ sites.", "After distributed computation (see Eq.", "(REF ) in the next section), the estimated coefficients are $\\mathbf {\\hat{\\theta }}_{l_\\times } = (\\mathbf {\\hat{\\theta }}_{{l_\\times }, 1}^{\\hspace{-0.83328pt}\\mathsf {T}}, \\dots , \\mathbf {\\hat{\\theta }}_{{l_\\times }, S}^{\\hspace{-0.83328pt}\\mathsf {T}})^{\\hspace{-0.83328pt}\\mathsf {T}}$ with $\\mathbf {\\hat{\\theta }}_{{l_\\times }, s}\\in {R}^{d_l}$ .", "The regularization of the row-wise Kronecker base learners not only controls their flexibility but also assures identifiable when additionally including a shared (fixed) effect for the same covariate.", "The penalty matrix $\\mathbf {K}_{l_\\times } = \\lambda _0 \\mathbf {K}_0 \\otimes \\mathbf {I}_{d_l} + \\mathbf {I}_S\\otimes \\lambda _{l_\\times } \\mathbf {K}_l\\in \\mathbb {R}^{Sd_l \\times Sd_l}$ is given as Kronecker sum of the penalty matrix of the categorical site effect and the penalty matrices $\\mathbf {K}_0$ and $\\mathbf {K}_l$ with respective regularization strengths $\\lambda _0, \\lambda _{l_\\times }$ .", "As $\\mathbf {K}_0 = \\lambda _0\\mathbf {I}_{S}$ is a diagonal matrix, $\\mathbf {K}_{l_\\times }$ is a block matrix with entries $\\lambda _0\\mathbf {I}_{d_l} + \\lambda _{l_\\times } \\mathbf {K}_l$ on the diagonal blocks.", "Moreover, as $g_0$ is a binary vector, we can also express the design matrix ${Z}_{l_\\times }\\in \\mathbb {R}^{n\\times Sd_l}$ as a block matrix, yielding $ {Z}_{l_\\times } = \\operatorname{diag}({Z}_{l,1}, \\dots , {Z}_{l,S}),\\ \\mathbf {K}_{l_\\times } = \\operatorname{diag}(\\lambda _0\\mathbf {I}_{d_l} + \\lambda _{l_\\times }\\mathbf {K}_l, \\dots , \\lambda _0\\mathbf {I}_{d_l} + \\lambda _{l_\\times }\\mathbf {K}_l),$ where ${Z}_{l,k}$ are the distributed design matrices of $b_l$ on sites $s=1,\\dots 
,S$ ." ], [ "Fitting Algorithm", "We now describe the adaptations required to allow for distributed computations of the CWB fitting routine.", "In Sections REF and REF , we show the equality between our distributed fitting approach and CWB fitted on pooled data.", "Section REF describes the remaining details such as distributed SSE calculations, distributed model updates, and pseudo residual updates in the distributed setup.", "Section REF summarizes the distributed CWB algorithm and Section REF elaborates on the communication costs of our algorithm." ], [ "Distributed Shared Effects Computation", "Fitting CWB in a distributed fashion requires adapting the fitting process of the base learner $b_l$ in Algorithm REF to distributed data.", "To allow for shared effects computations across different sites without jeopardizing privacy, we take advantage of CWB's update scheme, which boils down to a (penalized) least squares estimation per iteration for every base learner.", "This allows us to build upon existing work such as [22] to fit linear models in a distributed fashion by just communicating aggregated statistics between sites and the host.", "In a first step, the aggregated matrices $\\mathbf {F}_{l,s} = {Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_{l,s}$ and vectors $\\mathbf {u}_{l,s} = {Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}{y}_s$ are computed on each site.", "In our privacy setup (Section REF ), communicating $\\mathbf {F}_{l,s}$ and $\\mathbf {u}_{l,s}$ is allowed as long as the privacy-aggregation level per site is met.", "In a second step, the site information is aggregated to the global information $\\mathbf {F}_l = \\sum _{s=1}^S\\mathbf {F}_{l,s} + \\mathbf {K}_l$ and $\\mathbf {u}_l = \\sum _{s=1}^S\\mathbf {u}_{l,s}$ and then used to estimate the model parameters $\\mathbf {\\hat{\\theta }}_l = \\mathbf {F}_l^{-1}\\mathbf {u}_l$ .", "This approach, referred to as $\\operatorname{distFit}$ , is explained in detail in Algorithm REF and used for the shared effect computations of the model by substituting $\\hat{\\mathbf {\\theta }}^{[m]}_l = \\left({Z}_l^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_l + \\mathbf {K}_l\\right)^{-1} {Z}^{\\hspace{-0.83328pt}\\mathsf {T}}_l \\tilde{{r}}^{[m]}$ (Algorithm REF line REF ) with $\\hat{\\mathbf {\\theta }}^{[m]}_l = \\operatorname{distFit}({Z}_{l,1}, \\dots , {Z}_{l,S}, \\tilde{{r}}^{[m]}_1, \\dots , \\tilde{{r}}^{[m]}_S, \\mathbf {K}_l)$ .", "Note that the pseudo residuals $\\tilde{{r}}^{[m]}_s$ are also securely located at each site and are updated after each iteration.", "Details about the distributed pseudo residual updates are explained in Section REF .", "We also note that the computational complexity of fitting CWB can be drastically reduced by pre-calculating and storing $({Z}_{l}^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_{l} + \\mathbf {K}_l)^{-1}$ in an initialization step, as the matrix is independent of the iteration $m$ , and reusing these pre-calculated matrices in all subsequent iterations [41].", "Using pre-calculated matrices also reduces the amount of required communication between sites and host.", "Algorithm 2 (Distributed effect estimation, $\\operatorname{distFit}$ ).", "The line prefixes [S] and [H] indicate whether the operation is conducted at the sites ([S]) or at the host ([H]).", "Input: site design matrices ${Z}_{l,1},
\\dots , {Z}_{l,S}$ , response vectors ${y}_1, \\dots , {y}_S$ , and an optional penalty matrix $\\mathbf {K}_l$ .", "Output: estimated parameter vector $\\mathbf {\\hat{\\theta }}_l$ .", "(1) [S] For $s\\in \\lbrace 1, \\dots , S\\rbrace $ : compute ${F}_{l,s} = {Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_{l,s}$ and $\\mathbf {u}_{l,s} = {Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}{y}_s$ and communicate ${F}_{l,s}$ and $\\mathbf {u}_{l,s}$ to the host.", "(2) [H] Aggregate ${F}_l = \\sum _{s=1}^S{F}_{l,s} + \\mathbf {K}_l$ and $\\mathbf {u}_l = \\sum _{s=1}^S\\mathbf {u}_{l,s}$ .", "(3) [H] Return $\\mathbf {\\hat{\\theta }}_l = {F}_l^{-1}\\mathbf {u}_l$ ." ], [ "Distributed Site-specific Effects Computation", "If we pretend that the fitting of the base learner $b_{l_\\times }$ is performed on the pooled data, we obtain $\\mathbf {\\hat{\\theta }}_{l_\\times }= \\left({Z}_{l_\\times }^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_{l_\\times } + \\mathbf {K}_{l_\\times }\\right)^{-1}{Z}_{l_\\times }^{\\hspace{-0.83328pt}\\mathsf {T}}{y}= \\left(\\begin{array}{c}({Z}_{l,1}^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_{l,1} + \\lambda _0\\mathbf {I}_{d_l} + \\mathbf {K}_l)^{-1} {Z}_{l,1}^{\\hspace{-0.83328pt}\\mathsf {T}}{y}_1 \\\\\\vdots \\\\({Z}_{l,S}^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_{l,S} + \\lambda _0\\mathbf {I}_{d_l} + \\mathbf {K}_l)^{-1} {Z}_{l,S}^{\\hspace{-0.83328pt}\\mathsf {T}}{y}_S\\end{array}\\right), $ where (REF ) holds due to the block structure described in (REF ) of Section REF .", "This shows that the fitting of the site-specific effects $\\mathbf {\\hat{\\theta }}_{l_\\times }$ can be split up into the fitting of individual parameters $\\mathbf {\\hat{\\theta }}_{{l_\\times },s} = ({Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_{l,s} + \\lambda _0\\mathbf {I}_{d_l} + \\mathbf {K}_l)^{-1} {Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}{y}_s.$ It is thus possible to compute site-specific effects at the respective site without the need to share any information with the host.", "The host, in turn, only requires the SSE of the respective base learner (see the next Section REF ) to perform the next iteration of CWB.", "Hence, during the fitting process, the parameter estimates remain at their sites and are just updated if the site-specific base learner is selected.", "This again minimizes the amount of data communication between sites and host and speeds up the fitting process.", "After the fitting phase, the aggregated site-specific parameters are communicated once, in a last communication step, to obtain the final model." ], [ "Pseudo Residual Updates, SSE Calculation, and Base Learner Selection", "The remaining challenges to run the distributed CWB algorithm are 1) the pseudo residual calculation (Algorithm REF line REF ), 2) the SSE calculation (Algorithm REF line REF ), and 3) the base learner selection (Algorithm REF line REF )."
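The aggregation scheme of Algorithm 2 can be sketched compactly in R; this is our illustration of the idea, not the dsCWB implementation, and the toy data split is an assumption:

```r
# distFit sketch: sites share only the aggregates F_ls = Z'Z and u_ls = Z'y
site_aggregates <- function(Z_ls, y_s) {      # computed at each site [S]
  list(F = crossprod(Z_ls), u = crossprod(Z_ls, y_s))
}
dist_fit <- function(aggs, K_l) {             # aggregation at the host [H]
  F_l <- Reduce(`+`, lapply(aggs, `[[`, "F")) + K_l
  u_l <- Reduce(`+`, lapply(aggs, `[[`, "u"))
  solve(F_l, u_l)                             # theta_hat_l
}
set.seed(1)                                   # toy split over S = 2 sites
Z1 <- cbind(1, rnorm(6));  y1 <- rnorm(6)
Z2 <- cbind(1, rnorm(9));  y2 <- rnorm(9)
aggs <- list(site_aggregates(Z1, y1), site_aggregates(Z2, y2))
K_l  <- 0.1 * diag(2)
theta_distr <- dist_fit(aggs, K_l)
# Lossless: identical to the penalized least squares estimate on pooled data
Zp <- rbind(Z1, Z2); yp <- c(y1, y2)
theta_pool <- solve(crossprod(Zp) + K_l, crossprod(Zp, yp))
all.equal(theta_distr, theta_pool)
```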
], [ "Distributed pseudo residual updates", "The site-specific response vector ${y}_s$ containing the values $y^{(i)},\\ i \\in \\lbrace 1, \\dots , n_s\\rbrace $ is the basis of the pseudo residual calculation.", "We assume that every site $s$ has access to all shared effects as well as the site-specific information of all site-specific base learners $b_{l_\\times }$ only containing the respective parameters $\\mathbf {\\hat{\\theta }}_{{l_\\times },s}$ .", "Based on these base learners, it is thus possible to compute a site model $\\hat{f}^{[m]}_s$ as a representative of $\\hat{f}^{[m]}$ on every site $s$ .", "The pseudo residual updates $\\tilde{{r}}^{[m]}_s$ per site are then based on $\\hat{f}^{[m]}_s$ via $\\tilde{r}^{[m](i)}_s= -\\frac{\\partial {L\\left(y^{(i)}, f\\left({x}^{(i)}\\right)\\right)}}{\\partial f({x}^{(i)})}\\left.\\vphantom{f_{(i}^{(i}}\\right|_{f = \\hat{f}^{[m-1]}_s},\\ i \\in \\lbrace 1, \\dots , n_s\\rbrace $ using $s$ .", "Most importantly, all remaining steps of the distributed CWB fitting procedure do not share the pseudo residuals $\\tilde{{r}}^{[m]}_s$ in order to avoid information leakage about ${y}_s$ ." ], [ "Distributed SSE calculation and base learner selection", "After fitting all base learners $b_l\\in \\mathcal {B}$ and $b_{l_\\times }\\in \\mathcal {B}_\\times $ to $\\tilde{{r}}^{[m]}_s$ , we obtain $\\hat{\\mathbf {\\theta }}^{[m]}_l$ , $l=1, \\dots , |\\mathcal {B}|$ , and $\\hat{\\mathbf {\\theta }}^{[m]}_{l_\\times }$ , $l_\\times = 1_\\times , \\dots , |\\mathcal {B}_\\times |$ .", "Calculating the SSE distributively for the $l\\vphantom{x}^{\\text{th}}$ and $l_\\times \\vphantom{x}^{\\text{th}}$ base learner $b_l$ and $b_{l_\\times }$ , respectively, requires calculating $2S$ site-specific SSE values: $\\operatorname{SSE}_{l,s} &= \\sum _{i=1}^{n_s} \\left(\\tilde{r}^{[m](i)}_s- b_l({x}^{(i)}_s, \\hat{\\mathbf {\\theta }}^{[m]}_l)\\right)^2 = \\sum _{i=1}^{n_s} (\\tilde{r}^{[m](i)}- g_l({x}^{(i)})^{\\hspace{-0.83328pt}\\mathsf {T}}\\hat{\\mathbf {\\theta }}^{[m]}_l)^2 , \\\\\\operatorname{SSE}_{l_\\times ,s} &= \\sum _{i=1}^{n_s} \\left(\\tilde{r}^{[m](i)}_s- b_{l_\\times }({x}^{(i)}_s, \\hat{\\mathbf {\\theta }}^{[m]}_{l_\\times })\\right)^2 = \\sum _{i=1}^{n_s} (\\tilde{r}^{[m](i)}_s- g_l({x}^{(i)})^{\\hspace{-0.83328pt}\\mathsf {T}}\\hat{\\mathbf {\\theta }}^{[m]}_{l_\\times , s})^2.", "$ The site-specific SSE values are then sent to the host and aggregated to $\\operatorname{SSE}_l = \\sum _{s=1}^S\\operatorname{SSE}_{l,s}$ .", "If privacy constraints have been met in all previous calculations, sharing the individual SSE values is not critical and does not violate any privacy constraints as the value is an aggregation of all $n_s$ observations for all sites $s$ .", "Having gathered all SSE values at the host location, selecting the best base learner in the current iteration is done in the exact same manner as for the non-distributed CWB algorithm by selecting $l^{[m]} = \\operatorname{arg\\,min}_{l\\in \\lbrace 1, \\dots , |\\mathcal {B}|, 1_\\times , \\dots , |\\mathcal {B}|_\\times \\rbrace } \\operatorname{SSE}_l$ .", "After the selection, the index $l^{[m]}$ is shared with all sites to enable the update of the site-specific models $\\hat{f}^{[m]}_s$ .", "If a shared effect is selected, the parameter vector $\\hat{\\mathbf {\\theta }}^{[m]}_{l^{[m]}}$ is shared with all sites.", "Caution must be taken when the number of parameters of one base learner is equal to the number of observations, as this allows reverse-engineering private 
data.", "In the case of a site-specific effect selection, no parameter needs to be communicated, as the respective estimates are already located at each site.", "Algorithm 3 (Distributed CWB algorithm, $\\operatorname{distrCWB}$ ).", "The line prefixes [S] and [H] indicate whether the operation is conducted at the sites ([S]) or at the host ([H]).", "Input: sites with site data $\\mathcal {D}_s$ , learning rate $\\nu $ , number of boosting iterations $M$ , loss function $L$ , set of shared effects $\\mathcal {B}$ and respective site-specific effects $\\mathcal {B}_\\times $ .", "Output: prediction model $\\hat{f}$ .", "(1) Initialization: [H] initialize the shared model $\\hat{f}^{[0]}_{\\text{shared}}({x}) = \\operatorname{arg\\,min}_{c\\in {R}}\\mathcal {R}_{\\text{emp}}(c)$ ; [S] calculate ${Z}_{l,s}$ and $\\mathbf {F}_{l,s} = {Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}{Z}_{l,s}$ , $\\forall l\\in \\lbrace 1, \\dots , |\\mathcal {B}|\\rbrace ,\\ s\\in \\lbrace 1, \\dots , S\\rbrace $ ; [S] set $\\hat{f}^{[0]}_s= \\hat{f}^{[0]}_{\\text{shared}}$ .", "(2) For $m \\in \\lbrace 1, \\dots , M\\rbrace $ , or while an early stopping criterion is not met:", "(2a) [S] Update the pseudo residuals $\\tilde{r}^{[m](i)}_s= -\\left.\\frac{\\partial {L\\left(y^{(i)}, f\\left({x}^{(i)}\\right)\\right)}}{\\partial f({x}^{(i)})}\\right|_{f = \\hat{f}^{[m-1]}_s},\\ \\ \\forall i \\in \\lbrace 1, \\dots , n_s\\rbrace $ .", "(2b) For $l \\in \\lbrace 1, \\dots , |\\mathcal {B}|\\rbrace $ : [H] calculate the shared effect $\\hat{\\mathbf {\\theta }}^{[m]}_l = \\operatorname{distFit}({Z}_{l,1}, \\dots , {Z}_{l,S}, \\tilde{{r}}^{[m]}_1, \\dots , \\tilde{{r}}^{[m]}_S, \\mathbf {K}_l)$ and communicate $\\hat{\\mathbf {\\theta }}^{[m]}_l$ to the sites $s \\in \\lbrace 1, \\dots , S\\rbrace $ ; [S] fit the $l\\vphantom{x}^{\\text{th}}$ site-specific effect $\\hat{\\mathbf {\\theta }}^{[m]}_{l_\\times ,s} = (\\mathbf {F}_{l,s} + \\lambda _0\\mathbf {I}_{d_l} + \\mathbf {K}_l)^{-1}{Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}\\tilde{{r}}^{[m]}_s$ .", "(2c) [S] Calculate the SSE for the $l\\vphantom{x}^{\\text{th}}$ shared and site-specific effect, $\\operatorname{SSE}_{l,s} = \\sum _{i=1}^{n_s} (\\tilde{r}^{[m](i)}_s- g_l({x}^{(i)}_s)^{\\hspace{-0.83328pt}\\mathsf {T}}\\hat{\\mathbf {\\theta }}^{[m]}_l)^2$ and $\\operatorname{SSE}_{l_\\times ,s} = \\sum _{i=1}^{n_s} (\\tilde{r}^{[m](i)}_s- g_l({x}^{(i)}_s)^{\\hspace{-0.83328pt}\\mathsf {T}}\\hat{\\mathbf {\\theta }}^{[m]}_{l_\\times , s})^2$ , and send $\\operatorname{SSE}_{l,s}$ and $\\operatorname{SSE}_{l_\\times , s}$ to the host.", "(2d) [H] Aggregate the SSE values, $\\operatorname{SSE}_{l} = \\sum _{s=1}^S\\operatorname{SSE}_{l,s}$ and $\\operatorname{SSE}_{l_\\times } = \\sum _{s=1}^S\\operatorname{SSE}_{l_\\times ,s}$ , and select the best base learner $l^{[m]} = \\operatorname{arg\\,min}_{l\\in \\lbrace 1, \\dots , |\\mathcal {B}|, 1_\\times , \\dots , |\\mathcal {B}|_\\times \\rbrace } \\operatorname{SSE}_l$ .", "(2e) If $b_{l^{[m]}}$ is a shared effect: [H] update the shared model $\\hat{f}^{[m]}_{\\text{shared}}({x}) = \\hat{f}^{[m-1]}_{\\text{shared}}({x}) + \\nu b_{l^{[m]}}({x}, \\hat{\\mathbf {\\theta }}^{[m]}_{l^{[m]}})$ and upload the model update $\\hat{\\mathbf {\\theta }}^{[m]}_{l^{[m]}}$ to the sites; [S] update the site model $\\hat{f}_s^{[m]}$ via the parameter update $\\hat{\\mathbf {\\theta }}_{l^{[m]}} = \\hat{\\mathbf {\\theta }}_{l^{[m]}} + \\nu \\hat{\\mathbf {\\theta }}^{[m]}_{l^{[m]}}$ .", "(3) [S] Communicate the site-specific effects $\\hat{\\mathbf {\\theta }}_{1_\\times }, \\dots , \\hat{\\mathbf {\\theta }}_{|\\mathcal {B}|_\\times }$ to the host.", "(4) [H] Add the site-specific effects to the model of shared effects
$\\hat{f}^{[M]}_{\\text{shared}}$ to obtain the full model $\\hat{f}^{[M]}$ .", "(5) [H] Return $\\hat{f}= \\hat{f}^{[M]}$ ." ], [ "Distributed CWB Algorithm with Site-Specific Effects", "Assembling all pieces, our distributed CWB algorithm is summarized in Algorithm REF ." ], [ "Communication Costs", "While the CWB iterations themselves can be performed in parallel on every site and do not slow down the process compared to a pooled calculation, it is worth discussing the communication costs of $\\operatorname{distrCWB}$ .", "During the initialization, data is shared just once, while the fitting phase requires the communication of data in each iteration.", "Let $d = \\max _l d_l$ be the maximum number of basis functions (or, alternatively, assume $d$ basis functions for all base learners).", "The two main drivers of the communication costs are the number of boosting iterations $M$ and the number of base learners $|\\mathcal {B}|$ .", "Because of the iterative nature of CWB with a single loop over the boosting iterations, the communication costs (both for the host and each site) scale linearly with the number of boosting iterations $M$ , i.e., $\\mathcal {O}(M)$ .", "For the analysis of communication costs in terms of the number of base learners, we distinguish between the initialization phase and the fitting phase." ], [ "Initialization", "As the sites only share $\\mathbf {F}_{l,s}\\in {R}^{d\\times d},\\ \\forall l\\in \\lbrace 1, \\dots , |\\mathcal {B}|\\rbrace $ , the transmitted amount of values is $d^2|\\mathcal {B}|$ for each site and therefore scales linearly with $|\\mathcal {B}|$ , i.e., $\\mathcal {O}(|\\mathcal {B}|)$ .", "The host does not communicate any values during the initialization." ], [ "Fitting", "In each iteration, every site shares its vector ${Z}_{l,s}^{\\hspace{-0.83328pt}\\mathsf {T}}\\tilde{{r}}^{[m]}_s\\in {R}^d,\\ \\forall l\\in \\lbrace 1, \\dots , |\\mathcal {B}|\\rbrace $ .", "Over the course of $M$ boosting iterations, each site therefore shares $dM|\\mathcal {B}|$ values.", "Every site also communicates the SSE values, i.e., 2 values (index and SSE value) for every base learner and thus $2M|\\mathcal {B}|$ values over all iterations and base learners.", "In total, each site communicates $M|\\mathcal {B}|(d + 2)$ values; for example, with $d = 10$ basis functions, $|\\mathcal {B}| = 20$ base learners, and $M = 1000$ iterations, this amounts to $240000$ values.", "The communication costs for all sites are therefore $\\mathcal {O}(|\\mathcal {B}|)$ .", "The host, in turn, communicates the estimated parameters $\\hat{\\mathbf {\\theta }}^{[m]}\\in {R}^d$ of the $|\\mathcal {B}|$ shared effects.", "Hence, $dM|\\mathcal {B}|$ values as well as the index of the best base learner in each iteration are transmitted.", "In total, the host therefore communicates $dM|\\mathcal {B}| + M$ values to the sites, and the costs are therefore also $\\mathcal {O}(|\\mathcal {B}|)$ ."
], [ "Application", "We now showcase our algorithm on a heart disease data set that consists of patient data gathered all over the world.", "The data were collected at four different sites by the 1) Hungarian Institute of Cardiology, Budapest (Andras Janosi, M.D.", "), 2) University Hospital, Zurich, Switzerland (William Steinbrunn, M.D.", "), 3) University Hospital, Basel, Switzerland (Matthias Pfisterer, M.D.", "), and 4) V.A.", "Medical Center, Long Beach, and Cleveland Clinic Foundation (Robert Detrano, M.D., Ph.D.), and is thus suited for a multi-site distributed analysis.", "The individual data sets are freely available at https://archive.ics.uci.edu/ml/datasets/heart+disease [14].", "For our analysis, we set the privacy level (cf.", "Section REF ) to 5 which is a common default." ], [ "Data Description", "The raw data set contains 14 covariates, such as the chest pain type (cp), resting blood pressure (trestbps), maximum heart rate (thalach), sex, exercise-induced angina (exang), or ST depression (i.e., abnormal difference of the ST segment from the baseline on an electrocardiogram) induced by exercise relative to rest (oldpeak).", "A full list of covariates and their abbreviations is given on the data set's website.", "After removing non-informative covariates and columns with too many missing values at each site, we obtain $n_{\\text{cleveland}} = 303$ , $n_{\\text{hungarian}} = 292$ , $n_{\\text{switzerland}} = 116$ , and $n_{\\text{va}} = 140$ observations and 8 covariates.", "A table containing the description of the abbreviations of these covariates is given in Table REF in Appendix REF .", "For our application, we assume that missing values are completely at random and all data sets are exclusively located at each sites.", "The task is to determine important risk factors for heart diseases.", "The target variable is therefore a binary outcome indicating the presence of heart disease or not." ], [ "Analysis and Results", "We run distributed CWB with a learning rate of 0.1 and a maximum number of 100000 iterations.", "To determine an optimal stopping iteration for CWB, we use 20 % of the data as validation data and set the patience to 5 iterations.", "In other words, the algorithm stops if no risk improvement on the validation data is observed in 5 consecutive iterations.", "For the numerical covariates, we use a P-spline with 10 cubic basis functions and second-order difference penalties.", "All base learners are penalized accordingly to a global degree of freedom that we set to 2.2 (to obtain unbiased feature selection) while the random intercept is penalized according to 3 degrees of freedom (see Appendix  for more details).", "Since we are modelling a binary response variable, $h^{-1}$ is the inverse logit function $\\operatorname{logit}^{-1}(f) = (1 + \\exp (-f))^{-1}$ .", "The model for an observation of site $s$ , conditional on its random effects $\\mathbf {\\gamma }$ , is given in Appendix REF ." 
], [ "Results", "The algorithm stops after $m_{\\text{stop}} = 5578 $ iterations as the risk on the validation data set starts to increase (cf.", "Figure REF in the Appendix REF ).", "Out of these 5578 iterations, the distributed CWB algorithm selects a shared effect in 782 iterations and site-specific effects in 4796 iterations.", "This indicates that the data is rather heterogeneous and requires site-specific (random) effects.", "Figure REF (Left) shows traces of how and when the different additive terms (base learners) entered the model during the fitting process and illustrates the selection process of CWB.", "Figure: Left: Model trace showing how and when the four most selected additive terms entered the model.", "Right: Variable importance of selected features in decreasing order.The estimated effect of the most important feature oldpeak (cf.", "Figure REF , Right) found is further visualized in Figure REF .", "Looking at the shared effect, we find a negative influence on the risk of heart disease when increasing ST depression (oldpeak).", "When accounting for site-specific deviations, the effect becomes more diverse, particularly for Hungary.", "Figure: Decomposition of the effect of oldpeak into the shared (left) and the site-specific effects (middle).", "The plot on the right-hand side shows the sum of shared and site-specific effects.In Appendix REF and REF , we provide the partial effects for all features and showcase the conditional predictions of the fitted GAMM model for a given site." ], [ "Comparison of Estimation Approaches", "The previous example shows partial feature effects that exhibit shrinkage due to the early stopping of CWB's fitting routine.", "While this prevents overfitting and induces a sparse model, we can also run CWB for a very large amount of iterations without early stopping to approximate the unregularized and hence unbiased maximum likelihood solution.", "We illustrate this in the following by training CWB and our distributed version for 100000 iterations and compare its partial effects to the ones of a classical mixed model-based estimation routine implemented in the R package mgcv [45].", "Results of the estimated partial effects of our distributed CWB algorithm and the original CWB on pooled data show a perfect overlap (cf.", "Figure REF ).", "This again underpins the lossless property of the proposed algorithm.", "The site-specific effects on the pooled data are fitted by defining a row-wise Kronecker base learner for all features and the site as a categorical variable.", "The same approach is used to estimate a GAMM using mgcv fitted on the pooled data with tensor products between the main feature and the categorical site variable.", "A comparison of all partial feature effects is given in Appendix REF showing good alignment between the different methods.", "For the oldpeak effect shown in Figure REF , we also see that the partial effects of the two CWB methods are very close to the mixed model-based estimation, with only smaller differences caused by a slightly different penalization strength of both approaches.", "The empirical risk is $0.4245$ for our distributed CWB algorithm, $0.4245$ for CWB on the pooled data, and $0.4441$ for the GAMM on the pooled data.", "Figure: Comparison of the site-specific effects for oldpeak between the distributed (dsCWB) and pooled CWB approach (compboost) as well as estimates of from mgcv." 
], [ "Discussion", "We proposed a novel algorithm for distributed, lossless, and privacy-preserving GAMM estimation to analyze horizontally partitioned data.", "To account for data heterogeneity of different sites we introduced site-specific (smooth) random effects.", "Using CWB as the fitting engine allows estimation in high-dimensional settings and fosters variable as well as effect selection.", "This also includes a data-driven selection of shared and site-specific features, providing additional data insights.", "Owing to the flexibility of boosting and its base learners, our algorithm is easy to extend and can also account for interactions, functional regression settings [7], or modeling survival tasks [5].", "An open challenge for the practical use of our approach is its high communication costs.", "For larger iterations (in the 10 or 100 thousands), computing a distributed model can take several hours.", "One option to reduce the total runtime is to incorporate accelerated optimization recently proposed in [41].", "Another driver that influences the runtime is the latency of the technical setup.", "Future improvements could reduce the number of communications, e.g., via multiple fitting rounds at the different sites before communicating the intermediate results.", "A possible future extension of our approach is to account for both horizontally and vertically distributed data.", "Since the algorithm is performing component-wise (coordinate-wise) updates, the extension to vertically distributed data naturally falls into the scope of its fitting procedure.", "This would, however, require a further advanced technical setup and the need to ensure consistency across sites.", "ACKNOWLEDGMENT This work was supported by the German Federal Ministry for Research and Technology (BMFT) under Grant FKZ: 01ZZ1804C (DIFUTURE, MII).", "The authors of this work take full responsibilities for its content.", "0pt plus 0.3ex Further Methodological Details Unbiased Feature Selection CWB updates the model fit in a greedy fashion by choosing the best-fitting base learner for the current pseudo-residual vector in every step.", "This can potentially bias the selection of base learners and impair the estimation procedure as more complex models are preferred over simpler ones.", "This problem and a possible solution is discussed in [20].", "To this end, the authors propose an unbiased feature/base learner selection by assigning the same amount of flexibility to each base learner.", "The main idea of this unbiased feature selection is to measure the flexibility of every base learner $b_l$ via its degrees of freedom $\\operatorname{df}_l$ and set $\\operatorname{df}_l = \\operatorname{df}\\in {R}^+$ for all base learners $b_1, \\dots , b_l$ by choosing an appropriate amount of penalization $\\lambda _l$ .", "Using a large penalization for more complex base learners (e.g., a P-spline base learner) allows for aligning the flexibility to those of simpler base learners (e.g., a linear base learner) and obtaining an unbiased feature selection.", "The penalty term $\\lambda _l = \\operatorname{dro}(\\operatorname{df}, b_l)$ for the corresponding $\\operatorname{df}$ is calculated using the Demmler-Reinsch-Orthogonalization [38].", "The DRO makes use of the correspondence between the degrees of freedom and the penalty term induced by the base learner-specific hat matrix [11].", "More details can, e.g., be found in [41].", "Application: Additional Information and Results Data Set Description Table REF contains a 
], [ "Unbiased Feature Selection", "CWB updates the model fit in a greedy fashion by choosing the best-fitting base learner for the current pseudo-residual vector in every step.", "This can potentially bias the selection of base learners and impair the estimation procedure, as more complex models are preferred over simpler ones.", "This problem and a possible solution are discussed in [20].", "To this end, the authors propose an unbiased feature/base learner selection by assigning the same amount of flexibility to each base learner.", "The main idea of this unbiased feature selection is to measure the flexibility of every base learner $b_l$ via its degrees of freedom $\\operatorname{df}_l$ and set $\\operatorname{df}_l = \\operatorname{df}\\in {R}^+$ for all base learners $b_1, \\dots , b_l$ by choosing an appropriate amount of penalization $\\lambda _l$ .", "Using a large penalization for more complex base learners (e.g., a P-spline base learner) allows for aligning the flexibility to that of simpler base learners (e.g., a linear base learner) and obtaining an unbiased feature selection.", "The penalty term $\\lambda _l = \\operatorname{dro}(\\operatorname{df}, b_l)$ for the corresponding $\\operatorname{df}$ is calculated using the Demmler-Reinsch orthogonalization (DRO) [38].", "The DRO makes use of the correspondence between the degrees of freedom and the penalty term induced by the base learner-specific hat matrix [11].", "More details can, e.g., be found in [41]."
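As a rough numerical illustration of the DRO step (our own sketch under simplifying assumptions, not the routine of [38] or the implementation in [41]; note that [20] work with a slightly different df definition based on $\\operatorname{tr}(2H - H^{\\mathsf {T}}H)$ ), the degrees of freedom of a base learner with design matrix $Z$ and penalty matrix $K$ can be written as $\\operatorname{df}(\\lambda ) = \\sum _j (1 + \\lambda s_j)^{-1}$ , with the $s_j$ obtained from the orthogonalization, so the $\\lambda $ for a target $\\operatorname{df}$ follows from a one-dimensional root search:

```python
import numpy as np
from scipy.linalg import cholesky, eigh, solve_triangular
from scipy.optimize import brentq

def dro_lambda(Z, K, df_target, lam_max=1e12):
    """Penalty lambda such that df(lambda) = trace(Z (Z'Z + lambda K)^{-1} Z')
    equals df_target, via Demmler-Reinsch orthogonalization (sketch)."""
    R = cholesky(Z.T @ Z + 1e-10 * np.eye(Z.shape[1]))   # small ridge for stability
    X = solve_triangular(R, K, trans="T")                # R^{-T} K
    A = solve_triangular(R, X.T, trans="T").T            # R^{-T} K R^{-1}
    s = np.clip(eigh((A + A.T) / 2, eigvals_only=True), 0.0, None)
    df = lambda lam: np.sum(1.0 / (1.0 + lam * s))
    # The null space of K bounds the reachable df from below (here: 2).
    return brentq(lambda lam: df(lam) - df_target, 0.0, lam_max)

p = 10                                                   # e.g., 10 spline basis functions
D = np.diff(np.eye(p), n=2, axis=0)                      # second-order differences
K = D.T @ D                                              # P-spline penalty, null space dim 2
Z = np.random.default_rng(1).normal(size=(300, p))       # stand-in design matrix
lam = dro_lambda(Z, K, df_target=2.2)                    # df = 2.2 as in the application
```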
], [ "Data Set Description", "Table REF contains a description of the features and a summary.", "Table: Description of the covariates used for the application.", "The shown summary is the minimum, first/0.25 quantile, median, mean, third/0.75 quantile, and maximum for numerical covariates and the encoding with the number of instances for categorical covariates." ], [ "Choice of the Hyperparameters", "We run CWB for 100000 iterations to give the algorithm the chance to converge in data situations with less noise.", "When fitting CWB to optimize a GAMM, a convex optimization problem is solved.", "In this case, it should be sufficient to stop the algorithm immediately when risk increases.", "In order to account for numerical inaccuracies, however, we set the patient to 5.", "Furthermore, common defaults for the spline degree, number of knots, and penalization difference order are used.", "Finally, while it is possible to use automatic hyperparameter optimization to tune the degrees of freedom of the model (and all other hyperparameters), we here define the global model flexibility in such a way that this allows a meaningful and fair comparison with the mgcv routine." ], [ "Risk Traces", "Figure REF shows the risk traces obtained during training the distributed CWB algorithm.", "The validation risk is used to determine the stopping iteration.", "The algorithm stops if the risk on the validation data set worsens 5 consecutive times.", "Figure: Train and validation risk per iteration measured during the training.", "The validation risk is used to determine the optimal stopping iteration of the fitting process." ], [ "Partial Feature Effects", "Figure REF and REF visualize the partial effects of all numerical and categorical features.", "Visualized are the shared effect, the site-specific effects and the aggregated effect per site.", "Figure: Partial effects for the numerical attributes.", "The figure on the left shows the shared effect, while the plot in the middle illustrates the site-specific effects, and the right figure contains the sum of both effects.Figure: The blue vertical line indicates the coefficient value of the shared effect.", "The horizontal coloured lines indicate the site-specific correction per site.", "The points are the sum of the shared and site-specific effects." 
], [ "Predicting a new patient", "For this demonstration, we predict a new male patient of age 60 with asymptomatic chest pain, a resting heart rate of 90 bps, an abnormal resting ECGS, an oldpeak value of 3, a maximum heart rate of 160, and angina induced by exercise.", "The predicted score of this patient without a site-specific correction is $\\hat{f}_{\\text{shared}} = -1.112$ .", "Calculating the score with site-specific corrections yields $\\hat{f}_{\\text{cleveland}}= -0.4424$ , $\\hat{f}_{\\text{hungarian}} = 3.0762$ , $\\hat{f}_{\\text{switzerland}} = -0.5589$ , and $\\hat{f}_{\\text{va}} = -0.8586$ , indicating a high risk of heart disease if this man resides in Hungary.", "This is caused by the different effects of sex, exang, cp, and oldpeak in combination with being Hungarian.", "The individual contributions are visualized in Figure REF and show the decomposition of the prediction scores to explain the decision-making of the model.", "The offset of all models is $\\hat{f}^{[0]}({x}) = -0.9091$ .", "Figure: Individual contribution of a new patient to the predicted score.", "The blue vertical line corresponds to the shared effect while the horizontal lines are the site-specific corrections added to the shared effect." ], [ "Comparison of Estimation Approaches", "Figure REF and REF show the partial effects of all numerical and categorical features when compared to the pooled approach and mgcv.", "For all features, our distributed CWB algorithm is yielding exactly the same estimates as when fitting CWB on the pooled data.", "Comparing the feature effects with mgcv reveals a good alignment.", "Figure: Partial feature effects of the numerical features of our distributed CWB approach (dsCWB) compared to the pooled approach (compboost) and mgcv.Figure: Partial feature effects of the categorical features of our distributed CWB approach (dsCWB) compared to the pooled approach (compboost) and mgcv." ] ]
2210.07723
[ [ "Schr\\\"odinger cat states prepared by logical gate with non-Gaussian\n resource state: effect of finite squeezing and efficiency versus monotones" ], [ "Abstract Quantum measurement-induced gate based on entanglement with ideal cubic phase state used as a non-Gaussian resource is able to produce Shr\\\"odinger cat state in the form of two high fidelity ``copies'' of the target state on phase plane [N.I.", "Masalaeva, I.V.", "Sokolov, Phys.", "Lett.", "A 424, 127846 (2022)].", "In this work we examine the effect of finite initial squeezing of the resource state on the gate performance.", "We present exact solution for the gate output state and demonstrate that there exists a degree of squeezing, available in experiment, such that the output cat state quality almost does not impove with the further increase of squeezing.", "On the other hand, the probability of the expected ancilla measurement outcome decreases with squeezing.", "Since an overall efficiency of the conditional scheme should account for the probability of success, we argue that such measures of non-Gaussianity of the resource state as Wigner logarithmic negativiy and non-Gaussianity may not be directly applicable to assess the efficiency of non-Gaussian gates based on quantum entanglement and subsequent projective measurement." ], [ "Introduction", "The continuous-variable (CV) quantum information schemes based on Gaussian resource states were extensively explored both theoretically and experimentally [1], [2], [3], [4], including their essentially multimode implementation [5], [6], [7], [8], [9].", "While the continuous-variable Gaussian cluster schemes are able to perform Gaussian transformations of the input states, in order to achieve universal quantum computing there is a need to introduce the non-Gaussian logical gates  [1], [2].", "A minimal nonlinearity sufficient to prepare non-Gaussian resource states is cubic.", "The cubic phase state based on cubic nonlinearity was first considered in [10], [11], and some approaches to the implementation of such states, as well as of the cubic (or higher) phase gates, were explored both theoretically and experimentally [12], [13], [14], [15], [16], [17].", "Recently, we demonstrated [18], [19] that continuous-variable measurement-induced two-node logical gate is able to prepare Schrödinger cat-like quantum superpositions, if Gaussian (e. g. 
squeezed) resource state of an ancillary oscillator is substituted with a non-Gaussian one, and a standard homodyne measurement is applied.", "This was shown for the cubic phase state used as a resource.", "A variety of methods to prepare CV Schrödinger cat states have been discussed and implemented experimentally so far.", "This can be directly achieved by unitary evolution assisted by a strong enough non-linear interaction [20].", "The schemes based on a hybrid measurement-induced evolution can also create cat-like states.", "Optical Schrödinger cat states were generated using photon subtraction in a low-photon regime [21] and from wide-band CV squeezed light [22], using homodyne detection with a photon number state as a resource [23], and with iterative schemes which allow an incremental enlargement of cat states [24], [25], [26].", "Schrödinger cat states can also be prepared by using a photonic even-parity detector and CV entanglement [27], or by photon number measurement on a two-mode [28] or multimode [29] Gaussian state.", "A key feature of the gate [18], [19] is that the Schrödinger cat state emerges when the ancillary oscillator measurement is compatible not with one, but with two different values of its physical variables.", "Since these values are imprinted by the entanglement into the target oscillator observables, a two-component cat-like quantum superposition is created at the output of the gate.", "This feature does not appear if a Gaussian (squeezed) resource state is used, and it requires a measurement consistent with this criterion.", "Since the experimentally attainable degree of squeezing is limited, in this work we focus on the effect of finite initial squeezing of the resource state on the gate performance.", "The exact solution for the gate output state in terms of the Airy function is presented for a vacuum input state.", "We demonstrate that there exists a degree of squeezing, available in experiment, such that the output superposition quality almost does not improve with a further increase of squeezing.", "This quality is estimated by means of the fidelity between the output state and the superposition of two coherent states well spaced on the phase plane, which is of interest for some error correction protocols.", "On the other hand, we find that the probability of the expected ancilla measurement outcome decreases with squeezing.", "It is natural to require that the overall efficiency of the conditional scheme account for the probability of success.", "We argue that such measures of non-Gaussianity of the resource state as Wigner logarithmic negativity and non-Gaussianity may not be directly applicable to assess the efficiency of non-Gaussian gates, which are based on quantum entanglement and subsequent projective measurement."
], [ "Cat states from CV gate with cubic resource state based on finite initial squeezing", "The non-Gaussian gate based on perfect cubic phase state was introduced in [18], [19].", "The measurement-induced two-node gate uses the cubic phase state of the ancillary oscillator as an elementary non-Gaussian resource, the $C_Z$ operation which entangles an input signal with the ancilla, and the projecting homodyne measurement.", "Under optimal conditions, the gate output state is close to “perfect” Schrödinger cat state, that is, to the superposition of two symmetrically displaced undistorted copies of the input state.", "A key feature of the gate is that the ancilla measurement outcome provides multivalued information about the output state canonical variables, which results in the preparation of a cat-like state.", "This feature is easily interpreted [18], [19] in terms of a clear pictorial representation which is extendable to some other measurement-induced schemes based on the non-Gaussian resources.", "Let us outline briefly the gate operation in a more realistic configuration, when the ancillary resource oscillator is subject to a finite initial squeezing before the cubic nonlinear perturbation is applied to the ancilla.", "In general, the target oscillator may be initially prepared in an arbitrary state $|\\psi _1\\rangle = \\int dx_1\\psi (x_1)|x_1\\rangle ,$ which is assumed to occupy a limited range $\\lbrace \\Delta x_1,\\,\\Delta y_1\\rbrace $ of the coordinate and momentum.", "As the initial state of the ancillary oscillator, prepared before the cubic evolution is applied, we consider the squeezed state whose uncertainty region on the phase plane is squeezed along the momentum axis by the factor $s \\le 1$ , $\\psi ^{(sq)}(x_2) = \\frac{\\sqrt{s}}{\\pi ^{1/4}}e^{-(sx_2)^{2}/2}.$ In order to prepare the non-Gaussian cubic resource state of ancilla, one applies the unitary evolution operator $\\exp (i\\gamma q_2^3)$ to the state (REF ).", "In the following, we use the notation $\\lbrace q,\\,p\\rbrace $ for the coordinate and momentum operators.", "Next, the $C_Z$ entangling unitary evolution operator $\\exp (iq_1q_2)$ is applied, which prepares the state $|\\psi _{12}\\rangle = \\frac{\\sqrt{s}}{\\pi ^{1/4}}\\int dx_1dx_2\\psi (x_1) e^{ix_2(x_1 + \\gamma x_2^2)}e^{-(sx_2)^{2}/2}|x_1\\rangle |x_2\\rangle ,$ and the ancillary oscillator momentum is measured with the outcome $y_m$ .", "Projecting the state (REF ) on the homodyne detector eigenstate $|y_m\\rangle _{p_2}$ , one arrives at the target oscillator output state wave function (unnormalized), $\\tilde{\\psi }^{(out)}(x) = \\psi _{1}(x)\\tilde{\\varphi }(x-y_m),$ where the target oscillator input state wave function is multiplied by the factor which is expressed [30] in terms of the Airy function, $\\tilde{\\varphi }(x-y_m) =\\frac{\\sqrt{s}}{\\pi ^{3/4}\\sqrt{2}}\\int dx^{\\prime } \\; e^{ix^{\\prime }(x-y_{m}+\\gamma x^{\\prime 2})}e^{-(sx^{\\prime })^{2}/2} =\\\\\\frac{\\sqrt{2s}\\,\\pi ^{1/4}}{(3\\gamma )^{1/3}}\\exp \\left[\\frac{s^2}{6\\gamma }\\left(x - y_m + \\frac{s^4}{18\\gamma }\\right) \\right]{\\rm Ai}\\left[\\frac{1}{(3\\gamma )^{1/3}}\\left(x -y_m + \\frac{s^4}{12\\gamma }\\right)\\right].$ The probability density of the ancilla momentum measurement outcome $y_m$ is $P(y_m) = \\langle \\tilde{\\psi }^{(out)}|\\tilde{\\psi }^{(out)}\\rangle = \\int dx \\big |\\tilde{\\psi }^{(out)}(x)\\big |^2.$ The state conditionally prepared by the gate finally is given by $\\psi ^{(out)}(x) = \\frac{1}{\\sqrt{P(y_m)}} 
\\tilde{\\psi }^{(out)}(x).$ Previously [19], we have explored in detail the performance of the gate based on the ideal initial resource state (that is, in the limit where instead of the squeezed state (REF ) one uses the momentum eigenstate $|0\\rangle _{p_2}$ ) for a representative set of Fock states of the target oscillator.", "Here we focus on the effect of finite initial squeezing.", "To be specific, we assume the target oscillator to be in the vacuum state, which is given by (REF ) with $x_2\\rightarrow x_1$ , $s=1$ .", "As seen from the exact result (REF ), in the limit of perfect squeezing, $s\\rightarrow 0$ , the added factor vanishes, and $P(y_m)\\rightarrow 0$ .", "In this limit, the probability density to observe any given value $y_m$ of the ancilla momentum also vanishes.", "Therefore, in our previous work we were able to explore in detail the quantum statistics of the emerging Schrödinger cat-like states in dependence on the cubic interaction parameter $\\gamma $ and on the measurement outcome $y_m$ , but could not discuss the probability density and propose an optimal choice of the initial ancilla squeezing.", "Let us first consider how a finite initial squeezing affects the quality of the emerging superpositions.", "In [19] we compared the exact solution obtained under the assumption of perfect squeezing with the state closest to it, which corresponds to the Schrödinger cat state in the form of a superposition of two undistorted copies of the vacuum state of the target oscillator (i. e., the Glauber states), symmetrically spaced on the phase plane along the momentum quadrature, $|\\psi _{cat}^{(out)}\\rangle = \\frac{\\left(e^{i\\theta } |\\alpha \\rangle + e^{-i\\theta }|-\\alpha \\rangle \\right)}{\\sqrt{2\\big (1 +\\cos (2\\theta ) e^{-2|\\alpha |^2}\\big )}},$ where $\\alpha = ip^{(+)}, \\quad p^{(+)} = \\sqrt{y_m/3\\gamma }, \\quad \\theta =\\frac{\\pi }{4} - \\frac{2}{3\\sqrt{3\\gamma }}y_m^{3/2}.$", "We have shown that in the limit of perfect squeezing and within a relevant range of the parameters $\\lbrace \\gamma , y_m\\rbrace $ , the proposed gate prepares with high fidelity the superpositions (REF ) of Glauber states.", "Since the experimentally available squeezing is limited, it remained unclear to which degree a finite squeezing can affect the possibility to obtain Schrödinger cat states close to (REF ).", "Figure: Infidelity $1-F_{cat}$ for the ancilla momentum measurement outcomes $y_m=3,\\,3.6,\\,4.5,\\,6,\\,9,\\,15$ , respectively, in dependence on the initial ancilla state squeezing, where $1/s \\ge 1$ is the stretching factor of the ancilla coordinate quadrature.", "In these plots, the cubic deformation coefficient is chosen as $\\gamma = y_m/30$ , which provides cat states with the same spacing $2p^{(+)} = 3.16$ (see REF ) between the copies along the momentum axis.", "In Fig.", "REF we represent the effect of initial squeezing on the infidelity $1-F_{cat}$ between the gate output state (REF ) and the cat state (REF ), where $F_{cat} = \\left| \\int dx \\psi ^{(out)*}(x)\\psi ^{(out)}_{cat}(x)\\right|^2.$", "Fig.", "REF shows that, starting from some threshold squeezing, which is achievable in experiment, a further increase in squeezing does not lead to a significant increase in fidelity.", "The fidelity difference from unity in this limit is due to local deformations of the input state copies on the phase plane, as seen from Figs.", "REF and REF .", "This deformation becomes smaller for larger values of the parameter $\\gamma $ of
cubic nonlinearity.", "Figure: Output state Wigner function for the cubic deformation coefficient $\\gamma =0.1$ and the ancilla momentum measurement outcome $y_m=3$ (low fidelity configuration).", "The ancilla initial squeezing is 5 dB (a), 9 dB (b), and 14 dB (c), which corresponds to $1/s=1.78$ , 2.82, and 5.01, respectively.", "Figure: Same as in Fig.", "REF for the cubic deformation coefficient $\\gamma =0.5$ and the ancilla momentum measurement outcome $y_m=15$ (high fidelity configuration).", "Figure: Probability density $P(y_m)$ of the measurement outcomes $y_m=3,\\,6,\\,9,\\,12,\\,15$ , respectively, in dependence on the initial ancilla state squeezing, where $1/s \\ge 1$ is the stretching factor of the ancilla coordinate quadrature.", "For any measurement outcome in these plots, the cubic deformation coefficient is assumed to be $\\gamma = y_m/30$ , which provides the spacing $2p^{(+)} = 3.16$ between the centers of the copies along the momentum axis.", "However, as seen from Fig.", "REF , the degree of initial squeezing significantly affects the probability of obtaining the expected measurement result $y_m$ .", "This is due to the fact that the probability density per unit momentum interval of the ancillary oscillator, initially prepared in the cubic phase state, essentially depends on squeezing, as illustrated in Fig.", "REF .", "Figure: A visual representation of the ancillary oscillator support region on the phase plane before and after the cubic deformation with $\\gamma =0.1$ for the initial squeezing of 5 and 14 dB, respectively.", "Two distinct areas on the phase plane of the ancilla observables compatible with the same momentum measurement outcome $y_m=3$ are marked by the asterisks.", "Due to entanglement, both values of the ancilla coordinate are transferred to the target oscillator observables, thus generating the two-component cat state.", "For a qualitative interpretation of this feature of the gate under consideration, we introduce in Fig.", "REF the support regions of the cubic phase state for 5 and 14 dB squeezing, built in the semiclassical approximation, i. e.
when the evolution of points on the phase plane caused by the cubic Hamiltonian $ -\\gamma q_2^3$ is described via a semiclassical mapping of the form $x_2 \\rightarrow x_2, \\quad y_2 \\rightarrow y_2 + 3\\gamma x_2^2$ .", "At low squeezing, the resource quasi-cubic state has a low probability density near $p_2 = y_m$ , as shown in Fig.", "REF for 5 dB squeezing.", "For an arbitrarily large squeezing, the support region of the resource state is stretched infinitely into an arbitrarily “thin” parabola, and the probability of the expected momentum measurement result vanishes.", "As is well known, the correct quantum description of the statistical distribution corresponding to a given quantum state is given by its Wigner function.", "In Fig.", "REF we show the Wigner function of the cubic phase state built for the same values of the parameters of cubic interaction and squeezing as in Figs.", "REF , REF , and REF .", "Figure: Wigner function of the non-ideal cubic phase state for the cubic deformation coefficient $\\gamma =0.1$ , generated from the initial ancilla state with squeezing of 5 dB (a), 9 dB (b) and 14 dB (c).", "One can note a good agreement between the results that follow from the exact form (REF ) of the gate output state and the qualitative picture introduced above.", "This means that despite the more complex behavior of the cubic phase state Wigner function in comparison with its semiclassical interpretation, some features present in the behavior of the Wigner function (e. g., oscillations) may average out and not manifest themselves in the output state.", "Hence, in turn, we can conclude that our approach [18], [19] to the qualitative analysis of gates that use non-Gaussian resource states and measurement can be useful for the search and evaluation of other logical devices."
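The exact solution above is straightforward to evaluate numerically; the following sketch (our own illustration, not the authors' code) computes the unnormalized output state for a vacuum input together with the probability density $P(y_m)$ , with the squeezing in dB converted to the amplitude factor $s$ under the usual convention $s = 10^{-\\mathrm {dB}/20}$ :

```python
import numpy as np
from scipy.special import airy

def phi_tilde(x, y_m, gamma, s):
    """Airy-function factor multiplying the input wave function."""
    c = (3.0 * gamma) ** (1.0 / 3.0)
    pre = np.sqrt(2.0 * s) * np.pi ** 0.25 / c
    expo = np.exp(s**2 / (6.0 * gamma) * (x - y_m + s**4 / (18.0 * gamma)))
    ai = airy((x - y_m + s**4 / (12.0 * gamma)) / c)[0]   # airy() -> (Ai, Ai', Bi, Bi')
    return pre * expo * ai

def gate_output(x, y_m, gamma, s):
    """Normalized output state for a vacuum input and the density P(y_m)."""
    psi_in = np.pi ** (-0.25) * np.exp(-x**2 / 2.0)       # vacuum input state
    psi = psi_in * phi_tilde(x, y_m, gamma, s)            # unnormalized output
    p_ym = np.trapz(np.abs(psi) ** 2, x)                  # probability density of y_m
    return psi / np.sqrt(p_ym), p_ym

x = np.linspace(-20.0, 20.0, 4001)
for db in (5.0, 9.0, 14.0):                               # initial squeezing in dB
    s = 10.0 ** (-db / 20.0)                              # 1/s = stretching factor
    _, p = gate_output(x, y_m=3.0, gamma=0.1, s=s)
    print(f"{db:4.1f} dB: P(y_m) = {p:.4g}")
```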
], [ "Efficiency of the gate versus monotones", "Recently, the problem of universal measures of non-Gaussianity of the states that can serve as a resource for universal quantum computing has been widely discussed [31].", "One could expect that a non-Gaussian resource state, for which a well-chosen measure is larger, would be able to take more efficiently the result of quantum evolution out of the class of Gaussian schemes when used optimally.", "Among the measures discussed so far in this context are the non-Gaussianity and Wigner logarithmic negativity.", "For finite squeezing, the non-Gaussianity and Wigner logarithmic negativity of the cubic phase state, which our gate uses as a non-Gaussian resource, are [32] monotonous functions of the parameter $\\gamma /s^3$ (in the notation of our work, $1/s \\ge 1$ is the stretching factor of the coordinate quadrature).", "This parameter increases with the degree of squeezing.", "Our results demonstrate that for large enough parameter $\\gamma $ of the cubic non-linearity, our gate can conditionally produce the output state which is very close to the cat-like superposition (REF ) of the Glauber states almost independently on the degree of squeezing (after some threshold value, see Fig.", "REF ).", "If the measured ancilla momentum $y_m$ fits to the range specified before the measurement, the “amount of non-Gaussianity” in the output cat-like state becomes, for large enough squeezing, almost independent on the squeezing present in the parameter $\\gamma /s^3$ , no matter which measure is used.", "However, from the point of view of a real use of the measurement-assisted gates, the concept of efficiency should include not only the quality of the prepared states, but also the probability of obtaining the necessary measurement result.", "In our scheme, the “amount of non-Gaussianity” present in the output state, weighted by its probability shown in Fig.", "REF , vanishes in the limit of perfect squeezing.", "That is, the use of cubic phase state with the larger value of the parameter $\\gamma /s^3$ and, hence, with the larger non-Gaussianity and Wigner logarithmic negativity, may render the gate less efficient in the sense mentioned above.", "One can conclude from this example that the non-Gaussianity measures based on global properties of the resource states may be generally not suitable for the schemes based on the entanglement and projective measurements.", "Physically speaking, the efficiency of such schemes may depend more on the resource state behavior in the region of phase space, where the resource state overlaps with the state detected by the measuring device, than on its global properties." 
], [ "Conclusion", "We have investigated quantum statistical properties of two-component Schrödinger cats created at the output of the conditional non-Gaussian logical gate that we have proposed previously.", "The gate is based on quantum entanglement of the target oscillator with the ancillary one, and the subsequent projective measurement of ancilla.", "A key feature of the regime where cat-like states arise is that the measurement provides multivalued information about the target system physical variables.", "As a non-Gaussian resource, we considered here cubic phase state generated from the initial squeezed state of ancilla with an arbitrary finite initial squeezing.", "We showed that in the relevant range of parameters, an increase in the degree of initial squeezing above some threshold range achievable in experiment has negligible effect on the fidelity between the gate output state and the Schrödinger cat state in the form of superposition of two coherent Glauber states.", "On the other hand, we demonstrated that the degree of initial squeezing essentially affects the probability of the projective measurement outcome that leads to the desired output state.", "This probability vanishes for both too small and too large degree of squeezing, and reaches its maximum value at a certain degree of squeezing, which depends on the degree of cubic non-linearity used in the preparation of the resource state and on the output state parameters.", "We provided a simple physical interpretation of these results using both the exact and semiclassical representation of the resource state properties and of the measurement procedure on the phase plane.", "Based on our results, we also draw attention to the fact that such measures of non-Gaussianity of the resource state, as Wigner logarithmic negativiy and non-Gaussianity, may be not applicable to assess the overall efficiency of non-Gaussian gates which use quantum entanglement and subsequent projective measurement." ] ]
2210.07705
[ [ "Plausible May Not Be Faithful: Probing Object Hallucination in\n Vision-Language Pre-training" ], [ "Abstract Large-scale vision-language pre-trained (VLP) models are prone to hallucinate non-existent visual objects when generating text based on visual information.", "In this paper, we exhaustively probe the object hallucination problem from three aspects.", "First, we examine various state-of-the-art VLP models, showing that models achieving better scores on standard metrics(e.g., BLEU-4, CIDEr) could hallucinate objects more frequently.", "Second, we investigate how different types of visual features in VLP influence hallucination, including region-based, grid-based, and patch-based.", "Surprisingly, we find that patch-based features perform the best and smaller patch resolution yields a non-trivial reduction in object hallucination.", "Third, we decouple various VLP objectives and demonstrate their effectiveness in alleviating object hallucination.", "Based on that, we propose a new pre-training loss, object masked language modeling, to further reduce object hallucination.", "We evaluate models on both COCO (in-domain) and NoCaps (out-of-domain) datasets with our improved CHAIR metric.", "Furthermore, we investigate the effects of various text decoding strategies and image augmentation methods on object hallucination." ], [ "Introduction", "Thanks to the advancement of large pre-trained Language Models (LMs) and Vision-Language Pre-training (VLP) methods, models are able to achieve surprisingly good performance in vision-conditioned text generation, e.g., image captioning.", "However, large LMs are found to often generate unfaithful or nonsensical texts given the source input [17], which is called hallucination.", "This problem is also inherited to VLP models [2], as they often generate fluent and seems likely sentences if we only see the text, but wrong when includes the visual inputs.", "One major type of hallucination in VL is known as object hallucination [44], where models generate non-existent or inaccurate objects from the input image.", "Object hallucination in VLP models essentially limit their performance and raise safety concerns for industrial applications.", "For example, in biomedical image captioning [39], object hallucination reduces the accuracy of diagnosis and may lead to severe consequences to the patient.", "Despite the limitations and potential risks caused by the object hallucination, this problem in VLP models has not been studied in contemporary works yet.", "To narrow down the aforementioned research gap, we systematically investigate four fundamental research questions about object hallucination: 1) how much do modern VLP models hallucinate?", "2) how do different forms of image encoding affect object hallucination?", "3) what are the effects of various VLP objectives on object hallucination?", "and 4) how to alleviate object hallucination based on our findings?", "To evaluate object hallucination, we adopt and improve upon the CHAIR metric, Caption Hallucination Assessment with Image Relevance, proposed by [44].", "In addition to the in-domain COCO dataset, we extend the evaluation with NoCaps to further assess the faithfulness of generated captions in the out-of-domain scenario.", "For our first question, we examine recently proposed VLP models, showing that they still hallucinate frequently, especially on out-of-domain images even if they have been pre-trained on millions of image-text pairs.", "Interestingly, models achieving better scores on previous 
standard metrics (e.g., BLEU-4, CIDEr) could hallucinate more often.", "Additionally, we discover that the widely adopted optimization method SCST [42] leads to a more severe hallucination problem.", "Second, we investigate how different types of image encoding in VLP influence hallucination, including region-based, grid-based, and patch-based.", "Surprisingly, we find that patch-based features perform the best and smaller patch resolution yields a non-trivial reduction in object hallucination.", "Third, we decouple common VLP objectives, demonstrating that discriminative losses (e.g., cross-modal contrastive, matching, and their variants), which learn global multimodal representations, do not mitigate object hallucination.", "Generative losses, in contrast, indeed reduce hallucination, while different pre-training datasets lead to distinctive model behaviors.", "Finally, besides the discoveries above, we propose a new VLP loss, namely object masked language modeling, to further alleviate object hallucination by enhancing the alignment between text tokens and visual objects during generation.", "Our contributions are three-fold: We systematically investigate state-of-the-art vision-language pre-trained models on the object hallucination problem, showing that it is still far from resolved and that previous methods which improve standard metrics may result in even worse hallucination.", "We study the effects of different types of image encodings and decouple three common VLP objectives to analyze which parts of modern VLP methods impact object hallucination.", "We propose a simple yet effective pre-training objective to mitigate object hallucination, namely object masked language modeling.", "Experimental results show that it reduces object hallucination by 17.4% without the need for a new dataset.", "We believe our insightful findings will pave the way for building more responsible and reliable VLP models.", "Code and evaluation setups will be released." ], [ "Hallucination in Deep Learning", "Generally, the term hallucination denotes the appearance of undesirable output that is unfaithful to the conditional input [35], even though it may appear to be fluent or reasonable.", "In the multimodal field, the hallucination phenomenon refers to the prediction of non-existent or incorrect objects (e.g., in object detection or image captioning) and is called object hallucination [44], [6].", "Despite the success of deep learning models, they suffer from the hallucination problem, which degrades performance and hinders practical applications [17].", "Many works have been proposed to mitigate hallucination in recent years.", "[36] applied data refinement with self-training to improve the equivalence between the input and the paired text in the data-to-text generation task.", "[59] proposed the uncertainty-aware beam search as an add-on technique to the original beam search, in both image captioning and data-to-text generation.", "To reduce hallucination in dialog systems, [47] introduced knowledge augmentation and [14] presented a post-processing method to refine generated outputs.", "[49] augment the generation model with fine-grained, answer-related salient information predicted by a machine reading comprehension module, to reduce hallucination in the generative question answering task."
], [ "Vision-Language Pre-training", "The research on vision-language pre-training (VLP) has progressed vastly in recent years.", "Due to the demand for large-scale data, most VLP methods use self-supervised pretraining objectives to utilize image-text pairs crawled from the web.", "In the beginning, BERT [11]-style VLP models [33], [53], [27], [7], [46] are trained to perform multimodal understanding tasks, using objectives like image-text matching and masked language modeling.", "Later, encoder-decoder architectures are introduced to additionally handle multimodal generation tasks with a causal language modeling loss [28], [30], [8], [12], [25], [57].", "Another line of research uses a dual-stream architecture [40], [18], [62], [61] with separate image and text encoders aligned together through an image-text contrastive loss.", "They improve the performance of various multimodal downstream tasks by a large step.", "[2] show that fatal object hallucination can happen naturally or be provoked by the adversarial prompting in modern VLP models.", "However, in previous works, how different VLP strategies influence the faithfulness of generated text given images has not been studied.", "Moreover, the effects of using different types of image encoding are also unclear, including region-based [29], [63], [16], grid-based [58], and patch-based [21], [26]." ], [ "Evaluation Setup", "In this section, we first introduce the CHAIR evaluation metric and our proposed improvements to it in sec:evalmetric.", "Then, in sec:evaldatasets, we discuss the datasets used for evaluation and explain how to calculate CHAIR under different settings." ], [ "Evaluation Metric", "We use the CHAIR metric, Caption Hallucination Assessment with Image Relevance, proposed by [44] to measure the object hallucination.", "CHAIR calculates what proportion of object words generated are actually in the image according to the ground truth.", "CHAIR has two variants: CHAIR$_i$ (instance-level) and CHAIR$_s$ (sentence-level), which are formulated as follows: $\\textrm {CHAIR}_i = \\frac{\\textrm {\\# \\lbrace hallucinated objects\\rbrace }}{\\textrm {\\# \\lbrace all objects in ground truth\\rbrace }},$ $\\textrm {CHAIR}_s = \\frac{\\textrm {\\# \\lbrace hallucinated sentences\\rbrace }}{\\textrm {\\# \\lbrace all sentences\\rbrace }},$ where CHAIR$_i$ measures the proportion of hallucinated objects over all the ground-truth objects (note that it calculates sample by sample and then averages the score over all samples), and CHAIR$_s$ measures the proportion of the hallucinated sentence (has at least one hallucinated object) over all sentences.", "We notice that the CHAIR$_i$ score will tend to be small when there are substantial objects in the ground truth (the denominator becomes large) or when the model tends to generate a small number of objects, leading to a relatively small number of hallucinated objects (the numerator becomes small).", "Therefore, we propose a modified version of $\\text{CHAIR}_i$ as follows: $\\textrm {CHAIR}_i^{\\prime } = \\frac{\\textrm {\\# \\lbrace hallucinated objects\\rbrace }}{\\textrm {\\# \\lbrace all objects in prediction\\rbrace }},$ where the denominator denotes the number of predicted objects.", "We can see that $\\text{CHAIR}_i^{\\prime }$ score measures the proportion of hallucinated objects in the generation, which will not be affected by the number of objects in the ground truth.", "Compared to CHAIR$_i$ , $\\text{CHAIR}_i^{\\prime }$ can better measure the likelihood of the model's object 
hallucination.", "Unless mentioned otherwise, CHAIR$_i$ refers to our modified version in the following sections.", "Table: Image captioning results of recent state-of-the-art VLP models on the COCO Caption Karpathy test set and NoCaps validation set.", "Here, B@4, C, M, S, and CH denote BLEU-4, CIDEr, METEOR, SPICE, and CHAIR, respectively.", "CIDEr Optim indicates whether the SCST CIDEr optimization is used or not.", "All results are generated by using the officially provided checkpoints and hyper-parameters; * means the model is finetuned by us, as the provided checkpoint is broken.", "$\\dagger$ denotes that the model also uses unimodal data besides image-text pairs." ], [ "COCO Caption.", "The COCO Caption [31] is a large-scale and widely used dataset for the training and evaluation of the image captioning task.", "We use the Karpathy split [20], in which 82K, 5K, and 5K images are in the train, validation, and test sets, respectively.", "Each image is annotated with at least five ground truth captions.", "To calculate CHAIR scores on this dataset, we follow the setting proposed in [44].", "In practice, we first tokenize each sentence and then singularize each word.", "Then, we use a list of synonyms from [34] to map fine-grained objects to the pre-defined 80 coarse-grained MSCOCO object categories (e.g., mapping “puppy”, “chihuahua”, “poodle” objects to the “dog” object).", "The purpose of this mapping is to ensure that we do not detect hallucinated objects by mistake.", "For example, when the ground-truth caption only contains the “puppy” object, the CHAIR metrics would consider a generated “dog” object as hallucinated if we did not map “puppy” to the “dog” category." ], [ "NoCaps.", "The NoCaps [1] dataset aims to evaluate models trained on the training set of COCO Caption to examine how well they generalize to a much larger variety of visual concepts, i.e., unseen object categories.", "There are 4,500 images in the validation set and 10,600 images in the test set.", "The images are from the Open Images V4 [23] dataset, which contains 600 object classes.", "To calculate CHAIR scores on this dataset, we follow the setting used for COCO Caption.", "Specifically, we map the fine-grained classes defined in NoCaps to coarse-grained categories based on the hierarchical object relationships (https://github.com/nocaps-org/image-feature-extractors/blob/master/data/oi_categories.json) to improve the effectiveness of the CHAIR metrics.", "In particular, we only add two types of object categories to our final object list: 1) the super-category that has sub-categories, and 2) the object category that has neither super-category nor sub-categories.", "Eventually, we construct a list of 139 coarse-grained object categories from the 600 classes."
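The computation behind these scores is simple enough to sketch; the snippet below is our own illustration with a toy synonym table (the real evaluation additionally tokenizes and singularizes the captions and uses the full synonym lists of [34]):

```python
def map_objects(words, synonyms):
    """Map fine-grained object words to coarse categories, e.g. 'puppy' -> 'dog'."""
    return {synonyms.get(w, w) for w in words}

def chair_scores(predictions, references, synonyms):
    """predictions/references: per-image lists of object words.  The CHAIR_i
    variants are computed per sample and then averaged; CHAIR_s is the
    fraction of captions with at least one hallucinated object."""
    ci, ci_prime, n_bad = [], [], 0
    for pred, ref in zip(predictions, references):
        p, r = map_objects(pred, synonyms), map_objects(ref, synonyms)
        halluc = p - r                                   # hallucinated objects
        n_bad += bool(halluc)
        if r: ci.append(len(halluc) / len(r))            # original CHAIR_i
        if p: ci_prime.append(len(halluc) / len(p))      # modified CHAIR_i'
    mean = lambda v: sum(v) / len(v) if v else 0.0
    return {"CHAIR_s": n_bad / len(predictions),
            "CHAIR_i": mean(ci), "CHAIR_i'": mean(ci_prime)}

synonyms = {"puppy": "dog", "chihuahua": "dog", "kitten": "cat"}   # toy table
preds = [["dog", "frisbee"], ["cat", "sofa"]]
refs = [["puppy", "frisbee"], ["kitten"]]
print(chair_scores(preds, refs, synonyms))   # CHAIR_s=0.5, CHAIR_i=0.5, CHAIR_i'=0.25
```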
], [ "Object Hallucination in VLP Models", "Benefitting from the vast advancement of various VLP methods, the performance of image captioning has been improved by a large step.", "Generally, this performance is measured by metrics like CIDEr [56], SPICE [3], METEOR [5], and BLEU [37], which consider the semantic and syntactic similarity or n-gram-based fluency between the model generated and ground truth captions.", "However, the faithfulness of generated captions is neglected.", "In this section, we provide a preliminary analysis of recently proposed VLP models on the image captioning task to investigate and understand how much they hallucinate when generating text conditioned on an image.", "The results are shown in Table REF .", "Models are finetuned on the COCO Karpathy training set and evaluated on both of the COCO Karpathy test set and the NoCaps validation set.", "Figure: Comparison of image captioning examples generated by VinVL Base _{Base} and OFA Large _{Large} with and without the SCST CIDEr optimization.", "Red color denotes the object is hallucinated.Overall, we observe two noteworthy insights.", "First, for all CHAIR scores, they are not proportional to standard evaluation metrics.", "Although standard metrics (e.g., the cosine similarity in CIDEr) could potentially penalize the wrong object prediction, they do not directly reflect faithfulness.", "Captions can still have good scores from standard metrics as long as they contain sufficient accurate objects, even if hallucinated objects exist.", "For example, VinVL$_{Large}$ achieves higher CIDEr and BLEU-4 scores than VinVL$_{Base}$ , but its CHAIR scores are also higher.", "Figure: An overview of the model architecture for image captioning and vision-language pretraining objectives, including image-text contrastive (ITC), matching (ITC), and image-conditioned language modeling (LM).Second, the Self-Critical Sequence Training (SCST) [42] for the CIDEr optimization method harms the faithfulness of generated captions.", "SCST is a reinforcement learning algorithm that has been widely adopted as a second-stage finetuning method after the standard cross-entropy optimization for image captioning [4], [64], [29], [63], [16], [57].", "It calculates the reward based on the CIDEr score by sampling captions during training without the need of another baseline.", "Although SCST can significantly boost previous standard metric scores, it encourages models to hallucinate more inaccurate objects in the captions.", "For example, applying SCST improves the CIDEr score by 11.1 and BLEU-4 score by 2.7 for VinVL$_{Base}$ , yet it also increases 0.9 CHAIRs score on the COCO dataset.", "Moreover, this problem becomes more severe on out-of-domain images.", "For the VinVL$_{Base}$ model, there are 10.9% more generated captions containing at least one hallucinated object after using SCST.", "We speculate that the CIDEr-based optimization encourages models to generate more n-grams with higher cosine similarity values to the ground truth captions in the multimodal representation space, which can be plausible but not faithful.", "We show a case study in Figure REF .", "After finetuned by SCST, models will take a bigger risk to generate more detailed yet incorrect information (e.g., in the second example in Figure REF , the sentence with hallucination generates more detailed information “mirror”, which cannot be found in the image).", "This will further amplify the object hallucination problem on out-of-domain images as models may have lower confidence 
on unfamiliar visual objects.", "Although insightful, these preliminary results cannot reveal more detailed reasons for the object hallucination, as different VLP models use different architectures, pre-training datasets, pre-training objectives, etc.", "In the following sections, we will study how various strategies influence hallucination and how to mitigate it." ], [ "Probing Image Features and VLP Objectives", "In this section, we investigate two fundamental factors of VLP models that can potentially affect the degree of object hallucination: 1) the type of visual features, including region-based, grid-based, and patch-based; and 2) the pretraining objectives in VLP.", "We first introduce the model architecture in Section REF .", "Then, we ablate and compare the effect of different visual formats (Section REF ), analyze VLP objectives that could intuitively influence object hallucination (Section REF ), and finally propose a simple yet effective technique to mitigate this problem.", "Implementation details are included in Appendix ." ], [ "CLIP.", "CLIP [40] is a dual-stream VLP model that consists of an image encoder and a text encoder.", "It is pretrained on 400 million image-text pairs using a cross-modal contrastive loss.", "Specifically, CLIP explores image encoders of different sizes based on two architectures (https://github.com/openai/CLIP/blob/main/model-card.md), the ResNet [15] and the Vision Transformer (ViT) [13].", "The resulting image and text encoders are aligned in the same multimodal feature space." ], [ "BERT.", "BERT [11] is a Transformer [55] model pre-trained on a large corpus with the masked language modeling (MLM) and sentence permutation losses.", "It is shown to have excellent performance on various downstream tasks after finetuning.", "Moreover, BERT can also handle generation tasks when the self-attention layers are restricted to the left-to-right direction to generate text auto-regressively.", "In this paper, we refer to this variant as BertLM.", "We design a flexible architecture that can plug in various visual encoders and fit modern VLP objectives without introducing extra influencing factors.", "As shown in Figure REF , the model consists of two parts: a visual encoder to encode images and a text decoder to generate captions conditioned on the image representations.", "We use two separate modules rather than a unified single-stream model, as this makes it convenient to alter the visual encoder while keeping the text decoder the same.", "Specifically, for region-based image features, we explore the Faster R-CNN object detector [41] with two different backbones: the ResNet-101 used in BUTD [4] and the ResNeXt-152 [60] used by [63].", "They are both pretrained on the COCO [31] and Visual Genome [22] datasets for object detection.", "For the grid-based convolutional image features and the patch-based image features, we adopt the visual encoders from the CLIP family, as all its variants are pretrained on the same visual data.", "Table: Results of different types of visual encoders with the same BertLM text decoder on the COCO Karpathy test set and NoCaps validation set (out-of-domain)."
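To make the two-part design concrete, here is a deliberately simplified PyTorch sketch of a text decoder cross-attending to visual features; it is our own minimal stand-in (a generic Transformer decoder rather than the actual BertLM weights), and it accepts region-, grid-, or patch-based features alike once they are projected to the model width:

```python
import torch
import torch.nn as nn

class CaptionDecoder(nn.Module):
    """Toy left-to-right text decoder conditioned on visual features."""
    def __init__(self, vocab_size, d_model=768, n_heads=12, n_layers=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens, visual_feats):
        # tokens: (B, T) ids; visual_feats: (B, N, d_model) from any encoder.
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.decoder(self.embed(tokens), visual_feats, tgt_mask=causal)
        return self.lm_head(h)               # next-token logits for the LM loss

feats = torch.randn(2, 197, 768)             # e.g. 196 ViT patches + [CLS]
tokens = torch.randint(0, 30522, (2, 12))    # BERT-sized toy vocabulary
logits = CaptionDecoder(vocab_size=30522)(tokens, feats)
```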
], [ "Effects of Different Image Features", "Recognizing visual objects correctly is crucial for avoiding object hallucination.", "In Table REF , we compare the performance of different visual encoders with the same text decoder on COCO (in-domain) and NoCaps (out-of-domain) datasets.", "Overall, patch-based visual encoders attain the best performance in terms of object hallucination.", "Models with grid features hallucinate more frequently when achieving comparable CIDEr scores to the other models.", "For example, on COCO, RN50$\\times $ 16 has similar CIDEr to ViT-B/16 but higher CHAIR$_s$ , which is also observed between RN50$\\times $ 64 and ResNeXt-152.", "We conjecture that the inductive biases of the Convolutional Neural Network (CNN), such as locality and translation invariance, weaken the connection of different characteristics of a single object and thus lead to more hallucination.", "Oppositely, regional or patch-level features are obtained by directly dividing images into different parts and further encode them through positional embeddings.", "In addition, we see that a smaller patch resolution helps to reduce object hallucination without enlarging the model size.", "For region-based visual encoders, although they achieve modest results on COCO with relatively small model sizes, their performance of object hallucination on out-of-domain images drops dramatically.", "One important reason is that the output of such encoders only contains representations of detected visual objects rather than the whole image, which may amplify detection errors as there is much less context.", "Moreover, as the object detector is pretrained separately from the whole model and fixed during finetuning, this gap could also aggravate object hallucination on unseen images." ], [ "Effects of Different VLP Objectives", "Based on the best performing ViT-L/14 baseline, we explore three commonly used vision-language pre-training objectives and their variants that could possibly affect object hallucination." ], [ "Pre-training Datasets", "We explore two datasets for pre-training: 1) the VG Caption from the Visual Genome [22] dataset, which contains 10K images with half overlapped with COCO; and 2) the more large-scale CC3M [45] dataset that contains 3 millions of image-text pairs.", "Table: Comparison of the effects of different VLP objectives and their combination on object hallucination." ], [ "Image-Text Contrastive (ITC) Loss", "The cross-modal contrastive loss is shown to be fairly effective in representation learning [54], [48] and vision-language pre-training [40], [26], [25].", "It aligns the visual and textual representations into the same multimodal feature space by shortening the distance between an image and a text if they are paired, and enlarging if they are not.", "Counter-intuitively, as shown in Table REF (b), ITC does not have any improvement on the faithfulness of generated captions.", "We speculate that it only enhances model's understanding on global-level representations rather than object-level similarities.", "To verify, we test the ITC with a more fine-grained token-level late interaction (ITC$_\\textit {Late}$ ) proposed by [61].", "As shown in Table REF (c), ITC$_\\textit {Late}$ is more effective than the original ITC and slightly reduce object hallucination.", "We think this is benefit from the word-patch alignment ability enabled by ITC$_\\textit {Late}$ , as illustrated in [61]." 
], [ "Image-Text Matching (ITM) Loss", "ITM is a widely used objective in VLP [24], [7], [65].", "It is a binary classification task that aims to make the model learn whether an image and a sentence are paired or not.", "Based on that, ITM with hard negatives (ITM$_\\textit {Hard}$ ) is introduced to increase the difficulty of the task, which is shown to be very effective [19], [43], [28].", "We follow the ITM loss proposed by [25], in which an in-batch negative example is sampled either uniformly (normal) or from the similarity distribution of image-text pairs computed by ITC (hard).", "The results are exhibited in Table REF (d) and (e).", "Similar to ITC, both ITM and ITM$_\\textit {Hard}$ provide no enhancement on object hallucination.", "Although the ITM$_\\textit {Hard}$ can be seen as an analogy to the object hallucination problem (plausible but not correct) in a global and discriminative way, it has negligible effect on the downstream generative tasks.", "Figure: Comparison of ground truth captions in COCO and Visual Genome datasets for the same image.Figure: Comparison of generated captions with or without the image-conditioned language modeling pretraining on the VG dataset before finetuning." ], [ "Image-Conditioned Language Modeling", "Various image-conditioned language modeling losses have been proposed in the VLP research, in the form of masked language modeling (MLM) [52], [51], [53], [50], text infilling [10], [57], prefix LM [58], and causal LM [16], [25].", "This is one of the most crucial pre-training losses to activate the cross-modal text generation ability for the model.", "We first examine the causal LM loss, which is exactly the same objective as the image captioning loss.", "Surprisingly, as shown in Table REF (f), although pretraining on VG does not improve previous standard metrics like CIDEr, it helps to reduce object hallucination by a large margin when compared to (a).", "There are two reasons behind this performance lift.", "First, as described in Figure REF , for each image, VG contains more and shorter captions than COCO.", "Each caption in VG only describes one specific aspect of the image, unlike the global descriptions in COCO.", "Therefore, pre-training on VG and then finetuning on COCO is a fine-to-coarse process to first accurately describe different parts of an image and connect these clues together at a higher viewing point.", "Second, due to the nature of the short length of VG captions, the model becomes slightly more cautious.", "On average, after pre-training on VG, there are 0.08 and 0.24 fewer objects generated in each caption on COCO and NoCaps, respectively.", "Figure REF illustrates VG's effects on generated samples; the model is more faithful but more likely to lack some details when it is not confident.", "For CC3M, we observe a leap in all metrics.", "It improves the general image translation ability of the model, which can be seen as a large-scale data augmentation.", "However, it is less effective than VG in terms of reducing object hallucination.", "Furthermore, inspired by the whole word masking [9] in MLM, we propose a simple yet effective visual object MLM to mitigate object hallucination.", "It replaces words appearing in the object categories (sec:evaldatasets) with the [MASK] token and train the model to recover them.", "This objective enhances the model's recognition ability when describing the spatial relationship between objects, which is a common scenario that causes hallucination frequently.", "Finally, we achieve the best 
performance when combining ITC$_\\textit {Late}$ , causal LM, and object MLM on our baselines, which is comparable to state-of-the-art VLP models with regard to object hallucination.", "More cases are included in Appendix ." ], [ "Conclusion", "In this paper, we investigate the object hallucination problem in modern vision-language pre-trained models.", "Particularly, we study this issue from three aspects: 1) how much contemporary VLP models hallucinate, and what the patterns of their hallucination are; 2) the differences between commonly used image encodings, including region-, grid-, and patch-based image features, in terms of object hallucination; and 3) the effects of various VLP objectives and the way they affect object hallucination.", "We further propose a visual object masked language modeling loss to mitigate object hallucination.", "We believe our findings are beneficial for future work on building more reliable and responsible cross-modal text generation systems." ], [ "Implementation Details", "Our experiments are implemented in the PyTorch framework [38].", "For both pre-training and finetuning, we use 8 Nvidia V100 GPUs.", "For the finetuning of the various image encoders, we use a batch size of 512 and train the models with the AdamW optimizer [32] for 10 epochs with a learning rate of $5\\times 10^{-5}$ and a weight decay of $1\\times 10^{-2}$ .", "The learning rate is decayed after each epoch by a factor of 0.85.", "For the pre-training of generative losses like causal LM and object MLM, we keep the same hyper-parameters.", "For the ITC and ITM losses, we increase the batch size to 1024." ], [ "Additional Case Studies", "Figure: More cases of generated captions from different models, where the hallucinated objects are marked in red." ] ]
2210.07688
[ [ "General Classification of Entanglement Using Machine Learning" ], [ "Abstract A classification of multipartite entanglement in qubit systems is introduced for pure and mixed states.", "The classification is based on the robustness of the said entanglement against partial trace operation.", "Then we use current machine learning and deep learning techniques to automatically classify a random state of two, three and four qubits without the need to compute the amount of the different types of entanglement in each run; rather this is done only in the learning process.", "The technique shows high, near perfect, accuracy in the case of pure states.", "As expected, this accuracy drops, more or less, when dealing with mixed states and when increasing the number of parties involved." ], [ "Introduction", "Entanglement [1], [2], [3], as a quantum resource, simplifies and in many cases simply make it possible, the execution of some quantum information tasks.", "It has many applications, in quantum computing [4], quantum communications [5], [6] , quantum cryptography [7], security devices with quantum abilities [8], [9]...", "Many advances are made in the direction of understanding this purely quantum feature [10].", "But in spite of these, only the bipartite entanglement in two dimensional systems, i.e.", "qubits, has a satisfactory description and a proper one is still lacking when moving to higher dimensional subsystems, or when increasing the number of qubits composing the overall system.", "In the latter case, when studying the entanglement, the classification of entangled states is much richer than in bipartite systems.", "Instead of the simple distinction of being separable or entangled for bipartite states, one has to specify for the latter whether they are partially entangled or fully entangled states when dealing with multipartite systems.", "Moreover, the number of combinations of the different components increases and consequently the number of ways of sharing the entanglement among the components exponentially increases.", "Each way of achieving this then might define a different class of multipartite entanglement [11], [12]; although that is not the only way of defining a class of multipartite entanglement.", "As a matter of fact, there exists several classifications that are based on other properties of multipartite entanglement, such as the classification based on local operations and classical communications (LOCC) [11], it's extension to stochastic LOCC (SLOCC) [13], or classification based on persistent homology [14].", "Achieving a well defined classification of multipartite entanglement is of paramount importance, as it is established that quantum information processing tasks based on multipartite entanglement are sensitive to the kind or class of entanglement being used ([7], [9], [15]).", "In this regard, each task actually requires a specific feature out of the multipartite entanglement on which it is based, such as how it behaves when one or more components are out of play (traced out).", "As such, it is a prerequisite to know the class to which a given state belongs in order to assert its relevance to a given quantum task.", "In recent years, several studies focused on combining the disciplines of quantum information processing [4] and machine learning [18] resulting in the new discipline of quantum machine learning [20], [21], [22].", "This combination can be achieved in one of four possible ways.", "One can develop quantum versions of machine learning algorithms and use 
them to perform either classical tasks or quantum ones [23].", "On the other hand, more recently, it was demonstrated that simply using classical machine learning algorithms can be useful in solving quantum tasks, especially classification tasks [24], [25]; this requires using classical algorithms to manipulate quantum data.", "This is the approach adopted in the current work, for which we choose deep learning neural networks [17], [26] and evaluate their performance in the classification of full multipartite entanglement of qubits in both pure and mixed states.", "To achieve this, we start by presenting in Section  the classification of fully entangled states adopted in this paper and how it naturally emerges from the use of the tangle as a measure of multipartite entanglement.", "The resulting classification tree, how it translates to the particular cases of three and four qubits, and a discussion of this classification scheme are also given.", "Section  contains the strategy adopted in using deep learning artificial neural networks to implement the classification of fully entangled multipartite states.", "We present the dataset used for the learning and testing phases of the protocol.", "A presentation of the results in terms of the confusion matrices and verification tables is given at the end of the section, followed by a conclusion with some perspectives on the current work.", "Appendices are added at the end, where the definitions of the entanglement measures used are given, as well as the definitions of the multipartite entangled coherent states and their properties.", "Particular cases of each of the different classes of fully entangled states for three and four qubits are presented in terms of these coherent states.", "Their dependence on the amplitude parameter simplifies the presentation and discussion of the relevant properties of the different classes discussed in the main text." ], [ "Classification of fully-entangled states", "An arbitrary quantum system $A_{1}\\otimes ...\\otimes A_{N}$ , with $N$ subsystems, described by the density matrix $\\rho \\in H = H^{A_1} \\otimes ... \\otimes H^{A_N} $ is called fully-separable if and only if $\\rho $ can be written in the form $\\rho = \\rho _{A_1} \\otimes ... \\otimes \\rho _{A_N} $ , where the $\\rho _{ A_i}$ are the reduced density matrices of the individual subsystems from $H^{A_i} $ .", "On the other hand, if $\\rho $ can be written as a tensor product of $k$ substates, $\\rho = \\rho _{1} \\otimes ... \\otimes \\rho _{k} $ with $k < N$ , then it is called $k$ -separable, i.e. some subsystems are separable from the rest, which remains entangled.", "Outside of these two cases $\\rho $ is called fully-entangled, in which case all parties of the system are entangled.", "In this paper, we focus our attention on fully-entangled multipartite systems, as the discussion of the classification of $k$ -separable states is always amenable to that of fully entangled ones in a lower-dimensional Hilbert space."
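The notion of full separability can be made concrete in a few lines of NumPy: a fully separable density matrix is just a Kronecker product of single-qubit density matrices. The helper below is our own illustrative construction and not part of the classification pipeline itself.

```python
import numpy as np

def random_single_qubit_state(rng):
    """Random mixed single-qubit density matrix: Hermitian, PSD, trace one."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = m @ m.conj().T          # Hermitian and positive semi-definite
    return rho / np.trace(rho)    # normalize the trace

rng = np.random.default_rng(0)
rho_A, rho_B, rho_C = (random_single_qubit_state(rng) for _ in range(3))
# fully separable 3-qubit state: rho = rho_A (x) rho_B (x) rho_C
rho = np.kron(np.kron(rho_A, rho_B), rho_C)
assert np.isclose(np.trace(rho).real, 1.0)
```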
], [ "Entanglement quantifiers and detectors ", "The entanglement in a representative state, of a given class of entanglement (defined later in subsection REF ), is evaluated using the $i$ -tangles, $\\tau _{i}$ .", "In the following we present their definitions in a general form (valid for both pure and mixed states) adapted for the task at hand: $\\tau _{1} &=& \\left(\\prod _{p} \\sum _{\\alpha \\beta }(C^{p}_{\\alpha , \\beta } )^{2} \\right) ^{\\dfrac{1}{m}},\\\\\\tau _{2} &=& \\dfrac{1}{\\binom{N}{2}} \\sum _{i, j } (C_{i j})^{2},\\\\\\tau _{3} &=& \\dfrac{1}{\\binom{N}{3}}\\sum _{i, j,k }\\left( \\dfrac{1}{3} \\sum _{\\alpha , \\beta } [\\,(C^{ij/k}_{\\alpha \\beta })^{2} + (C^{ik/j}_{\\alpha \\beta })^{2} + (C^{jk/i}_{\\alpha \\beta })^{2} ]\\right) ,\\, \\\\\\tau _{4} &=& \\dfrac{1}{\\binom{N}{4}} \\Bigg (\\sum _{i, j,k,l } (\\dfrac{1}{7} \\sum _{\\alpha , \\beta } [\\,(C^{i/jkl}_{\\alpha \\beta })^{2} + (C^{j/ikl}_{\\alpha \\beta })^{2} + (C^{k/jil}_{\\alpha \\beta })^{2} +(C^{ijk/l}_{\\alpha \\beta })^{2} \\nonumber \\\\ & & +(C^{ij/kl}_{\\alpha \\beta })^{2} + (C^{ik/jl}_{\\alpha \\beta })^{2} +(C^{il/kj}_{\\alpha \\beta })^{2}]) \\, \\Bigg ), \\\\&\\vdots &\\nonumber \\\\\\tau _{N} &=& \\dfrac{1}{\\binom{N}{N}} \\left(\\dfrac{1}{m} \\sum _{p } \\sum _{\\alpha ,\\beta } [\\,(C^{p}_{\\alpha \\beta })^{2} ]\\right) ,\\,$ with $m = 2^{N-1} -1$ , $N$ being the number of qubits and $\\binom{N}{i}$ the binomial coefficient, giving all possible $i$ -tuples out of $N$ qubits.", "For the sake of not overloading the main text with additional definitions, we put those of the different concurrences used above as well as the corresponding references in the appendix .", "For instance, $C_{i/j}$ is the Wooter's concurrence (REF ) representing 2-qubit bipartite entanglement, while $C^{p}_{\\alpha \\beta }$ represents the entanglement shared in a bipartition $p$ , with an arbitrary dimension for each partition.", "It is given by the I-concurrence (REF ) when $\\rho _N$ is a pure state; if it is a mixed state $C^{p}_{\\alpha \\beta }$ is based on the lower bound (REF ).", "As such the $i$ -tangle is defined as weighted sum of the squares of the concurrences in each possible bipartition of the $i$ parties.", "The product $\\prod _{p}$ over all possible combinations, is adopted in equation (REF ) as it is more suitable to verify if a given state is fully entangled or not using $\\tau _{1}$ .", "In the case of pure states (of three and four qubits), it is better to use the pure-states-tangle $\\tau _{3}$ (REF ) and $\\tau _{4}$ (REF ) quantifying respectively the 3-way and 4-way entanglement, because of their ability to quantify more precisely genuine entanglement.", "For $\\tau _{N}$ , the $\\sum _{p}$ is carried over all possible partitions $p$ ; this ensures that for fully separable multipartite states, $\\tau _{N} = 0$ .", "Thus allowing $\\tau _{N} > 0$ to indicate the presence of at least some type of entanglement inside the quantum state." ], [ "Different types of entanglement", "Our classification scheme is based on how \"fragile\" the fully entangled states are when we apply a partial trace operation on them.", "To quantify this entanglement we use the $i$ -tangles defined in the previous subsection REF .", "Let $\\rho \\in H = \\underset{\\times N}{\\underbrace{H^{2} \\otimes ... 
\\otimes H^{2}}} $ be a (pure or mixed) state of $N$ parties; then $\\rho $ can be fully-entangled in $N-1$ nonequivalent ways: the $N$ -entanglement, in which case performing a partial trace over any individual subsystem will yield a separable state.", "Such an entanglement is shared between all the $N$ parties.", "The $(N-1)$ -entanglement, which vanishes if we trace out any two individual subsystems.", "This entanglement is shared between $(N-1)$ -tuples.", "$\\vdots $ The 2-entanglement is the most robust against the partial trace, as it does not vanish even if any $(N-2)$ subsystems are traced out.", "In this case the entanglement is shared between all possible pairs.", "More generally, the $(N-i)$ -entanglement vanishes if we trace over any $(i+1)$ qubits, and the entanglement is shared between $(N-i)$ -tuples.", "Henceforth, this entanglement is quantified using the $(N-i)$ -tangle ($\\tau _{N-i}$ ) as defined in the previous subsection.", "It is important to note that a fully entangled state can contain one type of entanglement or more, which imposes defining an order of precedence in this classification.", "This is carried out in the next subsection." ], [ "Classification tree", "A preliminary classification of an arbitrary $N$ -qubit state puts it in one of the categories of fully separable, $k$ -separable or fully entangled states.", "Then, if it is fully entangled, one can check the type of entanglement in play by first checking if it contains the 2-entanglement as defined in the previous subsection.", "If this is the case, it is placed in the $[N]_{2}$ category; if not, one checks whether it contains 3-entanglement, in which case it belongs to the $[N]_{3}$ entanglement class, and so on up to the last step, where only states containing the $N$ -entanglement type as defined in the previous subsection remain, which form the $[N]_{N}$ class of fully entangled states.", "This last class should be the most fragile under partial trace operations.", "This procedure is schematized using the hierarchical tree shown in Figure REF .", "Figure: Classification of entanglement for $N$ -qubit states" ], [ "Three qubits ", "Applying the previous classification of entanglement to the case of three qubits yields two classes of fully entangled states, which are represented by the well-known $W$ and $GHZ$ states.", "This classification is compatible with the LOCC (local operations and classical communication) based classification in [11].", "In the following we will represent the different representative states ($W$ and $GHZ$ states, and others when dealing with systems containing more than three qubits) in the coherent state basis $\\left\\lbrace |{\\alpha }\\rangle \\right\\rbrace $ ; this will have the advantage of allowing us to plot the different relevant $i$ -tangles in terms of the amplitude $\\alpha $ , keeping in mind that we recover the standard ($W$ and $GHZ$ ) states written in the computational basis when $\\alpha $ (or equivalently the mean photon number) is large enough.", "The definitions of the coherent states used are given in Appendix .", "Figure REF shows the different types of entanglement that are present in the two classes represented by the $W$ and $GHZ$ states, as well as the amount of each type of entanglement involved.", "Figure: Entanglement of the representative states of the classes of fully-entangled three-qubit states.", "(a) and (b) are respectively descriptive schemes corresponding to the representative states of the classes $[3]_{2}$ and $[3]_{3}$ .", "(c) and (d) show the amount of each type
of entanglement present in each of the representative states.", "The first class $[3]_{2}$ , schematized in Figure REF , contains the states for which the entanglement is shared between each pair ($i,j$ ), i.e. states with a non-zero two-tangle $\\tau _{2}$ as defined in equation ().", "The representative state for this class is the $W$ -state defined by $\\vert W_{3} \\rangle = \\dfrac{1}{\\sqrt{3}}( \\vert 100 \\rangle + \\vert 010 \\rangle + \\vert 001 \\rangle )$ .", "Figure REF , based on the coherent state version of the $W$ -states (see equation (REF ) in Appendix ), shows that in these states the global entanglement $(\\tau _{1})$ as defined in equation (REF ) is double the value of the bipartite one ($\\tau _{2}$ ).", "This is due to the fact that the former quantifies the entanglement based on a (qubit, two-qubit) bipartition of the system, while the latter takes into account all (qubit, qubit) bipartitions.", "For instance, the global entanglement based on an ($A,BC$ ) bipartition takes into account the entanglement between the two pairs ($A,B$ ) and ($A,C$ ), and no genuine tripartite entanglement is present.", "This type of entanglement holds up even after a measurement is performed on one of the three qubits.", "The second class $[3]_{3}$ is represented in Figure REF , where the entanglement is shared between all parties.", "This class contains all states with zero two-tangle ($\\tau _{2}$ ) and non-zero three-tangle ($\\tau _{3}$ ) and can be represented by the GHZ-state defined as $\\vert GHZ_{3} \\rangle = \\dfrac{1}{\\sqrt{2}}( \\vert 000 \\rangle + \\vert 111 \\rangle )$ .", "Figure REF , based on the coherent state version of the $GHZ$ -states (see equation (REF ) in Appendix ), shows that in this state the global entanglement $(\\tau _{1})$ equals the genuine tripartite entanglement ($\\tau _{3}$ ) and no bipartite entanglement is present.", "This type of entanglement vanishes if a measurement is performed on one of the qubits."
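This different behavior under the partial trace can be checked numerically. The sketch below is an illustration of the above discussion: it builds the three-qubit W and GHZ states, traces out one qubit, and evaluates the Wootters concurrence defined in the appendix on the reduced two-qubit state.

```python
import numpy as np

def trace_out_last_qubit(rho):
    """Partial trace over the last qubit of a 3-qubit state (8x8 -> 4x4)."""
    return np.einsum('ikjk->ij', rho.reshape(4, 2, 4, 2))

def wootters_concurrence(rho2):
    """Wootters concurrence of a two-qubit density matrix."""
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho2.conj() @ yy
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho2 @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

w3 = np.zeros(8); w3[[4, 2, 1]] = 1 / np.sqrt(3)    # |100>, |010>, |001>
ghz3 = np.zeros(8); ghz3[[0, 7]] = 1 / np.sqrt(2)   # |000>, |111>
for name, psi in [("W3", w3), ("GHZ3", ghz3)]:
    c = wootters_concurrence(trace_out_last_qubit(np.outer(psi, psi)))
    print(name, round(c, 3))   # W3 -> 0.667 (class [3]_2), GHZ3 -> 0.0
```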
], [ "Four qubits", "It is well established that as the number of qubits increases, the classes of entanglement increase as well.", "However these classes might differ from one work to another depending on the defining treat adopted for distinguishing the classes.", "Based on our classification, the case of four qubits yields three classes of fully entangled states, two of which are the same for the three qubits case.", "Namely, in the four qubit case, the classes obtained are represented by the $W$ and $GHZ$ versions for four qubits, in addition to another class we will designate as the $X$ -state.", "Figure: Entanglement of the representative states of the classes of fully-entangled four-qubits states.", "(a), (b) and(c) are respectively descriptive schemes corresponding to the representative states of the classes [4] 2 [4]_{2}, [4] 3 [4]_{3} and [4] 4 [4]_{4} .", "(d), (e) and (f) show the amount of each type of entanglement present in each of the representative states.Figure REF represents the $[4]_{2}$ category, that contains all states with non-zero two-tangle ($\\tau _{2}$ ) between any two pairs $(i,j)$ .", "The representative state of this class is the $W$ -state version for four qubits, defined by $ \\vert W_{4} \\rangle = \\dfrac{1}{2}( \\vert 1000 \\rangle + \\vert 0100 \\rangle + \\vert 0010 \\rangle + \\vert 0001 \\rangle ) $ .", "Figure REF based on the coherent state version of this later (see equation (REF ) in Appendix ), shows that in this state only the ($\\tau _{2}$ ) tangle is present and it combines to the global entanglement of four qubits $(\\tau _{1})$ (REF ) the value of which is triple that of the bipartite one ($\\tau _{2}$ ) as it is shared between three pairs $(A,B)$ , $(A,C)$ and $(A,D)$ , so this type of entanglement hold up even when tracing out one or two of the four qubits.", "The second Figure REF represents the $[4]_{3}$ class that contains all states with a zero two-tangle ($\\tau _{2}$ ) between each pair $(i,j)$ and non-zero three-tangle ($\\tau _{3}$ ) between all triplets $(i,j,k)$ .", "The representative state for this class is the state $ \\vert X \\rangle = \\dfrac{1}{\\sqrt{5}} ( \\vert 0000 \\rangle + \\vert 0111 \\rangle + \\vert 1011 \\rangle +\\vert 1101 \\rangle + \\vert 1110 \\rangle ) $ .", "Figure REF based on the coherent state version of this latter (see equation (REF ) in appendix ), shows that for this state only ($\\tau _{3}$ ) is present while the other types of entanglement are absent.", "This results in this type of entanglement surviving when one measures one of the four qubits but vanishes if the measurement is performed on a second qubit.", "The last Figure REF represents the class $[4]_{4}$ in which the entanglement is shared between all components.", "This class contains all states with only a four-tangle ($\\tau _{4}$ ) ( REF ) while the two and three tangle ($\\tau _{2},\\tau _{3}$ ) are zero.", "The representative state for this class is the $GHZ$ version for four qubits, which is defined by $ \\vert GHZ_{4} \\rangle = \\dfrac{1}{\\sqrt{2}}( \\vert 0000 \\rangle + \\vert 1111 \\rangle $ ).", "Figure REF , based on the coherent state version of this later (see equation (REF )), shows that in this state the global entanglement $(\\tau _{1})$ equals the ($\\tau _{4}$ ), so this type of entanglement vanishes if we trace out any of the four qubits." 
], [ "Classification of mixed states ", "To study the types of entanglement that are contained in a given random mixed state, we start by analyzing the mixture of the representative states defined in the previous subsection and plot their $i$ -tangles defined in equations (, and ).", "This is applied for three and four qubits cases below." ], [ "Three qubits", "For three qubits, we write a general mixture of $\\vert GHZ_{3} \\rangle $ and $\\vert W_{3} \\rangle $ states as $\\rho (b) = b \\vert GHZ_{3} \\rangle \\langle GHZ_{3} \\vert + (1-b) \\vert W_{3} \\rangle \\langle W_{3} \\vert ,$ with $b$ being the mixing parameter ranging from 0 to 1.", "Figure: Entanglement with respect to the mixing parameter bb for mixtures of representatives states in three qubits, the solid blue line represents the bipartite concurrence and the dashed red line the lower bound for tripartite entanglement.It is clear that the type of entanglement, present in the mixed state (REF ) of $GHZ$ and $W$ , depends on the mixing parameter $b$ such that, when $b\\le 0.3$ , figure REF shows that the state belongs to $[3]_{2}$ class, because the bipartite concurrence is non-null.", "On the other hand when $b>0.3$ , figure REF shows that the concurrence is absent but the lower bound is still present which places the state in the $[3]_{3}$ class.", "Figure: Types of entanglement in mixtures of the representative states of three qubits with respect to the coherent amplitude α\\alpha for different values of the mixing parameter bb.", "The solid blue line represents the bipartite concurrence and the dashed red line the lower bound for tripartite entanglement.Alternatively, with the coherent states, figure REF shows the types of entanglement present when mixing the representative states based on the coherent state versions of three qubits classes (REF ).", "Focusing on larger values of $\\alpha $ ($\\alpha \\ge 1.5$ ) where the states $\\vert \\alpha \\rangle $ and $\\vert -\\alpha \\rangle $ become orthogonal, it is striking that in the case of a balanced mixture $(b=0.5)$ , figure REF shows that the resulting state belongs to the $[3]_{3}$ class, which means that the $\\vert GHZ_{3} \\rangle $ is the \"strongest\" state.", "However, in the unbalanced cases REF and REF it is seen that, as can be expected, the states with the most weight in the mixture dictates the class in which the mixture falls.", "As such, the mixture presented in REF falls in the $[3]_{2}$ class and the one in REF in the $[3]_{3}$ class." 
], [ "Four qubits", "For four qubits, we define a mixture of $\\vert GHZ_{4} \\rangle $ , $\\vert W_{4} \\rangle $ and $\\vert X \\rangle $ states as $\\rho (b,c) = a\\vert GHZ_{4} \\rangle \\langle GHZ_{4} \\vert + b \\vert W_{4} \\rangle \\langle W_{4} \\vert + c \\vert X \\rangle \\langle X \\vert ,$ with $a,b,c$ being the mixing parameters ranging from 0 to 1 each, such that $a+b+c = 1$ .", "Figure: Types of entanglement in mixed representative states of Four qubits with respect to the coherent amplitude α\\alpha for different values of the mixing parameters aa,bb and cc.", "The solid blue line represents the bipartite concurrence while the dashed red line the lower bound for tripartite entanglement LB3LB3 and the dotted red line the lower bound for four-partite entanglement LB4LB4.Figure REF show the types of entanglement present when mixing the representative states based on the coherent state versions of four qubits classes (REF ).", "Just like for three qubits, here also, in the case of a balanced mixture $(a=b=c=1/3)$ , figure REF shows that the resulting state belongs to the $[4]_{4}$ class, which means that the $\\vert GHZ_{4} \\rangle $ is the \"strongest\" state.", "However, in the unbalanced cases REF , REF and REF it is seen that, as can be expected, the states with the most weight in the mixture dictates the class in which the mixture falls.", "As such, the mixture presented in REF falls in the $[4]_{4}$ class and the one in REF in the $[4]_{3}$ class, while the mixture in REF is in the $[4]_{2}$ class.", "Since a similar behavior is noticed in the three qubits case, it can be generalized to an arbitrary number of qubits.", "Namely that in the unbalanced case, the representative state with most weight dictates the class in which the mixture falls, while the balanced mixture, falls in the $[N]_{N}$ class." 
], [ "Machine Learning, Deep Learning and Artificial Neural Networks", "Machine learning [18] is an application of artificial intelligence (AI) that gives systems the ability to automatically learn to do tasks without being explicitly programmed.", "It focuses on the development of algorithms that can access data to be used to independently learn without any intervention from the programmer.", "There are three types of machine learning methods: supervised, unsupervised and reinforcement learning.", "In the present work, we apply supervised learning methods, in which the class (output) of each state (input) from a sample, is known, with the goal of finding a function that best maps inputs to outputs and use it to classify new states.", "Deep Learning [27] is a subfield of machine learning concerned with algorithms inspired by the structure and function of the human brain called Artificial Neural Networks (ANN).", "Each \"Neuron\", called perceptron in Artificial intelligent (AI), is a mathematical function that collects and classifies information according to a particular structure.", "In multi-layer perceptron (MLP), the perceptrons are arranged in interconnected layers.", "An input layer (sensory unit) collects the input patterns while an output layer (response unit) contains classifications or exit value to which input patterns may be assigned.", "In between, hidden layers (associator unit) adjust input weights until the neural network loss is minimal.", "It is assumed that the hidden layers capture the salient features of the input data that have predictive power with respect to the corresponding output.", "In our case, the features to be extracted are the density matrix elements or their combinations that might be responsible for each entanglement class." 
], [ "Data set", "The procedure to follow in order to prepare the data set, depends on the dimension of the system.", "In the following we outline the procedure depending on the overall number of qubits considered.", "Bipartite states: Draw a random density matrix state $ \\rho $ of 2 qubits.", "Compute the concurrence of $ \\rho $ , if it is equal to 0 we save $\\rho $ in our data set as \"Separable\", else we save it as \"Entangled\".", "Multipartite states: Draw a random density matrix state $ \\rho $ according to the number of qubits (3 or 4 in our applications).", "Compute the different $i$ -tangles and assigne the corresponding class according to the classfication established in REF and REF .", "Namely for 3 and 4 qubits cases we proceed as follows.", "For 3 qubits, if $ \\tau _{1} = 0 $ , we save $\\rho $ in our data set as \"separable\", else if $ \\tau _{2} \\ne 0 $ , we save it in our data set as \"$ [3]_{2} $ \", else we save it as \"$[3]_{3} $ \".", "For 4 qubits, if $ \\tau _{1} = 0 $ , we save $\\rho $ in our data set as \"separable\", else if $ \\tau _{2} \\ne 0 $ , we save it in our data set as \"$[4]_{2} $ \", else if $ \\tau _{3} \\ne 0 $ we save it as \"$[4]_{3}$ \", else we save it as \"$[4]_{4}$ \".", "The final dataset we generated consists of 540 000 density matrices scattered as follows for 2-qubit systems, we have 400 000 density matrices, 200 000 pure and 200 000 mixed , each type contain 100 000 of each class separable/entangled.", "for 3-qubit systems, we have 60 000 density matrices, 30 000 pure and 30 000 mixed, each type contain 10 000 of each class k-separable or separable /$ [3]_{2} $ /$[3]_{3} $ .", "for 4-qubit systems, we have 80 000 density matrices, 40 000 pure and 40 000 mixed, each type contain 10 000 of each class k-separable or separable /$ [4]_{2} $ / $[4]_{3} $ / $[4]_{4} $ .", "The discrepancy in the number of data generated is mainly due to the increase in the number of elements to be generated accompanying the increase of number of q-bits.", "For instance, for a system of 2-qubits only $2^2\\times 2^2=16$ elements are needed while for 4-qubits, one needs $2^4\\times 2^4=256$ elements, all while imposing the density matrices conditions.", "It turns out that some classes are easier (quicker) to generate than others, however to equilibrate things we chose equal numbers of each class generated.", "This data was genarated through computational resources of HPC-MARWAN (hpc.marwan.ma) provided by the National Center for Scientific and Technical Research (CNRST) , Rabat, Morocco" ], [ "Results", "After preparing the data, the first task is training the ANN classifiers to distinguish the different classes.", "To assess the overall performance of our classifier, we remove the redundancy to ensure that the training and testing data are totally different, and we split the data set in two parts: 80$ \\% $ for training and 20$ \\% $ for testing.", "The goal of this splitting is to test our model with new information (testing data), so we can ensure the model’s robustness against over-fitting, among other things.", "One can state that a given model is actually good, when it achieves a high performance in the testing phase.", "To display the performance of our classifiers we use the error matrices, also known as confusion matrices [28], as they make it easy to see how much the system is confusing two classes.", "The confusion matrices are two dimensional tables i.e $(N,N)$ matrices with $N$ being the number of classes, such that one dimension represents the 
true class, and the other dimension represents the predicted class.", "The diagonal elements of these matrices represent the number of samples belonging to the correct class (successful predictions), while the other elements represent the number of samples for which the predicted class is wrong.", "There are different measures, based on the confusion matrix, that allow one to assess the performance of a classification.", "In the following, we will use the $Accuracy$ and the $Precision_i$ of a given class $i$ , defined hereafter.", "Accuracy is the proportion of the total number of correct predictions: $Accuracy=\\frac{\\sum _{i=1}^{N} M_{ii}}{ \\sum _{i=1}^{N} \\sum _{j=1}^{N} M_{ij}}.$", "Precision is a measure of the number of cases in which a specific class $i$ has been correctly predicted: $Precision_{i}=\\frac{M_{ii}}{ \\sum _{k=1}^{N} M_{ki}}.$", "In the above definitions, the $M_{ij}$ are the confusion matrix elements and $N$ is the number of classes.", "Figure: The testing confusion matrix for the classification of pure (a) and mixed (b) bipartite density matrices.", "Figure REF represents the testing confusion matrices of our classifier for pure REF and mixed REF bipartite states, where the rows correspond to the true classes (entangled or separable) and the columns to the predicted classes.", "As stated before, the correctly classified states are accounted for in the diagonal entries and the incorrectly classified ones in the off-diagonal entries, with the number of states and their proportion, in each case, being reported.", "These confusion matrices show that our classifier achieved an overall performance, as estimated using equation (REF ), that is almost perfect for pure states and 97.8$\\%$ for mixed states.", "In the case of pure states, the entangled states are accurately classified with a precision of 99.92$\\%$ and the separable ones with a precision of 100$\\%$ ; for mixed states, the entangled ones are accurately classified with a precision of 98.05$\\%$ and the separable states with a precision of 97.48$\\%$ , as calculated using equation (REF ).", "Figure: The testing confusion matrix for the classification of pure (a) and mixed (b) tripartite density matrices.", "Figure REF represents the testing confusion matrices of our classifier for pure REF and mixed REF tripartite states, where again the columns correspond to the predicted classes ($k$ -separable/separable, $[3]_{2} $ or $[3]_{3}$ ) and the rows to the corresponding true classes.", "These confusion matrices show that our classification achieved an overall performance of 85.7$\\%$ for pure systems and 77.3$\\%$ for mixed systems, where it accurately classified separable/$k$ -separable states with a precision of 86.18$\\%$ for pure systems and 86.94$\\%$ for mixed systems, $[3]_{2}$ states with 89.09$\\%$ for pure systems and 82.57$\\%$ for mixed systems, and $[3]_{3}$ states with a precision of 100$\\%$ for pure systems and 62.27$\\%$ for mixed systems.", "Figure: The testing confusion matrix for the classification of pure (a) and mixed (b) fourpartite density matrices.", "Figure REF represents the testing confusion matrices of our classifier for pure REF and mixed REF fourpartite states, where the classes being distinguished are separable/$k$ -separable vs. $[4]_{4} $ vs. $[4]_{3} $ vs.
$[4]_{2} $ .", "As in the previous cases, the number of states as well as their proportion, in each case, are reported in the entries of the matrix.", "These confusion matrices show that our classifier achieved an overall performance of 87.2$\\%$ for pure states and 65.9$\\%$ for mixed ones.", "The separable/$k$ -separable states are accurately classified with a precision of 86.49$\\%$ for pure states and 73.44$\\%$ for mixed states, the $[4]_{2} $ states with 96.03$\\%$ for pure systems and 85.48$\\%$ for mixed systems, the $[4]_{3} $ states with a precision of 83.93$\\%$ for pure systems and 54.79$\\%$ for mixed states, and the $[4]_{4} $ states with a precision of 83.71$\\%$ for pure states and 50$\\%$ for mixed states.", "To test the usefulness of machine learning in the quantum information area, we also test our classifier on some known separable/$k$ -separable states as well as on the representative states of the different classes, such as the four Bell states for 2 qubits, the $\\vert GHZ_3 \\rangle $ and $\\vert W_3 \\rangle $ states for 3 qubits (shown in figures REF , REF ), and the $\\vert GHZ_4 \\rangle $ , $\\vert W_4 \\rangle $ , $\\vert X_4 \\rangle $ states in the 4-qubit case (see figures REF , REF and REF respectively).", "Mixtures of the said representative states are also included in the test, to assess the classifier's performance on mixed states.", "The results of this test are summarized in Table REF .", "It is seen in Tables REF and REF that our model perfectly distinguishes the different classes of the various well-known states.", "The errors appearing in the tables all stem from elements of the confusion matrix with a high error percentage.", "The results in the tables are in agreement with the results of the confusion matrices (Figures REF , REF and REF ).", "One clue that transpires through Tables REF and REF is that most errors occur in cases where the number and location of the non-zero matrix elements of the tested state are close to those of the well-established states (like the representative states) while the states still belong to different classes.", "Table: Verification of the classification of some well known pure states."
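The two scores defined in equations (REF) and (REF) are straightforward to compute from a confusion matrix; a minimal NumPy sketch with a toy two-class matrix for illustration:

```python
import numpy as np

def accuracy(M):
    """Proportion of correct predictions (rows: true class, cols: predicted)."""
    return np.trace(M) / M.sum()

def precision(M, i):
    """Correct predictions of class i over all predictions of class i."""
    return M[i, i] / M[:, i].sum()

M = np.array([[95, 5],    # toy confusion matrix, not our actual results
              [3, 97]])
print(accuracy(M), precision(M, 0), precision(M, 1))
```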
], [ "Discussion and Conclusion", "In summary, we have introduced a new classification for multipartite fully entangled states.", "The classification is based on the robustness of entanglement against partial trace operations.", "We have used a neural networks model to automatically classify random quantum states , without the need to explicitly calculate the corresponding entanglement measures each time.", "Instead, the NN model learns to distinguish the different classes directly from the density matrix elements.", "We applied the technique to classify states of two, three and four qubits, into separable (or $k$ -separable) and fully entangled states while specifying the corresponding class denoted $[N]_j$ , where the index $j=1,2,\\dots , N-1$ , specifies the actual class given the number of qubits $N$ .", "The proposed algorithm can be integrated in any quantum information processing protocol that rely on a specific type of entanglement or, for that matter, adapt depending on the class of multipartite entanglement detected.", "As a matter of fact, The importance of any classification scheme stems from the usefulness of the resulting types of multipartite entanglement in the targeted applications.", "As an illustration for this, let us consider a given teleportation protocol involving three parties (Alice and Bob being the primary parties and Charlie being an ancilla) that share an entangled state.", "When Alice teleports an arbitrary qubit to Bob, if she decides to break the operation in the case Charlie measures his qubit, then they need to use a $ [ 3 ]_{3} $ class state.", "Otherwise, if Alice wants to continue this teleportation independently of Charlie's qubit, they'll need to use a $ [ 2 ]_{3} $ class state.", "Following alternative lines of thought, a quantum teleportation protocol was introduced in [29] and a comparison of the efficiency of different 4-qubits states to teleport two qubits resulted in the $GHZ$ state (belonging to the $[ 4 ]_{4} $ class) being more efficient than the $W$ state which belongs to the $ [ 2 ]_{4} $ class.", "Notwithstanding the foregoing, [30] establishes that a 4-qubit $GHZ$ state class is not a good candidate for a perfect quantum teleportation using their protocol and the so-called $\\chi $ state [15], which fall in the class $[ 3 ]_4 $ of our classification, provides a better channel.", "On the other hand, for quantum radars [9] making use of fully-entangled particles, whereby one part is kept in the system while the other parties are sent towards a target.", "So when one of the sent particles is traced out (interacting with the environment), the loss of entanglement with the other particles depends on the class of entangled state chosen in the first place.", "Bearing this in mind, it is evident that while states with genuine multipartite entanglement ($N$ -entanglement) will have the higher efficiency, they will be more vulnerable to the interaction with environment.", "So in this case an optimisation procedure is necessary depending on the types of interactions in play.", "Other results, suggest that multipartite entanglement speeds up quantum key distribution in networks [7] reaching different performances with different classes of entanglement.", "The results of this paper can be exploited in other directions as well.", "Among these, work is in progress to implement this technique to detect the entanglement transitions from one class to another in dynamical systems.", "It will be interesting to investigate the generalisation to larger 
dimensions and deal with qudits instead of qubits.", "This is all the more important as the existence of a proper measure of entanglement, even in the bipartite case, is still debated.", "As a final note, it is believed that using quantum machine learning algorithms will be more efficient, especially in the cases where the accuracy of the current technique was relatively low (like the case of fourpartite mixed states here).", "This belief comes from the fact that quantum algorithms should recognize quantum features (like entanglement) more \"intuitively\", thus resulting in more accurate classification schemes [21].", "However, this last technique is still limited by the number of qubits that one can use in current quantum computers." ], [ "Entanglement measures", " Wootters concurrence [31]: $C_{i/j} = \\max \\lbrace 0,\\lambda (1)-\\lambda (2)-\\lambda (3)-\\lambda (4)\\rbrace $ in which the $\\lambda (i)$ are the eigenvalues, in decreasing order, of the Hermitian matrix $\\sqrt{\\sqrt{\\rho }\\tilde{\\rho }\\sqrt{\\rho }}$ , with $\\tilde{\\rho } = (\\sigma _{y} \\otimes \\sigma _{y}) \\rho ^{*} (\\sigma _{y} \\otimes \\sigma _{y})$ .", "The I-concurrence [32]: $C^{p} = C_{i/j} = \\sqrt{2-Tr(\\rho _{i})^{2}-Tr(\\rho _{j})^{2}} $ where $\\rho _{i}$ and $\\rho _{j}$ are respectively the reduced density matrices of the partitions $i$ and $j$ .", "The genuine tangle of pure tripartite systems [12]: $\\tau _{3} = C_{A/BC}^{2} - (C_{A/B}^{2}+C_{A/C}^{2})$ where $C_{A/BC}$ is the I-concurrence defined in (REF ), and $C_{A/B}$ and $C_{A/C}$ are the Wootters concurrences defined in (REF ).", "For a general pure four-qubit state $\\vert \\psi ^{ABCD}\\rangle = \\sum _{i_{1}i_{2}i_{3}i_{4}} a_{i_{1}i_{2}i_{3}i_{4}} \\vert i_{1}i_{2}i_{3}i_{4}\\rangle $ , the genuine tangle that quantifies the 4-way entanglement is defined in [33]: $\\tau _{4} = 4 \\vert [ (F_{0001}-F_{0000}) + (F_{0100}-F_{0011}) ] ^{2} \\vert $ in which $F_{00i_{3}i_{4}} = \\begin{vmatrix}a_{00i_{3}i_{4}} & a_{01i_{3+1}i_{4+1}} \\\\a_{10i_{3}i_{4}} & a_{11i_{3+1}i_{4+1}}\\end{vmatrix}$ .", "For symmetric pure four-qubit states we can extract the three-tangle using the monogamy properties of entanglement [36]: $\\tau _{3} = \\dfrac{1}{3}(\\tau _{1} - \\tau _{4} -3\\tau _{2}),$ with $\\tau _{4},\\tau _{2}$ and $\\tau _{1}$ defined respectively in (REF ), () and (REF ).", "The lower bound for mixed systems [34] mentioned in (REF , , and ) is based on this version of the concurrence: $C_{\\alpha \\beta }^{p} = \\max \\lbrace 0,\\lambda (1)_{\\alpha \\beta }^{p}-\\lambda (2)_{\\alpha \\beta }^{p}-\\lambda (3)_{\\alpha \\beta }^{p}-\\lambda (4)_{\\alpha \\beta }^{p}\\rbrace $ in which the $\\lambda (i)_{\\alpha \\beta }^{p}$ are the square roots of the four nonzero eigenvalues, in decreasing order, of the non-Hermitian matrix $\\sqrt{\\rho \\tilde{\\rho }_{\\alpha \\beta }}$ , with $\\tilde{\\rho }_{\\alpha \\beta } = (L_{\\alpha } \\otimes L_{\\beta }) \\rho ^{*} (L_{\\alpha } \\otimes L_{\\beta })$ , and $L_{\\alpha }$ and $L_{\\beta }$ being the generators of $SO(d)$ ."
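For pure states, the I-concurrence above reduces to a purity computation on one side of the bipartition. The following NumPy sketch is illustrative and assumes the state vector is ordered so that partition i occupies the leading tensor factors:

```python
import numpy as np

def i_concurrence(psi, dim_i):
    """I-concurrence of a pure state for the bipartition i|j,
    with dim_i the Hilbert-space dimension of partition i."""
    m = psi.reshape(dim_i, psi.size // dim_i)
    rho_i = m @ m.conj().T                  # reduced density matrix of i
    purity = np.trace(rho_i @ rho_i).real
    return np.sqrt(2 - 2 * purity)          # Tr rho_i^2 = Tr rho_j^2 (pure)

ghz3 = np.zeros(8); ghz3[[0, 7]] = 1 / np.sqrt(2)
print(i_concurrence(ghz3, 2))               # C_{A/BC} = 1 for |GHZ_3>
```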
], [ "Representative states in the Coherent states picture", "Coherent state [35] play an important role in quantum mechanics, specifically in quantum information resources as quantum computing and quantum circuit , because they are easily realizable which in turn is due to the fact that they are the most classical quantum states of the harmonic oscillator, they are defined by: $\\vert \\pm \\alpha \\rangle = e^{-\\dfrac{\\vert \\alpha \\vert ^{2}}{2}} \\sum _{n=0}^{\\infty }\\dfrac{(\\pm \\alpha )^{n}}{\\sqrt{n!}}", "\\vert n \\rangle $ In this work we use the coherent states basis and write the different perfect states of each classes, then we consider the following basic change: $\\vert \\pm \\rangle = \\dfrac{1}{\\sqrt{2(1\\pm e^{-2\\vert \\alpha \\vert ^{2}})}}(\\vert \\alpha \\rangle \\pm \\vert - \\alpha \\rangle )$ Then: $\\vert GHZ_{3\\alpha } \\rangle = N_{3\\alpha }^{GHZ}( \\vert \\alpha , \\alpha , \\alpha \\rangle + \\vert -\\alpha , -\\alpha , -\\alpha \\rangle )$ where $N_{3\\alpha }^{GHZ}$ is the normalization factor: $N_{3\\alpha }^{GHZ}=\\dfrac{1}{\\sqrt{2(\\exp (-6\\vert \\alpha \\vert ^{2})+1)}}$ $\\vert W_{3\\alpha } \\rangle = N_{3\\alpha }^{W}( \\vert \\alpha , \\alpha , -\\alpha \\rangle + \\vert \\alpha , -\\alpha , \\alpha \\rangle \\rangle + \\vert -\\alpha , \\alpha , \\alpha \\rangle )$ where $N_{3\\alpha }^{W}$ is the normalization factor: $N_{3\\alpha }^{W}= \\dfrac{1}{\\sqrt{3(2\\exp (-4\\vert \\alpha \\vert ^{2})+1)}}$ $\\vert GHZ_{4\\alpha } \\rangle = N_{4\\alpha }^{GHZ} ( \\vert \\alpha , \\alpha , \\alpha , \\alpha \\rangle + \\vert -\\alpha , -\\alpha , -\\alpha , -\\alpha \\rangle $ where $N_{4\\alpha }^{GHZ}$ is the normalization factor: $N_{4\\alpha }^{GHZ}=\\dfrac{1}{\\sqrt{2(\\exp (-8\\vert \\alpha \\vert ^{2})+1)}}$ $\\vert X_{4\\alpha } \\rangle = N_{4\\alpha }^{X}( \\vert \\alpha , \\alpha , \\alpha , \\alpha \\rangle + \\vert -\\alpha , -\\alpha , -\\alpha , \\alpha \\rangle + \\vert -\\alpha , -\\alpha , \\alpha , -\\alpha \\rangle + \\vert -\\alpha , \\alpha , -\\alpha , -\\alpha \\rangle + \\vert \\alpha , -\\alpha , -\\alpha , -\\alpha \\rangle )$ where $N_{4\\alpha }^{X}$ is the normalization factor: $N_{4\\alpha }^{X}=\\dfrac{1}{\\sqrt{5+8\\exp (-6\\vert \\alpha \\vert ^{2})+12\\exp (-4\\vert \\alpha \\vert ^{2}))}}$ $\\vert W_{4\\alpha } \\rangle = N_{4\\alpha }^{W}( \\vert -\\alpha , \\alpha , \\alpha , \\alpha \\rangle + \\vert \\alpha , -\\alpha , \\alpha , \\alpha \\rangle + \\vert \\alpha , \\alpha , -\\alpha , \\alpha \\rangle + \\vert \\alpha , \\alpha , \\alpha , -\\alpha \\rangle )$ where $ N_{4\\alpha }^{W}$ is the normalization factor: $N_{4\\alpha }^{W}=\\dfrac{1}{\\sqrt{4(3\\exp (-4\\vert \\alpha \\vert ^{2})+1)}}$" ] ]
2210.07711
[ [ "Asymmetric Student-Teacher Networks for Industrial Anomaly Detection" ], [ "Abstract Industrial defect detection is commonly addressed with anomaly detection (AD) methods where no or only incomplete data of potentially occurring defects is available.", "This work discovers previously unknown problems of student-teacher approaches for AD and proposes a solution, where two neural networks are trained to produce the same output for the defect-free training examples.", "The core assumption of student-teacher networks is that the distance between the outputs of both networks is larger for anomalies since they are absent in training.", "However, previous methods suffer from the similarity of student and teacher architecture, such that the distance is undesirably small for anomalies.", "For this reason, we propose asymmetric student-teacher networks (AST).", "We train a normalizing flow for density estimation as a teacher and a conventional feed-forward network as a student to trigger large distances for anomalies: The bijectivity of the normalizing flow enforces a divergence of teacher outputs for anomalies compared to normal data.", "Outside the training distribution the student cannot imitate this divergence due to its fundamentally different architecture.", "Our AST network compensates for wrongly estimated likelihoods by a normalizing flow, which was alternatively used for anomaly detection in previous work.", "We show that our method produces state-of-the-art results on the two currently most relevant defect detection datasets MVTec AD and MVTec 3D-AD regarding image-level anomaly detection on RGB and 3D data." ], [ "Introduction", "To ensure product quality and safety standards in industrial manufacturing processes, products are traditionally inspected by humans, which is expensive and unreliable in practice.", "For this reason, image-based methods for automatic inspection have been developed recently using advances in deep learning [9], [18], [29], [38], [39].", "Since there are no or only very few negative examples, erroneous products, available, especially at the beginning of production, and new errors occur repeatedly during the process, traditional supervised algorithms cannot be applied to this task.", "Instead, it is assumed that only data of a normal class of defect-free examples is available in training which is termed as semi-supervised anomaly detection.", "This work and others [9], [22], [36], [38], [39] specialize for industrial anomaly detection.", "This domain differs in contrast to others that normal examples are similar to each other and to defective products.", "In this work, we not only show the effectiveness of our method for common RGB images but also on 3D data and their combination as shown in Figure REF .", "Several approaches try to solve the problem by so-called student-teacher networks [5], [7], [19], [51], [53].", "First, the teacher is trained on a pretext task to learn a semantic embedding.", "In a second step, the student is trained to match the output of the teacher.", "The motivation is that the student can only match the outputs of the teacher on normal data since it is trained only on normal data.", "The distance between the outputs of student and teacher is used as an indicator of an anomaly at test-time.", "It is assumed that this distance is larger for defective examples compared to defect-free examples.", "However, this is not necessarily the case in previous work, since we discovered that both teacher and student are conventional (i. e. 
non-injective) neural networks with similar architecture.", "A student with similar architecture tends towards undesired generalization, such that it extrapolates outputs similar to the teacher's for inputs that are out of the training distribution, which, in turn, gives an undesirably low anomaly score.", "This effect is shown in Figure REF using an explanatory experiment with one-dimensional data: If the same neural network with one hidden layer is used for student and teacher, the outputs are still similar for anomalous data in the yellow area of the upper plot.", "In contrast, the outputs for anomalies diverge if an MLP with 3 hidden layers is used as the student.", "In general, it is not guaranteed that an out-of-distribution input will cause a sufficiently large change in both outputs, due to the missing injectivity of common neural networks.", "In contrast to normalizing flows, conventional networks have no guarantee to provide out-of-distribution outputs for out-of-distribution inputs.", "These problems motivate us to use an asymmetric student-teacher pair (AST): A bijective normalizing flow [34] acts as the teacher while a conventional sequential model acts as the student.", "In this way, the teacher is guaranteed to be sensitive to changes in the input caused by anomalies.", "Furthermore, the usage of different architectures, and thus of different sets of learnable functions, enforces the effect of distant outputs for out-of-distribution samples.", "Figure: Toy example with mini-MLPs: The students were optimized to match the outputs in the grey area.", "While the symmetric student-teacher pair (top) generalizes unintentionally and maps anomalous data very similarly, the distance between student and teacher outputs can be used for anomaly detection in the asymmetric student-teacher pair (bottom).", "As a pretext task for the teacher, we optimize to transform the distribution of image features and/or depth maps to a normal distribution via maximum likelihood training, which is equivalent to density estimation [15].", "This optimization itself is used in previous work [22], [38], [39] for anomaly detection by utilizing the likelihoods as an anomaly score: A low likelihood of being normal should be an indicator of anomalies.", "However, Le Lan and Dinh [28] have shown that even perfect density estimators cannot guarantee anomaly detection.", "For example, just reparameterizing the data would change the likelihoods of samples.", "Furthermore, unstable training leads to misestimated likelihoods.", "We show that our student-teacher distance is a better measure for anomaly detection compared to the likelihoods obtained by the teacher.", "The advantage over using a normalizing flow by itself for anomaly detection is that a possible misestimation of the likelihood can be compensated for: If a low likelihood of being normal is incorrectly assigned to normal data, this output can be predicted by the student, thus still resulting in a small anomaly score.", "If a high likelihood of being normal is incorrectly assigned to anomalous data, this output cannot be predicted by the student, again resulting in a high anomaly score.", "In this way, we combine the benefits of student-teacher networks and density estimation with normalizing flows.", "We further enhance the detection by a positional encoding and by masking the foreground using 3D images.", "Our contributions are summarized as follows: Our method avoids the undesired generalization from teacher to student by having highly asymmetric networks as a student-teacher pair.", "We
improve student-teacher networks by incorporating a bijective normalizing flow as a teacher.", "Our AST outperforms the density estimation capability of the teacher by utilizing student-teacher distances.", "Code is available on GitHub: https://github.com/marco-rudolph/ast." ], [ "Student-Teacher Networks", "Originally, the motivation for having a student network that learns to regress the output of a teacher network was to distill knowledge and save model parameters [23], [31], [48].", "In this case, a student with considerably fewer parameters than the teacher almost matches its performance.", "Some previous work exploits the student-teacher idea for anomaly detection by using the distance between their outputs: The larger the distance, the more likely the sample is anomalous.", "Bergmann et al.", "[7] propose an ensemble of students which are trained to regress the output of a teacher for image patches.", "This teacher is either a distilled version of an ImageNet-pre-trained network or trained via metric learning.", "The anomaly score is composed of the student uncertainty, measured by the variance of the ensemble, and the regression error.", "Wang et al.", "[51] extend the student task by regressing a feature pyramid rather than a single output of a pre-trained network.", "Bergmann and Sattlegger [5] adapt the student-teacher concept to point clouds.", "Local geometric descriptors are learned in a self-supervised manner to train the teacher.", "Xiao et al.", "[53] let teachers learn to classify applied image transformations.", "The anomaly score is a weighted sum of the regression error and the class score entropy of an ensemble of students.", "By contrast, our method requires only one student and the regression error as the only criterion to detect anomalies.", "All of the existing work is based on identical and conventional (non-injective) networks for student and teacher, which causes undesired generalization of the student as explained in Section REF ."
], [ "Density Estimation", "Anomaly detection can be viewed from a statistical perspective: By estimating the density of normal samples, anomalies are identified through a low likelihood.", "The concept of density estimation for anomaly detection can be simply realized by assuming a multivariate normal distribution.", "For example, the Mahalanobis distance of pre-extracted features can be applied as an anomaly score [12], [35] which is equivalent to computing the negative log likelihood of a multivariate Gaussian.", "However, this method is very inflexible to the training distributions, since the assumption of a Gaussian distribution is a strong simplification.", "To this end, many works try to estimate the density more flexibly with a Normalizing Flow (NF) [14], [22], [38], [39], [41], [44].", "Normalizing Flows are a family of generative models that map bijectively by construction [3], [15], [34], [52] as opposed to conventional neural networks.", "This property enables exact density estimation in contrast to other generative models like GANs [21] or VAEs [27].", "Rudolph et al.", "[38] make use of this concept by modeling the density of multi-scale feature vectors obtained by pre-trained networks.", "Subsequently, they extend this to multi-scale feature maps instead of vectors to avoid information loss caused by averaging [39].", "To handle differently sized feature maps so-called cross-convolutions are integrated.", "A similar approach by Gudovskiy et al.", "[22] computes a density on feature maps with a conditional normalizing flow, where likelihoods are estimated on the level of local positions which act as a condition for the NF.", "A common problem of normalizing flows is unstable training, which has a tradeoff on the flexibility of density estimation [4].", "However, even the ground truth density estimation does not provide perfect anomaly detection, since the density strongly depends on the parameterization [28]." 
], [ "Other Approaches", "Generative Models Many approaches try to tackle anomaly detection based on other generative models than normalizing flows as autoencoders [9], [18], [20], [37], [55], [57] or GANs [1], [11], [42].", "This is motivated by the inability of these models to generate anomalous data.", "Usually, the reconstruction error is used for anomaly scoring.", "Since the magnitude of this error depends highly on the size and structure of the anomaly, these methods underperform in the industrial inspection setting.", "The disadvantage of these methods is that the synthetic anomalies cannot imitate many real anomalies.", "Anomaly Synthesization Some work reformulates semi-supervised anomaly detection as a supervised problem by synthetically generating anomalies.", "Either parts of training images [29], [43], [46] or random images [54] are patched into normal images.", "Synthetic masks are created to train a supervised segmentation.", "Traditional Approaches In addition to deep-learning-based approaches, there are also classical approaches for anomaly detection.", "The one-class SVM [45] is a max-margin method optimizing a function that assigns a higher value to high-density regions than to low-density regions.", "Isolation forests [30] are based on decision trees, where a sample is considered anomalous if it can be separated from the rest of the data by a few constraints.", "Local Outlier Factor [10] compares the density of a point with that of its neighbors.", "A comparatively low density of a point identifies anomalies.", "Traditional approaches usually fail in visual anomaly detection due to the high dimensionality and complexity of the data.", "This can be circumvented by combining them with other techniques: For example, the distance to the nearest neighbor, as first proposed by Amer and Goldstein [2], is used as an anomaly score after features are extracted by a pre-trained network [32], [36].", "Alternatively point cloud features [24] or density-based clustering [16], [17] can be used to characterize a points neighborhood and label it accordingly.", "However, the runtime is linearly related to the dataset size." 
], [ "Method", "Our goal is to train two models, a student model $f_s$ and a teacher model $f_t$ , such that the student learns to regress the teacher outputs on defect-free image data only.", "The training process is divided into two phases: First, the teacher model is optimized to transform the training distribution $p_X$ to a normal distribution $\\mathcal {N}(0,\\,I)$ bijectively with a normalizing flow.", "Second, the student is optimized to match the teacher outputs by minimizing the distance between $f_s(x)$ and $f_t(x)$ of training samples $x \\in X$ .", "We apply the distance for anomaly scoring at test-time, which is further described in Section REF .", "We follow [7], [22], [39] and use extracted features obtained by a pre-trained network on ImageNet [13] instead of RGB images as direct input for our models.", "Such networks have been shown to be universal feature extractors whose outputs carry relevant semantics for industrial anomaly detection.", "In addition to RGB data, our approach is easily extendable to multimodal inputs including 3D data.", "If 3D data is available, we concatenate depth maps to these features along the channels.", "Since the feature maps are reduced in height and width compared to the depth map resolution by a factor $d$ , we apply pixel-unshuffling [56] by grouping a depth image patch of $d \\times d$ pixels as one pixel with $d^2$ channels to match the dimensions of the feature maps.", "Any 3D data that may be present is used to extract the foreground.", "This is straightforward and reasonable whenever the background is static or planar, which is the case for almost all real applications.", "Pixels that are in the background are ignored when optimizing the teacher and student by masking the distance and negative log likelihood loss, which are introduced in Sections REF and REF .", "If not 3D data is available, the whole image is considered as foreground.", "Details of the foreground extraction are given in Section REF .", "Similar to [22], we use a sinusoidal positional encoding [50] for the spatial dimensions of the input maps as a condition for the normalizing flow $f_t$ .", "In this way, the occurrence of a feature is related to its position to detect anomalies such as misplaced objects.", "An overview of our pipeline is given in Figure REF .", "Figure: Overview of our pipeline: Teacher and student receive image features and/or depth maps as input which is conditioned by a positional encoding.First, the teacher represented by a normalizing flow is optimized to reduce the negative log likelihood loss that may be masked by a foreground map from 3D.Second, the student is trained to match the teacher outputs by minimizing the (masked) distance between them.Figure: Model architecture of teacher (left side) and student (right side).", "While the teacher is a Real-NVP-based  conditional normalizing flow , the student is a conventional convolutional neural network." 
], [ "Teacher", "Similar to [22], [38], [39], we train a normalizing flow based on Real-NVP [15] to transform the training distribution to a normal distribution $\\mathcal {N}(0,\\,I)$ .", "In contrast to previous work, we do not use the outputs to compute likelihoods and thereby obtain anomaly scores directly.", "Instead, we interpret this training as a pretext task to create targets for our student network.", "The normalizing flow consists of multiple subsequent affine coupling blocks.", "Let the input $x \\in \\mathbb {R}^{w\\times h \\times n_{\\mathrm {feat}}}$ be feature maps with $n_{\\mathrm {feat}}$ features of size $w\\times h$ .", "Within these blocks, the channels of the input $x$ are split evenly along the channels into the parts $x_1$ and $x_2$ after randomly choosing a permutation that remains fixed.", "These parts are each concatenated with a positional encoding $c$ as a static condition.", "Both are used to compute scaling and shift parameters for an affine transformation of the counterpart by having subnetworks $s_i$ and $t_i$ for each part: $\\begin{aligned}y_2 = x_2 \\odot e^{s_1([x_1, c])} + t_1([x_1, c]) \\\\y_1 = x_1 \\odot e^{s_2([x_2, c])} + t_2([x_2, c]),\\end{aligned}$ where $\\odot $ is the element-wise product and $[\\cdot , \\cdot ]$ denotes concatenation.", "The output of one coupling block is the concatenation of $y_1$ and $y_2$ along the channels.", "Note that the number of dimensions of input and output does not change due to invertibility.", "To stabilize training, we apply alpha-clamping of scalar coefficients as in [4] and the gamma-trick as in [39].", "Using the change-of-variable formula with $z$ as our final output $p_X(x) = p_Z(z) *{\\det {\\frac{\\partial z}{\\partial x}}}\\quad ,$ we minimize the negative log likelihood with $p_Z$ as the normal distribution $\\mathcal {N}(0,\\,I)$ by optimizing the mean of $\\begin{aligned}\\mathcal {L}_{ij}^t = -\\log {p_X(x_{ij})} = \\frac{*{z_{ij}}_2^2}{2} - \\log {*{\\det {\\frac{\\partial z_{ij}}{\\partial x_{ij}}}}}\\end{aligned}$ over all (foreground) pixels at pixel position $(i,j)$ .", "Table: Overview of the used datasets.Table: AUROC in % for detecting defects of all categories of MVT2D on image-level grouped into textures and objects.", "We report the mean and standard deviation over 5 runs for our method.", "Best results are in bold.Beside the average value, detailed results of PaDiM  were not provided by the authors.", "The numbers of STFPM*  were obtained by a reimplementation." 
], [ "Student", "As opposed to the teacher, the student is a conventional feed-forward network that does not map injectively or surjectively.", "We propose a simple fully convolutional network with residual blocks which is shown in Figure REF .", "Each residual block consists of two sequences of $3 \\times 3$ convolutional layers, batch normalization [25] and leaky ReLU activations.", "We add convolutions as the first and last layer to increase and decrease the feature dimensions.", "Similarly to the teacher, the student takes image features as input which are concatenated with 3D data if available.", "In addition, the positional encoding $c$ is concatenated.", "The output dimensions match the teacher to enable pixel-wise distance computation.", "We minimize the squared $\\ell _2$ -distance between student outputs $f_s(x)$ and the teacher outputs $f_t(x)$ on training samples $x \\in X$ , given the training set $X$ , at a pixel position $(i, j)$ of the output: $\\mathcal {L}^s_{ij} = *{f_s(x)_{ij} - f_t(x)_{ij}}^2_2 .$ Averaging $\\mathcal {L}^s_{ij}$ over all (foreground) pixels gives us the final loss.", "The distance $\\mathcal {L}^s$ is also used in testing to obtain an anomaly score on image level: Ignoring the anomaly scores of background pixels, we aggregate the pixel distances of one sample by computing either the maximum or the mean over the pixels." ], [ "Datasets", "To demonstrate the benefits of our method on a wide range of industrial inspection scenarios, we evaluate with a diverse set of 25 scenarios in total, including natural objects, industrial components and textures in 2D and 3D.", "Table REF shows an overview of the used benchmark datasets MVTec AD [6] and MVTec 3D-AD [8].", "For both datasets, the training set only contains defect-free data and the test set contains defect-free and defective examples.", "In addition to image-level labels, the datasets also provide pixel-level annotations about defective regions which we use to evaluate the segmentation of defects.", "MVTec AD, which will be called MVT2D in the following, is a high-resolution 2D RGB image dataset containing 10 object and 5 texture categories.", "The total of 73 defect types in the test set appear, for example, in the form of displacements, cracks or scratches in various sizes and shapes.", "The images have a side length of 700 to 1024 pixels.", "MVTec 3D-AD, to which we refer to as MVT3D, is a very recent 3D dataset containing 2D RGB images paired with 3D scans for 10 categories.", "These categories include deformable and non-deformable objects, partially with natural variations (e.g.", "peach and carrot).", "In addition to the defect types in MVT2D there are also defects that are only recognizable from the depth map, such as indentations.", "On the other hand, there are anomalies such as discoloration that can only be perceived from the RGB data.", "The RGB images have a resolution of 400 to 800 pixels per side, paired with rasterized 3D point clouds at the same resolution." ], [ "Image Preprocessing", "Following [12], [39], we use the layer 36 output of EfficientNet-B5 [47] pre-trained on ImageNet [13] as a feature extractor.", "This feature extractor is not trained during training of the student and teacher networks.", "The images are resized to a resolution of $768\\times 768$ pixels resulting in feature maps of size $24\\times 24$ with 304 channels." 
], [ "3D Preprocessing", "We discard the $x$ and $y$ coordinates due to the low informative content and use only the depth component $z$ in centimeters.", "Missing depth values are repeatedly filled by using the average value of valid pixels from an 8-connected neighborhood for 3 iterations.", "We model the background as a 2D plane by interpolating the depth of the 4 corner pixels.", "A pixel is assumed as foreground if its depth is further than $7mm$ distant from the background plane.", "As an input to our models, we first resize the masks to $192\\times 192$ pixels via bilinear downsampling and then perform pixel-unshuffling [56] with $d=8$ as described in Section  to match the feature map resolution.", "In order to detect anomalies at the edge of the object and fill holes of missing values, the foreground mask is dilated using a square structural element of size 8.", "We subtract the mean foreground depth from each depth map and set its background pixels to 0.", "The binary foreground mask $M$ with ones as foreground and zeros as background is downsampled to feature map resolution to mask the loss for student and teacher.", "This is done by a bilinear interpolation $f_\\downarrow $ followed by a binarization where all entries greater than zero are assumed as foreground to mask the loss at position $(i, j)$ : $\\mathcal {L}^{\\mathrm {masked}}_{ij} ={\\left\\lbrace \\begin{array}{ll}\\mathcal {L}_{ij} & \\text{if }\\quad f_\\downarrow (M)_{ij} > 0 \\\\0 & else\\end{array}\\right.", "}.$" ], [ "Teacher", "For the normalizing flow architecture of the teacher, we use 4 coupling blocks which are conditioned on a positional encoding with 32 channels.", "Each pair of internal subnetworks $s_i$ and $t_i$ is designed as one shallow convolutional network $r_i$ with one hidden layer whose output is split into the scale and shift components.", "Inside $r_i$ we use ReLU-Activations and a hidden channel size of 1024 for MVT2D and 64 for MVT3D.", "We choose the alpha-clamping parameter $\\alpha =3$ for MVT2D and $\\alpha =1.9$ for MVT3D.", "The teacher networks are trained for 240 epochs for MVT2D and 72 epochs for MVT3D, respectively, with the Adam optimizer [26], using author-given momentum parameters $\\beta _1=0.9$ and $\\beta _2=0.999$ , a learning rate of $2 \\cdot 10^{-4}$ and a weight decay of $10^{-5}$ ." ], [ "Student", "For the student networks, we use $n_{\\mathrm {st\\_blocks}}=4$ residual convolutional blocks as described in Section REF .", "The Leaky-ReLU-activations use a slope of 0.2 for negative values.", "We choose a hidden channel size of $n_{hidden}=1024$ for the residual block.", "Likewise, we take over the number of epochs and optimizer parameters from the teacher.", "The scores at feature map resolution are aggregated for evaluation at image level by the maximum distance if a foreground mask is available, and the average distance otherwise (RGB only)." 
], [ "Evaluation Metrics", "As common for anomaly detection, we evaluate the performance of our method on image-level by calculating the area under receiver operating characteristics (AUROC).", "The ROC measures the true positive rate dependent on the false positive rate for varying thresholds of the anomaly score.", "Thus, it is independent of the choice of a threshold and invariant to the class balance in the test set.", "For measuring the segmentation of anomalies at pixel-level, we compute the AUROC on pixel level given the ground truth masks in the datasets.", "Table: AUROC in % for detecting defects of all categories of MVT3D on image-level for 3D data, RGB data and the combination of both.", "We report the mean and standard deviation over 5 runs for our method.", "Best results per data domain are in bold.", "Numbers of listed methods followed by a are non-published results obtained by the corresponding authors on request.", "A * indicates that we used a reimplementation.", "The numbers from PatchCore are taken from .Table: Anomaly segmentation results measured by the mean pixel-AUROC over all classes and its standard deviation over 5 runs.", "Despite image-level detection is the focus of this work, our method is able to localize defects for practical purposes with an AUROC of 95% or 97.6%." ], [ "Detection", "Table REF shows the AUROC of our method and previous work for detecting anomalies on the 15 classes of MVT2D as well as the averages for textures, objects and all classes.", "We set a new state-of-the-art performance on the mean detection AUROC over all classes, improving it slightly to 99.2%.", "This is mainly due to the good performance on the more challenging objects, where we outperform previous work by a comparatively large margin of 0.9%, except for PatchCore [36].", "The detection of anomalies on textures, which CS-Flow [39] has already almost solved with a mean AUROC of 99.8%, still works very reliably at 99.3%.", "Especially compared to the two student-teacher approaches [7], [51], a significant improvement of 6% and 3.6% respectively is archieved.", "Moreover, our student-teacher distances show to be a better indicator of anomalies compared to the likelihoods of current state-of-the-art density estimators [22], [39] which, like our teacher, are based on normalizing flows.", "Even though MVT2D has established itself as a standard benchmark in the past, this dataset (especially the textures) is easily solvable for recent methods, and differences are mainly in the sub-percent range, which is only a minor difference in terms of the comparatively small size of the dataset.", "In the following, we focus on the newer, more challenging MVT3D dataset where the normal data shows more variance and anomalies only partly occur in one of the two data modalities, RGB and 3D.", "The results for individual classes of MVT3D grouped by data modality are given in Table REF .", "We are able to outperform all previous methods for all data modalities regarding the average of all classes by a large margin of 5.1% for 3D, 5% for RGB and 7.2% for the combination.", "Facing the individual classes and data domains, we set a new state-of-the-art in 21 of 30 cases.", "Note that this data set is much more challenging when comparing the best results from previous work (99.1% for MVT2D vs. 
86.5% AUROC for MVT3D).", "Nevertheless, we detect defects in 7 out of 10 cases for RGB+3D at an AUROC of at least 93%, which demonstrates the robustness of our method.", "In contrast, the nearest-neighbor approach PatchCore [36], which provides comparable performance to us on MVT2D, struggles with the increased demands of the dataset and is outperformed by 11% on RGB.", "The same applies to the 3D extension [24] using FPFH [40], despite it using a foreground mask as well.", "Figure REF shows qualitative results for the RGB+3D case given both inputs and ground truth annotations.", "More examples can be found in the supplemental material.", "Despite the low resolution, the anomalous regions can still be localized well for practical purposes.", "Table REF reports the pixel-AUROC of our method and previous work.", "For the class peach in the RGB+3D setting, the top of Figure REF compares the distribution of student-teacher distances for anomalous and normal regions.", "The distribution of anomalous samples shows a clear shift towards larger distances.", "At the bottom of Figure REF , the outputs of student and teacher as well as the distance of corresponding pairs, which represents our anomaly score, are visualized by a random orthographic 2D projection.", "Note that visualizations made by techniques such as t-SNE [49] or PCA [33] are not meaningful here, since the teacher outputs (and therefore most of the student outputs) follow an isotropic standard normal distribution.", "Therefore, different random projections barely differ qualitatively.", "Figure: Top: Histogram of our AST distances for normal and anomalous regions of the class peach in MVT3D.", "Bottom: Random orthographic projections of student and teacher outputs grouped into non-defective (left plot) and anomalous (right plot) regions for the class peach. The plotted student-teacher distance representing the anomaly score is clearly higher for anomalous regions since the student is not able to match the teacher outputs, as it was only trained on non-defective regions."
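For completeness, the image-level AUROC evaluation described in the metrics section reduces to a few lines of code; the sketch below uses synthetic anomaly maps and labels purely as placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
score_maps = rng.random((100, 24, 24))        # per-pixel anomaly maps
labels = rng.integers(0, 2, size=100)         # 1 = defective image
score_maps[labels == 1] += 0.3                # simulate larger distances

# Aggregate pixels to one score per image (max, as used with foreground masks)
image_scores = score_maps.reshape(len(score_maps), -1).max(axis=1)
print(f"image-level AUROC: {roc_auc_score(labels, image_scores):.3f}")
```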
], [ "Ablation Studies", "We demonstrate the effectiveness of our contributions and design decisions with several ablation studies.", "Table REF compares the performance of variants of students with the teacher, which can be used as a density estimator itself for anomaly detection by using its likelihoods, given by Eq.", "REF , as anomaly score.", "In comparison, a symmetric student-teacher pair worsens the results by 1 to 2%, excepting the RGB case.", "However, the performance is already improved for RGB and 3D+RGB by creating the asymmetry with a deeper version of the student than the teacher by doubling the number of coupling blocks to 8.", "This effect is further enhanced if the architecture of the NF-teacher is replaced by a conventional feedforward network as we suggest.", "We also vary the depth of our student network and analyzed its relation to performance, model size and inference time in Table REF .", "With an increasing number of residual blocks $n_{\\mathrm {st\\_blocks}}$ , we observe an increasing performance which is almost saturated after 4 blocks.", "Since the remaining potential in detection performance is not in relation to the linearly increasing additional computational effort per block, we suggest to choose 4 blocks to have a good trade-off.", "In Table REF we investigate the impact of the positional encoding and the foreground mask.", "For MVT3D, positional encoding improves the detection by 1.4% of our AST-pair when trained with 3D data as the only input.", "Even though the effect is not present when combining both data modalities, we consider it generally reasonable to use the positional encoding, considering that the integration with just 32 additional channels does not significantly increase the computational effort.", "Foreground extraction in order to mask the loss for training and anomaly score for testing is also highly effective.", "Since the majority of the image area often consists of background, the teacher has to spend a large part of the distribution on the background.", "Masking allows the teacher and student to focus on the essential structures.", "Moreover, noisy background scores are eliminated.", "Table: Comparison of average detection performance in AUROC percentage on MVT3D of teacher and student-teacher in a symmetric and asymmetric setting.", "Our proposed asymmetric student-teacher pair outperforms all baselines in all cases.Table: Tradeoff between performance and computational effort on 3D+RGB data of MVT3D.", "The inference time was measured with a NVIDIA RTX 1080 Ti.Table: Impact of the positional encoding and the foreground mask on the detection performance of student and teacher on MVT3D.", "Numbers are given in AUROC percentage.", "Since masks are obtained from 3D data, there is no mask for RGB." ], [ "Conclusion", "We discovered the generalization problem of previous student teacher pairs for AD and introduced an alternative student-teacher method that prevents this issue by using a highly different architecture for student and teacher.", "We were able to compensate for skewed likelihoods of a normalizing flow-based teacher, which was used directly for detection in previous work, by the additional use of a student.", "Future work could extend the approach to more data domains and improve the localization resolution." 
], [ "Acknowledgements.", "This work was supported by the Federal Ministry of Education and Research (BMBF), Germany under the project LeibnizKILabor (grant no.", "01DD20003), the Center for Digital Innovations (ZDIN) and the Deutsche Forschungsgemeinschaft (DFG) under Germany’s Excellence Strategy within the Cluster of Excellence PhoenixD (EXC 2122)." ] ]
2210.07829
[ [ "Blind Super-Resolution for Remote Sensing Images via Conditional\n Stochastic Normalizing Flows" ], [ "Abstract Remote sensing images (RSIs) in real scenes may be disturbed by multiple factors such as optical blur, undersampling, and additional noise, resulting in complex and diverse degradation models.", "At present, the mainstream SR algorithms only consider a single and fixed degradation (such as bicubic interpolation) and cannot flexibly handle complex degradations in real scenes.", "Therefore, designing a super-resolution (SR) model that can cope with various degradations is gradually attracting the attention of researchers.", "Some studies first estimate the degradation kernels and then perform degradation-adaptive SR but face the problems of estimation error amplification and insufficient high-frequency details in the results.", "Although blind SR algorithms based on generative adversarial networks (GAN) have greatly improved visual quality, they still suffer from pseudo-texture, mode collapse, and poor training stability.", "In this article, we propose a novel blind SR framework based on the stochastic normalizing flow (BlindSRSNF) to address the above problems.", "BlindSRSNF learns the conditional probability distribution over the high-resolution image space given a low-resolution (LR) image by explicitly optimizing the variational bound on the likelihood.", "BlindSRSNF is easy to train and can generate photo-realistic SR results that outperform GAN-based models.", "Besides, we introduce a degradation representation strategy based on contrastive learning to avoid the error amplification problem caused by the explicit degradation estimation.", "Comprehensive experiments show that the proposed algorithm can obtain SR results with excellent visual perception quality on both simulated LR and real-world RSIs." 
], [ "Introduction", "Remote sensing images (RSIs) are vulnerable to various factors such as sensor noise, imaging platform motion, and weather factors, resulting in the degradation of imaging quality.", "Super-resolution (SR) aims to restore clear texture details from low-resolution (LR) images, thereby improving the spatial resolution of RSIs.", "SR is a challenging ill-posed problem since different high-resolution (HR) images can be degraded into the same LR image through different degradation models.", "Figure: LR images generated by different degradations.", "(a) Various blur kernels.", "(b) Various noise levels.The convolutional neural network (CNN)-based SR algorithms have made great progress in objective metrics and visual perception quality [1], [2].", "However, most methods assume that LR images are obtained by an ideal and fixed degradation model, such as bicubic interpolation, and thus cannot obtain satisfactory results in dealing with RSIs in real-world scenes.", "This is because the degradation models of RSIs are usually complex, and the LR images may suffer from various blur kernels and varying levels of noise, as shown in Fig.", "REF .", "When the degradation kernel used in the training phase mismatch the real LR images in the testing phase, the model will fail to generate satisfactory SR results.", "Generally, the degradation process of RSIs can be modeled as follows [3]: $\\mathbf {X}_{\\mathrm {LR}} = (\\mathbf {X}_{\\mathrm {HR}}\\otimes k)\\downarrow _r + n,$ where $\\mathbf {X}_{\\mathrm {LR}}$ and $\\mathbf {X}_{\\mathrm {HR}}$ denote the LR and HR image, respectively; the blur kernel $k$ and the additive noise $n$ are two key factors of the degradation process; $\\downarrow _r$ denotes a downsampling operation with a scale factor of $r$ .", "In this case, it is impractical to train a model for each degradation kernel, which will cost huge model training and storage resources.", "Therefore, it is necessary to build a degradation-adaptive SR algorithm for RSIs in real-world scenes.", "Recently, some studies have focused on handling multiple degradations using a single model in real-world SR tasks, which can be categorized into blind SR and non-blind SR.", "The non-blind SR models [4], [5], [6] rely on the real degradation information and the LR image together as input in the testing phase.", "However, real degradation models in practical applications are often complex, unknown and difficult to obtain.", "To fill this gap, blind SR models do not require degradations as priors.", "Previous blind SR methods [7], [8] usually decompose the task into two consecutive steps, degradation kernel estimation [9], [10] and non-blind SR.", "However, the SR step may amplify the estimation error of degraded kernels, leading to poor SR results.", "Gu et al.", "[11] propose to iteratively estimate degenerate kernels and perform SR, which alleviates the difficulty of degradation kernel estimation at the cost of high computational complexity.", "The aforementioned blind SR methods are optimized using pixel-level losses.", "Although these methods can obtain satisfactory PSNR, they tend to generate blurred SR results with poor visual quality.", "These pixel-level loss-optimized methods learn a deterministic mapping from LR images to HR images, ignoring the ill-posed nature of SR tasks [12].", "Recent studies [13], [14], [15] have introduced adversarial learning into the blind SR task.", "They learn the probability distribution of HR space and obtain much clearer SR results.", "Nonetheless, 
these methods based on generative adversarial networks (GANs) still deterministically map LR images to SR results, and do not inherently alleviate the ill-posed problem.", "Furthermore, due to the convergence difficulties of GAN-based models, the generated results may suffer from mode collapse.", "The normalizing flow [16] is another important class of generative models besides GANs.", "Lugmayr et al.", "[12] proposed to use a flow-based generative model to explicitly learn the probability distribution over the HR image space, from which multiple SR results can be sampled.", "However, the flow-based model requires the network to be invertible, which greatly increases the difficulty of architecture design and limits the expressiveness of the network.", "Lately, researchers [17], [18], [19] employed a Gaussian diffusion process to generalize the normalizing flow to the stochastic case, named the stochastic normalizing flow (SNF).", "The SNF does not require the network to be invertible and has demonstrated superior performance in many applications such as 3D point cloud generation [20] and speech synthesis [21].", "In this article, we propose a novel blind SR model based on the SNF (BlindSRSNF) to address the severe ill-posedness of blind SR tasks, the lack of texture details in SR results, and the instability of GAN-based model training.", "BlindSRSNF utilizes a Markov process to transform the distribution from the HR image space to a Gaussian latent space.", "Then, we take the LR image encoding and degradation information as conditions to construct the conditional transition probability of the reverse Markov process.", "This reverse process maps the samples in the latent space to the HR image space by transferring hidden variables step by step, which can be regarded as the sample generation process.", "Furthermore, to address the problem of inaccurate estimation of degradation kernels, we introduce a degradation representation strategy [22] based on unsupervised contrastive learning.", "Therefore, the proposed model is robust to various degradation models and can achieve satisfactory blind SR results in real scenarios.", "To the best of our knowledge, BlindSRSNF is the first SNF-based blind SR method, providing a new idea for improving the quality of RSIs in real scenes.", "The main contributions of this article are as follows: We propose a novel SNF-based blind SR framework for RSIs named BlindSRSNF, which can effectively stabilize model training by explicitly optimizing the variational bound of the negative log-likelihood (NLL).", "BlindSRSNF adopts contrastive learning to learn the degradation information of LR images in an unsupervised manner, avoiding the amplification of the degradation kernel estimation error and the time-consuming iterative degradation correction.", "Compared with the state-of-the-art (SOTA) GAN-based blind SR algorithms, our proposed BlindSRSNF can generate SR results with better visual perception quality.", "Our results have more natural, accurate texture details with lower spatial distortion.", "Recently, with the rapid development of deep learning (DL), single image super-resolution (SISR) has made great progress.", "Many DL-based methods have been proposed to learn the mapping from the LR space to the HR space in an end-to-end manner.", "Dong et al.", "[23] proposed the first DL-based end-to-end SR network, which greatly improved the performance of SISR.", "Kim et al.", "[24] proposed a residual network, which can efficiently learn the high-frequency information of images and reduce the learning burden of the network.", "Ledig et al.",
"[25] proposed a generative adversarial network, it can generate more realistic details and textures, which greatly improved visual perceptual quality of SR. Zhang et al.", "[26] proposed a residual dense network (RDN), which uses the dense and skip structure and further improves the SR performance.", "Zhang et al.proposed a deep residual channel attention network (RCAN), which consists of several residual groups with long skip connections.", "The RCAN embeds a channel attention mechanism, which can adaptively rescale channel-wise features by considering interdependencies among channels.", "Li et al.", "[27] introduced a feedback mechanism into the SR task and proposed a lightweight super-resolution feedback network.", "Dai et al.", "[28] proposed a second-order attention network for more powerful feature expression and feature correlation learning.", "Soh et al.", "[29] proposed a novel training scheme based on meta-transfer learning to exploit both external information from a large-scale dataset and internal information from a specific image.", "Kong et al.", "[30] found that different image regions have different restoration difficulties and proposed a new solution pipeline, which makes different regions be processed by networks with different capacities and reduces a lot of computational consumption.", "Ma et al.", "[31] employed a wavelet transform and recursive res-net to achieve single image super-resolution for remote sensing images.", "Arun et al.", "[32] explored an optimal spectral super-resolution framework for remote sensing images, which can ensure the spectral and spatial fidelity of reconstructions with mini-mum number of samples.", "Lei et al.", "[33] presented a coupled adversarial training mode for remote sensing image super-resolution, in which the discriminator is specifically designed to take in a pair of images rather than a single input to make better discrimination of the inputs.", "Zhang et al.", "[34] proposed a multiscale attention network to characterize the structural features of remote sensing images at multiple levels for remote sensing image super-resolution.", "Huan et al.", "[35] proposed an improved multi-scale residual network, which combined hierarchical residual-like connections and dilation convolution to solve the problem of forgetting and underutilizing network features.", "Wu et al.", "[36] used the saliency-guided feedback GAN to discriminate different regions with varying levels of saliency and reconstruct the high-resolution remote sensing images.", "Lei et al.", "[37] proposed a hybrid-scale self-similarity exploitation network (HSENet) to learn single- and cross-scale internal recurrence of patterns in remote sensing images.", "Li et al.", "[38] designed an attention-based GAN model that applied both local attention and global attention for the super-resolution task of remote sensing images." 
], [ "Degradation-Adaptive SR", "Shocher et al.", "[39] trained a small CNN suitable for the test images to adapt to the specific degraded kernel in the test stage, but the inference efficiency of the model is low.", "Zhang et al.", "[4] took the fuzzy kernel information and LR images as the input of the network, and proposed a SR network for multiple degradations (SRMD).", "After that, Zhang et al.", "[5] proposed an end-to-end unfolding SR network (USRNet) to deal with different degradations by alternately solving data and prior problems.", "Xu et al.", "[6], based on dynamic convolution, further improved the performance of SR under a variety of degraded kernels.", "However, these non-blind SR reconstruction algorithms [4], [5], [6] rely on the degraded information provided by the degraded kernel estimation methods [10], [9] to perform SR tasks.", "In addition, the SR network will magnify the estimation errors of degraded kernel, causing obvious artifacts in the results [11].", "To solve this problem, Gu et al.", "[11] proposed an iterative kernel correction (IKC) method, which used the SR reconstruction results of the previous iteration to correct the degraded kernel estimation, so as to improve the SR quality of the next iteration.", "Although IKC can effectively alleviate the artifact problem caused by degraded kernel estimation error, multiple iterations in the test stage are very time-consuming." ], [ "Flow-Based Models", "Normalizing Flow (NF)[40], [16], [41] is a kind of generative model, which has a wide range of applications in nuclear physics, materials science and other fields[19], [42], [43].", "In recent years, in the field of computer vision, it is gradually attracting the attention of researchers[12], [44], [45].", "To solve above problem, some researchers introduce normalizing flow (NF) [16] into SR reconstruction tasks.", "Lugmayr et al.", "[12] proposed a SR with normalizing flow (SRFlow), which used NF to model the conditional distribution in HR space.", "In this way, SRFlow can directly optimize the negative log-likelihood function to obtain realistic SR reconstruction results.", "Compared with the GAN-based methods, the NF-based models can effectively avoid the problems of mode collapse [46], [47] and unstable training [48].", "However, the NF model requires each layer of the neural network to be reversible so as to establish bijection from HR space to hidden space.", "The conventional convolution layer is difficult to meet the requirements of reversibility, and the specially designed reversibility layer will greatly limit the expression ability of the network [19].", "Subsequently, Ho et al.", "[17] proposed a denoising diffusion probabilistic model, which mapped the samples to the hidden space using the diffusion process, and constructed an inverse denoising process to generate samples.", "This kind of generation models [17], [18] that realizes probability distribution transformation based on stochastic process is also called stochastic normalizing flow (SNF)." 
], [ "Normalizing Flow", "The NF is a series of invertible functions parameterized by a neural network to realize the probability distribution transformation from a prior space $\\mathcal {Z}$ to the target space $\\mathcal {X}$ .", "We can obtain the exact probability distribution over the target space by using the change of variable theorem.", "Thus, the parameters of the neural network can be optimized by maximizing the likelihood of samples or minimizing the Kullback-Leibler (KL) divergence of the generated distribution from the target distribution.", "Let $F_{ZX}$ denote the invertible mapping from the space $\\mathcal {Z}$ to $\\mathcal {X}$ .", "To simplify the construction of the invertible function, $F_{ZX}$ is usually decomposed into $T$ invertible layers $F_0,\\cdots , F_T$ , $\\mathbf {y}_{t+1} = F_t(\\mathbf {y}_t),\\ \\ \\mathbf {y}_t = F_{t}^{-1}(\\mathbf {y}_{t+1}),$ where $\\mathbf {y}_t, 0\\le t\\le T$ are intermediate states.", "Let $\\mathbf {z}$ and $\\mathbf {x}$ denote samples in $\\mathcal {Z}$ and $\\mathcal {X}$ spaces, respectively, then the NF model can be expressed as: $\\mathbf {z}=\\mathbf {y}_{0} \\underset{F_0^{-1}}{\\stackrel{F_{0}}{\\rightleftarrows }} \\mathbf {y}_{1} \\rightleftarrows \\cdots \\rightleftarrows \\mathbf {y}_{T-1} \\underset{F_{T-1}^{-1}}{\\stackrel{F_{T-1}}{\\rightleftarrows }} \\mathbf {y}_{T}=\\mathbf {x}.$ Assuming that each transformation function is differentiable, let $|\\operatorname{det} \\mathbf {J}_{t}(\\mathbf {y})|$ denote its Jacobian determinant.", "Using the change of variable theorem, we can calculate the probability density of $\\mathbf {y}_{t+1}$ , $p_{t+1}\\left(\\mathbf {y}_{t+1}\\right)=p_{t+1}\\left(F_{t}\\left(\\mathbf {y}_{t}\\right)\\right)=p_{t}\\left(\\mathbf {y}_{t}\\right)\\left|\\operatorname{det} \\mathbf {J}_{t}\\left(\\mathbf {y}_{t}\\right)\\right|^{-1}.$ Then, the negative log-likelihood (NLL) of the training samples can be obtained, $\\mathcal {L}(\\Theta ;\\mathbf {x}) = -\\log p_{\\mathbf {x}}(\\mathbf {x}) = -\\log p_{\\mathbf {z}}(\\mathbf {z}) - \\sum _{t=0}^{T-1} |\\operatorname{det} \\mathbf {J}_{t} (\\mathbf {y}_{t})|.$ Thus, the network parameters can be optimized by minimizing the NLL by computing the Jacobian determinant of each transformation function.", "In practice, each transformation function corresponds to a layer in the neural network.", "For efficient training and inference, the inverse and Jacobian determinants of each layer must be efficiently computed.", "However, designing a layer that satisfies the above characteristics is challenging because common neural network structures are not invertible.", "The literature [40], [45] proposes an affine coupling layer, which provides a simple and effective way for constructing an invertible neural network layer.", "However, due to the limitation of invertibility, the expressive ability of the network is severely constrained [19]." 
], [ "Stochastic Normalizing Flow", "The SNF is a generalization of the NF in the random case, which realizes the transformation of the hidden state through random sampling, rather than a fixed invertible function.", "Unlike the NF, which implements probability distribution transformation through $T$ certain functions, the SNF constructs a Markov chain $\\lbrace \\mathbf {X}_t\\rbrace _{t=0}^T$ of length $T+1$ to achieve probability distribution transformation.", "From the perspective of stochastic processes, the NF can also be regarded as a special case where all transformation probabilities in the Markov chain are Dirac measure (probability mass is concentrated at a single point).", "Let $\\mathbf {X}_T$ denote standard Gaussian noise.", "The SNF starts from $\\mathbf {X}_{T}$ and passes through $T$ -step probability transformation to obtain the sample $\\mathbf {X}_0$ in the target space.", "This process of sample generation is also called reverse process.", "Conversely, starting from $\\mathbf {X}_0$ , the process of corrupting the sample to get random Gaussian noise is called forward process.", "To simplify the model, we assume the forward process is a diffusion process [17], that is, Gaussian noise is gradually added to the sample according to the variance sequence $\\beta _1\\cdots , \\beta _T$ .", "Specifically, let $\\mathbf {I}$ denote the identity matrix, and the joint probability and transition probability of the forward process are respectively defined as: $q(\\mathbf {X}_{1:T}|\\mathbf {X}_0) & := \\prod _{t=1}^{T}q(\\mathbf {X}_{t}|\\mathbf {X}_{t-1}), \\\\q(\\mathbf {X}_t|\\mathbf {X}_{t-1}) & :=\\mathcal {N}(\\mathbf {X}_t;\\sqrt{1-\\beta _t}\\mathbf {X}_{t-1},\\beta _t\\mathbf {I}).$ It is worth noting that, let $\\alpha _t := 1-\\beta _t$ , $\\bar{\\alpha }_t := \\prod _{s=1}^t\\alpha _s$ , the $t$ -step transition probability of the forward process can be calculated exactly: $q\\left(\\mathbf {X}_{t} \\mid \\mathbf {X}_{0}\\right)=\\mathcal {N}\\left(\\mathbf {X}_{t} ; \\sqrt{\\bar{\\alpha }_{t}} \\mathbf {X}_{0},\\left(1-\\bar{\\alpha }_{t}\\right) \\mathbf {I}\\right).$ The variances $\\lbrace \\beta _t\\rbrace _{t=1}^T$ of the forward process are hyperparameters.", "When the variances $\\lbrace \\beta _t\\rbrace _{t=1}^T$ are small, the transition probability of the reverse process $p_\\theta (\\mathbf {X}_{t-1} | \\mathbf {X}_t)$ also obeys Gaussian distribution [49], where $\\theta $ denotes the model parameters.", "Therefore, the reverse process is a Markov chain whose transition probability follows a Gaussian distribution, and the initial distribution $p(\\mathbf {X}_T) = \\mathcal {N}(\\mathbf {X}_T; \\mathbf {0}, \\mathbf {I})$ .", "Specifically, $p_\\theta (\\mathbf {X}_{T-1:0} |\\mathbf {X}_T) & := \\prod _{t=1}^{T}p_\\theta (\\mathbf {X}_{t-1}|\\mathbf {X}_t), \\\\p_{\\theta }\\left(\\mathbf {X}_{t-1} \\mid \\mathbf {X}_{t}\\right) & :=\\mathcal {N}\\left(\\mathbf {X}_{t-1} ; \\mathbf {\\mu }_{\\theta }\\left(\\mathbf {X}_{t}, t\\right), \\mathbf {\\Sigma }_{\\theta }\\left(\\mathbf {X}_{t}, t\\right)\\right).$ The training of the generative model $p_\\theta (\\mathbf {X}_0)$ can be achieved by optimizing the variational upper bound of the NLL, $\\begin{aligned}& \\mathbf {E}[-\\log p_\\theta (\\mathbf {X}_0)] \\\\\\le & \\mathbf {E}_q\\left[-\\log \\frac{p_\\theta (\\mathbf {X}_{T:0})}{q(\\mathbf {X}_{1:T}|\\mathbf {X}_0)}\\right] \\\\= & \\mathbf {E}_q\\left[-\\log p(\\mathbf {X}_T)-\\sum _{t\\ge 1}\\log \\frac{p_\\theta (\\mathbf {X}_{t-1}|\\mathbf 
{X}_t)}{q(\\mathbf {X}_t|\\mathbf {X}_{t-1})}\\right]=: L.\\end{aligned}$ We adopt the parameterization method proposed by Ho et al.", "[17], where $\\Sigma _\\theta (\\mathbf {X}_t, t) = \\sigma _t^2\\mathbf {I}$ , and the mean term has the following form: $\\mu _{\\theta }\\left(\\mathbf {X}_{t}, t\\right)=\\frac{1}{\\sqrt{\\alpha _{t}}}\\left(\\mathbf {X}_{t}-\\frac{\\beta _{t}}{\\sqrt{1-\\bar{\\alpha }_{t}}} \\epsilon _{\\theta }\\left(\\mathbf {X}_{t}, t\\right)\\right),$ where $\\mathbf {\\epsilon }_{\\theta }$ is a function to predict the noise $\\mathbf {\\epsilon }$ , accepting $\\mathbf {X}_t$ and $t$ as input.", "The optimization objective $L$ in (REF ) can be simplified to the following form: $\\min _{\\theta } \\mathcal {L}_{\\mathrm {simple}}(\\theta )=\\mathbf {E}_{\\mathbf {X}_{0}, \\mathbf {\\epsilon }, t}\\left\\Vert \\mathbf {\\epsilon }-\\mathbf {\\epsilon }_{\\theta }\\left(\\sqrt{\\bar{\\alpha }_{t}} \\mathbf {X}_{0}+\\sqrt{1-\\bar{\\alpha }_{t}} \\mathbf {\\epsilon }, t\\right)\\right\\Vert ^{2},$ where $\\mathbf {\\epsilon }\\sim \\mathcal {N}(\\mathbf {0}, \\mathbf {I})$ is random noise.", "Figure: Architecture of the Denoising Network" ], [ "Proposed Method", "In this section, we first give an overview of our proposed conditional SNF for blind SR (Sec.", "REF ), and then introduce the specific parameterization method (Sec.", "REF ).", "Then, the LR encoder and degradation representation model are detailed in Sec.", "REF and Sec.", "REF .", "Finally, we summarize the training and inference process of the proposed model in Sec.", "REF and Sec.", "REF ." ], [ "Conditional SNF for Blind SR", "The goal of the SR task is to generate an HR image $\\mathbf {X}_{\\mathrm {HR}}$ given an LR image $\\mathbf {X}_{\\mathrm {LR}}$ .", "Since the mapping from LR images to HR images is one-to-many, we propose to parameterize the conditional distribution $p(\\mathbf {X}_{\\mathrm {HR}} | \\mathbf {X}_{\\mathrm {LR}})$ based on the SNF model.", "In order to adapt the SR model to multiple degradation kernels, the generative model should comprehensively consider the content information of LR images and the degradation representation.", "Therefore, we construct a conditional SNF model, in which the transition probability of the reverse process is modeled as the conditional probability given the LR encoding and degradation representation vector.", "The reverse process starts from Gaussian noise, and after a $T$ -step probability transformation, a sample $\\mathbf {X}_0$ that obeys the conditional distribution $p(\\mathbf {X}_{\\mathrm {HR}} | \\mathbf {X}_{\\mathrm {LR}})$ can be obtained, thereby realizing blind SR of RSIs.", "The overall framework is shown in Fig.", "REF .", "The LR encoder $f_\\theta $ aims to extract the content information of the LR images as the condition of the SNF to ensure the consistency of the SR result with the LR image.", "The degradation representation model $g_\\theta $ aims to extract a vector $\\mathbf {v}$ from an LR image that can effectively characterize its degradation information.", "Let $\\theta $ be the set of all learnable parameters of the proposed BlindSRSNF.", "We take the LR encoding $\\mathbf {u} = f_\\theta (\\mathbf {X}_{\\mathrm {LR}})$ and the degradation representation vector $\\mathbf {v} = g_\\theta (\\mathbf {X}_{\\mathrm {LR}})$ as conditions and define the transition probability of the conditional SNF.", "The reverse process is defined as: $\\begin{aligned}p_\\theta (\\mathbf {X}_{T-1:0}|\\mathbf {X}_T) & := \\prod _{t=1}^{T}p_\\theta
(\\mathbf {X}_{t-1}|\\mathbf {X}_t, \\mathbf {u}, \\mathbf {v}), \\\\p_{\\theta }\\left(\\mathbf {X}_{t-1} \\mid \\mathbf {X}_{t}, \\mathbf {u}, \\mathbf {v}\\right) & :=\\mathcal {N}\\left(\\mathbf {X}_{t-1} ; \\mathbf {\\mu }_{\\theta }\\left(\\mathbf {X}_{t}, t, \\mathbf {u}, \\mathbf {v} \\right), \\mathbf {\\sigma }_t^2 \\mathbf {I}\\right).\\end{aligned}$ The initial distribution of the reverse process is $p(\\mathbf {X}_T) := \\mathcal {N}(\\mathbf {X}_T; \\mathbf {0}, \\mathbf {I})$ .", "It can be seen from (REF ) that the reverse process is determined by the noise prediction function $\\mathbf {\\epsilon }_\\theta $ .", "Therefore, we only need to model $\\mathbf {\\epsilon }_\\theta $ to obtain a generative model.", "According to (REF ), we have $\\mathbf {X}_t=\\sqrt{\\bar{\\alpha }_t}\\mathbf {X}_{\\mathrm {HR}}+\\sqrt{1-\\bar{\\alpha }_t}\\mathbf {\\epsilon },\\ \\mathbf {\\epsilon }\\sim \\mathcal {N}(\\mathbf {0}, \\mathbf {I}), \\ 1<t\\le T.$ If a model can accurately predict the noise $\\mathbf {\\epsilon }$ from $\\mathbf {X}_t$ , it is equivalent to predicting the “clean” image $\\mathbf {X}_{\\mathrm {HR}}$ before adding noise.", "Therefore, in our BlindSRSNF, we construct a denoising model $h_{\\theta }$ to parameterize the reverse process of the SNF.", "The goal of the denoising model is to remove the noise added by the forward diffusion process based on the LR encoding and degradation representation vector and obtain a “clean” predicted image $\\mathbf {X}_0$ .", "The inputs of the denoising model include $\\mathbf {X}_t$ , the time step of the diffusion process $t$ , the feature encoding of the LR image $\\mathbf {u}$ and the degradation representation vector $\\mathbf {v}$ , and the output is denoted as $\\mathbf {\\hat{X}}_0$ .", "In this study, we use the $L_1$ norm instead of the $L_2$ norm as the loss function of the denoising model for better convergence performance.", "Therefore, the loss function of the stochastic normalizing flow model is defined as: $\\begin{aligned}\\mathcal {L}_{\\mathrm {SNF}} = & \\mathbf {E}_{\\mathbf {X}_{\\mathrm {LR}}, \\mathbf {X}_{\\mathrm {HR}}} \\mathbf {E}_{\\mathbf {\\epsilon }, t}\\Vert \\mathbf {X}_{\\mathrm {HR}} - \\\\ & h_\\theta (\\sqrt{\\bar{\\alpha }_t}\\mathbf {X}_{\\mathrm {HR}}+\\sqrt{1 -\\bar{\\alpha }_t}\\mathbf {\\epsilon }, t, \\underbrace{f_\\theta (\\mathbf {X}_{\\mathrm {LR}})}_{\\mathbf {u}}, \\underbrace{g_\\theta (\\mathbf {X}_{\\mathrm {LR}})}_{\\mathbf {v}})\\Vert _1.\\end{aligned}$" ], [ "Architecture of the Denoising Network", "We take U-Net [50] as the backbone of the denoising model $h_{\\theta }$ , as shown in Fig.", "REF .", "To reduce the computational complexity of the reverse process, we use the pixel folding operation to reduce the spatial resolution of images.", "Pixel folding is the inverse operation of pixel shuffle, as shown in Fig.", "REF .", "The pixel folding operation rearranges the pixels and reduces the spatial resolution to $1/2$ of the original image without losing information.", "Figure: Illustration of the pixel folding and pixel shuffle operations.", "We use a convolutional layer to extract shallow features from the pixel-folded image, which are then concatenated with the LR feature encoding $\\mathbf {u}$ and fed into the U-shaped network.", "The compression path of the U-Net contains four feature extraction groups, each of which consists of two residual blocks and a downsampling operation.", "The downsampling operation is implemented by a $3\\times 3$ convolutional layer
with a stride of 2.", "The expansion path of the U-Net contains four feature extraction groups, each preceded by an upsampling layer to increase the resolution of the feature maps.", "The upsampling operation is implemented by nearest neighbor interpolation followed by a $3\\times 3$ convolutional layer.", "Then, we concatenate the feature maps of the same size in the expansion path and the compression path and feed them into two residual blocks.", "Each residual block accepts not only feature maps as input, but also a time encoding $t_e$ of step $t$ and a degradation representation vector $\\mathbf {v}$ as conditions.", "The time encoding $t_e$ is defined as: $\\begin{aligned}\\phi (t) & = [\\sin (\\omega _1 t), \\cos (\\omega _1 t), \\sin (\\omega _2 t), \\cos (\\omega _2 t),\\cdots ], \\\\t_e & = \\mathtt {MLP_3}(\\phi (t)),\\end{aligned}$ where $\\lbrace \\omega _1,\\omega _2,\\cdots \\rbrace $ are frequency parameters, and $\\mathtt {MLP_3}(\\cdot )$ represents a three-layer MLP whose hidden layer uses Swish [51] as the activation function.", "The residual block contains two convolution layers, each preceded by a group normalization [52] to stabilize the training, using Swish as the activation function.", "To enable the residual block to perceive the degradation information of the LR image and dynamically adjust the kernel weights, we design a degradation-aware convolution (DAConv) layer to replace the second ordinary convolutional layer in the residual block.", "The DAConv layer takes the degradation representation vector as an additional input; it first uses a three-layer MLP to calculate the kernel weights and then performs the convolution with this kernel.", "Let $\\mathtt {DAConv}(\\cdot , \\mathbf {v})$ denote the DAConv layer; $\\mathtt {GroupNorm}(\\cdot )$ denotes the group normalization; $F_{\\mathrm {in}}$ and $F_{\\mathrm {out}}$ denote the input and output of the residual block, respectively.", "Then, the operation of the residual block can be summarized as: $\\begin{aligned}F_1 & = \\mathtt {Conv}_{3\\times 3}(\\mathtt {Swish}(\\mathtt {GroupNorm}(F_{\\mathrm {in}}))), \\\\F_2 & = F_1 + t_e, \\\\F_{\\mathrm {out}} & = \\mathtt {DAConv}_{3\\times 3}(\\mathtt {Swish}(\\mathtt {GroupNorm}(F_{2})), \\mathbf {v}) + F_{\\mathrm {in}}.\\end{aligned}$" ], [ "LR Encoder", "We encode the input LR image as a condition of the transition probability in the reverse process.", "The output of the LR encoder (i.e., the LR encoding) is denoted as $\\mathbf {u}$ .", "The proposed framework can use any differentiable network structure to implement the LR feature encoding.", "In this study, we adopt the residual-in-residual dense block (RRDB) [53] as the basic feature extraction unit, which exhibits superior performance on previous SR tasks.", "The LR encoder, as shown in Fig.", "REF , contains multiple sequentially connected RRDBs and a global residual connection.", "The RRDB combines long and short residual connections and includes multiple densely connected blocks (dense blocks).", "The architecture of the RRDB is shown in Fig.", "REF .", "The operation of the LR encoder is denoted as $f_\\theta $ , then $\\mathbf {u} = f_\\theta (\\mathbf {X}_{\\mathrm {LR}})$ .", "In order for $f_\\theta $ to effectively capture the content information in LR images, we add an upsampling layer after the LR encoder to calculate the distance between the upsampled result and the HR image.", "Let $f_\\theta ^\\uparrow (\\mathbf {X}_{\\mathrm {LR}})$ denote the upsampled result; we calculate the $L_1$ loss between $f_\\theta
^\\uparrow (\\mathbf {X}_{\\mathrm {LR}})$ and the HR image $\\mathbf {X}_{\\mathrm {HR}}$ as a supervision of the LR encoder: $\\mathcal {L}_{\\mathrm {encoder}} = \\mathbf {E}_{\\mathbf {X}_{\\mathrm {LR}},\\mathbf {X}_{\\mathrm {HR}}}\\Vert f_\\theta ^\\uparrow (\\mathbf {X}_{\\mathrm {LR}}) - \\mathbf {X}_{\\mathrm {HR}}\\Vert _1.$" ], [ "Degradation Representation Model", "Degradation representation learning aims to extract discriminative degradation information from LR images.", "Due to the lack of degradation labels of LR images, we adopt an unsupervised contrastive learning strategy [54].", "Inspired by the literature [22], we assume that image patches from the same LR image have the same degradation kernel, while degradation kernels of different LR images are different.", "First, we randomly crop two image patches from an input LR image, and label one of them as a query sample and the other as a positive sample.", "We then crop two image patches from another LR image and label them as negative samples.", "Second, we employ a six-layer CNN, named the degradation representation encoder, to encode the query sample, positive sample, and negative samples, and add a global average pooling (GAP) operation to obtain the corresponding degradation representation vectors, denoted as $\\mathbf {v}, \\mathbf {v}^+, \\mathbf {v}_1^-, \\mathbf {v}_2^-$ , respectively.", "Third, as suggested by [55], the degradation representation vectors of these image patches are sent into a three-layer MLP, and the outputs are denoted as $\\mathbf {w}, \\mathbf {w}^+ , \\mathbf {w}_1^-, \\mathbf {w}_2^-$ .", "To make the degradation representation discriminative, $\\mathbf {w}$ should be as similar as possible to $\\mathbf {w}^+$ and dissimilar to $\\mathbf {w}_i^-$ .", "These similarities can be measured by (REF ): $\\mathcal {L}_{\\mathbf {w}}=-\\log \\frac{\\exp \\left(\\mathbf {w} \\cdot \\mathbf {w}^{+} / \\tau \\right)}{\\exp (\\mathbf {w} \\cdot \\mathbf {w}_{1}^{-} / \\tau ) + \\exp (\\mathbf {w} \\cdot \\mathbf {w}_{2}^{-} / \\tau )},$ where $\\tau $ is the temperature hyperparameter, and “ $\\cdot $  ” represents the inner product of the vectors.", "Studies [55], [56], [57] have shown that constructing a large-scale set of negative samples can improve the performance of contrastive learning.", "Thus, during the training phase, we use a queue to store the degradation representation vectors of a large number of training samples.", "In each iteration, the degradation representation vectors of the current batch are enqueued.", "If the queue is full, the earliest enqueued vectors are dequeued.", "Therefore, the loss function of the degradation representation model is defined as: $\\mathcal {L}_{\\mathrm {degrad}}=-\\mathbf {E}_{\\mathbf {w},\\mathbf {w}^{+}}\\log \\frac{\\exp \\left(\\mathbf {w} \\cdot \\mathbf {w}^{+} / \\tau \\right)}{\\sum _{i=1}^{N_{q}}\\exp (\\mathbf {w} \\cdot \\mathbf {w}_{q}^{(i)} / \\tau )},$ where $N_{q}$ denotes the queue capacity, and $\\mathbf {w}_{q}^{(i)}$ denotes the $i$ th negative sample in the queue.", "The degradation representation encoder consists of six consecutive $3\\times 3$ convolutional layers, and the numbers of output channels of these layers are 64, 64, 128, 128, 256, and 256, respectively.", "The third and fifth layers use a stride of 2 to reduce the spatial resolution of the feature maps, and the remaining layers use a stride of 1.", "We add a batch normalization layer after each convolutional layer, and employ LeakyReLU as the activation function.",
"Finally, the degradation representation vectors are obtained by the GAP operation.", "[t] Training [1] Total steps $T$ of the diffusion process, training samples $\\lbrace (\\mathbf {X}_{\\mathrm {LR}}, \\mathbf {X}_{\\mathrm {LR}}^+, \\mathbf {X}_{\\mathrm {HR}})\\rbrace $ .", "Initialize $h_\\theta $ , $f_\\theta $ , and $g_\\theta $ .", "Randomly sample $(\\mathbf {X}_{\\mathrm {LR}}, \\mathbf {X}_{\\mathrm {LR}}^+, \\mathbf {X}_{\\mathrm {HR}})$ Compute the LR encoding $\\mathbf {u}=f_\\theta (\\mathbf {X}_{\\mathrm {LR}})$ and loss $\\mathcal {L}_{ \\mathrm {encoder}}$ by (REF ).", "Compute the degradation representation vectors $\\mathbf {v}=g_\\theta (\\mathbf {X}_{\\mathrm {LR}}), \\mathbf {v}^+=g_\\theta (\\mathbf {X}_{\\mathrm {LR }^+})$ , and loss $\\mathcal {L}_{\\mathrm {degrad}}$ by (REF ).", "Randomly sample $\\mathbf {\\epsilon }\\sim \\mathcal {N}(\\mathbf {0}, \\mathbf {I})$ , $t\\sim U(\\lbrace 1, \\cdots , T\\rbrace )$ .", "Substitute $(\\mathbf {X}_{\\mathrm {LR}}, \\mathbf {X}_{\\mathrm {HR}}, \\mathbf {u}, \\mathbf {v}, t, \\mathbf {\\epsilon })$ into (REF ), and then calculate the loss of the denoising model $\\mathcal {L}_{\\mathrm {SNF}}$ .", "Perform a gradient descent step: $\\nabla _\\theta \\mathcal {L} = \\nabla _\\theta \\big (\\mathcal {L}_{\\mathrm {SNF}} + \\mathcal {L}_{\\mathrm {encoder}} + \\mathcal {L}_{\\mathrm {degrad}}\\big )$ converged" ], [ "Training", "The training dataset consists of triples $(\\mathbf {X}_{\\mathrm {LR}}, \\mathbf {X}_{\\mathrm {LR}}^+, \\mathbf {X}_{\\mathrm {HR }})$ , where $(\\mathbf {X}_{\\mathrm {LR}}, \\mathbf {X}_{\\mathrm {HR}})$ are paired LR-HR image patches; $\\mathbf {X}_{\\mathrm {LR}}^+$ and $\\mathbf {X}_{\\mathrm {LR}}$ are patches cropped from the same LR image with the same degradation kernel and size for degradation representation learning.", "Algorithm REF summarizes the training process.", "First, we randomly initialize the denoising model $h_\\theta $ , the LR encoder $f_\\theta $ and the degradation representation model $g_\\theta $ .", "Second, we sample a batch of training images, and calculate the LR encoding $\\mathbf {u}=f_\\theta (\\mathbf {X}_{\\mathrm {LR}})$ and the loss of LR encoder $\\mathcal {L}_{\\mathrm {encoder}}$ .", "Then, we calculate the degradation representation vectors $\\mathbf {v} = g_{\\theta }(\\mathbf {X}_{\\mathrm {LR}})$ , $\\mathbf {v}^+ = g_{\\theta }(\\mathbf {X}_{\\mathrm {LR}^+})$ , and the loss of the degradation representation model $\\mathcal {L}_{\\mathrm {degrad}}$ .", "Finally, we randomly sample the time step $t\\in \\lbrace 1, \\cdots , T\\rbrace $ and Gaussian noise $\\mathbf {\\epsilon }$ , and calculate the loss of the denoising model $\\mathcal {L}_{\\mathrm {SNF}}$ .", "We jointly optimize the denoising model, the LR encoder, and the degradation representation model, so all learnable parameters can be updated by computing the gradient of (REF ).", "$\\mathcal {L} = \\mathcal {L}_{\\mathrm {SNF}} + \\mathcal {L}_{\\mathrm {encoder}} + \\mathcal {L}_{\\mathrm {degrad}}.$" ], [ "Inference", "[t] Inference [1] LR image $\\mathbf {X}_{\\mathrm {LR}}$ , sampling interval $\\gamma $ SR result $\\mathbf {X}_{\\mathrm {SR}}$ Sampling at intervals of $\\gamma $ from $\\lbrace T, T-1, \\cdots , 0\\rbrace $ to obtain $\\lbrace \\tau _M=T, \\tau _{M-1}, \\cdots ,\\tau _1, \\tau _0=0\\rbrace $ Compute the LR encoding $\\mathbf {u}=f_\\theta (\\mathbf {X}_{\\mathrm {LR}})$ and the degradation representation vector $\\mathbf {v}=g_\\theta (\\mathbf {X} _{\\mathrm {LR}})$ 
Randomly sample $\\mathbf {X}_T \\sim \\mathcal {N}(\\mathbf {0}, \\mathbf {I})$ $i=M, M-1, \\cdots , 2$ Sample $\\mathbf {z} \\sim \\mathcal {N}(\\mathbf {0}, \\mathbf {I})$ Predict the “clean” image $\\hat{\\mathbf {X}}_0 = h_\\theta (\\mathbf {X}_{\\tau _i}, \\tau _i, \\mathbf {u}, \\mathbf {v})$ Compute the predicted noise, $\\hat{\\mathbf {\\epsilon }} = {\\left(\\mathbf {X}_{\\tau _i}-\\sqrt{\\bar{\\alpha }_{\\tau _i}}\\,\\hat{\\mathbf {X}}_0\\right)}/{\\left(\\sqrt{1-\\bar{\\alpha }_{\\tau _i}}\\right)}$ Perform an update step by (REF ), $\\mathbf {X}_{\\tau _{i-1}}\\leftarrow \\sqrt{\\bar{\\alpha }_{\\tau _{i-1}}} \\hat{\\mathbf {X}}_0+ \\sqrt{1-\\bar{\\alpha }_{\\tau _{i-1}}-\\sigma _{\\tau _{i}}^{2}} \\hat{\\mathbf {\\epsilon }}+\\sigma _{\\tau _{i}}\\mathbf {z}$ Compute $\\hat{\\mathbf {X}}_{0}=h_\\theta (\\mathbf {X}_{\\tau _1}, \\tau _1, \\mathbf {u}, \\mathbf {v})=:\\mathbf {X}_{\\mathrm {SR}}$ by (REF ) In the inference phase, we first calculate the LR encoding $\\mathbf {u}$ and the degradation representation vector $\\mathbf {v}$ of the input LR image.", "Then, with $\\mathbf {u}$ and $\\mathbf {v}$ as conditions, we start from a Gaussian noise $\\mathbf {X}_T$ , iteratively predict and remove the noise added by the forward process, and finally generate the SR result.", "Here, the LR encoding and the degradation representation vector only need to be computed once, while the denoising model needs to be executed repeatedly according to the number of iterations.", "Therefore, when the number of steps $T$ of the reverse process is large, the inference of the model will take a long time.", "To improve the inference efficiency, we adopt the accelerated sampling strategy proposed in [18], which selects a subset of the reverse process time series to reduce the number of iterations.", "This strategy can greatly improve the inference speed without compromising the quality of the SR results.", "Specifically, the reverse process starts from the $T$ th step, sampling every $\\gamma $ steps (assuming $\\gamma $ divides $T$ ) to obtain a new reverse process sampling path: $T \\rightarrow (T-\\gamma ) \\rightarrow (T-2\\gamma ) \\rightarrow \\cdots \\rightarrow 0.$ Let $\\lbrace \\tau _0, \\tau _1, \\cdots , \\tau _M\\rbrace $ denote the new reverse process sampling path, where $\\tau _0 = 0$ and $M=T/\\gamma $ .", "In this study, we simulate a trajectory of the reverse process on the simplified sampling path to generate SR results.", "When $i > 1$ , samples can be generated iteratively: $\\begin{aligned}& \\mathbf {X}_{\\tau _{i-1}}= \\sqrt{\\bar{\\alpha }_{\\tau _{i-1}}}\\underbrace{h_\\theta (\\mathbf {X}_{\\tau _i}, \\tau _i, \\mathbf {u}, \\mathbf {v})}_{\\text{``predicted clean image $\\mathbf {X}_0$''}} \\\\& + \\sqrt{1-\\bar{\\alpha }_{\\tau _{i-1}}-\\sigma _{\\tau _{i}}^{2}}\\underbrace{\\frac{\\mathbf {X}_{\\tau _i}-\\sqrt{\\bar{\\alpha }_{\\tau _i}}\\,h_\\theta (\\mathbf {X}_{\\tau _i}, \\tau _i, \\mathbf {u}, \\mathbf {v})}{\\sqrt{1-\\bar{\\alpha }_{\\tau _i}}}}_{\\text{``predicted noise $\\mathbf {\\epsilon }$''}} \\\\& + \\underbrace{\\sigma _{\\tau _{i}} \\mathbf {z}_{\\tau _{i}}}_{\\text{``random noise''}},\\end{aligned}$ where $\\mathbf {z}_{\\tau _i}\\sim \\mathcal {N}(\\mathbf {0}, \\mathbf {I})$ , $\\sigma _{\\tau _{i}}=\\eta \\sqrt{\\frac{1-\\bar{\\alpha }_{\\tau _{i-1}}}{1-\\bar{\\alpha }_{\\tau _{i}}} \\beta _{\\tau _{i}}}$ , and $\\eta $ is the temperature coefficient that controls the variance.", "When $i = 1$ , the last step of the reverse process is
reached.", "$\\mathbf {X}_{0} = h_\\theta (\\mathbf {X}_{\\tau _1}, \\tau _1, \\mathbf {u}, \\mathbf {v}).$ If $\\eta = 0$ , the sample generation process is no longer stochastic, and the model degenerates into a deterministic mapping from $\\mathbf {X}_T$ to $\\mathbf {X}_0$ .", "The diversity of samples will increase as $\\eta \\ (< 1)$ increases.", "Algorithm REF demonstrates the inference phase." ], [ "Experimental Results", "In this section, we first introduce the datasets and evaluation metrics.", "Then, the model settings and training details are presented.", "Finally, we demonstrate the effectiveness of the proposed method using various degradation kernels and real-world RSIs." ], [ "Datasets and Metrics", "We use RSIs provided by the GeoEye-1 satellite and GoogleEarth to verify the effectiveness of our proposal.", "The GeoEye-1 dataset contains 130 multispectral images with a resolution of ${0.41}{m}$ and a size of $512\\times 512$ , of which 115 are used for training, and the remaining 15 are used for testing.", "The GoogleEarth dataset contains 239 optical RSIs with a resolution of ${1}{m}$ and a size of $512\\times 512$ , of which 224 are used for training and the remaining 15 are used for testing.", "In our experiments, the training set contains a total of 339 RSIs from the above two sources.", "The proposed BlindSRSNF aims to generate more realistic SR results for real-world RSIs.", "Besides the commonly used peak signal-to-noise ratio (PSNR), we also introduce two objective metrics, FID [58] and LPIPS [59], to better measure the visual quality of SR results.", "FID can better evaluate the quality and diversity of images generated by the model, and LPIPS can obtain evaluations that are almost consistent with human visual perception [60]." ], [ "Implementation Details", "In the SNF, the number of diffusion steps $T$ is set to 1000, and the noise variance is reduced from $\\beta _1 = 2\\times 10^{-2}$ to $\\beta _{T} = 1\\times 10^{-4}$ by using the setting in [61].", "The basic channel number $c$ of the convolutional layers in the denoising network is set to 64.", "The sampling interval $\\gamma $ is set to 50, and the temperature coefficient $\\eta $ is fixed to 1.", "The setting of the sampling interval $\\gamma $ is discussed in Sec.", "REF .", "The number of RRDBs in the LR encoder is set to 23, and the number of channels in each RRDB is set to 64.", "We follow [22] to generate LR images using two degradation models.", "The first model degrades the images using isotropic Gaussian blur kernels without adding noise.", "The size of the kernels is fixed at $21\\times 21$ , and the kernel width obeys a uniform distribution, $\\sigma \\sim U(0.2, 4.0)$ .", "The second model degrades the images using anisotropic Gaussian blur kernels and then adds additive white Gaussian noise.", "The size of the blur kernel is fixed at $21\\times 21$ .", "The covariance matrix of the blur kernel is determined by two random eigenvalues $\\lambda _1, \\lambda _2 \\sim U(0.2, 4)$ and a random rotation angle $\\theta \\sim U(0, \\pi )$ .", "The noise level varies randomly from 0 to 25.", "In the training phase, we randomly crop the degraded LR images into patches with a size of $64\\times 64$ as input, and randomly flip them vertically or horizontally and randomly rotate them by $90^\\circ $ for data augmentation.", "The proposed method is implemented based on the PyTorch framework and trained on an NVIDIA GeForce RTX 3090 GPU."
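For concreteness, the second (anisotropic) degradation model described above can be generated as in the following minimal NumPy sketch. This is our own illustration, not the authors' released code; in particular, interpreting the eigenvalues $\lambda_1, \lambda_2$ as the variances of the covariance matrix is an assumption of the sketch.

```python
import numpy as np

def anisotropic_gaussian_kernel(size=21, lam1=3.6, lam2=2.4, theta=0.0):
    """Normalized anisotropic Gaussian blur kernel (hypothetical helper).

    Covariance: Sigma = R(theta) @ diag(lam1, lam2) @ R(theta).T, with
    lam1, lam2 ~ U(0.2, 4) and theta ~ U(0, pi) in the setup above.
    """
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    cov = rot @ np.diag([lam1, lam2]) @ rot.T
    inv = np.linalg.inv(cov)
    ax = np.arange(size) - (size - 1) / 2.0        # centered grid, e.g. -10..10
    xx, yy = np.meshgrid(ax, ax)
    coords = np.stack([xx, yy], axis=-1)           # (size, size, 2)
    expo = np.einsum("...i,ij,...j->...", coords, inv, coords)
    kernel = np.exp(-0.5 * expo)
    return kernel / kernel.sum()                   # normalized to sum to 1

# The isotropic model is the special case lam1 == lam2 (rotation irrelevant).
k = anisotropic_gaussian_kernel(21, 2.4, 2.4, 0.0)
```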
], [ "Comparison on Noise-free Degradations with Isotropy Gaussian Kernels", "We take bicubic interpolation as the baseline and compare our proposed BlindSRSNF with six SOTA SR algorithms, including SAN [28], ESRGAN [53], ZSSR [39], DASR [22], Real-ESRGAN [15], and BSRGAN [14].", "SAN is an excellent SR method optimized by pixel-level loss, and ESRGAN is a perceptual loss-optimized algorithm with good visual results.", "The above two algorithms assume that the degradation model is bicubic.", "ZSSR is an unsupervised blind SR algorithm that trains a small CNN to the specific test image during the inference phase.", "DASR is a blind SR algorithm based on contrastive learning, optimized by pixel-level loss.", "Real-ESRGAN and BSRGAN are SOTA GAN-based blind SR algorithms with more visual pleasing results than methods optimized by pixel-level losses.", "For a fair comparison, all competitive algorithms are retrained on the same training datasets by using their public codes.", "Table REF shows the comparison of objective metrics with kernel widths of 0, $1.2$ , $2.4$ and $3.6$ on the GeoEye-1 and GoogleEarth datasets, respectively.", "When $\\sigma =0$ , the degradation model is reduced to the ordinary bicubic degradation.", "Since SAN and ESRGAN are trained based on the bicubic assumption, better results can be obtained when $\\sigma =0$ .", "However, SAN and ESRGAN are difficult to deal with other complex degradations.", "ZSSR and DASR are optimized by pixel-level losses, which is equivalent to directly optimizing PSNR, so higher PSNR values can be obtained.", "However, they perform poorly on the perceptual metrics, FID and LPIPS.", "Real-ESRGAN and BSRGAN achieve better FID and LPIPS scores but significantly lower PSNR than DASR.", "The proposed BlindSRSNF is optimized by the negative log-likelihood, which significantly improves the FID and LPIPS metrics, achieving the best performance on all degradations.", "Furthermore, the PSNR of BlindSRSNF significantly outperforms GAN-based methods by up to ${4.24}{dB}$ .", "The results show that out proposal can generate higher-quality SR results while significantly reducing the spatial distortions.", "Fig.", "REF shows the visual comparison on noise-free isotropic kernels.", "It can be seen that SAN and ESRGAN trained based on the bicubic assumption are difficult to remove blur effectively.", "DASR can effectively deal with various degradations, but due to its optimization goal, the generated results are too smooth and lack texture details.", "Although the results of Real-ESRGAN are visually realistic, their content deviates greatly from real HR image, and even changes the category of ground objects.", "BlindSRSNF obtains SR results with the best visual perceptual quality.", "The great visual quality are attributed to the stochastic normalized flow, which allow the model can explicitly learn the probability distribution in HR space through maximum likelihood estimation.", "Besides, since BlindSRSNF contains an LR encoder, the texture details of the SR results are in good agreement with real HR images.", "The integrated degradation representation model and the conditional probability transition paradigm also make it possible to adapt the BlindSRSNF to multiple degradations." 
], [ "Comparison on General Degradations with Anisotropy Gaussian Kernels", "We adopt the more general degradations with anisotropic Gaussian kernels to compare the proposed BlindSRSNF with six SR algorithms, including SAN[28], ESRGAN[53], ZSSR[39], DASR[22], Real-ESRGAN[15], BSRGAN[14].", "Table REF shows the comparison of objective metrics for general degradations on the GeoEye-1 dataset.", "Since the blur kernels of these degradation models are more general and additional noise is added, the performance of all the competitive methods on this more difficult task degrades.", "The DASR optimized by pixel-level loss achieves the highest PSNR, but performs poorly on visual perceptual metrics, FID and LPIPS.", "The GAN-based models significantly outperform DASR on visual perception metrics.", "The proposed BlindSRSNF outperforms all competitive algorithms in terms of FID and LPIPS, with an improvement of up to $48\\%$ in FID and up to $46\\%$ in LPIPS compared to the second-ranked Real-ESRGAN.", "Although the PSNR of BlindSRSNF is lower than DASR, BlindSRSNF achieves the best PSNR among all perception-optimized blind SR algorithms, and achieves a ${2.31}{}$ improvement over the second-ranked BSRGAN.", "Fig.", "REF shows the visual comparison for anisotropic degradation kernels.", "We randomly select four sets of blur kernel parameters and with three noise levels for demonstration.", "It can be seen that noise will seriously affect the SR results.", "Methods trained on bicubic degradations can barely recover texture details.", "The results of DASR are too smooth to clearly distinguish between houses and roads in residential areas.", "Although the results of Real-ESRGAN look clear, the texture details of real HR images are severely falsified, such as changing the layout of houses, the location of roads, and the shape of rivers.", "Besides, Real-ESRGAN changes the spectral information of the images in the GoogleEarth dataset, manifesting as a significant color shift.", "The textures generated by BSRGAN and BlindSRSNF are more realistic, but BSRGAN is not as sharp as BlindSRSNF.", "Combining the results in in Fig.", "REF and the LPIPS scores in Table REF , it can be found that the LPIPS can obtain objective evaluation consistent with the quality of human visual perception.", "In conclusion, the proposed BlindSRSNF can generate blind SR results with the best visual perceptual quality." ], [ "Comparison on Real-World RSIs", "In this section, we perform SR on real-world RSIs (rather than simulated LR images) to verify the performance of the proposed method in real scenarios.", "LR images are from the GoogleEarth dataset, and the comparison algorithms include ZSSR [39], Real-ESRGAN [15], BSRGAN [14] and the proposed BlindSRSNF.", "The visual results are shown in Fig.", "REF .", "Due to the lack of ground truths, we adopt a blind/referenceless image spatial quality evaluator (BRISQUE) [62] to measure the quality of the SR results.", "It can be observed that the texture details of ZSSR are relatively blurred, and the two GAN-based algorithms have tampered with the contents such as forest and farmland areas.", "Although Real-ESRGAN obtain the second BRISQUE after our proposed BlindSRSNF, it suffers from severe spectral shift and produces very unrealistic texture details.", "In summary, the proposed BlindSRSNF can obtain more realistic and clear SR results in real scenarios.", "Figure: Comparison of visual results with different sampling intervals." 
], [ "Ablation Studies", "In this section, we first verify the effectiveness of the degraded representation learning.", "Then, we discuss the parameter settings of the SNF model." ], [ "Degradation Representation Learning", "We construct a contrastive model without the degradation representation learning module, where the degradation-aware convolutional layers in the denoising network are replaced by ordinary convolutional layers, and the contrastive loss is removed.", "Table REF shows the ablation results, which were tested on the GeoEye-1 dataset.", "The blur kernel parameters are $\\lambda _1=3.6,\\lambda _2=2.4,\\theta =0$ and the noise levels are 0, 5 and 10, respectively.", "The results show that the degradation representation learning module can improve performance of our proposed BlindSRSNF on blind SR tasks." ], [ "Sampling Interval", "The sampling interval $\\gamma $ of the reverse process is an important parameter of the SNF.", "It determines the number of times the denoising model is performed in the reverse process, so it will directly affect the inference time and the performance of our model.", "Table REF shows the effect of sampling interval on model performance and runtime.", "Fig.", "REF shows a comparison of visual results for different sampling intervals.", "Table REF shows that the PSNR keeps increasing with the sampling interval increasing.", "The proposed method achieves the best FID and LPIPS scores when the sampling interval is set to 25 or 50.", "When the sampling interval is greater than 100, although a higher PSNR is obtained, the FID and LPIPS scores drop significantly.", "As can be seen from Fig.", "REF , too small or too large sampling interval will lead to a decrease in the perception quality of SR results.", "From the results of $\\gamma =1$ , obvious pseudo textures appear in the farmland area; while the SR results of $\\gamma =500$ are blurry and lack clear texture details.", "This is because a smaller sampling interval results in more sampling steps in reverse process of SNF, and the method will tend to generate more texture details.", "Although generating more textures can significantly improve the visual perceptual quality of SR results, the potential pseudo-textures can also exacerbate the spatial distortion of results, reflected as a decrease in PSNR values.", "Furthermore, the sampling interval is inversely proportional to the runtime.", "Therefore, too small sampling interval will greatly increase the computational cost in the inference phase.", "In conclusion, to comprehensively balance the visual perception quality, runtime and spatial distortion degree of the BlindSRSNF, we set the sampling interval to 50.", "It is worth noting that the proposed method can control the richness of the texture by adjusting the sampling interval during the inference stage.", "This characteristic allows user to flexibly control the performance of the method during the inference phase without retraining the model, which is not possible with GAN-based models." 
], [ "Conclusions", "In this article, we propose a novel blind SR algorithm based on SNF to better handle various blur kernels and noise levels in real-world RSIs.", "The BlindSRSNF realizes the probability distribution transformation between the prior space and the target space through a Markov process.", "Combining the LR encoding and the degradation representation vector, we construct a conditional transition probability of the reverse diffusion process, which makes it possible to explicitly optimize the NLL of the generative model.", "This optimization mechanism significantly reduces the training difficulty of generative models compared to GAN-based algorithms.", "We propose to use pixel folding and pixel shuffle operations to reduce the dimension of feature maps, combined with the interval sampling strategy, which effectively improves the sampling efficiency of flow-based models.", "Furthermore, we introduce a contrastive learning-based degradation representation strategy to avoid the error amplification problem caused by inaccurate degradation kernel estimation.", "Comprehensive experiments on the GeoEye-1 and GoogleEarth datasets show that the BlindSRSNF improves the performance of blind SR compared to the SOTA algorithms.", "Visual results show that the proposed BlindSRSNF can more realistically restore the details of ground objects in real-world LR RSIs." ] ]
2210.07751
[ [ "FeatureBox: Feature Engineering on GPUs for Massive-Scale Ads Systems" ], [ "Abstract Deep learning has been widely deployed for online ads systems to predict Click-Through Rate (CTR).", "Machine learning researchers and practitioners frequently retrain CTR models to test their new extracted features.", "However, the CTR model training often relies on a large number of raw input data logs.", "Hence, the feature extraction can take a significant proportion of the training time for an industrial-level CTR model.", "In this paper, we propose FeatureBox, a novel end-to-end training framework that pipelines the feature extraction and the training on GPU servers to save the intermediate I/O of the feature extraction.", "We rewrite computation-intensive feature extraction operators as GPU operators and leave the memory-intensive operator on CPUs.", "We introduce a layer-wise operator scheduling algorithm to schedule these heterogeneous operators.", "We present a light-weight GPU memory management algorithm that supports dynamic GPU memory allocation with minimal overhead.", "We experimentally evaluate FeatureBox and compare it with the previous in-production feature extraction framework on two real-world ads applications.", "The results confirm the effectiveness of our proposed method." ], [ "Introduction", "Deep learning has been widely employed in many real-world applications, e.g., computer vision [10], [13], [28], [5], data mining [8], [22], [14], [17], [21], [33], and recommendation systems [3], [2], [29], [35], [20], [15], [32].", "In recent years, sponsored online advertising also adopts deep learning techniques to predict the Click-Through Rate (CTR) [7], [38], [19], [40], [9], [24], [27], [36], [34], [31].", "Unlike common machine learning applications, the accuracy of the CTR prediction is critical to the revenue.", "In the context of a many-billion-dollar online ads industry, even a $0.1\\%$ accuracy increase will result in a noticeable revenue gain [37].", "In this work, we identify two major paths to improve the model accuracy.", "The first area is to propose different and enhanced model architectures.", "Every improvement in this direction is considered a fundamental milestone in the deep learning community—and does not happen often in the CTR prediction industry.", "The other (more practical) is feature engineering, i.e., to propose and extract new features from the raw training data.", "The benefit of feature engineering is usually neglected in common deep learning applications because of the general belief that deep neural networks inherently extract the features through their hidden layers.", "However, recall that CTR prediction applications are accuracy-critical, hence, the gain from an improved feature engineering strategy remains attractive for in-production CTR prediction models.", "Therefore, in order to achieve a better prediction performance, CTR deep learning models in real-world ads applications tend to utilize larger models and more features extracted from raw data logs.", "Testing on the historical and online data is the rule-of-the-thumb way to determine whether a new feature is beneficial.", "Every new feature with positive accuracy improvement (e.g., $0.1\\%$ ) is included into the CTR model.", "Machine learning researchers and practitioners keep this feature engineering trial-and-error on top of the current in-production CTR model.", "As a result, the in-production CTR model becomes larger and larger with more and more features.", "To support the 
trial-and-error research for new features, we need to train massive-scale models on massive-scale raw training data in a timely manner.", "Previous studies [37] propose a hierarchical GPU parameter server that trains out-of-memory models on GPU servers, accelerating the training with GPUs and SSDs.", "With a small number of GPU servers, e.g., 4, one can obtain the same training efficiency as a CPU-only cluster with hundreds of nodes.", "The training framework focuses on the training stage and assumes the training data are well-prepared—the training data are accessed from a distributed file system.", "However, preparing the training data is not trivial for industrial-level CTR prediction models—with $\\sim $ $10^{12}$ features.", "The feature extraction from raw data logs can take a significant proportion of the training time.", "In addition to the frequent retraining for new feature engineering trials, online ads systems have to digest a colossal amount of newly incoming data to keep the model up-to-date and performing optimally.", "Given these rapid training demands, optimizing the feature extraction stage becomes one of the most desirable goals of online ads systems.", "This latter point is the scope of our contribution.", "Figure: A visual illustration for the original feature extraction and training workflow (upper); and our proposed FeatureBox (lower). Training workflow.", "The upper part of Figure REF depicts a visual illustration of the feature extraction.", "Due to the large amount of raw data, the original feature extraction task is constructed as MapReduce [4] jobs that compute feature combinations, extract keywords with language models, etc.", "Those MapReduce jobs frequently read and write intermediate files on the distributed file system (i.e., HDFS [1]).", "The intermediate I/O can be as large as 200 TB.", "Once the features are extracted, we also need to materialize the $\\sim $ 15 TB of extracted features to HDFS so that the following distributed training framework can read them from the distributed file system.", "This training workflow incurs frequent communication with HDFS, which generates heavy I/O overhead.", "One straightforward question can be raised: Can we perform the feature extraction within GPU servers to eliminate the communication overhead?", "In the lower part of Figure REF , we depict an example of the proposed training framework that combines the feature extraction and the training computation within GPU servers.", "The intermediate I/O is eliminated by integrating the feature extraction and the training computation into a pipeline: for each batch of extracted features, we feed the batch to the model training without writing it as intermediate files into HDFS.", "Challenges & Approaches.", "However, moving the feature extraction to GPU servers is non-trivial.", "Note that the number of GPU nodes is much smaller compared with the CPU-only cluster.", "We acknowledge three main challenges in embedding the feature extraction phase into GPU servers: Network I/O bandwidth.", "The network I/O bandwidth of GPU servers is orders of magnitude smaller than the bandwidth of CPU clusters because we have fewer nodes—the total number of network adapters is lower.", "We materialize frequently-used features as basic features so that we can reuse them without extra I/O and computations.", "In addition, we use a column store that reads only the required columns in the logs to reduce I/O.", "Computing Resources.", "With a smaller number of nodes, the
CPU computing capability of GPU servers is also orders of magnitude lower than that of the CPU cluster.", "We have to move the CPU computations to GPU operations to bridge the computing power gap.", "Memory Usage.", "The feature extraction process contains many memory-intensive operations, such as dictionary table lookup, sort, reduce, etc.", "An efficient memory management system is desired to perform dynamic memory allocations on GPU servers with limited memory.", "We summarize our contributions as follows: We propose FeatureBox, a novel end-to-end training framework that pipelines the feature extraction and the training on GPU servers.", "We present a layer-wise operator scheduling algorithm that assigns the operators to CPUs and GPUs.", "We introduce a light-weight GPU memory management algorithm that supports dynamic GPU memory allocation with minimal overhead.", "We experimentally evaluate FeatureBox and compare it with the previous in-production feature extraction framework on two real-world ads applications.", "The results confirm the effectiveness of our proposed methods." ], [ "Preliminary", "In this section, we present a brief introduction to CTR prediction models and the hierarchical GPU parameter server.", "Both concepts are the foundations of FeatureBox." ], [ "CTR Prediction Models", "About a decade ago, CTR prediction strategies with large-scale logistic regression models on carefully engineered features were proposed in [6], [11].", "With the rapid development of deep learning, deep neural networks (DNNs) have attracted a lot of attention in the CTR research community: DNN models with wide embedding layers obtain significant improvements over classical models.", "The model takes a sparse high-dimensional vector as input and converts those sparse features into dense vectors through sequential embedding layers.", "The output dense vector is considered a low-dimensional representation of the input and is then fed into the following layers in order to compute the CTR.", "Most proposed CTR models share the same embedding layer architecture and only focus on the following neural network layers, see, e.g., Deep Crossing [26], Product-based Neural Network (PNN) [25], Wide&Deep Learning [2], the YouTube Recommendation CTR model [3], DeepFM [12], xDeepFM [18], and Deep Interest Network (DIN) [39].", "They introduce special neural layers for specific applications that capture latent feature interactions.", "Figure: An example of the CTR prediction network architecture. We summarize those architectures in Figure REF .", "The input features are fed to the neural network as a sparse high-dimensional vector.", "The dimension of the vector can be $\\sim $$10^{12}$ or more.", "The input features for CTR models are usually from various sources with categorical values, e.g., query words, ad keywords, and user portraits.", "The categorical values are commonly represented as a one-hot or multi-hot encoding.", "Therefore, with categorical values from many sources, the number of dimensions is high ($\\sim $$10^{12}$ ) for industry CTR prediction models.", "Note that, as demonstrated in [37], feature compression or hashing strategies [30], [16] that reduce the number of dimensions are not fully applicable to the CTR prediction model because those solutions inevitably trade off the prediction accuracy for better computational time—recall that even a small accuracy loss leads to a noticeable online advertising revenue decrease, which is unacceptable.", "We embed the
high-dimensional features through an embedding layer to obtain a low-dimensional ($\\sim $$10^3$ ) representation.", "The number of parameters in the embedding layer can be 10 TB or more due to the high input dimension.", "After the low-dimensional embedding is obtained, we feed this dense vector to the neural network components to compute the CTR." ], [ "Hierarchical GPU Parameter Server", "Due to the extremely high dimension of the embedding layer, the model contains more than 10 TB of parameters, which do not fit on most computing servers.", "Conventionally, the huge model is trained on an MPI cluster.", "We partition the model parameters across multiple computing nodes (e.g., 150 nodes) in the MPI cluster.", "Every computing node is assigned a batch of training data streamed directly from the HDFS.", "Each node retrieves the required parameters from other nodes and computes the gradients for its current working mini-batch.", "The gradients are then sent to the nodes that maintain the corresponding parameters through MPI communications.", "Recently, hierarchical GPU parameter servers [37] were proposed to train the massive-scale model on a limited number of GPU servers.", "The key observation of the hierarchical GPU parameter server is that the set of parameters referenced in a mini-batch fits in the GPU memory because the input vector is sparse.", "It maintains three levels of hierarchical parameter servers on GPUs, CPU main memory, and SSDs.", "The working parameters are stored on GPUs, the frequently used parameters are kept in CPU main memory, and other parameters are materialized as files on SSDs.", "The upper-level module acts as a high-speed cache of the lower-level module.", "With 4 GPU nodes, the hierarchical GPU parameter server is 2X faster than 150 CPU-only nodes in an MPI cluster.", "Our proposed FeatureBox follows the design of the training framework in the hierarchical GPU parameter server and absorbs the feature engineering workload into GPUs to eliminate excessive intermediate I/O."
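To illustrate the sparse access pattern that this hierarchical design exploits, the following toy Python sketch mimics the pull/push cycle of an embedding table: only the rows referenced by the current mini-batch are ever materialized. All names are hypothetical, and the production system additionally spills cold rows to CPU memory and SSDs.

```python
import numpy as np

class TinyEmbeddingPS:
    """Toy key -> vector store emulating the pull/push access pattern."""
    def __init__(self, dim=8, lr=0.05, seed=0):
        self.dim, self.lr = dim, lr
        self.table = {}                    # sparse storage: feature id -> row
        self.rng = np.random.default_rng(seed)

    def pull(self, ids):
        """Materialize only the rows referenced by this mini-batch."""
        for i in ids:
            if i not in self.table:
                self.table[i] = self.rng.normal(0.0, 0.01, self.dim)
        return np.stack([self.table[i] for i in ids])

    def push(self, ids, grads):
        """SGD update of only the touched rows."""
        for i, g in zip(ids, grads):
            self.table[i] -= self.lr * g

ps = TinyEmbeddingPS()
batch_ids = [3, 981_724_552_113, 7]        # sparse ids from a ~1e12-dim space
emb = ps.pull(batch_ids)                   # "working parameters" of this batch
ps.push(batch_ids, np.ones_like(emb))
```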
], [ "FeatureBox Overview", "In this section, we present an overview of FeatureBox.", "We aim at allowing the training framework to support pipeline processing with mini-batches so that we can eliminate the excessive intermediate resulting I/O in conventional stage-after-stage methods.", "Figure REF depicts the detailed workflow of the FeatureBox pipeline.", "Figure: FeatureBox pipeline.Figure: An example for the heterogeneous operator scheduling.The workflow in Figure REF has two major tracks: – extract features from input views and – reading basic features.", "A view is a collection of raw data logs from one source, e.g., user purchase history.", "CTR prediction models collect features from multiple sources to obtain the best performance.", "The views are read from the network file system HDFS.", "We need to clean the views by filling null values and filtering out unrelated instances.", "Afterwards, the views are joined with particular keys such as user id, ads id, etc.", "We extract features from the joined views to obtain the desired features from the input views.", "Then, these features are merged with the basic features, read in a parallel path.", "We provide a detailed illustration for these operations as follows: Read views and basic features.", "The views and basic features are streamed from the distributed file system.", "The features are organized in a column-wise manner so that we only need to read the required features.", "Clean views.", "Views contain null values and semi-structured data, e.g., JSON format [23].", "At the view cleaning stage, we fill the null values and extract required fields from the semi-structured data.", "Following the cleaning, all columns have non-empty and simple type (as integer, float, or string) fields.", "Note that the resulting views contain all the logged instances.", "For an application, it may not need to include all instances, e.g., an application for young people.", "A custom filter can be applied to filter out unrelated instances of the current application.", "Join views.", "We now have one structured table for each view.", "Data from different views are concatenated by joining their keys, e.g., user id, ad id, etc.", "We recall that the join step combines multiple views into a single structured table.", "Extract features.", "Every time CTR model engineers propose a new feature, an operator that computes the new feature extraction on the structured table is created.", "A collection of those operators are executed in the feature extraction stage.", "The FeatureBox framework figures out the dependencies of operators and schedules the execution of the operators.", "Merge features.", "The extracted features are further merged with the basic features read from HDFS.", "The merging is also realized by a join operation on the instance id, which is a unique value generated when an instance is logged.", "Subsequent to the merging, a mini-batch of training data is generated and is fed to the neural network for the training." 
], [ "Heterogeneous Operator Scheduling", "The stages discussed above are represented as operators in the FeatureBox pipeline.", "Note that those operators are heterogeneous: Some operators are network I/O intensive, e.g., read views and read basic features; some operators are computation-intensive, e.g., clean views and extract features; and the remaining operators with joining, e.g., join views and merge features, rely on heavy memory consumption for large table joins (which corresponds to a large dictionary lookup).", "Therefore, we introduce a heterogeneous operator scheduler that manages the operator execution on both CPUs and GPUs.", "Scheduling.", "Figure REF shows an example for the heterogeneous operator scheduling algorithm.", "We first present a function call graph for operators in Figure REF (a).", "Three operators and three major functions are displayed in the example.", "Op1 calls Func3; Op2 calls Func1 and Func3; and Op3 calls Func2 and Func3, where Func1 and Func2 are pre-processing calls, and Func3 is a post-processing call.", "We make a fine granularity pipeline so that the initialing overhead of the pipeline is minimized.", "The fine-granularity is obtained by viewing each function call as a separate operator.", "Then, we obtain 5 more operators: Op4 is a call for Func1; Op5 is a call for Func2; Op6, Op7, and Op8 are the Func3 calls from Op1, Op2, and Op3, respectively.", "Their dependency graph is illustrated in Figure REF (b).", "Now we have a directed acyclic graph (DAG) for the operators.", "As shown in Figure REF (c), we perform a topological sort on the dependency graph, assign the operators with no dependencies (root operators) to the first layer, and put the remaining operators to the corresponding layer according to their depth from the root operators.", "With this layer-wise partition, we observe that the operators in the same layer do not have any execution dependency.", "We issue the operators in the same layer together and perform a synchronization at the end of each layer to ensure the execution dependency.", "We prefer to execute operators on GPUs unless an operator requires a significant memory footprint that does not fit in the GPU memory.", "For instance, Op5 (Func2) in Figure REF is a word embedding table look up operation that requires a considerable amount of memory.", "We assign this operation to CPU workers and move its results from the CPU main memory to GPUs as a host-to-device (H2D) CUDA call.", "Inner-GPU operator launching.", "After the layer-wise DAG operator scheduling, we have determined the execution device for each operator and the synchronization barriers.", "However, CUDA kernel launching is has a noticeable overhead.", "We report the CUDA kernel launch overhead in Table REF .", "Table: The kernel launching overhead with an empty kernel on Nvidia Tesla V100-SXM2-32GB.The test is performed on an Nvidia Tesla V100-SXM2-32GB GPU for an empty kernel with 5 pointer-type arguments.", "The CUDA driver version is 10.2.", "The average launching time for a kernel is around 3.5 us.", "Since we have fine-granularity operators, we have to rapidly launch CUDA kernels to execute the large number of operators.", "In order to eliminate the launching overhead, we rewrite the operator kernel as a CUDA device function for each operator in the same layer and create a meta-kernel that sequentially executes the operator device functions in a runtime-compilation manner.", "The overhead of the meta-kernel generation is disregarded—we only need to create this 
meta-kernel for each layer once as a pre-processing step of the training, since we determine the operator execution order before the actual training phase and keep the scheduling fixed.", "With the generated meta-kernels, we only need to launch one kernel for each layer." ], [ "GPU Memory Management", "Feature extraction operators usually need to cope with strings of varying length, e.g., query keywords and ad titles.", "Executing such operators commonly requires dynamic memory allocation to process the strings.", "For example, splitting a string with a delimiter needs to allocate an array to store the result of the splitting operation.", "We propose a light-weight block-level GPU memory pool to accelerate this dynamic allocation.", "Figure: A visual illustration of the GPU memory pool architecture. Figure REF presents a visual illustration of our proposed block-level GPU memory pool.", "The Thread Offsets denotes an array that stores the pointers to the dynamically allocated memory in the GPU memory pool.", "The memory in the GPU memory pool is pre-allocated in the GPU global memory.", "For each block, the allocated memory is aligned to 128 bytes for cache-friendly execution.", "Dynamic GPU memory allocation.", "Algorithm  describes the workflow of the in-kernel dynamic memory allocation.", "We maintain a global variable idle_memory_head that stores the pointer to the head address of our pre-allocated GPU memory pool.", "We assume each GPU thread in a block has computed its required allocation size $\\textit {size}_i$ .", "We first compute an in-block parallel prefix sum on $\\textit {size}_{1..N}$ to obtain the prefix sum $\\textit {prefix}_{1..N}$ , where $N$ is the number of threads in a block.", "The prefix sum is used to compute the total size of the requested memory.", "In addition, we can easily compute the thread offsets by adding the prefix sum to the head of the allocated memory address.", "After that, we let one thread in the block, e.g., thread 1, request the memory for the entire block—the total size is $\\textit {prefix}_N$ .", "The memory allocation is implemented by an atomic_add operation.", "Line  calls the CUDA atomic add that adds $\\textit {prefix}_N$ to idle_memory_head and returns the old value of idle_memory_head to address in an atomic fashion—no data race within this operation.", "Once the requested memory is allocated for the block, we increment the idle_memory_head pointer in the memory pool.", "We finalize the allocation by letting all threads in the block compute their corresponding offsets by adding the prefix sum to the allocated address.", "The memory allocation is called inside the meta-kernel that we generated in the operator scheduling.", "The entire allocation process has very little overhead—it does not require any inter-block synchronization or any kernel launches.", "[b!]", "In-Kernel Dynamic Memory Allocation Input: allocation memory size for the $i^\\textit {th}$ thread, $\\textit {size}_i$ ; global memory pool head pointer, $\\textit {idle\\_memory\\_head}$ ; Output: thread offsets, $\\textit {offsets}_i$ ; [1] $\\textit {prefix}_{1..N} \\leftarrow \\textit {parallel\\_prefix\\_sum}(\\textit {size}_{1..N})$ $\\textit {address} \\leftarrow \\textit {atomic\\_add}(\\textit {idle\\_memory\\_head},\\textit {prefix}_N)$ each thread $i$ in the current block concurrently $\\textit {offsets}_i \\leftarrow \\textit {address} + \\textit {prefix}_i- \\textit {prefix}_1$ Table: End-to-end training of MapReduce feature extraction with hierarchical GPU
parameter server and FeatureBox. Reset GPU memory pool.", "Our light-weight memory allocation strategy only maintains a pointer into a pre-allocated contiguous global memory region.", "However, the single-pointer design does not support memory freeing.", "Supporting free operations would require maintaining an additional collection of freed memory and allocating the requested memory chunks from this collection—the maintenance of this additional data structure leads to significant memory allocation overhead.", "We observe that our operators are fine-grained and are scheduled layer by layer.", "Therefore, we can assume that the total required memory for dynamic allocations fits in the GPU memory.", "We perform the memory release in a batch fashion: the memory pool is reset after each meta-kernel.", "The reset can be done in constant time—we only need to set idle_memory_head back to the original base address of the memory pool so that the allocation requests in the meta-kernel of the following layer are served from the beginning of the memory pool." ], [ "Experimental Evaluation", "In this section, we investigate the effectiveness of our proposed framework FeatureBox through a set of numerical experiments.", "Specifically, the experiments are designed to address the following questions: How does the end-to-end training time of FeatureBox compare with the previous MapReduce solution?", "How much intermediate I/O is saved by the pipelining architecture?", "What is the performance of FeatureBox in the feature extraction task?", "Systems.", "The MapReduce feature extraction baseline is our previous in-production solution to extract features for the training tasks.", "It runs in an MPI cluster with CPU-only nodes in a data center.", "Commonly, a feature extraction job requires 20 to 30 nodes.", "Each node is equipped with server-grade CPUs ($\\sim $ 100 threads).", "The training part is executed on GPU nodes.", "Each GPU node has 8 cutting-edge 32 GB HBM GPUs, $\\sim $ 1 TB main memory, $\\sim $ 20 TB RAID-0 NVMe SSDs, and a 100 Gb RDMA network adaptor.", "The training framework is the hierarchical GPU parameter server.", "All nodes are inter-connected through a high-speed Ethernet switch.", "Models.", "We use CTR prediction models on two real-world online advertising applications.", "The neural network backbones of both models follow the design in Figure REF .", "The major difference between the two models is the number of input features.", "Both models have more than 10 TB of parameters.", "We collect real user click history logs as the training dataset."
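Before turning to the results, the in-kernel allocation and the constant-time pool reset described in the previous section can be summarized by a small host-side Python simulation (ours; the real version runs inside the meta-kernel and uses CUDA's atomicAdd):

```python
import numpy as np

class BlockPool:
    """CPU simulation of the block-level GPU memory pool."""
    def __init__(self, capacity):
        self.base = 0                        # start of the pre-allocated region
        self.head = 0                        # idle_memory_head
        self.capacity = capacity

    def alloc(self, sizes):
        """Allocate per-thread chunks for one block with a single head update."""
        prefix = np.cumsum(sizes)            # in-block parallel prefix sum
        total = (int(prefix[-1]) + 127) & ~127   # 128-byte alignment per block
        address, self.head = self.head, self.head + total  # the atomic_add
        assert self.head <= self.capacity, "pool exhausted before reset"
        return address + prefix - sizes      # exclusive prefix -> thread offsets

    def reset(self):
        self.head = self.base                # constant-time release after a meta-kernel

pool = BlockPool(capacity=1 << 20)
print(pool.alloc(np.array([3, 5, 2])))       # -> [0 3 8]
pool.reset()                                  # next layer allocates from the start
```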
], [ "End-to-End Training ", "We report Table REF specifications about the training data and the end-to-end training comparison between our proposed FeatureBox and the MapReduce feature extraction with hierarchical GPU parameter server training as a baseline.", "Both training datasets contain billions of instances.", "The size of the logs is $\\sim $ 15 TB for application A, and $\\sim $ 25 TB for application B.", "The end-to-end training time includes the features extraction from the log time and the model training time.", "FeatureBox uses 1 GPU server for application A and 2 GPU servers for application B.", "In addition to the GPU servers, the baseline solution also employs 20/30 CPU-only servers to perform feature extraction.", "The baseline solution first extracts features using MapReduce, saves the features as training data in HDFS, and streams the generated training data to the GPU servers to train the model.", "On the other hand, FeatureBox processes the data in a pipeline fashion: features are extracted on GPU servers and then are immediately fed to the training framework on the same GPU server.", "For application A, FeatureBox only takes 3.5 hours to finish the feature extraction and the training while the baseline solution requires 18 hours—with fewer number of machines, FeatureBox has a 5.14X speedup compared to the baseline.", "Meanwhile, Application B presents a bigger volume of log instances.", "Hence, we use two GPU servers to perform the training.", "We can observe a larger gap between FeatureBox and the baseline when the data size scales up: FeatureBox outperforms the baseline with a 10.19X speedup.", "One of the main reasons of the speedup is that FeatureBox eliminates the huge intermediate I/O from the MapReduce framework.", "We save $\\sim $ 50-100 TB intermediate I/O while using FeatureBox." ], [ "Feature Extraction", "Although the improvement of FeatureBox in the end-to-end training time mainly benefits from the pipeline architecture, we also investigate the feature extraction performance to confirm that our proposed GPU feature extraction framework is a better alternative to the baseline MapReduce solution.", "Figure: Feature extraction time of MapReduce and FeatureBox.We report, in Figure REF , the time to extract features from $10,000$ log instances of Application B. MapReduce runs on 30 CPU-only servers and FeatureBox runs on 2 GPU servers.", "The pre-processing time includes the stages to prepare the data for the feature extraction, such as read, clean, and join views.", "The pre-processing time of both methods are comparable because the executed operations are mostly memory and network I/O.", "Regarding the time to extract features, FeatureBox is more than 3 times faster than MapReduce.", "FeatureBox only takes around half of the time to extract the features than the baseline." ], [ "Discussion", "Based on these results, we can answer the questions that drive the experiments: The end-to-end training time of FeatureBox is 5-10 times faster than the baseline.", "Due to the pipeline design, FeatureBox saves us 50-100 TB intermediate I/O.", "For feature extraction only tasks, FeatureBox on 2 GPU servers is 2X faster than MapReduce on 30 CPU-only servers." 
], [ "Conclusions", "In this paper, we introduce FeatureBox, a novel end-to-end training framework that pipelines the feature extraction and the training on GPU servers to save the intermediate I/O of the feature extraction.", "We rewrite computation-intensive feature extraction operators as GPU operators and leave the memory-intensive operator on CPUs.", "We introduce a layer-wise operator scheduling algorithm to schedule these heterogeneous operators.", "We present a light-weight GPU memory management algorithm that supports dynamic GPU memory allocation with minimal overhead.", "We experimentally evaluate FeatureBox and compare it with the previous in-production MapReduce feature extraction framework on two real-world ads applications.", "The results show that FeatureBox is 5-10X faster than the baseline." ] ]
2210.07768
[ [ "A combinatorial algebraic approach for the modified second-generation\n time-delay interferometry" ], [ "Abstract We generalize the combinatorial algebraic approach first proposed by Dhurandhar et al.", "to construct various classes of modified second-generation time-delay interferometry (TDI) solutions.", "The main idea behind the algorithm is to enumerate, in a given order, a specific type of commutator between two monomials defined by the products of particular time-displacement operators.", "On the one hand, the above commutators can be systematically rewritten as the elements of a left ideal, defined by the l.h.s.", "of the relevant equation for the TDI solution.", "On the other hand, these commutators are shown to vanish if we only keep up the first-order contributions regarding the rate of change of armlengths.", "In other words, each commutator furnishes a valid TDI solution pertaining to the given type of modified second-generation combinations.", "In this work, the original algorithm, which only involves time-delay operators, is extended by introducing the time-advance ones and then utilized to seek solutions of the Beacon, Relay, Monitor, Sagnac, and fully symmetric Sagnac types.", "We discuss the relation between the present scheme's solutions and those obtained by the geometric TDI approach, a well-known method of exhaustion of virtual optical paths.", "In particular, we report the results on novel Sagnac-inspired solutions that cannot be straightforwardly obtained using the geometric TDI algorithm.", "The average response functions, floor noise power spectral densities, and sensitivity functions are evaluated for the obtained solutions." ], [ "Introduction", "In terms of ground-based laser interferometry, the first ever direct detection of gravitational waves was accomplished by LIGO and Virgo collaborations in 2015 [1].", "The observation has been widely recognized as the inauguration of an era of gravitational-wave astronomy, as a new window to the universe had become available besides that via the electromagnetic spectrum.", "Nonetheless, ground-based gravitational detections are constrained by a couple of crucial factors, such as the baseline length, Earth's seismic vibrations, and gravity-gradient noise.", "To this end, the measurement is only feasible for the frequency band typically above 10Hz [2].", "On the other hand, abundant wave sources, potentially for more robust and durable signals, are associated with a lower frequency band of 0.1mHz to 1Hz.", "The latter is aimed at by the ongoing space-based gravitational wave detector projects, which include, notably, the LISA [3], TianQin [4], Taiji [5], and DECIGO [6].", "A space-borne detector typically consists of three identical spacecraft that form a giant, almost equilateral triangle configuration.", "Encoded as Doppler frequency shifts, the information on the gravitational wave is embedded in the resultant beat notes between the laser beams exchanged among the spacecraft.", "Unlike their ground-based counterpart, the relative frequency fluctuations primarily come from the laser's intrinsic phase instability.", "In particular, the laser frequency noise cannot be simply canceled out using an equal-arm Michelson configuration.", "This is because the distances between spacecraft vary in time according to the orbital dynamics, which gives rise to armlengths mismatch, where the rate of change of the armlength is between 5 and $10\\mathrm {m/s}$  [3].", "Typically, the dominant laser frequency noise is about $7-8$ 
orders of magnitude higher than those from other sources [7].", "In this regard, the TDI algorithm, first introduced by Tinto et al.", "in 1999 [8], aims to construct a virtual equal-arm interferometer by a proper combination of the delayed science data streams in order to suppress the laser frequency noise [7].", "The existing TDI solutions can be divided into four categories: first-generation, modified first-generation, second-generation, and modified second-generation.", "The first-generation TDI [9] approximates the entire triangular configuration as a rigid constellation without rotation, and subsequently, the detector arms are treated as static.", "The modified first-generation TDI considers the Sagnac effect caused by a rigid rotation of the entire constellation [10].", "For both cases, mathematically, the delay operations involved are commutative.", "Therefore, the solution space is a polynomial ring ${R}$ in three [11] and six [12] variables over the rational numbers.", "Subsequently, the problem can be reformulated [11], [12] to solve for the first module of syzygies of a relevant left ideal, whose generators can be obtained through the Groebner basis [13].", "The second-generation TDI further considers nonrigid rotation so that the armlengths vary independently but slowly in time [14], [15].", "The residual noise is therefore evaluated as an expansion in terms of time-derivatives of the armlengths truncated at the second-order contributions.", "The modified second-generation TDI discriminates between and enforces independent cancelation of distinct cyclic directions of the detector arms, leading to a more restrictive truncation scheme [15].", "Owing to the non-commutative nature, the second and modified second-generation TDI solutions are not straightforward.", "Vallisneri proposed the geometric TDI [16], a method of exhaustion to seek TDI combinations by enumerating close trajectories in the space-time diagram.", "The approach is intuitive because it can be shown that a geometric TDI solution corresponds to virtual equal-arm interferometry.", "It has been extensively utilized in the literature to solve the second and modified second-generation TDI solutions [17], [15].", "Nonetheless, since the solution space of the geometric TDI method grows by $3^n$ where $n$ is the number of links, searching for TDI combinations is often computationally expensive.", "Moreover, by definition, the algorithm demands that successive links must be “connected”.", "Thus the solution space is somewhat restrictive.", "In particular, the well-known fully symmetric Sagnac solutions of both generations lie beyond such a solution space.", "Since the TDI algorithm was first proposed, it has flourished experimentally and theoretically in the literature.", "Regarding ground-based experiments, Vine et al.", "verified the noise cancellation performance of the Sagnac TDI combination [18], and more recently, Vinckier et al.", "implemented the optical comb TDI scheme [19] by using the acousto-optic modulators and optical combs.", "These results demonstrated TDI's viability in effectively suppressing laser frequency noise.", "From the statistical inference perspective, Romano et al.", "[20] pointed out that the principal component analysis of the noise covariance matrix effectively furnishes a feasible TDI solution, and subsequently, further development of Bayesian TDI [21], [22] was proposed.", "Recently, Dhurandhar et al.", "showed [23] a close relationship between two matrix-based TDI approaches that were 
independently proposed by Vallisneri [24] et al.", "and Tinto [25] et al..", "In addition, efforts have been devoted to related topics such as eliminating residual clock noise using optical combs [26].", "The clock noise cancellation scheme has also been generalized to the second-generation TDI combination by using sideband measurement [27], [28].", "Notably, Dhurandhar et al.", "elaborated a combinatorial algebraic algorithm [29] for the modified second-generation TDI solutions in the case of one arm being dysfunctional.", "In this simplified case, the polynomials that furnish a valid TDI solution are specific elements of a polynomial ring in four variables that corresponds to the kernel of the homomorphism $\\varphi : {R}^2\\rightarrow {R} ,$ namely, the first module of syzygies.", "The proposed algorithm enumerates, in a given order, a specific type of commutator between two monomials defined by the products of particular time-displacement operators.", "The authors showed that the commutators in question are the elements of the above left ideal ${R}^2$ .", "Moreover, these commutators manifestly vanish if we only keep up the first-order contributions regarding the rate of change of armlengths.", "In other words, these commutators correspond to valid TDI solutions.", "It was shown how such solutions could be systematically constructed, and then the algorithm was applied to solve for the Michelson-type TDI combinations of the modified second-generation.", "Based on Dhurandhar et al.", "'s results [29], the present paper further develops and explores the algorithm.", "By inspecting the procedure, we argue that the interchange operation introduced in the original work is not an indispensable element.", "Besides, the original algorithm, which only involves time-delay operators, is extended by introducing the time-advance ones.", "As will be discussed below, the essence of the algorithm is the introduction of additional constraints to the TDI equation.", "Consequently, the solution space simplifies so that the TDI solution can be constructed in terms of particular commutators.", "Moreover, we show that the above generalizations expand the solution space, giving rise to a more flexible and robust algorithm.", "Then, it is utilized to seek TDI solutions of the Beacon, Relay, Monitor, Sagnac, and fully symmetric Sagnac types.", "In particular, we report a novel set of Sagnac-inspired solutions that cannot be straightforwardly derived using the geometric TDI.", "We also discuss the relationship between the present scheme's solutions and those obtained by the geometric TDI approach.", "The average response functions, floor noise power spectral densities, and sensitivity functions are evaluated for the obtained solutions.", "The remainder of the paper is organized as follows.", "In Sec.", ", we briefly present the problem of the TDI algorithm, together with the notations and conventions used in this paper.", "The combinatorial approach proposed by Dhurandhar et al.", "is discussed in Sec. 
.", "The original application to Michelson-type TDI solutions is revisited.", "In Sec.", ", the approach is elaborated further by including time-advance operators.", "Subsequently, the properties of the commutator are derived and discussed.", "A more general version of the algorithm is formulated.", "Subsequently, in Sec.", ", the main results are presented by taking the Monitor type TDI solutions as an example.", "We show that the nine sixteen-link modified second-generation TDI combinations, initially obtained by employing the geometric TDI, can be readily retrieved.", "Furthermore, we apply the method to the case of fully symmetric Sagnac combinations.", "Besides the well-known solutions in the literature, we present a few novel Sagnac-inspired solutions which cannot be straightforwardly obtained using the geometric TDI.", "The average response functions, residual noise power spectral densities, and the sensitivity curves of the obtained novel solutions are evaluated.", "Further discussions and concluding remarks are given in the last section.", "Some complementary derivations and discussions are relegated to the Appendices of the paper.", "In Appendix , we give proof of some of the mathematical relations utilized in the main text.", "A few other classes of modified second-generation TDI solutions, namely, the Beacon, Relay, and Sagnac TDI combinations, are derived and presented in Appendix  using the method proposed in the present paper.", "In Appendix , we enumerate the lower-order commutators that furnish the TDI solutions using the proposed algorithm.", "We also show that two classes of higher-order TDI solutions can be induced using the lower-order ones, and the results are presented in Appendix .", "The explicit expressions of noise power spectral densities and the average response functions of the detector for the Sagnac-type combinations are given in Appendix ." 
], [ "Definitions, notations, and conventions of the TDI algorithm", "As illustrated in Fig.", "REF , the experimental layout of a space-based gravitational wave detector consists of three identical spacecraft, denoted as SC$i$ (with $i=1, 2, 3$ ) [7].", "The armlengths sitting on the opposite side of SC$i$ are denoted as $L_i$ (and $L_{i^{\\prime }}$ ) in the counterclockwise (and clockwise) direction.", "Two optical benches, labeled by $i$ and $i^{\\prime }$ , are installed on each spacecraft.", "The phasemeters of the optical benches perform the measurements of three types of data streams, namely, the science data streams $s_{i (i^{\\prime })}$ , test mass data streams $\\epsilon _{i (i^{\\prime })}$ , and reference data streams $\\tau _{i (i^{\\prime })}$ .", "The science data streams carry the essential information on the gravitational waves embedded in the beat notes between the laser beams from the distant and local spacecraft.", "The test mass and reference data streams are formed by the interference between the two local laser beams from the adjacent optical benches $i$ and $i^{\\prime }$ , whereas for the test mass data stream, one of the laser beams is reflected from the test mass.", "By considering the laser frequency noise $p_{i (i^{\\prime })}$ , optical bench motion noise $\\dot{\\vec{\\Delta }}_{i (i^{\\prime })}$ , test mass noise $\\dot{\\vec{\\delta }}_{i, (i^{\\prime })}$ , shot noise $N_{i (i^{\\prime })}^{opt}$ associated with the optical benches $i (i^{\\prime })$ .", "The relevant data streams recorded at the optical bench possess the following forms $s_{i} (t) &= D_{i-1} p_{{(i+1)}^{^{\\prime }}}(t) - p_{i} (t) + \\nu _{(i+1)^{^{\\prime }}}[\\vec{n}_{i-1}\\cdot D_{i-1}\\dot{\\vec{ \\Delta }}_{(i+1)^{^{\\prime }}}(t)+\\vec{n}_{(i-1)^{^{\\prime }}}\\cdot \\dot{\\vec{\\Delta }}_{i}(t)] + H_{i} (t) + N_{i} ^{opt} (t),\\\\\\epsilon _{i} (t)&= p_{{i}^{^{\\prime }}}(t) - p_{{i}}(t) - 2\\nu _{i^{^{\\prime }}} [\\vec{n}_{(i-1)^{^{\\prime }}} \\cdot \\dot{\\vec{\\delta }}_i (t)-\\vec{n}_{(i-1)^{^{\\prime }}} \\cdot \\dot{\\vec{\\Delta }}_i (t)], \\\\\\tau _i (t)&= p_{i^{^{\\prime }}} (t) - p_i (t),$ and $s_{i^{^{\\prime }}} (t) &= D_{{(i+1)}^{^{\\prime }}} p_{i-1}(t) - p_{i^{^{\\prime }}} (t) + \\nu _{i-1}[\\vec{n}_{i+1}\\cdot \\dot{\\vec{\\Delta }}_{i^{^{\\prime }}}(t)+\\vec{n}_{(i+1)^{^{\\prime }}}\\cdot D_{(i+1)^{^{\\prime }}}\\dot{\\vec{\\Delta }}_{i-1}(t)]+ H_{i^{^{\\prime }}} (t) + N_{i^{^{\\prime }}} ^{opt} (t),\\\\\\epsilon _{i^{^{\\prime }}} (t)&= p_{{i}}(t) - p_{{i}^{^{\\prime }}}(t) - 2\\nu _{i} [\\vec{n}_{i+1} \\cdot \\dot{\\vec{\\delta }}_{i^{^{\\prime }}} (t)-\\vec{n}_{i+1} \\cdot \\dot{\\vec{\\Delta }}_{i^{^{\\prime }}} (t)], \\\\\\tau _{{i}^{^{\\prime }}} (t)&= p_i (t) - p_{i^{^{\\prime }}} (t),$ where $H_i (t), H_{i^{^{\\prime }}} (t)$ represent the gravitational wave signals, $D_{i (i^{\\prime })}$ are the time-delay operators along the related armlengths, $\\nu _{i (i^{\\prime })}$ are the laser's frequency, and $\\vec{n}_{i(i^{^{\\prime }})}$ are unit vectors along the armlengths.", "Following the standard procedure of the TDI algorithm, the optical bench motion noise can be eliminated by introducing intermediate variables [27].", "Subsequently, the two local lasers are effectively connected by intra-spacecraft phase locking, and the resultant observables read ${\\eta _i}(t) &= {H_i}(t) + {{D}_{i - 1}}{p_{i + 1}}(t) - {p_i}(t) + \\nu _{(i+1)^{^{\\prime }}} {{\\vec{n}}_{i - 1}}\\left[ {{{D}_{i - 1}}{{\\dot{\\vec{\\delta }}}_{(i + 1)^{\\prime }}}(t) - 
{{\\dot{\\vec{\\delta }}}_i}(t)} \\right] + N_i^{{opt}}(t) ,\\\\{\\eta _{i^{\\prime }}}(t) &= {H_{i^{\\prime }}}(t) + {{D}_{(i + 1)^{\\prime }}}{p_{i - 1}}(t) - {p_i}(t) + \\nu _{i-1}{{\\vec{n}}_{i + 1}} \\cdot \\left[ {{{\\dot{\\vec{\\delta }}}_{i^{\\prime }}}(t) - {{D}_{(i + 1)^{\\prime }}}{{\\dot{\\vec{\\delta } }}_{i - 1}}(t)} \\right] + N_{i^{\\prime }}^{{opt}}(t) .$ In terms of the above observables, a valid TDI solution [7] is aimed to eliminate the three remaining independent laser frequency noise $p_i$ by a combination of the form $\\mathrm {TDI}=\\sum _{i=1,2,3} (q_{i} \\eta _{i} + q_{i^{^{\\prime }}} \\eta _{i^{\\prime }}) ,$ where $q_{i}$ and $q_{i^{\\prime }}$ are polynomials in the six time-delay operators.", "By focusing on the laser frequency noise and explicitly demanding the coefficients before individual $p_{i}$ vanish, the above equation gives $\\begin{aligned}&q_{1}+q_{1^{\\prime }}-q_{2^{\\prime }} {D}_{3^{\\prime }}-q_{3} {D}_{2}=0,\\\\&q_{2}+q_{2^{\\prime }}-q_{3^{\\prime }} {D}_{1^{\\prime }}-q_{1} {D}_{3}=0, \\\\&q_{3}+q_{3^{\\prime }}-q_{1^{\\prime }} {D}_{2^{\\prime }}-q_{2} {D}_{1}=0.\\end{aligned}$ Using two equations, one may choose to remove two of the six coefficients.", "For instance, by eliminating $q_1$ and $q_2$ , one finds $q_{3}(1-D_{231})+q_{1^{\\prime }}(D_{31}-D_{2^{\\prime }})+q_{2^{\\prime }}(D_{1}-D_{3^{\\prime }31})+q_{3^{\\prime }}(1-D_{1^{\\prime }1}) = 0.$ From the geometric TDI perspective [15], different generations of the TDI solutions can be classified by how the delayed laser frequency noise, as an expansion using time derivatives of the armlengths, are truncated.", "To be specific, for the first-generation TDI, one considers $L_{i}(t)=L_{i}=\\mathrm {const.", "},\\ \\ L_{i}=L_{i^{^{\\prime }}} .$ For the modified first-generation TDI, one distinguishes armlengths of different cyclic orders, namely, $L_{i}(t)=L_{i}=\\mathrm {const.", "},\\ \\ L_{i}\\ne L_{i^{^{\\prime }}} .$ Following the literature [14], one refers to the case where $L_{i}(t)=L_{i}+t\\dot{L}_{i}, \\ \\ L_{i}\\ne L_{i^{^{\\prime }}}, \\ \\ \\dot{L}_{i}=\\dot{L}_{i^{^{\\prime }}}$ as the second-generation TDI.", "Lastly, the solutions satisfying $L_{i}(t)=L_{i}+t\\dot{L}_{i}, \\ \\ L_{i}\\ne L_{i^{^{\\prime }}}, \\ \\ \\dot{L}_{i}\\ne \\dot{L}_{i^{^{\\prime }}} ,$ as investigated by the studies [15] will be dubbed as the modified second-generation TDI It is noted that the last case was referred to as the second-generation TDI in some literature [35].. For the first-generation TDI defined in Eqs.", "(REF ) and (REF ), the polynomials $q_{i (i^{\\prime })}$ form a commutative ring which allows us to solve for the TDI combination using the Groebner basis [13].", "On the other hand, the delay operators in Eq.", "(REF ) can no longer be viewed as commutative in the context of the second generation TDI defined in Eqs.", "(REF ) and (REF ).", "To be more precise, the solution space of Eq.", "(REF ) is a left module over a non-commutative ring ${R} = \\mathbb {Q} (D_{1},D_{2},D_{3},D_{1^{^{\\prime }}},D_{2^{^{\\prime }}},D_{3^{^{\\prime }}})$ , where $\\mathbb {Q}$ is the field of rational numbers.", "However, the non-commutative nature of the problem implies that it is not straightforward to solve for the general and exhaustive form of the TDI combinations using an algebraic approach.", "In the next section, the properties of the commutators between polynomials of the delay operator will be studied, which eventually leads to a combinatorial algebraic approach." 
], [ "A specific type of commutators and the vanishing condition", "By applying the delay operators $D_{i}$ and $D_{i^{^{\\prime }}}$ on an arbitrary time-dependent variable $\\phi (t)$ , we have $\\begin{aligned}D_{i} \\phi (t) &=\\phi (t-L_{i} (t)),\\\\D_{ji}\\phi (t) & = \\phi (t-L_{j}(t)-L_{i} (t-L_{j} (t))) ,\\end{aligned}$ where, for convenience, one has denoted successive applications of the time-delay operators by $D_{j} D_{i}\\phi (t) \\equiv D_{ji} \\phi (t).$ By making use of the expansion $L_i (t)= L_i+t\\dot{L}_i+\\frac{1}{2}t^2\\ddot{L}_{i}+\\cdots ,$ where the speed of light in the vacuum is taken as unit $c=1$ , to the first order, Eqs.", "(REF ) give, $D_{i} \\phi (t) \\simeq \\phi (t-L_{i}) - \\dot{\\phi }(t-L_{i}) t \\dot{L}_{i},$ and $\\begin{aligned}D_{ji} \\phi (t) &\\simeq \\phi (t-L_{j} (t) - L_{i} (t) + \\dot{L}_{i} L_{j}(t))\\\\&\\simeq \\phi (t-L_{i}-L_{j}) + \\dot{\\phi }(t-L_{i}-L_{j})\\dot{L}_{i} L_{j}\\\\&-\\dot{\\phi }(t-L_{i}-L_{j})t(\\dot{L}_{i}+\\dot{L}_{j}) .\\end{aligned}$ It is straightforward to generalize the above results to higher orders.", "For instance, by applying three time-delay operators, one finds $\\begin{aligned}D_{kji} \\phi (t) &\\simeq \\phi (t-L_{i}-L_{j}-L_{k}) \\\\&+ \\dot{\\phi }(t-L_{i}-L_{j}-L_{k}) \\big (\\dot{L}_{i} (L_{j} + L_{k})+\\dot{L}_{j} L_{k}\\big )\\\\&-\\dot{\\phi }(t-L_{i}-L_{j}-L_{k})t(\\dot{L}_{i}+\\dot{L}_{j}+\\dot{L}_{k}).\\end{aligned}$ It was first pointed out in [29] that many TDI combinations can be attributed to a specific type of commutator whose form is related to the residual of the laser frequency noise.", "For convenience, in what follows, we will use the subscript $x$ to denote the first element of a commutator, and the subscript $y$ will be reserved for the second element of the commutator.", "For instance, a commutator will be formally written as $[D_{x_{1} x_{2} \\dots x_{n}},D_{y_{1} y_{2} \\dots y_{n}}]$ .", "Let us denote the commutator associated with the residual laser noise of the TDI solution as $\\mathrm {TDI}^{p}=[D_{x_{1} x_{2} \\dots x_{n}},D_{y_{1} y_{2} \\dots y_{n}}] p_{i} (t),$ where $x_{i}$ and $y_{i}$ are one of time-delay operators $D_{i}$ or $D_{i^{^{\\prime }}}$ .", "We illustrate the above statement with two examples.", "Let us take the modified first-generation Michelson combination $X_{1}$ as the first example.", "We have $\\begin{aligned}{X}_{1}(t)&=\\eta _{1}+D_{3} \\eta _{2^{\\prime }}+D_{33^{\\prime }} \\eta _{1^{\\prime }}+D_{33^{\\prime } 2^{\\prime }} \\eta _{3}\\\\&- \\left(\\eta _{1^{\\prime }}+D_{2^{\\prime }} \\eta _{3}+D_{2^{\\prime } 2} \\eta _{1}+D_{2^{\\prime } 23} \\eta _{2^{\\prime }}\\right) .\\end{aligned}$ It is straightforward to show that the residual of the laser frequency noise is $X^{p}_1(t)=[D_{3 3^{^{\\prime }}},D_{2^{^{\\prime }} 2}] p_{1} (t).$ For the second example, we consider the modified second-generation Michelson combination $X_{2}$ , $\\begin{aligned}X_{2}(t)&=\\eta _{1}+{D}_{3} \\eta _{2^{\\prime }}+{D}_{33^{\\prime }} \\eta _{1^{\\prime }}+{D}_{33^{\\prime } 2^{\\prime }} \\eta _{3}+{D}_{33^{\\prime } 2^{\\prime } 2} \\eta _{1^{\\prime }} \\\\&+{D}_{33^{\\prime } 2^{\\prime } 22^{\\prime }} \\eta _{3}+{D}_{33^{\\prime } 2^{\\prime } 22^{\\prime } 2} \\eta _{1}+{D}_{33^{\\prime } 2^{\\prime } 22^{\\prime } 23} \\eta _{2^{\\prime }} \\\\&-\\left(\\eta _{1^{\\prime }}+{D}_{2^{\\prime }} \\eta _{3}+{D}_{2^{\\prime } 2} \\eta _{1}+{D}_{2^{\\prime } 23} \\eta _{2^{\\prime }}+{D}_{2^{\\prime } 233^{\\prime }} \\eta _{1}\\right.", 
"\\\\&\\left.+{D}_{2^{\\prime } 233^{\\prime } 3} \\eta _{2^{\\prime }}+{D}_{2^{\\prime } 233^{\\prime } 33^{\\prime }} \\eta _{1^{\\prime }}+{D}_{2^{\\prime }233^{\\prime } 332^{\\prime }} \\eta _{3}\\right),\\end{aligned}$ the residual reads $X^{p}_2(t)=[D_{33^{^{\\prime }}2^{^{\\prime }}2},D_{2^{^{\\prime }}233^{^{\\prime }}}]p_1 (t).$ From the two Michelson combinations given by Eqs.", "(REF ) and (REF ), it is observed that the residuals can always be written as commutators composed of the products of two components $D_{33^{^{\\prime }}}$ and $D_{2^{^{\\prime }}2}$ .", "This indicates that one may construct TDI solutions from specific commutators satisfying some rules.", "Indeed, modified second-generation Michelson-type combinations of higher orders can be obtained from the following residuals: $[D_{33^{^{\\prime }}2^{^{\\prime }}22^{^{\\prime }}233^{^{\\prime }}},D_{2^{^{\\prime }}233^{^{\\prime }}33^{^{\\prime }}2^{^{\\prime }}2}], \\nonumber $ $[D_{33^{^{\\prime }}33^{^{\\prime }}2^{^{\\prime }}22^{^{\\prime }}2},D_{2^{^{\\prime }}22^{^{\\prime }}233^{^{\\prime }}33^{^{\\prime }}}], \\nonumber $ and $[D_{33^{^{\\prime }}2^{^{\\prime }}233^{^{\\prime }}2^{^{\\prime }}2},D_{2^{^{\\prime }}233^{^{\\prime }}2^{^{\\prime }}233^{^{\\prime }}}].", "\\nonumber $ A systematic approach based on the above speculations was first proposed in [29].", "It consists of two crucial building blocks.", "First, one identifies a specific type of commutator corresponding to vanishing residual laser noise.", "Second, these commutators can be mapped to the TDI solution characterized by the corresponding coefficients $q_{i (i^{\\prime })}$ .", "The first point will be addressed shortly, and Sec.", "REF will be devoted to the second point.", "In [31], the following relation was established ${\\begin{array}{c}{\\left[D_{x_{1} x_{2} \\ldots x_{n}}, D_{y_{1} y_{2} \\ldots y_{n}}\\right] \\phi (t)}=\\left(\\sum _{k=1}^{n} L_{x_{k}} \\sum _{m=1}^{n} \\dot{L}_{y_{m}}-\\sum _{m=1}^{n} L_{y_{m}} \\sum _{k=1}^{n} \\dot{L}_{x_{k}}\\right)\\dot{\\phi }\\left(t-\\sum _{k=1}^{n} L_{x_{k}}-\\sum _{m=1}^{n} L_{y_{m}}\\right).\\end{array}}$ For the lower order cases, it can be readily verified using Eqs.", "(REF )-(REF ).", "A general proof of Eq.", "(REF ) is relegated to Appendix .", "Apparently, the first factor on the r.h.s.", "of Eq.", "(REF ) vanishes as long as $y_i=x_{\\pi (i)} .$ where $\\pi \\in \\mathcal {S}_n$ is an arbitrary element of the permutation group of degree $n$ .", "In other words, Eq.", "(REF ) vanishes for a specific type of commutator characterized by a given number of indices and specific permutations.", "Moreover, if one can show that, under certain circumstances, such commutator can be recognized as, for instance, the l.h.s.", "of Eq.", "(REF ), it furnishes a valid TDI solution.", "The latter proposition will be explored in the following subsection.", "For the remainder of the present subsection, however, we elaborate further on the converse proposition by revisiting the Michelson combinations Eq.", "(REF ) discussed above.", "As a matter of fact, this class of solutions has been extensively explored in [29], referred to as one-arm dysfunctional ones.", "Without loss of generality, one assumes that the communications through the armlength connecting SC2 and SC3 are interrupted.", "In other words, we have $\\eta _{2}=\\eta _{{3}^{^{\\prime }}}=0$ , or equivalently, $q_{2}=q_{3^{^{\\prime }}}=0 .$ Substituting Eq.", "(REF ) into Eq.", "(REF ), we have $&q_{1}+q_{1^{\\prime }}-q_{2^{\\prime }} 
{D}_{3^{\\prime }}-q_{3} {D}_{2}=0, \\\\&q_{2^{\\prime }}-q_{1} {D}_{3}=0, \\\\&q_{3}-q_{1^{\\prime }} {D}_{2^{\\prime }}=0 .$ Eliminating Eqs.", "() and () by substituting the forms of $q_{2^{\\prime }}$ and $q_3$ into Eq.", "(REF ), one finds the desired equation $q_{1}(1-D_{33^{^{\\prime }}})+q_{1^{^{\\prime }}}(1-D_{2^{^{\\prime }}2})=0.$ We note that the above elimination process algebraically ensures that the residual laser noise is entirely governed by $p_{1}$ .", "If one had chosen to eliminate two other equations, the residual would have been determined by other laser noise.", "The difference, however, is irrelevant from a modified second-generation TDI perspective.", "Also, by adding distinct solutions, one may construct an optimal solution that minimizes a specific target quantity.", "By observing the specific form of Eq.", "(REF ), it is also apparent that $D_{33^{^{\\prime }}}$ and $D_{2^{^{\\prime }}2}$ are the elementary units to furnish any solution.", "For convenience, we denote $a=D_{33^{^{\\prime }}}$ and $b=D_{2^{^{\\prime }}2}$ , and Eq.", "(REF ) simplifies to read $q_{1}(1-a)+q_{1^{^{\\prime }}}(1-b)=0.$ Also, the solution Eq.", "(REF ) corresponds to the coefficients $q_{1}$ and $q_{1^{^{\\prime }}}$ $\\begin{aligned}q_{1} &= 1-b-ba+ab^2,\\\\q_{1^{^{\\prime }}} &= - (1-a-ab+ba^2) ,\\end{aligned}$ and the residual Eq.", "(REF ) reads $[ba, ab]p_1(t).$ Following [29], one can interpret the above residual as that of a cancelation taking place between the two terms on the l.h.s.", "of Eq.", "(REF ) in an order-by-order fashion.", "In other words, this motivated a procedure regarding how each term of the two expressions given by Eqs.", "(REF ) is written down.", "As all the lower order terms are canceled out identically, the procedure continues until the last two remaining terms of the highest order coincidently give rise to a commutator, which possesses the form of the l.h.s.", "of Eq.", "(REF ) while satisfying Eq.", "(REF ).", "Subsequently, the commutator vanishes, reassuring that Eqs.", "(REF ) or (REF ) is indeed a valid TDI solution.", "Dhurandhar et al.", "introduced an interchange operator $\\mathcal {I}$ between the symbols $a$ and $b$  [29].", "To be specific, for a given monomial $s$ composed of $a$ and $b$ , the string $t = \\mathcal {I}(s)$ is obtained by replacing $a$ with $b$ , and meanwhile substituting original $b$ by $a$ , in $s$ .", "For instance, $\\mathcal {I}(abba) = baab$ .", "The authors pointed out that for the one-arm dysfunctional case, namely, the Michelson combinations, the solutions possess the form $[s, \\mathcal {I}(s)]$ .", "Moreover, in accordance with the l.h.s.", "of Eq.", "(REF ), if one has $n$ instances of $a$ in the string $s$ , the count of $b$ 's must be the same.", "Therefore, the length of the string $s$ is $2n$ , and the commutator consists of strings of length $4n$ ." 
], [ "TDI solutions derived from the commutators", "The discussions elaborated by the end of the last subsection indicate that one may derive TDI solutions by exhaustively enumerating all possible commutators.", "To be specific, for $n=1$ , the only commutator is $[ab, ba]$ , which corresponds to Eq.", "(REF ).", "For $n=2$ , one has three options: $[a^{2}b^{2},b^{2}a^{2}]$ , $[abab,baba]$ , and $[ab^2a,ba^2b]$ .", "For an arbitrary integer number $n$ , there are $\\frac{(2n)!}{2n!n!", "}$ relevant combinations of the form $\\Delta =[s_{2n},\\mathcal {I}(s_{2n})]=s_{2n}\\mathcal {I}(s_{2n})-\\mathcal {I}(s_{2n})s_{2n},$ where $s_{2n}$ is an monomial comprised of $n$ instances of $a$ and $b$ .", "Also, $q_{1^{\\prime }} = -\\mathcal {I}(q_{1})$ , which implies that we only need the explicit form of $q_{1}$ for the solution of Eq.", "(REF ).", "The remaining coefficients $q_{3}, q_{2^{^{\\prime }}}$ are subsequently obtained by Eqs.", "() and ().", "Based on the above arguments, an algorithm was proposed [29] to derive TDI solutions for the one-arm dysfunctional case systematically.", "The idea of the algorithm is first to enumerate all possible commutators in question, then deduce the polynomial coefficients $q_{i (i^{\\prime })}$ of the TDI combinations from a given commutator.", "The latter process is reiterated below, illustrated by the control-flow diagram Fig.", "REF .", "Consider the first term in Eq.", "(REF ), $t_{4n}\\equiv s_{2n} \\mathcal {I}(s_{2n})$ , it is a monomial of degree $4n$ .", "It is noted that $t_{4n}$ ends in either $a$ or $b$ , that is, $t_{4n}=t_{4n-1}a$ or $t_{4n}=t_{4n-1}b$ .", "If $t_{4n}=t_{4n-1}a$ , let $t_{4n-1}=t_{4n-1}$ ; If $t_{4n}=t_{4n-1}b$ , let $t_{4n-1}=-\\mathcal {I}(t_{4n-1})$ .", "Repeat the above procedure for $t_{4n-1}$ until the degree of the monomial vanishes, namely, $t_{0}$ .", "We note that $t_{0}=\\pm 1$ .", "The TDI coefficients are obtained by summing up $4n$ monomials: $q_{1}=\\sum _{k=0}^{4n-1} t_{k}$ and $q_{1^{^{\\prime }}}=-\\mathcal {I}(q_{1})$ .", "As an example, for the case of $[ab,ba]$ which $n=1$ .", "The first term is $t_{4}=ab^{2}a$ , then we have $t_{3}=ab^2$ , $t_{2}=-\\mathcal {I}(ab)=-ba$ , $t_{1}=-b$ , and $t_{0}=1$ .", "It is readily verified that Eq.", "(REF ) is retrieved by summing up the above 4 monomials.", "Figure: The control-flow diagram of the algorithm proposed in  for Michelson TDI combinations." 
], [ "The extended combinatorial algebraic algorithm", "In the last section, we revisited an existing algorithm dedicated to deriving the modified second-generation Michelson TDI combinations using a specific type of commutator.", "The present section extends the above approach to other classes of TDI solutions.", "The generalization is essentially based on the following considerations: The algorithm proposed in [29] does not depend on the specific forms of $a$ and $b$ , and it is viable as long as one encounters an equation of the form Eq.", "(REF ).", "Algebraically, to derive something similar to Eq.", "(REF ), one only needs to introduce two additional constraints (e.g.", "Eq.", "(REF )) into Eq.", "(REF ).", "The introduction of the interchange operation $\\mathcal {I}$ is not an obligation.", "One only needs the condition Eq.", "(REF ) for commutator Eq.", "(REF ) to vanish.", "The number of variables of the non-commutative ring ${R}$ can be expanded to include time-advance operators, which leads to a generalized commutation relation.", "These points give rise to an extended version of the algorithm, which can be used to derive TDI solutions of other classes, such as Beacon, Relay, Monitor, Sagnac, and full symmetric Sagnac ones.", "Moreover, one encounters novel Sagnac-inspired solutions that cannot be straightforwardly derived using the geometric TDI.", "In the remainder of the present section, we will elaborate on the general algorithm.", "In particular, the generalized communtation relation will be discussed in Sec.", "REF .", "Explicit examples will be explored in Sec.", "." ], [ "The generalized commutator", "By observing other classes of TDI combinations well-known in the literature, it is noted that the polynomial coefficients often contain time-advance operators $D_{\\bar{i} (\\bar{i^{\\prime }})}$ .", "They are the inverse of the corresponding time-delay operators, satisfying $D_{\\bar{i}} D_{i} = D_{i} D_{\\bar{i}} = {I} ,$ where ${I}$ is the identity operator.", "When a time-advance operator is applied to a time-dependent variable $\\phi (t)$ , Eq.", "(REF ) implies, to the first order, $D_{\\bar{i}} \\phi (t) = \\phi (t+L_{i}(t+L_{i})) ,$ which gives $\\begin{aligned}{D}_{\\overline{i}} \\phi (t) & \\simeq \\phi \\left({t}+{L}_{{i}}\\right)+\\dot{\\phi }\\left(t+L_{i}\\right)\\left(t+L_{i}\\right) \\dot{L}_{i} \\\\&=\\phi \\left({t}+{L}_{{i}}\\right)+\\dot{\\phi }\\left(t+L_{i}\\right) t \\dot{L}_{i}+\\dot{\\phi }\\left(t+L_{i}\\right) L_{i} \\dot{L}_{i}.\\end{aligned}$ By comparing Eq.", "(REF ) against Eq.", "(REF ), one observes that in the argument of the first and second terms, there is an extra $(-1)$ factor for $L_{i}$ and $\\dot{L}_{i}$ .", "Besides, the last term on the second line of Eq.", "(REF ) gives rise to a novel contribution.", "Now, we proceed to show that Eq.", "(REF ) continues to be valid when the time-advance operator is included.", "This can be done by appropriately modifying the proof given in Appendix , as follows.", "We assume that some operators in the l.h.s.", "of Eq.", "(REF ) are time-advance ones and introduce modifications to the existing derivations.", "Without loss of generality, let us assume there are $l$ instances of time-advance operators in $D_{x_{1} x_{2} \\dots x_{n}}$ and $m$ instances in $D_{y_{1} y_{2} \\dots y_{n}}$ .", "To be specific, we denote the subscripts corresponding to these operators by $\\lambda _i$ with ($i=1, \\cdots , l$ ) and $\\gamma _j$ with ($j=1, \\cdots , m$ ).", "Also, the additional $(-1)$ factors in 
Eq.", "(REF ) can be accounted for by the factor $\\begin{aligned}\\delta x_k &= \\left\\lbrace \\begin{matrix}-1&&\\mathrm {if}\\ k=\\lambda _i\\ \\mathrm {for\\ any}\\ i=1,\\cdots ,l\\\\+1&&\\mathrm {otherwise}\\end{matrix} \\right.", ",\\\\\\delta y_{k^{\\prime }} &= \\left\\lbrace \\begin{matrix}-1&&\\mathrm {if}\\ k^{\\prime }=\\gamma _j\\ \\mathrm {for\\ any}\\ j=1,\\cdots ,m\\\\+1&&\\mathrm {otherwise}\\end{matrix} \\right.", ".\\end{aligned}$ By substituting Eq.", "(REF ) into $\\left[D_{x_{1} x_{2} \\ldots x_{n}}, D_{y_{1} y_{2} \\ldots y_{n}}\\right]$ , the novel contributions owing to the third term in Eq.", "(REF ) read $\\begin{aligned}\\left(\\sum _{i=1}^{l} \\dot{L}_{x_{\\lambda _i}} L_{x_{\\lambda _i}}+\\sum _{j=1}^{m} \\dot{L}_{y_{\\gamma _j}} L_{y_{\\gamma _j}}-\\sum _{i=1}^{l} \\dot{L}_{x_{\\lambda _i}} L_{x_{\\lambda _i}}-\\sum _{j=1}^{m} \\dot{L}_{y_{\\gamma _j}} L_{y_{\\gamma _j}}\\right)\\dot{\\phi }\\left(t-\\sum _{k=1}^{n} \\delta _{x_{k}}L_{x_{k}}-\\sum _{k^{\\prime }=1}^{n} \\delta _{y_{k^{\\prime }}}L_{y_{k^{\\prime }}}\\right)=0 ,\\end{aligned}$ which manifestly vanishes.", "Therefore, by putting all the pieces together, we arrive at the following generalized form of Eq.", "(REF ) ${\\begin{array}{c}{\\left[D_{x_{1} x_{2} \\ldots x_{n}}, D_{y_{1} y_{2} \\ldots y_{n}}\\right] \\phi (t)}=\\left(\\sum _{k=1}^{n} \\delta _{x_{k}} L_{x_{k}} \\sum _{k^{^{\\prime }}=1}^{n} \\delta _{y_{k^{^{\\prime }}}}\\dot{L}_{y_{k^{^{\\prime }}}}-\\sum _{k^{^{\\prime }}=1}^{n} \\delta _{y_{k^{^{\\prime }}}} L_{y_{k^{^{\\prime }}}} \\sum _{k=1}^{n} \\delta _{x_{k}} \\dot{L}_{x_{k}}\\right)\\dot{\\phi }\\left(t-\\sum _{k=1}^{n} \\delta _{x_{k}}L_{x_{k}}-\\sum _{k^{\\prime }=1}^{n} \\delta _{y_{k^{\\prime }}}L_{y_{k^{\\prime }}}\\right) .\\end{array}}$ It is now apparent that the vanishing condition Eq.", "(REF ) remains unchanged when the time-advance operators are considered.", "By using a broadened choice of variables provided by the generalized commutation relation Eq.", "(REF ), in the following subsection, we elaborate on an extended combinatorial algebraic approach for the modified second-generation TDI." 
], [ "The extended combinatorial algorithm", "We start by discussing an explicit example not embraced by the original algorithm.", "To be specific, we consider alternative Michelson combination [16], [15] given by $\\begin{aligned}{X}_{2}(t) &=\\left[\\eta _{1}+D_{3} \\eta _{2^{\\prime }}+D_{33^{\\prime }} \\eta _{1^{\\prime }}+D_{33^{\\prime } 2^{\\prime }} \\eta _{3}+D_{33^{\\prime } 2^{\\prime } 2} \\eta _{1}\\right.\\\\&\\left.+D_{33^{\\prime } 2^{\\prime } 23} \\eta _{2^{\\prime }}-D_{33^{\\prime } 2^{\\prime } 233^{\\prime } \\overline{2}} \\eta _{3}-D_{33^{\\prime } 2^{\\prime } 233^{\\prime } \\bar{2} \\bar{2}^{\\prime }} \\eta _{1^{\\prime }}\\right] \\\\&-\\left[\\eta _{1^{\\prime }}+D_{2^{\\prime }} \\eta _{3}+D_{2^{\\prime } 2} \\eta _{1}+D_{2^{\\prime } 23} \\eta _{2^{\\prime }}-D_{2^{\\prime } 233^{\\prime } \\overline{2}} \\eta _{3}\\right.\\\\&\\left.-D_{2^{\\prime } 233^{\\prime } \\overline{2} \\overline{2}^{\\prime }} \\eta _{1^{\\prime }}+D_{2^{\\prime } 233^{\\prime } \\overline{2} \\overline{2}^{\\prime }} \\eta _{1}+D_{2^{\\prime } 233^{\\prime } \\overline{2} \\overline{2}^{\\prime } 3} \\eta _{2^{\\prime }}\\right].\\end{aligned}$ As pointed out by some authors, the above combination occupies a shorter space in the time domain, which can effectively avoid the influence of instrument gaps and glitches.", "Moreover, in the high-frequency region, its gravitational wave response and noise power spectral density also display better performance, as discussed in [16].", "The coefficients $q_{1}$ and $q_{1^{^{\\prime }}}$ of the above solution can be extracted and found to be $\\begin{aligned}q_{1} &= 1-b+ab-ba\\bar{b},\\\\q_{1^{^{\\prime }}} &= - (1-a-ba\\bar{b}+aba\\bar{b}).\\end{aligned}$ By substituting the solution into Eq.", "(REF ), the residual is found to be $\\Delta =[ba\\bar{b}, a]$ .", "At first glance, it seems that the form of the residual does not satisfy the condition Eq.", "(REF ), as the two terms in the commutator do not even have the same length.", "They are, nonetheless, indeed related by a permutation by making use of Eq.", "(REF ) and, therefore $\\Delta = [ba\\bar{b}, a]=[ba\\bar{b}, a\\bar{b}b] .$ Moreover, following the train of thought of the combinatorial approach discussed in the previous section, one can explicitly show how the residual Eq.", "(REF ) is related to the equation Eq.", "(REF ).", "This is achieved by successively subtracting out terms proportional to the factor $(1-a)$ or $(1-b)$ until the last trailing “1”, which gives $ba\\bar{b}a=-ba\\bar{b}(1-a)+ba\\bar{b}(1-b)-b(1-a)-(1-b)+1,$ and $aba\\bar{b}=aba\\bar{b}(1-b)-ab(1-a)-a(1-b)-(1-a)+1 .$ Subsequently, one has $\\Delta =(1-b+ab-ba\\bar{b})(1-a)-(1-a-ba\\bar{b}+aba\\bar{b})(1-b) ,$ which essentially indicates that the coefficients of Eq.", "(REF ) are given by Eq.", "(REF ).", "It is not difficult to observe that the above derivations are generally applicable to solving the TDI equation of the following form $\\alpha (1-a)+\\beta (1-b)=0 ,$ where the forms of $a$ and $b$ are not specified.", "Its valid solution is associated with a commutator of a rather arbitrary form which is composed of two monomials of time-delay and time-advance operators $\\Delta = \\left[s_{n}, \\bar{s}_{n}\\right]\\equiv t_{2n} - \\bar{t}_{2n},$ once it satisfies the condition Eq.", "(REF ).", "Also, as demonstrated by the above example, the monomials in question do not necessarily have the same counts of $a$ s and $b$ s. 
Naturally, the two terms in the commutator are not obliged to be related by the interchange operation $\\mathcal {I}$ .", "For a given commutator, the extended combinatorial algorithm is summarized as follows, which is also illustrated by the control-flow diagram Fig.", "REF .", "Initialize $\\alpha =0$ and $\\beta =0$ .", "Consider the first term on the r.h.s.", "of Eq.", "(REF ), $t_{2n}$ , which is a monomial of degree $2n$ .", "It is noted that $t_{2n}$ ends in either $a$ , $\\bar{a}$ , $b$ , or $\\bar{b}$ , namely, $t_{2n}=t_{2n-1}a$ , $t_{2n}=t_{2n-1}\\bar{a}$ , $t_{2n}=t_{2n-1}b$ , or $t_{2n}=t_{2n-1}\\bar{b}$ .", "If $t_{2n}=t_{2n-1}a$ , let $\\alpha =\\alpha -t_{2n-1}$ ; If $t_{2n}=t_{2n-1}\\bar{a}$ , let $\\alpha =\\alpha +t_{2n-1}\\bar{a}$ ; If $t_{2n}=t_{2n-1}b$ , let $\\beta =\\beta -t_{2n-1}$ ; If $t_{2n}=t_{2n-1}\\bar{b}$ , let $\\beta =\\beta +t_{2n-1}\\bar{b}$ ; Repeat the above procedure 2 for $t_{2n-1}$ until the degree of the monomial vanishes, namely, $t_{0}$ .", "We note that $t_{0}=1$ .", "Perform steps similar to 2 and 3 for $\\bar{t}_{2n}$ : If $\\bar{t}_{2n}=\\bar{t}_{2n-1}a$ , let $\\alpha =\\alpha +\\bar{t}_{2n-1}$ ; If $\\bar{t}_{2n}=\\bar{t}_{2n-1}\\bar{a}$ , let $\\alpha =\\alpha -\\bar{t}_{2n-1}\\bar{a}$ ; If $\\bar{t}_{2n}=\\bar{t}_{2n-1}b$ , let $\\beta =\\beta +\\bar{t}_{2n-1}$ ; If $\\bar{t}_{2n}=\\bar{t}_{2n-1}\\bar{b}$ , let $\\beta =\\beta -\\bar{t}_{2n-1}\\bar{b}$ ; repeat until the degree of the monomial vanishes.", "It is noted that $\\bar{t}_{0}=1$ .", "Both $t_{2n}-t_0$ and $\\bar{t}_{2n}-\\bar{t}_0$ can be rewritten as a summation of multipliers of either $(1-a)$ or $(1-b)$ .", "Therefore, the TDI coefficients of Eq.", "(REF ) are the resulting $\\alpha $ and $\\beta $ .", "As an example, for the case of Eq.", "(REF ), we have $t_{4}=ba\\bar{b}a$ and $\\bar{t}_{4}=aba\\bar{b}$ .", "Subsequently, we have $t_{3}=ba\\bar{b}$ , $t_{2}=ba$ , $t_{1}=b$ , $t_{0}=1$ ; $\\bar{t}_3=aba$ , $\\bar{t}_2=ab$ , $\\bar{t}_1=a$ , and $\\bar{t}_0=1$ .", "It follows that $\\alpha =-t_{3}-t_1+\\bar{t}_2+\\bar{t}_0=-ba\\bar{b}+ab-b+1$ and $\\beta =t_2\\bar{b}-t_0-\\bar{t}_3\\bar{b}+\\bar{t}_1=-aba\\bar{b}+ba\\bar{b}+a-1$ , consistent with Eq.", "(REF ).", "Figure: The control-flow diagram of the generalized algorithm proposed in the present study." ], [ "Applications to TDI combinations", "In this section, we explore the applications of the extended combinatorial approach developed in the last section.", "We elaborate on the Monitor-type TDI solutions in the main text and relegate the remaining results on other TDI solutions to Appendix .", "In particular, we present a novel class of Sagnac-inspired combinations, which cannot be straightforwardly obtained using the geometric TDI.", "Moreover, in Appendix , we give explicit forms of the relevant lower-order commutators in an exhaustive fashion that furnish the TDI solutions investigated in the present study.", "Appendix  is devoted to discussing how to construct specific higher-order TDI solutions based on existing lower-order ones."
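A direct transcription of the four peeling rules, in the same spirit as the sketch given for the Michelson case (again our own illustration; lowercase a, b denote the delays and uppercase A, B the corresponding advances $\bar{a}$ , $\bar{b}$ ), reproduces the worked example above:

```python
def peel(t, sign):
    """Apply the peeling rules to one monomial of the commutator.
    Returns the signed monomials contributed to alpha and to beta."""
    alpha, beta = [], []
    while t:
        last, pre = t[-1], t[:-1]
        if last == "a":
            alpha.append((-sign, pre if pre else "1"))   # alpha -= t_{k-1}
        elif last == "A":
            alpha.append((+sign, t))                     # alpha += t_{k-1} a-bar
        elif last == "b":
            beta.append((-sign, pre if pre else "1"))    # beta  -= t_{k-1}
        else:                                            # last == "B"
            beta.append((+sign, t))                      # beta  += t_{k-1} b-bar
        t = pre
    return alpha, beta

# Commutator of Eq. (REF): t_4 = b a b-bar a and t-bar_4 = a b a b-bar
a1, b1 = peel("baBa", +1)
a2, b2 = peel("abaB", -1)    # the second monomial enters with the opposite sign
print(a1 + a2)   # [(-1, 'baB'), (-1, 'b'), (1, 'ab'), (1, '1')]
print(b1 + b2)   # [(1, 'baB'), (-1, '1'), (-1, 'abaB'), (1, 'a')]
# i.e. alpha = 1 - b + ab - b a b-bar, beta = -(1 - a - b a b-bar + a b a b-bar)
```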
], [ "The Monitor-type combinations", "From the specific form of the modified first-generation and modified second-generation Monitor-E combinations [14], namely, $\\begin{aligned}E_{1} (t) &= [(1-D_{311^{^{\\prime }}\\bar{3}})\\eta _{1}+D_{3}\\eta _{2} + D_{31}\\eta _{{3}^{^{\\prime }}}]\\\\&-[(1-D_{2^{^{\\prime }}1^{^{\\prime }}1\\bar{2}^{^{\\prime }}})\\eta _{1^{^{\\prime }}}+D_{2^{^{\\prime }}}\\eta _{3^{^{\\prime }}}+D_{2^{^{\\prime }}1^{^{\\prime }}}\\eta _{2}],\\end{aligned}$ and $\\begin{aligned}E_{2} (t) &= (1-D_{2^{^{\\prime }}1^{^{\\prime }}1\\bar{2}^{^{\\prime }}})[(1-D_{311^{^{\\prime }}\\bar{3}})\\eta _{1}+D_{3}\\eta _{2} + D_{31}\\eta _{{3}^{^{\\prime }}}]\\\\&-(1-D_{311^{^{\\prime }}\\bar{3}})[(1-D_{2^{^{\\prime }}1^{^{\\prime }}1\\bar{2}^{^{\\prime }}})\\eta _{1^{^{\\prime }}}+D_{2^{^{\\prime }}}\\eta _{3^{^{\\prime }}}+D_{2^{^{\\prime }}1^{^{\\prime }}}\\eta _{2}] ,\\end{aligned}$ one observes the following constraint equations $q_{3}=0,q_{2^{^{\\prime }}}=0.$ Substituting Eq.", "(REF ) into Eq.", "(REF ), one finds $&q_{1}+q_{1^{^{\\prime }}}=0,\\\\&q_{2}-q_{3^{^{\\prime }}}D_{1^{^{\\prime }}}-q_{1}D_{3}=0,\\\\&q_{3^{^{\\prime }}}-q_{1^{^{\\prime }}}D_{2^{^{\\prime }}}-q_{2}D_{1}=0.$ Eliminating Eqs.", "() and () by substituting the forms of $q_{1}$ and $q_{1^{\\prime }}$ into Eq.", "(REF ), one finds the desired equation $q_{2}(D_{\\bar{3}}-D_{1\\bar{2}^{^{\\prime }}})+q_{3^{^{\\prime }}}(D_{\\bar{2}^{^{\\prime }}}-D_{1^{^{\\prime }}\\bar{3}})=0.$ The above equation is essentially Eq.", "(REF ) by recognizing $\\alpha =q_{2}D_{\\bar{3}}, \\beta =q_{3^{^{\\prime }}}D_{\\bar{2}^{^{\\prime }}}$ , $a=D_{31\\bar{2}^{^{\\prime }}}$ , and $b=D_{2^{^{\\prime }}1^{^{\\prime }}\\bar{3}}$ .", "To proceed, the algorithm proposed in the Sec.", "REF can be applied to extract the TDI coefficients of the Monitor-type combination for a given valid commutator consisting of $a$ , $b$ , $\\bar{a}$ , $\\bar{b}$ .", "For instance, for the commutator $[ba,ab]=[D_{2^{^{\\prime }}1^{^{\\prime }}1\\bar{2}^{^{\\prime }}},D_{311^{^{\\prime }}\\bar{3}}]$ , the corresponding TDI solution expressed in terms of the coefficients $q_2, q_{3^{^{\\prime }}}$ reads $\\begin{aligned}q_{2} &= (1-D_{2^{^{\\prime }}1^{^{\\prime }}\\bar{3}}-D_{2^{^{\\prime }}1^{^{\\prime }}1\\bar{2}^{^{\\prime }}}+D_{311^{^{\\prime }}\\bar{3}2^{^{\\prime }}1^{^{\\prime }}\\bar{3}})D_{3},\\\\q_{3^{^{\\prime }}} &= -(1-D_{31\\bar{2}^{^{\\prime }}}-D_{311^{^{\\prime }}\\bar{3}}+D_{2^{^{\\prime }}1^{^{\\prime }}1\\bar{2}^{^{\\prime }}31\\bar{2}^{^{\\prime }}})D_{2^{^{\\prime }}} ,\\end{aligned}$ which is the modified second-generation standard Monitor-E combination.", "As a second example, consider $[ba\\bar{b},a]\\bar{a}$ , the corresponding TDI solution is found to be $\\begin{aligned}q_{2}&=\\alpha D_{3} =(1-D_{2^{^{\\prime }}1^{^{\\prime }}\\bar{3}}+D_{311^{^{\\prime }}\\bar{3}}-D_{311^{\\prime }1\\bar{2}^{\\prime }3\\bar{1}^{\\prime }\\bar{1}\\bar{3}})D_{3},\\\\q_{3^{^{\\prime }}} &= \\beta D_{2^{^{\\prime }}} = -(1-D_{31\\bar{2}^{^{\\prime }}}-D_{2^{^{\\prime }}1^{^{\\prime }}1\\bar{2}^{^{\\prime }}3\\bar{1}^{^{\\prime }}\\bar{2}^{^{\\prime }}}+D_{311^{^{\\prime }}1\\bar{2}^{^{\\prime }}3\\bar{1}^{^{\\prime }}\\bar{2}^{^{\\prime }}})D_{2^{^{\\prime }}} .\\end{aligned}$ Substitute Eqs.", "(REF ) into Eq.", "(REF ) and Eq.", "() for the remaining coefficients $q_{1}$ and $q_{1^{^{\\prime }}}$ , one finds $\\begin{aligned}{E}({t})&=\\left(1-{D}_{2^{\\prime } 1^{\\prime } 1 \\overline{2}^{\\prime }}+{D}_{311^{\\prime } 1 
\\overline{2}^{\\prime }}-{D}_{{2}^{\\prime } 1^{\\prime } 1 \\overline{2}^{\\prime } 3 \\overline{1}^{\\prime } \\overline{2}^{\\prime }}\\right) \\eta _{1}\\\\&+\\left(1-D_{2^{^{\\prime }}1^{^{\\prime }}\\bar{3}}+D_{311^{^{\\prime }}\\bar{3}}-D_{311^{\\prime }1\\bar{2}^{\\prime }3\\bar{1}^{\\prime }\\bar{1}\\bar{3}}\\right) D_{3} \\eta _{2}\\\\&-\\left(1-{D}_{2^{\\prime } 1^{\\prime } 1 \\overline{2}^{\\prime }}+{D}_{311^{\\prime } 1 \\overline{2}^{\\prime }}-{D}_{2^{\\prime } 1^{\\prime } 1 \\overline{2}^{\\prime } 3 \\overline{1}^{\\prime } \\overline{2}^{\\prime }}\\right) \\eta _{1^{\\prime }}\\\\&-\\left(1-D_{31\\bar{2}^{^{\\prime }}}-D_{2^{^{\\prime }}1^{^{\\prime }}1\\bar{2}^{^{\\prime }}3\\bar{1}^{^{\\prime }}\\bar{2}^{^{\\prime }}}+D_{311^{^{\\prime }}1\\bar{2}^{^{\\prime }}3\\bar{1}^{^{\\prime }}\\bar{2}^{^{\\prime }}}\\right) D_{2^{\\prime }} \\eta _{3^{\\prime }} ,\\end{aligned}$ which is recognized as the alternative Monitor-E combination (c.f.", "Fig.", "9(b) of Ref. [15]).", "The same procedure can be carried out for other commutator such as $[ab^2,bab]$ , $[a\\bar{b},\\bar{b}a]$ , and $[aab,aba]$ , which subsequently gives rise to a class of (higher-order, from the geometric TDI perspective) Monitor-E combinations.", "Moreover, by introducing cyclic permutations between the indices: $3\\rightarrow 2,2^{^{\\prime }}\\rightarrow 1^{^{\\prime }}$ and $3\\rightarrow 1,2^{^{\\prime }}\\rightarrow 3^{^{\\prime }}$ , one obtains Monitor-F-type and Monitor-G combinations.", "We refer to the above solutions obtained by considering the constraints Eq.", "(REF ) as Monitor-type solutions.", "We point out that this strategy can be utilized to derive other types of TDI combinations, such as the Beacon, Relay, and Sagnac ones.", "We relegate the derivations to Appendix  of the paper.", "In Tab.", "REF , we summarize the relevant types of TDI solutions that can be dealt with using the present approach and the associated constraint equations.", "Moreover, in Appendix , we enumerate the explicit forms of the lower-order commutators, which furnish the TDI solutions.", "Table: The TDI combinations and the associated constraints that are adopted systematically by the proposed algorithm.Before closing this subsection, we compared the derived TDI solutions with those obtained using the geometric TDI approach.", "In Ref.", "[15], a total of nine sixteen-link modified second-generation TDI combinations were reported.", "All these solutions can be retrieved using the present method, and the associated commutator and link trajectory are enumerated in Tab.", "REF .", "Table: The relevant commutators of the nine sixteen-link modified second-generation TDI combinations.The superscript of a TDI combination indicates the number of links, and the subscript represents the index of the combination for a specific solution family ." 
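At the implementation level, passing from the abstract symbols back to explicit delay strings only requires concatenating index lists and cancelling adjacent inverse pairs via $D_{\bar{i}} D_{i} = D_{i} D_{\bar{i}} = {I}$ . A small sketch (our own notation: a trailing tilde marks a time-advance operator) reproduces the identification used above for the commutator $[ba,ab]$ :

```python
def inverse(tok):
    """Invert one index: D_{i-bar} <-> D_i (a trailing ~ marks an advance)."""
    return tok[:-1] if tok.endswith("~") else tok + "~"

def compose(*words):
    """Concatenate operator index strings, cancelling adjacent inverse pairs."""
    out = []
    for tok in (t for w in words for t in w):
        if out and out[-1] == inverse(tok):
            out.pop()
        else:
            out.append(tok)
    return out

# Monitor-type identification: a = D_{3 1 2'bar}, b = D_{2' 1' 3bar}
a = ["3", "1", "2'~"]
b = ["2'", "1'", "3~"]
print(compose(b, a))   # ["2'", "1'", "1", "2'~"]  ->  b a = D_{2'1'1 2'bar}
print(compose(a, b))   # ["3", "1", "1'", "3~"]    ->  a b = D_{3 1 1' 3bar}
```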
], [ "The fully symmetric Sagnac-type combinations", "Unlike other, the fully symmetric Sagnac combinations constitute a unique family of solutions.", "Their distinctive nature partly resides in the fact that these solutions do not often possess a straightforward geometric TDI interpretation.", "In the last two subsections, we focus on these peculiar solutions.", "The present subsection will be devoted to retrieving the existing fully symmetric TDI combinations already explored in the literature.", "In the following subsection, we elaborate on some novel Sagnac-inspired solutions that have not been reported.", "Again, we first discuss the constraint equations through which the appropriate equation Eq.", "(REF ) can be established.", "The available TDI solutions of fully symmetric Sagnac-type are then shown to be consistent with the derived equation.", "Subsequently, the algorithm is employed straightforwardly to solve for modified second-generation solutions.", "The first-generation fully symmetric Sagnac solution is given by $\\zeta _{1}({t})={D}_{1} \\eta _{1}+D_{2} \\eta _{2}+D_{3} \\eta _{3}-D_{1^{\\prime }} \\eta _{1^{\\prime }}-D_{2^{\\prime }} \\eta _{2^{\\prime }}-D_{3^{\\prime }} \\eta _{3^{\\prime }},$ for which the residual laser frequency noise reads $\\Sigma _{i}(D_{i^{^{\\prime }}}-D_{i}+D_{(i-1)(i+1)}-D_{(i+1)^{^{\\prime }}(i-1)^{^{\\prime }}})p_{i}(t) .$ The modified first-generation fully symmetric Sagnac solution possesses the form [14] $\\begin{aligned}\\zeta _{2} (t) &= (D_{2^{^{\\prime }}3^{^{\\prime }}}-D_{1})(D_{3}\\eta _{3}-D_{3}\\eta _{3^{\\prime }}+D_{1^{^{\\prime }}}\\eta _{1})\\\\&-(D_{32}-D_{1^{^{\\prime }}})(D_{1}\\eta _{1^{^{\\prime }}}-D_{2^{^{\\prime }}}\\eta _{2}+D_{2^{^{\\prime }}}\\eta _{2^{^{\\prime }}}),\\end{aligned}$ where the residual laser frequency noise is found to be $[D_{2^{^{\\prime }}3^{^{\\prime }}}-D_{1},D_{32}-D_{1^{^{\\prime }}}]p_{1}(t) .$ It is noted that Eq.", "(REF ) is in accordance with the assertion that an arbitrary commutator between different delay operators vanishes, namely, $[D_{i(i^{^{\\prime }})}, D_{j(j^{^{\\prime }})}]=0 .$ It was pointed out in [14] that although the residual Eq.", "(REF ) does not vanish regarding the terms $\\dot{L}$ , the first-order derivative in time, the noise has been suppressed below the desired level.", "In other words, for a second- or higher-generation TDI solution, one would demand the more stringent condition imposed by Eq.", "(REF ), which ensures an explicit cancelation of the terms $\\dot{L}$ in the residual.", "In what follows, we will first retrieve Eq.", "(REF ) using the combinatorial algebraic approach, then explore further by aiming at the modified second-generation results.", "For the first-generation TDI, besides Eq.", "(REF ), one has $D_{i^{^{\\prime }}} = D_{i} .$ Subsequently, we consider the following constraints $q_{3}+q_{3^{^{\\prime }}}=0, q_{2}+q_{2^{^{\\prime }}}=0$ .", "Note that it is also viable to use the constraint involving $q_{1}$ and $q_{1^{^{\\prime }}}$ by permutating the indices in a cyclic order.", "As pointed out in [14], the Sagnac effect from the rotation of the array breaks the uniqueness of the “$\\zeta $ -like” combinations, which incorporate with the constraints $q_{1}+q_{1^{^{\\prime }}}=0$ .", "By substituting $q_{3}=-q_{3^{^{\\prime }}}, q_{2}=-q_{2^{^{\\prime }}}$ into Eq.", "(REF ), we have $q_{1}(1-D_{3\\bar{1}^{^{\\prime }}2})+q_{1^{^{\\prime }}}(1-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}})=0 ,$ which is equivalent to Eq.", "(REF ) by 
identifying $a=D_{3\\bar{1}^{^{\\prime }}2}, b=D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}}$ .", "From Eq.", "(REF ), one has $q_1=D_{1}$ and $q_{1^{\\prime }}=D_{1^{^{\\prime }}}$ , which manifestly satisfies Eq.", "(REF ) while taking into account Eq.", "(REF ).", "For the case of modified first-generation TDI, we replace $D_{\\bar{1}^{^{\\prime }}}$ and $D_{2^{^{\\prime }}}$ with $D_{3}$ and $D_{\\bar{1}}$ in the above expression to obtain $q_{1}(1-D_{\\bar{1}^{^{\\prime }}32})+q_{1^{^{\\prime }}}(1-D_{\\bar{1}2^{^{\\prime }}3^{^{\\prime }}})=0 .$ Again, it is straightforward to show that $q_1=(D_{2^{^{\\prime }}3^{^{\\prime }}}-D_{1})D_{1^{^{\\prime }}}$ and $q_{1^{\\prime }}=-(D_{32}-D_{1^{^{\\prime }}})D_{1}$ read off from Eq.", "(REF ) satisfies the above equation with Eq.", "(REF ).", "We now use the algorithm proposed in Sec.", "REF to solve for the modified second-generation fully symmetric Sagnac combinations.", "To start with, consider the commutator $[ba,ab]=[D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}3\\bar{1}^{^{\\prime }}2},D_{3\\bar{1}^{^{\\prime }}22^{^{\\prime }}\\bar{1}3^{^{\\prime }}}]$ , the TDI coefficients are found to be $\\begin{aligned}q_{1} &= 1-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}}-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}3\\bar{1}^{^{\\prime }}2}+D_{3\\bar{1}^{^{\\prime }}22^{^{\\prime }}\\bar{1}3^{^{\\prime }}2^{^{\\prime }}\\bar{1}3^{^{\\prime }}},\\\\q_{1^{^{\\prime }}}&=-(1-D_{3\\bar{1}^{^{\\prime }}2}-D_{3\\bar{1}^{^{\\prime }}22^{^{\\prime }}\\bar{1}3^{^{\\prime }}}+D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}3\\bar{1}^{^{\\prime }}23\\bar{1}^{^{\\prime }}2}).\\end{aligned}$ The corresponding TDI solution reads $\\begin{aligned}\\zeta (t)&=\\left(1-D_{2^{\\prime } \\overline{1} 3^{\\prime }}-D_{2^{\\prime } \\overline{1} 3^{\\prime } 3 \\overline{1}^{\\prime } 2}+D_{3 \\overline{1}^{\\prime } 22^{\\prime } \\overline{1} 3^{\\prime } 2^{\\prime } \\overline{1} 3^{\\prime }}\\right) \\eta _{1}\\\\&+\\left(1-D_{3 \\overline{1}^{\\prime } 2}-D_{3 \\overline{1}^{\\prime } 22^{\\prime } \\overline{1} 3^{\\prime }}+D_{2^{\\prime } \\overline{1} 3^{\\prime } 3\\overline{1}^{\\prime } 23 \\overline{1}^{\\prime } 2}\\right) D_{2^{\\prime } \\overline{1}} \\eta _{2}\\\\&+\\left(1-D_{2^{\\prime } \\overline{1} 3^{\\prime }}-D_{2^{\\prime } \\overline{1} 3^{\\prime } 3 \\overline{1}^{\\prime } 2}+D_{3 \\overline{1}^{\\prime } 22^{\\prime } \\overline{1} 3^{\\prime } 2^{\\prime } \\overline{1} 3^{\\prime }}\\right) D_{3 \\overline{1}^{\\prime }} \\eta _{3}\\\\&-\\left(1-D_{3\\bar{1}^{^{\\prime }}2}-D_{3\\bar{1}^{^{\\prime }}22^{^{\\prime }}\\bar{1}3^{^{\\prime }}}+D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}3\\bar{1}^{^{\\prime }}23\\bar{1}^{^{\\prime }}2}\\right) \\eta _{1^{\\prime }}\\\\&-\\left(1-D_{3 \\overline{1}^{\\prime } 2}-D_{3 \\overline{1}^{\\prime } 22^{\\prime } \\overline{1} 3^{\\prime }}+D_{2^{\\prime } \\overline{1} 3^{\\prime } 3 \\overline{1}^{\\prime } 23 \\overline{1}^{\\prime } 2}\\right) D_{2^{\\prime } \\overline{1}} \\eta _{2^{\\prime }}\\\\&-\\left(1-D_{2^{\\prime } \\overline{1} 3^{\\prime }}-D_{2^{\\prime } \\overline{1} 3^{\\prime } 3 \\overline{1}^{\\prime } 2}+D_{3 \\overline{1}^{\\prime } 22^{\\prime } \\overline{1} 3^{\\prime } 2^{\\prime } \\overline{1} 3^{\\prime }}\\right) D_{3 \\overline{1}^{\\prime }} \\eta _{3^{\\prime }}.\\end{aligned}$ The above procedure can be similarly applied to other commutators to give rise to other combinations.", "We refer to these combinations associated with the constraints Eq.", "(REF ) as the fully 
symmetric Sagnac-type solution.", "It is rather interesting to point out that some fully symmetric Sagnac combinations possess a geometric TDI interpretation.", "As shown in Fig.", "REF , Eq.", "(REF ) corresponds to the following space-time diagram regarding the geometric TDI.", "Figure: The space-time diagram for the modified second-generation fully symmetric Sagnac combination given by Eq.", "().The notation is consistent with Ref.", ".The red solid and blue dashed lines represent the two synthetic paths, and the black square and dot indicate the start and end points of the light path, respectively." ], [ "Novel Sagnac-inspired combinations", "In this subsection, we elaborate on a novel class of Sagnac-inspired solutions that do not associate with any geometric TDI trajectory.", "The distinction is based on the fact that any valid geometric TDI solution demands successive transmissions of laser signal between the spacecraft.", "In other words, only specific time-delay or time-advance operators can be applied to a signal emitted from a given optical bench.", "For instance, the science data stream $\\eta _{1}$ cannot be applied by the time-delay operation associated with the opposite armlength, $D_{1}$ .", "On the other hand, it is apparent that such a physical constraint is not necessary from an algebraic perspective.", "To this end, we consider the following two constraints $q_{1}=q_{2}D_{1}, q_{2}=q_{3}D_{2}$ .", "One observes the similarity between the above constraints and those of the Sagnac-type combinations.", "Therefore, the resulting solutions will be referred to as Sagnac-inspired.", "Subsequently, Eq.", "(REF ) gives $q_{3} D_{21}+q_{1^{\\prime }}-q_{2^{\\prime }} {D}_{3^{\\prime }}-q_{3} {D}_{2}&=0, \\\\q_{3} D_{2}+q_{2^{\\prime }}-q_{3^{\\prime }} {D}_{1^{\\prime }}-q_{3} D_{213}&=0, \\\\q_{3}+q_{3^{\\prime }}-q_{1^{\\prime }} {D}_{2^{\\prime }}-q_{3} D_{21}&=0 .$ Eliminating Eqs.", "(REF ) and () by substituting the forms of $q_{1^{\\prime }}$ and $q_{2^{\\prime }}$ into Eq.", "(), one finds the desired equation $q_{3}\\left(1-A\\right)+q_{3^{\\prime }}\\left(1-b\\right)=0,$ where $A= D_{2133^{\\prime } 2^{\\prime }}-D_{23^{\\prime } 2^{\\prime }}+D_{22^{\\prime }}-D_{212^{\\prime }}+D_{21}=\\sum _i a_i$ and $b={D}_{1^{\\prime } 3^{\\prime } 2^{\\prime }}$ .", "Since $A$ is not a monomial but a polynomial, we note that the condition Eq.", "(REF ) is not satisfied.", "However, one may argue that the vanishing condition of the associated commutator can be generalized to include polynomial, such as $[bA, Ab]$ , by noticing $[bA, Ab]=\\left[b\\left(\\sum _i a_i\\right), \\left(\\sum _j a_j\\right)b\\right]=\\sum _{ij}[b a_i, a_j b]\\sim \\sum _{i}[b a_i, a_i b] ,$ where “$\\sim $ ” means identical as an operator when applied to an arbitrary function of time.", "This is because the cross terms vanish $[b a_i, a_j b]+[b a_j, a_i b]\\sim 0,$ for $i\\ne j$ .", "To be specific, one has $\\left\\lbrace [D_b D_{a_i}, D_{a_j} D_b]+[D_b D_{a_j}, D_{a_i} D_b]\\right\\rbrace \\phi (t)=0 ,$ whose validity will be shown shortly.", "The r.h.s.", "of Eq.", "(REF ) subsequently vanishes as it satisfies the condition Eq.", "(REF ).", "The above result is a special case of a generalized relation of Eq.", "(REF ), which applies to polynomials $A=\\sum _i^n a_i$ and $B=\\sum _j^m b_j$ : $\\left[AB, BA\\right]\\phi (t)=\\left[\\sum _{i=1}^{n} a_i \\sum _{j=1}^{m} b_j, \\sum _{k=1}^{m} b_k \\sum _{l=1}^{n} a_l\\right] \\phi (t)=\\sum _{ij}\\left[a_i b_j, b_j a_i\\right]\\phi (t) ,$ to the 
first-order time-derivative terms $\\dot{L}$ .", "To confirm that all the cross terms vanish, we show that they cancel out in pairs.", "To be specific, there are $mn(mn-1)/2$ pairs of the form $\\Big ([a_{i}b_{j},b_{k}a_{l}]+[a_{l}b_{k},b_{j}a_{i}]\\Big )\\phi (t) ,$ where either $i\\ne l$ or $j\\ne k$ .", "Using Eq.", "(REF ), we have $\\begin{aligned}&{\\left[a_{i} b_{j}, b_{k} a_{l}\\right] \\phi (} t) \\\\&=\\left[\\left(L_{a_{i}}+L_{b_{j}}\\right) \\left(\\dot{L}_{b_{k}}+\\dot{L}_{a_{l}}\\right)\\right.\\\\&\\left.-\\left(L_{b_{k}}+L_{a_{l}}\\right) \\left(\\dot{L}_{a_{i}}+\\dot{L}_{b_{j}}\\right)\\right]\\\\ &\\dot{\\phi }\\left(t-\\left(L_{a_{i}}+L_{b_{j}}\\right)\\right.\\left.-\\left(L_{b_{k}}+L_{a_{l}}\\right)\\right),\\end{aligned}$ and similarly, $\\begin{aligned}&{\\left[a_{l} b_{k}, b_{j} a_{i}\\right] \\phi (} t) \\\\&=\\left[\\left(L_{a_{l}}+L_{b_{k}}\\right) \\left(\\dot{L}_{b_{j}}+\\dot{L}_{a_{i}}\\right)\\right.\\\\&\\left.-\\left(L_{b_{j}}+L_{a_{i}}\\right) \\left(\\dot{L}_{a_{l}}+\\dot{L}_{b_{k}}\\right)\\right]\\\\ &\\dot{\\phi }\\left(t-\\left(L_{a_{l}}+L_{b_{k}}\\right)\\right.\\left.-\\left(L_{b_{j}}+L_{a_{i}}\\right)\\right),\\end{aligned}$ where $L_{a_{i}},L_{b_{j}},\\dot{L}_{a_{i}},\\dot{L}_{b_{j}}$ represent the armlengths and their rates of change associated with the corresponding time-translation operations.", "The sum of the two terms manifestly vanishes.", "Based on the above arguments, one applies the commutator $[bA,Ab]$ and finds $\\begin{aligned}q_{3}&=1-b-bA+Ab^{2},\\\\q_{3^{^{\\prime }}}&=-(1-A-Ab+bA^{2}).\\end{aligned}$ The remaining coefficients are found to be $\\begin{aligned}q_{2^{\\prime }}&=-\\left(1-A-A b+b A^{2}\\right) {D}_{1^{\\prime }}\\\\&+\\left(1-b-b A+A b^{2}\\right) D_{2}\\left(D_{13}-1\\right), \\\\q_{1^{\\prime }}&=(-(1-A-A b+b A^{2}) {D}_{1^{\\prime }}\\\\&+(1-b-b A+A b^{2}) D_{2}(D_{13}-1)) {D}_{3^{\\prime }}\\\\&+\\left(1-b-b A+A b^{2}\\right) {D}_{2}\\left(1-D_{1}\\right), \\\\{q}_{2}&=\\left(1-b-b A+A b^{2}\\right) D_{2}, \\\\{q}_{1}&=\\left(1-b-b A+A b^{2}\\right) D_{21}.\\end{aligned}$ Due to terms such as $D_{21}\\eta _{1}$ in $q_{1}\\eta _{1}$ and $D_{2}\\eta _{2}$ in $q_2\\eta _{2}$ , the above solution cannot be straightforwardly addressed by the geometric TDI approach.", "To analyze the performance of the combination obtained at the end of the last section, we evaluate its response function and sensitivity curve.", "They are shown in Fig.", "REF and REF in comparison with those of the conventional Sagnac combinations derived in [32], [33].", "Also, we adopt typical parameters of the LISA mission, assuming an armlength $L=2.5\\times 10^{6}\\mathrm {km}$ and the corresponding ASDs of the test-mass and shot noise: $s^{\\mathrm {LISA}}_{a}=3\\times 10^{-15}\\mathrm {m}\\cdot \\mathrm {s^{-2}}/\\sqrt{\\mathrm {Hz}}$ and $s^{\\mathrm {LISA}}_{x}=15\\times 10^{-12}\\mathrm {m}/\\sqrt{\\mathrm {Hz}}$ [34].", "The specific forms of the averaged response functions of gravitational waves and noise power spectral densities are given in Appendix .", "For the averaged response function, the Sagnac-inspired combination shows some advantage at higher frequencies.", "On the other hand, the difference in the resultant sensitivity curves between the two combinations is not significant.", "It is worth noting that, besides the two constraints considered in this section, other choices of constraint conditions, such as $q_{1}=q_{2}D_{1}$ and $q_{1^{^{\\prime }}}=q_{2^{^{\\prime }}}D_{1^{^{\\prime }}}$ , also lead to solutions without any geometric TDI
interpretation.", "Therefore, the extended algebraic, combinatorial approach proposed in this study indicates a much broader solution space beyond the scope of the geometric TDI.", "Figure: The gravitational waves averaged response functions of the Sagnac and Sagnac-inspired combinations.Figure: Sensitivity curves of the Sagnac and Sagnac-inspired combinations." ], [ "Concluding remarks", "In this paper, an extended algorithm was proposed to solve for the modified second-generation TDI combinations.", "The approach is based on a combinatorial algebraic scheme first introduced by Dhurandhar et al.", "applied to the Michelson-type second-generation TDI solutions.", "By introducing the time-advance operator and adopting different constraint conditions, we showed that various second-generation TDI combinations, such as the Monitor, Beacon, and Relay combinations, can be retrieved.", "We also reproduced the fully symmetric Sagnac combinations whose laser-link trajectory cannot be understood in the framework of the geometric TDI algorithm.", "Inspired by such constraint conditions, we deduced a novel class of Sagnac-inspired TDI solutions.", "On the mathematical side, we generalized the commutation relation and its vanishing condition to a more general context by considering the time-advance operators and their polynomials.", "Moreover, we demonstrated that the present scheme is rather general, and the solution space is broader than what has already been explored in the literature.", "Subsequently, the present study constitutes an attempt to enrich our understanding of the TDI solutions from an algebraic perspective.", "The proposed enumerating scheme is more efficient than geometric TDI, as the latter is potentially time-consuming at higher orders.", "Nonetheless, the exhaustive nature of the present algorithm is not entirely clear.", "The constraint conditions explicitly delimit the solution.", "As a result, it only addresses a specific subspace of the entire solution space.", "Also, the commutator relation is only valid up to the first-order time-derivative terms.", "Therefore further generalization to higher-order scenarios is not straightforward.", "We plan to address these topics in further studies." ], [ "Acknowledgements", "This work is supported by the National Natural Science Foundation of China (Grant No.11925503), the Postdoctoral Science Foundation of China (Grant No.2022M711259), Guangdong Major project of Basic and Applied Basic Research (Grant No.2019B030302001), Key Laboratory of TianQin Project(Sun Yat-sen University), Ministry of Education, and the Fundamental Research Funds for the Central Universities, HUST: 2172019kfyRCPY029.", "We also gratefully acknowledge the financial support from Brazilian agencies Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), and Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)." ], [ "A proof of the commutation relation", "In this Appendix, we give proof of Eq.", "(REF ), which furnishes the basis for its generalization in Sec. 
.", "Using mathematical induction, we first show the following proposition for all nature number $n$ : to the first order time-derivative, one has $\\begin{split}&P(n): \\\\&D_{x_{n}}...D_{x_{1}}\\phi (t)=\\phi \\left(t-\\sum ^{n}_{i=1}L_{x_{i}}\\right)+\\dot{\\phi }\\left(t-\\sum ^{n}_{i=1}L_{x_{i}}\\right)\\left(\\sum ^{n-1}_{j=1}\\dot{L}_{x_{j}}\\left(\\sum ^{n}_{k=j+1}L_{x_{k}}\\right)\\right)-\\dot{\\phi }\\left(t-\\sum ^{n}_{i=1}L_{x_{i}}\\right)t\\left(\\sum ^{n}_{i=1}\\dot{L}_{x_{i}}\\right).\\end{split}$ Proof: It is straightforward to show that $P(1)$ is true $\\begin{split}&P(1):\\\\&D_{x_{1}}\\phi (t)=\\phi (t-L_{x_{1}})-\\dot{\\phi }(t-L_{x_{1}})t\\dot{L}_{x_{1}}\\end{split}$ Induction step: if $P(n)$ holds, we have, $\\begin{aligned}P(n+1):\\\\D_{x_{n+1}}D_{x_{n}}...D_{x_{1}}\\phi (t)&=D_{x_{n+1}}\\left(\\phi \\left(t-\\sum ^{n}_{i=1}L_{x_{i}}\\right)+\\dot{\\phi }\\left(t-\\sum ^{n}_{i=1}L_{x_{i}}\\right)\\left(\\sum ^{n-1}_{j=1}\\dot{L}_{x_{j}}\\left(\\sum ^{n}_{k=j+1}L_{x_{k}}\\right)\\right)-\\dot{\\phi }\\left(t-\\sum ^{n}_{i=1}L_{x_{i}}\\right)t\\left(\\sum ^{n}_{i=1}\\dot{L}_{x_{i}}\\right)\\right)\\\\&=\\phi \\left(t-\\sum ^{n+1}_{i=1}L_{x_{i}}\\right)+\\dot{\\phi }\\left(t-\\sum ^{n+1}_{i=1}L_{x_{i}}\\right)\\left(\\sum ^{n}_{j=1}\\dot{L}_{x_{j}}\\left(\\sum ^{n+1}_{k=j+1}L_{x_{k}}\\right)\\right)-\\dot{\\phi }\\left(t-\\sum ^{n+1}_{i=1}L_{x_{i}}\\right)t\\left(\\sum ^{n+1}_{i=1}\\dot{L}_{x_{i}}\\right),\\end{aligned}$ so that $P(n+1)$ also holds true.", "Q.E.D.", "It is readily shown that Eq.", "(REF ) implies $\\begin{aligned}D_{y_{n}}...D_{y_{1}}D_{x_{n}}...D_{x_{1}}\\phi (t)&=\\phi \\left(t-\\sum ^{n}_{i=1}L_{x_{i}}-\\sum ^{n}_{i=1}L_{y_{i}}\\right)-\\dot{\\phi }\\left(t-\\sum ^{n}_{i=1}L_{x_{i}}-\\sum ^{n}_{i=1}L_{y_{i}}\\right)t\\left(\\sum ^{n}_{i=1}\\dot{L}_{x_{i}}+\\sum ^{n}_{i=1}\\dot{L}_{y_{i}}\\right)\\\\&+\\dot{\\phi }\\left(t-\\sum ^{n}_{i=1}L_{x_{i}}-\\sum ^{n}_{i=1}L_{y_{i}}\\right)\\left(\\sum ^{n-1}_{j=1}\\dot{L}_{y_{j}}\\left(\\sum ^{n}_{k=j+1}L_{y_{k}}\\right)+\\sum ^{n}_{j=1}\\dot{L}_{x_{j}}\\left(\\sum ^{n}_{k=j+1}L_{x_{k}}+\\sum ^{n}_{k=1}L_{y_{k}}\\right)\\right),\\end{aligned}$ and $\\begin{aligned}D_{x_{n}}...D_{x_{1}}D_{y_{n}}...D_{y_{1}}\\phi (t)&=\\phi \\left(t-\\sum ^{n}_{i=1}L_{y_{i}}-\\sum ^{n}_{i=1}L_{x_{i}}\\right)-\\dot{\\phi }\\left(t-\\sum ^{n}_{i=1}L_{y_{i}}-\\sum ^{n}_{i=1}L_{x_{i}}\\right)t\\left(\\sum ^{n}_{i=1}\\dot{L}_{y_{i}}+\\sum ^{n}_{i=1}\\dot{L}_{x_{i}}\\right)\\\\&+\\dot{\\phi }\\left(t-\\sum ^{n}_{i=1}L_{y_{i}}-\\sum ^{n}_{i=1}L_{x_{i}}\\right)\\left(\\sum ^{n-1}_{j=1}\\dot{L}_{x_{j}}\\left(\\sum ^{n}_{k=j+1}L_{x_{k}}\\right)+\\sum ^{n}_{j=1}\\dot{L}_{y_{j}}\\left(\\sum ^{n}_{k=j+1}L_{y_{k}}+\\sum ^{n}_{k=1}L_{x_{k}}\\right)\\right).\\end{aligned}$ By subtracting Eq.", "(REF ) from Eq.", "(REF ), one finds Eq.", "(REF ) given in the main text." ], [ "The derivations of other TDI combinations", "Here, we present the derivations of other TDI combinations using the combinatorial algorithm proposed in this paper.", "As the procedure is mainly reminiscent of what is discussed in Sec.", ", only the essential arguments are given below." 
], [ "Sagnac-type combinations", "From the specific forms of the first-generation Sagnac-$\\alpha $ combination $\\alpha _{1}(t)=\\eta _{1}-\\eta _{1^{\\prime }}+{D}_{3} \\eta _{2}-{D}_{2^{\\prime } 1} \\eta _{2^{\\prime }}+{D}_{31} \\eta _{3}-{D}_{2^{\\prime }} \\eta _{3^{\\prime }},$ and the modified second-generation standard Sagnac-$\\alpha $ combination $\\begin{aligned}\\alpha _{2}(t) &=\\left(D_{2^{\\prime } 1^{\\prime } 3^{\\prime }}-1\\right)\\left(\\eta _{1}+D_{3} \\eta _{2}+D_{31} \\eta _{3}\\right) \\\\&-\\left(D_{312}-1\\right)\\left(\\eta _{1^{\\prime }}+D_{2^{\\prime }} \\eta _{3^{\\prime }}+D_{2^{\\prime } 1^{\\prime }} \\eta _{2^{\\prime }}\\right) ,\\end{aligned}$ it is observed that the coefficients satisfy the following relations ${q}_{2}={q}_{1} D_{3}, {q}_{3}=q_{1} D_{31}, {q}_{3^{\\prime }}=q_{1^{\\prime }} D_{2^{\\prime }}, {q}_{2^{\\prime }}=q_{1^{\\prime }} D_{2^{\\prime } 1^{\\prime }}.$ The above relations introduce four constraints.", "However, it can be shown that these four constraints are not entirely independent from Eq.", "(REF ).", "In particular, the second and third lines of Eq.", "(REF ) are manifestly linear combinations of the above constraints, and therefore, Eq.", "(REF ) only provides two independent conditions.", "Subsequently, the relevant equation is obtained from the first line of Eq.", "(REF ) by substituing the above conditions and eliminating $q_2, q_3, q_{2^{\\prime }}$ , and $q_{3^{\\prime }}$ .", "One finds $q_{1}(1-D_{312})+q_{1^{^{\\prime }}}(1-D_{2^{^{\\prime }}1^{^{\\prime }}3^{^{\\prime }}})=0.$ Again, the above equation is nothing but Eq.", "(REF ) by identifying $a=D_{312}$ and $b=D_{2^{^{\\prime }}1^{^{\\prime }}3^{^{\\prime }}}$ .", "Subsequently, the algorithm may proceed as described in Sec.", "REF , as long as no time-advance operator is involved.", "In this case, one may essentially follow [29] to systematically enumerate the relevant commutators related to the Sagnac-$\\alpha $ type combinations.", "The simplest choice is $\\Delta =[ba,ab]=[D_{2^{^{\\prime }}1^{^{\\prime }}3^{^{\\prime }}312},D_{3122^{^{\\prime }}1^{^{\\prime }}3^{^{\\prime }}}]$ , whose the coefficients $q_1,q_{1^{^{\\prime }}}$ , $\\begin{aligned}&q_{1}=1-D_{2^{\\prime } 1^{\\prime } 3^{\\prime }}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 312}+D_{3122^{\\prime } 1^{\\prime } 3^{\\prime } 2^{\\prime } 1^{\\prime } 3^{\\prime }}, \\\\&q_{1^{\\prime }}=-\\left(1-D_{312}-D_{3122^{\\prime } 1^{\\prime } 3^{\\prime }}+D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 312312}\\right) ,\\end{aligned}$ are recognized as those of the modified second-generation Sagnac-$\\alpha $ combination given above.", "On the other hand, one may consider commutator that contains the time-advance operator.", "As an example, for the case of $\\Delta =[ba\\bar{b},a]$ , we arrive at an alternative Sagnac combination $\\begin{aligned}&\\alpha ({t})=\\left(1-D_{2^{\\prime } 1^{\\prime } 3^{\\prime }}+{D}_{3122^{\\prime } 1^{\\prime } 3^{\\prime }}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 312 \\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}\\right) \\eta _{1}\\\\&+\\left(1-D_{2^{\\prime } 1^{\\prime } 3^{\\prime }}+{D}_{3122^{\\prime } 1^{\\prime } 3^{\\prime }}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 312 \\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}\\right) D_{3} \\eta _{2}\\\\&+\\left(1-D_{2^{\\prime } 1^{\\prime } 3^{\\prime }}+{D}_{3122^{\\prime } 1^{\\prime } 3^{\\prime }}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 312 
\\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}\\right) D_{31} \\eta _{3}\\\\&-\\left(1-{D}_{312}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 312 \\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}+{D}_{3122^{\\prime } 1^{\\prime } 3^{\\prime } 312 \\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}\\right) \\eta _{1^{\\prime }}\\\\&-\\left(1-{D}_{312}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 312 \\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}+{D}_{3122^{\\prime } 1^{\\prime } 3^{\\prime } 312 \\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}\\right) D_{2^{\\prime }} \\eta _{2^{\\prime }}\\\\&-\\left(1-{D}_{312}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 312 \\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}+{D}_{3122^{\\prime } 1^{\\prime } 3^{\\prime } 312 \\overline{3}^{\\prime } \\overline{1}^{\\prime } \\overline{2}^{\\prime }}\\right) D_{2^{\\prime } 1^{\\prime }} \\eta _{3^{\\prime }}.\\end{aligned}$ It is readily verified that the commutators $[ab^2,bab]$ , $[a\\bar{b},\\bar{b}a]$ , $[aab,aba]$ , among others, give rise to a class of higher-order (from the perspective of the geometric TDI) Sagnac-$\\alpha $ combinations.", "Similarly, Sagnac-$\\beta $ and Sagnac-$\\gamma $ combinations can be obtained, respectively, by rotating the indices of the constraints $2\\rightarrow 3,3\\rightarrow 1$ and $2\\rightarrow 1,3\\rightarrow 2$ ." ], [ "Beacon-type combinations", "From the specific forms of the modified first-generation Beacon-P combination $\\begin{aligned}{P}_{1}(t) &=D_{\\bar{2}^{^{\\prime }}}\\eta _{1}+D_{\\bar{2}^{^{\\prime }}3} \\eta _{2^{\\prime }}+D_{\\bar{2}^{^{\\prime }}33^{\\prime }} \\eta _{1^{\\prime }}+D_{\\bar{1}} \\eta _{2} \\\\&-\\left(D_{\\bar{1}}\\eta _{2^{\\prime }}+D_{\\bar{1}3^{\\prime }} \\eta _{1}+D_{\\bar{1}3^{\\prime }3} \\eta _{2}+D_{\\bar{2}^{^{\\prime }}} \\eta _{1^{^{\\prime }}}\\right),\\end{aligned}$ and the modified second-generation standard Beacon-P combination $\\begin{aligned}P_{2}(t) &=\\left(1-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}31\\bar{2}^{^{\\prime }}}+D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}3^{^{\\prime }}}-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}}\\right)\\eta _{1}\\\\&+\\left(-1+D_{33^{^{\\prime }}}+D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}31\\bar{2}^{^{\\prime }}}-D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}3^{^{\\prime }}31\\bar{2}^{^{\\prime }}}\\right)\\eta _{1^{^{\\prime }}}\\\\&+\\left(D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}3^{^{\\prime }}3}-D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}}-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}3}+D_{2^{^{\\prime }}\\bar{1}}\\right)\\eta _{2} \\\\&+\\left(D_{3}+D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}}-D_{2^{^{\\prime }}\\bar{1}}-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}31\\bar{2}^{^{\\prime }}3}\\right)\\eta _{2^{^{\\prime }}},\\end{aligned}$ it is observed that the coefficients satisfy the following relations $q_{3}=0,q_{3^{^{\\prime }}}=0.$ By substituting Eq.", "(REF ) into Eq.", "(REF ), one finds $q_{1}(1-D_{33^{^{\\prime }}})+q_{1^{^{\\prime }}}(1-D_{2^{^{\\prime }}\\bar{1} 3^{^{\\prime }}})=0.$ Again, the above equation is nothing but Eq.", "(REF ) by identifying $a=D_{33^{^{\\prime }}},b=D_{2^{^{\\prime }}\\bar{1} 3^{^{\\prime }}}$ .", "Since there is a time-advance operator $D_{\\bar{1}}$ in $b$ , one applies the process described in Sec.", "REF to systematically enumerate the relevant commutators related to the Beacon-P type
combinations.", "One possible choice is $\\Delta =[ba\\bar{b},a]=[D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}31\\bar{2}^{^{\\prime }}},D_{33^{^{\\prime }}}]$ , whose the coefficients $q_1,q_{1^{^{\\prime }}}$ , $\\begin{aligned}q_{1} &= (1-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}}-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}31\\bar{2}^{^{\\prime }}}+D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}3^{^{\\prime }}}),\\\\q_{1^{^{\\prime }}} &= -(1-D_{3{3}^{^{\\prime }}}-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}31\\bar{2}^{^{\\prime }}}+D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}3^{^{\\prime }}31\\bar{2}^{^{\\prime }}}),\\end{aligned}$ are recognized as those of the modified second-generation standard Beacon-P combination given above.", "On the other hand, for the case of $\\Delta =[b,ab\\bar{a}]$ , we arrive at an alternative Beacon combination $\\begin{aligned}&{P}({t})=\\left(1-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}}-D_{33^{\\prime }2^{^{\\prime }}\\bar{1}\\bar{3}}+D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}33^{^{\\prime }}2^{^{\\prime }}\\bar{1}\\bar{3}}\\right)\\eta _{1} \\\\&+\\left(-1+D_{33^{^{\\prime }}}-D_{{2}^{\\prime }\\bar{1}3^{\\prime }33^{^{\\prime }}}+D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}\\bar{3}}\\right)\\eta _{1^{^{\\prime }}} \\\\&+\\left({D}_{{2}^{\\prime }\\bar{1}}-D_{33^{\\prime }{2}^{\\prime }\\bar{1}}+D_{{2}^{\\prime }\\bar{1}3^{^{\\prime }}33^{^{\\prime }}2^{^{\\prime }}\\bar{1}}-D_{33^{\\prime }2^{^{\\prime }}\\bar{1}\\bar{3}2^{^{\\prime }}\\bar{1}}\\right) \\eta _{2} \\\\&+\\left(D_{3}-D_{2^{^{\\prime }}\\bar{1}}-D_{2^{^{\\prime }}\\bar{1}3^{^{\\prime }}3}+D_{33^{^{\\prime }}2^{^{\\prime }}\\bar{1}\\bar{3}2^{^{\\prime }}\\bar{1}}\\right)\\eta _{2^{\\prime }}.\\end{aligned}$ It is readily verified that the commutators $[ab^2,bab]$ , $[a\\bar{b},\\bar{b}a]$ , $[aab,aba]$ , among others, give rise to a class of higher-order (from the perspective of the geometric TDI) Beacon-P combinations.", "Similarly, Beacon-Q and Beacon-R combinations can be obtained, respectively, by rotating the indices of the constraints $3\\rightarrow 2,3^{^{\\prime }}\\rightarrow 2^{^{\\prime }}$ and $3\\rightarrow 1,3^{^{\\prime }}\\rightarrow 1{^{\\prime }}$ ." 
], [ "Relay-type combinations", "From the specific forms of the modified first-generation Relay-U combination $\\begin{aligned}{U}_{1}(t) &=\\eta _{1}+D_{3} \\eta _{2^{\\prime }}+D_{33^{\\prime }} \\eta _{1^{\\prime }}+D_{33^{\\prime } 2^{\\prime }} \\eta _{3^{\\prime }} \\\\&-\\left(\\eta _{1^{\\prime }}+D_{2^{\\prime }} \\eta _{3^{\\prime }}+D_{2^{\\prime } 1^{\\prime }} \\eta _{2^{\\prime }}+D_{2^{\\prime } 1^{\\prime } 3^{\\prime }} \\eta _{1}\\right),\\end{aligned}$ and the modified second-generation standard Relay-U combination $\\begin{aligned}{U}_{2}(t) &=\\left(D_{33^{\\prime } 2^{\\prime } 1^{\\prime }\\overline{3}}-1\\right) \\left(\\eta _{1^{\\prime }}+D_{2^{\\prime }} \\eta _{3^{\\prime }}+D_{2^{\\prime } 1^{\\prime }} \\eta _{2^{\\prime }} +D_{2^{^{\\prime }}1^{^{\\prime }}3^{^{\\prime }}} \\eta _{1}\\right) \\\\&-\\left(D_{2^{\\prime } 1^{\\prime } 3^{\\prime }}-1\\right)D_{3}\\left(\\eta _{2^{^{\\prime }}}+D_{3^{^{\\prime }}} \\eta _{1^{\\prime }}+D_{3^{\\prime }2^{^{\\prime }}} \\eta _{3^{\\prime }}\\right)\\\\&-\\left(D_{33^{\\prime } 2^{\\prime } 1^{\\prime }\\overline{3}}-1\\right)\\eta _{1},\\end{aligned}$ it is observed that the coefficients satisfy the following relations $q_{2}=0,q_{3}=0.$ By substituting Eq.", "(REF ) into Eq.", "(REF ), we have $q_{2^{^{\\prime }}}(D_{\\bar{3}}-D_{3^{^{\\prime }}})+q_{3^{^{\\prime }}}(D_{\\bar{2}^{^{\\prime }}}-D_{1^{^{\\prime }}\\bar{3}})=0.$ The above equation is essentially Eq.", "(REF ) by recognizing $\\alpha =q_{2^{\\prime }}D_{\\bar{3}}, \\beta =q_{3^{^{\\prime }}}D_{\\bar{2}^{^{\\prime }}}$ , $a=D_{33^{^{\\prime }}}$ , and $b=D_{2^{^{\\prime }}1^{^{\\prime }}\\bar{3}}$ .", "Since there is a time-advance operator $D_{\\bar{3}}$ in $b$ , one applies the algorithm described in Sec.", "REF to systematically enumerate the relevant commutators for the Relay-U type combinations.", "The simplest choice is $\\Delta =[ba,ab]=[D_{\\bar{1}3^{^{\\prime }}31},D_{\\bar{2}^{^{\\prime }}33^{^{\\prime }}2^{^{\\prime }}}]$ , whose the coefficients $q_{2^{^{\\prime }}},q_{3^{^{\\prime }}}$ , $\\begin{aligned}&{q}_{2^{\\prime }}=\\left(1-D_{2^{\\prime } 1^{\\prime } \\overline{3}}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime }}+D_{33^{\\prime } 2^{\\prime } 1^{\\prime } \\overline{3} 2^{\\prime } 1^{\\prime } \\overline{3}}\\right) D_{3}, \\\\&{q}_{3^{\\prime }}=-\\left(1-D_{33^{\\prime }}-D_{33^{\\prime } 2^{\\prime } 1^{\\prime } \\overline{3}}+D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 33^{\\prime }}\\right) D_{2^{\\prime }},\\end{aligned}$ are recognized as those of the modified second-generation Relay-U combination given above.", "On the other hand, for the case of $\\Delta =[ba\\bar{b},a]$ , we arrive at an alternative Relay combination $\\begin{aligned}{U}({t})&=\\left(1-{D}_{2^{\\prime } 1^{\\prime } 3^{\\prime }}-{D}_{2^{\\prime } 1^{\\prime } 3^{\\prime } 3 \\bar{1}^{^{\\prime }} \\bar{2}^{^{\\prime }}}+D_{33^{\\prime } 2^{\\prime } 1^{\\prime } 3^{\\prime }}\\right) \\eta _{1} \\\\&-\\left(1-D_{33^{\\prime }}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 3 \\bar{1}^{^{\\prime }} \\bar{2}^{^{\\prime }}}+D_{33^{\\prime } 2^{\\prime } 1^{\\prime } 3^{\\prime } 3 \\bar{1}^{^{\\prime }} \\bar{2}^{^{\\prime }}}\\right) \\eta _{1^{\\prime }} \\\\&+\\left(1-D_{2^{\\prime } 1^{\\prime } \\overline{3}}+D_{33^{\\prime } 2^{\\prime } 1^{\\prime } \\overline{3}}-D_{2^{\\prime } 1^{\\prime } 3^{\\prime } 3 \\bar{1}^{^{\\prime }} \\bar{2}^{^{\\prime }}}\\right) D_{3} \\eta _{2^{\\prime }} \\\\&-\\left(1-D_{33^{\\prime }}-D_{2^{\\prime } 1^{\\prime } 
3^{\\prime } 3 \\bar{1}^{^{\\prime }} \\bar{2}^{^{\\prime }}}+D_{33^{\\prime } 2^{\\prime } 1^{\\prime } 3^{\\prime } 3 \\bar{1}^{^{\\prime }} \\bar{2}^{^{\\prime }}}\\right) D_{2^{\\prime }} \\eta _{3^{\\prime }}.\\end{aligned}$ It is readily verified that the commutators $[ab^2,bab]$ , $[a\\bar{b},\\bar{b}a]$ , $[aab,aba]$ , among others, give rise to a class of higher-order (from the perspective of the geometric TDI) Relay-U combinations.", "Similarly, Relay-V and Relay-W combinations can be obtained, respectively, by rotating the indices of the constraints $2\\rightarrow 3,3\\rightarrow 1$ and $2\\rightarrow 1,3\\rightarrow 2$ ." ], [ "The specific forms of the lowest-order commutators", "In this Appendix, we enumerate the forms of the lowest-order commutators for our proposed algorithm.", "Using the notations in Sec.", "REF , we denote the length of the commutator by $2n$ .", "Due to the presence of the time-advance operators, it is noted that the two terms of the commutator do not necessarily have the same length.", "For $n=2$ , the total count of $a$ 's, $\\bar{a}$ 's, $b$ 's, and $\\bar{b}$ 's is $2n=4$ .", "This implies two possibilities, $4=1+3$ or $4=2+2$ , such as $[a,ba\\bar{b}]$ and $[ab,ba]$ , respectively.", "By taking $n=2$ and $n=3$ as examples, we enumerate all possible forms of the relevant commutators that satisfy Eq.", "(REF ) for the proposed algorithm in Tab.", "REF .", "It is noted that even at the lowest order, the number of possible commutators is much larger than in the original algorithm presented in Sec. .", "Table: The specific forms of the commutators with $n=2$ and $n=3$" ], [ "Specific higher-order TDI combinations obtained using the lower-order commutators", "This section elaborates on constructing specific higher-order TDI solutions based on the lower-order commutators.", "We focus on two possible schemes.", "The first feasible scheme is to multiply a relevant commutator from the left by an arbitrary monomial.", "For instance, $a[ba,ab]$ is a valid commutator satisfying Eq.", "(REF ), which is obtained by multiplying $[ba,ab]$ from the left by $a$ .", "Similarly, one may derive a higher-order commutator by multiplying a valid commutator from the right by an additional monomial.", "However, as will be shown shortly, the second choice implies some subtlety in deriving the corresponding TDI coefficients.", "For the first scheme, it is not difficult to derive the corresponding TDI coefficients defined in Eq.", "(REF ).", "Taking $a[ba, ab]$ as an example, one only needs to multiply the existing TDI solution, Eq.", "(REF ), from the left by the monomial in question, $a$ , namely $\\begin{aligned}{\\tilde{q}}_{1}&=a{q}_{1}=a(1-b-b a+a b^{2}), \\\\{\\tilde{q}}_{1^{\\prime }}&=a{q}_{1^{\\prime }}=-a\\left(1-a-a b+b a^{2}\\right) .\\end{aligned}$ The validity and generality of the above scheme can be verified straightforwardly by either substituting Eq.", "(REF ) back into Eq.", "(REF ) or closely observing the process discussed in Sec.", "REF .", "For the second scheme, although the validity of the commutator is apparent, the corresponding TDI coefficients do not possess a simple relation with the lower-order counterpart.", "We first consider a simple example, $[ba,ab]b$ .", "It is not difficult to show that the TDI coefficients of Eq.", "(REF ) for this example are $\\begin{aligned}{\\tilde{q}}_{1}&=1-b-b a+a b^{2},\\\\{\\tilde{q}}_{1^{\\prime }}&=-\\left(1-a-a b+b a^{2}+[b a, a b]\\right) ,\\end{aligned}$ which do not have a simple relation with the TDI
solution of the original commutator $[ba, ab]$ , Eq.", "(REF ).", "However, the above result motivates us to assume an ansatz for the TDI solution and then solve for its specific form.", "To this end, we will illustrate the approach by a more complicated example, $[ab^2, bab]\\bar{b}a$ .", "We utilize the following ansatz $\\begin{aligned}{\\tilde{q}}_{1} &={q}_{1}+\\left[a b^{2}, b a b\\right] \\alpha , \\\\{\\tilde{q}}_{1^{\\prime }} &=q_{1^{\\prime }}+\\left[a b^{2}, b a b\\right] \\beta ,\\end{aligned}$ where $\\alpha $ and $\\beta $ are operator polynomials to be determined, and $q_1$ and $q_{1^{^{\\prime }}}$ are the TDI solution of Eq.", "(REF ) associated with the commutator $[ab^2,bab]$ .", "Substituting Eq.", "(REF ) back into Eq.", "(REF ), one finds the expression $\\left[a b^{2}, b a b\\right](1+\\alpha (1-a)+\\beta (1-b)) ,$ which should be nothing but $[ab^2,bab]\\bar{b}a$ .", "This implies $\\alpha (1-a)+\\beta (1-b)=\\bar{b} a -1.$ Solving Eq.", "(REF ) for $\\alpha $ and $\\beta $ will provide us with the desired TDI coefficients.", "Employing a trick similar to that used in the derivation of Eq.", "(REF ), we have $\\bar{b} a=-\\bar{b}(1-a)+\\bar{b}=-\\bar{b}(1-a)+\\bar{b}(1-b)+1 ,$ in other words, $\\bar{b} a-1=(-\\bar{b})(1-a)+\\bar{b}(1-b) .$ The simplest choice (that is, the lowest-order solution) is therefore $-\\alpha =\\beta =\\bar{b}$ , and subsequently, we have $\\begin{aligned}{\\tilde{q}}_{1} &=q_{1}-\\left[a b^{2}, b a b\\right]\\bar{b}, \\\\{\\tilde{q}}_{1^{\\prime }} &=q_{1^{\\prime }}+\\left[a b^{2}, b a b\\right]\\bar{b} .\\end{aligned}$ We observe that the above procedure is, by and large, general.", "It provides an approach to construct two specific classes of higher-order (from the geometric TDI perspective) TDI solutions from the lower-order ones."
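Both lifting schemes discussed above can be verified directly in the free algebra by treating $a$ and $b$ as noncommuting symbols; for the example $[ba,ab]$ no inverse operators are required. A minimal symbolic sketch using sympy follows (our own encoding of the identities, not part of the original derivation):

    import sympy as sp

    a, b = sp.symbols('a b', commutative=False)

    q1  = 1 - b - b*a + a*b**2        # TDI coefficients attached to [ba, ab]
    q1p = -(1 - a - a*b + b*a**2)
    comm = b*a*a*b - a*b*b*a          # the commutator [ba, ab]

    # defining condition: q1 (1 - a) + q1' (1 - b) = [ba, ab]
    print(sp.expand(q1*(1 - a) + q1p*(1 - b) - comm))        # 0

    # scheme 1 (left multiplication): tilde q = a q reproduces a [ba, ab]
    print(sp.expand(a*q1*(1 - a) + a*q1p*(1 - b) - a*comm))  # 0

    # scheme 2 (right multiplication): [ba, ab] b with the modified q1'
    q1p_t = -(1 - a - a*b + b*a**2 + comm)
    print(sp.expand(q1*(1 - a) + q1p_t*(1 - b) - comm*b))    # 0

All three expressions expand to zero, confirming the coefficients quoted above.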
], [ "Sensitivity funtions for Sagnac and novel Sagnac-inspired combinations", "This Appendix presents the expressions for the averaged response functions of gravitational waves and noise power spectral density for Sagnac-type combinations elaborated in the main text.", "Here, we consider that the residual noise primarily comprises the shot noise and test mass vibrations, whereas the other ones, such as the clock jitter noise, are suppressed below the noise floor.", "For the Sagnac combination, the noise power spectral density $N_{\\alpha }(u)$ can be expressed as $N_{\\alpha }(u)=\\left[16\\frac{s_{a}^{2}L^{2}}{c^{4}u^{2}}\\Big (3-2\\mathrm {cos}u-2\\mathrm {cos}3u\\Big )+24\\frac{s_{x}^{2}u^{2}}{L^{2}}\\right]\\mathrm {sin}^{2}\\frac{3u}{2},$ and the gravitational waves averaged response function $R_{\\alpha }(u)$ reads $\\begin{aligned}R_{\\alpha }(u)=\\frac{1}{6}\\mathrm {sin}^2\\frac{3u}{2}&\\left[28-24\\mathrm {cos}u-4\\mathrm {cos}3u+48\\mathrm {log}2\\left(1+2\\mathrm {cos}u\\right)+72\\left(2\\mathrm {log}2-\\mathrm {log}3\\right)\\left(2+\\mathrm {cos}3u\\right)\\right.\\\\&\\left.+\\frac{-204\\mathrm {sin}u+147\\mathrm {sin}2u-30\\mathrm {sin}3u}{u}+\\frac{-81+144\\mathrm {cos}u-81\\mathrm {cos}2u+18\\mathrm {cos}3u}{u^2}\\right.\\\\&\\left.+\\frac{-108\\mathrm {sin}u+81\\mathrm {sin}2u-18\\mathrm {sin}3u}{u^3}+48\\left(4\\mathrm {Ci}u-7\\mathrm {Ci}2u+3\\mathrm {Ci}3u\\right)+96\\mathrm {cos}u\\left(\\mathrm {Ci}u-\\mathrm {Ci}2u\\right)\\right.\\\\&\\left.+72\\mathrm {cos}3u\\left(\\mathrm {Ci}u-2\\mathrm {Ci}2u+\\mathrm {Ci}3u\\right)+72\\mathrm {sin}3u\\left(\\mathrm {Si}u-2\\mathrm {Si}2u+\\mathrm {Si}3u\\right)\\right],\\end{aligned}$ where $u=2\\pi fc/L$ and $f$ is observed frequency, SinIntergral $\\mathrm {Si}\\left(z\\right)=\\int _{0}^{z}\\mathrm {sin}t/tdt$ and CosIntegral $\\mathrm {Ci}\\left(z\\right)=-\\int _{z}^{\\infty }\\mathrm {cos}t/tdt$ .", "For the Sagnac-inspired combination, the noise power spectral density $N_{s}(u)$ can be expressed as $\\begin{aligned}N_{s}(u)=&64\\left(4+3\\mathrm {cos}u+2\\mathrm {cos}2u+3\\mathrm {cos}3u+2\\mathrm {cos}4u+\\mathrm {cos}5u+2\\mathrm {cos}6u+\\mathrm {cos}7u\\right)\\mathrm {sin}^4\\frac{u}{2}\\times \\\\&\\left(\\frac{2s_{a}^{2}L^{2}}{c^4u^{2}}\\left(13+\\mathrm {cos}u-8\\mathrm {cos}2u-4\\mathrm {cos}3u-2\\mathrm {cos}4u\\right)+\\frac{s_{x}^{2}u^{2}}{L^{2}}\\left(16+10\\mathrm {cos}u-2\\mathrm {cos}2u+2\\mathrm {cos}3u+\\mathrm {cos}4u\\right)\\right),\\end{aligned}$ and the averaged response function of gravitational waves $R_{s}(u)$ is found to be $\\begin{aligned}R_{s}(u)=&\\frac{2}{3}\\left(4+3\\mathrm {cos}u+2\\mathrm {cos}2u+3\\mathrm {cos}3u+2\\mathrm {cos}4u+\\mathrm {cos}5u+2\\mathrm {cos}6u+\\mathrm {cos}7u\\right)\\mathrm {sin}^4\\frac{u}{2}\\times \\\\&\\left[96\\left(41+31\\mathrm {cos}u+4\\mathrm {cos}2u+22\\mathrm {cos}3u+10\\mathrm {cos}4u\\right)\\mathrm {log}2-144\\left(11+6\\mathrm {cos}u+7\\mathrm {cos}3u+3\\mathrm {cos}4u\\right)\\mathrm {log}3\\right.\\\\&\\left.+944\\mathrm {sin}^2\\frac{u}{2}+64\\mathrm {sin}^{2}\\frac{u}{2}\\left(15\\mathrm {cos}u+3\\mathrm {cos}2u+\\mathrm {cos}3u\\right)-\\frac{3\\left(684\\mathrm {sin}u-273\\mathrm {sin}2u-146\\mathrm {sin}3u+86\\mathrm {sin}4u-10\\mathrm {sin}5u+\\mathrm {sin}6u\\right)}{u}\\right.\\\\&\\left.+\\frac{12\\mathrm {sin}^2\\frac{u}{2}\\left(-16+78\\mathrm {cos}u-29\\mathrm {cos}3u+16\\mathrm {cos}4u+5\\mathrm {cos}5u\\right)}{u^{2}}-\\frac{24\\mathrm {sin}^3\\frac{u}{2}\\left(96\\mathrm {cos}\\frac{u}{2}-50\\mathrm 
{cos}\\frac{3u}{2}-18\\mathrm {cos}\\frac{5u}{2}+21\\mathrm {cos}\\frac{7u}{2}+5\\mathrm {cos}\\frac{9u}{2}\\right)}{u^{3}}\\right.\\\\&\\left.+48\\left(49\\mathrm {Ci}u-82\\mathrm {Ci}2u+33\\mathrm {Ci}3u\\right)+96\\mathrm {cos}u\\left(22\\mathrm {Ci}u-31\\mathrm {Ci}2u+9\\mathrm {Ci}3u\\right)+384\\mathrm {cos}2u\\left(\\mathrm {Ci}u-\\mathrm {Ci}2u\\right)\\right.\\\\&\\left.+48\\mathrm {cos}3u\\left(23\\mathrm {Ci}u-44\\mathrm {Ci}2u+21\\mathrm {Ci}3u\\right)+48\\mathrm {cos}4u\\left(11\\mathrm {Ci}u-20\\mathrm {Ci}2u+9\\mathrm {Ci}3u\\right)\\right.\\\\&\\left.+432\\left(-\\mathrm {sin}u+2\\mathrm {sin}3u+\\mathrm {sin}4u\\right)\\left(\\mathrm {Si}u-2\\mathrm {Si}2u+\\mathrm {Si}3u\\right)\\right].\\end{aligned}$ The resultant sensitivity function is $S(u)=\\frac{\\sqrt{N(u)}}{\\sqrt{\\frac{2}{5}}\\sqrt{R(u)}}.$ Fig.", "REF and Fig.", "REF are evaluated using Eqs.", "(REF )$-$ (REF )." ] ]
2210.07801
[ [ "Classical Density Functional Theory: Representability and Universal\n Bounds" ], [ "Abstract We provide upper and lower bounds on the lowest free energy of a classical system at given one-particle density $\\rho(x)$.", "We study both the canonical and grand-canonical cases, assuming the particles interact with a pair potential which decays fast enough at infinity." ], [ "Introduction", "Density functional theory (DFT) is a powerful tool used in quantum physics and chemistry to model quantum electrons in atoms, molecules and solids [51], [19], [74], [25], [55], [12].", "However, DFT is based on a rather general mathematical scheme and it can be applied to many other situations.", "This work is devoted to the rigorous study of classical DFT, which is used for finite or infinite systems of interacting classical particles.", "Classical DFT is widely employed in materials science, biophysics, chemical engineering and civil engineering [98].", "It has a much lower computational cost than the more precise molecular dynamics simulations, which are limited to small systems and short times [35], [76], [50].", "Classical DFT is typically used at interfaces between liquid-gas, liquid-liquid (in fluid mixtures), crystal-liquid and crystal-gas phases at bulk coexistence.", "The density is then non constant in space and varies in the interfacial region between the two phases.", "The physical theory of inhomogenous fluids goes essentially back to the 60s [61], [17], [16], [85], [59].", "Functional methods and their applications to the theory of the structure of bulk fluids were described in [66], [92].", "The realization that methods developed in the quantum case by Hohenberg-Kohn-Sham [38], [47] could be transferred to classical fluids arose in the middle of the 70s, in particular in the works of Ebner-Saam [28], [27], [87] and Yang et al [99].", "Several authors then developed approximate free-energy functionals to calculate the density profile and surface tension of the liquid-gas interface.", "The square-gradient approximation could later be derived rather systematically, following the important works of Hohenberg-Kohn-Sham on the gradient expansion of the uniform (quantum) electron gas.", "Deriving efficient functionals for the solid-liquid transition was harder and took longer [83], [84], [42].", "Well-known references on classical DFT are the two reviews by Evans [29], [30].", "Other important physical references on the subject include [40], [64], [1], [3], [88], [58], [26], [41].", "Rigorous works on classical DFT are rather scarse.", "Most of the mathematical works are about proving that one can find an external potential $V$ whose interacting equilibrium Gibbs measure has any desired given density $\\rho $ .", "This is called the inverse or dual problem and justifies the use of density functional methods.", "In quantum DFT, $V$ is called the Kohn-Sham potential and its existence is unclear in most situations.", "However, in the classical case, $V$ is usually well defined.", "The grand-canonical 1D hard-core gas was solved exactly in a celebrated work by Percus [67], who provided an exact expression of the external potential $V$ .", "This was used and extended in later works [95], [69], [68], [71].", "In two famous works [6], [5], Chayes, Chayes and Lieb proved in a quite general setting (in particular any space dimension $d$ ) the existence and uniqueness of the dual potential $V$ at any positive temperature $T>0$ .", "At $T=0$ , the canonical model can be reformulated as a multi-marginal 
optimal transport problem [10], [11], [63], [20], [86], where $V$ is usually called the Kantorovich potential.", "Its existence and properties are known in many cases [45], [24], [2] but uniqueness usually does not hold.", "The grand-canonical case was studied in the recent article [21].", "Most of these works are based on compactness arguments and do not furnish any quantitative information on the shape of the potential $V$ in terms of the given density $\\rho $ .", "In the recent paper [43], a novel Banach inversion theorem was used to provide an explicit formula for $V$ in terms of $\\rho $ in the form of a convergent series, under the assumption that $\\rho $ is small in $L^\\infty (\\mathbb {R}^d)$ .", "This is the equivalent of the Virial expansion for uniform systems. Recall that one can express the constant density $\\rho $ of an infinite gas as a convergent series in terms of the activity $z=e^{\\beta \\mu }$ , in the regime $z\\ll 1$  [81].", "This corresponds to placing the system in the constant external potential $V(x)=-\\mu $ .", "Since $\\rho \\sim _{z\\rightarrow 0} z$ , the series is invertible and any small uniform density is therefore representable by such a uniform potential, with $\\mu \\sim _{\\rho \\rightarrow 0}\\beta ^{-1}\\log \\rho $ .", "In this work and the companion paper [44] we do not discuss the dual potential $V$ and instead focus on more quantitative properties of the model depending on the shape of the density $\\rho $ .", "The case of the three dimensional Coulomb interaction $w(x)=|x|^{-1}$ or, more generally, long range Riesz interactions $|x|^{-s}$ with $s<d$ , has been the object of several recent works [55], [49].", "Here we always assume that the interaction potential $w$ decays fast enough at infinity and do not discuss more complicated long range potentials such as Coulomb.", "Our main goal in this paper is to show universal local bounds on the free energy $F_T[\\rho ]$ at given density $\\rho \\in L^1(\\mathbb {R}^d)$ .", "By local we mean that we only use terms of the form $\\int _{\\mathbb {R}^d}\\rho (x)^p\\,\\mathrm {d}x,\\qquad \\int _{\\mathbb {R}^d}\\rho (x)^q\\log \\rho (x)\\,\\mathrm {d}x.$ The admissible values of $p$ and $q$ will depend on the temperature $T$ as well as on the singularity of the interaction potential $w$ at the origin, that is, how strongly the particles repel each other when they get close.", "Such universal bounds are important in DFT.", "They can help in finding the natural form of approximate functionals to be used for practical computations. As an example, in the quantum case the Lieb-Oxford inequality [57] was used to calibrate some famous functionals such as PBE and SCAN [70], [60], [65], [93], [89], [90], [91], [73].", "In addition, these bounds will be useful in our next work [44] where we study the local density approximation.", "Deriving simple lower bounds is usually easy, under reasonable stability assumptions on the interaction potential $w$ .", "Obtaining upper bounds can be much more difficult.", "They require constructing a good trial state, but the constraint that the density is given and must be exactly reproduced can generate important mathematical complications.", "The simplest trial state is obtained by taking i.i.d.", "particles, that is, a factorized $N$ -particle probability $(\\rho /N)^{\\otimes N}$ where $N=\\int _{\\mathbb {R}^d}\\rho \\in \\mathbb {N}$ .", "Doing so provides an upper bound on the free energy in terms of mean-field theory, often called in this context the Kirkwood-Monroe functional
[46]: $\\frac{1}{2}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d}w(x-y)\\rho (x)\\rho (y)\\,\\mathrm {d}x\\,dy+T\\int _{\\mathbb {R}^d}\\rho (x)\\log \\rho (x)\\,\\mathrm {d}x.$ This only makes sense when the pair interaction potential $w$ is locally integrable.", "If $w$ is globally integrable, one can use Young's inequality and estimate the first integral by the local energy $(\\int _{\\mathbb {R}^d}w_+/2)\\int _{\\mathbb {R}^d}\\rho (x)^2\\,\\mathrm {d}x$ , where $w_+:=\\max (w,0)$ denotes the positive part.", "The simplest models of classical DFT use (REF ) as a basis.", "In classical statistical mechanics, it is often convenient to consider potentials $w$ diverging fast enough at the origin, which helps to stabilize the system [81], [23], [22], [34], [75], [72].", "This divergence implies that the particles can never get too close to each other, and this requires that the trial state contains rather strong correlations.", "A factorized state is not appropriate and (REF ) is infinite.", "The simplest singular interaction is of course the hard-core potential $w(x)=(+\\infty ){1 }(|x|< r_0)$ , which is simply infinite over a ball and vanishes outside.", "In this paper we provide two different constructions of a correlated trial state, which give reasonable upper bounds on the classical free energy at given density, for singular potentials at the origin.", "Our first method uses some ideas from harmonic analysis in the form of a Besicovitch-type covering lemma [18].", "We cover space with cubes whose size is adapted to the local value of the density, and put essentially one particle per cube, with the constraint that the cubes are far enough from each other.", "This method works very well in the grand-canonical setting where the number of particles is allowed to fluctuate.", "In order to handle the canonical ensemble, a different construction is needed.", "We instead use techniques from optimal transport theory developed in [8], which give a rather good bound at zero temperature, $T=0$ .", "For $T>0$ we couple this to the Besicovitch-type covering lemma and obtain an upper bound which is not as good as the grand-canonical one.", "In [44] we will study the behavior of $F_T[\\rho ]$ in some particular regimes and the upper universal bounds derived here will be useful.", "Namely we will consider the thermodynamic limit where $\\rho $ is essentially constant over a large domain as well as the local density approximation when $\\rho $ varies slowly over big regions.", "Such regimes have been recently considered for the three dimensional Coulomb potential $w(x)=|x|^{-1}$ in [52], [54], for more general Riesz potentials in [15], [14] and for a special class of positive-type interactions in [62].", "The methods used in these works all rely on the assumption that the potential is positive-definite, and new ideas are necessary in the general (short-range) case.", "Acknowledgement.", "ML thanks Rupert L. Frank for providing him with a preliminary version of the book [32].", "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement MDFT No 725528 of ML).", "MJ also received financial support from the Ministry of Education, Youth and Sport of the Czech Republic under the Grant No.", "RVO 14000." ], [ "Free energies at given density", "This subsection is mainly devoted to precisely introducing models and notation used in the paper.", "Our main results are stated in the next subsections." 
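As a simple point of reference for the upper bounds discussed in the introduction, the mean-field (Kirkwood-Monroe) functional is straightforward to evaluate numerically when $w$ is integrable. The following is a minimal one-dimensional Python sketch; the Gaussian density, exponential pair potential, grid, and temperature are arbitrary choices, and the convolution is computed by FFT.

    import numpy as np

    n, box = 2048, 40.0
    dx = box / n
    x = (np.arange(n) - n // 2) * dx    # uniform grid on [-box/2, box/2)
    rho = 3.0 * np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # density of mass N = 3
    w = np.exp(-np.abs(x))              # integrable pair potential
    T = 0.5

    conv = dx * np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(rho)))
    conv = np.roll(conv, n // 2)        # realign since w is sampled from -box/2
    interaction = 0.5 * dx * np.sum(rho * conv)  # (1/2) of the double integral
    entropy = T * dx * np.sum(rho * np.log(rho))
    print(interaction + entropy)        # value of the mean-field free energy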
], [ "The interaction potential $w$", "For convenience, we work in $\\mathbb {R}^d$ with a general dimension $d\\geqslant 1$ .", "The physical cases of interest are of course $d\\in \\lbrace 1,2,3\\rbrace $ but the proofs are the same for all $d$ , except sometimes for $d=1$ .", "We consider systems of indistinguishable classical particles interacting through a short-range pair potential $ w $ .", "Throughout the paper, we work with an interaction satisfying the following properties.", "Assumption 1 (on the short-range potential $w$ ) Let $ w :\\mathbb {R}^d\\rightarrow \\mathbb {R}\\cup \\lbrace +\\infty \\rbrace $ be an even lower semi-continuous function satisfying the following properties, for some constant $\\kappa >0$ : $ w $ is stable, that is, $\\sum \\limits _{1 \\leqslant j < k \\leqslant N} w {x_j - x_k} \\geqslant - \\kappa N$ for all $ N \\in \\mathbb {N}$ and $ x_1, \\cdots , x_N \\in \\mathbb {R}^d $ .", "There exist $ r_0 \\geqslant 0 $ , $ 0 \\leqslant \\alpha \\leqslant \\infty $ and $s>d$ such that $w(x) \\leqslant \\kappa \\left({1 }(|x|<r_0)\\left(\\frac{r_0}{|x|}\\right)^\\alpha + \\frac{1}{1 + {x}^s}\\right).$ The lower semi-continuity of $w$ will be used later to ensure that the energy is lower semi-continuous as a function of the one-particle density (see Remark REF below).", "In statistical mechanics, the stability condition (REF ) is used to ensure the existence of the thermodynamic limit [81].", "On the other hand, upper bounds of the form (REF ) are often called upper regularity of $w$ and are sometimes used to get more information on the equilibrium states [80].", "At infinity, we assume that our potential $w$ is bounded above by $|x|^{-s}$ , which is integrable since $s>d$ .", "It could of course decay faster.", "On the other hand, the parameter $ \\alpha $ determines the allowed repulsive strength of the interaction at the origin.", "If $ \\alpha = 0 $ , then $ w $ is everywhere bounded above, and if $ 0 < \\alpha < d $ , then $ w $ has at most an integrable singularity at the origin.", "In particular, the positive part $ w_+$ is integrable over the whole of $\\mathbb {R}^d$ (since we are interested in upper bounds, the negative part $w_-$ will not play a role in this paper).", "In the case where $ \\alpha \\geqslant d $ , $ w $ can have a non-integrable singularity at the origin.", "If $ \\alpha = \\infty $ , then $ w $ can have a hard-core.", "Our convention is that $(r_0/|x|)^\\alpha =(+\\infty ){1 }(|x|<r_0)$ for $\\alpha =+\\infty $ .", "When $\\alpha <\\infty $ we can always assume that $r_0=1$ , possibly after increasing $\\kappa $ .", "Most short range potentials of physical interest are covered by Assumption REF , including for instance the simple hard-core and the Lennard-Jones potential $w(x)=a|x|^{-12}-b|x|^{-6}$ ." 
], [ "The canonical free energy", "In this subsection we define the canonical free energy $F_T[\\rho ]$ at given density $\\rho $ .", "Suppose that we have $ N $ particles in $ \\mathbb {R}^d $ , distributed according to some Borel probability measure $ \\mathbb {P}$ on $ \\mathbb {R}^{dN} $ .", "Since the particles are indistinguishable, we demand that the measure $ \\mathbb {P}$ is symmetric, that is, $\\mathbb {P}(A_{\\sigma {1}}\\times \\cdots \\times A_{\\sigma {N}})= \\mathbb {P}(A_1\\times \\cdots \\times A_N)$ for any permutation $ \\sigma $ of $ {1, \\cdots , N} $ , and any Borel sets $A_1,...,A_N\\subset \\mathbb {R}^d$ .", "The one-body density of such a symmetric probability $ \\mathbb {P}$ equals $N$ times the first marginal of $\\mathbb {P}$ , that is, $\\rho _{\\mathbb {P}} = N\\int _{\\mathbb {R}^{d {N-1}}} \\, \\mathrm {d}\\mathbb {P}{\\cdot ,x_2, \\cdots , x_N},$ where the integration is over $ x_2,\\cdots ,x_N $ .", "Equivalently, $\\rho _\\mathbb {P}(A)=N\\mathbb {P}(A\\times (\\mathbb {R}^d)^{N-1})$ for every Borel set $A$ .", "Note the normalization convention $ \\rho _{\\mathbb {P}}(\\mathbb {R}^d) = N $ .", "For a non-symmetric probability $\\mathbb {P}$ we define $\\rho _\\mathbb {P}$ as the sum of the $N$ marginals.", "Notice that any positive measure $\\rho $ on $\\mathbb {R}^d$ with $\\rho (\\mathbb {R}^d)=N\\in \\mathbb {N}$ arises from at least one $N$ –particle probability measure $\\mathbb {P}$ .", "One can take for instance $\\mathbb {P}=(\\rho /N)^{\\otimes N}$ for independent and identically distributed particles.", "The pairwise average interaction energy of the particles is given by $\\mathcal {U}_{N} {\\mathbb {P}}= \\int _{\\mathbb {R}^{dN}} \\sum \\limits _{1 \\leqslant j < k \\leqslant N} w {x_j - x_k} \\, \\mathrm {d}\\mathbb {P}{x_1, \\cdots , x_N}.$ It could in principle be equal to $+\\infty $ , but it always satisfies $\\mathcal {U}_{N} {\\mathbb {P}}\\geqslant -\\kappa N$ due to the stability condition on $w$ in Assumption REF .", "When considering systems at positive temperature $T>0$ , it is necessary to also take the entropy of the system into account.", "It is given by $\\mathcal {S}_N {\\mathbb {P}} := - \\int _{\\mathbb {R}^{dN}} \\mathbb {P}{x} \\log \\big (N!", "\\, \\mathbb {P}{x}\\big ) \\, \\mathrm {d}x.$ If $ \\mathbb {P}$ is not absolutely continuous with respect to the Lebesgue measure on $ \\mathbb {R}^{dN} $ , we use the convention that $\\mathcal {S}_N {\\mathbb {P}}=-\\infty $ .", "The factor $N!$ appears because the particles are indistinguishable.", "In fact, we should think that $N!\\,\\mathbb {P}$ defines a probability measure over $(\\mathbb {R}^d)^N/\\mathfrak {S}_N$ where $\\mathfrak {S}_N$ is the permutation group.", "We need to make sure that $\\mathcal {S}_N(\\mathbb {P})<+\\infty $ , which follows if we assume for instance that $\\rho _\\mathbb {P}$ is absolutely continuous with $\\int _{\\mathbb {R}^d}\\rho _\\mathbb {P}|\\log \\rho _\\mathbb {P}|<\\infty $ .", "This is due to the well-known inequality (see, e.g., [21]) $\\mathcal {S}_N {\\mathbb {P}}\\leqslant -\\int _{\\mathbb {R}^d}\\rho _\\mathbb {P}(x)\\log \\rho _\\mathbb {P}(x)\\,\\mathrm {d}x+N.$ The latter follows immediately from writing the relative entropy of $\\mathbb {P}$ with respect to $(\\rho /N)^{\\otimes N}$ , which is non-negative, and using $(N/e)^N\\leqslant N!$ .", "The total free energy of the system in the state $ \\mathbb {P}$ at temperature $ T \\geqslant 0 $ equals $\\mathcal {F}_T {\\mathbb {P}}:={} \\mathcal {U}_{N} {\\mathbb {P}} - T 
\\mathcal {S}_N {\\mathbb {P}}= \\int _{\\mathbb {R}^{dN}} \\sum \\limits _{j < k} w {x_j - x_k} \\, \\mathrm {d}\\mathbb {P}{x} + T \\int _{\\mathbb {R}^{dN}} \\mathbb {P}\\log {N!", "\\, \\mathbb {P}}.$ It can be equal to $+\\infty $ but never to $-\\infty $ due to the stability of $w$ and thanks to the inequality (REF ) if $T>0$ and $\\int _{\\mathbb {R}^d}\\rho _\\mathbb {P}|\\log \\rho _\\mathbb {P}|<\\infty $ .", "Throughout the paper, we will only consider systems with a given one-body density $\\rho $ , which is absolutely continuous with respect to the Lebesgue measure.", "At $T>0$ we also assume that $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ .", "This allows us to consider the minimal energy of $ N $ -particle classical systems with density $ \\rho $ , given by $\\boxed{F_T [\\rho ]:= \\inf _{\\rho _{\\mathbb {P}} = \\rho } \\mathcal {F}_T {\\mathbb {P}}}$ where the infimum is taken over $ N $ -particle states $ \\mathbb {P}$ on $ \\mathbb {R}^{dN} $ with one-particle density $ \\rho _{\\mathbb {P}} $ equal to $ \\rho $ .", "At $T=0$ , the entropy term disappears and we obtain $F_0 [\\rho ]:= \\inf _{\\rho _{\\mathbb {P}} = \\rho }\\int \\sum \\limits _{1\\leqslant j < k\\leqslant N} w {x_j - x_k} \\, \\mathrm {d}\\mathbb {P}{x}.$ This is a multi-marginal optimal transport problem with symmetric cost $\\sum _{j<k}w(x_j-x_k)$ and with all the marginals of $\\mathbb {P}$ equal to $\\rho /N$  [10], [11], [63], [20], [86].", "From the stability assumption on $w$ and (REF ), we have $F_T [\\rho ]\\geqslant -(\\kappa +T)N+T\\int _{\\mathbb {R}^d}\\rho (x)\\log \\rho (x)\\,\\mathrm {d}x.$ One of our goals will be to find simple conditions ensuring that $F_T[\\rho ]<\\infty $ .", "Before we turn to this question, we first introduce the grand-canonical problem.", "Remark 2 (Symmetry) In the definition (REF ) we can freely remove the constraint that $\\mathbb {P}$ is symmetric.", "Since the interaction is a symmetric function and the entropy $\\mathcal {S}_N$ is concave, the minimum is the same as for symmetric $\\mathbb {P}$ 's.", "Recall that for a non-symmetric $\\mathbb {P}$ , $\\rho _\\mathbb {P}$ is by definition the sum of the $N$ marginals.", "Remark 3 (Lower semi-continuity) The function $\\rho \\mapsto F_T[\\rho ]$ is lower semi-continuous for the strong topology.", "That is, we have $F_T[\\rho ]\\leqslant \\liminf _{n\\rightarrow \\infty } F_T[\\rho _n]\\quad \\text{if $\\int |\\rho _n-\\rho |\\rightarrow 0$ and $T\\int \\rho _n|\\log \\rho _n|\\leqslant C$}.$ At $T>0$ this is valid under the sole condition that $w$ is measurable (since the limiting probability $\\mathbb {P}$ is necessarily absolutely continuous) but at $T=0$ , this uses the lower semi-continuity of $w$ .", "The details of the argument are provided later in the proof of Theorem REF , for the convenience of the reader.", "Remark 4 (Convexity and duality) Using the concavity of the entropy $\\mathcal {S}_N$ , one can verify that $\\rho \\mapsto F_T[\\rho ]$ is convex.", "This can be used to derive the dual formulation of $F_T[\\rho ]$ in terms of external potentials $F_T[\\rho ]={}& T\\int _{\\mathbb {R}^d}\\rho \\log \\rho +\\sup _{\\widetilde{V}}\\bigg \\lbrace -\\int _{\\mathbb {R}^d}\\rho (x) \\widetilde{V}(x)\\,\\mathrm {d}x \\nonumber \\\\&-T\\log \\int _{\\mathbb {R}^{dN}}\\exp [\\bigg ]{ -\\frac{1}{T}\\sum _{1\\leqslant j<k\\leqslant N}w(x_j-x_k)-\\frac{1}{T}\\sum _{j=1}^N\\widetilde{V}(x_j) } \\mathrm {d}\\rho ^{\\otimes N}\\bigg \\rbrace ,$ see [6].", "Our notation $\\widetilde{V}$ is because
the final physical dual potential is, rather, $V:=\\widetilde{V}-T\\log \\rho $ .", "The existence of a maximizer $\\widetilde{V}$ realizing the above supremum is proved in [6].", "It is the unique potential (up to an additive constant) so that the corresponding Gibbs state has density $\\rho $ , that is, $\\rho _\\mathbb {P}=\\rho ,\\quad \\mathbb {P}=\\frac{1}{Z}\\exp \\bigg (-\\frac{1}{T}\\sum _{1\\leqslant j<k\\leqslant N}w(x_j-x_k)-\\frac{1}{T}\\sum _{j=1}^N\\widetilde{V}(x_j)\\bigg )\\rho ^{\\otimes N}$ with $Z$ a normalization constant.", "At $T=0$ , we have the similar formula $F_0[\\rho ]=\\sup _{\\widetilde{V}}\\bigg \\lbrace E_N[V]-\\int _{\\mathbb {R}^d}\\rho (x) V(x)\\,\\mathrm {d}x\\bigg \\rbrace $ where $E_N[V]=\\inf _{x_1,...,x_N\\in \\mathbb {R}^d}\\left\\lbrace \\sum _{1\\leqslant j<k\\leqslant N}w(x_j-x_k)+\\sum _{j=1}^NV(x_j)\\right\\rbrace ,$ is the ground state energy in the potential $V$  [45].", "Although there usually exist dual potentials at $T=0$ , those are often not unique." ], [ "The grand-canonical free energy", "In the grand-canonical picture, where the exact particle number of the system is not fixed, a state $ \\mathbb {P}$ is a family of symmetric $ n $ -particle positive measures $ \\mathbb {P}_n $ on $(\\mathbb {R}^d)^n$ , so that $\\sum _{n \\geqslant 0} \\mathbb {P}_n\\big ((\\mathbb {R}^d)^n\\big )=1.$ Here $\\mathbb {P}_0$ is just a number, interpreted as the probability that there is no particle at all in the system.", "After replacing $\\mathbb {P}_n$ by $\\mathbb {P}_n/\\mathbb {P}_n(\\mathbb {R}^{dn})$ , we can equivalently think that $\\mathbb {P}$ is a convex combination of canonical states.", "The entropy of $ \\mathbb {P}$ is defined by $\\mathcal {S}{\\mathbb {P}}:= \\sum \\limits _{n \\geqslant 0} \\mathcal {S}_n {\\mathbb {P}_n}= -\\mathbb {P}_0\\log (\\mathbb {P}_0)- \\sum \\limits _{n \\geqslant 1} \\int _{\\mathbb {R}^{dn}} \\mathbb {P}_n \\log {n!", "\\, \\mathbb {P}_n },$ and the single particle density of the state $ \\mathbb {P}$ is $\\rho _{\\mathbb {P}} = \\sum \\limits _{n \\geqslant 1} \\rho _{\\mathbb {P}_n}=\\sum _{n\\geqslant 1}n\\int _{(\\mathbb {R}^d)^n}\\mathrm {d}\\mathbb {P}_n(\\cdot ,x_2,\\cdots ,x_n).$ The grand-canonical free energy of the state $ \\mathbb {P}$ at temperature $ T \\geqslant 0 $ is $\\mathcal {G}_T {\\mathbb {P}}:= \\mathcal {U}{\\mathbb {P}} - T \\mathcal {S}{\\mathbb {P}},$ where $ \\mathcal {U}{\\mathbb {P}} $ denotes the interaction energy in the state $ \\mathbb {P}$ , $\\mathcal {U}{\\mathbb {P}}:= \\sum \\limits _{n \\geqslant 2} \\mathcal {U}_{n} {\\mathbb {P}_n}= \\sum \\limits _{n \\geqslant 2} \\int _{\\mathbb {R}^{dn}} \\sum \\limits _{j<k}^n w {x_j - x_k} \\, \\mathrm {d}\\mathbb {P}_n {x_1,...,x_N}.$ From the stability of $w$ we have $\\mathcal {U}_{n} {\\mathbb {P}_n}\\geqslant -\\kappa n\\,\\mathbb {P}_n(\\mathbb {R}^{dn})$ so that, after summing over $n$ , $ \\mathcal {U}{\\mathbb {P}}\\geqslant -\\kappa \\int _{\\mathbb {R}^d}\\rho _\\mathbb {P}(x)\\,\\mathrm {d}x.$ By [21] we have the universal entropy bound $\\mathcal {S}{\\mathbb {P}}\\leqslant -\\int _{\\mathbb {R}^d}\\rho _\\mathbb {P}\\big (\\log \\rho _\\mathbb {P}-1).$ This is because the entropy at fixed density $ \\rho $ is maximized by the grand-canonical Poisson state $\\mathbb {Q}:= \\left(\\frac{e^{- \\int _{\\mathbb {R}^d} \\rho }}{n!}", "\\rho ^{\\otimes n}\\right)_{n \\geqslant 0}$ whose entropy is the right side of (REF ).", "When keeping the one-particle density $ \\rho = \\rho _{\\mathbb {P}} \\in L^1 {\\mathbb 
{R}^d} $ fixed, we denote the minimal grand-canonical free energy by $\\boxed{G_T [\\rho ]:= \\inf _{\\rho _{\\mathbb {P}} = \\rho } \\mathcal {G}_T {\\mathbb {P}}.", "}$ Using (REF ), we obtain $G_T [\\rho ]\\geqslant - [\\big ]{\\kappa + T} \\int _{\\mathbb {R}^d} \\rho + T \\int _{\\mathbb {R}^d} \\rho \\log \\rho ,$ where $ \\kappa $ is the stability constant of $ w $ in Assumption REF .", "Remark 5 (Comparing $F_T$ and $G_T$ ) Since a canonical trial state is automatically also admissible for the grand-canonical minimisation problem (REF ), we have the bound $G_T [\\rho ]\\leqslant F_T [\\rho ]$ for any density $ 0 \\leqslant \\rho \\in L^1 {\\mathbb {R}^d} $ with integer mass.", "Hence, any universal lower energy bound for the grand-canonical ensemble is also a lower bound for the canonical ensemble.", "A natural question to ask is under which condition we have $F_T[\\rho ] =G_T[\\rho ]$ for a density $\\rho $ of integer mass.", "In general this is a difficult problem.", "See [21] for results and comments in this direction at $T=0$ .", "If $\\int _{\\mathbb {R}^d}\\rho =N+t$ with $t\\in (0,1)$ and $N\\in \\mathbb {N}$ , we can write $\\rho =(1-t)\\frac{N}{N+t}\\rho +t\\frac{N+1}{N+t}\\rho $ and obtain after using the concavity of the entropy $G_T [\\rho ]\\leqslant (1-t)\\,F_T \\left[\\frac{N}{N+t}\\rho \\right]+t\\,F_T \\left[\\frac{N+1}{N+t}\\rho \\right].$ This can be used to deduce an upper bound on $G_T[\\rho ]$ , once an upper bound has been established in the canonical case.", "We will see, however, that it is usually much easier to directly prove upper bounds on $G_T[\\rho ]$ than on $F_T[\\rho ]$ .", "Remark 6 (Weak lower semi-continuity) The functional $\\rho \\mapsto G_T[\\rho ]$ is weakly lower semi-continuous and, in fact, a kind of lower continuous envelope of $F_T[\\rho ]$ (see [55], [21]).", "At $T=0$ this uses the lower semi-continuity of $w$ .", "Remark 7 (Duality II) Like in the canonical case, we have the dual formulation $G_T[\\rho ]=T\\int _{\\mathbb {R}^d}\\rho \\log \\rho +\\sup _{\\widetilde{V}}\\bigg \\lbrace -\\int _{\\mathbb {R}^d}\\rho (x) \\widetilde{V}(x)\\,\\mathrm {d}x\\\\-T\\log \\bigg [\\sum _{n\\geqslant 0}\\int _{\\mathbb {R}^{dn}}\\exp \\bigg (-\\frac{1}{T}\\sum _{1\\leqslant j<k\\leqslant n}w(x_j-x_k)-\\frac{1}{T}\\sum _{j=1}^n\\widetilde{V}(x_j)\\bigg )\\mathrm {d}\\rho ^{\\otimes N}\\bigg ]\\bigg \\rbrace ,$ see [6], [5] and the more recent work [21]." 
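Because the Poisson state (REF ) factorizes, the entropy identity behind (REF ) can be checked term by term: each integral $-\\int Q_n\\log (n!\\,Q_n)$ reduces to one-dimensional quadratures. A minimal Python sketch for a discretized one-dimensional density follows (the density, grid, and truncation of the sum at $n=60$ are arbitrary choices):

    import numpy as np
    from math import exp, factorial

    x = np.linspace(-8.0, 8.0, 4001)
    dx = x[1] - x[0]
    rho = 2.5 * np.exp(-x**2) / np.sqrt(np.pi)  # density of total mass lam
    lam = np.sum(rho) * dx
    I = np.sum(rho * np.log(rho)) * dx          # integral of rho log rho

    # Q_n = e^{-lam} rho^{(n)} / n!, so log(n! Q_n) = -lam + sum_i log rho(x_i)
    S = 0.0
    for n in range(60):
        pn = exp(-lam) * lam**n / factorial(n)  # probability of n particles
        S += pn * lam                           # contribution of the -lam part
        if n >= 1:
            S -= exp(-lam) * lam**(n - 1) / factorial(n - 1) * I
    print(S, lam - I)  # the two sides of the entropy identity coincide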
], [ "Representability", "Next we turn to the problem of representability.", "Namely, we are asking what kind of densities $\\rho $ can arise from $N$ –particle probabilities with finite free energy.", "This depends on the shape of the interaction potential $w$ .", "We only address this question for $\\rho \\in L^1(\\mathbb {R}^d)$ and do not look at general measures.", "The main result is that all densities are representable at zero temperature in the non-hard-core case ($\\alpha <\\infty $ ).", "At positive temperature, it is sufficient to assume in addition that $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ .", "Theorem 8 (Representability in the canonical case) Let $\\rho \\in L^1(\\mathbb {R}^d)$ with $\\int _{\\mathbb {R}^d}\\rho (x) \\, \\mathrm {d}x \\in \\mathbb {N}$ .", "There exists a symmetric probability measure $\\mathbb {P}$ on $(\\mathbb {R}^d)^N$ of density $\\rho $ so that $|x_j-x_k|\\geqslant \\delta >0$ $\\mathbb {P}$ –almost everywhere, for some $\\delta >0$ .", "If $w$ satisfies Assumption REF without hard-core ($\\alpha <\\infty $ ), we obtain $F_0[\\rho ]<\\infty $ .", "If furthermore $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ , then $\\mathbb {P}$ can be assumed to have finite entropy and $F_T[\\rho ]<\\infty $ for any $T>0$ .", "The theorem follows from results in optimal transport theory and we quickly outline the proof here for the convenience of the reader.", "In this paper we will prove much more.", "We will in fact need some of these tools and more details will thus be provided later in the paper.", "If $\\int _{\\mathbb {R}^d}\\rho =1$ , we must take $\\mathbb {P}=\\rho $ and end up with $F_T[\\rho ]=T\\int \\rho \\log \\rho $ .", "In the rest of the proof we assume that $\\int _{\\mathbb {R}^d}\\rho \\geqslant 2$ .", "For $\\rho \\in L^1(\\mathbb {R}^d)$ , the existence of $\\mathbb {P}$ is proved in [8].", "The number $\\delta $ must be so that $\\int _{B(x,\\delta )}\\rho <1$ for any $x\\in \\mathbb {R}^d$ , where $B(x,R)$ denotes the ball centered at $x$ and of radius $R$ .", "Such a $\\delta >0$ always exists when $\\rho \\in L^1(\\mathbb {R}^d)$ .", "See Section REF below for more details on the results from [8].", "Next we prove that $\\mathcal {F}_0{\\mathbb {P}}<\\infty $ .", "Since $\\alpha <\\infty $ (no hard-core), we can assume $r_0=1$ .", "We then have $w(x)\\leqslant C_\\delta |x|^{-s}$ for all $|x|\\geqslant \\delta $ , with the constant $C_\\delta =\\kappa (1+\\delta ^{s-\\alpha })$ , due to Assumption REF .", "Hence, on the support of $\\mathbb {P}$ we have $\\sum _{1\\leqslant j<k\\leqslant N}w(x_j-x_k)=\\frac{1}{2}\\sum _{j=1}^N\\sum _{k\\ne j}w(x_j-x_k)\\leqslant \\frac{C_\\delta }{2}N\\max _{\\begin{array}{c}|y_j|\\geqslant \\delta \\\\ |y_j-y_k|\\geqslant \\delta \\end{array}}\\sum _{j=1}^{N-1}\\frac{1}{|y_j|^s}.$ The maximum is bounded by $C\\delta ^{-s}$ independently of $N$ due to [49].", "Integrating with respect to $\\mathbb {P}$ we have proved that $\\mathcal {F}_0{\\mathbb {P}}\\leqslant C_\\delta \\delta ^{-s}N$ .", "This bound is not very explicit but it only depends on $\\delta $ and $N$ .", "Of course, $\\delta $ itself depends on $\\rho $ in a rather indirect way.", "The probability measure $\\mathbb {P}$ obtained by the optimal transport method of [8] is probably a singular measure, hence with an infinite entropy.", "In [9], it is explained how to regularize any given $\\mathbb {P}$ using a method called the Block approximation.", "This method works well for a compactly supported density, for which it 
easily implies $F_T[\\rho ]<\\infty $ .", "We quickly describe the method here and refer to Section REF below for details.", "In short, we split the space into small cubes $\\lbrace \\mathcal {C}_j\\rbrace $ of size proportional to $\\delta $ and introduce the trial probability measure $\\widetilde{\\mathbb {P}}=\\sum _{j_1,...,j_N}\\mathbb {P}(\\mathcal {C}_{j_1}\\times \\cdots \\times \\mathcal {C}_{j_N})\\frac{\\rho {1 }_{\\mathcal {C}_{j_1}}\\otimes \\cdots \\otimes \\rho {1 }_{\\mathcal {C}_{j_N}}}{\\int _{\\mathcal {C}_{j_1}}\\rho \\cdots \\int _{\\mathcal {C}_{j_N}}\\rho }.$ That is, we take a convex combination of independent particles over small cubes with probability $\\mathbb {P}(\\mathcal {C}_{j_1}\\times \\cdots \\times \\mathcal {C}_{j_N})$ .", "Choosing the cubes small enough, we can ensure that $|x_j-x_k|\\geqslant \\delta /2$ on the support of $\\widetilde{\\mathbb {P}}$ and $\\int _{\\mathcal {C}_j}\\rho <1$ .", "A computation gives $\\rho _{\\widetilde{\\mathbb {P}}}=\\rho _{\\mathbb {P}}=\\rho $ .", "The entropy can be estimated by $\\int _{\\mathbb {R}^{dN}}\\widetilde{\\mathbb {P}}\\log (N!\\, \\widetilde{\\mathbb {P}})\\leqslant \\int _{\\mathbb {R}^d} \\rho \\log \\rho -\\sum \\limits _{j} [\\bigg ]{\\int _{\\mathcal {C}_j} \\rho } \\log [\\bigg ]{\\int _{\\mathcal {C}_j} \\rho }$ (see Lemma REF below).", "Estimating the last sum is not an easy task for a general density.", "For a compactly supported density we can simply bound it by $1/e$ times the number of cubes intersecting the support of $\\rho $ .", "Since the energy of $\\widetilde{\\mathbb {P}}$ is finite by the previous argument, we deduce that $F_T[\\rho ]<\\infty $ for any $\\rho $ of compact support.", "It thus remains to explain how to prove that $F_T[\\rho ]$ is finite for a density $\\rho $ of unbounded support.", "The idea is of course to truncate it.", "We choose two radii $R_1<R_2$ so that $\\int _{\\mathbb {R}^d\\setminus B_{R_2}}\\rho =\\int _{B_{R_2}\\setminus B_{R_1}}\\rho =\\frac{1}{2}$ (using here $\\int \\rho \\geqslant 2$ ) and we define for shortness $\\rho _1:=\\rho {1 }_{B_{R_1}}$ , $\\rho _2:=\\rho {1 }_{B_{R_2}\\setminus B_{R_1}}$ and $\\rho _3:=\\rho {1 }_{\\mathbb {R}^d\\setminus B_{R_2}}$ .", "We can write $\\rho =\\frac{\\rho _1+2\\rho _2}{2}+\\frac{\\rho _1+2\\rho _3}{2}$ where $\\int _{\\mathbb {R}^d}(\\rho _1+2\\rho _2)=\\int _{\\mathbb {R}^d}(\\rho _1+2\\rho _3)=N$ .", "From the convexity of $F_T$ we obtain $F_T[\\rho ]\\leqslant \\frac{1}{2}F_T[\\rho _1+2\\rho _2]+\\frac{1}{2}F_T[\\rho _1+2\\rho _3].$ The first density $\\rho _1+2\\rho _2$ has compact support hence has a finite energy, as explained above.", "For the second density $\\rho _1+2\\rho _3$ we use an uncorrelated trial state in the form $\\mathbb {P}_1\\otimes _s (2\\rho _3)$ where $\\mathbb {P}_1$ is also constructed as before, but with $\\rho $ replaced by $\\rho _1$ which has mass $N-1$ .", "Here $\\otimes _s$ means the symmetric tensor product.", "A calculation shows that $F_T[\\rho _1+2\\rho _3]\\leqslant {}& \\mathcal {F}_T\\big (\\mathbb {P}_1\\otimes _s(2\\rho _3)\\big )\\\\={}&\\mathcal {F}_T(\\mathbb {P}_1)+2\\iint _{\\mathbb {R}^{2d}}w(x-y)\\rho _1(x)\\rho _3(y)\\, \\mathrm {d}x \\, \\mathrm {d}y \\\\&+2T\\int \\rho _3\\log (2\\rho _3) \\\\\\leqslant {}& \\mathcal {F}_T(\\mathbb {P}_1)+(N-1)\\sup _{|x|\\geqslant R_2-R_1}|w(x)|+2T\\int _{\\mathbb {R}^d\\setminus B_{R_2}} \\rho \\log (2\\rho ).$ Thus the finiteness for densities of compact support implies the same for all densities.", "In fact, after 
optimizing over $\\mathbb {P}_1$ we have proved the bound $F_T[\\rho ]\\leqslant {}& \\frac{F_T[\\rho _1+2\\rho _2]+F_T[\\rho _1]}{2}\\\\&+\\frac{N-1}{2}\\sup _{|x|\\geqslant R_2-R_1}|w(x)|+T\\int _{\\mathbb {R}^d\\setminus B_{R_2}} \\rho \\log (2\\rho ).$ This concludes the proof of Theorem REF .", "We have not considered here the hard-core potential, to which we will come back later in Section REF .", "Representability is much more delicate in this case.", "From the inequality (REF ), we immediately obtain the following.", "Corollary 9 (Representability in the grand-canonical case) Let $\\rho \\in L^1(\\mathbb {R}^d)$ .", "Then we have $G_0[\\rho ]<\\infty $ if $w$ has no hard-core ($\\alpha <\\infty $ ).", "If furthermore $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ , then $G_T[\\rho ]<\\infty $ for all $T>0$ ." ], [ "Local upper bounds", "Recall that we already have rather simple lower bounds in (REF ) and (REF ).", "The proof of Theorem REF furnishes an upper bound on $F_T[\\rho ]$ but it depends on the smallest distance $\\delta $ between the particles in the system, which is itself a highly nonlinear and nonlocal function of $\\rho $ .", "For non compactly-supported densities, the proof also involves the two radii $R_1,R_2$ which depend on $\\rho $ as well.", "Our goal here is to provide simple local upper bounds involving only integrals of the given density $\\rho $ .", "We start in the next subsection by recalling the simple integrable case at the origin $\\alpha <d$ , for which we can just choose i.i.d. particles.", "The case $\\alpha \\geqslant d$ is much more complicated since particles cannot be allowed to get too close." ], [ "Upper bound in the weakly repulsive case $\\alpha <d$", "In the case where $ w_+ $ is integrable at the origin, it is easy to provide a simple upper bound.", "Theorem 10 (Weakly repulsive case $\\alpha <d$ ) Let $ w $ satisfy Assumption REF with $ \\alpha < d $ .", "Let $ 0 \\leqslant \\rho \\in L^1 {\\mathbb {R}^d} \\cap L^2 {\\mathbb {R}^d} $ with integer mass $ \\int \\rho \\in \\mathbb {N}$ .", "Let also $T\\geqslant 0$ and assume that $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ if $T>0$ .", "Then we have $F_T [\\rho ]&\\leqslant \\frac{1}{2}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d}w(x-y)\\rho (x)\\rho (y)\\,\\mathrm {d}x\\,dy+T\\int _{\\mathbb {R}^d}\\rho \\log \\rho \\nonumber \\\\&\\leqslant \\frac{{w_+}{L^1}}{2} \\int _{\\mathbb {R}^d} \\rho ^2 + T \\int _{\\mathbb {R}^d} \\rho \\log \\rho .$ In the grand-canonical case we have the exact same bound on $G_T[\\rho ]$ , this time without any constraint on $\\int _{\\mathbb {R}^d} \\rho $ and with $\\rho \\log \\rho $ replaced by $\\rho (\\log \\rho -1)$ in the last integral.", "As we have mentioned in the introduction, the functional appearing on the right side of the first line of (REF ) is the so-called Kirkwood-Monroe free energy [46].", "It is the simplest approximation of $F_T[\\rho ]$ and typically arises in the mean-field approximation at high density [33], [36], [4], [77].", "It only makes sense for a locally integrable potential.", "We denote $ N = \\int _{\\mathbb {R}^d} \\rho $ and simply take as a trial state the pure tensor product $ \\mathbb {P}:= {\\rho /N }^{\\otimes N} $ .", "The interaction energy satisfies $\\mathcal {U}_{N} {\\mathbb {P}}={}& \\frac{N {N-1}}{2} \\int _{\\mathbb {R}^{dN}} w {x_1 - x_2} [\\Big ]{\\frac{\\rho }{N} }^{\\otimes N} {x} \\, \\mathrm {d}x_1 \\cdots \\, \\mathrm {d}x_N \\nonumber \\\\={}& \\frac{1-1/N}{2} \\iint _{\\mathbb 
{R}^d\\times \\mathbb {R}^d} w {x_1 - x_2} \\rho {x_1} \\rho {x_2} \\, \\mathrm {d}x_1 \\, \\mathrm {d}x_2\\\\\\leqslant {}& \\frac{{w_+}{L^1}}{2} \\int _{\\mathbb {R}^d} \\rho ^2.\\nonumber $ From the stability condition on $w$ , we know that for any $\\eta \\geqslant 0$ with $\\int \\eta =1$ , $\\mathcal {U}_{K} {\\eta ^{\\otimes K}}=\\frac{K(K-1)}{2}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d} w {x -y} \\eta (x)\\eta (y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\geqslant -\\kappa K.$ Letting $K\\rightarrow \\infty $ , we find $\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d} w {x -y} \\eta (x)\\eta (y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\geqslant 0,\\qquad \\forall \\eta \\geqslant 0.$ This is how the stability is expressed in mean-field theory [56].", "Since the double integral in (REF ) is non-negative, we can remove the $1/N$ for an upper bound.", "The entropy can itself be estimated by $-\\mathcal {S}_N {\\mathbb {P}}={}& \\int _{\\mathbb {R}^{dN}} [\\Big ]{\\frac{\\rho }{N} }^{\\otimes N} \\log [\\Big ]{N!\\,[\\Big ]{\\frac{\\rho }{N} }^{\\otimes N}} \\\\={}& \\log [\\Big ]{\\frac{N!}{N^N}} + \\int _{\\mathbb {R}^d} \\rho \\log \\rho \\leqslant {} \\int _{\\mathbb {R}^d} \\rho \\log \\rho ,$ showing that (REF ) holds.", "In the grand-canonical case we use instead the Poisson state in (REF ) and exactly obtain the mean-field energy on the right side of (REF ) with $\\rho \\log \\rho $ replaced by $\\rho (\\log \\rho -1)$ in the last integral." ], [ "Upper bounds in the strongly repulsive case $\\alpha \\geqslant d$", "When $\\alpha \\geqslant d$ the right side of (REF ) is infinite due to the non-integrability of $w$ at the origin.", "We cannot use a simple uncorrelated probability $\\mathbb {P}$ as a trial state and it is necessary to correlate the particles in such a way that they never get too close to each other.", "The difficulty is to do this at fixed density.", "Also, we expect the typical distance between the particles to depend on the local value of $\\rho $ .", "If we imagine that there are $\\rho (x)$ particles per unit volume in a neighborhood of a point $x$ , then the distance should essentially be proportional to $\\rho (x)^{-1/d}$ .", "We thus expect a bound in terms of $\\rho (x)^{1+\\alpha /d}$ for large densities.", "We can only fully solve this question in the grand-canonical case.", "In the canonical case we can only treat $T=0$ in full.", "The following is our first main result.", "Theorem 11 (Strongly repulsive case $\\alpha \\geqslant d$ ) Suppose that the interaction $ w $ satisfies Assumption REF with $ d \\leqslant \\alpha < \\infty $ .", "Let $T\\geqslant 0$ and assume that for $ T > 0 $ , we have $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ .", "$\\bullet $ In the grand-canonical ensemble, we have for any $ 0 \\leqslant \\rho \\in L^1 {\\mathbb {R}^d} $ , $G_T [\\rho ]&\\leqslant C\\kappa \\int _{\\mathbb {R}^d}\\rho ^2+CT \\int _{\\mathbb {R}^d} \\rho + T \\int _{\\mathbb {R}^d} \\rho \\log \\rho \\nonumber \\\\&\\qquad +{\\left\\lbrace \\begin{array}{ll}\\displaystyle C\\kappa r_0^\\alpha \\int _{\\mathbb {R}^d} \\rho ^{1+\\frac{\\alpha }{d}}&\\text{for $\\alpha >d$,}\\\\[0.4cm]\\displaystyle C\\kappa r_0^d \\left(\\int _{\\mathbb {R}^d} \\rho ^2+\\int _{\\mathbb {R}^d}\\rho ^2\\big (\\log r_0^d\\rho \\big )_+\\right)&\\text{for $\\alpha =d$.}\\end{array}\\right.}$ Here the constant $ C $ only depends on the dimension $ d $ and the powers $\\alpha ,s$ from Assumption REF .", "$\\bullet $ In the canonical ensemble we have the same estimate on $F_T[\\rho ]$ for all $T\\geqslant 0$ in dimension $d=1$ and on $F_0[\\rho ]$ at $T=0$ for $d\\geqslant 2$ , provided of course that $\\rho $ has an integer mass.
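Before turning to the proof, the expected $\\rho ^{1+\\alpha /d}$ scaling can be checked on a toy configuration. The sketch below is our own illustration with hypothetical grid parameters, not a computation from the paper: it evaluates the Riesz core energy of $n^d$ points on a square grid and confirms that it is proportional to $N\\rho ^{\\alpha /d}=\\int \\rho ^{1+\\alpha /d}$ , uniformly in the box size.

```python
import numpy as np
from itertools import product

# Toy check of the heuristic above: for n^d points on a grid in a box of side
# L, the density is rho = n^d / L^d and the core energy
# sum_{j<k} |x_j - x_k|^{-alpha}, alpha > d, should scale like
# N * rho^{alpha/d} = \int rho^{1 + alpha/d}  (here d = 2, alpha = 3).

alpha, d = 3.0, 2

def riesz_energy(n: int, L: float) -> float:
    pts = np.array(list(product(range(n), repeat=d))) * (L / n)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff**2).sum(-1))
    iu = np.triu_indices(len(pts), k=1)    # strictly positive distances only
    return float((dist[iu] ** (-alpha)).sum())

for n, L in [(20, 1.0), (20, 2.0), (40, 1.0)]:
    N, rho = n**d, n**d / L**d
    ratio = riesz_energy(n, L) / (N * rho ** (alpha / d))
    print(f"n = {n:2d}, L = {L}: energy / (N rho^(alpha/d)) = {ratio:.4f}")
```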
In the proof we provide an explicit value for the constant $C$ in (REF ) but we do not display it here since it is by no means optimal and depends on the case considered.", "The parameters $\\kappa $ and $r_0$ can be used to track the origin of the different terms in our bound (REF ).", "The integrable part of the potential gives the $\\rho ^2$ term as it did in Theorem REF .", "The terms involving $r_0^\\alpha $ on the second line are solely due to the divergence of $w$ at the origin.", "It is important that we get here the expected and optimal $\\rho ^{1+\\alpha /d}$ due to the singularity.", "Finally, we have an additional term involving $T\\rho $ which is an error in the entropy due to our construction.", "We otherwise get the optimal $T\\rho \\log \\rho $ .", "In dimension $d=1$ , the proof of Theorem REF is relatively easy, both in the canonical and grand-canonical cases.", "It is detailed for convenience in Section .", "The idea is to split the density $\\rho $ into successive intervals of mass $1/2$ and then write $\\rho =(2\\rho _\\text{odd}+2\\rho _\\text{even})/2$ where $\\rho _\\text{odd}$ is the density restricted to the odd intervals and $\\rho _\\text{even}$ to the even ones.", "We then take a trial state of the form $(\\mathbb {P}_\\text{odd}+\\mathbb {P}_\\text{even})/2$ , where $\\mathbb {P}_\\text{odd}$ corresponds to placing exactly one particle per odd interval at density $2\\rho $ and $\\mathbb {P}_\\text{even}$ is defined similarly.", "This way we have inserted some distance between the particles.", "This distance depends on the form of $\\rho $ in the opposite set of intervals.", "The interaction between the particles can then be easily controlled in terms of $\\rho ^{1+\\alpha /d}$ , as we explain in Section .", "In higher dimensions, there seems to be no general way of splitting $\\mathbb {R}^d$ into disjoint sets containing a fixed mass of $\\rho $ , so that each set has finitely many neighbors at a given distance (except perhaps for very special densities [37]).", "We can however carry over a similar argument as in the 1D case if we allow a covering with intersections.", "The Besicovitch covering lemma [18] allows us to work with cubes $Q_j$ intersecting with finitely many other cubes, such that $\\int _{Q_j}\\rho $ is any given number.", "We can also distribute the $Q_j$ into a finite (universal) number of subcollections so that the cubes in each family are disjoint and not too close to each other.", "For each collection of disjoint cubes we then use a simple tensor product similar to the 1D case.", "The interaction is estimated using that the length of the cubes is related to $\\int _{Q_j}\\rho ^{1+\\alpha /d}$ , leading to a bound involving only $\\int _{\\mathbb {R}^d}\\rho ^{1+\\alpha /d}$ .", "This proof was inspired by the presentation in the recent book [32] of a proof of the Lieb-Thirring and Cwikel-Lieb-Rozenblum inequalities from [78], [79], [97], thus in a completely different context.", "The difficulty here is that we have no information on the number of particles in each subcollection, due to the overlaps.", "This is the reason why the proof works well in the grand-canonical setting, but not in the canonical case.", "The details are given in Section .", "To prove the result in the canonical case at $T=0$ for $d\\geqslant 2$ , we use a completely different method based on optimal transport tools from 
[8].", "As we will explain in Section , the latter work can be used to construct a trial state $\\mathbb {P}$ with $\\rho _\\mathbb {P}=\\rho $ so that the distance between any two given particles on the support can be related to some average local value of the density around the particles.", "This is how we can obtain the bound (REF ) at $T=0$ in the canonical case.", "The next natural step is to smear this trial measure $\\mathbb {P}$ and use it at $T>0$ but we could unfortunately not give an optimal bound on the entropy of the smearing.", "Our bound relies on the local radius $ R {x} $ of a density $ \\rho $ , which is thoroughly studied in Section REF and is defined as follows.", "Let $ 0 \\leqslant \\rho \\in L^1 {\\mathbb {R}^d} $ with $ \\int _{\\mathbb {R}^d} \\rho {y} \\, \\mathrm {d}y > 1 $ .", "For each $ x \\in \\mathbb {R}^d $ , we define the local radius $ R(x)$ to be the largest number satisfying $\\int _{B {x, R {x}}} \\rho {y} \\, \\mathrm {d}y = 1.$ This number is always bounded below for a given $\\rho \\in L^1(\\mathbb {R}^d)$ but behaves like $|x|$ at infinity.", "If $\\rho $ has compact support, then $R(x)$ is bounded on the support of $\\rho $ .", "Theorem 12 (Strongly repulsive case $\\alpha \\geqslant d$ II) Suppose that the interaction $ w $ satisfies Assumption REF with $ 2\\leqslant d \\leqslant \\alpha < \\infty $ .", "Let $T>0$ and $ 0 \\leqslant \\rho \\in L^1 {\\mathbb {R}^d}$ of integer mass with $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ .", "Then we have $F_T [\\rho ]&\\leqslant C(\\kappa +T) \\int _{\\mathbb {R}^d}\\rho ^2+CT \\int _{\\mathbb {R}^d} \\rho + T \\int _{\\mathbb {R}^d} \\rho \\log \\rho +T \\int _{\\mathbb {R}^d} \\rho \\log R^d\\nonumber \\\\&\\qquad +{\\left\\lbrace \\begin{array}{ll}\\displaystyle C\\kappa r_0^\\alpha \\int _{\\mathbb {R}^d} \\rho ^{1+\\frac{\\alpha }{d}}&\\text{for $\\alpha >d$,}\\\\[0.4cm]\\displaystyle C\\kappa r_0^d \\left(\\int _{\\mathbb {R}^d} \\rho ^2+\\int _{\\mathbb {R}^d}\\rho ^2\\big (\\log r_0^d\\rho \\big )_+\\right)&\\text{for $\\alpha =d$,}\\end{array}\\right.", "}$ where the constant $ C $ only depends on the dimension $ d $ and the powers $\\alpha ,s$ from Assumption REF .", "The main difference compared to (REF ) is the additional term $T\\int \\rho \\log R^d$ , which we conjecture should not be present.", "It is only affecting the bound in places where $R$ is large on the support of $\\rho $ , that is, where one cannot find a sufficient amount of mass at a finite distance of $x$ .", "Another small difference is the additional term $CT\\int \\rho ^2$ due to our way of estimating the entropy.", "The proof is detailed in Section REF below.", "The upper bounds in Theorems REF and REF will be very useful for our next work [44] where we study $F_T[\\rho ]$ and $G_T[\\rho ]$ for extended systems.", "The sub-optimal upper bound (REF ) in the canonical case will be sufficient in this context.", "Remark 13 (Lower bounds) Even when $w$ really behaves like $|x|^{-\\alpha }$ at the origin (for instance satisfies $w(x)\\geqslant c|x|^{-\\alpha }$ for some $c>0$ ), a lower bound in the form (REF ) cannot hold in general.", "This is because the density can be large in regions where there is only one particle at a time, which does not create any divergence in the interaction.", "As an example, consider $N$ points $X_1,...,X_N\\in \\mathbb {R}^d$ and place around each point one particle in the state $\\chi _r:=|B_r|^{-1}{1 }_{B_r}$ , with $r$ small enough.", "The corresponding state is the (symmetrization 
of the) tensor product $\\mathbb {P}_r=\\bigotimes _{j=1}^N\\chi _r(\\cdot -X_j)$ .", "Assuming that $w$ is continuous, its interaction energy behaves as $\\lim _{r\\rightarrow 0}\\mathcal {U}_{N}(\\mathbb {P}_r)=\\sum _{1\\leqslant j<k\\leqslant N}w(X_j-X_k)$ hence stays finite, whereas the entropy equals $\\mathcal {S}_{N}(\\mathbb {P}_r)=-N\\int \\chi _r\\log \\chi _r=N\\log (|B_1|r^d)\\underset{r\\rightarrow 0}{\\longrightarrow }-\\infty .$ On the other hand, the right side of (REF ) diverges much faster, like $Nr^{-\\alpha }$ .", "This proves that a lower bound of the form (REF ) cannot hold for all possible densities.", "Nevertheless, it is expected that the term $\\int \\rho ^{1+\\alpha /d}$ should appear when there are many particles in a small domain and is thus optimal in such situations.", "For instance, assuming $w\\geqslant c|x|^{-\\alpha }$ for $|x|\\leqslant r_0$ and taking $\\rho =N|B_{r_0/2}|^{-1}{1 }_{B_{r_0/2}}$ ($N$ particles at uniform density in the small ball), we see that $F_T[\\rho ]\\geqslant \\min _{x_1,...,x_N\\in B_{r_0/2}}\\left( \\sum _{1\\leqslant j<k\\leqslant N}\\frac{c}{|x_j-x_k|^\\alpha }\\right)+TN\\log (N/|B_{r_0/2}|)-TN.$ The first minimum is known to behave like $N^{1+\\alpha /d}r_0^{-\\alpha }$ in the limit $N\\rightarrow \\infty $  [49], which is exactly proportional to $\\int \\rho ^{1+\\alpha /d}$ .", "Thus in this case, the lower bound holds and the power $1+\\alpha /d$ is optimal." ], [ "The hard-core case", "We conclude this section with a discussion of the hard-core case, which is notoriously more difficult [6].", "We start with the question of representability of a given density and then turn to some upper bounds on the free energy." ], [ "Representability", "Let $r_0>0$ be a positive number and consider the hard-core potential $w_{r_0}(x)=(+\\infty ){1 }(|x|<r_0)$ .", "Then we have for any $N$ -particle probability measure $\\mathbb {P}$ $\\mathcal {U}_{N}(\\mathbb {P})={\\left\\lbrace \\begin{array}{ll}0&\\text{if $|x_j-x_k|\\geqslant r_0$ $\\forall j\\ne k$, $\\mathbb {P}$--almost surely,}\\\\+\\infty &\\text{otherwise.}\\end{array}\\right.}$ The set of $\\mathbb {P}$ 's such that $\\mathcal {U}_{N}(\\mathbb {P})=0$ is convex and its extreme points are the symmetric tensor products of Dirac deltas located at distance $\\geqslant r_0$ from each other.", "It follows that the convex set of $w_{r_0}$ –representable densities is the convex hull of the densities in the form $\\rho =\\sum _{j=1}^N \\delta _{x_j},\\qquad \\min _{j\\ne k}|x_j-x_k|\\geqslant r_0.$ There is a similar result in the grand-canonical case.", "In spite of this simple characterization, it seems very hard, in general, to determine whether a given density belongs to this convex set or not.", "In dimension $d=1$ , the problem can be solved exactly.", "Any extreme point (REF ) satisfies $\\rho \\big ([x,x+r_0)\\big )\\leqslant 1,\\qquad \\forall x\\in \\mathbb {R},$ since there is always at most one Dirac delta in any interval of length $r_0$ .", "This property carries over to the whole convex hull of $w_{r_0}$ –representable densities.", "Conversely, any positive measure $\\rho $ with $\\rho (\\mathbb {R})=N$ satisfying (REF ) can be written as a convex combination of Dirac deltas at distance $\\geqslant r_0$ .", "To see this, assume for simplicity $\\rho \\in L^1(\\mathbb {R})$ and define as in [7] the non-decreasing function $t\\mapsto x(t)$ on $(0,N)$ so that $\\int _{-\\infty }^{x(t)}\\rho (s)\\,\\mathrm {d}s=t,\\qquad \\forall t\\in (0,N).$ To avoid any ambiguity when the support of $\\rho $ is not connected, we can choose $x(t)$ to be the largest possible real number satisfying the above condition.
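The following sketch (our own illustration with a hypothetical density, not code from [7]) implements the map $x(t)$ by linear interpolation of the cumulative mass and checks the hard-core gaps of the configurations $\\lbrace x(t+k)\\rbrace $ .

```python
import numpy as np

# 1D construction: x(t) is a generalized inverse of the cumulative mass of
# rho, and the configurations {x(t), x(t+1), ...} inherit gaps >= r0 from the
# window condition rho([x, x + r0)) <= 1.

r0 = 0.5
grid = np.linspace(-20.0, 20.0, 400001)
dx = grid[1] - grid[0]
rho = 1.5 * np.exp(-np.abs(grid))          # peaked density; window mass < 1
rho *= 3.0 / (rho.sum() * dx)              # normalize the total mass to N = 3
cdf = np.cumsum(rho) * dx                  # strictly increasing since rho > 0

def x_of(t: float) -> float:
    """Quantile map x(t), computed by linear interpolation of the CDF."""
    return float(np.interp(t, cdf, grid))

# Exactly one unit of mass sits between x(t+k) and x(t+k+1), while any window
# of length r0 carries mass < 1 here, so consecutive gaps must exceed r0.
for t in np.linspace(0.05, 0.95, 19):
    pts = [x_of(t + k) for k in range(3)]
    assert np.all(np.diff(pts) >= r0), (t, np.diff(pts))
print("all configurations {x(t+k)} respect the hard-core distance r0")
```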
The function $t\\mapsto x(t)$ is differentiable, except possibly on a countable set, with $x^{\\prime }(t)=\\rho (x(t))^{-1}$ .", "When $\\rho >0$ almost surely, we have $\\lim _{t\\rightarrow 0^+}x(t)=-\\infty $ and $\\lim _{t\\rightarrow N^-}x(t)=+\\infty $ .", "From the definition of $x(t)$ we have $\\rho =\\int _0^N\\delta _{x(t)}\\,\\mathrm {d}t.$ Indeed, if we integrate the right side against some continuous function $f$ we find $\\int _0^N f(x(t))\\,\\mathrm {d}t=\\int _\\mathbb {R}f(s)\\rho (s)\\,\\mathrm {d}s$ after changing variable $s=x(t)$ .", "Now we can also rewrite (REF ) as $\\rho =\\int _0^1\\sum _{k=0}^{N-1}\\delta _{x(t+k)}\\,\\mathrm {d}t.$ By definition of $x(t)$ we have $\\int _{x(t+k)}^{x(t+k+1)}\\rho (s)\\,\\mathrm {d}s=1,\\qquad \\forall k=0,...,N-2,\\quad \\forall t\\in (0,1),$ and therefore $|x(t+k+1)-x(t+k)|\\geqslant r_0$ when the condition (REF ) is satisfied.", "Hence (REF ) is the sought-after convex combination of deltas located at distance $\\geqslant r_0$ .", "The corresponding $N$ -particle probability is $\\mathbb {P}=\\Pi _s\\int _0^1\\delta _{x(t)}\\otimes \\delta _{x(t+1)}\\otimes \\cdots \\otimes \\delta _{x(t+N-1)}\\,\\mathrm {d}t$ where $\\Pi _s {f_1 \\otimes \\cdots \\otimes f_N}= \\frac{1}{N!}\\sum \\limits _{\\sigma \\in \\mathfrak {S}_N} f_{\\sigma {1}} \\otimes \\cdots \\otimes f_{\\sigma {N}},$ is the symmetrization operator.", "At positive temperature, the previous state can be regularized using the block approximation described in the proof of Theorem REF , provided that $\\int _\\mathbb {R}\\rho |\\log \\rho |<\\infty $ and (REF ) holds with a strict inequality.", "In dimensions $d\\geqslant 2$ , the situation is much less clear.", "The condition (REF ) can be re-expressed in the form $\\boxed{R_\\rho :=\\min _{x\\in \\mathbb {R}^d}R(x)\\geqslant \\frac{r_0}{2}}$ where $R(x)$ is the radius previously defined in (REF ).", "This can also be written in the form $\\int _{B(x,r_0/2)}\\rho \\leqslant 1,\\qquad \\forall x\\in \\mathbb {R}^d.$ This is definitely a necessary condition for a density to be $w_{r_0}$ –representable, in dimension $d\\geqslant 1$ .", "Otherwise we would be able to find an $x\\in \\mathbb {R}^d$ and an $R<r_0/2$ such that $\\int _{B(x,R)}\\rho >1$ .", "But then the probability that there are at least two particles in the ball $B(x,R)$ cannot vanish for any $\\mathbb {P}$ of density $\\rho $ and those are at distance $<r_0$ .", "This was already mentioned in [6].", "For $d\\geqslant 2$ the condition (REF ) is definitely not sufficient for a density to be representable.", "A counterexample arises naturally within the sphere packing problem.", "Recall that the $d$ -dimensional sphere packing density $\\rho _c(d):=\\lim _{\\ell \\rightarrow \\infty }\\frac{\\max \\lbrace N\\ :\\ \\exists x_1,...,x_N\\in \\Omega _\\ell ,\\ |x_j-x_k|\\geqslant 1\\rbrace }{|\\Omega _\\ell |}$ gives the maximal number of points per unit volume one can put while ensuring that they are at distance $\\geqslant 1$ to each other.", "Here $\\Omega $ is any fixed smooth domain and $\\Omega _\\ell =\\ell \\Omega $ .", "The packing density equals $\\rho _c(1)=1$ in dimension $d=1$ and is otherwise only known in dimensions $ d\\in \\lbrace 2,3,8,24\\rbrace $ , for which it is given by some special lattices [13], [96].", "The sphere packing fraction is defined by $v_c(d):=\\rho _c (d)|B_{1/2}|=2^{-d}\\rho _c 
(d)|B_{1}|$ and represents the fraction of the volume occupied by the balls.", "This is simply $v_c(1)=1$ in dimension $d=1$ but is strictly less than 1 for $d\\geqslant 2$ .", "Some volume has to be left unoccupied due to the impossibility to fill space with disjoint balls of fixed radius.", "It has been shown that $v_c(d)$ tends to 0 exponentially fast in the limit $d\\rightarrow \\infty $ but its exact behavior is still unknown [94].", "Let us now consider a constant density $\\rho (x)=\\rho _0{1 }_{\\Omega _\\ell }(x)$ over a large domain $\\Omega _\\ell =\\ell \\Omega $ (for instance a ball).", "Then we have $R(x)=(\\rho _0|B_1|)^{-1/d}$ well inside $\\Omega _\\ell $ , whereas $R(x)\\geqslant (\\rho _0|B_1|)^{-1/d}$ close to the boundary.", "This shows that for this density $R_\\rho =\\min _{x\\in \\mathbb {R}^d}R(x)=(\\rho _0|B_1|)^{-\\frac{1}{d}}=\\frac{r_0}{2} [\\bigg ]{ \\frac{r_0^{-d}\\rho _c(d)}{\\rho _0v_c(d)} }^{\\frac{1}{d}}.$ In particular, the condition (REF ) is satisfied whenever $\\rho _0\\leqslant r_0^{-d}\\rho _c(d)/v_c(d)$ .", "On the other hand, it is clear from the packing problem (rescaled by $r_0$ ) that when $\\rho _0>r_0^{-d}\\rho _c(d)$ the density cannot be representable for $\\ell $ large enough.", "Otherwise we would be able to place $N=\\rho _0|\\Omega _\\ell |>r_0^{-d}\\rho _c(d)|\\Omega _\\ell |$ points in $\\Omega _\\ell $ at distance $r_0$ , which contradicts the definition of $\\rho _c(d)$ .", "In conclusion, we have found that, in dimensions $d\\geqslant 2$ , constant densities $\\rho _0{1 }_{\\Omega _\\ell }$ with $r_0^{-d}\\rho _c(d)<\\rho _0\\leqslant \\frac{r_0^{-d}\\rho _c(d)}{v_c(d)}$ satisfy (REF ) but cannot be $w_{r_0}$ –representable for $\\ell \\gg 1$ .", "As a side remark, we mention that there are representable densities satisfying (REF ), with $R_\\rho $ as close as we want to $r_0/2$ .", "We can just take the sum of two Dirac deltas placed at distance $R\\geqslant r_0$ or a smooth approximation of it.", "This proves that there cannot exist a simple necessary and sufficient condition of hard core representability involving $R_\\rho $ only, in dimensions $d\\geqslant 2$ .", "This is in stark contrast with the one-dimensional case.", "There exists, however, a simple sufficient condition in a form that was conjectured in [6].", "In [8] (see also Theorem REF below), it is proved that any density satisfying $\\boxed{R_\\rho \\geqslant r_0}$ is $w_{r_0}$ –representable.", "The same holds when $T>0$ if one puts a strict inequality.", "It would be interesting to know if such a result is valid for $R_\\rho \\geqslant c_d r_0$ with $c_d<1$ , depending on the dimension.", "The conclusion of our discussion is that there seems to exist no simple characterization of hard core representability in dimensions $d\\geqslant 2$ , involving averages of $\\rho $ over balls.", "There are necessary or sufficient conditions but they do not match." 
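The window (REF ) can be made fully explicit in two dimensions, where the packing constants are known exactly. The following is our own numerical illustration, not a computation from the paper.

```python
import math

# In d = 2 the packing density and packing fraction are known exactly:
# rho_c(2) = 2/sqrt(3) (hexagonal lattice) and v_c(2) = pi/(2*sqrt(3)) ~ 0.9069.
# The window of constant densities satisfying the ball condition (REF) yet
# not w_{r0}-representable is  r0^{-2} rho_c(2) < rho_0 <= r0^{-2} rho_c(2)/v_c(2).

d = 2
rho_c = 2.0 / math.sqrt(3.0)             # points per unit area at distance >= 1
v_c = 2.0**(-d) * rho_c * math.pi        # v_c(d) = 2^{-d} rho_c(d) |B_1|
r0 = 1.0                                 # hard-core range (illustrative)

lower = rho_c / r0**d                    # rho_0 > lower: never representable
upper = lower / v_c                      # rho_0 <= upper: (REF) still holds
print(f"non-representable window: {lower:.4f} < rho_0 <= {upper:.4f}")

for rho0 in (0.5 * lower, lower, upper):
    R = (rho0 * math.pi) ** (-1.0 / d)   # R_rho = (rho_0 |B_1|)^{-1/d}
    print(f"rho_0 = {rho0:.4f}: R_rho = {R:.4f} (compare with r0/2 = {r0/2})")
```

At $\\rho _0$ equal to the upper endpoint one finds $R_\\rho =r_0/2$ exactly, confirming that the constant densities in this window saturate (REF ) while exceeding the packing density.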
], [ "Upper bounds", "Next we discuss upper bounds in the hard core case.", "Even if we do not completely understand when a density is hard-core representable, the energy is very easy to bound when it is the case.", "Let us assume that $w$ satisfies Assumption REF with $\\alpha =+\\infty $ and that $\\rho \\in L^1(\\mathbb {R}^d)$ is $w$ –representable.", "For simplicity we also assume that $w=+\\infty $ on $B_{r_0}$ .", "Then, for any optimizer $\\mathbb {P}$ , we have $|x_j-x_k|\\geqslant r_0$ for $j\\ne k$ , $\\mathbb {P}$ –almost surely.", "This implies $F_0[\\rho ]= \\mathcal {U}_N(\\mathbb {P})\\leqslant \\int _{(\\mathbb {R}^d)^N}\\sum _{1\\leqslant j<k\\leqslant N} \\frac{\\kappa {1 }(|x_j-x_k|\\geqslant r_0)}{|x_j-x_k|^s}\\,\\mathrm {d}\\mathbb {P}\\leqslant C\\kappa Nr_0^{-s}$ by [49].", "The constant $C$ only depends on $s$ and $d$ .", "Upper bounds are easy once we know that the particles cannot get too close.", "Constructing trial states with a good entropy is more difficult.", "Our proofs of Theorems REF and REF work in the hard-core case, but they require additional conditions, of the form $R_\\rho >r_0\\qquad \\text{or}\\qquad \\int _{B(x,r_0/2)}\\rho \\leqslant \\varepsilon $ for a sufficiently small $\\varepsilon $ .", "We do not state the corresponding results here and rather refer the reader to Remarks REF , REF , and REF below.", "In the rest of this section we quickly discuss the grand-canonical 1D case which has been studied in a famous paper of Percus [67] and the situation where $\\rho $ is bounded uniformly." ], [ "The 1D grand-canonical Percus formula", "The grand-canonical inverse problem was completely solved by Percus in dimension $d=1$ in [67] (see also [82]).", "Under the optimal assumption that $R_\\rho >r_0/2$ , he proved that the grand-canonical Gibbs state with external potential $V(x)=-\\log \\rho (x)+\\log \\left(1-\\int _{x-r_0}^x\\rho \\right)-\\int _x^{x+r_0}\\frac{\\rho (s)}{1-\\int _{s-r_0}^s\\rho }\\,\\mathrm {d}s$ and hard-core $w_{r_0}$ has the density $\\rho $ .", "Since the potential $\\widetilde{V}=V+\\log \\rho $ solves the supremum in the dual formula (REF ), we obtain $\\boxed{G_T[\\rho ]= T\\int _\\mathbb {R}\\rho (x)\\big (\\log \\rho (x)-1\\big )\\,\\mathrm {d}x-T\\int _\\mathbb {R}\\rho (x)\\log \\left(1-\\int _{x-r_0}^x\\rho \\right)\\mathrm {d}x}$ for the hard core potential $w_{r_0}$ .", "This explicit expression shows us that, in one dimension, the nonlocality is solely due to the second logarithmic term, which involves the local average $\\int _{x-r_0}^x\\rho $ over a window of length $r_0$ .", "This is further discussed in [82].", "For a general potential $w$ satisfying Assumption REF , we only obtain an upper bound and need to add $C\\kappa r_0^{-s}\\int _\\mathbb {R}\\rho $ by (REF ).", "We can estimate the logarithm by assuming, for instance, that $\\int _{x-r_0}^x\\rho \\leqslant 1-\\varepsilon $ for all $x\\in \\mathbb {R}$ .", "To our knowledge the canonical problem was never solved in the manner of Percus.", "It would be interesting to derive an upper bound on $F_T[\\rho ]$ of the same form as the right side of (REF ).", "In dimensions $d\\geqslant 2$ we have no simple criterion of representability, as we have seen.", "One simpler situation is when $\\rho $ is everywhere bounded above by the sphere packing density, which we have defined in (REF ).", "Then we can prove it is representable and furnish an explicit upper bound on its grand-canonical free energy.", "Theorem 14 (Hard-core case with packing density bound) Assume 
that $w$ satisfies Assumption REF with $\\alpha =+\\infty $ .", "Let $\\rho _c(d)$ be the sphere packing density in (REF ) and $v_c(d)=2^{-d}\\rho _c(d)|B_1|$ be the volume fraction.", "Let $\\rho \\in L^1 {\\mathbb {R}^d,\\mathbb {R}_+}$ be such that $\\rho (x)\\leqslant (1-\\varepsilon )^dr_0^{-d}\\rho _c(d)$ for some $\\varepsilon \\in (0,1)$ .", "We also assume that $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ if $ T > 0 $ .", "Then $G_T[\\rho ]\\leqslant \\frac{C\\kappa }{r_0^s}\\int _{\\mathbb {R}^d}\\rho +T\\int _{\\mathbb {R}^d}\\rho \\log \\rho +T\\log \\left(\\frac{2^d}{\\varepsilon ^d v_c(d)}\\right)\\int _{\\mathbb {R}^d}\\rho ,$ with a constant $C$ depending only on the dimension $d$ and the power $s$ from Assumption REF .", "The idea of the proof is to first construct a trial state for a constant density $\\rho _0\\approx (1-\\varepsilon )^dr_0^{-d}\\rho _c(d)$ by using a periodic sphere packing with a large period, uniformly averaged over translations (often called a “floating crystal” [53]).", "We then “geometrically localize” [48] this state to make it have density $\\rho $ .", "The proof is detailed later in Section ." ], [ "Proof of Theorem ", "We start with the one-dimensional canonical case, for which the argument is relatively easy.", "We detail the proof for the convenience of the reader and because this will pave the way for the more complicated covering methods in higher dimensions.", "We only consider here the canonical case.", "The grand-canonical bound (REF ) follows using (REF ), but in the next section we will provide a direct proof in the grand-canonical case which also works in dimension $d=1$ .", "Theorem 15 ($ d = 1 $ ) Suppose the interaction $ w $ satisfies Assumption REF with $ 1 \\leqslant \\alpha < \\infty $ .", "Let $ T \\geqslant 0 $ and assume that $ \\int _{\\mathbb {R}} \\rho |\\log \\rho | < \\infty $ for $ T > 0 $ .", "Then for any density $ 0 \\leqslant \\rho \\in L^1 {\\mathbb {R}} $ with $ \\int _{\\mathbb {R}} \\rho \\in \\mathbb {N}$ , we have $F_T {\\rho }\\leqslant {}& \\frac{4 \\kappa s}{s-1} \\int _{\\mathbb {R}} \\rho ^2 + \\log {2} T \\int _{\\mathbb {R}} \\rho + T \\int _{\\mathbb {R}} \\rho \\log \\rho \\nonumber \\\\&+ {\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{2^{3+2\\alpha }}{\\alpha -1} \\kappa r_0^\\alpha \\int _{\\mathbb {R}} \\rho ^{1+\\alpha } &\\text{for $\\alpha >1$,}\\\\[0.4cm]\\displaystyle 2^5 \\kappa r_0 [\\bigg ]{2 \\log {2} \\int _{\\mathbb {R}} \\rho ^2+\\int _{\\mathbb {R}}\\rho ^2 [\\big ]{\\log r_0 \\rho }_+ } &\\text{for $\\alpha =1$.}\\end{array}\\right.}$ Denoting $ N = \\int _{\\mathbb {R}} \\rho $ , we can split the real line $ \\mathbb {R}$ into two families $ {L_j}_{j=1}^N $ , $ {L_j^{\\ast }}_{j=1}^N $ of disjoint intervals in such a way that the mass of $ \\rho $ in each of these intervals is exactly $ \\int _{L_j} \\rho = 1/2 $ , and such that each $ L_j $ has neighboring intervals only among the $ L_j^{\\ast } $ , and vice versa (see fig:intervals).", "Figure: Sketch of intervals.", "This allows us to write $\\rho = \\frac{1}{2} [\\Big ]{ \\sum \\limits _j 2 \\rho {1}_{L_j} + \\sum \\limits _j 2 \\rho {1}_{L_j^{\\ast }}},$ a convex combination of two measures with mass equal to $ N $ .", "As trial states for each of these, we take the symmetric tensor products $\\mathbb {Q}= \\Pi _s [\\Big ]{\\bigotimes \\limits _j {2 \\rho {1}_{L_j}} },\\qquad \\mathbb {Q}^{\\ast } = \\Pi _s [\\Big ]{\\bigotimes \\limits _j {2 \\rho {1}_{L_j^{\\ast }}} },$ where $ \\Pi _s $ denotes the symmetrization operator in (REF ).
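The splitting into intervals of mass $1/2$ is straightforward to realize from the cumulative mass of $\\rho $ , as in the following sketch (our own illustration with a hypothetical Gaussian density).

```python
import numpy as np

# Interval construction behind the 1D bound: cutting the line where the
# cumulative mass of rho hits k/2 produces 2N intervals of mass exactly 1/2;
# taking them alternately gives the families {L_j} and {L_j^*}, so every
# neighbor of an interval in one family belongs to the other.

N = 4                                       # integer total mass
grid = np.linspace(-15.0, 15.0, 300001)
dx = grid[1] - grid[0]
rho = np.exp(-grid**2 / 2.0)
rho *= N / (rho.sum() * dx)                 # normalize the mass to N
cdf = np.cumsum(rho) * dx

cuts = np.interp(np.arange(1, 2 * N) / 2.0, cdf, grid)   # CDF = 1/2, 1, ...
edges = np.concatenate(([-np.inf], cuts, [np.inf]))
intervals = list(zip(edges[:-1], edges[1:]))
L, L_star = intervals[0::2], intervals[1::2]             # alternate families

for a, b in intervals:                      # each interval carries mass 1/2
    mass = rho[(grid > a) & (grid <= b)].sum() * dx
    assert abs(mass - 0.5) < 1e-3, mass
print(len(L), "intervals L_j and", len(L_star), "intervals L_j^*, mass 1/2 each")
```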
Then the state $\\mathbb {P}:= \\frac{1}{2} {\\mathbb {Q}+ \\mathbb {Q}^{\\ast }}$ has one-body density equal to $ \\rho _{\\mathbb {P}} = \\rho $ .", "Using that the intervals $ {L_j} $ are all disjoint, we have for instance $- \\mathcal {S}_N {\\mathbb {Q}}={}& \\int _{\\mathbb {R}^N} \\frac{1}{N!}\\sum \\limits _{\\sigma \\in S_N} \\bigotimes _j {2 \\rho {1}_{L_{\\sigma {j}}}} \\log [\\Big ]{\\bigotimes _j {2 \\rho {1}_{L_{\\sigma {j}}}}} \\\\={}& \\sum \\limits _{j=1}^N \\int _{\\mathbb {R}} 2 \\rho {1}_{L_j} \\log {2 \\rho {1}_{L_j}}={} \\int _{\\bigcup _j L_j} 2 \\rho \\log {2 \\rho },$ and similarly for $ \\mathbb {Q}^{\\ast } $ .", "By concavity of the entropy, we conclude that $- \\mathcal {S}_N {\\mathbb {P}}\\leqslant {} - \\frac{1}{2} \\mathcal {S}_N {\\mathbb {Q}} - \\frac{1}{2} \\mathcal {S}_N {\\mathbb {Q}^{\\ast }}={} \\log 2 \\int _{\\mathbb {R}} \\rho + \\int _{\\mathbb {R}} \\rho \\log \\rho .$ To estimate the interaction energy in the state $ \\mathbb {P}$ , it suffices to provide an estimate for both $ \\mathbb {Q}$ and $ \\mathbb {Q}^{\\ast } $ .", "We write here the argument only for $ \\mathbb {Q}$ , since the argument for $ \\mathbb {Q}^{\\ast } $ is exactly the same.", "By Assumption REF and the construction of $ \\mathbb {Q}$ , we immediately have $\\mathcal {U}_N {\\mathbb {Q}}={}& \\iint _{\\mathbb {R}^2} w {x-y} \\rho _{\\mathbb {Q}}^{(2)} {x,y} \\, \\mathrm {d}x \\, \\mathrm {d}y \\\\\\leqslant {}& 4 \\kappa \\sum \\limits _{i < j} \\iint _{\\mathbb {R}^2} [\\Big ]{\\frac{r_0^{\\alpha } {1} {{x - y} < r_0}}{{x - y}^{\\alpha }} + w_2 {x - y} } \\times \\nonumber \\\\&\\qquad \\qquad \\qquad \\qquad \\times \\rho {x} {1}_{L_i} {x} \\rho {y} {1}_{L_j} {y} \\, \\mathrm {d}x \\, \\mathrm {d}y$ where $ w_2 {x} = {1 + {x}^s}^{-1} $ and $\\rho _{\\mathbb {Q}}^{(2)}$ is the two-particle correlation function.", "For the contribution from the tail of the interaction, we have by Young's inequality $\\sum \\limits _{i < j} \\iint _{\\mathbb {R}^2} w_2 {x - y} \\rho {x} {1}_{L_i} {x} \\rho {y} {1}_{L_j} {y} \\, \\mathrm {d}x \\, \\mathrm {d}y \\\\\\leqslant {}& \\frac{1}{2} \\iint _{\\mathbb {R}^2} w_2 {x-y} \\rho {x} \\rho {y} \\, \\mathrm {d}x \\, \\mathrm {d}y \\\\\\leqslant {}& \\frac{\\left\\Vert w_2 \\right\\Vert _{L^1}}{2} \\int _{\\mathbb {R}} \\rho ^2\\leqslant {} \\frac{s}{s-1} \\int _{\\mathbb {R}} \\rho ^2.$ From the core of $ w $ we get $4 \\sum \\limits _{i < j} \\iint _{\\mathbb {R}^2} \\frac{{1} {{x - y} < r_0}}{{x - y}^{\\alpha }} \\rho {x} {1}_{L_i} {x} \\rho {y} {1}_{L_j} {y} \\, \\mathrm {d}x \\, \\mathrm {d}y\\leqslant \\sum \\limits _{i < j} \\frac{{1}_{\\mathrm {d}{L_i,L_j} < r_0}}{\\mathrm {d}{L_i,L_j}^{\\alpha }}.$ The idea now is to use the intervals $ {L_j^{\\ast } } $ to estimate the sum above.", "For each $ i $ we denote by $ \\eta _i $ the minimal length of the intervals neighboring $ L_i $ , $\\eta _i = \\min \\big \\lbrace \\ell _j^{\\ast }\\ :\\ \\mathrm {d}{L_i, L_j^{\\ast }}= 0\\big \\rbrace ,$ where $ \\ell _j^{\\ast } := {L_j^{\\ast }} $ is the interval length, and we re-order the collection $ {L_i} $ such that $ \\eta _1 \\leqslant \\cdots \\leqslant \\eta _N $ .", "Fixing the index $ i $ , we now clearly have for $ j > i $ , $\\mathrm {d}{L_i, L_j} \\geqslant \\eta _j \\geqslant \\eta _i,$ in particular, $ \\eta _i $ is smaller than the side length of any interval neighboring $ L_j $ .", "Pick $ x_j \\in \\overline{L_i} $ and $ y_j \\in \\overline{L_j} $ such that $ \\mathrm {d}{L_i, L_j} = {x_j-y_j} $ , and let $ L_k^{\\ast } $ be the neighboring interval of $ 
L_j $ facing $ y_j $ , that is, $ \\mathrm {d}{y_j, L_k^{\\ast }} = 0 $ .", "Defining $\\widetilde{L}_j := {y_j-\\eta _i/2, y_j+\\eta _i /2} \\cap L_k^{\\ast },$ then $ {\\widetilde{L}_j} = \\eta _i / 2 $ , and $ \\eta _i /2 \\leqslant {x_j-y} \\leqslant {x_j-y_j} $ for all $ y \\in \\widetilde{L}_j $ , so we can estimate $\\frac{{1}_{\\mathrm {d}{L_i,L_j} < r_0}}{\\mathrm {d}{L_i,L_j}^{\\alpha }}\\leqslant {} \\frac{2}{\\eta _i} \\int _{\\widetilde{L}_j} \\frac{{1} {{x_j - y} < r_0 }}{{x_j - y}^{\\alpha }} \\, \\mathrm {d}y= \\frac{2}{\\eta _i} \\int _{\\widetilde{L}_j-x_j} \\frac{{1} {{y} < r_0 }}{{y}^{\\alpha }} \\, \\mathrm {d}y.$ Now summing over $ j $ gives $\\sum \\limits _{j = i+1}^N \\frac{{1}_{\\mathrm {d}{L_i,L_j} < r_0}}{\\mathrm {d}{L_i,L_j}^{\\alpha }}\\leqslant {}& \\frac{2}{\\eta _i} \\int _{\\mathbb {R}} \\frac{{1} {\\eta _i /2 \\leqslant {y} < r_0 }}{{y}^{\\alpha }} \\, \\mathrm {d}y={} \\frac{4}{\\eta _i} [\\bigg ]{ \\int _{\\eta _i / 2}^{r_0} \\frac{1}{{y}^{\\alpha }} \\, \\mathrm {d}y }_+ \\nonumber \\\\\\leqslant {}& {\\left\\lbrace \\begin{array}{ll}\\displaystyle \\frac{2^{1+\\alpha }}{\\alpha -1} \\frac{1}{\\eta _i^{\\alpha }} &\\text{for $\\alpha >1$,}\\\\[0.4cm]\\displaystyle \\frac{4}{\\eta _i} [\\Big ]{\\log [\\Big ]{\\frac{2 r_0}{\\eta _i}} }_+ &\\text{for $\\alpha =1$.}\\end{array}\\right.", "}$ By Hölder's inequality, we have by construction of the intervals $ L_j^{\\ast } $ $\\frac{1}{{\\ell _j^{\\ast }}^{\\alpha }} \\leqslant 2^{1+\\alpha } \\int _{L_j^{\\ast }} \\rho ^{1+\\alpha },$ for any $ \\alpha > 1 $ , so in this case we conclude that $\\sum \\limits _{i < j} \\frac{{1}_{\\mathrm {d}{L_i,L_j} < r_0}}{\\mathrm {d}{L_i,L_j}^{\\alpha }}\\leqslant {} \\sum \\limits _{i=1}^N \\frac{2^{1+\\alpha }}{\\alpha -1} \\frac{1}{\\eta _i^{\\alpha }}\\leqslant {} \\frac{2^{2+\\alpha }}{\\alpha -1} \\sum \\limits _{i=1}^N \\frac{1}{{\\ell _i^{\\ast }}^{\\alpha }}\\leqslant {} \\frac{2^{3+2\\alpha }}{\\alpha -1} \\sum \\limits _{i=1}^N \\int _{L_i^{\\ast }} \\rho ^{1+\\alpha }.$ The same bound holds for the interaction energy of $ \\mathbb {Q}^{\\ast } $ , but with the intervals $ L_i^{\\ast } $ replaced by $ L_i $ at the end.", "This finishes the proof of the $ \\alpha > 1 $ case in (REF ).", "To finish the $ \\alpha = 1 $ case, we note that applying Jensen's inequality on the function $ t \\mapsto t^2 { \\log {2\\lambda t}}_+ $ for $ \\lambda > 0 $ yields $\\frac{1}{\\ell _j^{\\ast }} [\\Big ]{\\log [\\Big ]{ \\frac{\\lambda }{\\ell _j^{\\ast }} }}_+={} 4 \\ell _j^{\\ast } [\\Big ]{\\frac{1}{\\ell _j^{\\ast }} \\int _{L_j^{\\ast }} \\rho }^2 [\\Big ]{ \\log [\\Big ]{ \\frac{2 \\lambda }{\\ell _j^{\\ast }} \\int _{L_j^{\\ast }} \\rho }}_+\\leqslant {} 4 \\int _{L_j^{\\ast }} \\rho ^2 { \\log {2 \\lambda \\rho } }_+.$ Hence, continuing from (REF ), we get $\\sum \\limits _{i < j} \\frac{{1}_{\\mathrm {d}{L_i,L_j} < r_0}}{\\mathrm {d}{L_i,L_j}^{\\alpha }}\\leqslant {} \\sum \\limits _{i=1}^N \\frac{4}{\\eta _i} [\\Big ]{\\log [\\Big ]{\\frac{2 r_0}{\\eta _i}} }_+\\leqslant {} &8 \\sum \\limits _{i=1}^N \\frac{1}{\\ell _i^{\\ast }} [\\Big ]{\\log [\\Big ]{\\frac{2 r_0}{\\ell _i^{\\ast }}} }_+ \\\\\\leqslant {}& 2^5 \\sum \\limits _{i=1}^N \\int _{L_i^{\\ast }} \\rho ^2 { \\log {4 r_0 \\rho } }_+.$ Since the corresponding bound also holds for $ \\mathbb {Q}^{\\ast } $ , this concludes the proof.", "Remark 16 (Hard-core case) In the case where $ w $ has a hard-core with range $ r_0 > 0 $ , it follows from the proof above that $F_T {\\rho }\\leqslant [\\Big ]{\\frac{4 
\\kappa s}{{s-1} r_0} +\\log {2} T} \\int _{\\mathbb {R}} \\rho + T \\int _{\\mathbb {R}} \\rho \\log \\rho $ for any density $ \\rho \\in L^1 {\\mathbb {R}} $ satisfying the (sub-optimal) condition $\\int _{x}^{x+r_0} \\rho \\leqslant \\frac{1}{2}$ for all $ x \\in \\mathbb {R}$ ." ], [ "Proof of Theorem ", "In the course of our proof we need to cover the support of our density using disjoint cubes separated by a distance depending on the local value of the density, in order to have a reasonable control of the interaction.", "We obtain such a covering by a variant of the Besicovitch lemma [18], which we first describe in this subsection.", "It is different from the standard formulation.", "For simplicity we work with a compactly supported density $\\rho $ with $\\int _{\\mathbb {R}^d}\\rho >1$ .", "For every $x\\in \\mathbb {R}^d$ , we define $\\ell (x)$ to be the largest number such that $\\int _{x+\\ell (x)\\mathcal {C}}\\rho (y)\\,\\mathrm {d}y=\\frac{1}{3^d(4^d+1)},$ where $\\mathcal {C}=(-1/2,1/2)^d$ is the unit cube centered at the origin.", "It is convenient to work with cubes instead of balls.", "It is important that the chosen value of the integral in (REF ) is universal and only depends on the space dimension $d$ .", "This value is motivated by the estimates which will follow; it could be any fixed number $<1$ at this point.", "The number $\\ell (x)$ always exists since the full integral is larger than 1.", "The function $x\\mapsto \\ell (x)$ is upper semi-continuous.", "To simplify our notation we denote by $\\mathcal {C}(x):=x+\\ell (x)\\mathcal {C}$ the cube centered at $x$ of side length $\\ell (x)$ .", "By Hölder's inequality we get $\\frac{1}{3^d(4^d+1)}=\\int _{\\mathcal {C}(x)}\\rho \\leqslant \\ell (x)^{\\frac{\\alpha d}{\\alpha +d}} [\\bigg ]{ \\int _{\\mathcal {C}(x)}\\rho ^{1+\\frac{\\alpha }{d}} }^{\\frac{d}{d+\\alpha }}$ and thus obtain the estimate $\\frac{1}{\\ell (x)^\\alpha }\\leqslant 3^{\\alpha +d}(4^d+1)^{1+\\frac{\\alpha }{d}}\\int _{\\mathcal {C}(x)}\\rho ^{1+\\frac{\\alpha }{d}},\\qquad \\forall x\\in \\mathbb {R}^d$ on the local length $\\ell (x)$ .", "The standard Besicovitch covering lemma (as stated for instance in [32], [18]) implies for compactly supported densities that there exists a set of points $x_j^{\\prime (k)}$ with $1\\leqslant k\\leqslant K^{\\prime }\\leqslant 4^d+1$ and $1\\leqslant j\\leqslant J_k$ such that (i) the cubes $\\big (\\mathcal {C}(x_j^{\\prime (k)})\\big )_{\\begin{array}{c}1\\leqslant k\\leqslant K^{\\prime }\\\\ 1\\leqslant j\\leqslant J_k\\end{array}}$ cover the support of $\\rho $ and each $x\\in \\mathbb {R}^d$ is in at most $2^d$ such cubes, and (ii) for every $k$ , the cubes $\\big (\\mathcal {C}(x^{\\prime (k)}_j)\\big )_{1\\leqslant j\\leqslant J_k}$ are all disjoint.", "We need to obtain different families which satisfy additional properties, namely we require the cubes to have a safety distance to all the larger cubes within the same family, this distance being comparable to the side length of the cube in question.", "The precise statement is the following.", "Lemma 17 (Besicovitch with minimal distance) Let $\\rho $ be a compactly supported density with $\\int _{\\mathbb {R}^d}\\rho >1$ .", "Then there exists a set of points $x_j^{(k)}$ with $1\\leqslant k\\leqslant K\\leqslant 3^d(4^d+1)$ and $1\\leqslant j\\leqslant J_k<\\infty $ such that (i) the cubes $\\big (\\mathcal {C}(x_j^{(k)})\\big )_{\\begin{array}{c}1\\leqslant k\\leqslant K\\\\ 1\\leqslant j\\leqslant J_k\\end{array}}$ cover the support of $\\rho $ and each $x\\in \\mathbb {R}^d$ is in at most $2^d$ such cubes, and (ii) for every $k$ , the cubes $\\big (\\mathcal {C}(x^{(k)}_j)\\big )_{1\\leqslant j\\leqslant J_k}$ in the $k$ th collection satisfy $\\mathrm {d}\\left(\\mathcal {C}(x^{(k)}_j),\\mathcal {C}(x^{(k)}_\\ell )\\right)\\geqslant \\frac{1}{2}\\min \\Big \\lbrace \\ell (x^{(k)}_j),\\ell (x^{(k)}_\\ell )\\Big \\rbrace .$
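Before giving the proof, here is some intuition for the local length: the sketch below (a hypothetical one-dimensional example of ours) computes $\\ell (x)$ by bisection; it is small where $\\rho $ is large and grows in the tails, in line with the estimate (REF ).

```python
import numpy as np

# l(x) is the largest side length with \int_{x + l*C} rho = 1/(3^d (4^d + 1))
# (= 1/15 for d = 1), found here by bisection on the increasing window mass
# l -> \int_{x - l/2}^{x + l/2} rho.

d = 1
target = 1.0 / (3**d * (4**d + 1))

grid = np.linspace(-30.0, 30.0, 600001)
dx = grid[1] - grid[0]
rho = (2.0 / np.pi) / (1.0 + grid**2)      # heavy-tailed density, mass ~ 2
cdf = np.cumsum(rho) * dx

def mass(a: float, b: float) -> float:
    return float(np.interp(b, grid, cdf) - np.interp(a, grid, cdf))

def ell(x: float) -> float:
    lo, hi = 0.0, grid[-1] - grid[0]
    for _ in range(60):                    # bisection; window mass is monotone
        mid = 0.5 * (lo + hi)
        if mass(x - mid / 2, x + mid / 2) < target:
            lo = mid
        else:
            hi = mid
    return hi

for x in (0.0, 2.0, 10.0):
    print(f"l({x}) = {ell(x):.4f}")        # small near the bulk, large in tails
```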
We start the proof by applying the standard Besicovitch covering lemma recalled above.", "We obtain $K^{\\prime }$ collections of disjoint cubes.", "To impose the minimal distance we separate each family into $3^d$ subfamilies.", "Specifically, we use that the maximal number of disjoint cubes of side length $\\geqslant \\ell $ intersecting a cube of side length $2\\ell $ is at most $3^d$ .", "Thus if we look at a given cube of side length $\\ell $ , only $3^d-1$ other bigger cubes can be at distance $\\leqslant \\ell /2$ .", "By induction we can thus always distribute all our cubes into $3^d$ subfamilies, while ensuring the distance property for all the bigger cubes.", "Using Lemma REF we obtain the following partition of unity ${1 }_{\\operatorname{supp}\\rho }=\\sum _{k=1}^{K}\\sum _{j=1}^{J_k}\\frac{{1 }_{\\mathcal {C}(x^{(k)}_j) \\cap \\operatorname{supp}\\rho }}{\\eta },\\qquad {1 }_{\\operatorname{supp}\\rho } \\leqslant \\eta :=\\sum _{k=1}^{K}\\sum _{j=1}^{J_k}{1 }_{\\mathcal {C}(x^{(k)}_j)}\\leqslant 2^d$ which we are going to use to construct our trial state for the upper bound on $G_T[\\rho ]$ .", "We split the proof into several steps.", "We start with the case $\\alpha >d$ and treat the special case $\\alpha =d$ at the very end.", "Step 1.", "Less than one particle.", "If $\\int _{\\mathbb {R}^d}\\rho \\leqslant 1$ , we consider the probability $\\mathbb {P}=(\\mathbb {P}_n)$ given by $\\mathbb {P}_0=1-\\int _{\\mathbb {R}^d} \\rho ,\\qquad \\mathbb {P}_1=\\rho ,\\qquad \\mathbb {P}_n=0\\text{ for $n\\geqslant 2$,}$ which has density $\\rho $ and no interaction energy.", "Its free energy is thus just equal to the entropy term $-T \\mathcal {S}{\\mathbb {P}} = T\\left(1-\\int _{\\mathbb {R}^d}\\rho \\right)\\log \\left(1-\\int _{\\mathbb {R}^d}\\rho \\right)+T\\int _{\\mathbb {R}^d}\\rho \\log \\rho .$ The first term is negative and thus we obtain the desired inequality $G_T[\\rho ]\\leqslant T\\int _{\\mathbb {R}^d}\\rho \\log \\rho \\qquad \\text{for}\\quad \\int _{\\mathbb {R}^d}\\rho \\leqslant 1.$ Step 2.", "Compactly supported densities ($\\alpha >d$ ).", "Next we consider the case of a compactly supported density $\\rho $ with $\\int _{\\mathbb {R}^d}\\rho >1$ .", "Using the partition (REF ) we write $\\rho =\\frac{1}{K}\\sum _{k=1}^{K} [\\bigg ]{ \\sum _{j}\\rho _j^{(k)}} ,\\qquad \\rho _j^{(k)}:=\\frac{K\\rho {1 }_{Q_j^{(k)}}}{\\eta },$ where we abbreviated $Q_j^{(k)}=\\mathcal {C}(x^{(k)}_j)$ for simplicity.", "This is a (uniform) convex combination of the $K$ densities $\\rho ^{(k)}=\\sum _{j}\\rho _j^{(k)}$ .", "For fixed $k$ , the $\\rho _j^{(k)}$ have disjoint supports with distance greater than or equal to $\\min \\lbrace \\ell (x_j^{(k)}),\\ell (x_{j^{\\prime }}^{(k)})\\rbrace /2$ .", "In addition, we have $\\int \\rho _j^{(k)}=K\\int _{Q_j^{(k)}} \\frac{\\rho }{\\eta }\\leqslant 3^d(4^d+1)\\int _{Q_j^{(k)}}\\rho \\leqslant 1.$ This is the reason for our choice of the constant in (REF ).", "Our trial state is given by $\\mathbb {P}:=\\frac{1}{K}\\sum _{k=1}^K\\mathbb {P}^{(k)}$ where $\\mathbb {P}^{(k)}=\\bigotimes _{j=1}^{J_k}\\left(\\left(1-\\int _{\\mathbb {R}^d}\\rho _j^{(k)}\\right)\\oplus \\rho _j^{(k)}\\oplus 
0\\oplus \\ldots \\right)$ is the symmetrized tensor product of the states in (REF ), which has density $\\rho ^{(k)}$ .", "Using the concavity of the entropy, our upper bound is, thus, $G_T[\\rho ]&\\leqslant \\frac{1}{K}\\sum _k \\mathcal {G}_T(\\mathbb {P}^{(k)})\\\\&\\leqslant \\frac{1}{K}\\sum _{k=1}^{K}\\bigg (\\sum _{1\\leqslant i<j\\leqslant J_k}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d}\\rho _i^{(k)}(x)\\rho _j^{(k)}(y)w(x-y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\\\&\\qquad +T\\sum _{j=1}^{J_k}\\int _{Q_j^{(k)}}\\rho _j^{(k)}\\log \\rho _j^{(k)}\\bigg ).$ We have $\\frac{1}{K}\\sum _{k=1}^{K}\\sum _{j=1}^{J_k}\\int _{Q_j^{(k)}}\\rho _j^{(k)}\\log \\rho _j^{(k)}&=\\frac{1}{K}\\sum _{k=1}^{K}\\sum _{j=1}^{J_k}\\int _{Q_j^{(k)}}\\rho _j^{(k)}\\log \\frac{K\\rho }{\\eta }\\\\&=\\int _{\\mathbb {R}^d}\\rho \\log \\frac{K\\rho }{\\eta }\\leqslant \\int _{\\mathbb {R}^d}\\rho \\log \\rho +3d\\int _{\\mathbb {R}^d}\\rho $ since $K\\leqslant 15^d\\leqslant e^{3d}$ and $\\eta \\geqslant 1$ .", "Thus we obtain $G_T[\\rho ]\\leqslant \\frac{1}{K}\\sum _{k=1}^{K}\\sum _{1\\leqslant i<j\\leqslant J_k}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d}\\rho _i^{(k)}(x)\\rho _j^{(k)}(y)w(x-y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\\\+T\\int _{\\mathbb {R}^d}\\rho \\log \\rho +3Td\\int _{\\mathbb {R}^d}\\rho .$ Our next task is to estimate the interaction, for every fixed $k$ .", "By Assumption REF we have $w\\leqslant w_1+w_2$ with $w_1(x)=\\kappa (r_0/|x|)^{\\alpha }{1 }(|x|<r_0)$ and $w_2(x)=\\kappa (1+|x|^s)^{-1}$ .", "We first estimate the term involving the integrable potential $w_2$ using Young's inequality as $&\\sum _{1\\leqslant i<j\\leqslant J_k}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d}\\rho _i^{(k)}(x)\\rho _j^{(k)}(y)w_2(x-y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\\\&\\qquad \\leqslant \\frac{1}{2}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d}\\rho ^{(k)}(x)\\rho ^{(k)}(y)w_2(x-y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\\\&\\qquad \\leqslant \\frac{ \\left\\Vert w_2 \\right\\Vert _{L^1}}{2}\\int _{\\mathbb {R}^d}(\\rho ^{(k)})^2= \\frac{ \\left\\Vert w_2 \\right\\Vert _{L^1}}{2} K^2\\int _{\\cup _i Q_i^{(k)}}\\frac{\\rho ^2}{\\eta ^2}.$ After summing over $k$ this gives $\\frac{1}{K}\\sum _{k=1}^K\\sum _{1\\leqslant i<j\\leqslant J_k}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d}\\rho _i^{(k)}(x)\\rho _j^{(k)}(y)w_2(x-y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\leqslant \\frac{ \\left\\Vert w_2 \\right\\Vert _{L^1}}{2} K\\int _{\\mathbb {R}^d}\\frac{\\rho ^2}{\\eta }.$ Using for instance $\\int _{\\mathbb {R}^d}w_2=\\kappa |\\mathbb {S}^{d-1}|\\int _0^\\infty \\frac{r^{d-1}}{1+r^s}\\,\\mathrm {d}r\\leqslant \\kappa |\\mathbb {S}^{d-1}|\\frac{s}{d {s-d}},$ and recalling that $\\eta \\geqslant 1$ and $K\\leqslant 3^d(4^d+1)$ , we obtain $\\frac{1}{K} \\sum _{k=1}^{K}\\sum _{1\\leqslant i<j\\leqslant J_k}\\iint _{\\mathbb {R}^d\\times \\mathbb {R}^d}\\rho _i^{(k)}(x)\\rho _j^{(k)}(y)w_2(x-y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\\\\\leqslant \\kappa \\frac{s|\\mathbb {S}^{d-1}|}{2d(s-d)}3^{d}(4^d+1)\\int _{\\mathbb {R}^d}\\rho ^2.$ Next we consider the more complicated term involving the singular part $w_1=\\kappa {1 }(|x|<r_0) (r_0/|x|)^{\\alpha }$ .", "To simplify our notation, we remove the superscript $(k)$ and thus consider the collection $(\\rho _j)_{j=1}^J$ of functions supported in the disjoint cubes $Q_j$ with the safety distance.", "For every $i\\ne j$ , using $\\int _{\\mathbb {R}^d}\\rho _j\\leqslant 1$ , we can estimate $\\iint \\rho _i(x)\\rho _j(y)w_1(x-y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\leqslant 
\\frac{\\kappa r_0^\\alpha }{\\mathrm {d}(Q_i,Q_j)^\\alpha }.$ Recall that when $|Q_i|\\leqslant |Q_j|$ , the distance $\\mathrm {d}(Q_i,Q_j)$ is at least equal to $\\ell _i/2$ .", "We can order our $J$ cubes so that the volume is increasing: $|Q_1|\\leqslant |Q_2|\\leqslant \\cdots \\leqslant |Q_J|$ .", "We need to estimate $\\sum _{i=1}^{J-1}\\sum _{j=i+1}^J\\frac{1}{\\mathrm {d}(Q_i,Q_j)^\\alpha }=\\sum _{i=1}^{J-1}\\frac{1}{\\ell _i^\\alpha }\\sum _{j=i+1}^J\\frac{1}{\\mathrm {d}(\\mathcal {C},Q^{\\prime }_{i,j})^\\alpha }$ where $\\mathcal {C}=(-1/2,1/2)^d$ and for every $i$ , we have denoted by $Q^{\\prime }_{i,j}$ the cube centered at $(x_j-x_i)/\\ell _i$ , of volume $|Q_j|/|Q_i|\\geqslant 1$ .", "To estimate the sum in $j$ , we use the following lemma, which is based on the integrability at infinity of $|x|^{-\\alpha }$ and is similar to [49].", "Lemma 18 Let $\\mathcal {C}=(-1/2,1/2)^d$ be the unit cube and consider any collection of non-intersecting cubes $Q_j$ with the property that $|Q_j|\\geqslant 1$ and $\\mathrm {d}(\\mathcal {C},Q_j)\\geqslant \\frac{1}{2}$ .", "Then we have $\\sum _j \\frac{1}{\\mathrm {d}(\\mathcal {C},Q_j)^\\alpha }\\leqslant \\frac{3^\\alpha 2^{7d}d^2}{|\\mathbb {S}^{d-1}|(\\alpha -d)}\\,.$ The constant on the right of (REF ) is not at all optimal and is only displayed for concreteness.", "Let $X_j\\in \\mathcal {C}$ and $Y_j\\in Q_j$ be so that $\\mathrm {d}(\\mathcal {C},Q_j)=|X_j-Y_j|\\geqslant \\frac{1}{2}$ .", "For any $x\\in B(X_j,1/8)$ and $y\\in B(Y_j,1/8)$ we have $\\frac{|X_j-Y_j|}{2}\\leqslant |X_j-Y_j|-\\frac{1}{4}\\leqslant |x-y|\\leqslant |X_j-Y_j|+\\frac{1}{4}\\leqslant \\frac{3}{2}|X_j-Y_j|.$ Integrating over $x^{\\prime }\\in \\mathcal {C}\\cap B(X_j,1/8)$ and $y^{\\prime }\\in Q_j\\cap B(Y_j,1/8)$ we obtain $\\frac{1}{\\mathrm {d}(\\mathcal {C},Q_j)^\\alpha }&=\\frac{1}{|X_j-Y_j|^\\alpha }\\\\&\\leqslant \\frac{(3/2)^\\alpha }{|\\mathcal {C}\\cap B(X_j,1/8)|\\;| Q_j\\cap B(Y_j,1/8)|}\\int _\\mathcal {C}\\int _{Q_j}\\frac{\\mathrm {d}x\\,\\mathrm {d}y}{|x-y|^\\alpha }.$ The volume of the intersection of a ball of radius $1/8$ centered at $X_j$ in a cube of volume $\\geqslant 1$ and that of the other cube is bounded away from 0.", "It is in fact minimal when $X_j, Y_j$ are located at a corner, yielding $|\\mathcal {C}\\cap B(X_j,1/8)|\\geqslant \\frac{|\\mathbb {S}^{d-1}|}{2^{4d}d},\\qquad |Q_j\\cap B(Y_j,1/8)|\\geqslant \\frac{|\\mathbb {S}^{d-1}|}{2^{4d}d}.$ Thus, we obtain $\\frac{1}{\\mathrm {d}(\\mathcal {C},Q_j)^\\alpha }\\leqslant \\frac{3^\\alpha 2^{8d-\\alpha }d^2}{|\\mathbb {S}^{d-1}|^2}\\int _\\mathcal {C}\\int _{Q_j}\\frac{\\mathrm {d}x\\,\\mathrm {d}y}{|x-y|^\\alpha }.$ Summing over $j$ using that the cubes are disjoint we obtain $\\sum _j \\frac{1}{\\mathrm {d}(\\mathcal {C},Q_j)^\\alpha }\\leqslant \\frac{3^\\alpha 2^{8d-\\alpha }d^2}{|\\mathbb {S}^{d-1}|^2}\\int _\\mathcal {C}\\int _{\\mathbb {R}^d}\\frac{{1 }(|x-y|\\geqslant \\frac{1}{2}) \\,\\mathrm {d}x\\,\\mathrm {d}y}{|x-y|^\\alpha }= \\frac{3^\\alpha 2^{7d}d^2}{|\\mathbb {S}^{d-1}|(\\alpha -d)}$ as was claimed.", "From the estimates (REF ) and (REF ), we deduce that $\\sum _{1\\leqslant i<j\\leqslant J_k}\\iint _{|x-y|\\leqslant r_0}\\rho _i(x)\\rho _j(y)w_1(x-y)\\,\\mathrm {d}x\\,\\mathrm {d}y\\\\\\leqslant \\kappa r_0^\\alpha \\frac{d^23^{d+2\\alpha } 2^{7d}(4^d+1)^{1+\\frac{\\alpha }{d}}}{|\\mathbb {S}^{d-1}|(\\alpha -d)}\\int _{\\cup _iQ_i}\\rho ^{1+\\frac{\\alpha }{d}}.$ Using that $\\int _{\\cup _iQ_i}\\rho ^{1+\\frac{\\alpha }{d}} \\leqslant \\int _{\\mathbb 
{R}^d} \\rho ^{1+ \\frac{\\alpha }{d}}$ and summing over $ K $ , we obtain our final estimate $G_T[\\rho ]\\leqslant \\kappa \\frac{s|\\mathbb {S}^{d-1}|}{2d(s-d)}3^{d}(4^d+1)\\int _{\\mathbb {R}^d}\\rho ^2 +\\kappa r_0^\\alpha \\frac{d^23^{d+2\\alpha } 2^{7d}(4^d+1)^{1+\\frac{\\alpha }{d}}}{|\\mathbb {S}^{d-1}|(\\alpha -d)}\\int _{\\mathbb {R}^d}\\rho ^{1+\\frac{\\alpha }{d}}\\\\+T\\int _{\\mathbb {R}^d}\\rho \\log \\rho +3Td\\int _{\\mathbb {R}^d}\\rho .$ This is our final upper bound, with non-optimal constants only displayed for concreteness.", "Step 3.", "General densities ($\\alpha >d$ ).", "In order to be able to use the Besicovitch lemma, we restricted ourselves to compactly supported densities.", "We prove here that the exact same estimate holds for general densities.", "Let $\\rho \\in (L^1\\cap L^{1+\\alpha /d})(\\mathbb {R}^d,\\mathbb {R}_+)$ , $\\varepsilon \\in (0,1)$ and write $\\rho =(1-\\varepsilon )\\frac{\\rho {1 }_{\\mathcal {C}_L}}{1-\\varepsilon }+\\varepsilon \\frac{\\rho {1 }_{\\mathbb {R}^d\\setminus \\mathcal {C}_L}}{\\varepsilon }$ with $\\mathcal {C}_L=(-L/2,L/2)^d$ .", "Using the concavity of the entropy, we obtain $G_T[\\rho ]\\leqslant (1-\\varepsilon )G_T\\left[\\frac{\\rho {1 }_{\\mathcal {C}_L}}{1-\\varepsilon }\\right]+\\varepsilon \\, G_T\\left[\\frac{\\rho {1 }_{\\mathbb {R}^d\\setminus \\mathcal {C}_L}}{\\varepsilon }\\right].$ We choose $L$ so large that $\\int _{\\mathbb {R}^d\\setminus \\mathcal {C}_L}\\rho \\leqslant \\varepsilon ,$ which allows us to use (REF ) for the second term on the right of (REF ).", "For the first term we just use Step 2.", "We find $G_T[\\rho ]\\leqslant \\frac{C\\kappa r_0^\\alpha }{(1-\\varepsilon )^{\\frac{\\alpha }{d}}}\\int _{\\mathcal {C}_L}\\rho ^{1+\\frac{\\alpha }{d}}+\\frac{C\\kappa }{1-\\varepsilon }\\int _{\\mathcal {C}_L}\\rho ^{2}+CT\\int _{\\mathcal {C}_L}\\rho +T\\int _{\\mathbb {R}^d}\\rho \\log \\rho \\\\+T\\log \\varepsilon ^{-1}\\int _{\\mathbb {R}^d\\setminus \\mathcal {C}_L}\\rho +T\\log (1-\\varepsilon )^{-1}\\int _{\\mathcal {C}_L}\\rho .$ By passing first to the limit $L\\rightarrow \\infty $ and then $\\varepsilon \\rightarrow 0$ , we conclude that $\\rho $ satisfies the same estimate (REF ) as for compactly supported densities.", "Step 4.", "Case $\\alpha =d$ .", "The case when the core of the interaction behaves as $w_1(x)=\\kappa r_0^d |x|^{-d} {1 } {|x|\\leqslant r_0}$ is similar to the previous situation with some small changes.", "The function is not integrable around the origin which requires to have a safety distance between particles in our trial state.", "However this interaction is also non-integrable without cutoff at infinity so we need to use that the core of our interaction is compactly supported on the ball of radius $r_0$ .", "The following alternative to Lemma REF is going to be useful.", "Lemma 19 Let $\\mathcal {C}_0=(-\\ell _0/2,\\ell _0/2)^d$ and consider any collection of non-intersecting cubes $Q_j$ with the property that $|Q_j|\\geqslant \\ell _0^d$ and $\\mathrm {d}(\\mathcal {C}_0,Q_j)\\geqslant \\frac{\\ell _0}{2}$ .", "Then we have $\\sum _j \\frac{{1 }_{\\mathrm {d}(\\mathcal {C}_0,Q_j)\\leqslant r_0}}{\\mathrm {d}(\\mathcal {C}_0,Q_j)^d}\\leqslant C\\ell _0^{-d}\\left(\\log \\left(\\frac{2r_0}{\\ell _0}\\right)\\right)_+\\,.$ We assume $\\ell _0\\leqslant 2r_0$ otherwise there is nothing to prove.", "Let $X_j\\in \\mathcal {C}_0$ and $Y_j\\in Q_j$ be such that $\\mathrm {d}(\\mathcal {C}_0,Q_j)=|X_j-Y_j|\\geqslant \\frac{\\ell _0}{2}$ .", "For any $x\\in B(X_j,\\ell 
_{0}/8)$ and $y\\in B(Y_j,\\ell _{0}/8)$ we have $\\frac{|X_j-Y_j|}{2}\\leqslant |X_j-Y_j|-\\frac{\\ell _0}{4}\\leqslant |x-y|\\leqslant |X_j-Y_j|+\\frac{\\ell _0}{4}\\leqslant \\frac{3}{2}|X_j-Y_j|.$ Integrating over $x^{\\prime }\\in \\mathcal {C}_0\\cap B(X_j,\\ell _{0}/8)$ and $y^{\\prime }\\in Q_j\\cap B(Y_j,\\ell _{0}/8)$ we obtain $[4] \\frac{{1 }_{\\mathrm {d}(\\mathcal {C}_0,Q_j)\\leqslant r_0}}{\\mathrm {d}(\\mathcal {C}_0,Q_j)^d}={} \\frac{{1 }_{\\mathrm {d}(\\mathcal {C}_0,Q_j)\\leqslant r_0}}{|X_j-Y_j|^d}\\\\\\leqslant {}& \\frac{(3/2)^d}{|\\mathcal {C}_0\\cap B(X_j,\\ell _0/8)|\\;|Q_j\\cap B(Y_j,\\ell _0/8)|}\\int _{\\mathcal {C}_0}\\int _{Q_j}\\frac{{1 }_{\\mathrm {d}(|x-y|)\\leqslant r_0}}{|x-y|^d}\\,\\mathrm {d}x\\,\\mathrm {d}y.$ Summing over all cubes we get $\\sum _{j}\\frac{{1 }_{\\mathrm {d}(\\mathcal {C}_0,Q_j)\\leqslant r_0}}{\\mathrm {d}(\\mathcal {C}_0,Q_j)^d}\\leqslant \\frac{2^{7d}3^dd}{|\\mathbb {S}^{d-1}|\\ell _0^{d}}\\int _{\\frac{\\ell _0}{2}}^{r_0}r^{-1} \\,\\mathrm {d}r=\\frac{2^{7d}3^dd}{|\\mathbb {S}^{d-1}|}\\frac{\\log (2r_0/\\ell _0)}{\\ell _0^{d}}.$ Next we explain how to relate the right side of (REF ) with the density $\\rho $ .", "Recall from (REF ) that $\\int _{\\mathcal {C}(x)}\\rho (y)\\,\\mathrm {d}y=\\frac{1}{3^d(4^d+1)}$ where $\\mathcal {C}(x)$ is the cube of side length $\\ell (x)$ centered at $x$ .", "By Jensen's inequality, we have for every convex function $F$ $\\ell (x)^dF\\left(\\frac{1}{\\ell (x)^d3^d(4^d+1)}\\right)= \\ell {x}^d F [\\bigg ]{\\frac{1}{\\ell {x}^d} \\int _{\\mathcal {C} {x}} \\rho }\\leqslant \\int _{\\mathcal {C}(x)}F\\big (\\rho (y)\\big )\\,\\mathrm {d}y.$ Applying this to $F(t)=t^2\\Big (\\log \\big (6^d(4^d+1)r_0^dt\\big )\\Big )_+,$ we obtain $&\\ell (x)^{-d}\\left(\\log \\left(\\frac{2r_0}{\\ell (x)}\\right)\\right)_+\\nonumber \\\\&\\qquad \\leqslant \\frac{3^{2d}(4^d+1)^2}{d}\\int _{\\mathcal {C}(x)}\\rho (y)^2\\Big (\\log \\big (6^d(4^d+1)r_0^d\\rho (y)\\big )\\Big )_+\\,\\mathrm {d}y\\nonumber \\\\&\\qquad \\leqslant \\frac{3^{2d}(4^d+1)^2}{d}\\int _{\\mathcal {C}(x)}\\rho (y)^2\\Big (4d+\\big (\\log r_0^d\\rho (y)\\big )_+\\Big )\\,\\mathrm {d}y.$ The rest of the proof is similar to the case $\\alpha >d$ , using (REF ) and Lemma REF .", "We omit the details.", "This concludes the proof of Theorem REF .$\\Box $ Remark 20 (Hard-core case) The previous proof can be used in the hard core case $\\alpha =\\infty $ , under the (sub-optimal) condition that $\\int _{x+r_0\\mathcal {C}}\\rho <\\frac{1}{3^d(4^d+1)}$ for all $x$ , where $\\mathcal {C}=(-1/2,1/2)^d$ .", "The interaction can be bounded by $\\kappa C N$ as we have seen in (REF ), leading to the bound $G_T [\\rho ]&\\leqslant C\\kappa \\int _{\\mathbb {R}^d}\\rho +CT \\int _{\\mathbb {R}^d} \\rho + T \\int _{\\mathbb {R}^d} \\rho \\log \\rho \\,.$" ], [ "The local radius $R(x)$ in optimal transport", "Here we explain how to construct canonical trial states using a result from optimal transport, in order to obtain bounds at zero temperature for a singular interaction ($d\\leqslant \\alpha \\leqslant \\infty $ ).", "Consider any density $ \\rho $ with $ \\int \\rho > 1 $ , and recall the local radius $ R(x) $ from (REF ).", "Note that $ R {x} $ can never be zero because $ \\rho $ as a measure does not have any point mass.", "The function $ R $ is connected to the Hardy-Littlewood maximal function $ M_\\rho $ , defined by $M_\\rho {x} := \\sup _{r > 0} \\frac{1}{{B_r}} \\int _{B {x,r}} \\rho {y} \\, \\mathrm {d}y,$ where $ {B_r} $ denotes the volume of a 
ball in $ \\mathbb {R}^d $ of radius $ r $ .", "By definition of $ R $ it is clear that $\\frac{1}{|B_1| R (x)^d}= \\frac{1}{|B_1| R (x)^d} \\int _{B (x, R (x))} \\rho (y) \\, \\mathrm {d}y\\leqslant M_\\rho (x),$ so that we have the pointwise bound $\\frac{1}{R (x)}\\leqslant \\left(|B_1| M_\\rho (x)\\right)^{\\frac{1}{d}}.$", "Furthermore, using Hölder's inequality gives for any $ p > 0 $ , $1 = \\int _{B (x, R (x))} \\rho \\leqslant |B (x, R (x))|^{\\frac{p}{p+d}} \\Big (\\int _{B (x,R (x))} \\rho ^{1+\\frac{p}{d}}\\Big )^{\\frac{d}{p+d}},$ implying for $ \\rho \\in L_{\\mathrm {loc}}^{1+\\frac{p}{d}} (\\mathbb {R}^d) $ the bound $\\frac{1}{R (x)^p}\\leqslant |B_1|^{\\frac{p}{d}} \\int _{B (x,R (x))} \\rho ^{1+\\frac{p}{d}}.$", "It is also apparent that $ R $ is 1-Lipschitz continuous, see e.g. [8].", "One might also remark that $ R $ always stays away from zero, i.e. $R_{\\rho }:= \\min _{x \\in \\mathbb {R}^d} R (x) > 0.$", "This is an immediate consequence of the facts that $ R $ is continuous and that, necessarily, $ \\lim _{|x| \\rightarrow \\infty } R (x) = \\infty $ , because $ \\rho \\in L^1 (\\mathbb {R}^d) $ .", "To obtain an upper bound on the canonical energy at a fixed density $ 0 \\leqslant \\rho \\in L^1 (\\mathbb {R}^d) $ , it is convenient to have the existence of states $ \\mathbb {P}$ in which the distance between the particles is bounded from below in terms of the function $ R $ from (REF ).", "The following is a consequence of a result from [8].", "Theorem 21 (Optimal transport state) Let $ 0 \\leqslant \\rho \\in L^1 (\\mathbb {R}^d) $ with $ N = \\int _{\\mathbb {R}^d} \\rho \\in \\mathbb {N}$ .", "There exists an $ N $ -particle state $ \\mathbb {P}$ with density $ \\rho _{\\mathbb {P}} = \\rho $ such that $|x_i - x_j| \\geqslant \\max \\Big ( R_{\\rho }, \\tfrac{R(x_i) + R(x_j)}{3} \\Big )\\quad \\text{for $1\\leqslant i\\ne j\\leqslant N$}$ $\\mathbb {P}$ –almost everywhere, where $ R $ is the function defined by (REF ), and $ R_{\\rho } $ is its minimum in (REF ).", "The proof is a simple application of [8].", "For any $ 0 < \\eta < 1 $ (we will choose $ \\eta = 1/3 $ in a moment) and any $ x \\in \\mathbb {R}^d $ , define the set $\\widetilde{B} (x) = \\lbrace y \\in \\mathbb {R}^d : |x-y| < \\eta (R (x) + R (y)) \\rbrace .$", "Then we have for any $ 0 < t < 1 -\\eta $ , using the Lipschitz continuity of $ R $ , $\\widetilde{B} (x)\\subseteq {}& \\lbrace y \\in \\mathbb {R}^d : t |x-y| < \\eta R (x) \\rbrace \\cup \\lbrace y \\in \\mathbb {R}^d : (1-t) |x-y| < \\eta R (y) \\rbrace \\\\\\subseteq {}& B (x, \\tfrac{\\eta }{t} R (x)) \\cup \\lbrace y \\in \\mathbb {R}^d : (1-t) |x-y| < \\eta ( R (x) + |x-y|) \\rbrace \\\\={}& B (x, \\tfrac{\\eta }{t} R (x)) \\cup B (x, \\tfrac{\\eta }{1-t-\\eta } R (x)).$", "We wish to choose $ t $ and $ \\eta $ such that the measure of the right-hand side is equal to one (with respect to the measure $ \\rho $ ).", "First, requiring the two balls to have the same radius leads to the choice $ t = \\frac{1-\\eta }{2} $ .", "Next, we choose $ \\eta $ such that $ \\frac{\\eta }{t} = \\frac{\\eta }{1-t-\\eta } = \\frac{2 \\eta }{1-\\eta } = 1 $ , which implies $ \\eta = 1/3 $ .", "Now, defining an open and symmetric set $ D \\subseteq \\mathbb {R}^d \\times \\mathbb {R}^d $ by $D = \\Big \\lbrace (x,y) \\in \\mathbb {R}^d \\times \\mathbb {R}^d : |x-y| < \\max \\left( R_{\\rho }, \\tfrac{R (x) + R (y)}{3} \\right) \\Big \\rbrace ,$ then $ B (x) := \\lbrace y \\in \\mathbb {R}^d : (x,y) \\in D\\rbrace $ satisfies $B (x) = B (x, R_{\\rho }) \\cup \\widetilde{B} (x)\\subseteq B (x, R (x)).$", "Thus, by definition of $ R $ , we have $ \\rho (B (x)) \\leqslant \\rho (B (x, R(x))) 
= 1 $ , and since $ D $ is open and symmetric, [8] asserts the existence of a $ \\mathbb {P}$ with the claimed properties.", "Specifically, one can take $ \\mathbb {P}$ to be the optimizer for the multi-marginal optimal transport problem associated to the cost $c {x} = \\min { \\mathrm {d}{x, A}, 1},$ where $ A $ denotes the set containing the $(x_1,...,x_N)$ satisfying (REF )." ], [ "Proof of Theorem ", "The existence of the state from thm:clstate allows us to prove the last part of thm:GCbound about the canonical free energy at zero temperature.", "For convenience we state a proposition valid for any state $\\mathbb {P}$ for which the particles satisfy an inequality similar to (REF ).", "Along with prop:clbound3 below (which covers the case $ \\alpha = d $ ), this immediately implies Theorem REF in the canonical case.", "Proposition 22 (Zero temperature energy bound, $ d < \\alpha < \\infty $ ) Let $ w $ satisfy Assumption REF with $ d < \\alpha < \\infty $ .", "Let $\\mathbb {P}$ be any $ N $ -particle probability measure with one-body density $ \\rho :=\\rho _{\\mathbb {P}} $ satisfying ${x_i - x_j} \\geqslant \\eta \\big ( R {x_i} + R {x_j}\\big ) \\qquad \\text{ for $1\\leqslant i \\ne j\\leqslant N$,}$ $\\mathbb {P}$ –almost everywhere, for some $ 0 < \\eta \\leqslant 1 $ .", "Then the interaction energy in the state $ \\mathbb {P}$ is bounded by $F_0 {\\rho }\\leqslant {} \\mathcal {U}_{N} {\\mathbb {P}}\\leqslant {} \\frac{C\\kappa r_0^\\alpha }{\\eta ^{\\alpha }} \\int _{\\mathbb {R}^d} \\rho {x}^{1+\\frac{\\alpha }{d}} \\, \\mathrm {d}x + \\frac{C\\kappa }{\\eta ^d} \\int _{\\mathbb {R}^d} \\rho {x}^2 \\, \\mathrm {d}x$ with $C$ a constant depending only on $d,\\alpha ,s$ .", "In this proof we will not keep track of the exact value of the constants, since we will need the (unknown) one from the Hardy-Littlewood inequality.", "Hence $C$ denotes here a generic constant depending only on $d,\\alpha ,s$ .", "By the assumptions on $ w $ , we have $\\mathcal {U}_{N} {\\mathbb {P}}\\leqslant {} \\kappa \\int _{\\mathbb {R}^{dN}} \\sum _{1\\leqslant i<j\\leqslant N} \\left(\\frac{r_0^\\alpha {1 }(|x_i-x_j|\\leqslant r_0)}{{x_i - x_j}^{\\alpha }} + \\frac{1}{1+ {x_i-x_j}^s}\\right) \\mathrm {d}\\mathbb {P}{x}.$ Let $x= ( x_1, \\cdots , x_N) $ be in the support of $\\mathbb {P}$ .", "After permutation we can assume that $ R {x_1} \\leqslant R {x_2} \\leqslant \\cdots \\leqslant R {x_N} $ .", "We fix the index $ i $ and consider the points $ x_i - x_j $ in $ \\mathbb {R}^d $ for $ j = i+1, \\cdots , N $ .", "Because of (REF ), these points are all at a distance at least $ \\eta {R{x_i} + R {x_j}} $ from the origin, and ${{x_i - x_j} - {x_i - x_k}} = {x_j - x_k} \\geqslant \\eta {R {x_j} + R {x_k}}.$ Hence we can place $ N - i $ disjoint balls in $ \\mathbb {R}^d $ with radii $ \\eta R {x_j} $ , centered at the points $ x_i - x_j $ , respectively.", "Inside each of these balls, we place a smaller ball of radius $ \\frac{\\eta }{2} R {x_j} $ , centered at $z_j = [\\Big ]{1- \\frac{\\eta R {x_j} }{2 {x_i-x_j}}} {x_i - x_j}.$ Then $ x_i - x_j $ is the point on the boundary of $ B {z_j, \\frac{\\eta }{2} R {x_j} } $ which is the farthest from the origin (see fig:balls), so that $\\frac{1}{{x_i - x_j}^{\\alpha }} = \\min _{y \\in B {z_j, \\frac{\\eta }{2} R {x_j} } } \\frac{1}{{y}^{\\alpha }}.$ Figure: Sketch of constructionNote that the distance from $ B {z_j, \\frac{\\eta }{2} R {x_j} } $ to the origin is bounded from below by $\\mathrm {d}\\left(0, B {z_j, \\frac{\\eta }{2} R {x_j}} 
\\right)\\geqslant {z_j} - \\frac{\\eta }{2} R {x_j}= {x_i-x_j} - \\eta R {x_j}\\geqslant \\eta R {x_i}.$ Using this, along with the fact that all the balls are disjoint, we get the pointwise bound $\\sum \\limits _{j=i+1}^N \\frac{1}{{x_i - x_j}^{\\alpha }}\\leqslant {}& \\sum \\limits _{j=i+1}^N \\frac{1}{{B {z_j,\\frac{\\eta }{2} R {x_j}}}} \\int _{B {z_j, \\frac{\\eta }{2} R {x_j}}} \\frac{1}{{y}^{\\alpha }} \\, \\mathrm {d}y \\nonumber \\\\\\leqslant {}& \\frac{1}{{B {0,\\frac{\\eta }{2} R {x_i}}}} \\int _{B {0, \\eta R {x_i}}^c} \\frac{1}{{y}^{\\alpha }} \\, \\mathrm {d}y \\\\={}& \\frac{2^d}{{B_1}} \\frac{1}{{\\eta R {x_i}}^{\\alpha }} \\int _{B {0, 1}^c} \\frac{1}{{y}^{\\alpha }} \\, \\mathrm {d}y \\nonumber $ for $ \\mathbb {P}$ -a.e.", "$ x \\in \\mathbb {R}^{dN} $ .", "We conclude that the contribution to the energy from the core of $ w $ can be bounded by $\\int _{\\mathbb {R}^{dN}} \\sum \\limits _{i=1}^N \\sum \\limits _{j = i +1}^N \\frac{1}{{x_i - x_j}^{\\alpha }} \\, \\mathrm {d}\\mathbb {P}{x}\\leqslant {}& \\frac{C}{\\eta ^{\\alpha }} \\int _{\\mathbb {R}^{dN}} \\sum \\limits _{i = 1}^N \\frac{1}{R {x_i}^{\\alpha }} \\, \\mathrm {d}\\mathbb {P}{x} \\nonumber \\\\={}& \\frac{C}{\\eta ^{\\alpha }} \\int _{\\mathbb {R}^d} \\frac{\\rho {x}}{R {x}^{\\alpha }} \\, \\mathrm {d}x.$ Similarly, we get for the contribution from the tail of $ w $ , $\\sum \\limits _{j = i + 1} \\frac{1}{1 + {x_i - x_j}^s}\\leqslant {}& \\sum \\limits _{j = i + 1} \\frac{1}{ {B {z_j, \\frac{\\eta }{2} R {x_j}}} } \\int _{B {z_j, \\frac{\\eta }{2} R {x_j}}} \\frac{1}{1 + {y}^s} \\, \\mathrm {d}y \\\\\\leqslant {}& \\frac{2^d}{{B_1}} \\frac{1}{{\\eta R {x_i}}^d} \\int _{\\mathbb {R}^d} \\frac{1}{1 + {y}^s} \\, \\mathrm {d}y,$ so $\\int _{\\mathbb {R}^{dN}} \\sum \\limits _{i=1}^N \\sum \\limits _{j = i +1}^N \\frac{1}{1+{x_i - x_j}^s} \\, \\mathrm {d}\\mathbb {P}{x}\\leqslant {} \\frac{C}{\\eta ^d} \\int _{\\mathbb {R}^d} \\frac{\\rho {x}}{R {x}^d} \\, \\mathrm {d}x.$ Finally, recalling from (REF ) that $ R {x} $ is bounded from below in terms of the maximal function of $ \\rho $ , we apply the Hölder and Hardy-Littlewood maximal inequalities to obtain for any power $ p > 0 $ , $\\int _{\\mathbb {R}^d} \\frac{\\rho {x}}{R {x}^p} \\, \\mathrm {d}x\\leqslant {}& C \\int _{\\mathbb {R}^d} \\rho {x} {M_\\rho } {x}^{\\frac{p}{d}} \\, \\mathrm {d}x \\\\\\leqslant {}& C [\\Big ]{ \\int _{\\mathbb {R}^d} \\rho {x}^{1+ \\frac{p}{d}} \\, \\mathrm {d}x }^{\\frac{d}{d+p}} [\\Big ]{ \\int _{\\mathbb {R}^d} {M_\\rho } {x}^{1+\\frac{p}{d}} \\, \\mathrm {d}x}^{\\frac{p}{d+p}} \\\\\\leqslant {}& C \\int _{\\mathbb {R}^d} \\rho {x}^{1+\\frac{p}{d}}.$ Using this on (REF ) and (REF ), and combining with (REF ), we obtain the claimed bound (REF ).", "Proposition 23 (Special case $ \\alpha = d $ ) Let $ w $ be an interaction satisfying Assumption REF with $ \\alpha = d $ , and $ 0 \\leqslant \\rho \\in L^1 {\\mathbb {R}^d} $ a density with $ \\int \\rho = N $ .", "Then, for any $ N $ -particle probability measure $ \\mathbb {P}$ with one-body density $ \\rho _{\\mathbb {P}} = \\rho $ satisfying (REF ) for some $ 0 < \\eta \\leqslant 1 $ , the interaction energy is bounded by $F_0 {\\rho }\\leqslant {}& \\int _{\\mathbb {R}^{dN}} \\sum \\limits _{1 \\leqslant i < j \\leqslant N} w {x_i-x_j} \\, \\mathrm {d}\\mathbb {P}{x_1, \\cdots , x_N} \\nonumber \\\\\\leqslant {}& \\frac{\\kappa r_0^d C}{\\eta ^{2d}} \\int _{\\mathbb {R}^d} \\rho ^2 [\\Big ]{ \\log [\\Big ]{\\frac{c r_0^d}{\\eta ^{2d}} \\rho } }_+ + \\frac{\\kappa 
C}{\\eta ^{2d}} \\int _{\\mathbb {R}^d} \\frac{1}{1 + {y}^s} \\, \\mathrm {d}y \\int _{\\mathbb {R}^d} \\rho ^2,$ where the constants $ c $ and $ C $ depend only on the dimension $ d $ .", "The proof goes along the same lines as the proof of prop:clbound.", "However, complications arise due to the fact that $ 1/{x}^d $ is not integrable at infinity, so we need to take into account the finite range $ r_0 $ of the core of $ w $ .", "Incidentally, this also forces us to avoid using the Hardy-Littlewood maximal inequality later in the proof.", "Following the proof of prop:clbound up to (REF ) and noting that $ B {z_j, \\frac{\\eta }{2} R {x_j}} \\subseteq B {0, {x_i-x_j}} $ , we have $\\sum \\limits _{j=i+1}^N \\frac{{1} {{x_i - x_j} \\leqslant r_0}}{{x_i - x_j}^d}\\leqslant {}& \\sum \\limits _{j=i+1}^N \\frac{{1} {{x_i - x_j} \\leqslant r_0}}{{B {z_j,\\frac{\\eta }{2} R {x_j}}}} \\int _{B {z_j, \\frac{\\eta }{2} R {x_j}}} \\frac{1}{{y}^d} \\, \\mathrm {d}y \\\\\\leqslant {}& \\frac{1}{{B {0,\\frac{\\eta }{2} R {x_i}}}} \\int _{\\eta R {x_i} \\leqslant {y} \\leqslant r_0} \\frac{1}{{y}^d} \\, \\mathrm {d}y \\\\={}& \\frac{{\\mathbb {S}^{d-1}}}{{B {0,\\frac{\\eta }{2} R {x_i}}}} [\\Big ]{ \\int _{\\eta R {x_i}}^{r_0} \\frac{1}{r} \\, \\mathrm {d}r }_+ \\\\={}& \\frac{2^d}{\\eta ^d R {x_i}^d} [\\Big ]{ \\log [\\Big ]{ \\frac{r_0^d}{\\eta ^d R {x_i}^d} } }_+.$ This leads to the bound $[6] \\int _{\\mathbb {R}^{dN}} \\sum \\limits _{i=1}^N \\sum \\limits _{j = i +1}^N w {x_i - x_j} \\, \\mathrm {d}\\mathbb {P}{x} \\nonumber \\\\\\leqslant {}& \\kappa r_0^d \\frac{2^d}{\\eta ^d} \\int _{\\mathbb {R}^{dN}} \\sum \\limits _{i=1}^N \\frac{1}{R {x_i}^d} [\\Big ]{ \\log [\\Big ]{ \\frac{r_0^d}{\\eta ^d R {x_i}^d} } }_+ \\, \\mathrm {d}\\mathbb {P}{x} \\nonumber \\\\&+ \\kappa \\frac{2^d}{{B_1} \\eta ^d} \\int _{\\mathbb {R}^d} \\frac{1}{1 + {y}^s} \\, \\mathrm {d}y \\int _{\\mathbb {R}^{dN}} \\sum \\limits _{i=1}^N \\frac{1}{ R {x_i}^d} \\, \\mathrm {d}\\mathbb {P}{x},$ where, in this case, we cannot use the Hardy-Littlewood maximal inequality on the first term.", "However, this can be circumvented using the fact that $ {x_i - x_j} \\geqslant \\eta {R {x_i} + R {x_j}} $ on the support of $ \\mathbb {P}$ , which is the content of lem:Rconfigbound below.", "Using the lemma, we conclude $F_0 {\\rho }\\leqslant {}& \\frac{\\kappa r_0^d C}{\\eta ^{2d}} \\int _{\\mathbb {R}^d} \\rho ^2 [\\Big ]{ \\log [\\Big ]{\\frac{2^d {B_1} r_0^d}{\\eta ^{2d}} \\rho } }_+ + \\frac{\\kappa C}{\\eta ^{2d}} \\int _{\\mathbb {R}^d} \\frac{1}{1 + {y}^s} \\, \\mathrm {d}y \\int _{\\mathbb {R}^d} \\rho ^2,$ where the constant $ C $ depends only on the dimension $ d $ .", "Lemma 24 Let $ 0 \\leqslant \\rho \\in L^1 {\\mathbb {R}^d} $ be any density with $ \\int \\rho > 1 $ , and take any configuration of points $ x_1, \\cdots , x_M \\in \\mathbb {R}^d $ satisfying $ {x_i - x_j} \\geqslant \\eta {R {x_i} + R {x_j}} $ for $ i \\ne j $ , for some $ 0 < \\eta \\leqslant 1 $ .", "Then we have the bounds $\\sum \\limits _{i=1}^M \\frac{1}{R {x_i}^p}\\leqslant \\frac{C_{d,p}}{\\eta ^p} \\int _{\\mathbb {R}^d} \\rho ^{1+\\frac{p}{d}}$ for any $ p > 0 $ , and for any $ \\lambda > 0 $ , $\\sum \\limits _{i=1}^M \\frac{1}{R {x_i}^d} [\\Big ]{ \\log [\\Big ]{\\frac{\\lambda }{R {x_i}^d}} }_+\\leqslant \\frac{C_d}{\\eta ^d} \\int _{\\mathbb {R}^d} \\rho ^2 [\\Big ]{ \\log [\\Big ]{ \\frac{2^d \\lambda }{\\eta ^d} {B_1} \\rho } }_+.$ We consider any configuration $ x_1, \\cdots , x_M $ as in the statement, and seek to provide a bound on the 
sum $ \\sum _{i=1}^M \\frac{1}{R {x_i}^p} $ .", "We order the points such that $ R {x_1} \\leqslant \\cdots \\leqslant R {x_M} $ , and assume first for simplicity that all the balls $ B {x_j, R {x_j}} $ intersect the smallest ball $ B {x_1, R {x_1}} $ .", "The main idea of the following argument is to split the space $ \\mathbb {R}^d $ into shells of exponentially increasing width, centered around $ x_1 $ , and arguing that the number of points among $ x_2, \\cdots , x_M $ that can lie in each shell is universally bounded.", "To elaborate, take any $ \\tau > 1 $ and consider for $ m \\in \\mathbb {N}_0 $ the spherical shell of points $ y \\in \\mathbb {R}^d $ satisfying $\\tau ^m \\eta R {x_1}\\leqslant {x_1 - y}< \\tau ^{m+1} \\eta R {x_1}.$ Note that if $ x_j $ lies in this shell, then by Lipschitz continuity of $ R $ , $\\frac{2\\eta }{1+\\eta } R {x_j}\\leqslant {x_1-x_j}< \\tau ^{m+1} \\eta R {x_1},$ immediately implying that $R {x_j} < \\frac{1+\\eta }{2} \\tau ^{m+1} R {x_1}.$ This means that the ball $ B {x_j, \\eta R {x_j}} $ is contained in $B {x_j, \\eta R {x_j}}\\subseteq B {x_1, {x_1 - x_j} + \\eta R {x_j}}\\subseteq B [\\big ]{x_1, \\frac{3+\\eta }{2} \\tau ^{m+1} \\eta R {x_1}}.$ Furthermore, by the assumption that $ B {x_1, R {x_1}} \\cap B {x_j, R {x_j}} \\ne \\emptyset $ , we have that $\\frac{\\tau ^m}{2} \\eta R {x_1}\\leqslant \\frac{1}{2} {x_1 - x_j}< \\frac{1}{2} {R {x_1} + R {x_j}}\\leqslant R {x_j}.$ Since the balls $ B {x_j, \\eta R {x_j}} $ are all disjoint, we conclude that the number of $ x_j $ 's that can lie in the $ m $ 'th shell around $ x_1 $ is bounded by the ratio of the volumes $\\# {j \\mid \\tau ^m \\eta R {x_1} \\leqslant {x_1 - x_j} < \\tau ^{m+1} \\eta R {x_1}}\\leqslant {}& \\frac{ {B [\\big ]{x_1, \\frac{3+\\eta }{2} \\tau ^{m+1} \\eta R {x_1}}} }{ { B {0, \\frac{\\tau ^m}{2} \\eta R {x_1}}} } \\\\\\leqslant {}& 4^d \\tau ^d.$ Note also that no $ x_j $ can be placed inside the first shell (corresponding to $ {x_1- x_j} < \\eta R {x_1} $ ), because we always have $ {x_1 - x_j} \\geqslant \\eta {R {x_1} + R {x_j}} $ by assumption.", "Now, for any power $ p > 0 $ , this allows us to bound, using (REF ), $\\sum \\limits _{j = 1}^M \\frac{1}{R {x_j}^p}\\leqslant {}& 4^d \\tau ^d \\sum \\limits _{m=0}^{\\infty } \\frac{2^p}{{\\tau ^m \\eta R {x_1} }^p}\\leqslant {} \\frac{ 2^{p+2d} \\tau ^{p+d} }{ \\eta ^p {\\tau ^p - 1} } \\frac{1}{R {x_1}^p} \\nonumber \\\\\\leqslant {}& \\frac{ 2^{p+2d} \\tau ^{p+d} }{ \\eta ^p {\\tau ^p - 1} } {B_1}^{\\frac{p}{d}} \\int _{B{x_1, R {x_1}}} \\rho {y}^{1+\\frac{p}{d}} \\, \\mathrm {d}y.$ To bound the sum involving the logarithm, we note first that for any $ \\lambda > 0 $ , applying Jensen's inequality to the function $ t \\mapsto t^2 {\\log \\lambda t}_+ $ yields $\\frac{1}{R {x}^d} [\\Big ]{\\log [\\Big ]{ \\frac{\\lambda }{R {x}^d}}}_+={}& \\frac{{B_1}}{{B {x}}} [\\bigg ]{\\int _{B {x}} \\rho }^2 [\\Big ]{\\log [\\Big ]{\\frac{\\lambda {B_1}}{{B {x}}} \\int _{B {x}} \\rho }}_+ \\nonumber \\\\\\leqslant {}& {B_1} \\int _{B {x}} \\rho ^2 { \\log {\\lambda {B_1} \\rho }}_+.$ Using this, we obtain by again summing over all the shells, $\\sum \\limits _{i=1}^M \\frac{1}{R {x_i}^d} [\\Big ]{ \\log [\\Big ]{\\frac{\\lambda }{R {x_i}^d}} }_+\\leqslant {}& \\sum \\limits _{m=0}^{\\infty } \\frac{4^d \\tau ^d 2^d}{{\\tau ^m \\eta R {x_1} }^d} [\\Big ]{ \\log [\\Big ]{\\frac{2^d \\lambda }{{\\tau ^m \\eta R {x_1} }^d}} }_+ \\\\\\leqslant {}& \\frac{2^{3d} \\tau ^d}{\\eta ^d} \\sum \\limits _{m=0}^{\\infty } \\frac{1}{\\tau 
^{dm} R {x_1}^d} [\\Big ]{ \\log [\\Big ]{\\frac{2^d \\lambda }{\\eta ^d R {x_1}^d}} }_+ \\\\\\leqslant {}& \\frac{2^{3d} \\tau ^{2d} {B_1}}{\\eta ^d {\\tau ^d -1}} \\int _{B {x_1, R {x_1}}} \\rho ^2 [\\Big ]{ \\log [\\Big ]{ \\frac{2^d \\lambda }{\\eta ^d} {B_1} \\rho } }_+.$ Finally, we generalize to the case where not all the balls $ B {x_j, R {x_j}} $ intersect the smallest ball $ B {x_1, R {x_1}} $ .", "We split the configuration $ {x_j}_{1 \\leqslant j \\leqslant M} $ into clusters $ [\\big ]{x_j^{{k}}}_{1 \\leqslant j \\leqslant n_k} $ with $ 1 \\leqslant k \\leqslant K $ , such that: For any $ k $ , $ R {x_1^{{k}}} \\leqslant \\cdots \\leqslant R {x_{n_k}^{{k}}} $ .", "$ B {x_1^{{k}}, R {x_1^{{k}}}} \\cap B {x_j^{{k}}, R {x_j^{{k}}}} \\ne \\emptyset $ for any $ j,k $ .", "The balls $ B {x_1^{{k}}, R {x_1^{{k}}}} $ are all pairwise disjoint for $ k = 1, \\cdots , K $ .", "Then, using (REF ) on each cluster, we get for instance $\\sum \\limits _{j = 1}^M \\frac{1}{R {x_j}^p}\\leqslant {}& \\frac{ 2^{p+2d} \\tau ^{p+d} }{ \\eta ^p {\\tau ^p - 1} } {B_1}^{\\frac{p}{d}} \\sum \\limits _{k=1}^K \\int _{B {x_1^{{k}}, R {x_1^{{k}}}}} \\rho {y}^{1+\\frac{p}{d}} \\, \\mathrm {d}y \\\\\\leqslant {}& \\frac{ 2^{p+2d} \\tau ^{p+d} }{ \\eta ^p {\\tau ^p - 1} } {B_1}^{\\frac{p}{d}} \\int _{\\mathbb {R}^d} \\rho {y}^{1+\\frac{p}{d}} \\, \\mathrm {d}y,$ which concludes the proof of (REF ).", "(REF ) follows in the same way.", "Remark 25 (Hard-core at zero temperature) As we have mentioned in Section REF , in the hard core case $\\alpha =+\\infty $ , we know from (REF ) that for any representable density $\\rho $ , we have $F_0 [\\rho ]\\leqslant \\frac{\\kappa C}{r_0^s} \\int _{\\mathbb {R}^d} \\rho {x} \\, \\mathrm {d}x,$ where the constant $ C $ depends only on $ d $ and $ s $ .", "The problem is to determine when $\\rho $ is representable.", "Using Theorem REF , this is the case when for instance $R_\\rho =\\min _{x} R(x)\\geqslant r_0$ ." 
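, "As a quick illustration of the zero-temperature bound in prop:clbound (an added sanity check, not part of the original argument): for a constant density $\\rho \\equiv \\bar{\\rho }$ on a region of volume $V$ , so that $N=\\bar{\\rho }V$ , the right side of the bound reduces to $\\frac{C\\kappa }{\\eta ^{\\alpha }}\\left(r_0^\\alpha \\bar{\\rho }^{\\frac{\\alpha }{d}}+\\bar{\\rho }\\right)N$ , after absorbing $\\eta ^{-d}$ into $\\eta ^{-\\alpha }$ using $\\eta \\leqslant 1$ and $\\alpha >d$ .", "The estimate is therefore extensive in $N$ , consistent with the stability of the interaction $w$ ."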
], [ "The block approximation", "While the state from thm:clstate is useful for obtaining energy bounds at zero temperature, it might be singular with respect to the Lebesgue measure on $ \\mathbb {R}^{dN} $ , leaving it unsuitable to use for the positive temperature case, because the entropy in this case will be infinite.", "Here we describe a simple way of regularizing states, while keeping the one-body density fixed, which is a slight generalization to any partition of unity of the construction in [9].", "Essentially, it works by cutting $ \\mathbb {R}^d $ into \"blocks\" and then locally replacing the state by a pure tensor product.", "Let $ \\sum \\chi _j = {1}_{\\mathbb {R}^d} $ be any partition of unity, and $ \\mathbb {P}$ any $ N $ -particle state with density $ \\rho $ .", "The corresponding block approximation is defined by $\\widetilde{\\mathbb {P}}:= \\sum \\limits _{i_1,...,i_N} \\mathbb {P}{\\chi _{i_1} \\otimes \\cdots \\otimes \\chi _{i_N}} \\frac{{ \\rho \\chi _{i_1}} \\otimes \\cdots \\otimes { \\rho \\chi _{i_N}} }{\\prod _{k=1}^N\\int _{\\mathbb {R}^d} \\rho \\chi _{i_k}},$ where we denote $\\mathbb {P}{\\chi _{i_1} \\otimes \\cdots \\otimes \\chi _{i_N}}:= \\int _{\\mathbb {R}^{dN}} \\chi _{i_1} \\otimes \\cdots \\otimes \\chi _{i_N} \\, \\mathrm {d}\\mathbb {P}.$ That is, $ \\widetilde{\\mathbb {P}} $ is a convex combination of tensor products of the normalized $ \\frac{\\rho \\chi _i}{\\int \\rho \\chi _i} $ .", "One can easily show that $ \\widetilde{\\mathbb {P}} $ has one-body density $ \\rho _{\\widetilde{\\mathbb {P}}} = \\rho $ .", "Furthermore, it is clear that $ \\widetilde{\\mathbb {P}} $ is a symmetric measure whenever $ \\mathbb {P}$ is, so we can also write $\\widetilde{\\mathbb {P}}= \\sum \\limits _{i_1,...,i_N} \\mathbb {P}{\\chi _{i_1} \\otimes \\cdots \\otimes \\chi _{i_N}} \\Pi _s [\\Big ]{ \\frac{\\rho \\chi _{i_1}}{\\int \\rho \\chi _{i_1}} \\otimes \\cdots \\otimes \\frac{\\rho \\chi _{i_N}}{\\int \\rho \\chi _{i_N}} },$ where $ \\Pi _s $ denotes the symmetrization operator in (REF ).", "In [9] the chosen partition of unity is just a tiling made of cubes, but in fact any partition works.", "Applying Jensen's inequality yields the following.", "Lemma 26 (Entropy of the block approximation) Suppose that the state $ \\mathbb {P}$ and the partition of unity $ {\\chi _j} $ are such that $ \\chi _{i_1}, \\cdots , \\chi _{i_N} $ all have disjoint supports whenever $ \\mathbb {P}{\\chi _{i_1} \\otimes \\cdots \\otimes \\chi _{i_N}} \\ne 0 $ .", "Then we have $\\int _{\\mathbb {R}^{dN}} \\widetilde{\\mathbb {P}} \\log { N!", "\\, \\widetilde{\\mathbb {P}}}\\leqslant {} \\int _{\\mathbb {R}^d} \\rho \\log \\rho + \\int _{\\mathbb {R}^d} \\rho \\sum \\limits _i \\chi _i \\log \\chi _i\\\\ -\\sum \\limits _{i} [\\Big ]{\\int _{\\mathbb {R}^d} \\rho \\chi _{i}} \\log [\\Big ]{\\int _{\\mathbb {R}^d} \\rho \\chi _{i}}.$ Remark 27 Since $ {\\chi _i} $ is a partition of unity, the term above involving $ \\chi _i \\log \\chi _i $ can always be estimated from above by zero.", "On the other hand, it is not clear that the sum in last term above is even finite for an arbitrary partition $ {\\chi _i} $ .", "However, it turns out to behave nicely in many situations.", "For instance, if $\\int \\rho \\chi _j\\leqslant 1$ for all $j$ , we can estimate it by $ 1/e $ times the number of terms in the partition of unity, which is typically finite when $\\rho $ has compact support.", "The entropy of the block approximation can be estimated using Jensen's inequality by $& 
\\int \\widetilde{\\mathbb {P}} \\log { N!", "\\, \\widetilde{\\mathbb {P}}} \\\\&\\ \\leqslant \\sum \\limits _{i_1,...,i_N} \\mathbb {P}{\\chi _{i_1}\\otimes \\cdots \\otimes \\chi _{i_N}} \\int \\Pi _s \\left( \\bigotimes _k \\frac{\\rho \\chi _{i_k}}{\\int \\rho \\chi _{i_k}} \\right) \\log \\left( N!", "\\, \\Pi _s \\bigotimes _k \\frac{\\rho \\chi _{i_k}}{\\int \\rho \\chi _{i_k}} \\right)\\\\&\\ = \\sum \\limits _{i_1,...,i_N} \\mathbb {P}{\\chi _{i_1}\\otimes \\cdots \\otimes \\chi _{i_N}} \\int \\bigotimes _k \\frac{\\rho \\chi _{i_k}}{\\int \\rho \\chi _{i_k}} \\log \\left(\\sum _{\\sigma \\in \\mathfrak {S}_N}\\bigotimes _k \\frac{\\rho \\chi _{i_{\\sigma (k)}}}{\\int \\rho \\chi _{i_{\\sigma (k)}}} \\right).$ We have here used the symmetry of $\\mathbb {P}$ to remove the first $\\Pi _s$ .", "It is important that the $N!$ has disappeared in the logarithm.", "For any non-zero term, the supports of the $ \\chi _{i_k} $ are all disjoint, hence only the case $\\sigma =\\text{Id}$ remains in the sum.", "Using that $\\int \\bigotimes _k \\frac{\\rho \\chi _{i_k}}{\\int \\rho \\chi _{i_k}} \\log \\left(\\bigotimes _k \\frac{\\rho \\chi _{i_k}}{\\int \\rho \\chi _{i_k}} \\right)= \\sum \\limits _{k=1}^N \\int \\frac{\\rho \\chi _{i_k}}{\\int \\rho \\chi _{i_k}} \\log \\frac{ \\rho \\chi _{i_k}}{\\int \\rho \\chi _{i_k}}$ and plugging this into the previous expression, we conclude that (REF ) holds." ], [ "Proof of Theorem ", "We assume first that the density $ \\rho $ is compactly supported, and then remove this assumption at the end.", "Applying the Besicovitch covering lemma [32], [18] on the cover $ {B {x, \\varepsilon R {x}} x \\in \\operatorname{supp}\\rho } $ gives the existence of a (finite) set of points $ {y_j} \\subseteq \\operatorname{supp}\\rho $ satisfying that $ {B_j} := {B {y_j, \\varepsilon R {y_j}}} $ covers the support of $ \\rho $ , and the multiplicity of the cover is universally bounded, i.e., $1 \\leqslant \\varphi {x}:= \\sum \\limits _j {1 }_{B_j} {x}\\leqslant C_d,\\qquad x \\in \\operatorname{supp}\\rho ,$ where the constant $ C_d $ depends only on the dimension $ d $ , and thus not on $ \\varepsilon $ or $ \\rho $ .", "This gives us a partition of unity $ {\\chi _j} $ defined by $ \\chi _j := \\frac{{1 }_{B_j}}{\\varphi } $ .", "One way of constructing the Besicovitch cover is to inductively maximize $ \\varepsilon R {y_j} $ over the remaining volume $ y_j \\in \\operatorname{supp}\\rho \\setminus \\bigcup _{k=1}^{j-1} B_k $ , supposing that $ y_1, \\cdots , y_{j-1} $ have already been chosen.", "This construction implies the bound on the distances ${y_j - y_k}\\geqslant \\max {\\varepsilon R {y_j}, \\varepsilon R {y_k}}\\geqslant \\frac{\\varepsilon }{2} {R {y_j} + R {y_k}}$ for all $ j \\ne k $ .", "We now take the optimal transport state $ \\mathbb {P}$ obtained from thm:clstate, and denote by $ m_j := \\int \\rho \\chi _j = \\int _{B_j} \\frac{\\rho }{\\varphi } $ the local mass of $ \\rho $ with respect to the partition of unity $ {\\chi _j} $ .", "As a trial state for the free energy, we take the block approximation (REF ) of $ \\mathbb {P}$ using the $ \\chi _j $ , i.e., $\\mathbb {P}_{\\varepsilon }:={} \\sum \\limits _{j_1, \\cdots , j_N} \\mathbb {P} {\\chi _{j_1} \\times \\cdots \\times \\chi _{j_N}} [\\Big ]{\\frac{\\rho \\chi _{j_1}}{m_{j_1}} } \\otimes \\cdots \\otimes [\\Big ]{\\frac{\\rho \\chi _{j_N}}{m_{j_N}} }.$ We show that the support of $ \\mathbb {P}_{\\varepsilon } $ satisfies the condition (REF ) for some $ \\eta $ .", "For any point $ 
{x_1, \\cdots , x_N} \\in \\operatorname{supp}\\mathbb {P}_{\\varepsilon } $ , there must be a term in the sum above such that $ \\mathbb {P}{\\chi _{j_1} \\times \\cdots \\times \\chi _{j_N}} \\ne 0 $ , and $ x_k \\in B_{j_k} = B{y_{j_k}, \\varepsilon R {y_{j_k}}} $ for all $ k $ .", "In particular, since the support of $ \\mathbb {P}$ satisfies (REF ), there exist $ z_1, \\cdots , z_N $ with $ z_k \\in B_{j_k} $ and $ {z_k - z_{\\ell }} \\geqslant \\frac{1}{3} {R {z_k} + R {z_{\\ell }}} $ for any $ k \\ne \\ell $ .", "By the Lipschitz continuity of $ R $ , $ x_k \\in B_{j_k} $ implies that $ R {y_{j_k}} \\leqslant \\frac{1}{1-\\varepsilon } R {x_k} $ , so ${x_k - z_k}\\leqslant 2 \\varepsilon R {y_{j_k}}\\leqslant \\frac{2\\varepsilon }{1-\\varepsilon } R {x_k}.$ Finally, this gives us the bound ${x_k - x_{\\ell }}\\geqslant {}& {z_k - z_{\\ell }} - {x_k - z_k} - {x_{\\ell } - z_{\\ell }} \\nonumber \\\\\\geqslant {}& \\frac{1}{3} {R {z_k} + R {z_{\\ell }}} - {x_k - z_k} - {x_{\\ell } - z_{\\ell }} \\nonumber \\\\\\geqslant {}& \\frac{1}{3} {R {x_k} + R {x_{\\ell }}} - \\frac{4}{3} {{x_k - z_k} + {x_{\\ell } - z_{\\ell }}} \\nonumber \\\\\\geqslant {}& \\frac{1}{3} [\\Big ]{1 - \\frac{8 \\varepsilon }{1-\\varepsilon }} {R {x_k} + R {x_{\\ell }}}.$ This argument also shows that if $ \\mathbb {P}{\\chi _{j_1} \\times \\cdots \\times \\chi _{j_N}} \\ne 0 $ , then the sets $ B_{j_k} $ are disjoint for $ k = 1, \\cdots , N $ , provided that $ \\varepsilon < \\frac{1}{9} $ .", "Now, since $ \\mathbb {P}_{\\varepsilon } $ satisfies (REF ), it follows from prop:clbound that the interaction energy (in case $ \\alpha > d $ ) is bounded by $\\mathcal {U}_{N} {\\mathbb {P}_{\\varepsilon }}\\leqslant {} C\\kappa r_0^\\alpha \\int _{\\mathbb {R}^d} \\rho ^{1+\\frac{\\alpha }{d}} + C\\kappa \\int _{\\mathbb {R}^d} \\rho ^2,$ and similarly for $ \\alpha = d $ , using prop:clbound3.", "Thus, to show (REF ), it only remains to provide a bound on the entropy of the state $ \\mathbb {P}_{\\varepsilon } $ .", "First, applying lem:blockentropy immediately gives $\\int _{\\mathbb {R}^{dN}} \\mathbb {P}_{\\varepsilon } \\log {N!", "\\, \\mathbb {P}_{\\varepsilon }}\\leqslant {} \\int _{\\mathbb {R}^d} \\rho \\log \\rho - \\sum \\limits _{j} m_j \\log m_j.$ Then, for any numbers $ s,t \\geqslant 0 $ , we can use the elementary bound $- s \\log {t s} \\leqslant \\frac{1}{et}$ to conclude that $- \\sum \\limits _{j} m_j \\log m_j={}& \\sum \\limits _{j} m_j \\log {R{y_j}^d} - m_j \\log {R {y_j}^d m_j} \\\\\\leqslant {}& \\sum \\limits _{j} d \\int \\rho {x} \\chi _{j} {x} \\log { {1+\\varepsilon } R {x} } \\, \\mathrm {d}x + \\frac{1}{e R {y_j}^d} \\\\\\leqslant {}& d \\log {1+\\varepsilon } \\int _{\\mathbb {R}^d} \\rho + d \\int _{\\mathbb {R}^d} \\rho \\log R + \\frac{C}{\\varepsilon ^d} \\int _{\\mathbb {R}^d} \\rho ^2,$ where the last inequality uses (REF ) and lem:Rconfigbound.", "This proves thm:OTblock for compactly supported densities.", "$\\Box $ Remark 28 (Hard-core case) In the hard core case $\\alpha =+\\infty $ , the above proof provides the bound $F_T [\\rho ]\\leqslant C\\frac{\\kappa }{r_0^d}\\int _{\\mathbb {R}^d}\\rho +CT \\int _{\\mathbb {R}^d} \\rho + T \\int _{\\mathbb {R}^d} \\rho \\log \\rho +\\frac{CTr_0^d}{(R_\\rho -r_0)^d}\\int _{\\mathbb {R}^d}\\rho ^2\\\\+T \\int _{\\mathbb {R}^d} \\rho \\log R^d$ under the assumption that $R_\\rho =\\min _x R(x)>r_0$ , where $C$ only depends on $d$ and $s$ .", "The main difference is the estimate on the distance between the particles in (REF ).", "We 
need to keep the maximum and use ${x_k - x_{\\ell }}\\geqslant {}& \\max \\left\\lbrace R_\\rho ,\\frac{1}{3} {R {x_k} + R {x_{\\ell }}}\\right\\rbrace - \\frac{8 \\varepsilon }{3(1-\\varepsilon )} {R {x_k} + R {x_{\\ell }}}\\\\\\geqslant {}& \\left(1-\\frac{8\\varepsilon }{1-\\varepsilon }\\right)R_\\rho .$ Taking $\\varepsilon =\\min (R_\\rho /r_0-1,1)/100$ provides (REF )." ], [ "Removal of the compactness condition", "To finish this section we describe how to extend a result holding for compactly supported densities to general integrable ones, using this time a compactness argument.", "Theorem 29 Assume that $w$ satisfies Assumption REF .", "If we have for some $1\\leqslant p\\leqslant q<\\infty $ with $q\\geqslant 2$ and some constants $C_j\\geqslant 0$ $F_T[\\rho ]\\leqslant C_0\\int \\rho +C_1\\int \\rho ^{p}+C_2\\int \\rho ^q +T\\int \\rho \\log \\rho \\\\+C_3\\int \\rho ^2 (\\log \\rho )_++C_4\\int \\rho \\log R$ for all $\\rho \\in L^1\\cap L^q$ of compact support, then the same holds with the same constants for all $\\rho \\in L^1\\cap L^q$ .", "If $T>0$ we assume in both cases that $\\int _{\\mathbb {R}^d}\\rho |\\log \\rho |<\\infty $ .", "Let us first assume $C_4=0$ for simplicity.", "Our proof uses that the energy $\\rho \\mapsto F_T[\\rho ]$ is lower semi-continuous for the strong topology of $L^1$ , as previously mentioned in Remark REF , that is, $F_T[\\rho ]\\leqslant \\liminf _{n\\rightarrow \\infty }F_T[\\rho _n]\\\\\\text{if $\\rho _n\\rightarrow \\rho $ strongly in $L^1(\\mathbb {R}^d)$ with $\\int \\rho _n^{q}+T\\int \\rho _n|\\log \\rho _n|\\leqslant C$.", "}$ The theorem then follows immediately by letting $\\rho _n:=\\frac{N}{\\int _{B_n}\\rho }\\;\\rho {1 }_{B_n}$ the truncation of $\\rho $ over the ball of radius $n$ .", "Note that $\\rho _n\\leqslant (1+o(1))\\rho $ .", "The sequence $\\rho _n$ clearly satisfies the convergence properties of (REF ) and therefore the lower semi-continuity provides $F_T[\\rho ]\\leqslant {}& \\!", "\\liminf _{n\\rightarrow \\infty } F_T[\\rho _n] \\\\\\leqslant {}&\\liminf _{n\\rightarrow \\infty } \\Bigg \\lbrace C_0N+ C_1 [\\bigg ]{ \\frac{N}{\\int _{B_n}\\rho } }^p\\int _{B_n}\\rho ^p+C_2 [\\bigg ]{ \\frac{N}{\\int _{B_n}\\rho }}^q\\int _{B_n}\\rho ^q \\\\&\\quad +T\\frac{N}{\\int _{B_n}\\rho }\\int _{B_n}\\rho \\log \\rho +T\\frac{N}{\\int _{B_n}\\rho } \\log [\\bigg ]{ \\frac{N}{\\int _{B_n}\\rho }} \\int _{B_n}\\rho \\\\&\\quad +C_3 [\\bigg ]{ \\frac{N}{\\int _{B_n}\\rho } }^2\\int _{B^{\\prime }_n}\\rho ^2\\log \\rho +2 [\\bigg ]{\\frac{N}{\\int _{B_n}\\rho }}^2 \\log [\\bigg ]{\\frac{N}{\\int _{B_n}\\rho } } \\int _{B^{\\prime }_n}\\rho ^2\\Bigg \\rbrace \\\\={}& C_0N+C_1\\int \\rho ^{p}+C_2\\int \\rho ^q +T\\int \\rho \\log \\rho +C_3\\int \\rho ^2(\\log \\rho )_+,$ where $B^{\\prime }_n:=B_n\\cap \\big \\lbrace \\rho \\geqslant N^{-1}\\int _{B_n}\\rho \\big \\rbrace $ .", "When $C_4>0$ the proof is similar.", "We need to use that $(1+|x|)/C\\leqslant R(x),R_n(x)\\leqslant C(1+|x|)$ for some $C>0$ (depending on $\\rho $ ), where $R_n(x)$ is the local radius of the truncated density $\\rho _n$ , which converges locally to $R$ .", "The uniform bounds on $R$ and $R_n$ imply that we must work under the assumptions that $\\int \\rho (\\log |x|)_+$ is finite (otherwise there is nothing to show).", "The limit follows from dominated convergence.", "For the convenience of the reader, we conclude by quickly recalling the proof of the lower semi-continuity (REF ).", "We consider an arbitrary sequence $\\rho _n$ converging to 
$\\rho $ strongly in $L^1$ and satisfying the bounds in (REF ).", "It is known that there exists an optimal $\\mathbb {P}_n$ for $F_T[\\rho _n]$ (but we could as well use a quasi-minimizer).", "From the upper bound we have $F_T[\\rho _n]\\leqslant C$ for some constant $C$ and therefore $C\\geqslant {}& F_T[\\rho _n]={} \\mathcal {F}_T(\\mathbb {P}_n)\\\\={}& \\int _{(\\mathbb {R}^d)^N} \\sum _{1\\leqslant j<k\\leqslant N}w(x_j-x_k) \\, \\mathbb {P}_n+T\\int \\mathbb {P}_n\\log (N!", "\\, \\mathbb {P}_n)\\\\={}& \\int _{(\\mathbb {R}^d)^N} [\\bigg ]{ \\sum _{1\\leqslant j<k\\leqslant N}w(x_j-x_k)+\\kappa N} \\mathbb {P}_n+T\\int \\mathbb {P}_n\\log \\left(\\frac{\\mathbb {P}_n}{(\\rho _n/N)^{\\otimes N}}\\right)\\\\&-\\kappa N+T\\int \\rho _n\\log \\rho _n+T\\log \\frac{N!", "}{N^N}.$ The first term is non-negative from the stability property of $w$ and the second is a relative entropy, hence is also non-negative.", "We have thus proved that $T\\int \\mathbb {P}_n\\log \\mathbb {P}_n\\leqslant C(\\rho ,N,T)$ where the constant can depend on $\\rho ,N,T$ but not on $n$ .", "On the other hand, we know that the sequence $(\\mathbb {P}_n)$ is tight, that is, $\\int _{\\max |x_j|\\geqslant R}\\,\\mathrm {d}\\mathbb {P}_n\\leqslant \\int \\sum _{j=1}^N{1 }(|x_j|\\geqslant R)\\,\\mathrm {d}\\mathbb {P}_n=\\int _{|x|\\geqslant R}\\rho _n$ where the right side is small due to the strong convergence in $L^1$ .", "After extraction of a subsequence, this implies $\\int F\\,\\mathrm {d}\\mathbb {P}_n\\rightarrow \\int F\\,\\mathrm {d}\\mathbb {P}$ for every $F\\in C^0_b$ .", "Taking $F(x_1,...,x_N)=\\sum _{j=1}^Nf(x_j)$ with $f\\in C^0_b$ , we find that $\\int f\\rho _{\\mathbb {P}_n}\\rightarrow \\int f\\rho _{\\mathbb {P}}$ , that is, $\\rho _{\\mathbb {P}}=\\rho $ .", "In addition, we have (by convexity) $T\\int \\mathbb {P}\\log \\mathbb {P}\\leqslant T\\liminf _{n\\rightarrow \\infty }\\int \\mathbb {P}_n\\log \\mathbb {P}_n\\leqslant C.$ Hence $\\mathbb {P}$ is admissible for $F_T[\\rho ]$ , and absolutely continuous with respect to the Lebesgue measure if $T>0$ .", "We thus have $\\liminf _{n\\rightarrow \\infty }\\int F\\,\\mathrm {d}\\mathbb {P}_n\\geqslant \\int F\\,\\mathrm {d}\\mathbb {P}$ for every measurable function $F\\geqslant 0$ if $T>0$ (using the absolute continuity of $\\mathbb {P}$ ) and for every lower semi-continuous function $F\\geqslant 0$ if $T=0$ .", "This is satisfied for our interaction $w$ by Assumption REF and therefore we obtain as we wanted $[8] \\liminf _{n\\rightarrow \\infty }\\int _{(\\mathbb {R}^d)^N} [\\bigg ]{ \\sum _{1\\leqslant j<k\\leqslant N}w(x_j-x_k)+\\kappa N }\\mathrm {d}\\mathbb {P}_n \\\\\\geqslant {}& \\int _{(\\mathbb {R}^d)^N} [\\bigg ]{ \\sum _{1\\leqslant j<k\\leqslant N}w(x_j-x_k)+\\kappa N } \\mathrm {d}\\mathbb {P}.$ Together with the entropy bound (REF ) when $T>0$ , this proves that $\\liminf _{n\\rightarrow \\infty }\\left(F_T[\\rho _n]+\\kappa N\\right)\\geqslant F_T[\\rho ]+\\kappa N$ which is the claimed lower semi-continuity (REF )." 
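, "To make the truncation step concrete, the following minimal Python sketch (our addition; the Gaussian density and the grid are hypothetical choices for illustration only) builds the renormalized truncations $\\rho _n=\\frac{N}{\\int _{B_n}\\rho }\\,\\rho {1 }_{B_n}$ in one dimension and checks numerically that $\\rho _n\\rightarrow \\rho $ strongly in $L^1$ while $\\int \\rho _n\\log \\rho _n$ stays bounded and converges, matching the hypotheses of the lower semi-continuity statement:"

```python
import numpy as np

# Minimal 1d illustration (assumed example) of the truncation
#   rho_n = (N / \int_{B_n} rho) * rho * 1_{B_n}
# used in the compactness-removal argument: the renormalized
# truncations converge to rho in L^1 and their entropies converge.
x = np.linspace(-60.0, 60.0, 240001)
dx = x[1] - x[0]
N = 3.0
rho = N * np.exp(-x**2 / 2.0) / np.sqrt(2.0 * np.pi)  # smooth density of total mass N

for n in (2, 5, 10, 20):
    mask = np.abs(x) < n                        # the ball B_n (an interval in 1d)
    mass_n = np.sum(rho[mask]) * dx             # \int_{B_n} rho, tends to N
    rho_n = np.where(mask, rho * (N / mass_n), 0.0)
    l1_err = np.sum(np.abs(rho_n - rho)) * dx   # ||rho_n - rho||_{L^1}, tends to 0
    entropy = np.sum(rho_n[mask] * np.log(rho_n[mask])) * dx
    print(f"n={n:2d}  L1 error = {l1_err:.3e}  entropy = {entropy:+.6f}")
```

, "The printed $L^1$ errors decay to zero while the entropy values stabilize, which is exactly the behavior the compactness argument above relies on."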
], [ "Proof of Theorem ", "In this section we prove Theorem REF concerning densities which are uniformly bounded in terms of the tight packing density $\\rho _c(d)$ .", "We start by constructing a trial state with constant density by averaging a periodic tight packing over translations.", "Such a uniform average of a periodic lattice is often called a “floating crystal” [53], [55] in Physics and Chemistry.", "Finally, we estimate the entropic cost of “geometrically localizing” [48] this state to enforce the desired density.", "Step 1.", "Constant density.", "We have assumed $\\rho \\leqslant (1-\\varepsilon )^dr_0^{-d}\\rho _c(d)$ .", "Let $\\eta >0$ be a fixed small number which will later be chosen in terms of $\\varepsilon $ .", "From the definition of $\\rho _c(d)$ we can find a large cube $C_\\ell =(-\\ell /2,\\ell /2)^d$ and $n=(1+2\\eta )^{-d}r_0^{-d}\\rho _c(d)\\ell ^d\\in \\mathbb {N}$ points $x_1^0,...,x_n^0\\in C_\\ell $ satisfying $|x^0_j-x^0_k|\\geqslant r_0(1+\\eta )$ for all $j\\ne k$ .", "We can also assume that no point is at a distance less than $r_0$ to the boundary of $C_\\ell $ .", "We are using here that the tight packing density for $r_0(1+\\eta )$ is $(1+\\eta )^{-d}r_0^{-d}\\rho _c(d)>(1+2\\eta )^{-d}r_0^{-d}\\rho _c(d)$ and that the limit (REF ) is the same for cubes and for balls.", "Now, we replace each point $x_j^0$ by a smeared measure $\\chi _j^0(x)=\\frac{2^d}{(r_0\\eta )^d}\\chi [\\bigg ]{ 2\\frac{x-x_j^0}{r_0\\eta } }$ where $\\chi =|B_1|^{-1}{1 }_{B_1}$ .", "The smearing radius $\\eta r_0/2$ has been chosen so that the supports of the $\\chi ^0_j$ remain at distance at least $r_0$ .", "Finally, we consider $(2K+1)^d$ copies of our system ($K\\in \\mathbb {N}$ ), repeated in a periodic fashion so as to form a very large cube $C_L=(-L/2,L/2)^d$ of side length $L=(2K+1)\\ell $ .", "In other words, we define the $N:=(2K+1)^dn$ points $x_j^k:=x_j^0+kL$ with $k\\in \\lbrace -K,...,K\\rbrace ^d$ .", "The smeared measures $\\chi _j^k$ are defined similarly.", "The state $\\mathbb {P}=\\Pi _s\\bigotimes _{\\begin{array}{c}j\\in \\lbrace 1,...,n\\rbrace \\\\ k\\in \\lbrace -K,...K\\rbrace ^d\\end{array}} \\chi _j^k$ has the density $\\rho =\\sum _{j,k}\\chi _j^k$ and the finite entropy $\\int _{(\\mathbb {R}^d)^N}\\mathbb {P}\\log (N!\\,\\mathbb {P})=N\\int _{\\mathbb {R}^d}\\chi \\log \\chi =N\\log \\left(\\frac{2^d}{|B_1|r_0^d\\eta ^d}\\right)$ (recall $\\Pi _s$ is the symmetrization operator in (REF )).", "Finally, we average over translations of the big cube and define the trial state $\\widetilde{\\mathbb {P}}=\\frac{1}{\\ell ^d}\\int _{C_\\ell }\\mathbb {P}(\\cdot +\\tau )\\,\\mathrm {d}\\tau ,$ which has the density $\\widetilde{\\rho }=\\frac{1}{\\ell ^d}\\sum _{j}\\chi _j^0\\ast {1 }_{C_L}.$ The latter is constant, equal to $n/\\ell ^d=(1+2\\eta )^{-d}r_0^{-d}\\rho _c(d)$ well inside the large cube.", "Note that, by concavity, the entropy of $\\widetilde{\\mathbb {P}}$ can be estimated by that of $\\mathbb {P}$ .", "Step 2.", "Geometric localization.", "We assume for the rest of the proof that $\\rho $ has a compact support and we choose $K$ large enough so that $\\widetilde{\\rho }$ is constant on the support of $\\rho $ .", "Our estimates will not depend on $K$ .", "One can then deduce the bound for general densities by adapting the proof of Theorem REF , or by passing to the limit $K\\rightarrow \\infty $ in the formulas (REF )–(REF ) of the trial state.", "We pick $\\eta $ so that $(1-\\varepsilon )^d=(1+2\\eta )^{-d}$ , that is, $\\eta 
=\\frac{\\varepsilon }{2(1-\\varepsilon )}.$ Then we have $\\rho \\leqslant \\widetilde{\\rho }$ a.e.", "This enables us to consider the localization function $\\theta :=\\frac{\\rho }{\\widetilde{\\rho }}=\\frac{\\rho }{(1+2\\eta )^{-d}r_0^{-d}\\rho _c(d)}\\leqslant 1$ and the $\\theta $ –localized state $\\widetilde{\\mathbb {P}}_{|\\theta }$ , which has the desired density $\\theta \\rho _{\\widetilde{\\mathbb {P}}}=\\rho $ .", "We recall that the $\\theta $ –localization $\\mathbb {Q}_{|\\theta }$ of a state $\\mathbb {Q}$ (with $0\\leqslant \\theta \\leqslant 1$ ) is the unique state which has the correlation functions $\\rho ^{(k)}=\\rho ^{(k)}_{\\mathbb {Q}}\\theta ^{\\otimes k}$ for all $k$ , see [39], [48], [31].", "In our case we only need the definition for a tensor product since we have by linearity $\\widetilde{\\mathbb {P}}_{|\\theta }=\\frac{1}{\\ell ^d}\\int _{C_\\ell }\\mathbb {P}(\\cdot +\\tau )_{|\\theta }\\,\\mathrm {d}\\tau .$ For a symmetric tensor product $\\mathbb {Q}=\\Pi _s(q_1\\otimes \\cdots \\otimes q_N)$ with probabilities $q_j$ of disjoint support, the $\\theta $ -localized state can be expressed as $\\mathbb {Q}_{|\\theta }=\\bigoplus _{n=0}^N\\binom{N}{n} \\frac{1}{N!", "}\\sum _{\\sigma \\in \\mathfrak {S}_N} (\\theta q_{\\sigma (1)})\\otimes \\cdots \\otimes (\\theta q_{\\sigma (n)})\\times \\\\\\times \\left(1-\\int \\theta q_{\\sigma (n+1)}\\right)\\cdots \\left(1-\\int \\theta q_{\\sigma (N)}\\right).$ We will need the following.", "Lemma 30 (Entropy of localization of tensor products) Let $\\mathbb {Q}=\\Pi _s(q_1\\otimes \\cdots \\otimes q_N)$ be a symmetric tensor product, with $q_1,...,q_N$ probability measures of disjoint supports.", "For any $0\\leqslant \\theta \\leqslant 1$ , we have $\\mathcal {S}(\\mathbb {Q}_{|\\theta })=-\\sum _j \\int _{\\mathbb {R}^d} (\\theta q_{j})\\log (\\theta q_{j})-\\sum _j \\left(1-\\int _{\\mathbb {R}^d}\\theta q_{j}\\right)\\log \\left(1-\\int _{\\mathbb {R}^d}\\theta q_{j}\\right).$ In particular, we deduce $-\\mathcal {S}(\\mathbb {Q}_{|\\theta })\\leqslant \\sum _j \\int _{\\mathbb {R}^d} (\\theta q_{j})\\log (\\theta q_{j}).$ As a side remark we also note also that (REF ) provides $\\mathcal {S}(\\mathbb {Q}_{|\\theta })+\\mathcal {S}(\\mathbb {Q}_{|1-\\theta })\\\\=\\mathcal {S}(\\mathbb {Q})-\\sum _j \\left[\\left(1-\\int \\theta q_{j}\\right)\\log \\left(1-\\int \\theta q_{j}\\right)+\\left(\\int \\theta q_{j}\\right)\\log \\left(\\int \\theta q_{j}\\right)\\right]\\\\-\\int \\rho \\Big (\\theta \\log \\theta +(1-\\theta )\\log (1-\\theta )\\Big ).$ The additional terms are positive and therefore we recover the subadditivity of the entropy $\\mathcal {S}(\\mathbb {Q})\\leqslant \\mathcal {S}(\\mathbb {Q}_{|\\theta })+\\mathcal {S}(\\mathbb {Q}_{|1-\\theta })$  [39].", "Each tensor product $(\\theta q_{\\sigma (1)})\\otimes \\cdots \\otimes (\\theta q_{\\sigma (n)})$ appears exactly $(N-n)!$ times with the same weight in (REF ).", "We can thus write it in the better form $\\mathbb {Q}_{|\\theta }=\\bigoplus _{n=0}^N\\frac{1}{n!", "}\\sum _{j_1\\ne \\cdots \\ne j_n}(\\theta q_{j_1})\\otimes \\cdots \\otimes (\\theta q_{j_n})\\prod _{k\\notin \\lbrace j_1,...,j_n\\rbrace }\\left(1-\\int \\theta q_{k}\\right)$ where now the terms all have disjoint supports.", "We obtain that the entropy equals $\\mathcal {S}(\\mathbb {Q}_{|\\theta })=-\\sum _{n=0}^N\\frac{1}{n!", "}\\int _{\\mathbb {R}^{dn}}\\sum _{j_1\\ne \\cdots \\ne j_n}(\\theta q_{j_1})\\otimes \\cdots \\otimes (\\theta q_{j_n})\\prod _{k\\notin \\lbrace 
j_1,...,j_n\\rbrace }\\left(1-\\int \\theta q_{k}\\right)\\times \\\\\\times \\log [\\bigg ]{ (\\theta q_{j_1})\\otimes \\cdots \\otimes (\\theta q_{j_n})\\prod _{k\\notin \\lbrace i_1,...,i_n\\rbrace }\\left(1-\\int \\theta q_{k}\\right) }.$ Note that the $n!$ in the logarithm simplifies with the $1/n!$ .", "Expanding the logarithm and collecting the terms we obtain the claimed formula.", "In our case, we deduce by concavity that $-\\mathcal {S}(\\widetilde{\\mathbb {P}}_{|\\theta })&\\leqslant -\\frac{1}{\\ell ^d}\\int _{C_\\ell }\\mathcal {S}\\big (\\mathbb {P}(\\cdot -\\tau )_{|\\theta }\\big )\\,\\mathrm {d}\\tau \\\\&\\leqslant \\frac{1}{\\ell ^d}\\sum _{j,k}\\int _{C_\\ell }\\int _{\\mathbb {R}^d}\\theta (x)\\chi _j^k(x-\\tau )\\log \\big (\\theta (x)\\chi _j^k(x-\\tau ) \\big )\\,\\mathrm {d}\\tau \\,\\mathrm {d}x\\\\&=\\frac{1}{\\ell ^d}\\sum _{j,k}\\int _{C_\\ell }\\int _{\\mathbb {R}^d}\\theta (x)\\chi _j^k(x-\\tau )\\log \\frac{\\rho (x)\\chi _j^k(x-\\tau )}{(1+2\\eta )^{-d}r_0^{-d}\\rho _c(d)}\\,\\mathrm {d}\\tau \\,\\mathrm {d}x.$ We estimate $\\chi _j^k$ in the logarithm by its supremum $\\Vert \\chi _j^k\\Vert _\\infty =\\frac{2^d}{(r_0\\eta )^d|B_1|}$ and use that $\\frac{\\theta (x)}{\\ell ^d}\\sum _{j,k}\\int _{C_\\ell }\\chi _j^k(x-\\tau )\\,\\mathrm {d}\\tau =\\theta (x)\\widetilde{\\rho }(x)=\\rho (x).$ We obtain $-\\mathcal {S}(\\widetilde{\\mathbb {P}}_{|\\theta })&\\leqslant \\log \\left(\\frac{(1+2\\eta )^{d}}{\\eta ^dv_c(d)}\\right)\\int \\rho +\\int \\rho \\log \\rho \\\\&=\\log \\left(\\frac{2^d}{\\varepsilon ^dv_c(d)}\\right)\\int \\rho +\\int \\rho \\log \\rho .$ On the other hand, the energy bound (REF ) applies since we still have $|x_j-x_k|\\geqslant r_0$ on the support of the localized state $\\widetilde{\\mathbb {P}}_{|\\theta }$ .", "This concludes the proof of Theorem REF .$\\Box $" ] ]
[ [ "Stabilized exponential time differencing schemes for the convective\n Allen-Cahn equation" ], [ "Abstract The convective Allen-Cahn equation has been widely used to simulate multi-phase flows in many phase-field models.", "As a generalized form of the classic Allen-Cahn equation, the convective Allen-Cahn equation still preserves the maximum bound principle (MBP) in the sense that the time-dependent solution of the equation with appropriate initial and boundary conditions preserves for all time a uniform pointwise bound in absolute value.", "In this paper, we develop efficient first- and second-order exponential time differencing (ETD) schemes combined with the linear stabilizing technique to preserve the MBP unconditionally in the discrete setting.", "The space discretization is done using the upwind difference scheme for the convective term and the central difference scheme for the diffusion term, and both the mobility and nonlinear terms are approximated through the linear convex interpolation.", "The unconditional preservation of the MBP of the proposed schemes is proven, and their convergence analysis is presented.", "Various numerical experiments in two and three dimensions are also carried out to verify the theoretical results." ], [ "Introduction", "In this paper, we consider the convective Allen-Cahn equation taking the following form $\\partial _tu(\\mathbf {x},t)+\\mathbf {v}(\\mathbf {x},t)\\cdot \\nabla u(\\mathbf {x},t)=M(u(\\mathbf {x},t))\\left(\\epsilon ^2\\Delta u(\\mathbf {x},t)+f(u(\\mathbf {x},t))\\right),\\quad \\mathbf {x}\\in \\Omega , t >0,$ subject to suitable boundary conditions, e.g., periodic, homogeneous Neumann, or Dirichlet boundary conditions.", "Here, $\\Omega \\subset \\mathbb {R}^{d} (d=1,2,3)$ is an open, connected, and bounded domain, $\\mathbf {x}$ is the spatial variable, $t\\ge 0$ is time, $u(\\mathbf {x},t)$ is the unknown scalar-valued phase variable, $\\epsilon >0$ is the parameter related to the interface width between two phases, $M(u)\\ge 0$ is the mobility function, $f(u)$ is the nonlinear reaction term, and $\\mathbf {v}(\\mathbf {x},t)$ is a given velocity field satisfying the divergence-free condition $\\nabla \\cdot \\mathbf {v}=0$ .", "In addition, we assume that $\\mathbf {v}$ is differentiable, and $f(\\cdot )$ and $M(\\cdot )$ are locally $C^1(\\mathbb {R})$ functions.", "Compared with the classical Allen-Cahn equation [1], the convective Allen-Cahn equation (REF ) is more complicated due to the presence of the velocity field.", "Nevertheless, the convective Allen-Cahn equation still preserves the maximum bound principle (MBP) in the sense that if the initial value and/or the boundary data are pointwisely bounded by a specific constant in absolute value, the solution is also bounded by the same constant everywhere for all time.", "It is well-known that the classic Allen-Cahn equation (the equation (REF ) without the convective term) with the periodic or homogeneous Neumann boundary condition can be regarded as the $L^2$ gradient flow with respect to the following energy functional $E[u]=\\frac{\\epsilon ^2}{2}(\\nabla u(\\mathbf {x},t),\\nabla u(\\mathbf {x},t))+(F(u(\\mathbf {x},t)),1),$ where $F(\\cdot )$ is the primitive function of $-f$ (i.e., $f=-F^{\\prime }$ ) and $(\\cdot ,\\;\\cdot )$ represents the inner product on $L^2(\\Omega )$ with the corresponding norm denoted as $\\Vert \\cdot \\Vert _0$ .", "However, the solution of the convective Allen-Cahn equation (REF ) is usually not guaranteed to decrease the energy
$E[u]$ in time.", "The MBP is an important property of the convective Allen-Cahn equation (REF ), especially for the logarithmic-type nonlinearity $F(\\cdot )$ and the degenerate mobility $M(\\cdot )$ .", "In such cases, if the numerical scheme does not satisfy the MBP, complex values may occur in numerical simulations once the argument of the logarithm becomes negative, and/or the nonlinear mobility could become negative, which would lead to unphysical numerical solutions.", "For the classical Allen-Cahn equation, the MBP-preserving numerical methods have been thoroughly studied.", "For the spatial discretizations, a partial list includes the works on the lumped-mass finite element method [33], [35], the finite difference method [37], the finite volume method [25], [26], and so on.", "For the temporal integration, the stabilized linear semi-implicit schemes were shown to preserve the MBP unconditionally for the first-order scheme [31], [32], [34] and conditionally for the second-order scheme.", "Some nonlinear second-order schemes were presented to preserve the MBP for the Allen-Cahn type equations in [24], [30].", "However, for the convective Allen-Cahn equation, works on the construction of numerical schemes with unconditional MBP preservation are rare.", "Shen et al. analyzed the stabilized MBP-preserving schemes with the finite difference discretization in space for the convective Allen-Cahn equation and designed an adaptive time-stepping scheme in [29], but unconditionally MBP-preserving schemes with order higher than one are still lacking.", "Thus, it is highly desirable to find alternative ways to develop higher-order time integration schemes which preserve the MBP unconditionally.", "Combined with the linear stabilizing technique [36], the exponential time differencing (ETD) method has recently been proposed to preserve the MBP.", "The ETD schemes are based on the variation-of-constants formula, where the nonlinear terms are approximated by polynomial interpolations in time, followed by the exact integration of the resulting temporal integrals [2], [6], [11], [12].", "Therefore, the ETD method is applicable to a large class of semilinear parabolic equations, especially those with a stiff linear part [11], [23], [16], [18], [19], [3], [4].", "The first- and second-order stabilized ETD schemes were also applied to the nonlocal Allen-Cahn equation and were proven to be unconditionally MBP-preserving in [8].", "Then, an abstract framework for MBP-preserving stabilized ETD schemes was established in [9] for a class of semilinear parabolic equations and later was also applied to the conservative Allen-Cahn equations in [22], [14].", "Recently, some third- and fourth-order MBP-preserving schemes were proposed for the Allen-Cahn equation by considering the integrating factor Runge-Kutta (IFRK) schemes [13], [21], [17], [38], [15], and an arbitrarily high-order ETD multistep method was presented in [20] by enforcing the maximum bound via extra cutoff postprocessing.", "The main contribution of the current paper is to develop unconditionally MBP-preserving ETD schemes with first- and second-order temporal accuracy for the convective Allen-Cahn equation.", "Following the framework in [9], we first reformulate the equation by introducing a stabilizing constant and approximate the nonlinear mobility function and the nonlinear term with linear convex interpolation in time.", "With the upwind scheme for the convective term, the fully-discrete ETD scheme of the convective Allen-Cahn equation satisfies the MBP unconditionally.", "The rest of
the paper is organized as follows.", "Section reviews the basic assumptions on the mobility and nonlinear potential function so that the model equation has a unique solution and satisfies the MBP.", "In Section , the first- and second-order ETD schemes for time integration are constructed and shown to preserve the discrete MBP unconditionally.", "Next, we discuss the fully-discrete schemes with the upwind spatial discretization for the convective term.", "Then the error analysis is carried out in Section .", "A variety of numerical tests are performed to verify the theoretical results in Section , and finally we end this paper with some concluding remarks in Section ." ], [ "Preliminaries and maximum bound principle", "In this section, we first review the convective Allen-Cahn equation (REF ) and the assumptions on the mobility function and the nonlinear function under which the MBP holds.", "Suppose that the initial value of the equation (REF ) is given as $u(\\mathbf {x},0)=u_0(\\mathbf {x}),\\quad \\mathbf {x}\\in \\overline{\\Omega },$ where $u_0\\in C(\\overline{\\Omega })$ .", "We also impose the periodic boundary condition (only for a regular rectangular domain $\\Omega =\\prod _{i=1}^d(a_i,b_i)$ ), the homogeneous Neumann boundary condition or the following Dirichlet boundary condition $u(\\mathbf {x},t)=g(\\mathbf {x},t),\\quad \\mathbf {x}\\in \\partial \\Omega ,\\, t>0,$ where $g\\in C(\\partial \\Omega \\times [0,\\infty ))$ .", "In practical applications, the nonnegative mobility function $M(\\cdot )\\ge 0$ could lead to a degenerate parabolic equation.", "To avoid the technical difficulties arising from the possible degeneracy of $M(\\cdot )$ , unless otherwise specified in the subsequent discussion, we always assume there exists a positive constant $\\gamma >0$ such that $M(s)\\ge \\gamma ,\\quad \\forall \\, s\\in \\mathbb {R}.$ Using classical theory for quasi-linear parabolic equations [39], [40], under the condition (REF ), the convective Allen-Cahn equation (REF ) admits a unique smooth solution.", "Following the analysis for the MBP of semilinear parabolic equations [9], we make the following assumption on the nonlinear function $f(u)$ .", "There exists a constant $\\beta >0$ such that for the nonlinear function $f(u)$ , it holds $f(\\beta )\\le 0\\le f(-\\beta ).$ Let $\\Vert \\cdot \\Vert $ denote the maximum norm on $C(\\overline{\\Omega })$ and $\\Vert \\cdot \\Vert _{\\partial \\Omega }$ on $C(\\partial \\Omega )$ .", "Under the condition (REF ) and Assumption , for a smooth mobility function $M(\\cdot )$ , it holds that the convective Allen-Cahn equation (REF ) satisfies the MBP [10], [29]: if the initial value and/or the boundary data are bounded pointwise by the constant $\\beta $ in absolute value, then its solution is also bounded by $\\beta $ for all time, i.e., for the periodic or homogeneous Neumann boundary condition case, $\\Vert u_0\\Vert \\le \\beta \\quad \\Longrightarrow \\quad |u(\\mathbf {x}, t)| \\le \\beta , \\;\\;\\; \\forall \\,\\mathbf {x}\\in \\overline{\\Omega },\\, t>0,$ and for the Dirichlet boundary condition case, $\\Vert u_0\\Vert \\le \\beta \\;\\; \\& \\;\\; \\Vert g(\\cdot ,t)\\Vert _{\\partial \\Omega }\\le \\beta ,\\; \\;\\forall \\, t>0 \\quad \\Longrightarrow \\quad |u(\\mathbf {x}, t)| \\le \\beta , \\;\\;\\; \\forall \\,\\mathbf {x}\\in \\overline{\\Omega },\\, t>0.$ We would like to note that by a standard approximation argument [5], [7], it can be proved that for the case of general mobility $M(\\cdot )\\ge 0$ , weak solutions to the
convective Allen-Cahn equation (REF ) also exist and satisfy the MBP in the a.e. sense.", "In view of the MBP by (REF ) or (REF ), the condition (REF ) on the mobility $M(u)$ can be relaxed to $M(s)\\ge \\gamma ,\\,\\quad \\forall \\, |s|\\le \\beta .$ In addition, it is required that $M(\\cdot )$ is differentiable when restricted to the interval $[-\\beta ,\\beta ]$ .", "For a given scalar function $\\phi (\\mathbf {x})$ and a divergence-free vector field $\\mathbf {w}(\\mathbf {x})$ , let us define a linear elliptic differential operator $\\mathcal {L}_{\\mathbf {w}}[\\phi ]$ as $\\mathcal {L}_{\\mathbf {w}}[\\phi ] u =\\epsilon ^2M(\\phi (\\mathbf {x}))\\Delta u-\\mathbf {w}(\\mathbf {x})\\cdot \\nabla u,$ and a modified nonlinear function $\\widetilde{f}$ as $\\widetilde{f}(u)=M(u)f(u).$ Since the mobility is nonnegative, Assumption implies that $\\widetilde{f}(\\beta )\\le 0\\le \\widetilde{f}(-\\beta )$ .", "For the above operator $\\mathcal {L}_{\\mathbf {w}}[\\phi ]$ , the following lemma holds [9].", "Given $\\phi (\\mathbf {x})\\in C(\\overline{\\Omega })$ and $\\mathbf {w}(\\mathbf {x})\\in C(\\overline{\\Omega },\\mathbb {R}^d)$ , the second-order linear elliptic differential operator $\\mathcal {L}_{\\mathbf {w}}[\\phi ]$ under an appropriate boundary condition (periodic, homogeneous Neumann or homogeneous Dirichlet) generates a contraction semigroup (denoted as $e^{t\\mathcal {L}_{\\mathbf {w}}[\\phi ]}$ ) with respect to the supremum norm on the subspace of $C(\\overline{\\Omega })$ that satisfies the corresponding boundary condition, i.e., $\\Vert e^{t\\mathcal {L}_{\\mathbf {w}}[\\phi ]}\\Vert \\le 1,\\quad \\forall \\, \\phi \\in C(\\overline{\\Omega }), \\,t\\ge 0.$ Furthermore, for any $\\alpha >0$ , we have $\\Vert e^{t(\\mathcal {L}_{\\mathbf {w}}[\\phi ]-\\alpha )}\\Vert \\le e^{-\\alpha t},\\quad \\forall \\, \\phi \\in C(\\overline{\\Omega }), \\,t\\ge 0.$ The convective Allen-Cahn equation (REF ) can be re-written as $\\partial _tu(\\mathbf {x},t)=\\mathcal {L}_{\\mathbf {v}(t)}[u(t)]u(\\mathbf {x},t)+\\widetilde{f}(u(\\mathbf {x},t)), \\quad \\mathbf {x}\\in \\Omega ,\\, t>0.$ Here we have omitted the dependence on $\\mathbf {x}$ in $\\mathcal {L}_{\\mathbf {v}(t)}[u(t)]$ when there is no confusion, i.e., ${\\mathbf {v}(t)} = {\\mathbf {v}(\\mathbf {x},t)}$ and ${u(t)} = {{u}(\\mathbf {x},t)}$ are functions of $\\mathbf {x}$ .", "In this paper, we mainly focus on establishing the MBP for finite difference approximations of the convective Allen-Cahn equation (REF ) under Assumption and the condition (REF ) for mobility.", "Two typical nonlinear potentials $F(u)$ with $f(u)=-F^{\\prime }(u)$ will be numerically studied, i.e., the double-well potential $F(u)=\\frac{1}{4}\\left(u^{2}-1\\right)^{2},$ where the corresponding maximum bound $\\beta \\in [1,\\infty )$ (cf. Assumption ), and the Flory-Huggins potential $F(u)=\\frac{\\theta }{2}[(1+u) \\ln (1+u)+(1-u) \\ln (1-u)]-\\frac{\\theta _{c}}{2}u^{2},$ with $ \\theta _c>\\theta >0$ , where the corresponding maximum bound $\\beta \\in [\\rho ,1)$ (cf. Assumption ) with $\\rho $ being the positive root of $f(\\rho )=0$ .", "For the purpose of numerical solution, by introducing a stabilizing constant $\\kappa >0$ , the equation (REF ) can be further formulated into an equivalent form $\\partial _tu(\\mathbf {x},t)=\\mathcal {L}^\\kappa _{\\mathbf {v}(t)}[u(t)]u(\\mathbf {x},t)+N(u(\\mathbf {x},t)),\\,\\quad \\forall \\, \\mathbf {x}\\in \\Omega ,\\, t>0,$ where $\\mathcal {L}^\\kappa _{\\mathbf
{v}(t)}[u(t)]=\\mathcal {L}_{\\mathbf {v}(t)}[u(t)]-\\kappa \\mathcal {I},\\quad N(u)=\\kappa u+\\widetilde{f}(u).$ The stabilizing constant $\\kappa $ is required to satisfy $\\kappa \\ge \\max _{|\\xi |\\le \\beta }\\left|\\widetilde{f}^{\\prime }(\\xi )\\right|=\\max _{|\\xi |\\le \\beta }\\left|M^{\\prime }(\\xi )f(\\xi )+M(\\xi )f^{\\prime }(\\xi )\\right|.$ Note that (REF ) is always well-defined as long as $M(\\xi )$ and $f(\\xi )$ are continuously differentiable.", "Under Assumption and the choice of the stabilizing constant (REF ), we have (I) $|N(\\xi )|\\le \\kappa \\beta $ for any $\\xi \\in [-\\beta ,\\beta ]$ ; (II) $|N(\\xi _1)-N(\\xi _2)|\\le 2\\kappa |\\xi _1-\\xi _2|$ for any $\\xi _1,\\xi _2\\in [-\\beta ,\\beta ]$ .", "From (REF ), we have $N^{\\prime }(\\xi )=\\kappa +\\widetilde{f}^{\\prime }(\\xi )$ and $0\\le N^{\\prime }(\\xi )\\le 2\\kappa $ , which implies (II).", "In addition, $N(\\xi )$ is an increasing function.", "Recalling Assumption , for any $\\xi \\in [-\\beta ,\\beta ]$ , we get $-\\kappa \\beta \\le -\\kappa \\beta +\\widetilde{f}(-\\beta )=N(-\\beta )\\le N(\\xi )\\le N(\\beta )=\\kappa \\beta +\\widetilde{f}(\\beta )\\le \\kappa \\beta ,$ and the conclusion (I) holds." ], [ "Exponential time differencing for temporal approximation", "In this section, we construct the temporal approximations of the convective Allen-Cahn equation (REF ) by using the ETD approach.", "First of all, let us define the $\\phi $ -functions as follows: $\\phi _0(s)=e^{s},\\quad \\phi _1(s)=\\frac{e^{s}-1}{s},\\quad \\phi _2(s)=\\frac{e^{s}-s-1}{s^2}.$ Assuming that Assumption and the condition (REF ) for mobility hold, we will begin with the equivalent form (REF ) to develop the first- and second-order MBP-preserving ETD schemes in this section."
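For readers implementing these schemes, note that a naive evaluation of $\\phi _1$ and $\\phi _2$ suffers from catastrophic cancellation for small arguments. Below is a minimal NumPy sketch of the scalar $\\phi $ -functions with a Taylor-series guard near zero; the cutoff values are our own assumption and are not part of the paper.

```python
import numpy as np

def phi0(s):
    # phi_0(s) = e^s
    return np.exp(s)

def phi1(s):
    # phi_1(s) = (e^s - 1)/s, with a Taylor fallback near s = 0
    # to avoid catastrophic cancellation (cutoff 1e-5 is an assumption).
    s = np.asarray(s, dtype=float)
    small = np.abs(s) < 1e-5
    safe = np.where(small, 1.0, s)          # avoid 0/0 in the generic branch
    series = 1.0 + s / 2.0 + s**2 / 6.0     # truncated Taylor expansion at 0
    return np.where(small, series, np.expm1(safe) / safe)

def phi2(s):
    # phi_2(s) = (e^s - s - 1)/s^2, with the same kind of guard.
    s = np.asarray(s, dtype=float)
    small = np.abs(s) < 1e-4
    safe = np.where(small, 1.0, s)
    series = 0.5 + s / 6.0 + s**2 / 24.0
    return np.where(small, series, (np.expm1(safe) - safe) / safe**2)
```

In the matrix setting used below, products such as $\\phi _1(\\tau \\mathcal {L})b$ are typically evaluated through matrix exponentials or Krylov subspace techniques rather than entrywise formulas; see [9], cited later in the text, for the approach the paper relies on.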
], [ "ETD schemes and discrete MBPs", "Choosing a time step size $\\tau >0$ , we have the time steps as $\\lbrace t^n=n\\tau \\rbrace _{n\\ge 0}$ .", "To construct the ETD schemes for time stepping of the model problems (REF ), below we take the Dirichlet boundary condition case for illustration and focus on the stabilized form (REF ) on the interval $[t^n,t^{n+1}]$ , or equivalently, the equation for $w(s):=w(\\mathbf {x},s)=u(\\mathbf {x},t^n+s)$ : $\\left\\lbrace \\begin{array}{ll}\\partial _sw(\\mathbf {x},s)=\\mathcal {L}^\\kappa _{\\mathbf {v}(t^n+s)}[w(s)] w(\\mathbf {x},s)+N(w(\\mathbf {x},s)),&\\quad \\mathbf {x}\\in \\Omega , s\\in (0,\\tau ],\\\\[4pt]w(\\mathbf {x},s)=g(\\mathbf {x},t^n+s),&\\quad \\mathbf {x}\\in \\partial \\Omega , s\\in (0,\\tau ],\\\\[4pt]w(\\mathbf {x},0)=u(\\mathbf {x},t^n),&\\quad \\mathbf {x}\\in \\Omega .\\end{array}\\right.$ Let us start with $u^0 = u_0$ and denote $u^n$ as an approximation of $u(t_n)$ .", "For the first-order-in-time approximation of (REF ), we use $\\mathcal {L}^\\kappa _{\\mathbf {v}(t^n+s)}[u(t^n+s)]=\\mathcal {L}^\\kappa _{\\mathbf {v}(t^n)}[u(t^n)]+O(\\tau )$ and $N(u(t^n+s))=N(u(t^n))+O(\\tau )$ to construct the first order ETD (ETD1) scheme: for $n\\ge 0$ and given $u^n$ , find $u^{n+1}=w^n(\\tau )$ by solving $\\left\\lbrace \\begin{array}{ll}\\partial _sw^n(\\mathbf {x},s)=\\mathcal {L}^\\kappa _{\\mathbf {v}^n}[u^n(\\mathbf {x})] w^n(\\mathbf {x},s)+N(u^n(\\mathbf {x})),&\\quad \\mathbf {x}\\in \\Omega , s\\in (0,\\tau ],\\\\[4pt]w^n(\\mathbf {x},s)=g(\\mathbf {x},t^n+s),&\\quad \\mathbf {x}\\in \\partial \\Omega , s\\in (0,\\tau ],\\\\[4pt]w^n(\\mathbf {x},0)=u^n(\\mathbf {x}),&\\quad \\mathbf {x}\\in \\Omega ,\\end{array}\\right.$ where $\\mathbf {v}^n=\\mathbf {v}(t^n)$ .", "Equivalently, $u^{n+1}(\\mathbf {x})=\\phi _0(\\tau \\mathcal {L}_{\\mathbf {v}^n}^\\kappa [u^n(\\mathbf {x})])u^n(\\mathbf {x})+\\tau \\phi _1(\\tau \\mathcal {L}_{\\mathbf {v}^n}^\\kappa [u^n(\\mathbf {x})])N(u^n(\\mathbf {x})).$ We refer to [9] for efficient calculations of the products of matrix exponentials and vectors needed in numerical implementation of the proposed ETD1 scheme.", "(Discrete MBP of the ETD1 scheme) The ETD1 scheme (REF ) preserves the discrete MBP unconditionally, i.e., for any time step size $\\tau >0$ , the ETD1 solution satisfies $\\Vert u^n\\Vert \\le \\beta $ for any $n>0$ if $\\Vert u_0\\Vert \\le \\beta $ and $ \\Vert g(\\cdot ,t)\\Vert _{\\partial \\Omega }\\le \\beta $ for any $t>0$ .", "Since $\\Vert u^0\\Vert \\le \\beta $ , by using mathematical induction we only need to show that $\\Vert u^n\\Vert \\le \\beta $ implies $|u^{n+1}|\\le \\beta $ for $n\\ge 0$ .", "First, we know $u^n,w^n(s)\\in C(\\overline{\\Omega })$ ($n\\ge 0$ , $s\\in [0,\\tau ]$ ) by standard PDE analysis for linear parabolic equations.", "Suppose that there exists $(\\mathbf {x}^*,s^*)\\in \\overline{\\Omega }\\times [0,\\tau ]$ such that $w^n(\\mathbf {x}^*,s^*)=\\max _{0\\le s\\le \\tau , \\mathbf {x}\\in \\overline{\\Omega }} w^n(\\mathbf {x},s).$ If $\\mathbf {x}^*\\in \\partial \\Omega $ or $s^*=0$ , the boundary data $|g(\\mathbf {x},t)|\\le \\beta $ for any $\\mathbf {x}\\in \\partial {\\Omega },\\, t>0$ or the initial condition $\\Vert u^n\\Vert \\le \\beta $ at $s^*=0$ implies $w^n(\\mathbf {x}^*,s^*)\\le \\beta $ .", "Otherwise, $\\mathbf {x}^*\\in \\Omega $ and $s^*\\in (0,\\tau ]$ , we have $\\partial _sw^n(\\mathbf {x}^*,s^*)\\ge 0,\\quad \\nabla w^n(\\mathbf {x}^*,s^*)=\\mathbf {0},\\quad \\Delta w^n(\\mathbf {x}^*,s^*) \\le 0,$ 
which implies $\\mathcal {L}_{\\mathbf {v}^n}[u^n]w^n(\\mathbf {x}^*,s^*)\\le 0,$ and thus $\\kappa w^n(\\mathbf {x}^*,s^*)\\le N(u^n(\\mathbf {x}^*)).$ Since $\\Vert u^n\\Vert \\le \\beta $ , Lemma leads to $N(u^n(\\mathbf {x}^*))\\le \\kappa \\beta $ , and furthermore $w^n(\\mathbf {x}^*,s^*)\\le \\beta $ .", "Similarly, if there exists $(\\mathbf {x}^{**},s^{**})\\in \\overline{\\Omega }\\times [0,\\tau ]$ such that $w^n(\\mathbf {x}^{**},s^{**})=\\min _{0\\le s\\le \\tau ,\\mathbf {x}\\in \\overline{\\Omega }}w^n(\\mathbf {x},s),$ one can show $w^n(\\mathbf {x}^{**},s^{**})\\ge -\\beta $ .", "Thus, we conclude $\\Vert u^{n+1}\\Vert \\le \\beta $ .", "Next, we consider the second-order-in-time approximation of (REF ) by using $\\begin{aligned}\\mathcal {L}^\\kappa _{\\mathbf {v}(t^n+s)}[u(t^n+s)]&=\\left(1-\\frac{s}{\\tau }\\right)\\mathcal {L}^\\kappa _{\\mathbf {v}^n}[u(t^n)]+\\frac{s}{\\tau }\\mathcal {L}^\\kappa _{\\mathbf {v}^{n+1}}[u(t^{n+1})]+O(\\tau ^2),\\\\N(u(t^n+s))&=\\left(1-\\frac{s}{\\tau }\\right)N(u(t^n))+\\frac{s}{\\tau }N(u(t^{n+1}))+O(\\tau ^2).\\end{aligned}$ To construct a second-order-in-time approximation, we need to make the approximation of the differential operator part $\\mathcal {L}^\\kappa _{\\mathbf {v}(t^n+s)}[u(t^n+s)]$ $s$ -independent, so that the ETD approach can be applied.", "To this end, based on (REF ), we reformulate (REF ) as $\\partial _s w(\\mathbf {x},s)=\\;&\\frac{1}{2}\\left(\\mathcal {L}^\\kappa _{\\mathbf {v}^n}[u(\\mathbf {x},t^n)]+\\mathcal {L}^\\kappa _{\\mathbf {v}^{n+1}}[u(\\mathbf {x},t^{n+1})]\\right)w(\\mathbf {x},s)\\\\&{+\\left(1-\\frac{s}{\\tau }\\right)N(u(\\mathbf {x},t^n))+\\frac{s}{\\tau }N(u(\\mathbf {x},t^{n+1}))}\\\\&+\\left(\\frac{s}{\\tau }-\\frac{1}{2}\\right)\\left(\\mathcal {L}^\\kappa _{\\mathbf {v}^{n+1}}[u(\\mathbf {x},t^{n+1})]-\\mathcal {L}^\\kappa _{\\mathbf {v}^n}[u(\\mathbf {x},t^n)]\\right)w(\\mathbf {x},s)+O(\\tau ^2).$ Integrating both sides of the above equation from 0 to $\\tau $ and applying the midpoint rule to the fourth term on the RHS, we derive the second-order ETD Runge-Kutta (ETDRK2) scheme: for $n\\ge 0$ and given $u^n$ , find $u^{n+1}=w^n(\\tau )$ by solving $\\left\\lbrace \\begin{array}{ll}\\partial _sw^n(\\mathbf {x},s)=\\frac{1}{2}\\left(\\mathcal {L}_{\\mathbf {v}^n}^\\kappa [u^n(\\mathbf {x})]+\\mathcal {L}_{\\mathbf {v}^{n+1}}^\\kappa [\\hat{u}^{n+1}(\\mathbf {x})]\\right)w^n(\\mathbf {x},s)\\\\[4pt]\\qquad \\qquad \\qquad \\qquad +(1-\\frac{s}{\\tau })N(u^n(\\mathbf {x}))+\\frac{s}{\\tau }N(\\hat{u}^{n+1}(\\mathbf {x})),&\\quad \\mathbf {x}\\in \\Omega ,s\\in (0,\\tau ],\\\\[4pt]w^n(\\mathbf {x},s)=g(\\mathbf {x},t^n+s),&\\quad \\mathbf {x}\\in \\partial \\Omega ,s\\in (0,\\tau ],\\\\[4pt]w^n(\\mathbf {x},0)=u^n(\\mathbf {x}),&\\quad \\mathbf {x}\\in \\Omega ,\\end{array}\\right.$ where $\\hat{u}^{n+1}$ is generated from the ETD1 scheme (REF ).", "Equivalently, $\\hat{u}^{n+1}(\\mathbf {x})&=&\\phi _0(\\tau \\mathcal {L}_{\\mathbf {v}^n}^\\kappa [u^n])u^n(\\mathbf {x})+\\tau \\phi _1(\\tau \\mathcal {L}_{\\mathbf {v}^n}^\\kappa [u^n(\\mathbf {x})])N(u^n(\\mathbf {x})),\\\\u^{n+1}(\\mathbf {x})&=&\\phi _0\\left(\\frac{\\tau }{2}(\\mathcal {L}_{\\mathbf {v}^n}^\\kappa [u^n(\\mathbf {x})]+\\mathcal {L}_{\\mathbf {v}^{n+1}}^\\kappa [\\hat{u}^{n+1}(\\mathbf {x})])\\right)u^n(\\mathbf {x})\\\\&&+\\tau \\phi _1\\left(\\frac{\\tau }{2}(\\mathcal {L}_{\\mathbf {v}^n}^\\kappa [u^n(\\mathbf {x})]+\\mathcal {L}_{\\mathbf {v}^{n+1}}^\\kappa [\\hat{u}^{n+1}(\\mathbf {x})])\\right)N(u^n(\\mathbf {x}))\\\\&&+\\tau \\phi
_2\\left(\\frac{\\tau }{2}(\\mathcal {L}_{\\mathbf {v}^n}^\\kappa [u^n(\\mathbf {x})]+\\mathcal {L}_{\\mathbf {v}^{n+1}}^\\kappa [\\hat{u}^{n+1}(\\mathbf {x})])\\right)\\left(N(\\hat{u}^{n+1}(\\mathbf {x}))-N(u^n(\\mathbf {x}))\\right).$ It is worth noting that both ETD1 and ETDRK2 are linear schemes.", "In view of the linear interpolation (REF ) for the differential operator part $\\mathcal {L}^\\kappa _{\\mathbf {v}(t^n+s)}[u(t^n+s)]$ , it is easy to verify that the midpoint rule evaluates the corresponding temporal integral exactly for the ETDRK2 scheme (REF ).", "(Discrete MBP of the ETDRK2 scheme) The ETDRK2 scheme (REF ) preserves the discrete MBP unconditionally, i.e., for any time step size $\\tau >0$ , the ETDRK2 solution satisfies $\\Vert u^n\\Vert \\le \\beta $ for any $n>0$ if $\\Vert u_0\\Vert \\le \\beta $ and $ \\Vert g(\\cdot ,t)\\Vert _{\\partial \\Omega }\\le \\beta $ for any $ t>0$ .", "The proof is quite similar to the ETD1 case and thus is omitted here for brevity.", "For the case of the periodic or homogeneous Neumann boundary condition, the ETD1 and ETDRK2 schemes can be similarly formulated as (REF ) and (REF ) by removing $w(\\mathbf {x},s)=g(\\mathbf {x},t^n+s)$ for $\\mathbf {x}\\in \\partial \\Omega ,s\\in (0,\\tau ]$ and imposing the respective boundary condition.", "Consequently, Theorems REF and REF for the MBP still hold by removing the extra boundary value requirement $ \\Vert g(\\cdot ,t)\\Vert _{\\partial \\Omega }\\le \\beta $ as done in [9]." ], [ "Fully discrete ETD schemes", "In this section, we focus on the spatial discretization.", "To this end, we recall that the continuity of a function defined on a set $D\\subset \\mathbb {R}^d$ can be described as follows [28]: $w \\text{~is continuous at~} \\mathbf {x}^*\\in D\\Longleftrightarrow \\forall \\, \\mathbf {x}_k\\rightarrow \\mathbf {x}^* \\text{~in~} D \\text{~implies~} w(\\mathbf {x}_k)\\rightarrow w(\\mathbf {x}^*).$ Let $C(X)$ be the continuous function space over $X$ , where $X$ is the set of all spatial grid points (boundary and interior points).", "Denote $U$ as the values of $u$ on $X$ , i.e., $U(\\mathbf {x},t)=u(\\mathbf {x},t),\\ \\forall \\, \\mathbf {x}\\in X, \\;t>0$ .", "The corresponding space-discrete equation of (REF ) becomes an ordinary differential system taking the same form: $\\partial _t U(\\mathbf {x},t)=\\mathcal {L}_{\\mathbf {v}(t),h}[U(t)]U(\\mathbf {x},t)+\\widetilde{f}(U(\\mathbf {x},t)),\\quad t>0,\\,\\mathbf {x}\\in X^*,$ with $U(\\mathbf {x},0)=u_0(\\mathbf {x})$ for all $\\mathbf {x}\\in X$ , where $X^*=X\\cap \\overline{\\Omega }^+$ with $\\overline{\\Omega }^+=\\prod \\limits _{i=1}^d(a_i,b_i]$ for the periodic boundary condition case, $X^*=X$ for the homogeneous Neumann boundary condition case, and $X^*$ is the set of all interior grid points for the Dirichlet boundary condition case.", "In order to establish the MBP for the discrete-in-space convective Allen-Cahn system (REF ), as well as its time discretizations (to produce the fully discrete schemes) proposed later, we make the following specific assumption on the discrete operator $\\mathcal {L}_{\\mathbf {v}(t),h}[U(t)]$ .", "For any given $U\\in C(X)$ and $\\mathbf {v}\\in (C(X))^d$ , the operator $\\mathcal {L}_{\\mathbf {v},h}[U]$ satisfies that for any $W\\in C(X)$ and $\\mathbf {x}_0\\in X^*$ , if $W(\\mathbf {x}_0)=\\max _{\\mathbf {x}\\in X\\cap \\overline{\\Omega }}W(\\mathbf {x}),$ then $\\mathcal {L}_{\\mathbf {v},h}[U]W(\\mathbf {x})\\big |_{\\mathbf {x}=\\mathbf {x}_0}\\le 0$ .", "Next, we shall describe the concrete discrete scheme concerning the
Dirichlet boundary condition for 1D ($\\mathbf {x}=x\\in \\Omega $ ), and the extensions to 2D/3D rectangular domains and other boundary conditions are straightforward.", "Given $\\Omega =(a,b)$ , the interval is divided into $N$ subintervals with uniform mesh size $h=(b-a)/N$ and the grid points are $X=\\lbrace x_i=a+ih\\rbrace _{i=0}^{N}$ with $X^*=\\lbrace x_i,\\,i=1,\\ldots ,N-1\\rbrace $ for the Dirichlet boundary condition.", "Let $U_i:=U_i(t)$ be the approximation of $u(x,t)$ at $x_i\\in X$ .", "We only need to describe how $\\lbrace U_i(t)\\rbrace _{i=1}^{N-1}$ evolve since the boundary values are explicitly given by $U_0(t)=g(x_0,t)$ and $U_N(t)=g(x_N,t)$ .", "For convenience, we can view $U$ as a vector function of time $U=(U_1,U_2,\\ldots ,U_{N-1})^T$ when needed.", "The Laplace operator is discretized by the central finite difference method $u_{x x}\\left(x_{i}, \\cdot \\right) \\approx \\frac{u(x_{i-1},\\cdot )-2 u(x_{i},\\cdot )+u(x_{i+1},\\cdot )}{h^{2}},\\quad i=1,2,\\cdots ,N-1,$ and the matrix $D_{h}$ as the discrete approximation of the Laplace operator under the homogeneous Dirichlet boundary condition can be expressed as $D_{h}=\\frac{1}{h^{2}}\\left[\\begin{array}{ccccc}-2 & 1 & & &\\\\1 & -2 & 1 & & \\\\& \\ddots & \\ddots & \\ddots & \\\\& & 1 & -2 & 1 \\\\& & & 1 & -2\\end{array}\\right]_{(N-1) \\times (N-1)}.$ In addition, we need to define $G_D= \\frac{1}{h^{2}}(g(x_0,t),0,\\cdots ,0,g(x_N,t))^T$ for the contribution of the boundary values to the Laplace operator.", "The convective term is approximated by the upwind difference scheme $\\mathbf {v}u_{x}|_{x=x_i}\\approx \\mathbf {v}_i^{+} U_{i}^{-}+\\mathbf {v}_i^{-} U_{i}^{+},$ where $\\mathbf {v}_i^{+}$ and $\\mathbf {v}_i^{-}$ are defined as $\\mathbf {v}_i^{+}=\\max \\lbrace 0, \\mathbf {v}(x_i,t)\\rbrace , \\quad \\mathbf {v}_i^{-}=\\min \\lbrace 0, \\mathbf {v}(x_i,t)\\rbrace ,$ and $U_{i}^{-}$ and $U_{i}^{+}$ are defined as $U_{i}^{-}=\\frac{-U_{i-1}+U_{i}}{h}, \\quad U_{i}^{+}=\\frac{U_{i+1}-U_{i}}{h}.$ Denote $\\mathbf {v}_i:=\\mathbf {v}(x_i,t)$ ($i=1,\\ldots ,N-1$ ) and define $A_{\\mathbf {v}(t)}=\\frac{1}{2 h}\\left[\\begin{array}{cccccc}-2 & 1-\\operatorname{sign}\\left(\\mathbf {v}_{1}\\right) & &&\\\\1+\\operatorname{sign}\\left(\\mathbf {v}_{2}\\right) & -2 & 1-\\operatorname{sign}\\left(\\mathbf {v}_{2}\\right) & \\\\& \\ddots & \\ddots & \\ddots \\\\& & 1+\\operatorname{sign}\\left(\\mathbf {v}_{N-2}\\right) & -2 & 1-\\operatorname{sign}\\left(\\mathbf {v}_{N-2}\\right) \\\\& & & 1+\\operatorname{sign}\\left(\\mathbf {v}_{N-1}\\right) & -2\\end{array}\\right]$ and $\\Lambda _{\\mathbf {v}(t)}=\\operatorname{diag}\\left(\\left|\\mathbf {v}_{1}\\right|, \\ldots ,\\left|\\mathbf {v}_{N-1}\\right|\\right), \\quad \\Lambda _U=\\operatorname{diag}\\left(M\\left(U_{1}\\right), \\ldots , M\\left(U_{N-1}\\right)\\right),$ where $\\operatorname{diag}(V)$ is the diagonal matrix with diagonal elements being components of the vector $V$ .", "We also define $G_C= \\frac{1}{2h}((1+\\operatorname{sign}(\\mathbf {v}_{1}))g(x_0,t),0,\\cdots ,0,(1-\\operatorname{sign}(\\mathbf {v}_{N-1}))g(x_N,t))^T$ for the contribution of the boundary values to the convective term.", "Then $\\mathcal {L}_{\\mathbf {v}(t),h}[U(t)]$ can be written in the matrix form as $\\mathcal {L}_{\\mathbf {v}(t),h}[U(t)]= \\epsilon ^2\\Lambda _{U(t)} D_{h}+\\Lambda _{\\mathbf {v}(t)}A_{\\mathbf {v}(t)}.$ It is easy to verify that at any fixed time $t^*$ , the combination of the upwind finite difference approximation of the convective term and the
central difference approximation of the diffusion term in $\\mathcal {L}_{\\mathbf {v}(t^*),h}[u(t^*)]$ satisfies Assumption REF .", "The generated contraction semigroup can be simply treated as a matrix exponential $e^{t\\mathcal {L}_{\\mathbf {v}(t^*),h}[u(t^*)]}$ .", "The discrete-in-space system of the stabilized formulation (REF ) then reads $\\partial _t U=\\mathcal {L}_{\\mathbf {v}(t),h}^\\kappa [U(t)]U+(N(U)+ \\epsilon ^2\\Lambda _UG_D+\\Lambda _{\\mathbf {v}(t)}G_C),\\qquad t>0,$ where $\\mathcal {L}_{\\mathbf {v}(t),h}^\\kappa [U(t)]=\\mathcal {L}_{\\mathbf {v}(t),h}[U(t)]-\\kappa I$ .", "The fully discrete ETD1 and ETDRK2 schemes can then be produced by applying ETD1 (REF ) and ETDRK2 (REF ) to (REF ), respectively.", "In the following, we present the equivalent formulas of the above fully discrete ETD1 and ETDRK2 schemes that are usually more convenient for practical implementation.", "With the above 1D Dirichlet set-up, adopting the temporal discretizations in Section , we denote $U_i^n$ to be the numerical approximation of $u(x_i,t^n)$ ($i=1,\\ldots ,N-1$ , $n\\ge 0$ ) and $U^n$ to be a function defined on $X^*$ (also viewed as the solution vector as $U^n=(U_1^n,\\ldots ,U_{N-1}^n)^T$ ).", "The fully discrete ETD1 scheme can be written as $U^{n+1}=\\phi _0(\\tau \\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n])U^n+\\tau \\phi _1(\\tau \\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n])\\widetilde{N}(U^n),$ where $\\mathbf {v}^n$ is understood as $\\mathbf {v}(x,t^n)$ taking values at $x_i\\in X^*$ and $\\widetilde{N}(U^n) = N(U^n)+\\epsilon ^2\\Lambda _{U^n}G^n_D+\\Lambda _{\\mathbf {v}^n}G^n_C$ .", "The fully discrete ETDRK2 scheme reads $\\hat{U}^{n+1}&=&\\phi _0(\\tau \\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n])U^n+\\tau \\phi _1(\\tau \\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n])\\widetilde{N}(U^n),\\\\U^{n+1}&=&\\phi _0\\left(\\frac{\\tau }{2}(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}])\\right)U^n\\\\&&+\\tau \\phi _1\\left(\\frac{\\tau }{2}(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}])\\right)\\widetilde{N}(U^n)\\\\&&+\\tau \\phi _2\\left(\\frac{\\tau }{2}(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}])\\right)\\left(\\widetilde{N}(\\hat{U}^{n+1})-\\widetilde{N}(U^n)\\right).$ For the periodic boundary condition case, we have $X^*=\\lbrace x_i,\\,i=1,\\ldots ,N\\rbrace $ , $U_0=U_N$ , $G_D=G_C={\\bf 0}$ , and $D_{h}=\\frac{1}{h^{2}}\\left[\\begin{array}{ccccc}-2 & 1 & & &1\\\\1 & -2 & 1 & & \\\\& \\ddots & \\ddots & \\ddots & \\\\& & 1 & -2 & 1 \\\\1& & & 1 & -2\\end{array}\\right]_{N \\times N},$ $A_{\\mathbf {v}(t)}=\\frac{1}{2 h}\\left[\\begin{array}{cccccc}-2 & 1-\\operatorname{sign}\\left(\\mathbf {v}_{1}\\right) & &&1+\\operatorname{sign}\\left(\\mathbf {v}_{1}\\right)\\\\1+\\operatorname{sign}\\left(\\mathbf {v}_{2}\\right) & -2 & 1-\\operatorname{sign}\\left(\\mathbf {v}_{2}\\right) & \\\\& \\ddots & \\ddots & \\ddots \\\\& & 1+\\operatorname{sign}\\left(\\mathbf {v}_{N-1}\\right) & -2 & 1-\\operatorname{sign}\\left(\\mathbf {v}_{N-1}\\right) \\\\1-\\operatorname{sign}\\left(\\mathbf {v}_{N}\\right)& & & 1+\\operatorname{sign}\\left(\\mathbf {v}_{N}\\right) & -2\\end{array}\\right],$ $\\Lambda _{\\mathbf {v}(t)}=\\operatorname{diag}\\left(\\left|\\mathbf {v}_{1}\\right|, \\ldots ,\\left|\\mathbf {v}_{N}\\right|\\right), \\quad \\Lambda
_U=\\operatorname{diag}\\left(M\\left(U_{1}\\right), \\ldots , M\\left(U_{N}\\right)\\right).$ For the homogeneous Neumann boundary condition case, we have $X^*=X=\\lbrace x_i,\\,i=0,1,\\ldots ,N\\rbrace $ , $G_D=G_C={\\bf 0}$ , and $D_{h}=\\frac{1}{h^{2}}\\left[\\begin{array}{ccccc}-2 & 2 & & &\\\\1 & -2 & 1 & & \\\\& \\ddots & \\ddots & \\ddots & \\\\& & 1 & -2 & 1 \\\\& & & 2 & -2\\end{array}\\right]_{(N+1) \\times (N+1)},$ $A_{\\mathbf {v}(t)}=\\frac{1}{2 h}\\left[\\begin{array}{cccccc}-2 & 2 & &&\\\\1+\\operatorname{sign}\\left(\\mathbf {v}_{1}\\right) & -2 & 1-\\operatorname{sign}\\left(\\mathbf {v}_{1}\\right) & \\\\& \\ddots & \\ddots & \\ddots \\\\& & 1+\\operatorname{sign}\\left(\\mathbf {v}_{N-1}\\right) & -2 & 1-\\operatorname{sign}\\left(\\mathbf {v}_{N-1}\\right) \\\\& & & 2 & -2\\end{array}\\right],$ $\\Lambda _{\\mathbf {v}(t)}=\\operatorname{diag}\\left(\\left|\\mathbf {v}_{0}\\right|, \\ldots ,\\left|\\mathbf {v}_{N}\\right|\\right), \\quad \\Lambda _U=\\operatorname{diag}\\left(M\\left(U_{0}\\right), \\ldots , M\\left(U_{N}\\right)\\right).$ Similar to Theorems REF and REF , we then obtain the unconditional MBP preservation for the above fully discrete ETD1 and ETDRK2 schemes as: $\\Vert U^n\\Vert _{X}\\le \\beta ,\\quad \\forall \\, n\\ge 0,$ where $\\Vert \\cdot \\Vert _{X}$ denotes the maximum norm on $C(X)$ ." ], [ "Convergence analysis", "As an important application of the MBP, we now consider the convergence of the ETD1 (REF ) and ETDRK2 ()-() schemes.", "For simplicity of the analysis, we again take the 1D problem with $\\Omega =(a,b)$ and set the homogeneous Dirichlet boundary condition.", "In this case, $u(x,t)=g(x,t) = 0$ for $x\\in \\partial \\Omega ,\\;t>0$ , thus $U_0(t) = U_N(t) = 0$ , $G_D=G_C={\\bf 0}$ and $\\widetilde{N}(U^n) = {N}(U^n)$ in (REF ) and ()-().", "The error estimation results can be naturally extended to the problems with other boundary conditions and rectangular domains in higher dimensions.", "Let the spatial interpolation $\\mathcal {I}_h:C(X)\\rightarrow C(\\overline{\\Omega })$ or $C(\\overline{\\Omega })\\rightarrow C(\\overline{\\Omega })$ be the piecewise linear interpolation operator with respect to the nodes associated with the mesh.", "More precisely, for any function $v(x)\\in C(X)$ or $C(\\overline{\\Omega })$ , the interpolation $\\mathcal {I}_hv\\in C(\\overline{\\Omega })$ is piecewise linear and $\\mathcal {I}_hv(x)=\\sum _{i=1}^Nv(x_i)\\psi _i(x),\\quad \\forall \\,x\\in \\Omega ,$ where $\\psi _i(x)$ is the piecewise linear basis function satisfying $\\psi _i(x_i)=1$ and $\\psi _i(x_j)=0$ when $i\\ne j$ .", "Note that under the homogeneous Dirichlet boundary condition, $C(X^*)$ can be regarded as a linear subspace of $C(X)$ in the sense of isometric isomorphism through the zero extension to the boundary nodes.", "We have the following error estimates.", "For the fixed terminal time $T>0$ , assume that $\\mathbf {v}\\in C^1([0,T],C^1(\\Omega ))$ , the exact solution $u$ to the model problem (REF ) belongs to $C^1([0,T],C^3(\\overline{\\Omega })\\cap C_0(\\overline{\\Omega }))$ ($C_0(\\overline{\\Omega })=\\lbrace \\phi (x)\\in C(\\overline{\\Omega }),\\,\\phi |_{\\partial \\Omega }=0\\rbrace $ ) and $\\lbrace U^n\\in C(X)\\rbrace _{n\\ge 0}$ is generated by the fully discrete ETD1 scheme (REF ) with $U^0=u_0(X)$ .", "Then for any $\\tau >0$ , $h>0$ , it holds that $\\Vert u(\\cdot ,t^n)-\\mathcal {I}_hU^n(\\cdot )\\Vert \\le C(\\tau +h),\\quad \\forall \\, t^n\\le T,$ where the constant $C>0$ is independent of $\\tau $ and 
$h$ .", "First of all, by Taylor expansion and the triangle inequality, recalling the homogeneous boundary conditions, there holds $\\Vert u(\\cdot ,t^n)-\\mathcal {I}_hU^n(\\cdot )\\Vert \\le & \\Vert \\mathcal {I}_hu(\\cdot ,t^n)-\\mathcal {I}_hU^n(\\cdot )\\Vert +\\Vert u(\\cdot ,t^n)-\\mathcal {I}_hu(\\cdot ,t^n)\\Vert \\nonumber \\\\\\le &\\Vert u(x,t^n)-U^n(x)\\Vert _{X^*}+h\\Vert \\partial _xu(\\cdot ,t^n)\\Vert .$ Under the assumptions of Theorem , the MBP implies that $\\Vert u(t)\\Vert \\le \\beta $ and $\\Vert \\mathcal {I}_hU\\Vert \\le \\beta $ .", "Hence, it suffices to consider the following “error function” $e^n(x)\\in C(X^*)$ with the homogeneous Dirichlet boundary condition as $e^n(x)=U^n(x)-u(x,t^n),\\quad x\\in X^*.$ From $t^n$ to $t^{n+1}$ , the ETD1 solution (REF ) is given by $U^{n+1}(x)=W_1^n(x,\\tau )$ for $x\\in X^*$ with the function $W_1^n:=W_1^n(s)=W_1^n(x,s)$ solving $\\left\\lbrace \\begin{array}{ll}\\partial _s {W_1^n}=\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]W_1^n+N\\left(U^{n}\\right), &\\quad x\\in X^*, s \\in (0, \\tau ], \\\\[4pt]W_1^n(x,0)=U^{n}(x), & \\quad x\\in X^*,\\end{array}\\right.$ where $\\mathbf {v}^n=\\mathbf {v}(t^n)$ .", "Based on the spatial discretization (REF ), $u(x,t^n+s)$ satisfies the following equation for $x\\in X^*$ and $s \\in (0, \\tau ]$ , ${\\partial _s u(x,t^n+s)}=\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n(x)]u(x,t^n+s)+N\\left(u(x,t^n)\\right)+R^n(x,s),$ where the local truncation error function $R^n(s):=R^n(x,s)\\in C(X^*)$ for any $s\\in (0,\\tau ]$ is given by $R^n(s)=&\\mathcal {L}^\\kappa _{\\mathbf {v}(t^n+s)}[u(t^n+s)]u(t^n+s)-\\mathcal {L}^\\kappa _{\\mathbf {v}^n,h}[u(t^n)]u(t^n+s)+N\\left(u(t^n+s)\\right)-N\\left(u(t^{n})\\right)\\nonumber \\\\&+\\mathcal {L}^\\kappa _{\\mathbf {v}^n,h}[u(t^n)]u(t^n+s)-\\mathcal {L}^\\kappa _{\\mathbf {v}^n,h}[U^n]u(t^n+s).$ Under the assumptions of Theorem , using Taylor expansion and the MBP properties, it is easy to check that $&\\left\\Vert \\mathcal {L}^\\kappa _{\\mathbf {v}(t^n+s)}[u(t^n+s)]u(t^n+s)-\\mathcal {L}^\\kappa _{\\mathbf {v}^n,h}[u(t^n)]u(t^n+s)\\right\\Vert _{X^*}\\le C_1(\\tau +h),\\\\&\\left\\Vert \\mathcal {L}^\\kappa _{\\mathbf {v}^n,h}[u(t^n)]u(t^n+s)-\\mathcal {L}^\\kappa _{\\mathbf {v}^n,h}[U^n]u(t^n+s)\\right\\Vert _{X^*}\\le C_2\\Vert u(t^n)-U^n\\Vert _{X^*},\\\\&\\left\\Vert N\\left(u(t^n+s)\\right)-N\\left(u(t^{n})\\right)\\right\\Vert _{X^*}\\le C_3\\tau ,$ and $\\Vert R^n(s)\\Vert _{X^*}\\le C_2\\Vert u(t^n)-U^n\\Vert _{X^*}+(C_1+C_3)\\tau +C_1h.$ Defining the local error function $e_1^n(s):=e_1^n(x,s)=W_1^n(x,s)-u(x,t^n+s)\\in C(X^*)$ for any $s\\in [0,\\tau ]$ and subtracting (REF ) from (REF ), we have ${\\left\\lbrace \\begin{array}{ll}{\\partial _s e_1^n(s)}=\\mathcal {L}_{\\mathbf {v},h}^\\kappa [U^n]e_1^n(s)+N(U^n)-N(u(t^n))-R^n(s), & x\\in X^*, s \\in (0, \\tau ],\\\\e_1^n(x,0)=e^n(x), & x\\in X^*.\\end{array}\\right. }$ The MBP property and Lemma imply $\\Vert N(U^n)-N(u(t^n))\\Vert _{X^*}\\le 2\\kappa \\Vert U^n-u(t^n)\\Vert _{X^*}=2\\kappa \\Vert e^n\\Vert _{X^*}.$ By Duhamel's principle, we obtain from (REF ) that $e_1^n(\\tau )=e^{\\tau \\mathcal {L}_{\\mathbf {v},h}^\\kappa [U^n]}e^n+\\int _0^\\tau e^{(\\tau -s) \\mathcal {L}_{\\mathbf {v},h}^\\kappa [U^n] }\\left(N(U^n)-N(u(t^n))-R^n(s)\\right)\\,ds.$ Since $\\mathcal {L}_{\\mathbf {v},h}[U^n]$ satisfies Assumption REF , by Lemma the corresponding contraction semigroup enjoys the property of $\\Vert e^{\\tau \\mathcal {L}_{\\mathbf {v},h}^\\kappa [U^n]}e^n\\Vert _{X^*}\\le
e^{-\\tau \\kappa }\\Vert e^n\\Vert _{X^*}.$ Combining (REF ) and (REF ), we have from (REF ) that ($e^{n+1}=e^n_1(\\tau )$ ) $\\left\\Vert e^{n+1}\\right\\Vert _{X^*} \\le & \\mathrm {e}^{-\\kappa \\tau }\\left\\Vert e^{n}\\right\\Vert _{X^*}+\\int _{0}^{\\tau }e^{-\\kappa (\\tau -s)}\\left(\\Vert R^n(s)\\Vert _{X^*}+\\Vert N\\left(U^n\\right)-N(u(t^n))\\Vert _{X^*}\\right)\\,ds \\\\\\le & \\mathrm {e}^{-\\kappa \\tau }\\left\\Vert e^{n}\\right\\Vert _{X^*}+\\left(2\\kappa \\Vert e^n\\Vert _{X^*}+C_2\\Vert e^n\\Vert _{X^*}+C_4(\\tau +h)\\right)\\int _{0}^{\\tau } \\mathrm {e}^{-\\kappa (\\tau -s)} \\,ds \\\\=& \\mathrm {e}^{-\\kappa \\tau }\\left\\Vert e^{n}\\right\\Vert _{X^*}+\\frac{1-\\mathrm {e}^{-\\kappa \\tau }}{\\kappa }\\left(2\\kappa \\Vert e^n\\Vert _{X^*}+C_2\\Vert e^n\\Vert _{X^*}+C_4(\\tau +h)\\right)\\\\\\le &(1+C_5\\tau )\\left\\Vert e^{n}\\right\\Vert _{X^*}+C_6 \\tau (\\tau +h),$ where we have used the fact that $1-s\\le e^{-s} \\le 1+s$ for $s>0$ .", "Therefore, recalling $e^0(x)=0$ for any $x\\in X^*$ , we obtain $\\begin{aligned}\\left\\Vert e^{n+1}\\right\\Vert _{X^*} & \\le (1+C_5 \\tau )^{n+1}\\left\\Vert e^{0}\\right\\Vert _{X^*}+C_6 \\tau (\\tau +h) \\sum _{k=0}^{n}(1+C_5\\tau )^{k} \\\\&=\\frac{C_6}{C_5}(\\tau +h)\\left[(1+C_5\\tau )^{n+1}-1\\right] \\le C_7e^{C_5 (n+1) \\tau } (\\tau +h),\\end{aligned}$ and the desired estimates at $t^{n+1}$ hold in view of (REF ).", "It is easy to check that all the constants $C_j$ appearing in the proof are independent of $\\tau $ and $h$ .", "For the second-order ETDRK2 scheme ()-(), the error estimates can be established as follows.", "For the fixed terminal time $T>0$ , assume that $\\mathbf {v}\\in C^1([0,T],C^1(\\Omega ))$ , the exact solution $u$ to the model problem (REF ) belongs to $C^2([0,T],C^4(\\overline{\\Omega })\\cap C_0(\\overline{\\Omega }))$ and $\\lbrace U^n\\in C(X)\\rbrace _{n\\ge 0}$ is generated by the fully discrete ETDRK2 scheme ()-() with $U^0=u_0(X)$ .", "Then for any $\\tau ,h\\in (0,1]$ , it holds that $\\Vert u(\\cdot ,t^n)-\\mathcal {I}_hU^n(\\cdot )\\Vert \\le C(\\tau ^2+h),\\quad \\forall \\, t^n\\le T,$ where the constant $C>0$ is independent of $\\tau $ and $h$ .", "The proof is similar to that of the ETD1 case and we only sketch it below.", "First of all, the ETDRK2 solution is given by $U^{n+1}(x)=W_2^n(x,\\tau )\\in C(X^*)$ with the function $W_2^n(s)$ solving ${\\left\\lbrace \\begin{array}{ll}\\partial _s W_2^n=\\frac{1}{2}\\left(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}]\\right)W_2^n\\\\\\qquad \\qquad \\qquad \\qquad +(1-\\frac{s}{\\tau })N(U^n)+\\frac{s}{\\tau }N(\\hat{U}^{n+1}),& x\\in X^*, s \\in (0, \\tau ],\\\\W_2^n(x,0)=U^{n}(x),& x\\in X^*,\\end{array}\\right. }$ where $\\hat{U}^{n+1}\\in C(X^*)$ is given by $\\hat{U}^{n+1}=W_1^n(\\tau )$ with $W_1^n(s)$ satisfying (REF ).", "Based on the proof of Theorem , we know $\\left\\Vert \\hat{U}^{n+1}-u(t^{n+1})\\right\\Vert _{X^*}\\le (1+C_1\\tau )\\Vert e^n\\Vert _{X^*}+C_2\\tau (\\tau +h).$ We define the local truncation error function $R^n(s):=R^n(x,s)\\in C(X^*)$ ($x\\in X^*$ ) for any $s\\in (0,\\tau ]$ as $&R^n(s)=\\frac{\\partial u(t^n+s)}{\\partial s}-\\frac{1}{2}\\left(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}]\\right)u(t^n+s)\\nonumber \\\\&\\qquad \\qquad -(1-\\frac{s}{\\tau })N(u(t^n))-\\frac{s}{\\tau }N(u(t^{n+1})),$ which can be decomposed as follows in view of (REF ), (REF )
and Taylor expansion (as in the ETD1 case; details are omitted): $R^n(s)=R_1^n(s)+R_2^n(s)+R_3^n(s),$ with $&\\Vert R_1^n(s)\\Vert _{X^*}\\le C_4\\left(\\Vert e^n\\Vert _{X^*}+\\left\\Vert \\hat{U}^{n+1}-u(t^{n+1})\\right\\Vert _{X^*}\\right)\\le C_5\\left(\\Vert e^n\\Vert _{X^*}+\\tau (\\tau +h)\\right),\\\\&R_2^n(s)=\\left(\\frac{s}{\\tau }-\\frac{1}{2}\\right)\\left(\\mathcal {L}^\\kappa _{\\mathbf {v}^{n+1}}[u(t^{n+1})]-\\mathcal {L}^\\kappa _{\\mathbf {v}^n}[u(t^n)]\\right)u(t^n),\\\\&\\Vert R_3^n(s)\\Vert _{X^*}\\le C_6(\\tau ^2+h).$ Define the local error function in the interval $[t^n,t^{n+1}]$ as $e_{2}^n(s)=W_2^n(x,s)-u\\left(x,t^n+s\\right)\\in C(X^*)$ .", "Subtracting (REF ) from (REF ) yields ${\\left\\lbrace \\begin{array}{ll}\\partial _s e_2^n(x,s)=\\frac{1}{2}\\left(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}]\\right)e_2^n(x,s)+(1-\\frac{s}{\\tau })N(U^n)+\\frac{s}{\\tau }N(\\hat{U}^{n+1})\\\\\\qquad \\qquad \\quad -(1-\\frac{s}{\\tau })N(u(t^n))-\\frac{s}{\\tau }N(u(t^{n+1}))-R^n(x,s), \\quad s \\in (0, \\tau ], \\,x\\in X^*,\\\\e_2^{n}(x,0)=e^{n}(x), \\quad x\\in X^*.\\end{array}\\right. }$ Denoting $\\begin{split}f^n(s):=f^n(x,s)=&(1-\\frac{s}{\\tau })N(U^n)+\\frac{s}{\\tau }N(\\hat{U}^{n+1})-(1-\\frac{s}{\\tau })N(u(t^n))-\\frac{s}{\\tau }N(u(t^{n+1})),\\end{split}$ by using the MBP, Lemma and (REF ), we have $\\left\\Vert f^n(s)\\right\\Vert _{X^*}\\le &2\\kappa (1-\\frac{s}{\\tau })\\left\\Vert U^n-u(t^n)\\right\\Vert _{X^*}+2\\kappa \\frac{s}{\\tau }\\left\\Vert \\hat{U}^{n+1}-u(t^{n+1})\\right\\Vert _{X^*}\\nonumber \\\\\\le & C_7\\left(\\Vert e^n\\Vert _{X^*}+\\tau (\\tau +h)\\right).$ Applying Duhamel's principle to (REF ), we get $e^n_2(\\tau )=&e^{\\frac{\\tau }{2}\\left(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}]\\right)}e^n\\nonumber \\\\&+\\int _0^\\tau e^{\\frac{\\tau -s}{2}\\left(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}]\\right)}\\left(f^n(s)-R_1^n(s)-R_2^n(s)-R_3^n(s)\\right)\\,ds.$ Since $\\int _0^\\tau R_2^n(s)\\,ds=0$ , by applying Taylor expansion to the matrix exponential, we have $&\\left\\Vert \\int _0^\\tau e^{\\frac{\\tau -s}{2}\\left(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}]\\right)}R_2^n(s)\\,ds\\right\\Vert _{X^*}\\nonumber \\\\&\\qquad \\le C_8\\tau ^2\\left\\Vert \\left(\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [u(t^{n+1})]-\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [u(t^n)]\\right)u(t^n)\\right\\Vert _{X^*}\\le C_9\\tau ^3,$ where $C_9$ depends on the $C^2([0,T],C^4(\\overline{\\Omega }))$ norm of $u(x,t)$ .", "Combining (REF )-(), (REF ) and (REF ) and noticing that $e^{n+1}=e_2^n(\\tau )$ and the semigroup $e^{\\frac{\\tau }{2}\\left(\\mathcal {L}_{\\mathbf {v}^n,h}^\\kappa [U^n]+\\mathcal {L}_{\\mathbf {v}^{n+1},h}^\\kappa [\\hat{U}^{n+1}]\\right)}$ has an upper bound $e^{-\\kappa \\tau }$ in the $C(X^*)$ norm, we can get $\\Vert e^{n+1}\\Vert _{X^*}\\le &e^{-\\tau \\kappa }\\Vert e^n\\Vert _{X^*}+C_{10}(\\Vert e^n\\Vert _{X^*}+\\tau ^2+h)\\int _0^\\tau e^{-\\kappa (\\tau -s)}\\,ds\\nonumber +C_9\\tau ^3\\\\\\le &(1+C_{11}\\tau )\\Vert e^n\\Vert _{X^*}+C_{12}\\tau (\\tau ^2+h).$ Following the proof for the ETD1 case, we then conclude $\\Vert e^n\\Vert _{X^*}\\le C_{13}(\\tau ^2+h)$ for $0 \\le t^n\\le T$ and the error estimates in Theorem hold based on (REF ).", "We
note that the regularity assumptions in Theorems and are not sharp and can be weakened.", "The derived error estimates also hold for the cases of periodic and homogeneous Neumann boundary conditions." ], [ "Numerical experiments", "In this section, we present various 2D and 3D numerical experiments to demonstrate the accuracy and the discrete MBP of the proposed fully discrete ETD1 (REF ) and ETDRK2 ()-() schemes.", "The ETDRK2 scheme is used for all examples, while the ETD1 scheme is only considered in the temporal convergence test due to its lower accuracy." ], [ "Convergence tests", "Let us take the domain $\\Omega =(-0.5,0.5)^2$ and the terminal time $T=0.1$ .", "We consider the 2D convective Allen-Cahn equation (REF ) with the mobility function $M(u)\\equiv 1$ and $\\epsilon =0.01$ .", "In addition, the nonlinear reaction $f=-F^\\prime $ is given by the double-well potential case (REF ), thus the stabilizing coefficient is taken as $\\kappa =2$ so that (REF ) is satisfied [9].", "The initial value is $u_0(x,y)=\\cos (2\\pi x)\\cos (2\\pi y)$ with the velocity field $\\mathbf {v}=[1,1]^T$ .", "The periodic boundary condition is imposed.", "By fixing the space mesh size $h=1/1024$ , we first test the convergence in time with various time step sizes.", "To compute the solution errors, we set the solution obtained by the ETDRK2 scheme with $\\tau =1/1024$ as the reference solution.", "The $L^\\infty $ and $L^2$ errors of the numerical solutions at the terminal time $T=0.1$ with different time step sizes and their corresponding convergence rates in time are reported in Table REF , where the expected temporal convergence rates (order 1 for ETD1 and order 2 for ETDRK2) are clearly observed.", "Table: Results on $L^\\infty $ and $L^2$ errors of the numerical solutions at $T=0.1$ and their corresponding convergence rates in time for the fully discrete ETD schemes.", "Next, we test the convergence with respect to the spatial mesh size $h$ by fixing the temporal step size $\\tau =1/2048$ .", "The numerical solution of the convective Allen-Cahn equation obtained by the ETDRK2 scheme with $h=1/512$ is treated as the reference solution.", "The $L^\\infty $ and $L^2$ errors of the numerical solutions at the terminal time $T=0.1$ along the spatial mesh refinement and their corresponding convergence rates are presented in Table REF .", "It is observed that the spatial convergence rates gradually approach 1, which is consistent with the upwind finite difference approximation as expected.", "Table: Results on $L^\\infty $ and $L^2$ errors of the numerical solutions at $T=0.1$ and their corresponding convergence rates in space for the fully discrete ETDRK2 scheme."
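To make the fully discrete ETD1 update concrete, the sketch below advances one step of the 1D analogue of this convergence-test setting (periodic boundary condition, $M(u)\\equiv 1$ , constant velocity, double-well reaction $f(u)=u-u^3$ ). It builds dense matrices and evaluates $\\tau \\phi _1(\\tau \\mathcal {L}^\\kappa )\\widetilde{N}=(\\mathcal {L}^\\kappa )^{-1}(e^{\\tau \\mathcal {L}^\\kappa }-I)\\widetilde{N}$ by a direct solve, so it is only meant for small grids; the function interface is our own illustration, not the authors' code.

```python
import numpy as np
from scipy.linalg import expm, solve

def etd1_step_periodic(u, tau, h, eps, kappa, vel):
    # One fully discrete ETD1 step for the 1D convective Allen-Cahn equation
    # with periodic BC, M(u) = 1, and the double-well f(u) = u - u^3.
    # Dense-matrix sketch for small grids only (assumed interface).
    n = u.size
    I = np.eye(n)

    # Central-difference Laplacian with periodic wrap-around.
    D = (np.roll(I, 1, axis=1) - 2.0 * I + np.roll(I, -1, axis=1)) / h**2

    # First-order upwind discretization of the convective part -v u_x.
    if vel >= 0:
        A = vel * (np.roll(I, -1, axis=1) - I) / h  # backward difference in u_x
    else:
        A = vel * (I - np.roll(I, 1, axis=1)) / h   # forward difference in u_x

    L = eps**2 * D + A - kappa * I                  # stabilized operator L^kappa
    N = kappa * u + (u - u**3)                      # N(u) = kappa*u + f(u)

    E = expm(tau * L)                               # phi_0(tau L^kappa) as a matrix
    # tau * phi_1(tau L^kappa) N = (L^kappa)^{-1} (e^{tau L^kappa} - I) N
    return E @ u + solve(L, (E - I) @ N)
```

Empirical temporal orders can then be estimated exactly as in the tables, e.g. via rate = log2(err(tau)/err(tau/2)) against a fine reference solution.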
], [ "MBP tests", "Next, we numerically simulate the 2D convective Allen-Cahn equation (REF ) with $\\epsilon =0.01$ to investigate the preservation of the discrete MBP in the long-time phase separation process.", "Let us take the computational domain $\\Omega =(-0.5,0.5)^2$ and the terminal time $T=50$ .", "We set the initial configuration $u_0(x,y)=0.9\\sin (100\\pi x)\\sin (100\\pi y)$ , the velocity field $\\mathbf {v}(x,y,t)=e^{-t}[\\sin (2\\pi x),-\\cos (2\\pi y)]^T$ and $M(u)=1-u^2$ .", "The homogeneous Neumann boundary condition is imposed.", "The mesh size is set to be $h=1/64$ .", "The ETDRK2 scheme is taken for all tests in this subsection.", "First, the nonlinear reaction $f=-F^\\prime $ is chosen as the double-well potential case (REF ).", "The MBP bound constant is given by $\\beta =1$ and consequently and the stabilizing coefficient $\\kappa =1$ in view of the fact that $\\max \\limits _{|u|\\le 1}\\left|\\widetilde{f}^{\\prime }(u)\\right|=1$ .", "Fig.", "REF shows the snapshots of the numerical solutions at $t=0.1, 1, 8, 50$ , respectively with different time step sizes $\\tau =0.1$ and $\\tau =0.01$ .", "We can clearly see the ordering and coarsening phenomena, and the rotation effect caused by $\\mathbf {v}(x,y,t)$ is well observed along the whole process.", "These two simulations produced by different time step sizes give us overall similar evolution processes.", "The corresponding evolutions of the supremum norm and the classic free energy (defined in (REF )) of the numerical solutions are shown in Fig.", "REF .", "We also observe that the discrete MBP for the convective Allen-Cahn equation is preserved perfectly along the time evolution.", "Moreover, the energy also decays monotonically, although it is not guaranteed theoretically for the target problem.", "Figure: The numerical solutions at tt = 0.1, 1, 8, 50, respectively (top to bottom) for the 2D convective Allen-Cahn equation with the double-well potential.", "Left column: τ\\tau =0.1, right column: τ\\tau =0.01.Figure: Evolutions of the supremum norm (left) and the energy (right) of the numerical solutions for the 2D convective Allen-Cahn equation with the double-well potential.", "Top: τ\\tau =0.1, bottom: τ\\tau =0.01.Next, $f=-F^\\prime $ is chosen as the Flory-Huggins potential case (REF ) with $\\theta =0.8$ and $\\theta _c=1.6$ .", "The MBP bound constant is now given by $\\beta \\approx 0.9575$ , and consequently the stabilizing coefficient is set to be $\\kappa =1$ since $ \\max \\limits _{|u|\\le \\beta }\\left|\\widetilde{f}^{\\prime }(u)\\right|\\approx 0.9801$ so that (REF ) is satisfied in this case.", "Fig.", "REF shows the snapshots of the numerical solutions at $t$ =0.1, 1, 8, 50, respectively with $\\tau =0.1$ and $\\tau =0.01$ , and the corresponding time evolutions of the supremum norm and the energy are presented in Fig.", "REF .", "Again, the ordering and coarsening phenomena and the rotation effect caused by $\\mathbf {v}(x,y,t)$ are clearly observed along the process.", "It is also clear that the energy decays monotonically and the discrete MBP for the convective Allen-Cahn equation is well preserved numerically.", "The two simulations by different time step sizes again produce similar evolution processes.", "Figure: The numerical solutions at tt = 0.1,1,8, 50, respectively (top to bottom) for the 2D convective Allen-Cahn equation with the Flory-Huggins potential.", "Left column: τ\\tau = 0.1, right column: τ\\tau = 0.01.Figure: Evolutions of the supremum norm (left) and the energy (right) 
of the numerical solutions for the 2D convective Allen-Cahn equation with the Flory-Huggins potential.", "Top: $\\tau =0.1$ , bottom: $\\tau =0.01$ ." ], [ "Convective test", "Now we consider the 2D convective Allen-Cahn equation (REF ) with the mobility $M(u)=1-u^2$ , the velocity field $\\mathbf {v}(x,y,t)=[y,-x]^T$ and $\\epsilon =0.01$ in the L-shaped domain $(x,y)\\in \\Omega = (0,1)^2\\setminus [0,0.5]^2$ subject to the Dirichlet boundary condition specified on the boundary $\\partial \\Omega $ as $u(x,y,t)=\\left\\lbrace \\begin{array}{ll}1,&\\quad y=0;\\\\0,& \\quad \\text{otherwise}.\\end{array}\\right.$ The nonlinear reaction $f=-F^\\prime $ is given by the double-well potential (REF ).", "The initial data is $u_0(x,y)=0$ except on the domain boundary $y=0$ , where $u_0(x,0)=1$ (compatible with the boundary condition).", "For this special example [27], the solution $u(x,y,t)$ is always located in the interval $[0,1]$ according to the result $0\\le N(\\xi )\\le \\kappa $ for any $\\xi \\in [0,1]$ in Lemma .", "Fig. REF shows the snapshots of the numerical solutions generated by the ETDRK2 scheme with $\\kappa =1$ at $t=0.1, 1, 1.3, 10$ , respectively.", "The ordering and coarsening phenomena as well as the counter-clockwise rotation effect due to the convective term are clearly observed.", "Moreover, we see from Fig. REF that the numerical solutions remain between 0 and 1 very well at these times.", "Figure: The numerical solutions at $t=0.1, 1, 1.3, 10$ , respectively (top to bottom and left to right) for the 2D convective Allen-Cahn equation with the double-well potential." ], [ "3D simulations", "In this subsection, some 3D simulations are performed for the convective Allen-Cahn equation (REF ) with $\\epsilon =0.01$ under the periodic boundary condition.", "The computational domain is set to be $\\Omega =(-0.5,0.5)^3$ .", "The initial configuration is the quasi-uniform state $u_0(\\cdot )=0.9\\,\\text{rand}(\\cdot )$ , where $\\text{rand}(\\cdot )$ generates random numbers from the uniform distribution on $[-1,1]$ .", "The mobility function is $M(u)\\equiv 1$ and the velocity field is $\\mathbf {v}=[1,1,1]^T$ .", "The time step is set to be $\\tau =0.01$ and the mesh size is $h=1/128$ .", "Again the ETDRK2 scheme is used.", "First, we choose the double-well potential case (REF ) and the corresponding stabilizing coefficient is $\\kappa =2$ .", "Fig. REF shows the phase structures of the numerical solutions at $t=0.1, 1, 5, 8$ , respectively.", "The time evolutions of the supremum norm and the energy of numerical solutions are presented in Fig. REF .", "It is observed that the MBP for the convective Allen-Cahn equation is numerically preserved perfectly and the energy decays monotonically.", "Figure: The simulated phase structures at $t=0.1, 1, 5, 8$ , respectively (top to bottom and left to right) for the 3D convective Allen-Cahn equation with the double-well potential.", "Figure: The evolutions of the supremum norm (left) and the energy (right) of numerical solutions for the 3D convective Allen-Cahn equation with the double-well potential.", "Next, we test the Flory-Huggins potential case (REF ) with $\\theta =0.8$ and $\\theta _c=1.6$ , and the corresponding stabilizing coefficient is $\\kappa =8.02$ [9].", "Fig. REF depicts the phase structures of the numerical solutions at $t=0.1, 1, 5, 8$ , respectively.", "The time evolutions of the supremum norm and the energy of numerical solutions are shown in Fig. REF .", "It is again observed that the MBP for the convective Allen-Cahn equation is
numerically preserved perfectly and the energy decays monotonically, as in the double-well potential case.", "Figure: The simulated phase structures at $t=0.1, 1, 5, 8$ , respectively (top to bottom and left to right) for the 3D convective Allen-Cahn equation with the Flory-Huggins potential.", "Figure: The evolutions of the supremum norm (left) and the energy (right) of numerical solutions for the 3D convective Allen-Cahn equation with the Flory-Huggins potential." ], [ "Conclusion", "In this paper, we studied numerical solutions to the convective Allen-Cahn equation with a nonlinear mobility, a convective term, and a nonlinear reaction.", "We proposed and analyzed stabilized ETD1 and ETDRK2 schemes, which are linear and preserve the discrete MBP unconditionally.", "Various numerical examples are also tested to verify our theoretical results on the MBP and the error estimates of the proposed schemes.", "It is worth noting that the upwind difference scheme used for discretizing the convective term in our method is only first-order accurate in space.", "Many existing numerical results have indicated that low-order spatial discretization schemes may not be able to capture the phase interface evolution well for a very small interface coefficient $\\epsilon \\ll 1$ .", "Thus, it is highly desirable to design high-order spatial approximation schemes with unconditional MBP preservation to simulate the convective Allen-Cahn equation, which remains one of our future works.", "In addition to the ETD schemes, the integrating factor method is also a widely used time integration method.", "If the convective and nonlinear mobility terms are treated explicitly, the standard Runge-Kutta method could be applied.", "Therefore, designing accurate and efficient integrating factor Runge-Kutta (IFRK) schemes with conditional or unconditional MBP preservation is also a very interesting topic for future study." ] ]
2210.07827
[ [ "Prompt Conditioned VAE: Enhancing Generative Replay for Lifelong\n Learning in Task-Oriented Dialogue" ], [ "Abstract Lifelong learning (LL) is vital for advanced task-oriented dialogue (ToD) systems.", "To address the catastrophic forgetting issue of LL, generative replay methods are widely employed to consolidate past knowledge with generated pseudo samples.", "However, most existing generative replay methods use only a single task-specific token to control their models.", "This scheme is usually not strong enough to constrain the generative model due to insufficient information involved.", "In this paper, we propose a novel method, prompt conditioned VAE for lifelong learning (PCLL), to enhance generative replay by incorporating tasks' statistics.", "PCLL captures task-specific distributions with a conditional variational autoencoder, conditioned on natural language prompts to guide the pseudo-sample generation.", "Moreover, it leverages a distillation process to further consolidate past knowledge by alleviating the noise in pseudo samples.", "Experiments on natural language understanding tasks of ToD systems demonstrate that PCLL significantly outperforms competitive baselines in building LL models." ], [ "Introduction", "Task-oriented dialogue (ToD) systems are of great importance in advanced AI applications [75].", "However, most existing ToD systems are developed under the assumption that the data distribution remains unchanged [84].", "Unless the entire system is retrained, this setup may not be realistic when the ToD system deployed in practice needs to support new features and provides more services over time based on user demands.", "Without incurring the high cost of retraining, Lifelong Learning (LL) is able to acquire new knowledge continuously while preserving previously learned knowledge [13].", "Hence, it's crucial to equip natural language understanding (NLU) modules, the vital components of ToD systems, with the lifelong learning ability.", "Figure: The training process of our model PCLL.The main issue for lifelong learning is catastrophic forgetting [46], [49], which refers to the phenomenon that a model forgets previously learned tasks when learning new tasks.", "Various approaches have been proposed to alleviate this issue [59], [1], [57], [2].", "The replay-based methods are among the most effective and widely used ones [55], [61], [12].", "The main idea of replay-based methods is to retrain samples or representations from already seen tasks when learning new tasks [48].", "Some methods explicitly store previously seen real samples for replaying (experience replay) [55], [8].", "However, this setting will be infeasible when data from previous tasks is unavailable due to data security concerns.", "Other methods try to generate pseudo samples using a generative model (generative replay).", "This variant relieves the burden of storing previously seen data and has been widely adopted in previous studies [13], [61], [35], [78].", "The key to generative replay is to produce pseudo samples to approximate the real data distribution of previous tasks.", "Intuitively, higher quality pseudo samples can better preserve learned tasks and lead to less forgetting in LL.", "However, the generation of pseudo samples for each seen task in previous studies [63], [9] is usually controlled by a single task-specific token.", "It has been observed that this scheme is usually insufficient to constrain the PLM [63], due to limited information involved.", "Consequently, the generated 
pseudo samples suffer from problems such as not being fluent or not corresponding well to the designated task.", "Moreover, those special tokens are only introduced in the fine-tuning stage of the PLM [83], [67], [82], [80], [81].", "This enlarges the gap between pre-training and fine-tuning of the PLM [22] and harms the quality of the generated pseudo samples.", "In addition, noisy generated pseudo samples may degrade the LL performance.", "To address the above issues, we propose a novel method, Prompt Conditioned VAE for Lifelong Learning (PCLL), to enhance generative replay on NLU tasks of ToD systems.", "To impose strong control over the pseudo-sample generation, PCLL explicitly models latent task-specific distributions using a conditional variational autoencoder (CVAE) [36], [76].", "Then it incorporates the corresponding task statistics to guide the generation of pseudo samples.", "To reduce the gap between pre-training and fine-tuning, we construct natural language prompts to unify different NLU tasks while being specific to each task.", "These prompts not only contain meaningful semantics compared to special tokens, but also serve as conditions to assist the CVAE in capturing task distributions.", "Moreover, PCLL employs a knowledge distillation scheme to alleviate the impact of noisy pseudo samples during the replay process.", "Leveraging the above strategies, PCLL can generate high-quality pseudo samples that better approximate the real distributions of previous tasks while tackling the aforementioned issues.", "We validate our method on NLU tasks of ToD systems, including both intent detection and slot filling.", "The results indicate that our approach generates high-quality pseudo samples and significantly outperforms competitive baselines.", "Our main contributions are as follows: (1) We propose a novel method, PCLL, to enhance generative replay for building lifelong NLU modules of ToD systems.", "(2) Conditioned on prompts, PCLL models latent task distributions with a CVAE to guide the pseudo-sample generation and leverages knowledge distillation to further avoid forgetting.", "(3) Our extensive experiments and comprehensive analyses demonstrate the superior performance of PCLL and the high quality of its generated samples."
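To make the prompt-conditioned CVAE objective described above concrete, the following PyTorch-style sketch shows a standard conditional-VAE loss (negative ELBO): a reconstruction term plus the KL divergence between the recognition posterior $q(z|x,c)$ and the prompt-conditioned prior $p(z|c)$ . The network interfaces (prior_net, recog_net, decoder) and tensor shapes are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def cvae_elbo_loss(prior_net, recog_net, decoder, prompt_emb, utter_emb, target_ids):
    # prompt_emb : encoding of the natural language prompt (the condition c)
    # utter_emb  : encoding of the full sample x (used by the recognition net only)
    # target_ids : (batch, seq_len) token ids the decoder must reconstruct
    # All three networks are assumed callables producing the shapes used below.

    # Recognition (posterior) network q(z | x, c) and prior network p(z | c),
    # both parameterizing diagonal Gaussians via concatenated (mu, log-variance).
    mu_q, logvar_q = recog_net(torch.cat([utter_emb, prompt_emb], dim=-1)).chunk(2, dim=-1)
    mu_p, logvar_p = prior_net(prompt_emb).chunk(2, dim=-1)

    # Reparameterization trick: z = mu + sigma * eps.
    z = mu_q + torch.randn_like(mu_q) * torch.exp(0.5 * logvar_q)

    # Reconstruction term: the decoder is conditioned on z (e.g., added to the
    # token embeddings, as in the figure) and on the prompt.
    logits = decoder(target_ids, latent=z, prompt=prompt_emb)  # (batch, seq, vocab)
    rec = F.cross_entropy(logits.transpose(1, 2), target_ids)

    # Closed-form KL( q(z|x,c) || p(z|c) ) between two diagonal Gaussians.
    kl = 0.5 * torch.sum(
        logvar_p - logvar_q
        + (torch.exp(logvar_q) + (mu_q - mu_p) ** 2) / torch.exp(logvar_p)
        - 1.0,
        dim=-1,
    ).mean()

    return rec + kl
```

At generation time, sampling $z$ from the prompt-conditioned prior $p(z|c)$ and decoding yields pseudo samples whose statistics follow the task distribution captured for that prompt.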
], [ "Lifelong Learning", "There are generally three categories of LL methods: Regularization-based Methods aim to strike a balance between protecting already learned tasks while granting sufficient flexibility for a new task [48].", "Some methods [59], [1], [71], [16] impose constraints on the modification of important weights.", "Other methods introduce a distillation loss to constrain predicted features of the LL model.", "[40], [15], [53].", "However, these additional regularization terms may downgrade the model performance [49].", "Architecture-based Methods dedicate model parameters for each task to prevent forgetting [13].", "Some studies [18], [60], [31], [34] use static architectures and rely on task specific information to route through the architecture [48], while other studies [57], [2], [72], [45], [21] dynamically grow the architecture in the LL training process.", "However, these methods either require capacity allocation for tasks at the beginning or are not feasible when model expansion is prohibited with limited resources [63].", "Figure: The architecture of the prompt conditioned VAE generator in PCLL.", "It captures the task distribution conditioned on prompts and incorporates the latent variable zz (or z ' z^{\\prime }) into tokens' embeddings to guide the decoding.Replay-based Methods aim to preserve previous knowledge by replaying data from learned tasks.", "One line of studies [55], [8], [44], [47], [24], [42] keeps a small number of real samples from old tasks for replaying.", "However, these methods are unpractical when data from old tasks are unavailable.", "Another line of studies [61], [35], [70] utilizes a generative model to reproduce pseudo samples or representations from old tasks.", "In this paper, we focus on improving generative replay, as it does not require allocating extra parameters or model capacity and can be used with any LL model.", "Specifically, [63] propose a general framework LAMOL for lifelong language learning to replay pseudo samples of previous tasks.", "[9] improve LAMOL by training an extra teacher model before learning each new task, however, this increases the burden of the LL process.", "[33] freeze critical parameters in LAMOL based on rationales, but those rationales are not always available for NLP tasks.", "All these previous works do not take task statistics into consideration, whereas our PCLL method incorporates the information of tasks' distributions to enhance generative replay." ], [ "Prompt-based Learning in NLP", "Prompt-based learning has been found to be more effective than typical finetuning to use PLM [58].", "With prompts, we can convert various downstream tasks to a unified language modeling task [4], [58], [25].", "Prompts can be either manually designed [50] or generated automatically [62], [32], [20].", "Some recent studies employ prompt tuning on continual learning for dialogue state tracking [84] and few-shot learning [51]." 
], [ "Problem Definition", "We aim to build an LL model to learn a stream of NLU tasks sequentially $\\mathcal {T}^{T}=\\lbrace t\\rbrace _{t=1}^{T}$ in dialogue systems, where $T$ can be infinite potentially.", "For each task $t$ , a set of samples $\\mathcal {D}_{t} = \\lbrace (x_k,y_k)\\rbrace _{k=1}^{N_{t}}$ are drawn from its underlying data distribution.", "Here, $x_k$ denotes the input utterance, and $y_k$ denotes the output label of NLU.", "In intent detection tasks, $y_k$ is the intent label of $x_k$ ; in slot filling tasks, $y_k$ is the slot-value pairs contained in $x_k$ .", "Our objective is to learn a model that can perform well on all seen tasks and forget as little as possible." ], [ "Overview", "We start with a brief overview of our proposed PCLL method for generative replay (See Fig.", "REF ).", "PCLL consists of two components: an LM-based task solver to solve NLU tasks and a CVAE-based generator to generate pseudo samples with the help of task-specific latent distributions.", "For the first task, PCLL is initialized with PLMs along with other parameters randomly initialized.", "Before learning a new task $t$ , we first use the PCLL model trained on previous tasks to generate pseudo samples for each of the learned tasks $\\mathcal {T}^{t-1}$ .", "Then we interleave these pseudo samples with the training data in $\\mathcal {D}_{t}$ and continue to train PCLL.", "In this way, the model can learn the new task $t$ while consolidating the knowledge of past tasks.", "In the following sections, we first illustrate how PCLL learns the current task (Sec.", "REF , REF ).", "Then we describe the pseudo-sample generation process (Sec.", "REF ), and finally, we introduce a knowledge distillation process to further improve the LL performance (Sec.", "REF )." 
], [ "LM-based Task Solver", "Following recent studies [63], [9], PCLL unifies different NLU tasks into a language modeling (LM) task and implements a task solver based on a PLM.", "Different from previous studies that introduce randomly initialized special tokens in the fine-tuning stage [63], we construct task-specific natural language prompts for the solver.", "These prompts carry rich semantic information to alleviate the mismatch between fine-tuning and pre-training of PLM.", "For each input-output pair $(x, y)$ from task $t$ , our task solver is a LM that takes a prompt $g_t(x)$ as an input and predicts $y$ .", "Specifically, $g_t(x)$ is constructed as $g_t(x) = g_t^{pre}\\oplus x\\oplus g_t^{post}$ , where $g_t^{pre}$ and $g_t^{post}$ are prompt prefix and postfix designed for task $t$ , respectively, and $\\oplus $ means the concatenation of word tokens.", "For instance, if the task $t$ is an intent detection task, we design $g_t(x)$ as: “For an utterance from the ID task, x has the following intent ”, where “ID” represents the task name of $t$ .", "After serializing the output $y$ into a token sequence, we can obtain a natural language sentence by simply concatenating $g_t(x)$ with $y$ .", "We list detailed examples in Appendix  REF .", "Then the PLM $f_{\\theta _t}$ for the current task $t$ is optimized on the concatenated sentence $\\small g_t(x,y)=g_t^{pre}\\oplus x\\oplus g_t^{post}\\oplus y,$ by maximizing the following objective (see Fig.", "REF ): $\\begin{split}\\mathcal {L}_{LM} = \\log p_{\\theta }(g_t(x,y)) + \\lambda \\log p_{\\theta }(y | g_t(x)),\\end{split}$ in which the first term learns to decode the constructed sentence given the start token [BOS], and the second term learns to predict the output $y$ after reading the prompt $g_t(x)$ .", "$\\lambda $ is a scalar used to balance these two terms.", "Figure: The LM-based solver for NLU tasks.The input-output pair (x,y)(x,y) is converted into a natural language prompts with g t pre g_t^{pre} and g t post g_t^{post}." 
], [ "Prompt Conditioned VAE Generator", "To construct high-quality pseudo-samples, PCLL leverages a CVAE module to build a pseudo-sample generator so that it can incorporate tasks' statistics to guide the generation of pseudo samples.", "The CVAE module captures task-specific latent distributions by taking utterances as the input, conditioned on prefix prompts, and reconstructing the input during training.", "Specifically, given an input utterance $x$ in task $t$ , we assume a random variable $z$ captures the latent distribution over $x$ .", "We define a conditional distribution as $p(x, z| t) = p(x|z, t) p(z| t)$ , where we approximate $p(z| t)$ and $p(x|z, t)$ using deep neural networks with parameters $\\phi $ and $\\theta $ , respectively.", "We refer to $p_\\phi (z|t)$ as the prior network and $p_\\theta (x | z, t)$ as the decoder.", "To reconstruct $x$ , a latent variable $z$ is first sampled from $p_\\phi (z|t)$ and then $x$ is decoded through $p_\\theta (x | z, t)$ .", "In this study, we assume the prior of $z$ to be a multivariate Gaussian distribution with a diagonal covariance matrix, and introduce a recognition network $q_\\psi (z | x, t)$ to approximate the intractable true posterior $p(z | x, t)$ .", "The goal of CVAE is to maximize the conditional log-likelihood $\\log p(x | t) = \\int p(x|z, t) p(z| t) d z$ .", "Employing variational inference, we can get the following evidence lower bound (ELBO) [76] to maximize: $\\small \\begin{aligned}& \\mathcal {L}_{\\text{CVAE}} = \\underbrace{\\mathbb {E}_{q_{\\psi }(z | x, t)} \\log p_\\theta (x | z, t)}_{\\mathcal {L}_{\\text{REC}}} \\\\& - \\beta \\underbrace{\\mathrm {KL}\\left(q_\\psi (z | x, t) \\Vert p_\\phi (z | t)\\right)}_{\\mathcal {L}_{\\text{KL}}} \\le \\log p(x | t),\\end{aligned}$ where $\\beta $ is a scalar to balance the reconstruction term $\\mathcal {L}_{\\text{REC}}$ and the Kullback–Leibler (KL) divergence term $\\mathcal {L}_{\\text{KL}}$ and is adjusted by a cyclic annealing schedule [19] to alleviate the vanishing latent variable issue [3]." ], [ "CVAE Implementation.", "When implementing each network in Eq.REF , we use the prompt prefix $g_t^{pre}$ to represent the task $t$ because $g_t^{pre}$ involves the task name that can exclusively identify $t$ .", "Fig.", "REF shows the overall architecture of our PCLL model, in which we use an unidirectional transformer [66] to encode the concatenated sentence $g_t^{pre}\\oplus x$ into hidden representations.", "Then an attention-average block [17] is introduced to pool the hidden representations of $g_t^{pre}$ and $g_t^{pre}\\oplus x$ to single vectors, which are further fed into a prior network $p_\\phi (z|t)$ and recognition network $q_\\psi (z | x, t)$ respectively.", "Next, the reparametrization trick [36] is used to obtain latent variables $z$ from the prior and posterior distributions.", "Then $z$ is injected to the decoder $p_\\theta (x | z, t)$ by adding to each token embedding (word embedding and position embedding, elementwisely) of the prompt [17], [38].", "In PCLL, the decoder $p_\\theta (x | z, t)$ shares the same parameters with the PLM-based task solver $f_\\theta $ .", "This allows us to inherit the advantage of PLM and leverage a unified model to solve each task and generate pseudo samples simultaneously." 
], [ "Pseudo Sample Generation", "Generating pseudo samples for learned tasks involves two steps: (1) PCLL generates a pseudo input utterance $x$ guided by a latent task distribution using the CVAE-based generator.", "Specifically, for each seen task $t^{\\prime },~(t^{\\prime }<t)$ , the model samples a latent variable $z_{t^{\\prime }}$ from the prior network $p_\\phi (z_{t^{\\prime }}|t^{\\prime })$ with the constructed prompt prefix $g_{t^{\\prime }}^{pre}$ as the input.", "Then the decoder takes $z_{t^{\\prime }}$ and $g_{t^{\\prime }}^{pre}$ , and decodes them into the pseudo input $x$ using top-k sampling Using other diversity enhanced decoding scheme may help produce more diverse pseudo samples [68].", "We leave it for future works.", "[28].", "(2) PCLL generates the output $y$ associated with $x$ using the solver (i.e., following Fig.", "REF )." ], [ "Knowledge Distillation", "Previous generative replay approaches indistinguishably interleave pseudo data with the current task's training data.", "However, this naive approach hurts the model performance since these pseudo data may contain noise and may drift from the real data distribution.", "In this study, we utilize a knowledge distillation (KD) [27] process to prevent our model from being affected by these noisy pseudo data.", "When training on a new task $t$ , we treat the model obtained on previous tasks $\\mathcal {T}^{t-1}$ as a fixed teacher model $f_{\\theta _\\text{Tch}}$ .", "For each input-output pair $(x,y)$ in the pseudo data, $f_{\\theta _{\\text{Tch}}}$ is distilled on the generated pseudo data to the current model $f_\\theta $ by maximizing the token-level distillation objective: $\\begin{split}& \\mathcal {L}_{\\text{LM}}^{\\text{KD}} =\\scriptstyle \\sum \\limits _{l=1}^{|g_t(x,y)|}\\sum \\limits _{v \\in \\mathcal {V}} p_{\\theta _\\text{Tch}}(v | g_t(x,y)_{<l}) \\log p_{\\theta } (v | g_t(x,y)_{<l}) \\\\\\!\\!\\!& + \\scriptstyle \\sum \\limits _{l=1}^{|y|} \\sum \\limits _{v \\in \\mathcal {V}} p_{\\theta _\\text{Tch}}(v | g_t(x), y_{<l}) \\log p_{\\theta } (v | g_t(x), y_{<l}),\\!\\!\\end{split}$ where $g_t(x,y)<l$ and $y<l$ refers to the token sequence before the $l$ -th token in $g_t(x,y)$ and $y$ , respectively.", "$\\mathcal {V}$ represents the vocabulary set.", "Similarly, when training the CVAE module, we replace the reconstruction term $\\mathcal {L}_{REC}$ of in Eq.", "REF with a distillation objective: $\\begin{split}\\mathcal {L}_{\\text{REC}}^{\\text{KD}} = \\mathop {\\mathbb {E}}\\limits _{q_{\\psi }(z | x, t)} \\scriptstyle \\sum \\limits _{l=1}^{|x|} \\sum \\limits _{v \\in \\mathcal {V}} & p_{\\theta _\\text{Tch}}(v |z,t, x_{<l}) \\times \\\\& \\log p_{\\theta } (v |z, t, x_{<l}),\\end{split}$ and thus we maximize the following objective over the pseudo data $\\mathcal {L}_{\\text{CVAE}}^{\\text{KD}} = \\mathcal {L}_{\\text{REC}}^{\\text{KD}} - \\beta \\mathcal {L}_{\\text{KL}}$ .", "Note that the KD process introduced in PCLL is different from previous KD approaches in LL.", "Specifically, [9] train an independent teacher model on every new task, while PCLL uses the model from previous tasks as the teacher model.", "Moreover, [64] distills hidden states, while our method learns predicted distributions of the teacher model.", "Fig.REF illustrates the training objectives used in our method.", "Specifically, when learning a new task $t$ , we optimize PCLL on training samples of $t$ with the following objective: $\\mathcal {L}_{\\text{\\text{LM}}} + \\mathcal {L}_{\\text{CVAE}}$ .", "For pseudo 
samples of previous tasks $t^{\\prime }, (t^{\\prime }<t)$ , we optimize the loss $\\mathcal {L} = \\alpha (\\mathcal {L}_{\\text{LM}}^{\\text{KD}} + \\mathcal {L}_{\\text{CVAE}}^{\\text{KD}}) + (1-\\alpha )(\\mathcal {L}_{\\text{LM}}+\\mathcal {L}_{\\text{CVAE}}),$ where $\\alpha \\in [0,1]$ is a scalar used to adjust knowledge distillation terms." ], [ "Datasets", "We evaluate the PCLL method on intent detection and slot filling based on public NLU benchmarks: For intent detection, we collect six datasets that carry intent annotations: HWU [43], BANKING [5], CLINC [37], SNIPS [11], AITS [26], and TOP [23].", "The dataset TOP is divided into three disjoint subsets TOP-S1, TOP-S2, and TOP-S3, and these three subsets along with the other five datasets are regarded as separate LL tasks to increase the total number of tasks for sequential training.", "Finally, we have eight tasks to be learned sequentially for this intent detection experiment.", "For slot filling, we adopt five datasets that provide slot labels: SNIPS, AITS, DSTC [54], MIT-MOVIE, and MIT-RESTAURANT groups.csail.mit.edu/sls/downloads.", "Each dataset above is regarded as a separate LL task, and thus five tasks are learned in lifelong slot filling experiments.", "More descriptions about datasets are in Appendix ." ], [ "Implementation Details", "We use the pretrained 12-layer GPT2 model [52] to initialize the encoder and decoder of our CVAE model.", "The prior network and the recognition network are both set to be a 2-layer MLP with hidden size of 128.", "When learning a new task $t$ , PCLL balances the training data of $t$ and pseudo samples by generating $\\gamma N_{t}$ pseudo samples for previously learned tasks.", "$\\gamma $ is the sampling ratio and $\\gamma $ is set to 0.2 in our experiment following [63].", "Each task for intent detection and slot filling is trained for 5 and 10 epochs, respectively.", "We train PCLL on six random permutations of the task order.", "See REF and REF for more details." 
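The token-level distillation objective and the combined replay loss reduce to a few lines. This is a hedged sketch in which the student and teacher logits are assumed to be aligned at the same prediction positions, and the value of `alpha` is illustrative.

```python
import torch.nn.functional as F

def token_kd_loss(student_logits, teacher_logits):
    """Token-level distillation: at every prediction position, the student
    matches the teacher's full next-token distribution over the vocabulary."""
    p_t = F.softmax(teacher_logits, dim=-1).detach()   # teacher is frozen
    logp_s = F.log_softmax(student_logits, dim=-1)
    return -(p_t * logp_s).sum(-1).sum(-1).mean()      # vocab, length, batch

def pseudo_sample_loss(l_lm_kd, l_cvae_kd, l_lm, l_cvae, alpha=0.5):
    """L = alpha*(L_LM^KD + L_CVAE^KD) + (1-alpha)*(L_LM + L_CVAE);
    the value of alpha here is illustrative."""
    return alpha * (l_lm_kd + l_cvae_kd) + (1.0 - alpha) * (l_lm + l_cvae)
```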
], [ "Baselines", "We compare PCLL with the following baselines: Fine-tune directly fine-tunes the model on the task stream without preventing catastrophic forgetting; EWC [59] and MAS [1] are two regularization methods that mitigate forgetting by penalizing changes of important parameters for learned tasks; LAMOL-g and LAMOL-t [63] are two variants of the generative replay method LAMOL that control the generation of pseudo samples either using a global special token (LAMOL-g) or task-specific special tokens (LAMOL-t); L2KD [9] improves LAMOL by assigning an extra teacher for each new task to perform knowledge distillation; ER [56] preserves previously seen real samples for replay to prevent forgetting.", "We also consider some architecture-based baselines: HAT [60] creates a task-based hard attention during training; CTR [34] inserts continual learning plug-ins into BERT to mitigate forgetting and encourage knowledge transfer; Adapter [45] builds residual adapter for each task independently.", "Since works in [42] and [51] are specially designed for dialogue state tracking and few-shot learning, respectively, we do not consider them as our baselines.", "Besides the above baselines, we further evaluate the model performance when all tasks are trained simultaneously in a multitask learning setting (Multi), which is often seen as an upper bound of LL.", "For fair comparisons, all baselines are implemented following either the settings of [63], or their own reported settings.", "For ER, we store 1% of previously seen samples in memory following the setting of [45]." ], [ "Evaluation Metrics", "We use the accuracy score, and macro-averaged F1 score [10] to evaluate the performance of intent detection and slot filling tasks, respectively.", "Moreover, we consider access to a test set for each of the $T$ tasks to learn in the LL process, and define $R_{i,j}$ as the test score of the task $j$ after finishing learning the task $i$ .", "We follow previous studies [44], [6] to use the following two metrics to evaluate the performance of LL: (1) Average Score (Score) is defined as the average test score of all $T$ tasks after the LL process: $\\mathrm {Score} = \\frac{1}{T}\\sum _{j=1}^T R_{T,j}$ .", "(2) Learning Curve Area (LCA) is the area under the $Z_b$ curve, which captures the model's performance on all $T$ tasks [7].", "Specifically, $Z_b$ is the average score for all seen tasks at the training step $b$ .", "Here, high Score and high LCA are preferred for a good LL model." 
], [ "Main Results", "Table REF shows the performances of our model PCLL and all the baselines.", "Our method PCLL significantly outperforms all baselines by a large margin on both intent detection and slot filling tasks.", "To better understand the LL process, we also plot the curve of the average score for all the models when trained using the same task order (see Fig.", "REF ).", "From those results, we can observe that: (1) Regularization-based methods (EWC and MAS) suffer from serious catastrophic forgetting, consistent with the observation of [45].", "(2) Generative replay methods LAMOL-g, LAMOL-t, and L2KD alleviate the forgetting issue to some extent.", "However, replaying real samples (i.e., ER) performs much better.", "This indicates that the quality of samples used for replaying is critical to addressing catastrophic forgetting, which matches our motivation to improve generative replay by generating high-quality pseudo samples.", "Our method PCLL achieves higher performance than ER, indicating that PCLL can generate high-quality pseudo samples under the guidance of task distributions.", "Our analyses in Sec.", "REF further prove this claim.", "(3) Architecture-based methods HAT, CTR, and Adapter achieve good performance.", "However, PCLL still outperforms these baselines.", "This further validates the effectiveness of PCLL.", "Note that replay-based methods such as PCLL can be used together with these architecture-based methods to further improve the LL performance.", "(4) From Fig REF , we can notice that when switching to new tasks, PCLL retains more knowledge about previous tasks (less performance degradation) compared to the baselines.", "This suggests that PCLL has a better ability to consolidate knowledge and mitigate catastrophic forgetting for LL.", "Figure: Learning curves of different methods on intent detection tasks.", "The dotted lines mean task switching." ], [ "Ablation Studies", "We conduct ablation studies to verify the effectiveness of each proposed component in PCLL.", "(1) w/o Latent means no latent distribution is modeled for each task, i.e., the CVAE model in Section REF is removed, and pseudo samples are generated by directly feeding the prompt prefix into the LM $f_\\theta $ without incorporating task-specific statistics.", "(2) w/o Task ID means no task indicators are involved in the prompts.", "In other words, we design a task-independent prompt prefix by replacing the task ID with a general description “current task” (see Appendix REF for more details).", "In this way, the CVAE model degenerates to a VAE model that captures a global latent space for all tasks.", "(3) w/o KD means that the knowledge distillation process in Section REF is not applied.", "Table: Ablation studies on two NLU tasks.", "Each result is an average of 6 random task orders.From Table  REF , we can see that: (1) Capturing task-specific latent distributions and incorporating them in the pseudo-sample generation process is crucial for building better LL models (w/o Latent).", "(2) Using task-specific prompts helps to generate high-quality pseudo samples, thereby improving the LL performance (w/o Task ID).", "(3) The proposed knowledge distillation process does mitigate the effects of noisy pseudo-samples and is beneficial for consolidating previously learned knowledge to prevent forgetting (w/o KD)." ], [ "Soft Prompts vs. 
Manual Prompts", "We conduct analyses on soft prompts by replacing manually designed prompts with soft tokens in PCLL.", "Specifically, the prompt prefix $g_t^{pre}$ and postfix $g_t^{post}$ in Eq.", "REF are replaced by several randomly initialized task-specific soft (learnable) tokens [41].", "We also vary the lengths of these soft prompts to analyze their behaviors.", "Results in Table REF show that: (1) Longer prefix prompts (i.e.", "more parameters guiding the pseudo-sample generation) generally lead to better LL performance; (2) Longer postfix prompts may not always lead to better LL performance.", "This may be because the postfix prompts are less important than prefix prompts since they do not participate in the pseudo-sample generation.", "Longer postfix prompts may bring in more noise, degenerating the performance; (3) Using manual prompts in PCLL outperforms all its soft-prompt variants even though some soft prompts are much longer than manual prompts.", "This justifies our claim that manual prompts carrying rich semantic information help to alleviate the mismatch between fine-tuning and pre-training of PLM and capture tasks' distributions, and thus mitigate catastrophic forgetting in lifelong learning.", "Table: Applying soft prompts on lifelong intent detection tasks.", "#prefix and #postfix indicate the lengths of prefix and postfix prompts, respectively.", "Each result is an average of 6 random task orders." ], [ "Different Designs.", "We validate different designs of manual prompts in PCLL.", "Specifically, we implement five different prompt templates with different lengths (Appendix REF ).", "We observe that different manual prompts yield almost the same performance.", "This indicates that our method is robust to the design of manual prompts.", "(See Table REF in the Appendix for more details)." ], [ "Visualization of Attentions.", "We provide the visualization of the attention scores over several manual prompts employed by PCLL.", "High attention scores of task names in Fig.", "REF indicate that the task indicators play an important role in our manually designed prompts (see Appendix REF )." ], [ "Qualities of Pseudo Samples", "We validate the quality of pseudo samples generated by PCLL and all our generative replay baselines on intent detection tasks.", "We use the distinct score Dist-n [39] to measure the proportion of unique n-grams in the generated pseudo samples' inputs ($n$ =1,2,3,4).", "Higher Dist-n indicates more diverse generated pseudo samples, which is usually preferred because diverse samples help to approximate task distributions.", "As shown in Table REF , PCLL can generate more diverse pseudo samples compared to other generative replay methods.", "This demonstrates that pseudo samples constructed by our method are closer to real samples.", "Further, we measure whether the generated pseudo samples can restore the distribution of real samples by visualizing samples' feature space with t-SNE [65].", "As shown in Fig.", "REF , pseudo samples generated by PCLL are clustered in a similar pattern compared to real samples, while those of LAMOL-t are scattered in the feature space.", "It shows that the pseudo samples generated by PCLL share closer distribution with the real samples compared to our baselines (see Appendix  REF for more details).", "Table: Distinct scores for generated pseudo samples." 
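The Dist-n metric itself is simple to reproduce. The sketch below assumes whitespace tokenization, which may differ from the tokenizer actually used in the paper.

```python
def dist_n(texts, n):
    """Dist-n: unique n-grams divided by total n-grams over the inputs."""
    ngrams, total = set(), 0
    for text in texts:
        toks = text.split()
        for i in range(len(toks) - n + 1):
            ngrams.add(tuple(toks[i:i + n]))
            total += 1
    return len(ngrams) / total if total else 0.0

samples = ["book a table for two", "book a flight to boston"]
print([round(dist_n(samples, n), 2) for n in (1, 2)])  # [0.8, 0.88]
```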
], [ "Analyses of Latent Variables", "To further analyze the behavior of the pseudo sample generator, we visualize the latent space captured by the recognition network on slot filling tasks.", "Specifically, for each sample in the test dataset, we extract a latent variable $z$ from its posterior distribution and use the t-SNE algorithm [65] to visualize these variables in 2D space.", "It can be seen from Figure REF that the latent spaces of different tasks are well clustered and clearly separated.", "This indicates that the latent variable $z$ is able to capture task-specific knowledge among learned tasks.", "We also analyze the influence of $z$ 's dimensions in Appendix REF .", "Figure: t-SNE visualization of latent variables." ], [ "Influence of Sampling Ratio $\\gamma $", "We analyze the influence of the sampling ratio $\\gamma $ (ranging from 0.01 to 1.0) on the performance of PCLL.", "The results in Table REF indicate that PCLL is more effective in improving the LL performance when considering a small number of pseudo samples (See more details in Appendix REF )." ], [ "Case Study", "We present several pseudo samples generated by PCLL and baselines (Appendix ).", "From Table REF , we can notice that pseudo samples from PCLL are of high quality and consistent with each task.", "This indicates that capturing tasks' distribution by PCLL to enhance generative replay is effective for LL." ], [ "Conclusion", "In this paper, we propose PCLL to enhance generative replay for addressing catastrophic forgetting of lifelong learning in building NLU modules of ToD systems.", "To construct high-quality pseudo samples, PCLL captures task-specific distributions with a prompt conditioned VAE to guide the generation of pseudo samples.", "Empirical results on two NLU tasks and extensive analyses demonstrate the superior performance of PCLL and the high quality of its generated pseudo samples.", "Currently, we do not consider lifelong learning in the low-resource setting where only limited labeled data are available.", "In the future, we will extend our framework to lifelong few-shot learning." 
], [ "Limitations", "In this paper, we propose a novel method PCLL to enhance generative replay by incorporating tasks' statistics into pseudo sample generation using a prompt conditioned VAE.", "Here are some limitations of our work: We have not investigated lifelong learning in the low-resource setting where only limited labeled data are available.", "In future works, we will consider combining PCLL with meta-learning [77] to extend our framework to a lifelong few-shot learning setting.", "We will also extend previous approaches of using unlabeled data [74] to build lifelong learning dialogue models.", "Currently, we only focus on improving the generative-replay based lifelong learning approach, and have not considered architecture-based methods for lifelong learning.", "However, our method PCLL can be readily combined with the architecture-based approach by leveraging parameter-efficient modules (e.g., Adapter [29], [73], LoRA [30]) into the model architecture to further mitigate the catastrophic forgetting issue.", "We will explore this direction in the future.", "Moreover, although our prompt templates shared among tasks have already encouraged the transfer of knowledge among different tasks to some extent, we have not measured the effects of knowledge transfer in this work.", "It is straightforward to combine PCLL with some specifically designed prompt sharing strategy (like L2P [69]) to encourage knowledge transfer among different tasks.", "We will leave these studies as future work.", "Our experiments are limited to text-based tasks.", "In future works, we will consider to investigate features brought by multi-modal dialogue inputs [79].", "Lifelong learning of intent detection and slot filling tasks are crucial for building a human-like task-oriented dialogue system.", "All our experiments are conducted on public available datasets to avoid ethical concerns.", "All terms for using these datasets are strictly followed in our study.", "The metrics used in our paper are automatic and do not need manual labor.", "There are no direct ethical concerns in our study.", "Research on this paper was supported by Alibaba Group through Alibaba Research Intern Program and Hong Kong Research Grants Council (Grant No.", "16204920)." 
], [ "Details of Datasets", "We list the statistics of datasets for the intent detection and slot filling in Table  REF and give detailed descriptions as follows.", "ATIS consists of audio recordings and corresponding manual transcripts about humans asking for flight information on automated airline travel inquiry systems.", "The data consists of 17 unique intent categories.", "BANKING contains 13,083 utterances related to banking domain with 77 different fine-grained intents.", "CLINC contains 10 domains (e.g., travel, kitchen, utility, etc.)", "and 150 different intent classes.", "DSTC consists of slot annotations spanning 4 domains (buses, events, homes, rental cars).", "HWU includes 64 intents spanning 21 domains (e.g., alarm, music, IoT, news, calendar, etc.)", "MIT_RESTAURANT is a semantically tagged training and test corpus in BIO format.", "MIT_MOVIE is a semantically tagged training and test corpus in BIO format.", "We choose “eng” corpus for implementation which consists of simple queries.", "TOP is a dataset of 44K utterances where each utterance is annotated with a hierarchical semantic representation.", "SNIPS contains crowdsourced queries distributed among 7 user intents of various complexity.", "Table: Statistics of datasets for intent detection and slot filling." ], [ "Prompt Examples of NLU Tasks", "We provide some detailed examples for inputs and outputs of the model with the designed prompts in PCLL.", "For intent detection, when we train on “BANKING” task, an input utterance $x$ of the language model (LM) for a sample is modified as “For an utterance from the BANKING task, “I already have one of your cards, how do I link them?” has the following intent ”, the output of LM $y$ is its corresponding intent annotation: “Card linking”.", "For the ablation study of w/o Task ID, the prompt of the above sample becomes “For an utterance from the current task, “I already have one of your cards, how do I link them?””.", "For slot filling, when we train on the “MIT-RESTAURANT” task, an input utterance x is “Does the Casanova restaurant at Kendall Square offer a fixed price menu?” of LM is modified as “In the MIT-RESTAURANT task, if there are any slots and values, what are they in this sentence: “Does the Casanova restaurant at Kendall Square offer a fixed price menu?”?", "Answer: ”, the output $y$ locating the contained slot-value pairs is modified as “Restaurant name: Casanova; Location: Kendall Square.”.", "Here, different slot-value pairs are formatted as “slot: value” separated with “;”.", "If the input $x$ does not contain any slot-value pairs, we use the sentence “No slot in this sentence.” as the output $y$ ." ], [ "Different Task Orders", "We list the six random permutations of tasks that we use to implement all competing methods in Table  REF .", "Table: Results of the best performing baselines ER and LAMOL when using the GPT2-Medium model.", "Each result is an average of six random task orders." 
], [ "Model Implementation Details", "We use a pre-trained GPT2 model [52] as the initialization for the encoder and decoder of CVAE in PCLL.", "We set the maximum context length as 256.", "Our model contains a total number of 240M parameters.", "We train all competing methods on 1 Tesla-V100 GPU and it takes around 6 to 10 hours to train all the tasks.", "Moreover, the training and testing batch sizes are set to 64.", "The maximum learning rate is $5e-5$ , the Adam optimizer is used with parameters $\\beta _1=0.9$ , $\\beta _2=0.98$ and $\\epsilon =1e-8$ .", "The number of cycles for the cyclic annealing schedule is set to 4 in each epoch.", "When generating pseudo samples, the maximum decoded sequence length is set to 96." ], [ "Analysis of Latent Variable Dimensions", "We choose the dimension of the latent variable $z$ among 32, 128, 256, 512.", "The results of these dimensions are listed in Table  REF .", "We can notice that when we select the dimension of $z$ as 128, it can reach the best performance of lifelong intent detection tasks.", "This phenomenon is reasonable, when the dimension of $z$ is small, it may not catch enough information to model the task distribution; when the dimension is large, it may contain some noisy information, leading to poorer performance.", "Table: Analysis of different dimensions of the latent variable zz of PCLL on lifelong intent detection tasks.", "Each result is an average of six random task orders." ], [ "Analysis of Manual Prompts Designs", "We list five different manual templates as the designed prompts of intent detection in Table REF , where Prompt1 is the one we use in Table REF .", "Let ID refers the task name, x refers the input utterance and y means the intent of x.", "Table: Applying different manual prompts on lifelong intent detection tasks.", "Each result is an average of 6 random task orders.Table: Different manual prompts are designed for intent detection module of a ToD system." ], [ "Analysis of Prompt Attention", "We provide the visualization of the attention scores over several samples employed with our designed prompts for intent detection tasks.", "Specifically, the attention score on each prompt token is calculated using the averaged attention it receives when generating the output prediction.", "From the following Fig REF , we can notice that the task names do contain meaningful information to be attended to when generating predictions.", "Figure: Visualization of attention scores for the natural language prompts of PCLL." ], [ "Analysis of Pseudo-sample Quality", "We analyze the quality of generated pseudo samples with PCLL and other generative replay-based baselines.", "Specifically, we first fine-tune a pre-trained BERT [14] model using these observed real samples to construct a task classifier.", "This classifier can determine the task identity of a given sample, and it reaches an accuracy of 98.67% on a hold-out test set.", "The fine-tuned BERT is used to extract the representation vector of each sample, and the t-SNE algorithm [65] is used to map these vectors into 2-dimensions.", "For a specific task order “TOP-S1, HWU, SNIPS, BANKDING, CLINC, TOP-S2, TOP-S3, ATIS” in LL, we gather pseudo samples generated when learning the last task and visualize the feature space of these samples.", "Note that the last task, ATIS, is not shown in Fig.", "REF since there is no need to replay the last task." 
], [ "Analysis of Sampling Ratio", "Table REF shows the results on intent detection tasks.", "It can be seen that generating more pseudo samples helps to improve the LL performance.", "Besides, the performance gain slows down as the sampling ratio $\\gamma $ exceeds $0.2$ , i.e., generating 5 times more pseudo samples from $\\gamma = 0.01$ to $\\gamma = 0.05$ yields 10.48 absolute improvement on the Score metric, while increasing $\\gamma $ from 0.2 to 1.0 only yields 1.63 absolute improvement.", "Table: The LL performance on various sampling ratio γ\\gamma .", "Each result is an average of 6 random task orders." ], [ "Case Study", "We provide examples of generated pseudo samples from PCLL and LAMOL-t Table REF on the BANKING task of intent detection.", "We can observe that: 1) Compared to LAMOL-t, pseudo samples produced by PCLL are closer to real samples from the BANKING dataset; 2) Some pseudo samples generated by LAMOL-t are inconsistent with the current task.", "For example, LAMOL-t generates samples for the weather domain, which is not related to the BANKING task; 3) LAMOL-t may also generate unmatched inputs and outputs in its pseudo samples (last line in Table REF ).", "The above observations verify our claim that a single task-specific token is too weak to constrain the PLM, and our method PCLL helps to generate high-quality pseudo samples which are consistent with each task.", "Table: Real samples and generated pseudo samples for the BANKING task.We present more generated pseudo samples from PCLL and LAMOL along with real samples in Table  REF .", "For intent detection, we list real and pseudo samples from HWU tasks; for slot filling, we list those samples from MIT-RESTAURANT and DSTC tasks in Table REF .", "Table: Six random permutations of tasks for intent detection and slot filling.Table: Real samples and generated pseudo samples by PCLL and LAMOL-t." ] ]
2210.07783
[ [ "Robust Supermassive Black Hole Spin Mass-Energy Characteristics: A New\n Method and Results" ], [ "Abstract The rotational properties of astrophysical black holes are fundamental quantities that characterization the black holes.", "A new method to empirically determine the spin mass-energy characteristics of astrophysical black holes is presented and applied here.", "Results are obtained for a sample of 100 supermassive black holes with collimated dual outflows and redshifts between about zero and two.", "An analysis indicates that about two-thirds of the black holes are maximally spinning, while one-third have a broad distribution of spin values; it is shown that the same distributions describe the quantity $\\rm{(M_{rot}/M_{irr})}$.", "The new method is applied to obtain the black hole spin mass-energy, $\\rm{M_{spin}}$, available for extraction relative to: the maximum possible value, the irreducible black hole mass, and the total black hole mass, $\\rm{M_{dyn}}$.", "The total energy removed from the black hole system and deposited into the circumgalactic medium via dual outflows over the entire outflow lifetime of the source, $\\rm{E_T}$, is studied relative to $\\rm{M_{dyn}}$ and relative to the spin energy available per black hole, $\\rm{E_{spin}/(M_{\\odot}c^2)}$.", "The mean value of $\\rm{Log(E_T/M_{dyn})}$ is about $(-2.47\\pm 0.27)$.", "Several explanations of this and related results are discussed.", "For example, the energy input to the ambient gas from the outflow could turn off the accretion, or the impact of the black hole mass loss on the system could destabilize and terminate the outflow.", "The small values and restricted range of values of $\\rm{Log(E_T/M_{dyn})}$ and $\\rm{Log(E_T/E_{spin})}$ could suggest that these are fundamental properties of the primary process responsible for producing the dual collimated outflows." 
], [ "Introduction", "Black holes are ubiquitous in the universe.", "Supermassive black holes reside at the centers of galaxies and stellar-mass black holes populate galaxies.", "The two primary characteristics that describe an astrophysical black hole are the mass and spin of the hole (assuming the black hole has negligible charge).", "Often in astrophysical contexts, the mass of a black hole is empirically determined by the dynamics and properties of matter and light in the vicinity of the black hole.", "The black hole mass that will be measured is the total black hole mass, $\\rm {M}$ , which has contributions from the irreducible mass, $\\rm {M_{irr}}$ , and the mass-energy associated with the spin angular momentum of the black hole $J$ : $\\rm {M = (M_{irr}^2 + (Jc/(2GM_{irr})^2)^{1/2}}$ , where G is the gravitational constant and $c$ is the speed of light (e.g.", "Christodoulou 1970; Bardeen, Press, & Teukolsky 1972; Misner, Thorne, & Wheeler 1973; Rees 1984; Blandford 1990); this equation may be rewritten in the form of eqs.", "(3) and (9).", "The mass-energy that can be extracted from the spinning black hole, referred to here as the \"spin mass-energy,\" is $\\rm {M_{spin} = M - M_{irr}}$ , as described in detail by Thorne et al.", "(1986) (see their eq.", "3.88 and related discussion).", "Thus, there are two different quantities that have been referred to as spin or rotational mass-energy of the black hole in the literature (e.g.", "Rees 1984; Thorne et al.", "1986; Gerosa, Fabbri, & Sperhake 2022).", "For clarity, throughout this paper, the quantity $\\rm {M_{rot} \\equiv Jc/(2GM_{irr})}$ is referred to as the \"rotational mass.\"", "The quantity $\\rm {M_{spin} \\equiv M - M_{irr} \\equiv E_{spin} c^{-2}} $ is referred to as the \"spin mass-energy\" of the black hole and indicates the mass-energy that is available to be extracted from the black hole (see, for example, Blandford & Znajek 1977; Rees 1984; Thorne et al.", "1986, and Blandford 1990 for detailed discussions).", "The mass, $\\rm {M}$ , is also referred to as the dynamical mass, $\\rm {M_{dyn}}$ , since it is the total black hole mass that will be inferred by dynamical and other astronomical studies.", "The irreducible mass of an isolated black hole can not be reduced or decreased, but mass-energy associated with black hole spin can be extracted, thereby decreasing the total mass of the hole (Penrose 1969; Penrose & Floyd 1971; Blandford & Znajek 1977).", "Collimated outflows from supermassive black holes associated with active galactic nuclei (AGN) and stellar-mass black holes associated with X-ray binaries are likely to be powered, at least in part, by black hole spin (e.g.", "Blandford & Znajek 1977; MacDonald & Thorne 1982; Phinney 1983; Begelman, Blandford, & Rees 1984; Blandford 1990; Daly 1994, 1995; Moderski, Sikora, & Lasota 1998; Meier 1999; Koide et al.", "2000; Wan et al.", "2000; Punsly 2001; Daly & Guerra 2002; De Villiers, Hawley, & Krolik 2003; Gammie et al 2004; Komissarov & McKinney 2007; Beckwith, Hawley, & Krolik 2008; King, Pringle, & Hofmann 2008; Miller et al.", "2009; O'Dea et al.", "2009; Daly 2009a,b; Tchekhovskoy et al.", "2010; Daly 2011; Gnedin et al.", "2012; King et al.", "2013; Ghisellini et al.", "2014; Yuan & Narayan 2014; Daly & Sprinkle 2014; Daly 2016; Gardner & Done 2018; Krause et al.", "2019; Reynolds 2019; Daly 2019).", "In this case, the spin energy extracted during the outflow will cause the black hole mass to decrease.", "A source that undergoes multiple outflow events could 
significantly drain the spin energy of the hole and thereby decrease the black hole mass.", "The amount of spin energy extracted during outflow events have been estimated for radio sources with large-scale outflows such as FRI sources in galaxy-cluster environments (McNamara et al.", "2009; Daly 2009a,b, 2011), FRII sources (Daly 2009a,b; 2011), and several types of AGN and stellar-mass black holes (Daly 2020).", "FRI sources are extended radio sources that are \"edge-darkened\" while FRII sources, also known as classical doubles, are \"edge-brightened\" (Fanaroff & Riley 1974).", "In addition, the fraction of the spin energy extracted per outflow event has been estimated for FRII sources (Daly 2011), and is roughly a few to several percent.", "Thus, in models in which collimated outflows from the vicinity of a black hole are powered by black hole spin, the spin and spin-energy of the hole are expected to decrease as a result of the outflow.", "The fact that the mass-energy associated with black hole spin may be extracted, modified, or reduced, and thus that the total or dynamical mass of a black hole can be reduced may introduce dispersion in relationships between black hole mass and properties of the host galaxy (e.g.", "Kormendy & Richstone 1995; Ferrarese & Ford 2005; Kormendy & Ho 2013; Shankar 2013; Sesana et al.", "2014; Zubovas & King 2019; King & Nealon 2019; King & Pounds 2015).", "If black hole spin evolves with redshift, this is likely to cause an evolution in these relationships and their dispersion.", "Additionally, black hole spin is expected to evolve with redshift as a result of the merger and accretion history of the black hole (e.g.", "Hughes & Blandford 2003; Gammie et al.", "2004; Volonteri et al.", "2005, 2007; King & Pringle 2006, 2007; King et al.", "2008; Berti & Volonteri 2008; Ghisellini et al.", "2013).", "Thus, the study of black hole spin evolution provides insight into the merger and accretion history of supermassive black holes.", "Black hole spin may depend upon galaxy type or environment (e.g.", "Sesana et al.", "2014; Antonini et al.", "2015; King & Pounds 2015; Barausse et al.", "2017; King & Nealon 2019), which may lead to environmental changes in the relationship between black hole mass and galaxy properties, or a change in the dispersion of relationships (e.g.", "Zubovas & King 2012).", "The dispersion introduced may be complex and will depend upon the initial spin and irreducible mass of the black hole, the processes responsible for spinning up the hole such as accretion or mergers, processes which tap or reduce the spin of the hole, and the complex interaction of feedback, accretion, outflows, and other processes associated with the black hole, which are likely to play a role in determining the spin and thus spin mass-energy and dynamical mass of the hole (e.g.", "Belsole et al.", "2007; Worrall 2009; Voit et al.", "2015; Hardcastle & Croston 2020).", "In addition, it is likely that some sources undergo multiple outflow events (e.g.", "Hardcastle et al.", "2019; Bruni et al.", "2019, 2020; Shabala et al.", "2020), so that even if a small amount of the spin energy is extracted per outflow event, over time a substantial amount of spin energy can be extracted due to multiple outflow events.", "The distinction between dynamical mass, spin mass-energy, and irreducible mass of a black hole is also important when comparing empirically determined quantities with theoretically predicted quantities, such as those indicated by numerical simulations.", 
"Numerical simulations predict the expected black hole spin and mass evolution in the context of different black hole merger and accretion histories (e.g.", "King et al.", "2008; Volonteri et al.", "2013; Dubois, Volonteri, & Silk 2014; Sesana et al.", "2014; Kulier et al.", "2015).", "A comparison of simulation results with empirically determined results provides an important diagnostic of the merger and accretion histories of black holes located at the centers of galaxies.", "The number of available black hole spin values, and therefore black hole spin energies, has recently increased substantially.", "The development of the \"outflow method\" of empirically determining black hole spin and accretion disk properties developed and described by Daly (2016) and Daly (2019) (hereafter D16 and D19) and Daly et al.", "(2018), allow the empirical determination of the black hole spin function, spin, and, accretion disk properties such as the mass accretion rate and disk magnetic field strength for over 750 sources.", "D19 showed that the fundamental equation that describes an outflow powered at least in part by black hole spin, $\\rm {L_j \\propto B_p^2 \\rm {M_{dyn}}^2 F^2}$ (e.g.", "Blandford & Znajek 1977; Meier 1999; Tchekhovskoy et al.", "2010; Yuan & Narayan 2014) is separable and may be written as $\\rm {(L_j/L_{Edd}) = g_j ~(B/B_{Edd})^2 ~F^2}$ (see eq.", "6 from D19); here $\\rm {B_p}$ is the poloidal component of the accretion disk magnetic field, $\\rm {B}$ is the magnitude of disk magnetic field, $\\rm {B_{Edd}}$ is the Eddington magnetic field strength (e.g.", "Rees 1984; Blandford 1990; Dermer et al.", "2008; D19), $\\rm {B_{Edd}} \\approx 6 \\times 10^4 (\\rm {M_{dyn}}/10^8 M_{\\odot })^{-1/2}$ G, $\\rm {F^2} \\equiv \\rm {f(j)/f_{max}}$ is the normalized spin function (discussed in more detail in section 2), $\\rm {f_{max}}$ is the maximum value of the spin function $\\rm {f(j)}$ , and $\\rm {g_j}$ is the normalization factor for the beam power $\\rm {L_j}$ in units of the Eddington Luminosity, $\\rm {L_{Edd}}$ , $\\rm {(L_j/L_{Edd})(max) = g_j}$ .", "Note that the ratio $\\rm {(B_p/B)^2}$ is absorbed into the normalization factor $\\rm {g_j}$ , thus $\\rm {g_j}$ may depend upon AGN type, as discussed in sections 3.1 and 4 of D19.", "(Also note that even though the maximum value of the spin function $\\rm {f(j=1) = f_{max}} = 1$ , the normalization term $\\rm {f_{max}}$ is included in eqs.", "(1) and (7) for completeness since in some numerical simulations $\\rm {f(j)}$ is described by modified representations (e.g.", "Tchekhovskoy et al.", "2010).", "Here, since $\\rm {f_{max}} = 1$ , the terms \"spin function\" and \"normalized spin function\" are used interchangeably.)", "The spin functions obtained by D19 were converted to dimensionless black hole spin angular momentum values, and compared with values obtained with independent methods such as those discussed by Azadi et al.", "(2020) and Reynolds (2019); see also, for example, Gnedin et al.", "(2012), Patrick et al.", "(2012), King et al.", "(2013), Walton et al.", "(2013), Wang et al.", "(2014), García et al.", "(2015), Mikhailov et al.", "(2015, 2019), Vasudevan et al.", "(2016), Piotrovich et al.", "(2017, 2020), and Mikhailov & Gnedin (2018).", "A comparison of black hole spin parameters obtained independently with the outflow method and the continuum fitting method was possible for 15 of the sources studied with both methods (Azadi et al.", "2020), and consistent spin parameters were obtained with the two methods.", "And, 
a comparison was possible and very good agreement was found for six AGN and one stellar mass black hole studied with both the outflow method and the X-ray reflection method (e.g.", "Fabian et al.", "1989; Iwasawa et al.", "1997; Miller et al.", "2002; Reynolds 2019), which included all of the sources for which a comparison is currently possible.", "Thus, all sources for which independent spin angular momentum values could be compared indicate good agreement between independently determined values.", "The high spin values obtained are also consistent with expectations based on AGN luminosities (e.g.", "Sun & Malkan 1989; Davis & Laor 2011; Wu et al.", "2013; Trakhtenbrot 2014; Brandt & Alexander 2015; Trakhtenbrot, Volonteri, & Natarajan 2017).", "So, the expectation is that some significant fraction of black holes are likely to have high spin; there may also be a population of black holes with lower spin, and, of course, black hole spin is likely to be an evolving quantity.", "Here, a new method to study the spin mass-energy characteristics of black holes is presented and applied to a sample of 100 supermassive black holes with empirically determined black hole spin functions.", "This is important because the spin mass-energy of a black hole can be extracted, thereby reducing the total black hole mass, and energy channelled away from the hole can significantly impact the near and far field environments of the black hole.", "The traditional and new methods of empirically determining the spin mass-energy characteristics of black holes are described in sections 2.1 and 2.2, respectively.", "Empirically determined spin functions, $F^2$ , are used to obtain the spin mass-energy characteristics of the sample of 100 supermassive black holes, bypassing the use of dimensionless spin angular momenta, $j$ .", "The properties of the spin functions (see eq.", "7) are described and analyzed in section 2.3, and it is found that the sources are well described by a population of maximally spinning black holes plus a population of holes with a broad distribution of spin values.", "In section 3, the empirically determined spin functions are applied to obtain for each black hole: the spin mass-energy relative to the maximum possible value; the spin mass-energy relative to the irreducible and dynamical black hole mass; the spin mass-energy in units of solar masses; the ratio of the total outflow energy to the black hole spin energy; the ratio of the total outflow energy to the dynamical black hole mass; and the ratios of the rotational mass relative to the irreducible and dynamical black hole masses.", "The results are discussed and summarized in sections 4 and 5.", "All quantities are obtained in a spatially flat cosmological model with two components, a mean mass density relative to the critical value at the current epoch of $\\Omega _m = 0.3$ and a similarly normalized cosmological constant of $\\Omega _{\\Lambda } = 0.7$ .", "A value for Hubble’s constant of $H_0 = 70 ~\\rm {km~ s}^{-1} \\rm {Mpc}^{-1}$ is assumed throughout." 
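Before turning to the method, it may help to see how the separable outflow equation quoted above is inverted in practice to obtain the normalized spin function. The sketch below assumes the standard Eddington luminosity scaling $L_{Edd} \approx 1.26\times 10^{38}\,(M/M_{\odot})$ erg s$^{-1}$, which is not quoted in the text, and leaves the type-dependent normalization $g_j$ as an input.

```python
def spin_function_squared(L_j, M, B, g_j):
    """Invert (L_j/L_Edd) = g_j (B/B_Edd)^2 F^2 for the normalized spin
    function F^2.  Inputs: L_j in erg/s, M in solar masses, B in gauss,
    and the AGN-type dependent normalization g_j."""
    L_edd = 1.26e38 * M                  # standard Eddington luminosity
    B_edd = 6.0e4 * (M / 1e8) ** -0.5    # Eddington field strength (G)
    return (L_j / L_edd) / (g_j * (B / B_edd) ** 2)
```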
], [ "Method", "The traditional method of obtaining black hole spin mass-energy characteristics and some of the difficulties that arise in the application of this method to empirically determine spin mass-energy properties of astrophysical black holes are described in section 2.1.", "The new method avoids these difficulties by characterizing the spin mass-energy characteristics in terms of the spin function; the new method that will be applied here is presented in section 2.2.", "The properties of the empirically determined spin functions that will be applied to obtain and study the spin mass-energy characteristics of 100 supermassive black holes are discussed in section 2.3." ], [ "The Traditional Method", "As described in section 1, the rotational energy of a black hole contributes to the total dynamical black hole mass, $\\rm {M_{dyn}}$ , which is the mass that will be measured by a distant observer, and $\\rm {M_{dyn}^2 = M_{irr}^2 + M_{rot}^2}$ (see eq.", "3), where $\\rm {M_{rot} \\equiv (Jc/(2GM_{irr}))}$ .", "The spin energy $\\rm {E_{spin}}$ that may be extracted is $\\rm {E_{spin} = M_{spin} c^2}$ , where $\\rm {M_{spin} = M_{dyn} - M_{irr}}$ , and is referred to here as the black hole spin energy or spin mass-energy.", "Relationships between the dimensionless black hole spin angular momentum, $j$ , the total black hole mass, $\\rm {M}$ (also referred to as $\\rm {M_{dyn}}$ ), and the mass-energy that can be extracted from the black hole, $\\rm {M_{spin}}$ , are discussed, for example, by Misner, Thorne, & Wheeler (1973), Rees (1984), Blandford (1990), and Thorne et al.", "(1986).", "The dimensionless black hole spin angular momentum $j$ is defined in the usual way in terms of the spin angular momentum $J$ and the total black hole mass $M$ , $j \\equiv Jc/(G M^2)$ ; in other work, $j$ is sometimes represented with the symbol $a_*$ or $a/M$ .", "As described in section 1, the work of Thorne et al.", "(1986) (see also Rees 1984; Blandford 1990), indicates the following set of equations: $\\rm { M \\equiv M_{dyn}} = \\rm {M_{irr}} + \\rm {E_{spin}} c^{-2} = \\rm {M_{irr}} + \\rm {M_{spin}}$ and $\\rm {M_{irr}} = \\rm {M_{dyn}} \\left({{1 + (1-j^2)^{1/2}} \\over 2}\\right)^{1/2}~,$ where eq.", "(3) follows from $\\rm {M_{dyn}^2 = M_{irr}^2 + (Jc/(2GM_{irr}))^2}$ , discussed in section 1.", "Eqs.", "(2) and (3) indicate that ${\\rm {M_{spin}} \\over \\rm {M_{dyn}}} = 1 - \\left({\\rm {M_{irr}} \\over \\rm {M_{dyn}}}\\right)$ and ${\\rm {M_{spin}} \\over \\rm {M_{irr}}} = \\left({\\rm {M_{dyn}} \\over \\rm {M_{irr}}}\\right) -1~.$ Eqs.", "(3) & (5) indicate that ${\\rm {E_{spin}} \\over E_{spin,max}} = \\left({\\sqrt{2}(1+ \\sqrt{1-j^2})^{-0.5} - 1]\\over \\sqrt{2} -1} \\right) \\approx 2.41 \\left(\\rm {{M_{spin}} \\over {M_{irr}}}\\right),$ where $(\\rm {E_{spin}/E_{spin,max}})$ is obtained by dividing eq.", "(5) as a function of j by eq.", "(5) with $j=1$ , since $(\\rm {E_{spin,max}/M_{irr}})$ is obtained with eqs.", "(3) and (5) assuming a value of $j=1$ .", "Figure: The solid line shows the black hole spin energy available for extraction, E spin \\rm {E_{spin}} inunits of the maximum possible value of this energy, E spin , max \\rm {E_{spin,max}},versus thedimensionless black hole spin angular momentum jj (defined in section 2.1).", "The dotted line provides a comparison to a linear relationship.Figure: The solid line showsthe available black hole spin energy E spin \\rm {E_{spin}}normalized to the maximum possible value of the spin energy, E spin , max .\\rm {E_{spin,max}}.", 
"as a function of F\\rm {F}, the square root of the black hole spin function (defined by eq.", "7).", "The dotted line provides a comparison to a linear relationship.There are several factors that indicate it is preferable to rewrite equations (3 - 5) in terms of the spin function $\\rm {F^2 \\equiv f(j)/f_{max}}$ .", "In the application of the outflow method (D16, D19), the quantity that is determined empirically is $\\rm {F}$ , so it is preferable to be able to obtain the quantities on the left hand sides of equations (3-6) directly in terms of $\\rm {F}$ , where $F ~\\equiv ~ \\sqrt{f(j) \\over f_{max}}~ = ~{j \\over (1+\\sqrt{1-j^2})}~.$ To further complicate the use of $j$ to empirically characterize the spin properties of a black hole, the quantity $\\sqrt{(1-j^2)}$ indicates that values of $j$ that are greater than one cannot be accommodated, though the uncertainties associated with empirically determined black hole spin characteristics should allow for this possibility (e.g.", "Daly 2020).", "In addition, the relationship between the normalized spin energy of the black hole and the dimensionless spin angular momentum $j$ is highly non-linear as indicated by eq.", "(6) and illustrated by Fig.", "1.", "The spin energy $E_{1/2}$ is about half of the maximum possible spin energy for $j$ of about 0.9; the spin energy $E_{1/4}$ is about one quarter of the maximum value when $j$ is about 0.7; and the spin energy $E_{1/10}$ is about one tenth of the maximum value when $j$ is about 0.5.", "It is clear that the relationship between the normalized spin energy and $j$ is quite non-linear, and relatively high dimensionless spin angular momentum values of 0.5 and 0.7 indicate relatively low spin energy values of only 1/10 and 1/4, respectively, of the maximum possible value.", "Another way to state this is that relatively low values of spin energy indicate substantial values of dimensionless black hole spin angular momentum $j$ .", "Thus, if a black hole has any spin energy at all, it is expected to have a value of $j$ substantially different from zero.", "The facts that the relationship between the dimensionless spin angular momentum $j$ and the normalized spin energy $\\rm {(E_{spin}}/E_{spin,max})$ is highly non-linear, that empirically determined values of $j$ cannot exceed unity due to the term $\\sqrt{1-j^2}$ even though observational uncertainties require this flexibility, and that the empirically determined quantity is $\\rm {F}$ , suggest that some other function should be used to empirically determine the spin mass-energy properties of black holes.", "And, as noted earlier, it is important to determine how much of the dynamical mass could be extracted and thereby decrease the black hole mass.", "Spin energy also indicates the potential impact of the \"spin energy reservoir\" that is stored in spinning black holes on the near and far field environments of black holes." 
], [ "The New Method", "In contradistinction to these issues, the relationship between both the black hole spin function $F$ , $F^2$ , or $\\rm {Log(F)}$ and the black hole mass-energy associated with the spin do not suffer from the limitations described in section 2.1, as illustrated in Figs.", "2, 3, and 4.", "Though theoretically $F$ is not expected to exceed unity, empirically determined values of $F$ will exceed unity due to measurement uncertainties.", "There are several additional reasons why it is preferable to use the quantity $F$ , $F^2$ , or $\\rm {Log(F)}$ : (1) there is no mathematical constraint analogous to that for $j$ (described in section 2.1) that requires that $F$ must be less than or equal to one; (2) the relationship between $F$ and spin energy is only slightly non-linear as illustrated in Fig.", "2; (3) the spin energy in units of the maximum spin energy is very well approximated by the value of $F^2$ for values of $F^2$ between about zero and 1.5 or so as illustrated in Fig.", "3.", "Allowing the exponent of $\\rm {F}$ to vary, it is found that $\\rm {Log(\\rm {E_{spin}}/E_{spin,max}}) \\approx 1.75~ \\rm {Log(F)}$ , as illustrated in Fig.", "4.", "It should be noted that equations (9-12) should be used to obtain uncertainties for $\\rm {(M_{irr}}/\\rm {M_{dyn})}$ , $\\rm {(M_{spin}}/\\rm {M_{dyn})}$ , $\\rm {(M_{spin}}/\\rm {M_{irr})}$ , $\\rm {(E_{spin}}/\\rm {E_{spin,max})}$ and related quantities, which are listed below.", "To re-write equations (3-6) in terms of $F$ , we manipulate the relationship between $F$ and $j$ obtained and discussed by D19 to obtain $0.5~ [1+ (1-j^2)^{1/2}] = (F^2 +1)^{-1}.$ This is then substituted into eq.", "(3) to obtain: ${\\rm {M_{irr}} \\over \\rm {M_{dyn}}} = ~(F^2 +1)^{-1/2}~.$ Combining eqs.", "(2) and (9) indicates that ${\\rm {M_{spin}} \\over \\rm {M_{dyn}}} = 1 - \\left({\\rm {M_{irr}} \\over \\rm {M_{dyn}}}\\right)= ~[1-(F^2+1)^{-1/2}],$ and ${\\rm {M_{spin}} \\over \\rm {M_{irr}}} = \\left({\\rm {M_{dyn}} \\over \\rm {M_{irr}}}\\right) -1~= ~[(F^2+1)^{1/2} -1].$ Eq.", "(11) indicates that ${\\rm {E_{spin}} \\over E_{spin,max}}={\\rm {M_{spin}} \\over M_{spin,max}} = \\left({\\sqrt{F^2+1} -1\\over \\sqrt{2} -1}\\right) \\approx 2.41 \\left({\\rm {M_{spin}} \\over \\rm {M_{irr}}}\\right)$ where $(\\rm {E_{spin}/E_{spin,max}})$ is obtained by dividing eq.", "(11) as a function of F by eq.", "(11) with $F=1$ , since $(\\rm {E_{spin,max}/M_{irr}})$ is obtained with eqs.", "(9) and (11) assuming a value of $F=1$ .", "Figure: The solid line shows the available black hole spin energy E spin \\rm {E_{spin}}normalized to the maximum possible value of this spin energy, E spin , max \\rm {E_{spin,max}}, as a function of the black hole spin function, F 2 \\rm {F^2}(defined by eq.", "7).", "The dotted line provides a comparison to a linear relationship.Figure: Log (E spin /E spin , max )\\rm {Log(\\rm {E_{spin}}/E_{spin,max})}as a function of the Log (F)\\rm {Log(F)} is illustrated with input values of Log (F)\\rm {Log(F)}selected to match those of the 100 FRII sources that will be considered here,but the input values of Log (F)\\rm {Log(F)} could have been selected as in Figs.", "2 and 3.The unweighted best fit line (solid line) has a slope of 1.75±0.011.75 \\pm 0.01, y-interceptof -0.011±0.002-0.011 \\pm 0.002, and χ 2 =0.03\\chi ^2 = 0.03.", "Here the exponent that F\\rm {F} is raised to is allowed to vary, while in Fig.", "3 this exponent is fixed.The symbols and colors are as in Fig.", "6.Equations (9-12) will be applied to 
empirically determine the ratio of the dynamical black hole mass to the irreducible black hole mass, $\rm {M_{dyn}}/\rm {M_{irr}}$ , the ratio of the spin mass-energy to the dynamical mass $\rm {M_{spin}}/\rm {M_{dyn}}$ , the ratio of the spin mass-energy to the irreducible black hole mass $\rm {M_{spin}}/\rm {M_{irr}}$ , and the spin energy in terms of the maximum spin energy, ${\rm {E_{spin}}/E_{spin,max}}$ for a sample of 100 FRII sources.", "Of course, the maximum values of $\rm {M_{spin}}$ obtained from eqs.", "(10) and (11) with $F=1$ remain $(\rm {M_{spin}}/\rm {M_{dyn}})(max) \simeq 0.29$ and $(\rm {M_{spin}}/\rm {M_{irr}})(max) \simeq 0.41$ , while the maximum value of $(\rm {M_{dyn}}/\rm {M_{irr}})$ obtained with $F=1$ (or $j = 1$ ) is $\sqrt{2}$ .", "Eqs.", "(9-12) indicate the following uncertainties: $\delta (\rm {M_{dyn}}/\rm {M_{irr}}) = F [(F^2+1)^{-0.5}]~ \delta {F}$ ; $\delta (\rm {M_{spin}}/\rm {M_{dyn}}) = F [(F^2+1)^{-1.5}] ~\delta {F}$ ; $\delta (\rm {M_{spin}}/\rm {M_{irr}}) = \delta (\rm {M_{dyn}}/\rm {M_{irr}})$ , and $\delta (\rm {E_{spin}}/E_{spin,max}) = F (F^2+1)^{-0.5} (\sqrt{2}-1)^{-1} \delta F$ .", "Of course, $\rm {\delta (Log(x)) = (\delta (x)/x)/ln}(10)$ .", "These uncertainties are included in the Tables and shown in plots of quantities versus redshift.", "The total mass-energy associated with the spin of the black hole can be obtained by multiplying $\rm {(M_{spin}}/\rm {M_{dyn})}$ by the empirically determined mass of the black hole, $\rm {M}$ .", "This is the first time the empirically determined black hole mass is required.", "Clearly, ${\rm {E_{spin}} \over (M_{\odot } c^2)} = {\rm {M_{spin}} \over M_{\odot }} = M \times \left({\rm {M_{spin}} \over \rm {M_{dyn}}}\right) .$ All empirically determined black hole masses are dynamical black hole masses.", "This will be discussed in more detail in section 4.", "Empirically determined black hole masses and their uncertainties are indicated by the Eddington luminosities listed in D16 & D19.", "The rotational mass, defined and described in section 1, can also be represented in terms of the spin function, $\rm {F^2}$ .", "It is easy to show that $\rm {{M_{rot} \over M_{irr}} = F}$ and $\rm {M_{rot} \over M_{dyn}} = {F \over \sqrt{F^2 +1}}.$ Each of these quantities can easily be obtained for the 100 sources studied here from Tables 1 and 2, given that $\rm {Log(M_{rot}/M_{irr}) = Log(F)}$ and $\rm {Log(M_{rot}/M_{dyn})} = \rm {[Log(F) - Log(M_{dyn}/M_{irr})]}$ , as discussed in section 4.1."
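Equations (9)-(15), together with the uncertainty expressions just listed, amount to a handful of closed-form maps from $\rm {F}$ to the black hole mass-energy ratios. The minimal sketch below (function and variable names are our own, for illustration only) collects them in one place and reproduces the quoted maxima at $F = 1$.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def spin_mass_energy(F, dF=0.0):
    """Mass-energy ratios of eqs. (9)-(12), (14), and (15) as functions of F,
    with the first-order uncertainties listed at the end of section 2.2."""
    g = F**2 + 1.0
    ratios = {
        "M_irr/M_dyn":       g**-0.5,                         # eq. (9)
        "M_spin/M_dyn":      1.0 - g**-0.5,                   # eq. (10)
        "M_spin/M_irr":      g**0.5 - 1.0,                    # eq. (11)
        "E_spin/E_spin,max": (g**0.5 - 1.0) / (SQRT2 - 1.0),  # eq. (12)
        "M_rot/M_irr":       F,                               # eq. (14)
        "M_rot/M_dyn":       F * g**-0.5,                     # eq. (15)
    }
    errors = {
        "M_dyn/M_irr":       F * g**-0.5 * dF,
        "M_spin/M_dyn":      F * g**-1.5 * dF,
        "M_spin/M_irr":      F * g**-0.5 * dF,
        "E_spin/E_spin,max": F * g**-0.5 / (SQRT2 - 1.0) * dF,
    }
    return ratios, errors

# A maximally spinning black hole, F = 1, reproduces the quoted maxima:
# M_spin/M_dyn ~ 0.29, M_spin/M_irr ~ 0.41, and M_irr/M_dyn = 1/sqrt(2).
ratios, _ = spin_mass_energy(1.0)
for name, value in ratios.items():
    print(f"{name:>18} = {value:.3f}")
```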
], [ "Properties of the Spin Function", "Spin functions, $F^2$ , for the 100 supermassive black holes associated with FRII sources obtained by D19 are considered here.", "The values of $\\rm {Log(F)}$ and their uncertainties are listed in Tables 1 and 2, and a histogram of values is shown in Fig.", "5.", "The sources are drawn from the flux limited 3CRR catalogue of radio sources (Laing, Riley & Longair 1983), and thus are subject to well-known selection effects such as the loss of lower-luminosity sources as source redshift increases (e.g.", "see Fig.", "1 of McLure et al.", "2004).", "To illustrate the impact of this selection effect on the histogram of each quantity, the redshift distribution of each quantity is provided and discussed; for example, the redshift distribution of $\\rm {Log(F)}$ is shown in Fig.", "6, and will be discussed below.", "Figure: Thehistogram of Log (F)\\rm {Log(F)} is shown as the solid line.", "The population is well described with a two component model: a population of maximally spinning black holes with Log (F)=0\\rm {Log(F)} = 0 and standard deviation σ=0.15\\sigma = 0.15,illustrated by the Gaussian (dotted line), plus a population with-0.6< Log (F)<0-0.6 < \\rm {Log(F)} < 0 with a tilted distribution (see section 2.3).Of the sample of 100 black holes studied, about 2/3 are maximally spinning,and about 1/3 have a slowly declining distribution of spin functionstoward lower values of Log (F)\\rm {Log(F)}.", "Log (M rot /M irr )= Log (F)\\rm {Log(M_{rot}/M_{irr})} = \\rm {Log(F)}(see eq.", "14), so this is also the distribution of thevalues of Log (M rot /M irr )\\rm {Log(M_{rot}/M_{irr})}.For all histograms,the bin size is selected to be close to the mean value of the uncertainty of thequantity listed in Tables 1 and 2.The values of $\\rm {Log(F)}$ are illustrated with the histogram shown in Fig.", "5.", "The bin size in this and all subsequent histograms is selected to be close to the mean value of the uncertainty for the quantity displayed.", "A maximally spinning black hole is expected to have a value of $\\rm {Log(F)} = 0$ , and sources that are not maximally spinning are expected to have values of $\\rm {Log(F)} < 0$ .", "There are several sources with values of $F$ greater than one, or $\\rm {Log(F)}$ greater than zero.", "To see if the number of such sources is similar to that expected given the mean uncertainty of $\\delta \\rm {Log(F)} = 0.15$ per source for the sources listed in Tables 1 and 2, consider dividing the sources into two populations: those that are maximally spinning, and thus have $\\rm {Log(F)} = 0$ , and a second population with some distribution of $\\rm {Log(F)}$ , all of which have $\\rm {Log(F)}$ less than zero.", "The population of maximally spinning sources is illustrated with a Gaussian distribution centered on $\\rm {Log(F)} = 0$ , with a standard deviation equal to the mean uncertainty per source, $\\sigma = 0.15$ , and the peak height is determined by the number of sources that are maximally spinning (and it is determined below that about 66 of the 100 sources fall into this category) so the maximum height of the Gaussian is $66/\\sqrt{(2 \\pi )}$ , as illustrated with the dotted line in Fig.", "5.", "This Gaussian provides a good description of the sources with $\\rm {Log(F)} > 0$ , and the properties of the second population can be deduced by assuming the population of maximally spinning holes is symmetric about $\\rm {Log(F)} = 0$ and subtracting this population from the total number of sources.", "There are 22 sources 
with $0 \\le \\rm {Log(F)} \\le 0.15$ , which indicates that for a population of sources with an intrinsic value of $\\rm {Log(F)} =0$ and a Gaussian distribution of uncertainties we expect there to be about nine sources with $0.15 \\le \\rm {Log(F)} \\le 0.3$ $\\rm {Log(F)}$ , and about one with $0.3 \\le \\rm {Log(F)} \\le 0.45$ .", "For the sample studied here, there are ten sources between +1 and +2 $\\sigma $ for $\\rm {Log(F)} > 0$ (including the two sources with $\\rm {Log(F)} \\simeq 0.31$ with this group), one source between +2 and +3 $\\sigma $ , and zero sources that deviate by more than +3 $\\sigma $ .", "This is just about as expected for a population of sources centered at $\\rm {Log(F)} =0$ with $\\sigma \\simeq 0.15$ .", "Extending this to the sources with $\\rm {Log(F)} < 0$ , we can obtain the number of sources over and above that expected based on a symmetric distribution about $\\rm {Log(F)} = 0$ of the population of maximally spinning black holes to study the properties of the second population of sources.", "This indicates that, over and above the sources expected from the Gaussian distribution (based on the numbers listed above), there are 13 additional sources with $-0.15 \\le \\rm {Log(F)} < 0$ , 11 additional sources with $-0.3 \\le \\rm {Log(F)} < -0.15$ , and ten additional sources with $-0.6 < \\rm {Log(F)} < -0.3$ , for a total of 34 sources above those expected from the Gaussian distribution.", "This suggests that the sources studied here consist of two populations: a single population of maximally spinning black holes with $\\rm {Log(F)} = 0$ and $\\sigma (\\rm {Log(F)}) = 0.15$ with a total of 66 sources, plus another population that has a tilted distributed in $\\rm {Log(F)}$ , all of which have $-0.6 \\le \\rm {Log(F)} < 0$ , with a total of 34 sources.", "The number of sources per unit $\\rm {Log(F)}$ in this second population is about 90 for $-0.15 \\le \\rm {Log(F)} < 0$ , about 70 for $-0.3 \\le \\rm {Log(F)} < -0.15$ , and about 30 for the remainder of the sources, which have $-0.6 < \\rm {Log(F)} < -0.3$ .", "Part or all of this decline in the number of sources per unit $\\rm {Log(F)}$ as $\\rm {Log(F)}$ decreases could be due to observational selection effects, although part could be due to an intrinsic decline.", "These values indicate that about two-thirds of the sample of 100 supermassive black holes are maximally spinning and are described by a Gaussian distribution about $\\rm {Log(F) = 0}$ with $\\sigma \\simeq 0.15$ .", "About one-third of the sample are less than maximally spinning and have a tilted distribution of spin functions, with the $\\rm {Log(F)}$ ranging from about (-0.6 to 0), with the number of sources per unit $\\rm {Log(F)}$ declining as $\\rm {Log(F)}$ decreases, as illustrated in Fig.", "5.", "The redshift distribution of $\\rm {Log(F)}$ is shown in Fig.", "6.", "The FRII radio sources are categorized based on their spectroscopic nuclear properties.", "The sample considered here includes high excitation galaxies (HEG), low excitation galaxies (LEG), quasars (Q), and weak sources (W) and each type is represented by a different color; the classifications listed here were obtained from Grimes et al.", "(2004).", "An unweighted fit is provided, and the fitted parameters are summarized in Table 3.", "It is clear from Fig.", "6 that sources with low values of $\\rm {Log(F)}$ drop out of the sample as redshift increases.", "This is because sources with lower radio luminosity have lower beam power and thus lower values of $\\rm 
{Log(F)}$ ; the beam power is discussed in more detail in section 4.3 (see also D16 and D19).", "The radio selection effect that causes sources with lower radio luminosity to drop out of the sample as redshift increases causes sources with lower values of $\rm {Log(F)}$ to drop out of the sample as redshift increases.", "Thus, supermassive black holes with a broad range of values of $\rm {Log(F)}$ are present at low redshift, while those with low values of $\rm {Log(F)}$ drop out as redshift increases from zero to two.", "This selection effect causes a dearth of sources with low values of $F$ or $\rm {Log(F)}$ at high redshift, which is clearly evident in Figs.", "5 and 6, and is due to the flux-limited nature of the survey from which the sources studied here are drawn.", "This same selection effect is also apparent in all of the quantities that depend only upon $\rm {Log(F)}$ .", "Figure: The redshift (z) distribution of $\rm {Log(F)}$ is shown here.", "The theoretically expected maximum value of this quantity is 0.", "In this and all similar figures, HEG are denoted by open black circles, Q are denoted by red stars, LEG are denoted by blue squares, and W are denoted by green triangles.", "The parameters describing the best fit line in this and all similar figures are listed in Table 3; all fits are unweighted.", "This is also the redshift distribution for the quantity $\rm {Log(M_{rot}/M_{irr})}$ (see eq.", "14).", "The data for a sample of 100 FRII sources presented and discussed by D16 & D19 are considered and applied here.", "The results are listed in Tables 1 and 2, and summarized in Table 3, where the typical uncertainty per source of each quantity is included in (brackets).", "Full details obtained with high excitation radio galaxies (HEG) are included in Table 1 while those obtained with low excitation galaxies (LEG), radio loud quasars (Q), and weak sources (W) (as defined by Grimes et al.", "2004) are listed in Table 2.", "Included in Tables 1 and 2 are the $\rm {Log(F)}$ values obtained by D19 and the uncertainty of each value is also included here.", "The values of $F$ listed in Tables 1 and 2 were substituted into eqs.", "(9-12) to solve for $(\rm {M_{dyn}}/\rm {M_{irr}})$ , $(\rm {M_{spin}}/\rm {M_{dyn}})$ , $(\rm {M_{spin}}/\rm {M_{irr}})$ and $(\rm {E_{spin}}/\rm {E_{spin,max}})$ , and the results are listed in Tables 1 and 2 and illustrated in Figures 7 - 14.", "Uncertainties of these quantities are obtained using the expressions listed at the end of section 2.2.", "Black hole masses obtained from McLure et al.", "(2004, 2006) and listed by D19 were applied using eq.", "(13) to obtain $\rm {M_{spin}}$ ; the results are illustrated in Figs.", "15 and 16 and listed in the Tables.", "The total outflow energy, $\rm {E_T}$ , was obtained as described by O'Dea et al.", "(2009) (see also Leahy et al.", "1989; Daly 2002), and the values relative to the spin energy available for extraction, $(\rm {E_T/E_{spin}})$ , and relative to the black hole dynamical mass, $(\rm {E_T/M_{dyn}})$ , are listed in the Tables and illustrated in Figs.", "17-20.", "Uncertainties for all quantities were obtained by propagating the original uncertainties on all input quantities.", "In all of the histograms, the bin size was selected to be similar to the mean uncertainty of the quantity presented.", "It is helpful to consider the redshift distribution of each quantity when viewing the histograms to get some perspective on the contributions to the histograms from 
sources at different redshift.", "For many quantities of interest, sources with low values drop out as the redshift increases, which causes the low end of the histogram to be depleted of similar sources that are likely to exist at higher redshift.", "This can be explained by the fact that the parent population of sources is derived from a flux limited sample, as discussed for example in section 2.3.", "Figure: Histogram of $\rm {Log(\rm {E_{spin}}/\rm {E_{spin,max}})}$ .", "A value of $\rm {F=1}$ (i.e.", "$\rm {Log(F)} = 0$ ) substituted into eq.", "(12) indicates an expected maximum value of this quantity of 0.", "The sources with values greater than about zero are the sources with values of $\rm {Log(F)} > 0$ .", "For more information, see the caption to Fig.", "5.", "Figure: The redshift distribution of $\rm {Log(\rm {E_{spin}}/E_{spin,max})}$ .", "A value of $\rm {F=1}$ or $\rm {Log(F)} = 0$ indicates a value of $\rm {Log(\rm {E_{spin}}/E_{spin,max})}$ of zero.", "The symbols are as in Fig.", "6 and the fit is unweighted.", "Values describing the best fit line can be deduced from those listed for $\rm {Log(M_{spin}/M_{irr})}$ in Table 3, as described in the footnote to that Table.", "Figure: Histogram of $\rm {Log(\rm {M_{dyn}}/\rm {M_{irr}})}$ .", "A value of $\rm {F=1}$ substituted into eq.", "(9) indicates an expected maximum value of this quantity of about 0.15.", "The sources with values greater than about 0.15 are the sources with values of $\rm {Log(F)} > 0$ ; almost all of these sources have uncertainties $\delta \rm {Log(\rm {M_{dyn}}/\rm {M_{irr}})}$ that are within one to two sigma of 0.15.", "The same is true for all of the histograms that follow.", "For more information, see the caption to Fig.", "5.", "Figure: The redshift distribution of $\rm {Log(\rm {M_{dyn}}/\rm {M_{irr}})}$ .", "A value of $\rm {F=1}$ indicates a value of $\rm {Log(\rm {M_{dyn}}/\rm {M_{irr}})}$ of about 0.15.", "The symbols are as in Fig.", "6 and the fit is unweighted.", "Figure: Histogram of $\rm {Log(\rm {E_{spin}}/(\rm {M_{dyn}} c^2))}$ , or $\rm {Log(\rm {M_{spin}}/\rm {M_{dyn}})}$ .", "A value of $\rm {F=1}$ substituted into eq.", "(10) indicates an expected maximum value of this quantity of about -0.53.", "For more information, see the caption to Fig.", "5.", "Figure: The redshift distribution of $\rm {Log(\rm {E_{spin}}/(\rm {M_{dyn}} c^2))}$ , or $\rm {Log(\rm {M_{spin}}/\rm {M_{dyn}})}$ .", "The theoretically expected maximum value of this quantity is about -0.53.", "Symbols and information are as in Fig.", "6.", "Figure: Histogram of $\rm {Log(\rm {E_{spin}}/(\rm {M_{irr}} c^2))}$ , or $\rm {Log(\rm {M_{spin}}/\rm {M_{irr}})}$ .", "A value of $\rm {F=1}$ substituted into eq.", "(11) indicates an expected maximum value of this quantity of about -0.38.", "For more information, see the caption to Fig.", "5.", "Figure: The redshift distribution of $\rm {Log(\rm {E_{spin}}/(\rm {M_{irr}} c^2))}$ , or $\rm {Log(\rm {M_{spin}}/\rm {M_{irr}})}$ .", "The theoretically expected maximum value of this quantity is about -0.38.", "Symbols and information are as in Fig.", "6.", "Figure: Histogram of $\rm {Log(\rm {E_{spin}}/(M_{\odot } c^2))}$ , or $\rm {Log(\rm {M_{spin}}/M_{\odot })}$ obtained with eq.", "(13).", "For more information, see the caption to Fig.", "5.", "Figure: The redshift distribution of $\rm {Log(\rm {E_{spin}}/(M_{\odot } c^2))}$ , or $\rm {Log(\rm {M_{spin}}/M_{\odot })}$ obtained with eq.", "(13).", "Symbols and information are as in Fig.", "6.", "Figure: Histogram of the Log of the total energy output by the dual collimated jets during the outflow event, $\rm {E_T}$ , relative to the black hole spin energy available, $\rm {E_{spin}}$ .", "The theoretically expected maximum value of this quantity is 0.", "For more information, see the caption to Fig.", "5.", "Figure: Log of the total energy output in the form of dual collimated jets during the outflow event, $\rm {E_T}$ , relative to the spin energy available, $\rm {E_{spin}}$ , vs Log(1+z).", "The theoretically expected maximum value of this quantity is 0.", "Symbols and information are as in Fig.", "6.", "Figure: Histogram of the Log of the total energy output in the form of dual collimated jets during the outflow event, $\rm {E_T}$ , relative to the total (dynamical) black hole mass, $\rm {M_{dyn}}$ .", "For more information, see the caption to Fig.", "5.", "Figure: Log of the total energy output in the form of dual collimated jets during the outflow event, $\rm {E_T}$ , relative to total dynamical black hole mass, $\rm {M_{dyn}}$ , vs Log(1+z).", "Symbols and information are as in Fig.", "6." ], [ "Characteristics that Depend only upon the Spin Function", "The properties of the spin function are described in section 2.3.", "The properties of the quantities obtained with the spin function reflect the properties of the spin function.", "The fraction of the total dynamical black hole mass, $\rm {M_{dyn}}$ , that is associated with the black hole spin mass-energy, $\rm {M_{spin}}= \rm {E_{spin}}/c^2$ , is typically close to the maximum possible value for the classical double radio sources studied here.", "For example, the mean values of HEG, Q, and LEG sources for the quantities $\rm {Log(\rm {M_{dyn}}/\rm {M_{irr}})}$ , $\rm {Log(\rm {M_{spin}}/\rm {M_{dyn}})}$ , $\rm {Log(\rm {M_{spin}}/\rm {M_{irr}})}$ , $\rm {Log(F)}$ , and $\rm {Log(E_{spin}/E_{spin,max})}$ are less than, though close to, the predicted maximum values of these quantities of about 0.15, -0.53, -0.38, 0, and 0, respectively (see Table 3).", "The W sources, which are all low redshift sources, have smaller mean values of all of these quantities relative to the other source types (and all of their values are close to the y-intercept values).", "This is not surprising since these quantities, shown as a function of redshift, clearly illustrate that sources with lower values of these quantities drop out as redshift increases due to well known selection effects.", "The classical double sources studied have redshifts between about zero and 2, and are selected from the 178 MHz radio flux limited 3CRR sample, described by Laing, Riley, & Longair (1983), as discussed in section 2.3.", "It is easy to see the impact of missing lower luminosity sources as redshift increases.", "Note that the upper envelope of the distributions provides a guide as to how parameters that describe sources with the largest spin functions, which are typically sources with the highest beam powers relative to the Eddington luminosity, evolve with redshift.", "The fact that black holes associated with the production of the classical double radio sources studied here have values of $\rm {F}$ close to unity 
and thus are very rapidly spinning is not surprising.", "Given that classical double radio sources are among the most powerful long-lived outflows observed in the universe, it is expected that they would be produced by rapidly spinning black holes with spin energies close to the maximum possible value (e.g.", "Rees 1984; Begelman, Blandford, & Rees 1984; Blandford 1990).", "The values of $\rm {Log(E_{spin}/E_{spin,max})}$ , $\rm {Log(\rm {M_{dyn}}/\rm {M_{irr}})}$ , $\rm {Log(\rm {M_{spin}}/\rm {M_{dyn}})}$ , and $\rm {Log(\rm {M_{spin}}/\rm {M_{irr}})}$ , which are listed in Tables 1 and 2, are consistent with or less than the maximum expected values of about 0, 0.15, -0.53, and -0.38 within one to two sigma, and have distributions that reflect those of the spin functions used to obtain these values.", "Equation (14) indicates that the rotational mass defined in section 1 relative to the irreducible mass is equal to $\rm {F}$ , the square root of the spin function.", "This means that the distribution of values of $\rm {Log(M_{rot}/M_{irr})}$ is the same as that discussed for $\rm {Log(F)}$ in section 2.3 for the 100 supermassive black holes studied here.", "Thus, about 2/3 (or about 66) of the 100 sources studied here have a Gaussian distribution of $\rm {Log(M_{rot}/M_{irr})}$ with a mean value of zero and standard deviation of about 0.15.", "The remaining 1/3 (or about 34 sources) have $\rm {Log(M_{rot}/M_{irr})} < 0$ , with the tilted distribution described in section 2.3.", "This also means that the values of $\rm {Log(F)}$ obtained by D19 for black holes associated with 656 additional AGN and 102 measurements of four stellar mass black holes translate directly to empirically determined values of $\rm {Log(M_{rot}/M_{irr})}$ .", "Finally, eq.", "(15) indicates that values of $\rm {Log(M_{rot}/M_{dyn})}$ can also be obtained for the 100 sources studied here plus the additional AGN and stellar mass black holes mentioned above."
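The Gaussian bookkeeping behind this two-population description (sections 2.3 and 4.1) can be checked directly. The short sketch below, an illustration added here that assumes only scipy, computes the expected bin counts for 66 maximally spinning sources with $\sigma = 0.15$ and recovers the counts of roughly 22, 9, and 1 used in section 2.3.

```python
from scipy.stats import norm

# Maximally spinning component: Log(F) distributed as N(0, sigma) for 66 sources.
n_max, sigma = 66, 0.15
for lo, hi in [(0.00, 0.15), (0.15, 0.30), (0.30, 0.45)]:
    expected = n_max * (norm.cdf(hi / sigma) - norm.cdf(lo / sigma))
    print(f"{lo:.2f} <= Log(F) < {hi:.2f}: about {expected:.1f} sources expected")
```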
], [ "Spin Mass-Energy", "The spin mass-energy per source available for extraction is obtained using eq.", "(13) where $\\rm {M} = \\rm {M_{dyn}}$ is the empirically determined dynamical mass of the black hole.", "The black hole masses listed in D16 and D19 are applied here, and were obtained from McLure et al.", "(2004, 2006).", "In computing the uncertainty of the spin mass-energy that is listed in the Tables, the way that the empirically determined black hole mass enters into the empirically determined black hole spin function $F \\propto \\rm {M_{dyn}}^{-0.28}$ (e.g.", "D19) is taken into account.", "The spin mass-energy associated with black holes is an energy reservoir that is available to be tapped and when tapped can significantly affect the black hole environment; this is referred to as the \"spin energy reservoir.\"", "For supermassive black holes, this can significantly affect the host galaxy and the environment in the vicinity of the host galaxy, as discussed in section 1 (see also Donahue & Voit 2022 and references therein).", "As indicated in Figs.", "15 and 16, the energy that is available per black hole is quite substantial.", "Since the black hole mass associated with classical double radio sources is strongly evolving with redshift, so is the spin mass-energy (see eq.", "13).", "It is clear that sources at lower redshift contribute to the low mass end of the histogram while sources at higher redshift contribute to the high mass end of the histogram.", "The spectroscopic types that contribute to the lower spin energy end of the histogram include LEG and W sources, which are prevalent at lower redshift, while Q sources are prevalent at higher redshift and contribute preferentially to the high spin energy end of the histogram.", "The HEG sources contribute at all redshifts, as is evident from Fig.", "16." 
], [ "Total Outflow Energy Relative to Spin Mass-Energy and Relative to Dynamical Black Hole Mass", "The fraction of the available spin energy that is produced per outflow event, $(\\rm {E_T/E_{spin}})$ , is obtained by dividing the total energy that is carried away from the black hole system during the outflow event, $\\rm {E_T}$ , by the spin energy that is available, $\\rm {E_{spin}}$ .", "And, the total outflow energy relative to the total (dynamical) black hole mass is $\\rm ({E_T/M_{dyn}})$ .", "Note that the empirically determined quantities $\\rm {E_T}$ and $\\rm {M_{dyn}}$ are obtained with completely independent methods.", "The range of values for the total outflow energy per source, $\\rm {E_T}$ , span about an order of magnitude (e.g.", "see Figs.", "40 and 41 from O'Dea et al.", "2009), the range of values of $\\rm {E_{spin}}$ span about two orders of magnitude (see Figs.", "15 and 16), and the range of values of values of $\\rm {M_{dyn}}$ span about two orders of magnitude (see Fig.", "3 of D19).", "The total outflow energy is obtained by multiplying the total outflow timescale by the beam power, where the beam power is the energy per unit time output in the form of dual jets from the black hole system (e.g.", "O'Dea et al.", "2009).", "It has been shown conclusively for classical double (FRII) sources such as those studied here that the total outflow timescale is very well characterized as a function of only the beam power (Daly 1994; Daly et al.", "2008, 2009).", "Note that the relationship between the total outflow timescale and the beam power is the foundation of the use of classical double radio galaxies for cosmological studies.", "The fact that this application for cosmological studies yields results that are very similar to and consistent with those obtained with other methods indicates that this model is on secure footing, as discussed in detail, for example, by Daly et al.", "(2008, 2009).", "The total outflow energy per source obtained by O'Dea et al.", "(2009) is used here, and an identical method is applied to obtain the total outflow lifetime from the beam power and thus the total outflow energy for the remaining sources in the sample.", "The total outflow energy per source, referred to as $\\rm {E_T}$ , is divided by the spin energy $\\rm {E_{spin}}$ to obtain the fraction of the spin energy that could be extracted per outflow event, $(\\rm {E_T/E_{spin}})$ .", "And, $\\rm {E_T}$ is divided by the black hole mass $\\rm {M_{dyn}}$ to obtain the fraction of the black hole mass that is produced per outflow event, $(\\rm {E_T/M_{dyn}})$ .", "Note that the total outflow energy $\\rm {E_T} $ is independent of the black hole mass and only depends on the beam power of the source, which is empirically determined using the strong shock method (reviewed in detail by O'Dea et al.", "2009).", "The results obtained here indicate that only a small fraction, about 1.5% of the spin energy available per black hole is produced per outflow event; see the values listed in Tables 1 and 2, and summarized in Table 3.", "The fraction $\\rm {(E_T/E_{spin})}$ is independent of source type (see Table 3), except for the W sources, and there are only three low redshift W sources in the sample.", "The results indicate that the mean value of $\\rm {Log(E_T/E_{spin})}$ for the 100 sources studied is about $\\rm {Log(E_T/E_{spin})} \\simeq -1.81 \\pm 0.26$ .", "This translates to a small fraction of the black hole dynamical mass being output per outflow event, as indicated by the values of 
"This translates to a small fraction of the black hole dynamical mass being output per outflow event, as indicated by the values of $\rm {Log(E_T/M_{dyn})}$ listed in Tables 1 and 2 and summarized in Table 3.", "The mean value of this quantity is $\rm {Log(E_T/M_{dyn})} \simeq -2.47 \pm 0.27$ for the 100 sources studied.", "This translates to a mean value of the total outflow energy relative to dynamical black hole mass of about $(\rm {E_T/M_{dyn}}) \simeq 3.4 \times 10^{-3}$ .", "These results are consistent with those obtained by Daly (2009a), who studied a sample of 19 classical double radio sources and found that about a few $\times 10^{-3}$ of the black hole dynamical mass is output in the form of large-scale jets per source per outflow event.", "As mentioned earlier, there is no overlap in the methods used to obtain $\rm {E_T}$ and $\rm {M_{dyn}}$ .", "There are several possible explanations for the fact that the total energy output over the source lifetime in the form of large-scale jets is small compared with the black hole dynamical mass and compared with the spin energy available for extraction, and that each has a relatively narrow distribution.", "1.", "When a certain fraction of the black hole mass-energy is deposited into the ambient gas, the gas is heated and expands, and the accretion is shut off; this would be consistent with the result obtained here and by Daly (2009a).", "2.", "The spin energy extraction, which decreases the black hole dynamical mass, destabilizes the black hole - accretion disk - magnetic field configuration, causing the spin energy extraction to be terminated.", "3.", "The black hole masses have been overestimated, and the total spin energy available for extraction is smaller than obtained based on current black hole mass estimates; this would increase the ratio $\rm {(E_T/E_{spin})}$ and the ratio $\rm {(E_T/M_{dyn})}$ .", "4.", "The beam powers are much larger than indicated empirically, and thus carry away significantly more energy than already accounted for.", "5.", "The black hole spin function $F$ , and thus the dimensionless spin angular momentum and spin energy, has been overestimated.", "This would only impact $\rm {E_{spin}}$ and thus $(\rm {E_T/E_{spin}})$ , but would not impact $\rm {M_{dyn}}$ and thus would not impact $\rm {(E_T/M_{dyn})}$ .", "6.", "Only transitions between particular spin states are allowed, as described by Pugliese & Quevedo (2022) and Pugliese & Stuchlík (2021).", "7.", "Something else.", "Each of these possibilities is considered.", "Possibility 1. 
could explain the observed values and small range of values of the quantities $\rm {(E_T/E_{spin})}$ and $\rm {(E_T/M_{dyn})}$ obtained here and by Daly (2009a).", "The results indicate that the energy deposited into the ambient gas over the entire lifetime of an FRII source relative to the black hole dynamical mass is about $\rm {Log(E_T/M_{dyn})} \simeq (-2.47 \pm 0.27)$ (see Table 3 and Fig.", "19 in this work, and Table 1 and Fig.", "1 from Daly 2009a).", "These results are consistent with the empirically determined value of about $-2.3 \pm 0.5$ obtained by Donahue & Voit (2022) (see their Fig.", "20) based on empirical studies of the energy input required to heat and lift the circumgalactic medium and shut off accretion for a sample of relatively low redshift sources.", "One interesting caveat is that the FRII sources studied here have redshifts between about zero and two, and the source sizes change significantly with redshift (e.g.", "Fig.", "8 of Guerra, Daly, & Wan 2000), so the result obtained here would have to be independent of the details of the energy input, such as where in the galactic and circumgalactic medium the energy is deposited, and independent of the structure (density and temperature) of the galactic and circumgalactic medium.", "In this scenario, the accretion would be shut off by the heating and lifting of the circumgalactic medium; the medium would eventually settle down and another outflow episode would occur.", "Each outflow event would decrease the black hole spin energy by a very small amount, as long as the angular momentum extracted during the outflow event exceeds that gained by the black hole during the accretion event.", "One puzzling factor for this interpretation is that the ranges of values of $\rm {(E_T/M_{dyn})}$ and $\rm {(E_T/E_{spin})}$ obtained here and by Daly (2009a) are narrow, and seemingly independent of radio source size and source redshift (see Fig.", "20, and the value of the slope listed in Table 3).", "One rather radical idea to explain the small values and small range of these quantities is to posit that the majority of the spin energy is extracted per outflow event, but most of it does not end up in the form of a dual collimated outflow (which would comprise a set fraction of the total energy extracted per unit time), but is in some difficult-to-detect form such as neutrinos or gravitational waves.", "In the outflow method, the normalization of eq.", "(1) is a free parameter that is empirically determined.", "The empirically determined value is consistent with the theoretical prediction in the Meier (1999) model (see section 3.3 of D19), and is also consistent with the normalization in the Blandford & Znajek (1977) model.", "Thus, this hypothetical other process would occur simultaneously with the Blandford & Znajek (1977) or Meier (1999) mechanism but would extract substantially more spin energy per unit time, by factors of about 10 - 100, and the energy extracted would be in some form that is not readily observable.", "This process could work hand-in-hand with possibilities 2 and/or 3.", "Note that for FRII sources the outflow timescale depends only upon the beam power, indicating that the accretion timescale must exceed the outflow timescale unless some process directly related to the beam power shuts off the accretion.", "Otherwise, the outflow timescale would be set by the accretion timescale and would not be a function of only the beam power, as has been shown conclusively by Daly et al.", "(2009).", "Possibility 2. 
is quite interesting.", "As the spin energy is extracted, the black hole mass decreases causing the accretion disk to expand slightly and over a long period of time; the outflow timescales are typically a few $\\times 10^7$ years (e.g.", "O'Dea et al.", "2009).", "If the stability of the magnetic field that plays a crucial role in the spin energy extraction requires a particular ratio of the disk thickness to the disk radius, as the disk expands the thin disk may be disrupted.", "That is, it is possible that the disk and thus the anchor of the magnetic field is disrupted when the fraction of the black hole dynamical mass is decreased by the particular value of a few tenths of a percent found here and by Daly (2009a).", "The decrease of the black hole mass would have a small impact on the radius of the disk, but could have a large impact on the disk thickness, which is likely to be small relative to the disk radius (see, for example, Blandford & Globus 2022; Kolos et al.", "2021).", "Possibility 2. could work hand-in-hand with possibility 1.", "It's not clear how large a fraction of the black hole mass-energy would have to be removed to de-stabilize the accretion disk - magnetic field - black hole configuration and thus terminate the outflow.", "This possibility would be more palatable if the fraction of the black hole mass removed was larger, as considered in point 3.", "This brings us to possibility 3.", "If the black hole masses have been systematically overestimated, then the spin energy values obtained with eq.", "(13) decrease and the ratios $\\rm {(E_T/E_{spin})}$ and $\\rm {(E_T/M_{dyn})}$ increase.", "There are some recent studies that suggest that black hole dynamical masses may be systematically overestimated (e.g.", "Grier et al.", "2019).", "However, the brightest sources studied here and by D19 have a bolometric accretion disk luminosity that is right at the Eddington luminosity (see Fig.", "4 of D19), and any decrease in black hole mass would cause these sources to be radiating at super-Eddington levels.", "Possibility 4 is very unlikely based on the following.", "The direct comparison between the total outflow energy and the black hole mass indicates that the outflow energy is a roughly constant fraction of the black hole mass, with $\\rm {(E_T} /\\rm {M_{dyn})} \\approx 3 \\times 10^{-3}$ independent of the spin properties of the black hole (see Figs.", "19 and 20, Tables 1, 2, and 3, and Daly 2009a).", "As noted by O'Dea et al.", "(2009), the total outflow energy scales as the beam power $L_j^{0.5}$ , so to significantly increase the outflow energy by factors of 10 to 100, the beam power would have to increase by factors of $10^2 - 10^4$ , which is highly unlikely since the beam power is insensitive to offsets from minimum energy conditions (e.g.", "O'Dea et al.", "2009).", "In addition, the largest beam powers are about 10 % of the Eddington luminosity (e.g.", "Daly et al.", "2018), so this would require the maximum beam powers to be significantly larger than the Eddington luminosity.", "And, as noted above, the empirically determined beam power normalizations match those predicted theoretically in the Meier (1999) and Blandford & Znajek (1977) models.", "Possibility 5. 
is unlikely because independent spin determinations for supermassive black holes associated with classical double radio sources agree with those obtained with the outflow method, and indicate high spin values (e.g.", "Azadi et al.", "2020).", "Fifteen of the quasars studied by D19 with the outflow method overlap with those studied by Azadi et al.", "(2020) with the continuum fitting method, and the spin values obtained with the independent methods agree.", "Similarly, for local AGN, spin values obtained with the outflow method agree with those obtained independently with the X-ray reflection method for the six sources for which a comparison was possible (D19).", "Possibility 5. would require that spin determinations published to date for supermassive black holes by other groups using independent methods be incorrect by large factors.", "The remaining options are possibility 6., that only transitions between particular black hole spin states are allowed, as described by Pugliese & Quevedo (2022) and Pugliese & Stuchlík (2021), or possibility 7., something else." ], [ "Summary", "Mass-energy characteristics of black holes are obtained in terms of the black hole spin function, $\rm {F^2}$ .", "Empirically determined black hole spin functions are used to obtain and study the spin mass-energy properties of a sample of 100 supermassive black holes associated with classical double (FRII) radio sources with dual collimated outflows; the sources have redshifts between about zero and two.", "The black hole spin mass-energy that is available to be extracted from the black hole is $\rm {M_{spin} = M - M_{irr}}$ , where $\rm {M \equiv M_{dyn}}$ (see eq.", "2).", "The mass-energy associated with the black hole spin angular momentum $\rm {J}$ , referred to here as $\rm {M_{rot}}$ and defined in section 1, contributes to the total black hole mass, M: $\rm {M^2 = M_{irr}^2 + M_{rot}^2}$ , which leads to eqs.", "(3) and (9).", "These equations are combined to obtain expressions that describe black hole spin mass-energy characteristics in terms of the spin function, which are then applied to quantify and study empirically determined black hole spin mass-energy properties for a sample of 100 supermassive black holes.", "It is important to be able to empirically determine black hole spin mass-energy characteristics because these impact the total black hole mass, and because this energy can be extracted, which may impact the near and far field environments of astrophysical black holes.", "The relationship between the beam power in Eddington units and bolometric accretion disk luminosity in Eddington units for the sample of supermassive black holes studied here is very similar to and consistent with that obtained for three other samples of sources with very different ranges and values of Eddington normalized beam power and bolometric disk luminosity (Daly et al.", "2018).", "The samples studied include the 100 sources studied here plus 656 AGN and 102 measurements of four stellar-mass black holes that are in X-ray binary systems, and include several different types of AGN.", "This suggests that the outflows in all of these systems are produced by a common physical mechanism.", "Since many of the sources studied by Daly et al.", "(2018) have beam powers that are much larger (by factors of 10 to 100) than the bolometric accretion disk luminosity, these sources are likely to have spin-powered outflows.", "Since the outflows in all of the sources studied are likely to be produced by a common physical mechanism, this suggests 
that all of the sources, including those studied here, have spin powered outflows.", "Quantities that characterize the spin mass-energy properties of astrophysical black holes in terms of the black hole spin function, $\rm {F^2}$ , are presented in section 2.2.", "This is preferable for astrophysical black holes for several reasons.", "For example, when attempting to use the dimensionless black hole spin angular momentum $\rm {j \equiv Jc/(G M^2)}$ to empirically characterize and determine the spin properties of astrophysical black holes, several difficulties are encountered, as described in section 2.1.", "These issues may be circumvented by writing the black hole spin mass-energy characteristics in terms of the black hole spin function $\rm {F^2}$ .", "Furthermore, in the context of the outflow method, the empirically determined quantity is $\rm {F}$ .", "Relationships between the black hole spin mass-energy characteristics and the black hole spin function $\rm {F^2}$ are obtained and presented in section 2.2.", "It is found that there is roughly a linear relationship between the black hole spin function and the normalized spin mass-energy of the black hole, $\rm {(E_{spin}/E_{spin,max}) \approx F^2}$ , and, allowing the exponent of $\rm {F}$ to vary, that $\rm {Log(E_{spin}/E_{spin,max}) \approx 1.75~ Log(F)}$ over the range of values relevant to the current studies.", "In addition, the method allows for empirically determined values of the spin function that exceed unity, which can occur due to the uncertainties associated with empirically determined quantities for astrophysical black holes.", "The method described in section 2.2 is applied to a sample of 100 supermassive black holes with redshifts between about zero and 2.", "The values of $\rm {Log(F)}$ studied here were obtained by D19, and are listed along with their uncertainties in Tables 1 and 2.", "It is shown in section 2.3 that the sample is well represented as having two components: about 2/3 of the 100 sources are maximally spinning, and about 1/3 are less than maximally spinning with the number of sources per unit $\rm {Log(F)}$ declining as $\rm {Log(F)}$ decreases.", "The decreasing number of sources as $\rm {Log(F)}$ decreases could be due to observational selection effects, a real decline with $\rm {Log(F)}$ , or a combination of the two.", "The 100 FRII sources studied include four sub-samples based on their spectroscopic nuclear properties: HEG, LEG, Q, and W sources, as described in section 2.3.", "As is evident from Table 3, the results presented here are, for the most part, independent of source spectroscopic nuclear properties, except for the W sources, and there are only three low-redshift W sources in the sample.", "Interestingly, it turns out that $\rm {Log(M_{rot}/M_{irr}) = Log(F)}$ (see eq.", "14), so all of the comments and results obtained for $\rm {Log(F)}$ directly apply to $\rm {Log(M_{rot}/M_{irr})}$ .", "Thus, the distribution of values of $\rm {Log(F)}$ described in section 2.3 can be interpreted as the empirically determined distribution of values of $\rm {Log(M_{rot}/M_{irr})}$ .", "The empirically determined values of $\rm {Log(F)}$ and their uncertainties for an additional 656 AGN and 102 measurements of four stellar mass black holes listed and discussed by D19 also directly translate to values of $\rm {Log(M_{rot}/M_{irr})}$ for those sources.", "The quantity $\rm {Log(M_{rot}/M_{dyn})}$ can be obtained from eq.", "(15), which indicates that $\rm 
{Log(M_{rot}/M_{dyn})} = Log(F) - Log(M_{dyn}/M_{irr})$ , both of which are listed in Tables 1 and 2.", "Results describing the spin mass-energy characteristics of the 100 sources are presented and discussed in sections 3 and 4.", "Many of the sources are highly spinning, and the sources with lower values of black hole spin are at low redshift, as expected due to the flux limited nature of the parent population of the sources.", "Thus, the fact that many of the sources are highly spinning may be a selection effect in that the most highly spinning sources have the brightest and most powerful radio emission, and less powerful sources drop out of the sample at high redshift due to the flux limited nature of the parent population, as described in sec.", "2.3.", "The spin mass-energy values obtained from the black hole spin functions are studied relative to the total or dynamical black hole mass and relative to the irreducible black hole mass.", "For maximally spinning black holes, the mass-energy associated with the black hole spin contributes about 41 % relative to the irreducible black hole mass or about 29 % relative to the total dynamical black hole mass.", "This mass-energy can be extracted (Penrose 1969).", "Thus, the mass of the black hole can be decreased due to the extraction of the spin energy.", "In addition, the extraction of the spin energy can significantly affect the short and long range environment of each black hole.", "Since these are all FRII (classical double) radio sources, these sources channel energy significant distances (hundreds of kpc) from the supermassive black hole.", "The spin mass-energy relative to the dynamical (i.e.", "total) black hole mass can be combined with empirical determinations of the black hole mass to solve for the total spin energy available for extraction per source, as discussed in detail in sections 1 and 2 (see eq.", "13).", "The spin energy per supermassive black hole is substantial, and represents an important reservoir of energy that can be tapped; this is referred to as the \"spin energy reservoir.\"", "Tapping even small amounts of the spin energy can have a substantial impact on the near and far field environments of the sources, as discussed in sections 4.2 and 4.3.", "The total spin energy available per source is compared with the total energy output from the black hole system in the form of dual oppositely directed jets over the active lifetime of each source, $\rm {E_T}$ , as described in sections 3 and 4.3.", "For the 100 black hole systems studied, the range of values of $(\rm {E_T} /\rm {E_{spin}})$ , the ratio of the total outflow energy to the spin energy available, is very narrow, with most of the sources having a value of about one percent or so: $\rm {Log(E_T/E_{spin})} \simeq -1.8 \pm 0.3$ for the 100 FRII sources studied here.", "This is consistent with the results obtained here and by Daly (2009a) that indicated a small value and range of values of total outflow energy relative to black hole dynamical mass: $\rm {Log(E_T/M_{dyn})} \simeq -2.5 \pm 0.3$ for the 100 FRII sources studied here (see sections 3 and 4.3).", "The value obtained here is consistent with that obtained by Daly (2009a) and with that obtained with a different method applied to different types of sources by Donahue & Voit (2022), who find $\rm {Log(E_T/M_{dyn})} \simeq -2.3 \pm 0.5$ for a sample of low redshift sources.", "The small value and restricted range of values of $\rm {Log(E_T/M_{dyn})}$ could suggest that this is a fundamental 
property of the primary process responsible for producing the dual collimated outflows.", "Several possible explanations for the relatively small value and range of values of $(\rm {E_T/M_{dyn}})$ or $(\rm {E_T} /\rm {E_{spin}})$ are considered in section 4.3.", "For example, it could be that when a specific amount of energy relative to the dynamical black hole mass is dumped into the ambient medium, the ambient gas is heated and expands, shutting off the accretion.", "Another possibility is that as the spin energy is extracted and the black hole mass decreases, the magnetic field and/or the structure of the accretion disk is altered and the spin energy extraction is halted.", "Or, it could be that much of the spin energy is extracted and then the process shuts down: if the black hole masses have been systematically overestimated, then the black hole mass that enters into eq.", "(13) is decreased and the spin energies decrease, so a correspondingly larger fraction of the spin energy is extracted per outflow event.", "Another possibility discussed in section 4.3 is that there is some other process that occurs simultaneously with the process that leads to dual large-scale jets, and this other process is extracting the majority of the spin energy, but the extracted energy is released in a form that is not readily observable.", "For example, most of the spin energy could be carried away in the form of neutrinos or gravitational waves, and only a small fraction of the energy extracted would be channeled into the jetted dual outflow.", "The new method of obtaining black hole spin mass-energy characteristics directly from the spin function presented here is applicable to the study of astrophysical black holes in a broad range of contexts." ], [ "Acknowledgments", "Thanks are extended to the referee, Kastytis Zubovas, for a careful reading of the manuscript and for providing very helpful comments and suggestions.", "It is a pleasure to thank Megan Donahue, Jim Pringle, and Mark Voit for detailed discussions and suggestions related to this work.", "I would also like to thank Jean Brodie, Margaret Daly, Joshua Deal, Yan-Fei Jiang, Chiara Mingarelli, Chris O'Dea, Masha Okounkova, Enrico Ramirez-Ruiz, Biny Sebastian, and Rosie Wyse for helpful conversations related to this work.", "It is a pleasure to thank the Center for Computational Astrophysics and the Flatiron Institute, which is supported by the Simons Foundation, for their hospitality.", "This work was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611." ], [ "DATA AVAILABILITY STATEMENT", "The data underlying this article are available in the article or are listed in D19." ] ]
2210.07779
[ [ "Early stopping for $ L^2 $-boosting in high-dimensional linear models" ], [ "Abstract Increasingly high-dimensional data sets require that estimation methods do not only satisfy statistical guarantees but also remain computationally feasible.", "In this context, we consider $ L^{2} $-boosting via orthogonal matching pursuit in a high-dimensional linear model and analyze a data-driven early stopping time $ \\tau $ of the algorithm, which is sequential in the sense that its computation is based on the first $ \\tau $ iterations only.", "This approach is much less costly than established model selection criteria, that require the computation of the full boosting path.", "We prove that sequential early stopping preserves statistical optimality in this setting in terms of a fully general oracle inequality for the empirical risk and recently established optimal convergence rates for the population risk.", "Finally, an extensive simulation study shows that at an immensely reduced computational cost, the performance of these type of methods is on par with other state of the art algorithms such as the cross-validated Lasso or model selection via a high dimensional Akaike criterion based on the full boosting path." ], [ "Introduction", "Iterative estimation procedures typically have to be combined with a data-driven choice $ \\widehat{m} $ of the effectively selected iteration in order to avoid under- as well as over-fitting.", "In the context of increasingly high-dimensional data sets, which require that estimation methods do not only provide statistical guarantees but also ensure computational feasibility, established model selection criteria for $\\widehat{m} $ such as cross-validation, unbiased risk estimation, Akaike's information criterion or Lepski's balancing principle suffer from a disadvantage: They involve computing the full iteration path up to some large $ m_{ \\max } $ , which is computationally costly, even if the final choice $ \\widehat{m} $ is much smaller than $ m_{ \\max } $ .", "In comparison, sequential early stopping, i.e., halting the procedure at an iteration $ \\widehat{m} $ depending only on the iterates $ m \\le \\widehat{m} $ , can substantially reduce computational complexity while maintaining guarantees in terms of adaptivity.", "For inverse problems, results were established in Blanchard and Mathé [5], Blanchard et al.", "[3], [4], Stankewitz [20] and Jahn [14].", "A Poisson inverse problem was treated in Mika and Szkutnik [16] and general kernel learning in Celisse and Wahl [8].", "In this work, we analyze sequential early stopping for an iterative boosting algorithm applied to data $Y = ( Y_{i} )_{ i \\le n }$ from a high-dimensional linear model $Y_{i} = f^{*}( X_{i} ) + \\varepsilon _{i}= \\sum _{ j = 1 }^{p} \\beta _{j}^{*} X_{i}^{ (j) }+\\varepsilon _{i},\\qquad i = 1, \\dots , n,$ where $f^{*}(x) = \\sum _{ j = 1 }^{p} \\beta ^{*}_{j} x^{ (j) },x \\in \\mathbb {R}^{p}$ , is a linear function of the columns of the design matrix, $\\varepsilon : = ( \\varepsilon _{i} )_{ i \\le n }$ is the vector of centered noise terms in our observations and the parameter size $ p $ is potentially much larger than the sample size $ n $ .", "A large body of research has focused on developing methods that, given reasonable assumptions on the design $\\mathbf {X}: = ( X_{i}^{ (j) } )_{ i \\le n, j \\le p }$ and the sparsity of the coefficients $ \\beta ^{*} $ , consistently estimate $ f^{*} $ despite the fact that $ p \\gg n $ .", "Figure: Empirical risk for different 
methods.", "Table: Computation times for different methods.", "Typically, approaches rely either on penalized least squares estimation such as the Lasso, see e.g., Bühlmann and van der Geer [7], or on boosting type algorithms which iteratively aggregate “weak” estimators with low accuracy to “strong” estimators with high accuracy, see Schapire and Freund [19] and Bühlmann [6].", "Here, we focus on $ L^{2} $ -boosting based on orthogonal matching pursuit (OMP), which is one of the standard algorithms, particularly in signal processing, see e.g., Tropp and Gilbert [24] or Needell and Vershynin [17].", "Temlyakov [22] provided one of the first deterministic analyses of OMP under the term orthogonal greedy algorithm (OGA).", "In a statistical setting, where the non-linearity of OMP further complicates the analysis, optimal convergence rates based on a high-dimensional Akaike criterion have only been derived recently in Ing and Lai [13] and Ing [12].", "We sequentially stop OMP at $ \\widehat{m} = \\tau $ with $\\tau : = \\inf \\lbrace m \\ge 0: r_{m}^{2} \\le \\kappa \\rbrace \\qquad \\text{ and } \\qquad r_{m}^{2}: = \\Vert Y - \\widehat{F}^{ (m) } \\Vert _{n}^{2},\\quad m \\ge 0,$ where, at iteration $ m $ , $ \\widehat{F}^{ (m) } $ is the OMP-estimator of $ f^{*} $ and $ r_{m}^{2} $ is the squared empirical residual norm.", "$ \\kappa > 0 $ is a critical value chosen by the user.", "We consciously switch the notation from $ \\widehat{m} $ , for a general data-driven selection criterion, to $ \\tau $ , indicating that the sequential early stopping time is in fact a stopping time in the sense of stochastic process theory.", "It is closely related to the discrepancy principle which has been studied in the analysis of inverse problems, see Engl et al.", "[9].", "Most important to our analysis are the ideas developed in Blanchard et al.", "[3] and Ing [12].", "As an initial impression, Figure REF displays boxplots of the empirical risk for five methods of stopping OMP: (i) Early stopping with the choice $ \\kappa $ equal to the true empirical noise level $\\Vert \\varepsilon \\Vert _{n}^{2}=n^{-1} \\sum _{ i = 1 }^{n} \\varepsilon _{i}^{2},$ which will be justified later; (ii) Early stopping with $ \\kappa $ equal to an estimated noise level $ \\widehat{ \\sigma }^{2} $ , which approximates $ \\Vert \\varepsilon \\Vert _{n}^{2} $ ; (iii) OMP based on the full high-dimensional Akaike selection (HDAIC) from Ing [12]; (iv) A two-step procedure that combines early stopping based on an estimated noise level with an additional Akaike model selection step performed only over the iterations $ m \\le \\tau $ .", "The plots are based on Monte Carlo simulations from model (REF ) with $ p = n = 1000 $ and a signal $ f^{*} $ , the sparsity of which is unknown to the methods.", "As benchmarks, we additionally provide the values of the risk at the classical oracle iteration $m^{ \\mathfrak {o} }:=\\operatornamewithlimits{arg\\!\\min }_{ m \\ge 0 } \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}$ and the default method LassoCV from the python library scikit-learn [18] based on 5-fold cross-validation.", "The exact specifications of the simulation are in Section .", "Table REF contains the computation times for the different methods.", "The results suggest that sequential early stopping performs as well as established exhaustive model selection criteria at an immensely reduced computational cost, requiring only the computation of $ \\tau $ iterations of OMP.", "The contribution of this paper is to provide 
"The contribution of this paper is to provide rigorous theoretical guarantees that justify this statement.", "In the remainder of Section , we present our main results, which are a fully general oracle inequality for the empirical risk at $ \\tau $ and an optimal adaptation guarantee for the population risk in terms of the rates from Ing [12].", "In Section , we study the stopped empirical risk in detail and provide precise bounds for important elementary quantities, which are used to extend our results to the population risk in Section .", "The analysis, which is conducted $ \\omega $ -pointwise on the underlying probability space, is able to avoid some of the saturation phenomena that occurred in previous works, see Blanchard et al.", "[3] and Celisse and Wahl [8] in particular.", "Both of our main theorems require access to a rate-optimal estimator $ \\widehat{ \\sigma }^{2} $ of $ \\Vert \\varepsilon \\Vert _{n}^{2} $ .", "Section presents a noise estimation result which shows that such estimators do exist and can be computed efficiently.", "Section provides a simulation study, which illustrates our main findings numerically.", "Finally, in the two-step procedure from (iv), we combine early stopping with a second model selection step over the iterations $ m \\le \\tau $ .", "This procedure, which empirically outperforms the others, inherits the guarantees for early stopping from our main results, while robustifying the methodology against deviations in the stopping time.", "In order to state results for sequential early stopping of OMP in model (REF ), as minimal assumptions, we require that the rows $ ( X_{i} )_{ i \\le n } $ of the design matrix $\\mathbf {X} = ( X_{i}^{ (j) } )_{ i \\le n, j \\le p }$ are independently and identically distributed such that $ \\mathbf {X} $ has full rank $ n $ almost surely.", "We also require that the noise terms $ ( \\varepsilon _{i} )_{ i \\le n } $ are independently and identically distributed and assume that, conditional on the design, a joint subgaussian parameter for the noise terms exists.", "(SubGE): Conditional on the design, the noise terms are centered subgaussians with a joint parameter $ \\overline{ \\sigma }^{2} > 0 $ , i.e., for all $ i \\le n $ and $ u \\in \\mathbb {R} $ , $\\mathbb {E} ( e^{ u \\varepsilon _{i} } | X_{i} ) \\le e^{ \\frac{ u^{2} \\overline{ \\sigma }^{2} }{2} } \\qquad \\text{almost surely.}$", "Complementary to $ \\overline{ \\sigma }^{2} $ , we set $\\underline{ \\sigma }^{2}: = \\text{Var}( \\varepsilon _{1} )$ .", "By conditioning, we have $\\underline{ \\sigma }^{2}=\\mathbb {E} ( \\mathbb {E} ( \\varepsilon _{1}^{2} | X_{1} ) )\\le \\overline{ \\sigma }^{2}$ .", "Assumption [assSubGaussianErrors] (SubGE) permits heteroscedastic error terms $ ( \\varepsilon _{i} )_{ i \\le n } $ , allowing us to treat both regression and classification.", "Example 1.1 (Gaussian Regression): For $\\varepsilon _{1}, \\dots , \\varepsilon _{n} \\sim N( 0, \\sigma ^{2} )$ i.i.d., we have $\\underline{ \\sigma }^{2} = \\sigma ^{2} = \\overline{ \\sigma }^{2}$ .", "(Classification): For classification, we consider i.i.d. observations $Y_{i} \\sim \\text{Ber}( f^{*}( X_{i} ) ), \\qquad i = 1, \\dots , n.$ Then, the noise terms are given by $\\varepsilon _{i} = Y_{i} - f^{*}( X_{i} )$ with $\\mathbb {E} ( \\varepsilon _{i} | X_{i} ) =f^{*}( X_{i} ) ( 1 - f^{*}( X_{i} ) )+( 1 - f^{*}( X_{i} ) ) ( - f^{*}( X_{i} ) )=0.$ Conditional on the design, the noise is bounded by one.", "This implies that $ \\overline{ \\sigma }^{2} \\le 1 $ .",
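As a quick numerical sanity check of the classification case, the following sketch simulates Bernoulli observations for hypothetical probabilities standing in for a linear $ f^{*} $ and verifies that the noise terms are centered and bounded by one:

```python
# Sanity check for the classification example; f_star contains hypothetical
# stand-in probabilities f*(X_i) with values in (0, 1).
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
f_star = rng.uniform(0.1, 0.9, size=n)
Y = rng.binomial(1, f_star)
eps = Y - f_star                          # eps_i = Y_i - f*(X_i)

print(np.max(np.abs(eps)) <= 1.0)         # noise bounded by one
print(abs(eps.mean()))                    # approx. 0, since E(eps_i | X_i) = 0
```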
"For the asymptotic analysis, we assume that the observations stem from a sequence of models of the form (REF ), where $ p = p^{ (n) } \\rightarrow \\infty $ and $ \\log ( p^{ (n) } ) / n \\rightarrow 0 $ for $ n \\rightarrow \\infty $ .", "We allow the quantities $\\mathbf {X} = \\mathbf {X}^{ (n) }, \\beta ^{*} = ( \\beta ^{*} )^{ (n) }\\text{ and } \\varepsilon = \\varepsilon ^{ (n) }$ to vary in $ n $ .", "For notational convenience, we keep this dependence implicit.", "In this setting, $ L^{2} $ -boosting based on OMP is used to estimate $ f^{*} $ and perform variable selection at the same time.", "Empirical correlations between data vectors are measured via the empirical inner product $\\langle a, b \\rangle _{n}: = n^{-1} \\sum _{ i = 1 }^{n} a_{i} b_{i}$ with norm $\\Vert a \\Vert _{n}: = \\langle a, a \\rangle _{n}^{ 1 / 2 },$ for $ a, b \\in \\mathbb {R}^{n} $ .", "By $\\widehat{ \\Pi }_{J}: \\mathbb {R}^{n} \\rightarrow \\mathbb {R}^{n},$ we denote the orthogonal projection with respect to $ \\langle \\cdot , \\cdot \\rangle _{n} $ onto the span of the columns $ \\lbrace X^{ (j) }: j \\in J \\rbrace $ of the design matrix.", "OMP is initialized at $ \\widehat{F}^{ (0) }: = 0 $ and then iteratively selects the covariates $ X^{ (j) }, j \\le p $ , which maximize the empirical correlation with the residuals $ Y - \\widehat{F}^{ (m) } $ at the current iteration $ m $ .", "The estimator is updated by projecting onto the subspace spanned by the selected covariates.", "Explicitly, the procedure is given by the following algorithm.", "Algorithm (Orthogonal matching pursuit, OMP): Initialize $\\widehat{F}^{ (0) } \\leftarrow 0$ and $\\widehat{J}_{0} \\leftarrow \\emptyset $ ; for $ m = 0, 1, 2, \\dots $ , repeat the three steps $\\widehat{j}_{m + 1}\\leftarrow \\operatornamewithlimits{arg\\!\\max }_{ j \\le p }\\Big |\\Big \\langle Y - \\widehat{F}^{ (m) },\\frac{ X^{ (j) } }{ \\Vert X^{ (j) } \\Vert _{n} }\\Big \\rangle _{n}\\Big |$ , $\\widehat{J}_{ m + 1 }\\leftarrow \\widehat{J}_{m} \\cup \\big \\lbrace \\widehat{j}_{ m + 1 } \\big \\rbrace $ and $\\widehat{F}^{ ( m + 1 ) }\\leftarrow \\widehat{ \\Pi }_{ \\widehat{J}_{ m + 1 } } Y$ .", "Maximizing the empirical correlation between the residuals $ Y - \\widehat{F}^{ (m) } $ and $ X^{ ( \\widehat{j}_{ m + 1 } ) } $ at iteration $ m $ is equivalent to minimizing $\\Vert Y - \\widehat{F}^{ ( m + 1 ) } \\Vert _{n}^{2}$ , i.e., OMP performs greedy optimization for the residual norm.", "It is therefore natural to stop this procedure at $ \\tau $ from Equation (REF ) when the residual norm reaches a critical value.", "From a statistical perspective, we are interested in the risk of the estimators $ \\widehat{F}^{ (m) }, m \\ge 0 $ .", "Initially, we consider the empirical risk $\\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2} =\\Vert ( I - \\widehat{ \\Pi }_{m} ) f^{*} \\Vert _{n}^{2}+\\Vert \\widehat{ \\Pi }_{m} \\varepsilon \\Vert _{n}^{2}=b_{m}^{2} + s_{m},$ where we introduce the notation $\\widehat{ \\Pi }_{m}: = \\widehat{ \\Pi }_{ \\widehat{J}_{m} },b_{m}^{2}: = \\Vert ( I - \\widehat{ \\Pi }_{m} ) f^{*} \\Vert _{n}^{2}$ for the squared empirical bias and $s_{m}: = \\Vert \\widehat{ \\Pi }_{m} \\varepsilon \\Vert _{n}^{2}$ for the empirical stochastic error.", "In the term $ b_{m}^{2} $ , we use the standard overloading of notation, letting $ f^{*} $ denote $ ( f^{*}( X_{i} ) )_{ i \\le n } $ , see Section REF .", "Note that at this point, we cannot simply take expectations, due to the non-linear, stochastic choice of $ \\widehat{ J }_{ m } $ .",
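For illustration, the algorithm translates directly into a few lines of numpy; the following is a minimal sketch (the helper name omp_path is ours), which refits the projection by least squares in every iteration instead of maintaining a QR decomposition as an optimized implementation would:

```python
import numpy as np

def omp_path(X, Y, m_max):
    """Minimal OMP sketch: returns the selected indices J_hat and the fitted
    values F_hat^(0), ..., F_hat^(m_max)."""
    n, p = X.shape
    norms = np.sqrt(np.mean(X ** 2, axis=0))       # ||X^(j)||_n
    J, fit, fits = [], np.zeros(n), [np.zeros(n)]
    for _ in range(m_max):
        # Select the covariate maximizing the normalized empirical correlation
        # with the current residuals Y - F_hat^(m).
        corr = np.abs((Y - fit) @ X) / (n * norms)
        corr[J] = -np.inf        # numerical safeguard; already-selected covariates
                                 # are orthogonal to the residuals anyway
        J.append(int(np.argmax(corr)))
        # F_hat^(m+1): projection of Y onto the span of the selected columns.
        XJ = X[:, J]
        fit = XJ @ np.linalg.lstsq(XJ, Y, rcond=None)[0]
        fits.append(fit)
    return J, fits
```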
"The definition of the orthogonal projections $ ( \\widehat{ \\Pi }_{m} )_{ m \\ge 0 } $ with respect to $ \\langle \\cdot , \\cdot \\rangle _{n} $ guarantees that the mappings $ m \\mapsto b_{m}^{2} $ and $ m \\mapsto s_{m} $ are monotonically decreasing and increasing, respectively.", "This reveals the fundamental problem of selecting an iteration of the procedure in Algorithm REF .", "We need to iterate far enough to sufficiently reduce the bias, yet not so far as to blow up the stochastic error.", "For $ m \\ge n $ , we have $b_{m}^{2} = 0 \\text{ but also }s_{m} = \\Vert \\varepsilon \\Vert _{n}^{2}$ , which converges to $ \\underline{ \\sigma }^{2} $ by the law of large numbers.", "In particular, this means that iterating Algorithm REF indefinitely will not produce a consistent estimator of the unknown signal $ f^{*} $ .", "Since the decay of the bias depends on $ f^{*} $ , no a priori, i.e., data-independent, choice of the iteration will perform well in terms of the risk uniformly over different realizations of $ f^{*} $ .", "Therefore, Algorithm REF needs to be combined with a data-driven choice $ \\widehat{m} $ of the effectively selected iteration, which is adaptive.", "This means that either, without prior knowledge of $ f^{*} $ , the choice $ \\widehat{m} $ satisfies an oracle inequality relating its performance to that of the ideal oracle iteration $m^{ \\mathfrak {o} }=m^{ \\mathfrak {o} }( f^{*} ):=\\operatornamewithlimits{arg\\!\\min }_{ m \\ge 0 } \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}$ or, in terms of convergence rates for the risk, $ \\widehat{m} $ performs optimally for multiple classes of signals without prior knowledge of the class to which the true signal $ f^{*} $ belongs.", "Our analysis in Section shows that in order to derive such an adaptation result for the sequential early stopping time $ \\tau $ in Equation (REF ), ideally, the critical value $ \\kappa $ should be chosen depending on the iteration as $\\kappa =\\kappa _{m}=\\Vert \\varepsilon \\Vert _{n}^{2} + \\frac{ C_{ \\tau } m \\log p }{n},\\qquad m \\ge 0,$ where $ C_{ \\tau } \\ge 0 $ is a non-negative constant.", "Since the empirical noise level $ \\Vert \\varepsilon \\Vert _{n}^{2} $ is unknown, it has to be replaced by an estimator $ \\widehat{ \\sigma }^{2} $ and we redefine $\\tau :=\\inf \\lbrace m \\ge 0: r_{m}^{2} \\le \\kappa _{m} \\rbrace \\qquad \\text{ with } \\qquad \\kappa _{m}:=\\widehat{ \\sigma }^{2} + \\frac{ C_{ \\tau } m \\log p }{n},\\qquad m \\ge 0.$",
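A minimal sequential implementation of this stopping rule could look as follows, with the estimator $ \\widehat{ \\sigma }^{2} $ and the constant $ C_{ \\tau } $ supplied by the user (the helper name early_stopped_omp is again ours); in contrast to exhaustive criteria, only the iterates $ m \\le \\tau $ are ever computed:

```python
import numpy as np

def early_stopped_omp(X, Y, sigma2_hat, C_tau):
    """OMP stopped sequentially at tau = inf{m >= 0 : r_m^2 <= kappa_m},
    where kappa_m = sigma2_hat + C_tau * m * log(p) / n."""
    n, p = X.shape
    norms = np.sqrt(np.mean(X ** 2, axis=0))
    J, fit = [], np.zeros(n)
    for m in range(n + 1):                   # r_m^2 = 0 for m >= n, so tau <= n
        r2 = np.mean((Y - fit) ** 2)         # squared empirical residual norm
        if r2 <= sigma2_hat + C_tau * m * np.log(p) / n:
            return m, J, fit                 # tau = m
        corr = np.abs((Y - fit) @ X) / (n * norms)
        corr[J] = -np.inf
        J.append(int(np.argmax(corr)))
        XJ = X[:, J]
        fit = XJ @ np.linalg.lstsq(XJ, Y, rcond=None)[0]
    return n, J, fit
```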
"Our first main result is an oracle inequality for the stopped empirical risk at $ \\tau $ .", "Theorem 1.2 (Oracle inequality for the empirical risk) Under Assumption [assSubGaussianErrors] (SubGE), the empirical risk at the stopping time $ \\tau $ in Equation (REF ) with $C_{ \\tau } \\ge 8 \\overline{ \\sigma }^{2}$ satisfies $\\Vert \\widehat{F}^{ ( \\tau ) } - f^{*} \\Vert _{n}^{2} \\le \\min _{ m \\ge 0 }\\Big (7 \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}+\\frac{ ( 8 \\overline{ \\sigma }^{2} + C_{ \\tau } ) m \\log p }{n}\\Big )+| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |\\le 7 \\Vert \\widehat{F}^{ ( m^{ \\mathfrak {o} } ) } - f^{*} \\Vert _{n}^{2}+\\frac{( 8 \\overline{ \\sigma }^{2} + C_{ \\tau } )m^{ \\mathfrak {o} } \\log p}{n}+| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |$ with probability converging to one.", "The oracle inequality is completely general in the sense that no assumption on $ f^{*} $ is required.", "In particular, the result also holds for non-sparse $ f^{*} $ .", "The first term on the right-hand side involving the iteration $ m^{ \\mathfrak {o} } $ from Equation (REF ) is of optimal order and the second term matches the upper bound for the empirical stochastic error at iteration $ m^{ \\mathfrak {o} } $ we derive in Lemma REF .", "The last term is the absolute estimation error of $ \\widehat{ \\sigma }^{2} $ for the empirical noise level.", "The result is closely related to Theorem 3.3 in Blanchard et al.", "[3].", "Whereas they state their oracle inequality in expectation, ours is formulated $\\omega $ -pointwise on the underlying probability space, which is slightly stronger.", "In particular, this leads to the term $| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |$ in the inequality, which will be essential for the noise estimation problem, see Section ." ], [ "Optimal adaptation for the population risk", "The population counterpart of the empirical inner product is $\\langle f, g \\rangle _{ L^2 }: = \\mathbb {E} ( f( X_{1} ) g( X_{1} ) )$ with norm $\\Vert f \\Vert _{ L^2 }: = \\langle f, f \\rangle _{ L^2 }^{ 1 / 2 }$ for functions $f, g \\in L^{2}( \\mathbb {P}^{ X_{1} } ),$ where $ \\mathbb {P}^{ X_{1} } $ denotes the distribution of one observation of the covariates.", "Identifying $ \\widehat{F}^{ (m) } $ with its corresponding function in the covariates, the population risk of the estimators is given by $\\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{ L^{2} }^{2},m \\ge 0$ .", "Assuming that all of the covariates are square-integrable, for $J \\subset \\lbrace 1, \\dots , p \\rbrace $ , let $\\Pi _{J}: L^{2}( \\mathbb {P}^{ X_{1} } ) \\rightarrow L^{2}( \\mathbb {P}^{ X_{1} } )$ denote the orthogonal projection with respect to $\\langle \\cdot , \\cdot \\rangle _{ L^{2} }$ onto the span of the covariates $\\lbrace X_{1}^{ (j) }: j \\in J \\rbrace $ .", "Setting $\\Pi _{m}: = \\Pi _{ \\widehat{J}_{m} }$ , the population risk decomposes into $\\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{ L^{2} }^{2} =\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}+\\Vert \\widehat{F}^{ (m) } - \\Pi _{m} f^{*} \\Vert _{ L^{2} }^{2}=B_{m}^{2} + S_{m},$ where $B_{m}^{2}:=\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}$ is the squared population bias and $S_{m}:=\\Vert \\widehat{F}^{ (m) } - \\Pi _{m} f^{*} \\Vert _{ L^{2} }^{2}$ is the population stochastic error.", "Note that $ B_{m}^{2} $ and $ S_{m} $ are not the exact population counterparts of the empirical quantities $ b_{m}^{2} $ and $ s_{m} $ , since we have to account for the difference between $ \\widehat{ \\Pi }_{m} $ and $\\Pi _{m} $ .", "The challenge of selecting the iteration in Algorithm REF discussed in the previous section is the same for the population risk.", "The mapping $ m \\mapsto B_{m}^{2} $ is monotonically decreasing, $ S_{0} = 0$ and $ S_{m} $ approaches $ \\text{Var}( \\varepsilon _{1} ) $ for $ m \\rightarrow \\infty $ , assuming that the difference between $ \\widehat{ \\Pi }_{m} $ and $\\Pi _{m} $ becomes negligible.", "Due to this difference, however, the mapping $ m \\mapsto S_{m} $ is no longer guaranteed to be monotone.", "Both $ B_{m} $ and $ S_{m} $ are still random quantities due to the randomness of $ \\widehat{J}_{m} $ .", "In order to derive guarantees for the population risk, additional assumptions are required.", "We quantify the sparsity of the coefficients $ \\beta ^{*} $ of $ f^{*} $ : (Sparse): We assume that one of the following two assumptions holds.", "$ \\beta ^{*} $ is $ s $ -sparse for some $ s \\in \\mathbb {N}_{0} $ , i.e., $\\Vert \\beta ^{*} \\Vert _{0} \\le s$ , where $ \\Vert \\beta ^{*} \\Vert _{0} $ is the cardinality of the support $S: = \\lbrace j \\le p: | \\beta ^{*}_{j} | \\ne 0 \\rbrace $ .",
"Additionally, we require that $s \\Vert \\beta ^{*} \\Vert _{1}^{2}=s \\Big ( \\sum _{ j = 1 }^{p} | \\beta ^{*}_{j} | \\Big )^{2}=o \\Big ( \\frac{n}{ \\log p } \\Big ),\\quad \\Vert f^{*} \\Vert _{ L^{2} }^{2} \\le C_{ f^{*} }\\quad \\text{ and } \\quad \\min _{ j \\in S } | \\beta ^{*}_{j} |\\ge \\underline{ \\beta },$ where $ C_{ f^{*} }, \\underline{ \\beta } > 0 $ are numerical constants.", "$ \\beta ^{*} $ is $ \\gamma $ -sparse for some $\\gamma \\in [ 1, \\infty )$ , i.e., $\\Vert \\beta ^{*} \\Vert _{2} \\le C_{ \\ell ^{2} }$ and $\\sum _{ j \\in J } | \\beta _{j}^{*} | \\le C_{ \\gamma }\\Big (\\sum _{ j \\in J } | \\beta _{j}^{*} |^{2}\\Big )^{ \\frac{ \\gamma - 1 }{ 2 \\gamma - 1 } }\\qquad \\text{ for all } J \\subset \\lbrace 1, \\dots , p \\rbrace ,$ where $ C_{ \\ell ^{2} }, C_{ \\gamma } > 0 $ are numerical constants.", "Assumptions like [assSparse] (Sparse) (i) are standard in the literature on high-dimensional models, see e.g., Bühlmann and van de Geer [7].", "Note that the conditions in (i) imply that $s = o( ( n / \\log p )^{ 1 / 3 } )$ .", "[assSparse] (Sparse) (ii) encodes a decay of the coefficients $ \\beta ^{*} $ .", "It includes several well-known settings as special cases.", "Example 1.3 ($ \\ell ^{ 1 / \\gamma } $ -boundedness): For $\\gamma \\in [ 1, \\infty )$ and some $C_{ \\ell ^{ 1 / \\gamma } } >0$ , let the coefficients satisfy $\\sum _{ j = 1 }^{p} | \\beta ^{*}_{j} |^{ 1 / \\gamma }\\le C_{ \\ell ^{ 1 / \\gamma } }$ .", "Then, Hölder's inequality yields $\\sum _{ j \\in J }| \\beta ^{*}_{j} | =\\sum _{ j \\in J }| \\beta ^{*}_{j} |^{ \\frac{1}{ 2 \\gamma - 1 } }| \\beta ^{*}_{j} |^{ \\frac{ 2 \\gamma - 2 }{ 2 \\gamma - 1 } }\\le \\Big (\\sum _{ j \\in J }| \\beta ^{*}_{j} |^{ \\frac{1}{ \\gamma } }\\Big )^{ \\frac{ \\gamma }{ 2 \\gamma - 1 } }\\Big (\\sum _{ j \\in J }| \\beta ^{*}_{j} |^{2}\\Big )^{ \\frac{ \\gamma - 1 }{ 2 \\gamma - 1 } }\\le ( C_{ \\ell ^{ 1 / \\gamma } } )^{ \\frac{ \\gamma }{ 2 \\gamma - 1 } }\\Big (\\sum _{ j \\in J }| \\beta ^{*}_{j} |^{2}\\Big )^{ \\frac{ \\gamma - 1 }{ 2 \\gamma - 1 } }\\qquad \\text{ for all } J \\subset \\lbrace 1, \\dots , p \\rbrace ,$ i.e., Assumption [assSparse] (Sparse) (ii) is satisfied with $C_{ \\gamma }=( C_{ \\ell ^{ 1 / \\gamma } } )^{ \\frac{ \\gamma }{ 2 \\gamma - 1 } }$ .", "For $ \\gamma \\rightarrow \\infty $ , this approaches the setting in (i), in which the support $ S $ is finite.", "(Polynomial decay): For $\\gamma \\in ( 1, \\infty )$ and $C_{ \\gamma }^{\\prime } \\ge c_{ \\gamma }^{\\prime } > 0$ , let the coefficients satisfy $c_{ \\gamma }^{\\prime } j^{ - \\gamma }\\le | \\beta ^{*}_{ (j) } |\\le C_{ \\gamma }^{\\prime } j^{ - \\gamma }\\qquad \\text{ for all } j \\le p,$ where $( \\beta ^{*}_{ (j) } )_{ j \\le p }$ is a reordering of the $( \\beta ^{*}_{j} )_{ j \\le p }$ with decreasing absolute values.", "Then, Assumption [assSparse] (Sparse) (ii) is satisfied with $ C_{ \\gamma } $ proportional to $C_{ \\gamma }^{\\prime }( c_{ \\gamma }^{\\prime } )^{ - ( 2 \\gamma - 2 ) / ( 2 \\gamma - 1 ) },$ see Lemma A1.2 of Ing [12].",
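The defining inequality of $ \\gamma $ -sparsity can also be inspected numerically; the sketch below samples random subsets $ J $ for polynomially decaying coefficients and records the largest observed ratio, which lower-bounds the constant $ C_{ \\gamma } $ (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
p, gamma = 500, 2.0                           # illustrative values
beta = np.arange(1, p + 1) ** (-gamma)        # |beta*_(j)| proportional to j^(-gamma)

# Ratio of sum_{j in J} |beta_j| to (sum_{j in J} beta_j^2)^((gamma-1)/(2gamma-1))
# over random subsets J; by the gamma-sparsity assumption it stays bounded.
exponent = (gamma - 1) / (2 * gamma - 1)
ratios = []
for _ in range(1000):
    J = rng.choice(p, size=int(rng.integers(1, p)), replace=False)
    ratios.append(beta[J].sum() / (np.sum(beta[J] ** 2) ** exponent))
print(f"largest observed ratio (lower bound for C_gamma): {max(ratios):.3f}")
```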
"For the covariance structure of the design, we assume subgaussianity and some additional boundedness conditions.", "(SubGD): The design variables are centered subgaussians in $ \\mathbb {R}^{p} $ with unit variance, i.e., there exists some $ \\rho > 0 $ such that for all $x \\in \\mathbb {R}^{p} \\text{ with } \\Vert x \\Vert = 1,$ $\\mathbb {E} e^{ u \\langle x, X_{1} \\rangle } \\le e^{ \\frac{ u^{2} \\rho ^{2} }{2} },\\quad u \\in \\mathbb {R}\\qquad \\text{ and } \\qquad \\text{Var} ( X_{1}^{ (j) } ) = 1\\quad \\text{ for all } j \\le p.$ Remark 1.4 (Inclusion of an Intercept) Assumption [assSubGaussianDesign] (SubGD) still allows for the inclusion of an intercept in addition to the design variables.", "If the intercept is selected, we have effectively applied Algorithm REF to the data $ Y $ centered at their empirical mean, for which [assSubGaussianDesign] (SubGD) is satisfied up to a negligible term.", "If it is not selected, the result is identical to applying the algorithm without an intercept.", "(CovB): The complete covariance matrix $ \\Gamma : = \\text{Cov}( X_{1} ) $ of one design observation is bounded from below, i.e., there exists some $ c_{ \\lambda } > 0 $ such that the smallest eigenvalue of $ \\Gamma $ satisfies $\\lambda _{ \\min }( \\Gamma ) \\ge c_{ \\lambda } > 0.$ Further, we assume that there exists $C_{ \\text{Cov} } > 0$ such that the partial population covariance matrices $\\Gamma _{J}: = ( \\Gamma _{ j k } )_{ j, k \\in J },$ for $J \\subset \\lbrace 1, \\dots , p \\rbrace ,$ satisfy $\\sup _{ | J | \\le M_{n}, k \\notin J }\\Vert \\Gamma ^{-1}_{J} v_{k} \\Vert _{1}<C_{ \\text{Cov} }$ with $M_{n}:=\\sqrt{ n / ( ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p ) }$ , where $v_{k}: = ( \\text{Cov}( X_{1}^{ (k) }, X_{1}^{ (j) } ))_{ j \\in J }\\in \\mathbb {R}^{ | J | }$ is the vector of covariances between the $ k $ -th covariate and the covariates from the set $ J $ .", "$( \\Gamma _{J}^{-1} v_{k} )_{ j \\in J }$ is the vector of coefficients for the $X_{1}^{ (j) }, j \\in J$ , in the conditional expectation $\\mathbb {E} ( X_{1}^{ (k) } | X_{1}^{ (j) }, j \\in J )$ .", "$ M_{ n } $ will be the largest iteration of Algorithm REF for which we need control over the covariance structure.", "Condition (REF ) imposes a restriction on the correlation between the covariates.", "Example 1.5 (Uncorrelated design): For $ \\Gamma = I_{p} $ , condition (REF ) is satisfied for any choice $ C_{ \\text{Cov} } > 0 $ , since the left-hand side of the condition is zero.", "(Bounded cumulative coherence): For $ m \\ge 0 $ and $ J \\subset \\lbrace 1, \\dots , p \\rbrace $ , let $\\mu _{1}(m):=\\max _{ k \\le p }\\max _{ | J | \\le m, k \\notin J }\\sum _{ j \\in J } | \\text{Cov}( X_{1}^{ (k) }, X_{1}^{ (j) } ) |$ be the cumulative coherence function.", "Then, $\\sup _{ | J | \\le M_{ n }, k \\notin J }\\Vert \\Gamma ^{-1}_{J} v_{k} \\Vert _{1} \\le \\sup _{ | J | \\le M_{ n }, k \\notin J }\\Vert \\Gamma _{J}^{-1} \\Vert _{1}\\mu _{1}( M_{ n } ),$ where $ \\Vert \\Gamma _{J}^{-1} \\Vert _{1} $ denotes the column sum norm.", "Under the assumption that both quantities on the right-hand side are bounded, condition (REF ) is satisfied.", "Under the stronger assumption that $ \\mu _{1}( M_{ n } ) < 1 / 2, $ it can be shown that condition (REF ) is satisfied with $ C_{ \\text{Cov} } = 1 $ .", "This is the exact recovery condition in Theorem 3.5 of Tropp [23].",
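Since, for each $ k $ , the maximizing set $ J $ in the definition of $ \\mu _{1}(m) $ simply collects the indices of the $ m $ largest absolute covariances, the cumulative coherence is straightforward to compute for a given covariance matrix; the AR(1)-type correlation below is purely an illustrative choice:

```python
import numpy as np

def cumulative_coherence(Gamma, m):
    """mu_1(m): maximize over k the sum of the m largest absolute
    covariances |Cov(X^(k), X^(j))|, j != k."""
    p = Gamma.shape[0]
    mu = 0.0
    for k in range(p):
        off = np.abs(np.delete(Gamma[k], k))
        mu = max(mu, np.sort(off)[-m:].sum())
    return mu

# Illustration: AR(1)-type correlation Gamma_{jk} = rho^{|j - k|}.
p, rho = 50, 0.2
Gamma = rho ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
print(cumulative_coherence(Gamma, m=5))   # approx. 0.488 < 1/2, so C_Cov = 1 applies
```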
"As in Ing [12], condition (REF ) guarantees that the coefficients $\\beta ( ( I - \\Pi _{J} ) f^{*} )$ of the population residual term $( I - \\Pi _{J} ) f^{*}$ satisfy $\\Vert \\beta ( ( I - \\Pi _{J} ) f^{*} ) \\Vert _{1}=\\Vert \\beta ^{*} - \\beta ( \\Pi _{J} f^{*} ) \\Vert _{1}\\le ( C_{ \\text{Cov} } + 1 )\\sum _{ j \\notin J } | \\beta ^{*}_{j} |\\qquad \\text{ for all } | J | \\le M_{ n },$ where $ J $ ranges over all subsets of $ \\lbrace 1, \\dots , p \\rbrace $ .", "A derivation is stated in Lemma REF .", "Equation (REF ) provides a uniform bound on the vector difference of the finite-time predictor coefficients $ \\beta ( \\Pi _{J} f^{*} ) $ of $ \\Pi _{J} f^{*} $ and the infinite-time predictor coefficients $ \\beta ^{*} $ .", "In the literature on autoregressive modeling, such an inequality is referred to as a uniform Baxter's inequality, see Ing [12] and the references therein, Baxter [2] and Meyer et al.", "[15].", "Under Assumptions [assSubGaussianErrors] (SubGE) - [assCovB] (CovB), explicit bounds for the population bias and the stochastic error are available.", "In the formulation of the results, the postpositioned “with probability converging to one” always refers to the whole statement including quantification over $ m \\ge 0 $ , see also Section REF .", "Lemma 1.6 (Bound for the population stochastic error, Ing [12]) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), there is a constant $ C_{ \\text{Stoch} } > 0 $ such that $S_{m}\\le C_{ \\text{Stoch} }\\begin{dcases}\\frac{(\\overline{ \\sigma }^{2}+\\Vert \\beta ^{*} \\Vert _{1}^{2} \\rho ^{4} \\mathbf {1} \\lbrace m \\le \\tilde{m} \\rbrace )m \\log p}{n},& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) m \\log p }{n},& \\beta ^{*} \\ \\gamma \\text{-sparse}\\end{dcases}\\qquad \\text{ for all } m \\ge 0$ with probability converging to one, where $\\tilde{m} = \\inf \\lbrace m \\ge 0: S \\subset \\widehat{J}_{m} \\rbrace $ .", "The stochastic error grows linearly in $ m $ , whereas, up to lower order terms, the bias decays exponentially when $ \\beta ^{*} $ is $ s $ -sparse and with a rate $ m^{ 1 - 2 \\gamma } $ when $ \\beta ^{*} $ is $ \\gamma $ -sparse.", "Proposition 1.7 (Bound for the population bias, Ing [12]) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), there are constants $c_{ \\text{Bias} }, C_{ \\text{Bias} } > 0$ such that $B_{m}^{2} \\le C_{ \\text{Bias} }\\begin{dcases}\\Vert f^{*} \\Vert _{ L^{2} }^{2}\\exp \\Big ( \\frac{ - c_{ \\text{Bias} } m }{s} \\Big )+\\Vert \\beta ^{*} \\Vert _{1}^{2}\\frac{ s \\log p }{n},& \\beta ^{*} \\ s\\text{-sparse}, \\\\m^{ 1 - 2 \\gamma }+\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - \\frac{1}{ 2 \\gamma } },& \\beta ^{*} \\ \\gamma \\text{-sparse} \\\\\\end{dcases}\\qquad \\text{ for all } m \\ge 0$ with probability converging to one.", "When $ \\beta ^{*} $ is $ s $ -sparse, on the corresponding event, there is a constant $ C_{ \\text{supp} } > 0 $ such that $S \\subset \\widehat{J}_{ C_{ \\text{supp} } s }$ .", "Lemma REF and Proposition REF are essentially proven in Ing [12] but not stated explicitly.", "We include derivations in Appendix to keep this paper self-contained.", "Under $ s $ -sparsity, the definition of the population bias guarantees that $B_{m}^{2} = 0 \\qquad \\text{ for all } m \\ge C_{ \\text{supp} } s$ with probability converging to one, and under $ \\gamma $ -sparsity, the upper bounds from Lemma REF and Proposition REF balance at an iteration of size $(n / ( ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p ))^{ 1 / ( 2 \\gamma ) }$ .", "We obtain that for $m^{*}_{ s, \\gamma }:=\\begin{dcases}C_{ \\text{supp} } s,& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\Big (\\frac{n}{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }\\Big )^{ \\frac{1}{ 2 \\gamma } },& \\beta ^{*} \\ \\gamma \\text{-sparse}, \\\\\\end{dcases}$
there exists a constant $ C_{ \\text{Risk} } > 0 $ such that with probability converging to one, the population risk satisfies $\\Vert \\widehat{F}^{ ( m^{*}_{ s, \\gamma } ) } - f^{*} \\Vert _{ L^2 }^{2} \\le C_{ \\text{Risk} }\\mathcal {R}( s, \\gamma )$ with the rates $\\mathcal {R}( s, \\gamma ):=\\begin{dcases}\\frac{ \\overline{ \\sigma }^{2} s \\log p }{n},& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - \\frac{1}{ 2 \\gamma } },& \\beta ^{*} \\ \\gamma \\text{-sparse}.\\end{dcases}$ In Lemmas REF and REF , we show that the empirical quantities $ b_{m}^{2} $ and $ s_{m} $ satisfy bounds analogous to those stated in Proposition REF and Lemma REF , such that also $\\Vert \\widehat{F}^{ ( m^{*}_{ s, \\gamma } ) } - f^{*} \\Vert _{n}^{2} \\le C_{ \\text{Risk} }\\mathcal {R}( s, \\gamma )$ under the respective assumptions.", "In general, we cannot expect to improve the rates $ \\mathcal {R}( s, \\gamma ) $ either for the population or for the empirical risk, see Ing [12] and our discussion in Section REF .", "Consequently, under $ s $ -sparsity, we call a data-driven selection criterion $ \\widehat{m} $ adaptive to a parameter set $ T \\subset \\mathbb {N}_{0} $ for one of the two risks, if the choice $ \\widehat{m} $ attains the rate $ \\mathcal {R}( s, \\gamma ) $ simultaneously over all $ s $ -sparse signals $ f^{*} $ with $ s \\in T $ , without any prior knowledge of $ s $ .", "We call $ \\widehat{m} $ optimally adaptive, if the above holds for $ T = \\mathbb {N}_{0} $ .", "Under $ \\gamma $ -sparsity, we define adaptivity analogously with parameter sets $ T \\subset [ 1, \\infty ) $ instead.", "Ideally, $ \\widehat{m} $ would be optimally adaptive both under $ s $ - and $ \\gamma $ -sparsity, even without any prior knowledge of which class of sparsity assumption holds for a given signal.", "Ing [12] proposes to determine $ \\widehat{m} $ via a high-dimensional Akaike criterion, which is in fact optimally adaptive for the population risk under both sparsity assumptions.", "In order to compute $ \\widehat{m} $ , however, the full iteration path of Algorithm REF has to be computed as well.", "Our second main result states that optimal adaptation is also achievable by a computationally efficient procedure, given by the early stopping rule in Equation (REF ).", "The proof of Theorem REF is developed in Section .", "Theorem 1.8 (Optimal adaptation for the population risk) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), choose $ \\widehat{ \\sigma }^{2} $ in Equation (REF ) such that there is a constant $ C_{ \\text{Noise} } > 0 $ for which $| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} | \\le C_{ \\text{Noise} } \\mathcal {R}( s, \\gamma )$ with probability converging to one.", "Then, the population risk at the stopping time in Equation (REF ) with $C_{ \\tau } = c ( \\overline{ \\sigma }^{2} + \\rho ^{4} )$ for any $ c > 0 $ satisfies $\\Vert \\widehat{F}^{ ( \\tau ) } - f^{*} \\Vert _{ L^2 }^{2} \\le C_{ \\text{PopRisk} } \\mathcal {R}( s, \\gamma )$ with probability converging to one for a constant $ C_{ \\text{PopRisk} } > 0 $ .",
"Under the additional Assumptions [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), the bounds in Lemmas REF and REF also allow us to translate Theorem REF into optimal convergence rates by setting $ m = m^{*}_{ s, \\gamma } $ from Equation (REF ): Corollary 1.9 (Optimal adaptation for the empirical risk) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), the empirical risk at the stopping time in Equation (REF ) with $ C_{ \\tau } = C ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) $ with $ C \\ge 12 $ satisfies $\\Vert \\widehat{F}^{ ( \\tau ) } - f^{*} \\Vert _{n}^{2} \\le C_{ \\text{EmpRisk} } \\mathcal {R}( s, \\gamma )+| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |,$ with probability converging to one for a constant $ C_{ \\text{EmpRisk} } > 0 $ .", "In order for sequential early stopping to be adaptive over a parameter subset $ T $ from $ \\mathbb {N}_{0} $ or $ [ 1, \\infty ) $ , all of our results above require an estimator of the empirical noise level that attains the rates $ \\mathcal {R}( s, \\gamma ) $ for the absolute loss.", "In Proposition REF , we show that such an estimator does in fact exist, even for $ T $ equal to $ \\mathbb {N}_{0} $ and $ [ 1, \\infty ) $ .", "Together, this establishes that an optimally adaptive, fully sequential choice of the iteration in Algorithm REF is possible.", "This is a strong positive result, given the fact that in previous settings adaptation has only been possible for restricted subsets of parameters, see Blanchard et al.", "[3] and Celisse and Wahl [8].", "The two-step procedure, which we analyze in detail in Section , further robustifies this method against deviations in the stopping time and reduces the assumptions necessary for the noise estimation." ], [ "Further notation", "We overload both the notation of the empirical and the population inner products with functions and vectors respectively, i.e., for $f, g: \\mathbb {R}^{p} \\rightarrow \\mathbb {R},$ we set $\\Vert f \\Vert _{n}^{2}:=\\frac{1}{n} \\sum _{i = 1}^{n} f(X_{i})^{2}\\qquad \\text{ and } \\qquad \\langle f, g \\rangle _{n}:=\\frac{1}{n} \\sum _{i = 1}^{n} f(X_{i}) g(X_{i})$ and also, e.g., $\\langle \\varepsilon , f \\rangle _{n}:=\\frac{1}{n} \\sum _{ i = 1 }^{n} \\varepsilon _{i} f( X_{i} )\\qquad \\text{ or }\\qquad \\Vert Y \\Vert _{ L^2 }^{2}:=\\mathbb {E} Y_{1}^{2}.$ Further, as in Bühlmann [6], for $ j \\le p $ , we denote the $ j $ -th coordinate projection as $g_{j}(x): = x^{(j)}, x \\in \\mathbb {R}^{p},$ and vectors of dot products via $\\langle \\cdot , g_{J} \\rangle _{n}:=( \\langle \\cdot , g_{j} \\rangle _{n} )_{ j \\in J }\\in \\mathbb {R}^{ | J | }\\qquad \\text{ and }\\qquad \\langle \\cdot , g_{J} \\rangle _{ L^2 }:=( \\langle \\cdot , g_{j} \\rangle _{ L^2 } )_{ j \\in J }\\in \\mathbb {R}^{ | J | }$ for $ J \\subset \\lbrace 1, \\dots , p \\rbrace $ .", "This way, Equation (REF ) in Assumption [assCovB] (CovB) can be restated as $\\sup _{ | J | \\le M_{n}, k \\notin J }\\Vert \\Gamma _{J}^{-1} \\langle g_{k}, g_{J} \\rangle _{ L^{2} } \\Vert _{1}\\le C_{ \\text{Cov} }.$ Analogously to the population covariance matrix $\\Gamma = ( \\langle g_{j}, g_{k} \\rangle _{ L^2 } )_{ j, k \\le p }$ , we define the empirical covariance matrix $\\widehat{ \\Gamma }:=( \\langle g_{j}, g_{k} \\rangle _{n} )_{ j, k \\le p }$ .",
"Using the same notation for partial matrices as in Assumption [assCovB] (CovB) and $X^{ (J) }=( X_{i}^{ (j) } )_{ i \\le n, j \\in J }\\in \\mathbb {R}^{ n \\times | J | }$ , the projections $ \\widehat{ \\Pi }_{J} $ and $ \\Pi _{J} $ can be written as $\\widehat{ \\Pi }_{J}: \\mathbb {R}^{n} \\rightarrow \\mathbb {R}^{n},\\qquad \\widehat{ \\Pi }_{J} y:=X^{ (J) }\\widehat{ \\Gamma }_{J}^{-1}\\langle y, g_{J} \\rangle _{n},\\qquad \\text{ and } \\qquad \\Pi _{J}: L^{2}( \\mathbb {P}^{ X_{1} } ) \\rightarrow L^{2}( \\mathbb {P}^{ X_{1} } ),\\qquad \\Pi _{J} h:=g_{J}^{ \\top } \\Gamma _{J}^{-1} \\langle h, g_{J} \\rangle _{ L^2 }$ for $ J \\subset \\lbrace 1, \\dots , p \\rbrace $ .", "At points where we switch between a linear combination $ f $ of the columns of the design and its coefficients, we introduce the notation $ \\beta (f) $ .", "We use this, e.g., for the coefficients of the population residual function $\\beta ( ( I - \\Pi _{m} ) f^{*} )$ as in Equation (REF ).", "For coefficients $ \\beta \\in \\mathbb {R}^{p} $ , we also use the general set notation $\\beta _{J}: = ( \\beta _{j} )_{ j \\in J } \\in \\mathbb {R}^{ | J | }$ for $J \\subset \\lbrace 1, \\dots , p \\rbrace $ .", "Throughout the paper, variables $ c > 0 $ and $ C > 0 $ denote small and large constants respectively.", "They may change from line to line and can depend on constants defined in our assumptions.", "They are, however, independent of $ n, \\overline{ \\sigma }^{2} $ and $ \\rho ^{2} $ .", "Many statements in our results are formulated with a postpositioned “with probability converging to one”.", "This always refers to the whole statement including quantifiers.", "E.g., in Lemma REF , the result is to be read as: There exists an event with probability converging to one on which, for all iterations $ m \\ge 0 $ , the inequality $s_{m} \\le C \\overline{ \\sigma }^{2} m \\log (p) / n$ is satisfied."
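As a consistency check of this matrix representation, the following sketch verifies numerically that $\\widehat{ \\Pi }_{J} y = X^{ (J) } \\widehat{ \\Gamma }_{J}^{-1} \\langle y, g_{J} \\rangle _{n}$ coincides with the least squares projection onto the columns in $ J $ (the dimensions and the index set are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 100, 20
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)
J = [2, 5, 11]

XJ = X[:, J]
Gamma_J = (XJ.T @ XJ) / n                      # empirical covariance <g_j, g_k>_n
inner = (XJ.T @ y) / n                         # <y, g_J>_n
proj_formula = XJ @ np.linalg.solve(Gamma_J, inner)

proj_lstsq = XJ @ np.linalg.lstsq(XJ, y, rcond=None)[0]
print(np.allclose(proj_formula, proj_lstsq))   # True: both compute Pi_hat_J y
```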
], [ "An intuition for sequential early stopping", "Ideally, an adaptive choice $ \\widehat{m} $ of the iteration in Algorithm REF would approximate the classical oracle iteration $ m^{ \\mathfrak {o} } $ from Equation (REF ), which minimizes the empirical risk.", "The sequential stopping time $ \\tau $ , however, does not have a direct connection to $ m^{ \\mathfrak {o} } $ .", "In fact, its sequential definition guarantees that $ \\tau $ does not incorporate information about the squared bias $ b_{m}^{2} $ for iterations $ m > \\tau $ .", "Instead, $ \\tau $ mimics the balanced oracle iteration $m^{ \\mathfrak {b} }=m^{ \\mathfrak {b} }(f^{*}):=\\inf \\lbrace m \\ge 0: b_{m}^{2} \\le s_{m} \\rbrace .$ Fortunately, the empirical risk at $ m^{ \\mathfrak {b} } $ is essentially optimal up to a small discretization error, which opens up the possibility of sequential adaptation in the first place.", "Lemma 2.1 (Optimality of the balanced oracle) The empirical risk at the balanced oracle iteration $ m^{ \\mathfrak {b} } $ satisfies $\\Vert \\widehat{F}^{ ( m^{ \\mathfrak {b} } ) } - f^{*} \\Vert _{n}^{2}& \\le 2 \\Vert \\widehat{F}^{ ( m^{ \\mathfrak {o} } ) } - f^{*} \\Vert _{n}^{2}+\\Delta ( s_{ m^{ \\mathfrak {b} } } )=2 \\min _{ m \\ge 0 }\\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}+\\Delta ( s_{ m^{ \\mathfrak {b} } } ),$ where $\\Delta ( s_{m} ): = s_{m} - s_{ m - 1 }$ is the discretization error of the empirical stochastic error at $ m $ .", "[Proof]If $m^{ \\mathfrak {b} } > m^{ \\mathfrak {o} }$ , then the definition of $ m^{ \\mathfrak {b} } $ and the monotonicity of $ m \\mapsto b_{m}^{2} $ yield $\\Vert \\widehat{F}^{ ( m^{ \\mathfrak {b} } ) } - f^{*} \\Vert _{n}^{2}& =b_{ m^{ \\mathfrak {b} } }^{2}+s_{ m^{ \\mathfrak {b} } }\\le 2 b_{ m^{ \\mathfrak {b} } }^{2}+\\Delta ( s_{ m^{ \\mathfrak {b} } } )\\le 2 b_{ m^{ \\mathfrak {o} } }^{2} + \\Delta ( s_{ m^{ \\mathfrak {b} } } )\\\\& \\le 2 \\Vert \\widehat{F}^{ ( m^{ \\mathfrak {o} } ) } - f^{*} \\Vert _{n}^{2}+\\Delta ( s_{ m^{ \\mathfrak {b} } } ).$ Otherwise, if $m^{ \\mathfrak {b} } \\le m^{ \\mathfrak {o} }$ , then analogously, the monotonicity of $ m \\mapsto s_{m} $ yields $\\Vert \\widehat{F}^{ ( m^{ \\mathfrak {b} } ) } - f^{*} \\Vert _{n}^{2}\\le 2 s_{ m^{ \\mathfrak {b} } }\\le 2 \\Vert \\widehat{F}^{ ( m^{ \\mathfrak {o} } ) } - f^{*} \\Vert _{n}^{2}$ .", "The connection between $ \\tau $ and $ m^{ \\mathfrak {b} } $ can be seen by decomposing the squared residual norm $ r_{m}^{2} $ into $r_{m}^{2}& =\\Vert ( I - \\widehat{ \\Pi }_{m} ) f^{*} \\Vert _{n}^{2}+2 \\langle ( I - \\widehat{ \\Pi }_{m} ) f^{*}, \\varepsilon \\rangle _{n}+\\Vert ( I - \\widehat{ \\Pi }_{m} ) \\varepsilon \\Vert _{n}^{2}\\\\& =b_{m}^{2} + 2 c_{m} + \\Vert \\varepsilon \\Vert _{n}^{2} - s_{m}$ with the cross term $c_{m}: = \\langle ( I - \\widehat{ \\Pi }_{m} ) f^{*}, \\varepsilon \\rangle _{n},\\qquad m \\ge 0.$ Indeed, Equation (REF ) yields that the stopping condition $ r_{m}^{2} \\le \\kappa $ is equivalent to $b_{m}^{2} + 2 c_{m}& \\le s_{m} + \\kappa - \\Vert \\varepsilon \\Vert _{n}^{2}.$ Assuming that $ c_{m} $ can be treated as a lower order term, this implies that, up to the difference $ \\kappa - \\Vert \\varepsilon \\Vert _{n}^{2} $ , $ \\tau $ behaves like $ m^{ \\mathfrak {b} } $ .", "The connection between a discrepancy-type stopping rule and a balanced oracle was initially drawn in Blanchard et al.", "[3], [4].", "Whereas their oracle quantities were defined in terms of non-random population versions of bias and variance, 
"The connection between a discrepancy-type stopping rule and a balanced oracle was initially drawn in Blanchard et al.", "[3], [4].", "Whereas their oracle quantities were defined in terms of non-random population versions of bias and variance, ours have to be defined $ \\omega $ -pointwise on the underlying probability space.", "This is due to the fact that, even conditional on the design $ \\mathbf {X} $ , the squared bias $ b_{m}^{2} $ is still a random quantity, owing to the random selection of $ \\widehat{J}_{m} $ in Algorithm REF .", "This is a subtle but important distinction, which leads to a substantially different analysis." ], [ "A general oracle inequality", "In this section, we derive the first main result in Theorem REF .", "As in Blanchard et al.", "[3], the key ingredient is that via the squared residual norm $ r_{m}^{2}, m \\ge 0 $ , the stopped estimator $ \\widehat{F}^{ ( \\tau ) } $ can be compared with any other estimator $ \\widehat{F}^{ (m) } $ in empirical norm.", "Note that the statement in Lemma REF is completely deterministic.", "Lemma 2.2 (Empirical norm comparison) For any $ m \\ge 0 $ , the stopped estimator $ \\widehat{F}^{ ( \\tau ) } $ with $ \\tau $ from Equation (REF ) satisfies $\\Vert \\widehat{F}^{ ( \\tau ) } - \\widehat{F}^{ (m) } \\Vert _{n}^{2} \\le \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}+2 | c_{m} |+( \\kappa - \\Vert \\varepsilon \\Vert _{n}^{2} )\\mathbf {1} \\lbrace \\tau < m \\rbrace +( \\Vert \\varepsilon \\Vert _{n}^{2} + \\Delta ( r_{ \\tau }^{2} ) - \\kappa )\\mathbf {1} \\lbrace \\tau > m \\rbrace ,$ where $ c_{m} $ is the cross term from Equation (REF ) and $\\Delta ( r_{m}^{2} ): = r_{m}^{2} - r_{ m - 1 }^{2}$ is the discretization error of the squared residual norm at $ m $ .", "[Proof] Fix $ m \\ge 0 $ .", "We have $\\Vert \\widehat{F}^{ ( \\tau ) }- \\widehat{F}^{ (m) }\\Vert _{n}^{2} =\\Vert Y - \\widehat{F}^{ ( \\tau ) }+ \\widehat{F}^{ (m) } - Y\\Vert _{n}^{2}=r_{ \\tau }^{2}-2 \\langle ( I - \\widehat{ \\Pi }_{ \\tau } ) Y,( I - \\widehat{ \\Pi }_{ m } ) Y\\rangle _{n}+r_{m}^{2}=( r_{m}^{2} - r_{ \\tau }^{2} )\\mathbf {1} \\lbrace \\tau > m \\rbrace +( r_{ \\tau }^{2} - r_{m}^{2} )\\mathbf {1} \\lbrace \\tau < m \\rbrace .$ On $ \\lbrace \\tau > m \\rbrace $ , we use the definition of $ \\tau $ in Equation (REF ) to estimate $r_{m}^{2} - r_{ \\tau }^{2} \\le r_{m}^{2} - \\kappa + \\Delta ( r_{ \\tau }^{2} )=b_{m}^{2} + 2 c_{m} + \\Vert \\varepsilon \\Vert _{n}^{2} - s_{m}- \\kappa + \\Delta ( r_{ \\tau }^{2} )\\le \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}+2 c_{m}+\\Vert \\varepsilon \\Vert _{n}^{2} - \\kappa + \\Delta ( r_{ \\tau }^{2} ).$ On $ \\lbrace \\tau \\le m \\rbrace $ , analogously, we obtain $r_{ \\tau }^{2} - r_{m}^{2}\\le \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}-2 c_{m}+\\kappa - \\Vert \\varepsilon \\Vert _{n}^{2},$ which finishes the proof.", "In order to translate this norm comparison to an oracle inequality, it suffices to control the cross term and the discretization error of the residual norm.", "This is already possible under Assumption [assSubGaussianErrors] (SubGE).", "The proof of Lemma REF is deferred to Appendix .", "Lemma 2.3 (Bounds for the cross term and the discretization error) Under Assumption [assSubGaussianErrors] (SubGE), the following statements hold: With probability converging to one, the cross term satisfies $| c_{m} | \\le b_{m}\\sqrt{ \\frac{ 4 \\overline{ \\sigma }^{2} ( m + 1 ) \\log p }{n} }\\qquad \\text{ for all } m \\ge 0.$ With probability converging to one, the discretization error of the squared residual norm satisfies $\\Delta ( r_{m}^{2} ) \\le 2 b_{ m - 1 }^{2}+\\frac{ 8 \\overline{ \\sigma }^{2} m \\log p }{n}\\qquad \\text{ for all } m \\ge 1.$",
"Together, Lemmas REF and REF motivate the choice $ \\kappa = \\kappa _{m} $ in Equation (REF ), where the additional term $ C_{ \\tau } m \\log (p) / n $ accounts for the discretization error of the residual norm.", "With this choice of $ \\kappa $ , Lemma REF yields for any fixed $ m \\ge 0 $ that $\\Vert \\widehat{F}^{ ( \\tau ) } - f^{*} \\Vert _{n}^{2} \\le 2(\\Vert \\widehat{F}^{ ( \\tau ) } - \\widehat{F}^{ (m) } \\Vert _{n}^{2}+\\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}) \\le 2\\big (2 \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}+2 | c_{m} |+( \\kappa _{ \\tau } - \\Vert \\varepsilon \\Vert _{n}^{2} )\\mathbf {1} \\lbrace \\tau < m \\rbrace +(\\Vert \\varepsilon \\Vert _{n}^{2}+\\Delta ( r_{ \\tau }^{2} ) - \\kappa _{ \\tau })\\mathbf {1} \\lbrace \\tau > m \\rbrace \\big ).$ Under Assumption [assSubGaussianErrors] (SubGE), with probability converging to one, the estimates from Lemma REF then imply that on $ \\lbrace \\tau < m \\rbrace $ , $\\Vert \\widehat{F}^{ ( \\tau ) } - f^{*} \\Vert _{n}^{2} \\le 6 \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}+\\frac{ ( 8 \\overline{ \\sigma }^{2} + C_{ \\tau } ) m \\log p }{n}+\\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2},$ using that $ 4 ( m + 1 ) \\le 8 m $ .", "Analogously, on $ \\lbrace \\tau > m \\rbrace $ , we obtain $\\Vert \\widehat{F}^{ ( \\tau ) } - f^{*} \\Vert _{n}^{2} \\le 7 \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}+\\frac{ 8 \\overline{ \\sigma }^{2} m \\log p }{n}+\\frac{ ( 8 \\overline{ \\sigma }^{2} - C_{ \\tau } ) \\tau \\log p }{n}+\\Vert \\varepsilon \\Vert _{n}^{2} - \\widehat{ \\sigma }^{2},$ where we have used that $ b_{ \\tau - 1 }^{2} \\le b_{m}^{2} $ .", "Combining the events and taking the infimum over $ m \\ge 0 $ yields the result in Theorem REF .", "We reiterate that here, it is the $ \\omega $ -pointwise analysis that preserves the term $| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |$ in the result."
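On simulated data, this derivation can be illustrated by comparing the stopped risk with the right-hand side of the first inequality in Theorem REF ; the sketch below reuses the omp_path and early_stopped_omp functions from the sketches above and idealizes $ \\widehat{ \\sigma }^{2} = \\Vert \\varepsilon \\Vert _{n}^{2} $ , so that the noise estimation term vanishes (all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = p = 200
X = rng.standard_normal((n, p))
beta_star = np.zeros(p)
beta_star[:5] = 2.0
f_star = X @ beta_star
eps = rng.standard_normal(n)
Y = f_star + eps

sigma2_bar = 1.0                     # true subgaussian parameter of N(0, 1) noise
C_tau = 8 * sigma2_bar               # the theorem requires C_tau >= 8 * sigma_bar^2
sigma2_hat = np.mean(eps ** 2)       # idealized: the exact empirical noise level

tau, _, fit_tau = early_stopped_omp(X, Y, sigma2_hat, C_tau)
risk_tau = np.mean((fit_tau - f_star) ** 2)

_, fits = omp_path(X, Y, m_max=40)
risks = np.array([np.mean((F - f_star) ** 2) for F in fits])
m = np.arange(len(fits))
# Right-hand side of the oracle inequality (the noise estimation term is zero here).
rhs = np.min(7 * risks + (8 * sigma2_bar + C_tau) * m * np.log(p) / n)
print(f"tau = {tau}, risk at tau = {risk_tau:.4f}, oracle bound = {rhs:.4f}")
```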
], [ "Explicit bounds for empirical quantities", "In order to derive a convergence rate from Theorem REF , we need explicit bounds for the empirical quantities involved.", "These will also be essential for the analysis of the stopped population risk.", "We begin by establishing control over the most basic quantities.", "Lemma 2.4 (Uniform bounds in high probability) Under Assumptions [assSubGaussianErrors] (SubGE) and [assSubGaussianDesign] (SubGD), the following statements hold: There exists some $ C_{g} > 0 $ such that with probability converging to one, $\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^{2} }|\\le C_{g}\\sqrt{ \\frac{ \\rho ^{4} \\log p }{n} }.$ There exists some $ C_{ \\varepsilon } > 0 $ such that with probability converging to one, $\\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} |\\le C_{ \\varepsilon }\\sqrt{ \\frac{ \\overline{ \\sigma }^{2} \\log p }{n} }.$ There exists some $ C_{ \\Gamma } > 0 $ such that for any fixed $ c_{ \\text{Cov} } > 0 $ , $\\sup _{ | J | \\le c_{ \\text{Cov} } n / \\log p }\\frac{ \\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op} } }{ \\rho ^{2} }\\le c_{ \\text{Cov} } C_{ \\Gamma }$ with probability converging to one.", "There exist $ c_{ \\text{Cov} }, C_{ \\Gamma ^{-1} } > 0 $ such that with probability converging to one, $\\sup _{ | J | \\le c_{ \\text{Cov} } n / \\log p }\\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} }\\le C_{ \\Gamma ^{-1} }.$ Some version of this is needed in all results for $ L^{2} $ -boosting in high-dimensional models, see Lemma 1 in Bühlmann [6], Lemma A.2 in Ing and Lai [13] or Assumptions (A1) and (A2) in Ing [12].", "A proof for our setting is detailed in Appendix .", "Note that Lemma REF (iii) and (iv) improve the control to subsets $ J $ with cardinality of order up to $ n / \\log p $ from Lemma A.2 in Ing and Lai [13], where only subsets of order $ \\sqrt{ n / \\log p } $ could be handled.", "For our results, we only need that $M_{ n } \\le c_{ \\text{Cov} } n / \\log p$ for $ n $ sufficiently large, however, this could open up further research into the setting where $ \\gamma \\in ( 1 / 2, 1 ) $ , i.e., when $ m^{*}_{ s, \\gamma } $ can be of order $ n / \\log p $ , see also Barron et al.", "[1].", "From Lemma REF , we obtain that the empirical stochastic error $ s_{m} $ satisfies a similar upper bound as its population counterpart $ S_{m} $ .", "Lemma 2.5 (Bound for the empirical stochastic error) Under Assumptions [assSubGaussianErrors] (SubGE) and [assSubGaussianDesign] (SubGD), the empirical stochastic error satisfies $s_{m} & \\le C \\frac{ \\overline{ \\sigma }^{2} m \\log p }{n}\\qquad \\text{ for all } m \\ge 0$ with probability converging to one.", "[Proof]For $ m \\le c_{ \\text{iter} } n / \\log p $ , using the notation from Equation (REF ), we can write $s_{m}& =\\langle \\varepsilon , \\widehat{ \\Pi }_{m} \\varepsilon \\rangle _{n}=\\langle \\varepsilon ,g_{ \\widehat{J}_{m} }^{ \\top }\\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}\\langle \\varepsilon , g_{ \\widehat{J}_{m} } \\rangle _{n}\\rangle _{n}\\le \\Vert \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}\\langle \\varepsilon , g_{ \\widehat{J}_{m} } \\rangle _{n}\\Vert _{1}\\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} |\\\\& \\le \\sqrt{m}\\Vert \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}\\langle \\varepsilon , g_{ \\widehat{J}_{m} } \\rangle _{n}\\Vert _{2}\\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} 
|\\le \\Vert \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1} \\Vert _{ \\text{op} }m\\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} |^{2}.$ This yields the result for $ m \\le M_{n}^{2} $ by taking the supremum and applying the bounds from Lemma REF (ii) and (iv).", "For $ m > c_{ \\text{iter} } n / \\log p $ , the bound is also satisfied, since $s_{m} \\le \\Vert \\varepsilon \\Vert _{n}^{2}\\le C \\text{Var}( \\varepsilon _{1} )= C \\underline{ \\sigma }^{2}\\le C \\overline{ \\sigma }^{2}$ with probability one.", "In order to relate the empirical bias to the population bias, we use a norm change inequality from Ing [12], which we extend to the $ s $ -sparse setting.", "A complete derivation, which is based on the uniform Baxter inequality in (REF ), is stated in Appendix .", "Proposition 2.6 (Fast norm change for the bias) Under Assumptions [assSparse] (Sparse) and [assCovB] (CovB), the squared population bias $B_{m}^{2} = \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}$ satisfies the norm change inequality $& \\ \\ \\ \\ |\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}-\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}|\\\\& \\le C\\begin{dcases}( s + m )\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}-\\langle g_{j}, g_{k} \\rangle _{ L^2 }|,& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\big (\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}\\big )^{ \\frac{ 2 \\gamma - 2 }{ 2 \\gamma - 1 } }\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}-\\langle g_{j}, g_{k} \\rangle _{ L^2 }|,& \\beta ^{*} \\ \\gamma \\text{-sparse} \\\\\\end{dcases}$ for any $ m \\le M_{n} $ and $ n $ large enough.", "Proposition REF will appear again in analyzing the stopping condition (REF ) in Section .", "Initially, it guarantees that the squared empirical bias $ b_{m}^{2} $ satisfies the same bound as its population counterpart $ B_{m}^{2} $ .", "Lemma 2.7 (Bound for the empirical bias) Under assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), the squared empirical bias satisfies $b_{m}^{2}& \\le C\\begin{dcases}\\Vert f^{*} \\Vert _{ L^{2} }^{2}\\exp \\Big ( \\frac{ - c_{ \\text{Bias} } m }{s} \\Big )+\\Vert \\beta ^{*} \\Vert _{1}^{2}\\frac{ s \\log p }{n},& \\beta ^{*} \\ s \\text{-sparse}, \\\\m^{ 1 - 2 \\gamma }+\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) },& \\beta ^{*} \\ \\gamma \\text{-sparse} \\\\\\end{dcases}\\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\text{ for all } m \\ge 0$ with probability converging to one.", "[Proof]For a fixed $ m \\le M_{n} $ and $ n $ large enough, Proposition REF yields the estimate $b_{m}^{2}& =\\Vert ( I - \\widehat{ \\Pi }_{m} ) f^{*} \\Vert _{n}^{2}\\le \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}\\\\& \\le \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}\\\\& +C\\begin{dcases}( s + m )\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^2 }|,& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\big (\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}\\big )^{ \\frac{ 2 \\gamma - 2 }{ 2 \\gamma - 1 } }\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^2 }|,& \\beta ^{*} \\ \\gamma \\text{-sparse}.", "\\\\\\end{dcases}$ Applying Lemma REF (i) and Proposition REF then yields the result for 
$ m \\le M_{n} $ .", "The monotonicity of $ m \\mapsto b_{m}^{2} $ implies that the claim is also true for any $ m > M_{n} $ under $ \\gamma $ -sparsity.", "Under $ s $ -sparsity, $ b_{m}^{2} = 0 $ for all $m \\ge C_{ \\text{supp} } s$ with $ s = o( ( n / \\log p )^{ 1 / 3 } ) $ .", "This finishes the proof.", "Analogous to Equation (REF ), Lemmas REF and REF imply that at iteration $ m^{*}_{ s, \\gamma } $ from Equation (REF ), the empirical risk satisfies the bound $\\Vert \\widehat{F}^{ ( m^{*}_{ s, \\gamma } ) } - f^{*} \\Vert _{n}^{2} \\le C \\mathcal {R}( s, \\gamma )$ with probability converging to one.", "This yields the result in Corollary REF .", "For the empirical risk, we can also argue precisely that such a result cannot be improved upon: Remark 2.8 (Optimality of the rates) For simplicity, we consider $ p = n $ , a fixed, orthogonal (with respect to $ \\langle \\cdot , \\cdot \\rangle _{n} $ ) design matrix $ \\mathbf {X} $ and $\\varepsilon \\sim N( 0, \\sigma ^{2} I_{n} )$ .", "Conceptually, $ \\rho ^{ 2 } = 0 $ in this setting.", "When $ \\beta ^{*} $ is $ s $ -sparse, the squared empirical bias satisfies $b_{m}^{2} =\\Vert ( I - \\widehat{ \\Pi }_{m} ) f^{*} \\Vert _{n}^{2}\\ge \\Vert \\beta ( \\widehat{ \\Pi }_{m} f^{*} ) - \\beta ^{*} \\Vert _{2}^{2}\\ge \\underline{ \\beta }^{2}$ for any $ m \\le s $ .", "Similarly, when $ \\beta ^{*} $ is $ \\gamma $ -sparse, $b_{m}^{2} =\\Vert ( I - \\widehat{ \\Pi }_{m} ) f^{*} \\Vert _{n}^{2}\\ge \\Vert \\beta ^{*}_{ m \\text{-term} } - \\beta ^{*} \\Vert _{2}^{2},$ where $ \\beta ^{*}_{ m \\text{-term} } $ is the best $ m $ -term approximation of $ \\beta ^{*} $ with respect to the Euclidean norm.", "For $ \\beta ^{*} $ with polynomial decay as in Equation (REF ), the right-hand side in Equation (REF ) is larger than $c m^{ 1 - 2 \\gamma }$ , see e.g., Lemma A.3 in Ing [12].", "Conversely, for $ f^{*} = 0 $ , the greedy procedure in Algorithm REF guarantees that $s_{m} =\\Vert \\widehat{ \\Pi }_{m} \\varepsilon \\Vert _{n}^{2}=\\frac{1}{n}\\sum _{ j = 1 }^{m}Z_{ ( p - j + 1 ) }^{2}\\ge \\frac{ m Z_{ ( p - m + 1 ) }^{2} }{n},$ where $ Z_{ (j) } $ denotes the $ j $ -th order statistic of the $Z_{j}: = \\sqrt{n} \\langle X^{ (j) }, \\varepsilon \\rangle _{n}$ , $ j \\le p $ , which are again independent, identically distributed Gaussians with variance $ \\sigma ^{2} $ .", "Noting that $s = o( ( n / \\log p )^{ 1 / 3 } )$ , for both $ m = s $ and $m = ( n / ( \\sigma ^{2} \\log p ) )^{ 1 / ( 2 \\gamma ) }$ , the order statistic $Z_{ ( p - m + 1 ) }$ is larger than $c \\sqrt{ \\sigma ^{2} \\log p }$ with probability converging to one, see Lemma REF .", "Consequently, by distinguishing the cases where $ m $ is smaller or greater than $ s $ under $ s $ -sparsity and the cases where $ m $ is smaller or greater than $( n / ( \\sigma ^{2} \\log p ) )^{ 1 / ( 2 \\gamma ) }$ under $ \\gamma $ -sparsity, we obtain $\\inf _{ m \\ge 0 }\\sup _{ f^{*} }\\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}\\ge c\\begin{dcases}\\frac{ \\sigma ^{2} s \\log p }{ n },& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\Big ( \\frac{ \\sigma ^{2} \\log p }{ n } \\Big )^{ 1 - \\frac{1}{ 2 \\gamma } },& \\beta ^{*} \\ \\gamma \\text{-sparse}\\end{dcases}$ with probability converging to one, where the infimum is taken over either all $ f^{*} $ satisfying [assSparse] (Sparse) (i) or over all $ f^{*} $ satisfying [assSparse] (Sparse) (ii)." ], [ "Population risk analysis", "In this section, we analyze the stopped population risk $\\Vert \\widehat{F}^{ ( \\tau ) } - f^{*} \\Vert _{ L^2 }^{2}$ with $ \\tau $ from Equation (REF ).",
Equation (REF ).", "Unlike the empirical risk, the population risk cannot be expressed in terms of the residuals directly.", "Instead, we examine the stopping condition $ r_{m}^{2} \le \kappa _{m} $ , i.e., $b_{m}^{2} + 2 c_{m}& \le \widehat{ \sigma }^{2} - \Vert \varepsilon \Vert _{n}^{2}+\frac{ C_{ \tau } m \log p }{n} + s_{m}.$ We show separately that for a suitable choice of $ \widehat{ \sigma }^{2} $ , condition (REF ) guarantees that $ \tau $ stops neither too early nor too late.", "In combination, this yields Theorem REF .", "For the analysis, it becomes essential that we have access to the fast norm change for the population residual term $( I - \Pi _{m} ) f^{*}$ from Proposition REF , which guarantees that the empirical and population norms remain of the same size until the squared population bias reaches the optimal rate $ \mathcal {R}( s, \gamma ) $ .", "This control is not readily available from standard tools, e.g., Wainwright [26]." ], [ "No stopping too early", "The sequential procedure stops too early if the squared population bias $B_{m}^{2} = \Vert ( I - \Pi _{m} ) f^{*} \Vert _{ L^2 }^{2}$ has not reached the optimal rate of convergence yet, i.e., $ \tau < \tilde{m}_{ s, \gamma , G } $ , where $\tilde{m}_{ s, \gamma , G}:=\begin{dcases}\inf \lbrace m \ge 0: S \subset \widehat{J}_{m} \rbrace ,& \beta ^{*} \ s\text{-sparse}, \\\inf \Big \lbrace m \ge 0: \Vert ( I - \Pi _{m} ) f^{*} \Vert _{ L^2 }^{2}\le G\Big (\frac{ ( \overline{ \sigma }^{2} + \rho ^{4} ) \log p }{n}\Big )^{ 1 - \frac{1}{ 2 \gamma } }\Big \rbrace ,& \beta ^{*} \ \gamma \text{-sparse}\end{dcases}$ for any constant $ G > 0 $ .", "Note that under [assCovB] (CovB), for $ s $ -sparse $ \beta ^{*} $ , $B_{m}^{2}& =\Vert ( I - \Pi _{m} ) f^{*} \Vert _{ L^{2} }^{2}{\left\lbrace \begin{array}{ll}\ge c_{ \lambda } \Vert \tilde{\beta }\Vert _{2}^{2}\ge c_{ \lambda } \underline{ \beta }^{2},& m < \tilde{m}_{ s, \gamma , G }, \\= 0, & m \ge \tilde{m}_{ s, \gamma , G }.\end{array}\right.", "}$ Therefore, a condition for stopping too early is given by $\exists m < \tilde{m}_{ s, \gamma , G }:b_{m}^{2} + 2 c_{m}& \le \widehat{ \sigma }^{2} - \Vert \varepsilon \Vert _{n}^{2}+\frac{ C_{ \tau } m \log p }{n} + s_{m},$ where we may vary $ G > 0 $ .", "Using the norm change inequality for the bias in Proposition REF , we can derive that the left-hand side of condition (REF ) is of the same order as $ B_{m}^{2} $ , i.e., $b_{m}^{2} + 2 c_{m}& \ge \begin{dcases}\frac{ c_{ \lambda } }{8} \underline{ \beta }^{2},& \beta ^{*} \ s \text{-sparse}, \\\frac{G}{8}\Big (\frac{ ( \overline{ \sigma }^{2} + \rho ^{4} ) \log p }{n}\Big )^{ 1 - \frac{1}{ 2 \gamma } },& \beta ^{*} \ \gamma \text{-sparse}\end{dcases}$ with probability converging to one.", "At the same time, Proposition REF guarantees that $\tilde{m}_{ s, \gamma , G} \le m^{*}_{ s, \gamma }$ from Equation (REF ) with probability converging to one for $ G $ large enough.", "Therefore, if $ \widehat{ \sigma }^{2} $ does not substantially overestimate the empirical noise level and $ C_{ \tau } $ is chosen proportional to $ ( \overline{ \sigma }^{2} + \rho ^{4} ) $ , Lemma REF implies that the right-hand side of condition (REF ) satisfies $\widehat{ \sigma }^{2} - \Vert \varepsilon \Vert _{n}^{2}+\frac{ C_{ \tau } m \log p }{n}+s_{m}\le C \mathcal {R}( s, \gamma )$ with probability converging to one for a constant $ C > 0 $ independent of 
$ G $ .", "For $ G $ large enough, condition (REF ) can therefore only be satisfied on an event with probability converging to zero.", "Together, this yields the following result: Proposition 3.1 (No stopping too early) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), choose $ \\widehat{ \\sigma }^{2} $ in Equation (REF ) such that $\\widehat{ \\sigma }^{2}\\le \\Vert \\varepsilon \\Vert _{n}^{2} + C \\mathcal {R}( s, \\gamma )$ with probability converging to one.", "Then, for $ G > 0 $ large enough and any choice $C_{ \\tau } = c ( \\overline{ \\sigma }^{2} + \\rho ^{4} )$ in (REF ) with $ c \\ge 0 $ , the sequential stopping time satisfies $ \\tilde{m}_{ s, \\gamma , G} \\le \\tau < \\infty $ , with probability converging to one.", "On the corresponding event, it holds that $B_{ \\tau }^{2}=\\Vert ( I - \\Pi _{ \\tau } ) f^{*} \\Vert _{ L^2 }^{2}\\le \\begin{dcases}0,& \\beta ^{*} \\ s \\text{-sparse}, \\\\G\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - \\frac{1}{ 2 \\gamma } },& \\beta ^{*} \\ \\gamma \\text{-sparse}.\\end{dcases}$ The technical details of the proof are provided in Appendix .", "Proposition REF guarantees that $ \\tau $ controls the population bias on an event with probability converging to one.", "It is noteworthy that in order to do so, it is only required that $ \\widehat{ \\sigma }^{2} $ is smaller than the empirical noise level up to a lower order term and also the choice $ C_{ \\tau } = 0 $ in Equation (REF ) is allowed.", "We will further discuss this in Section ." ], [ "No stopping too late", "The sequential procedure potentially stops too late when the bound in Lemma REF no longer guarantees that the population stochastic error $ S_{ \\tau } $ is of optimal order, i.e., when there is no constant $ H > 0 $ such that $ \\tau $ can be bounded by $ H m^{*}_{ s, \\gamma } $ on a large event for $ m^{*}_{ s, \\gamma } $ from Equation (REF ).", "For stopping too late, we therefore consider the condition $ r_{m}^{2} > \\kappa _{m} $ , i.e., $b_{m}^{2} + 2 c_{m}>\\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2}+\\frac{ C_{ \\tau } m \\log p }{n}+s_{m}\\qquad \\text{ for } \\qquad m = H m^{*}_{ s, \\gamma }.$ For $ s $ -sparse $ \\beta ^{*} $ and $ H > 1 $ , the left-hand side vanishes with probability converging to one due to Proposition REF .", "For $ \\gamma $ -sparse $ \\beta ^{*} $ , the results in Lemma REF and Lemma REF (i) yield that the left-hand side of condition (REF ) is at most of order $\\sqrt{H} (( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log (p) / n)^{ 1 - 1 / ( 2 \\gamma ) }$ on an event with probability converging to one.", "At the same time, for $ \\widehat{ \\sigma }^{2} $ large enough and a choice $ C_{ \\tau } = c ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) $ with $ c > 0 $ , the right-hand side is at least of order $ H \\mathcal {R}( s, \\gamma ) $ also on an event with probability converging to one.", "Note that this requires a choice $ c > 0 $ , since Lemma REF only provides an upper bound for $ s_{m} $ .", "For $ H > 0 $ sufficiently large, this yields that condition (REF ) can only be satisfied on an event with probability converging to zero.", "We obtain the following result: Proposition 3.2 (No stopping too late) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), choose $ \\widehat{ \\sigma }^{2} $ in Equation (REF ) such that $\\widehat{ 
\\sigma }^{2}\\ge \\Vert \\varepsilon \\Vert _{n}^{2} - C \\mathcal {R}( s, \\gamma )$ with probability converging to one.", "Then, for any choice $C_{ \\tau } = c ( \\overline{ \\sigma }^{2} + \\rho ^{4} )$ in (REF ) with $ c > 0 $ , the sequential stopping time satisfies $\\tau \\le H m^{*}_{ s, \\gamma }$ with probability converging to one for some $ H > 0 $ large enough.", "On the corresponding event, it holds that $S_{ \\tau }& =\\Vert \\widehat{F}^{ ( \\tau ) } - \\Pi _{ \\tau } f^{*} \\Vert _{ L^{2} }^{2}\\le C H \\mathcal {R}( s, \\gamma ).$ The details of the proof can be found in Appendix .", "Proposition REF complements Proposition REF in that it guarantees that $ \\tau $ controls the stochastic error on an event with probability converging to one.", "Together, the two results imply Theorem REF .", "As in Theorem REF , it is the $ \\omega $ -pointwise analysis of the stopping condition preserves the term $| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |$ in the condition of the result." ], [ "Estimation of the empirical noise level", "For any real application, the results in Theorem REF and Theorem REF require access to a suitable estimator $ \\widehat{ \\sigma }^{2} $ of the empirical noise level $ \\Vert \\varepsilon \\Vert _{n}^{2} $ .", "In this section, we demonstrate that under reasonable assumptions, such estimators do in fact exist.", "In particular, we analyze the Scaled Lasso noise estimate $ \\widehat{ \\sigma }^{2} $ from Sun and Zhang [21] in our setting.", "It is noteworthy that our estimation target is the empirical noise level $ \\Vert \\varepsilon \\Vert _{n}^{2} $ rather than its almost sure limit $ \\text{Var}( \\varepsilon _{1} ) $ .", "Remark 4.1 (Estimating $ \\Vert \\varepsilon \\Vert _{ n }^{2} $ vs. 
$ \text{Var}( \varepsilon _{1} ) $ ) In general, it is easier to estimate $ \Vert \varepsilon \Vert _{n}^{2} $ than $ \text{Var}( \varepsilon _{1} ) $ .", "We illustrate this fact in the simple location-scale model $Y_{i} = \mu + \varepsilon _{i}, \qquad i = 1, \dots , n,$ where $ \varepsilon _{i} \sim N( 0, \sigma ^{2} ) $ i.i.d.", "Simple calculations yield that the standard noise estimator $\widehat{ \sigma }^{2}:=\Vert Y - \widehat{ \mu } ( 1, \dots , 1 )^{ \top } \Vert _{n}^{2}$ with $\widehat{ \mu } = n^{-1} \sum _{ i = 1 }^{n} Y_{i}$ satisfies $\mathbb {E}_{ \sigma ^{2} }| \widehat{ \sigma }^{2} - \Vert \varepsilon \Vert _{n}^{2} |\le C \frac{ \sigma ^{2} }{n}\qquad \text{ for all } \sigma ^{2} > 0,$ where the subscript $ \sigma ^{2} $ denotes the expectation with respect to $ N( 0, \sigma ^{2} ) $ .", "Conversely, for $ \text{Var}( \varepsilon _{1} ) = \sigma ^{2} $ , a convergence rate of $ n^{-1} $ can only be reached for the squared risk.", "Indeed, it follows from an application of the van Trees inequality, see Gill and Levit [11], that for any $ \delta ^{2} > 0 $ , $\inf _{ \widehat{ \sigma }^{2} }\sup _{ \sigma ^{2} > \delta ^{2} }\mathbb {E}_{ \sigma ^{2} } ( \widehat{ \sigma }^{2} - \sigma ^{2} )^{2}& \ge c \frac{ \delta ^{4} }{n}$ for $ n \in \mathbb {N} $ sufficiently large, where the infimum is taken over all measurable functions $ \widehat{ \sigma }^{2} $ .", "This indicates that for the absolute risk we cannot expect a rate faster than $ n^{ - 1 / 2 } $ .", "The fact that, in general, $ \Vert \varepsilon \Vert _{n}^{2} $ can be estimated at a faster rate than $ \text{Var}( \varepsilon _{1} ) $ , together with the $ \omega $ -pointwise analysis, is essential in circumventing a lower bound restriction as in Corollary 2.5 of Blanchard et al. [3].", "We briefly recall the approach in Sun and Zhang [21].", "The authors consider the joint minimizer $( \widehat{ \beta }, \widehat{ \sigma } )$ of the Scaled Lasso objective $L_{ \lambda _{0} }( \beta , \sigma ):=\frac{ \Vert Y - X \beta \Vert _{2}^{2} }{ 2 n \sigma }+\frac{ \sigma }{2}+\lambda _{0} \Vert \beta \Vert _{1},\qquad \beta \in \mathbb {R}^{p}, \sigma > 0,$ where $ \lambda _{0} $ is a penalty parameter chosen by the user.", "Since $ L_{ \lambda _{0} } $ is jointly convex in $ ( \beta , \sigma ) $ , the minimizer can be computed efficiently.", "For $ \lambda > 0 $ and $ \xi > 1 $ , they set $\mu ( \lambda , \xi ):=( \xi + 1 )\min _{ J \subset \lbrace 1, \dots , p \rbrace }\inf _{ \nu \in ( 0, 1 ) }\max \Big (\frac{ \Vert \beta ^{*}_{ J^{c} } \Vert _{1} }{ \nu },\frac{ \lambda | J | / ( 2 ( 1 - \nu ) ) }{ \kappa ^{2}( ( \xi + \nu ) / ( 1 - \nu ), J ) }\Big ),$ with the compatibility factor $\kappa ^{2}( \xi , J ):=\min \Big \lbrace \frac{ \Vert X \Delta \Vert _{2}^{2} | J | }{ n \Vert \Delta _{J} \Vert _{1}^{2} }:\Vert \Delta _{ J^{c} } \Vert _{1} \le \xi \Vert \Delta _{J} \Vert _{1}\Big \rbrace ,$ see Bühlmann and van de Geer [7].", "In Theorem 2 of [21], Sun and Zhang then show that $\max \Big (1 - \frac{ \widehat{ \sigma } }{ \Vert \varepsilon \Vert _{n} },1 - \frac{ \Vert \varepsilon \Vert _{n} }{ \widehat{ \sigma } }\Big )& \le \alpha ^{*}: = \frac{\lambda _{0}\mu ( \Vert \varepsilon \Vert _{n} \lambda _{0}, \xi )}{ \Vert \varepsilon \Vert _{n} }$ on the event $\Omega _{ \text{Noise} }:=\Big \lbrace \sup _{ j \le p } \langle g_{j}, 
\\varepsilon \\rangle _{n}\\le ( 1 - \\alpha ^{*} )\\frac{ \\xi - 1 }{ \\xi + 1 }\\Vert \\varepsilon \\Vert _{n} \\lambda _{0}\\Big \\rbrace .$ In our setting, due to Lemma REF (i), $ \\Omega _{ \\text{Noise} } $ is an event with probability converging to one when $ \\lambda _{0} $ is of order $ \\sqrt{ \\log (p) / n } $ .", "Further, it can be shown that in this case, $\\mu ( \\Vert \\varepsilon \\Vert _{n} \\lambda _{0}, \\xi )& \\le C\\begin{dcases}s \\sqrt{ \\frac{ \\Vert \\varepsilon \\Vert _{n}^{2} \\log p }{n} },& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\Big (\\frac{ \\Vert \\varepsilon \\Vert _{n}^{2} \\log p }{n}\\Big )^{ \\frac{1}{2} - \\frac{1}{ 2 \\gamma } },& \\beta ^{*} \\ \\gamma \\text{-sparse}, \\\\\\end{dcases}$ as long as the compatibility factor from Equation (REF ) is strictly positive.", "Usually this can be guaranteed on an event with high probability: When the rows of design matrix $ \\mathbf {X} $ are given by $( X_{i} )_{ i \\le n } \\sim N( 0, \\Gamma )$ i.i.d., e.g., Theorem 7.16 in Wainwright [26] guarantees that the compatibility factor satisfies $\\kappa ^{2}( \\xi , J )& \\ge \\frac{ \\lambda _{ \\min }( \\Gamma ) }{ 16 }\\qquad \\text{whenever }| J | \\le \\frac{ ( 1 + \\xi )^{ - 2 } }{ 800 }\\frac{ \\lambda _{ \\min }( \\Gamma ) }{ \\max _{ j \\le p } \\Gamma _{ j j } }\\frac{n}{ \\log p }$ with probability larger than $1 - e^{ - n / 32 } / ( 1 - e^{ - n / 32 } )$ .", "The combination of these results allow to formulate Proposition REF , which is applicable to the setting in Corollary REF and Theorem REF .", "The proof is given Appendix .", "Proposition 4.2 (Fast noise estimation) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse) and [assCovB] (CovB) with Gaussian design $( X_{i} )_{ i \\le n } \\sim N( 0, \\Gamma )$ i.i.d., set $ \\xi > 1 $ and $\\lambda _{0}=C_{ \\lambda _{0} } ( \\xi + 1 ) / ( \\xi - 1 ) \\sqrt{ \\log (p) / n }$ with $C_{ \\lambda _{0} }\\ge 2 C_{ \\varepsilon } \\overline{ \\sigma } / \\underline{ \\sigma }$ .", "Then, with probability converging to one, the Scaled lasso noise estimator $ \\widehat{ \\sigma }^{2} $ satisfies $| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |& \\le \\frac{ 2 \\Vert \\varepsilon \\Vert _{n}^{2} \\alpha ^{*} }{ ( 1 - \\alpha ^{*} )^{2} }\\qquad \\text{ with } \\qquad \\alpha ^{*}=\\frac{\\lambda _{0}\\mu ( \\Vert \\varepsilon \\Vert _{n} \\lambda _{0}, \\xi )}{ \\Vert \\varepsilon \\Vert _{n} }.$ In particular, for any fixed choice $ \\xi > 1 $ , this implies that with probability converging to one, $| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |& \\le C\\begin{dcases}\\frac{ \\overline{ \\sigma }^{2} s \\log p }{n},& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\Big (\\frac{ \\overline{ \\sigma }^{2} \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) },& \\beta ^{*} \\ s \\text{-sparse}.\\end{dcases}$ The rates in Proposition REF match the rates in Corollary REF and Theorem REF .", "Under the respective assumptions, the combination of the stopping time $ \\tau $ from Equation (REF ) with the estimator $ \\widehat{ \\sigma }^{2} $ therefore provides a fully data-driven sequential procedure which guarantees optimal adaptation to the unknown sparsity parameters $ s \\in \\mathbb {N}_{0} $ or $ \\gamma \\in [ 1, \\infty )$ ." 
], [ "Numerical simulations and a two-step procedure", "In this section, we illustrate our main results by numerical simulations.", "Here, we focus on the noise estimation aspects of our results in the regression setting with uncorrelated design.", "A more extensive simulation study, including correlated design and the classification setting from Example REF (b) with heteroscedastic error terms is provided in Appendix .", "The simulations confirm our theoretical results but also reveal some shortcomings stemming from the sensitivity of our method with regard to the noise estimation.", "We address this by proposing a two-step procedure that combines early stopping with an additional model selection step.", "Figure: Relative efficiencies for early stopping with the true empiricalnoise level.", "Figure: Relative efficiencies for early stopping with the estimated empiricalnoise level for λ 0 =log(p)/n \\lambda _{0} = \\sqrt{ \\log (p) / n } ." ], [ "Numerical simulations for the main results", "All of our simulations are based on 100 Monte-Carlo runs of a model in which both sample size $ n $ and parameter size $ p $ are equal to 1000.", "We examine signals $ f $ with coefficients $ \\beta $ corresponding to the two sparsity concepts in Assumption [assSparse] (Sparse).", "We consider the $ s $ -sparse signals $\\beta ^{ (15) }_{j} & = \\mathbf {1} \\lbrace 1 \\le j \\le 5 \\rbrace + 0.5 \\cdot \\mathbf {1} \\lbrace 6 \\le j \\le 10 \\rbrace + 0.25 \\cdot \\mathbf {1} \\lbrace 11 \\le j \\le 15 \\rbrace , \\\\\\beta ^{ (60) }_{j} & = \\mathbf {1} \\lbrace 1 \\le j \\le 20 \\rbrace + 0.5 \\cdot \\mathbf {1} \\lbrace 21 \\le j \\le 40 \\rbrace + 0.25 \\cdot \\mathbf {1} \\lbrace 41 \\le j \\le 60 \\rbrace , \\\\\\beta ^{ (90) }_{j} & = \\mathbf {1} \\lbrace 1 \\le j \\le 30 \\rbrace + 0.5 \\cdot \\mathbf {1} \\lbrace 31 \\le j \\le 60 \\rbrace + 0.25 \\cdot \\mathbf {1} \\lbrace 61 \\le j \\le 90 \\rbrace $ for $s \\in \\lbrace 15, 60, 90 \\rbrace $ and the $ \\gamma $ -sparse signals $\\beta ^{ (3) }_{j}: = j^{ - 3 }, \\quad \\beta ^{ (2) }_{j}: = j^{ - 2 }, \\quad \\beta ^{ (1) }_{j}: = j^{ - 1 }, \\qquad j \\le p$ for $ \\gamma \\in \\lbrace 3, 2, 1 \\rbrace $ .", "Note that the definition of Algorithm REF allows to consider decreasingly ordered coefficients without loss of generality.", "In a second step, we normalize all signals to the same $ \\ell ^{1} $ -norm of value 10.", "Since the Scaled Lasso penalizes the $ \\ell ^{1} $ -norm, this is necessary to make the noise estimations comparable between simulations.", "For both the covariance structure of the design and the noise terms $\\varepsilon $ , we consider independent standard normal variables.", "For the early stopping time $ \\tau $ in Equation (REF ), we focus on the noise level estimate $ \\widehat{ \\sigma }^{2} $ .", "For our theoretical results, $ C_{ \\tau } > 0 $ was needed to control the discretization error $ \\Delta ( r_{ \\tau }^{2} ) $ of the residual norm and to counteract the fact that Lemma REF does not provide a lower bound of the same size.", "Since empirically, both of these aspect do not pose any problems, it seems warranted to set $ C_{ \\tau } = 0 $ and exclude this hyperparameter from our simulation study.", "The simulation in Figure REF of Section is based on $ \\beta ^{ (2) } $ .", "The estimated noise result used a penalty $\\lambda _{0} = \\sqrt{ \\log (p) / n }$ for the Scaled Lasso.", "The two-step procedure used $\\lambda _{0} = \\sqrt{ 0.5 \\log (p) / n }$ and $ C_{ \\text{AIC} } = 2 $ .", "The 
"As a baseline for the potential performance of sequential early stopping, we consider the setting in which we have access to the true empirical noise level and set $ \widehat{ \sigma }^{2} = \Vert \varepsilon \Vert _{n}^{2} $ .", "As a performance metric for a simulation run, we consider the relative efficiency $\min _{ m \ge 0 } \Vert \widehat{F}^{ (m) } - f^{*} \Vert _{n} / \Vert \widehat{F}^{ ( \tau ) } - f^{*} \Vert _{n},$ which can be interpreted as a proxy for the constant $ C_{ \text{Risk} }^{ - 1 / 2 } $ in Theorem REF .", "Figure: Boxplots of the estimation errors for the noise estimation with $\lambda _{0} = \sqrt{ \log (p) / n }$ , together with the classical oracle risk.", "We choose this quantity rather than its inverse because it makes for clearer plots.", "Values bounded away from zero indicate optimal adaptation up to a constant.", "Values closer to one indicate better estimation overall.", "We report boxplots of the relative efficiencies in Figure REF .", "The values are clearly bounded away from zero and close to one, indicating that with access to the true empirical noise level $ \Vert \varepsilon \Vert _{n}^{2} $ , the sequential early stopping procedure achieves optimal adaptation simultaneously over different sparsity levels for both sparsity concepts from Assumption [assSparse] (Sparse).", "This is expected from the results in Theorem REF , Corollary REF and Theorem REF .", "The oracles $ m^{ \mathfrak {o} } $ and $ m^{ \mathfrak {b} } $ vary very little over simulation runs.", "Their medians, in the same order as the signals displayed in Figure REF , are given by $ (4, 7, 14, 15, 45, 53) $ and $ (5, 10, 31, 15, 51, 66) $ , respectively.", "This is matched almost exactly by the median sequential early stopping times $ (5, 9, 23, 15, 44, 52) $ .", "In our second simulation, we estimate the empirical noise level using the Scaled Lasso estimator $ \widehat{ \sigma }^{2} $ from Section .", "For the penalty parameter, we opt for the choice $\lambda _{0} = \sqrt{ \log (p) / n },$ which tended to have the best performance in the simulation study in Sun and Zhang [21].", "Note that the choice of $ \lambda _{0} $ in Proposition REF is scale invariant, see Proposition 1 in Sun and Zhang [21].", "We report boxplots of the estimation error $ | \widehat{ \sigma }^{2} - \Vert \varepsilon \Vert _{n}^{2} | $ in Figure REF together with the squared estimation error $ \Vert \widehat{F}^{ ( m^{ \mathfrak {o} } ) } - f^{*} \Vert _{n}^{2} $ at the classical oracle.", "The results indicate that the two quantities are of the same order, which is the essential requirement for optimal adaptation in Theorem REF and Theorem REF .", "This is borne out by the relative efficiencies in Figure REF , which remain bounded away from zero.", "For the signals $ \beta ^{ (3) }, \beta ^{ (2) }, \beta ^{ (1) } $ , the quality of estimation is comparable to that in Figure REF .", "For the signals $ \beta ^{ ( 15 ) }, \beta ^{ ( 60 ) }, \beta ^{ ( 90 ) } $ , the quality of estimation decreases, which matches the fact that for these signals, the noise estimation deviates more from the risk at the classical oracle $ m^{ \mathfrak {o} } $ .", "The median stopping times $ (4, 6, 14, 22, 20) $ indicate that for these signals, we tend to stop too early.", "Nevertheless, in our simulation, early stopping achieves the optimal estimation risk up to a constant of at most eight.",
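"For concreteness, the stopped procedure evaluated here can be sketched in a few lines of numpy.", "The following is a schematic stand-in for Algorithm REF combined with the sequential stopping rule $ r_{m}^{2} \le \widehat{ \sigma }^{2} + C_{ \tau } m \log (p) / n $ (used with $ C_{ \tau } = 0 $ in our simulations); it is not the exact code behind the figures, and the least-squares refit plays the role of the projection $ \widehat{ \Pi }_{m} $ :

```python
import numpy as np

def greedy_path(X, y, m_max):
    """Orthogonal greedy path: squared residual norms r_m^2 = ||y - F_m||_n^2
    and fitted values F_m for m = 0, ..., m_max."""
    n = X.shape[0]
    active, fits = [], [np.zeros(n)]
    residual = y.copy()
    r2 = [residual @ residual / n]
    for _ in range(m_max):
        # select the covariate most correlated with the current residual;
        # the residual is orthogonal to the active columns, so no re-selection occurs
        active.append(int(np.argmax(np.abs(X.T @ residual))))
        # orthogonal projection: refit by least squares on the active set
        coef, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        fit = X[:, active] @ coef
        residual = y - fit
        fits.append(fit)
        r2.append(residual @ residual / n)
    return np.array(r2), fits

def stopping_time(r2, sigma_hat2, n, p, c_tau=0.0):
    """First m with r_m^2 <= sigma_hat2 + c_tau * m * log(p) / n."""
    for m, val in enumerate(r2):
        if val <= sigma_hat2 + c_tau * m * np.log(p) / n:
            return m
    return len(r2) - 1  # fallback: end of the computed path
```
", "Given the true signal vector $ f^{*} $ , the relative efficiency of a run is then the square root of the ratio between the minimal squared risk along the path and the squared risk at the stopped iteration, both computed from the stored fits.",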
"Overall, this confirms the major claim of this paper that it is possible to achieve optimal adaptation by a fully data-driven, sequential early stopping procedure.", "The computation times in Table REF show an improvement of an order of magnitude in the computational complexity relative to exhaustive model selection methods such as the high-dimensional Akaike criterion from Ing [12] or the cross-validated Lasso.", "Figure: Relative efficiencies for early stopping with the estimated empirical noise level for $\lambda _{0} = \sqrt{ 0.5 \log (p) / n }$ .", "Figure: Relative efficiencies for the Scaled Lasso with $\lambda _{0} = \sqrt{ \log (p) / n }$ .", "Experimenting with different simulation setups, however, also reveals some shortcomings of our methodology.", "The performance of the early stopping procedure is fairly sensitive to the noise estimation.", "This can already be surmised by comparing Figures REF and REF , and Theorem REF suggests that the risk of the estimation method is additive in the estimation error of the empirical noise level.", "In Figure REF , we present the relative efficiencies when the empirical noise level is estimated with the penalty $ \lambda _{0} = \sqrt{ 0.5 \log (p) / n } $ .", "The median stopping times $( 17, 18, 25, 21, 39, 41 )$ indicate that changing the factor in $ \lambda _{0} $ from 1 to $ 0.5 $ already makes the difference between stopping slightly too early and stopping slightly too late.", "While the relative efficiencies show that this does not make our method unusable, ideally this sensitivity should be reduced.", "Further, the joint minimization of the Scaled Lasso objective (REF ) always includes computing an estimator $ \widehat{ \beta } $ of the coefficients.", "In particular, Corollary 1 in Sun and Zhang [21] also guarantees optimal adaptation of this estimator, at least under $ s $ -sparsity.", "Ex ante, it is therefore unclear why one should apply our stopped boosting algorithm on top of the noise estimate rather than just using the Scaled Lasso estimator of the signal.", "In Figure REF , we report the relative efficiencies $\min _{ m \ge 0 }\Vert \widehat{F}^{ (m) } - f^{*} \Vert _{n} /\Vert \mathbf {X} \widehat{ \beta } - f^{*} \Vert _{n}$ of the Lasso estimator for the same penalty parameter $\lambda _{0} = \sqrt{ \log (p) / n }$ which we considered for the initial noise estimation.", "Note that this quantity can potentially be larger than one, in case the Lasso risk is smaller than the risk at the classical oracle boosting iteration $ m^{ \mathfrak {o} } $ .", "Sequential early stopping slightly outperforms the Scaled Lasso estimator, which we also confirmed in other experiments.", "Naturally, it shares the sensitivity to the choice of $ \lambda _{0} $ .", "Overall, the stopped boosting algorithm would have to produce results more stable and closer to the benchmark in Figure REF to warrant a clear preference.", "We address these issues in the following section." 
], [ "An improved two step procedure", "We aim to make our methodology more robust to deviations of the estimated empirical noise level and, at the same time, improve its estimation quality in order to match the results from Figure REF more closely.", "Motivated by Blanchard et al.", "[3], we propose a two-step procedure combining early stopping with an additional model selection Figure: Relative efficiencies for the two-step procedure.", "Table: Computation times for different methods in seconds.", "Figure: Relative efficiencies for the Akaike criterion from Ing owith C HDAIC =2.25 C_{ \\text{HDAIC} } = 2.25 .", "Figure: Relative efficiencies for the Lasso based on 5-fold cross-validation.", "step based on the high-dimensional Akaike-information criterion $\\widehat{m}_{ \\text{AIC} }: = \\operatornamewithlimits{arg\\!\\min }_{ m \\ge 0 } \\text{AIC}(m)\\qquad \\text{with} \\qquad \\text{AIC}(m):=r_{m}^{2} + \\frac{ C_{ \\text{AIC} } m \\log p }{n},\\quad m \\ge 0.$ This criterion slightly differs from the one introduced in Ing [12], which is necessary for our setting, see Remark REF .", "In combination, we select the iteration $\\tau _{ \\text{two-step} }: = \\operatornamewithlimits{arg\\!\\min }_{ m \\le \\tau } \\text{AIC}(m)\\qquad \\text{ with } \\qquad \\tau \\text{ from Equation (\\ref {eq_1_SequentialEarlyStoppingTimeKappaM})}.$ Since this only requires $ \\tau $ additional comparisons of $ \\text{AIC}(m) $ for $ m \\le \\tau $ , the two-step procedure has the same computational complexity as the estimator $ \\widehat{F}^{ ( \\tau ) } $ .", "The two-step procedure enables us to directly address the sensitivity of $ \\tau $ to the noise estimation.", "Our results for fully sequential early stopping in Theorem REF , Corollary REF and Theorem REF require estimating the noise level with the optimal rate $ \\mathcal {R}( s, \\gamma ) $ .", "Conversely, assuming that the high-dimensional Akaike criterion selects an iteration such that its risk is of optimal order among the iterations $ m \\le \\tau $ , the two-step procedure only requires an estimate $ \\widehat{ \\sigma }^{2} $ of $ \\Vert \\varepsilon \\Vert _{n}^{2} $ which has a slightly negative bias.", "Proposition REF then guarantees that $\\tau \\ge \\tilde{m}_{ s, \\gamma , G }$ from Equation (REF ) for some $ G > 0 $ with probability converging to one, i.e., the indices $ m \\le \\tau $ contain an iteration with risk of order $ \\mathcal {R}( s, \\gamma ) $ .", "Moreover, the second selection step guarantees that as long as this is satisfied, any imprecision in $ \\tau $ only results in a slightly increased or decreased computation time rather than changes in the estimation risk.", "We can establish the following theoretical guarantee: Theorem 5.1 (Two-step procedure) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), choose $ \\widehat{ \\sigma }^{2} $ in Equation (REF ) such that $\\widehat{ \\sigma }^{2}\\le \\Vert \\varepsilon \\Vert _{n}^{2} + C \\mathcal {R}( s, \\gamma )$ with probability converging to one.", "Then, for any choice $C_{ \\tau } = c ( \\overline{ \\sigma }^{2} + \\rho ^{4} )$ in (REF ) with $ c \\ge 0 $ and $ C_{ \\text{AIC} } = C ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) $ with $ C > 0 $ large enough, the two-step procedure satisfies that with probability converging to one, $\\tau _{ \\text{two-step} } \\ge \\tilde{m}_{ s, \\gamma , G }$ from Equation (REF ) for some $ G > 0 $ .", "On the corresponding event, $\\Vert 
\\widehat{F}^{ ( \\tau _{ \\text{two-step} } ) } - f^{*} \\Vert _{ L^2 }^{2}& \\le C \\mathcal {R}( s, \\gamma ).$ Due to our $ \\omega $ -pointwise analysis on high probability events, the proof of Theorem REF is simpler than the result for the two-step procedure in Proposition 4.2 of Blanchard et al.", "[3].", "In particular, it does not require the analysis of probabilities conditioned on events $ \\lbrace \\tau \\le m \\rbrace $ .", "The details are in Appendix .", "We note two important technical aspects of the two-step procedure: Remark 5.2 (Two-step procedure) Under Assumptions [assSubGaussianErrors] (SubGE), [assSparse] (Sparse), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), taken by itself, the Akaike criterion in (REF ) satisfies $\\Vert \\widehat{F}^{ ( \\widehat{m}_{ \\text{AIC} } ) } - f^{*}\\Vert _{ L^{2} }^{2}\\le C \\mathcal {R}( s, \\gamma )$ .", "The proof of this statement is included in the proof of Theorem REF .", "The criterion in Ing [12] minimizes $\\text{HDAIC}(m):=r_{m}^{2} \\Big ( 1 + \\frac{ C_{ \\text{HDAIC} } m \\log p }{n} \\Big ),\\qquad 0 \\le m \\le M_{n}.$ Including iterations $ m > M_{n} $ potentially makes it unreliable, since $ r_{m}^{2} \\rightarrow 0 $ for $ m \\rightarrow \\infty $ .", "In particular, $ \\text{HDAIC}(n) = 0 $ .", "The fact that under the assumptions of Theorem REF , we have no upper bound $ \\tau \\le M_{n} $ therefore makes it necessary to formulate the new criterion in Equation (REF ).", "Under the assumptions of Theorem REF , there is no upper bound for $ \\tau $ .", "For any noise estimate $ \\widehat{ \\sigma }^{2} $ with $\\widehat{ \\sigma }^{2}\\ge \\Vert \\varepsilon \\Vert _{n}^{2} - C \\mathcal {R}( s, \\gamma )$ , however, Proposition REF guarantees that $\\tau \\le H m^{*}_{ s, \\gamma }$ for some $ H > 0 $ with probability converging to one.", "The two-step procedure therefore retains the computational advantages of early stopping.", "For simulations, this suggests choosing the smaller penalty parameter $\\lambda _{0} = \\sqrt{ 0.5 \\log (p) / n }$ in the Scaled Lasso objective (REF ), which puts a negative bias on $ \\widehat{ \\sigma }^{2} $ , and then applying the two-step procedure.", "The proof of Theorem REF shows that the penalty term $C_{ \\text{AIC} } m \\log (p) / n $ in Equation (REF ) essentially has to dominate the empirical stochastic error $ s_{m} $ .", "In accordance with Lemma REF , we therefore choose $ C_{ \\text{AIC} } = 2 \\overline{ \\sigma }^{2} = 2 $ .", "Compared to Figure REF , the results in Figure REF show that the high-dimensional Akaike criterion corrects the instances where $ \\tau $ stops later than the oracle indices.", "Empirically, the method attains the risk at the pointwise classical oracle $ m^{ ( \\mathfrak {o} ) } $ up to a factor $ C_{ \\text{Risk} } = 2 $ .", "The median two-step times are given by $ (4, 7, 12, 15, 37, 37) $ .", "Overall, performance of the two-step procedure comes very close to the benchmark results in Figure REF .", "It is much better than that of the Scaled Lasso and at least as good as that of the full Akaike selection and the default method LassoCV from the python library scikit-learn [18] based on 5-fold cross-validation, see Figures REF and REF .", "Since we have intentionally biased our noise estimate and iterate slightly further, the computation times of the two-step procedure are slightly larger than those of purely sequential early stopping.", "Since they are still much lower than those of the full Akaike selection or the cross-validated 
Lasso, however, the two-step procedure maintains most of the advantages from early stopping and yet genuinely achieves the performance of exhaustive selection criteria." ], [ "Proofs for the main results", "[Proof of Proposition REF (No stopping too early)]Proposition REF guarantees that $ \\tilde{m}_{ s, \\gamma , G} \\le m^{*}_{ s, \\gamma } $ with probability converging to one given that $ G $ is large enough.", "We start by analyzing the left-hand side of the condition (REF ).", "By Lemma REF (i), we have $b_{m}^{2} + 2 c_{m}& \\ge b_{m}^{2}\\Big (1-\\sqrt{\\frac{ 16 \\overline{ \\sigma }^{2} ( m + 1 ) \\log p }{ n b_{m}^{2} }}\\Big )\\qquad \\text{ for all } m < \\tilde{m}_{ s, \\gamma , G}$ with probability converging to one.", "We estimate $ b_{m}^{2} $ from below.", "By a standard convexity estimate, we can write $2 b_{m}^{2}& =2 \\Vert ( I - \\widehat{ \\Pi }_{m} ) f^{*} \\Vert _{n}^{2}\\ge \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}-2 \\Vert ( \\widehat{ \\Pi }_{m} - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}.$ For the first term in Equation (REF ), we distinguish between the two possible sparsity assumptions.", "Under $ \\gamma $ -sparsity, Proposition REF and Lemma REF imply that with probability converging to one, for any $m < \\tilde{m}_{ s, \\gamma , G}$ , $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}& \\ge \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}\\Big (1-C \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{ \\frac{ - 2 }{ 2 \\gamma - 1 } }\\sqrt{ \\frac{ \\rho ^{4} \\log p }{n} }\\Big )\\\\& \\ge \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}\\Big (1-\\frac{C}{ G^{ 1 / ( 2 \\gamma - 1 ) } }\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ \\frac{1}{2} - \\frac{1}{ 2 \\gamma } }\\Big ).$ By increasing $ G > 0 $ , the term in the outer parentheses becomes larger than $ 1 / 2 $ , which yields $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}& \\ge \\frac{G}{2}\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - \\frac{1}{ 2 \\gamma } }.$ Under $ s $ -sparsity, analogously with probability converging to one, for any $m < \\tilde{m}_{ s, \\gamma , G}$ , $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}& \\ge \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}\\Big (1-C ( s + m )\\sqrt{ \\frac{ \\rho ^{4} \\log p }{n} }\\Big )\\\\& \\ge \\frac{1}{2} \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}\\ge \\frac{ c_{ \\lambda } }{2} \\underline{ \\beta }^{2},$ where we have used that $ s = o( ( n / \\log p )^{ 1 / 3 } ) $ and $ m < \\tilde{m}_{ s, \\gamma , G} \\le m^{*}_{ s, \\gamma } $ .", "For the second term in Equation (REF ), we can write $( \\widehat{ \\Pi }_{m} - \\Pi _{m} ) f^{*}& =\\widehat{ \\Pi }_{m} ( I - \\Pi _{m} ) f^{*}=( X^{ ( \\widehat{J}_{m} ) } )^{ \\top }\\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}\\langle ( I - \\Pi _{m} ) f^{*}, g_{ \\widehat{J}_{m} }\\rangle _{n},$ i.e., the coefficients $\\beta ( ( \\widehat{ \\Pi }_{m} - \\Pi _{m} ) f^{*} )$ are given by $\\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}\\langle ( I - \\Pi _{m} ) f^{*}, g_{ \\widehat{J}_{m} } \\rangle _{n}$ .", "From Corollary REF (i), it then follows that $\\Vert \\beta ( ( \\widehat{ \\Pi }_{m} - \\Pi _{m} ) f^{*} ) \\Vert _{2}& \\le \\Vert \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1} \\Vert _{ \\text{op} }\\sqrt{m}\\Vert \\langle ( I - \\Pi _{m} ) f^{*}, g_{ \\widehat{J}_{m} } \\rangle _{n}\\Vert _{1}\\\\& \\le C\\Vert \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1} \\Vert _{ \\text{op} }\\Vert \\beta ^{*} \\Vert 
_{1}\\sqrt{m}\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}-\\langle g_{j}, g_{k} \\rangle _{ L^{2} }|.$ In combination with Lemma REF we obtain that with probability converging to one, $\\Vert ( \\widehat{ \\Pi }_{m} - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}& =\\tilde{\\beta }^{ \\top } \\widehat{ \\Gamma }_{ \\widehat{J}_{m} } \\tilde{\\beta }\\le C\\Vert \\beta ^{*} \\Vert _{1}^{2}\\frac{ \\rho ^{4} m \\log p }{n}\\qquad \\text{ for all } m \\le m^{*}_{ s, \\gamma }.$ For $ s $ -sparse $ \\beta ^{*} $ , this term converges to zero for $ n \\rightarrow \\infty $ .", "For $ \\gamma $ -sparse $ \\beta ^{*} $ , it is smaller than the rate $ \\mathcal {R}( s, \\gamma ) $ up to a constant independent of $ G $ .", "By increasing $ G > 0 $ again, Equations (REF ), (REF ), (REF ), and (REF ) yield $b_{m}^{2}& \\ge \\begin{dcases}\\frac{ c_{ \\lambda } }{4} \\underline{ \\beta }^{2},& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\frac{G}{4}\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) },& \\beta ^{*} \\ \\gamma \\text{-sparse}\\end{dcases}\\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\text{ for all } m < \\tilde{m}_{ s, \\gamma , G }$ with probability converging to one.", "Plugging this into Equation (REF ), we obtain that the left-hand side in condition (REF ) satisfies $b_{m}^{2} + 2 c_{m}& \\ge \\begin{dcases}\\frac{ c_{ \\lambda } }{8} \\underline{ \\beta }^{2},& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\frac{G}{8}\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) },& \\beta ^{*} \\ \\gamma \\text{-sparse}\\end{dcases}\\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\text{ for all } m < \\tilde{m}_{ s, \\gamma , G }$ on an event with probability converging to one for $ G > 0 $ sufficiently large.", "At the same time, however, by Lemma REF and our assumption on $ \\widehat{ \\sigma }^{2} $ , the right-hand side in condition (REF ) satisfies $\\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2}+C_{ \\tau } \\frac{ m \\log p }{n}+s_{m}& \\le C \\mathcal {R}( s, \\gamma )\\qquad \\text{ for all } m < \\tilde{m}_{ s, \\gamma , G }$ with probability converging to one for a constant $ C $ independent of $ G $ .", "From Equation (REF ) and (REF ), it finally follows that for large $ G > 0 $ , condition (REF ) can only be satisfied on a event with probability converging to zero.", "This finishes the proof.", "[Proof of Proposition REF (No stopping too late)]Lemma REF (i) yields that $b_{m}^{2} + 2 c_{m}\\le b_{m}^{2}+b_{m}\\sqrt{ \\frac{ 16 \\overline{ \\sigma }^{2} ( m + 1 ) \\log p }{n} }\\qquad \\text{ for all } m \\ge 0$ with probability converging to one.", "For $ s $ -sparse $ \\beta ^{*} $ , with probability converging to one, this is zero for $ m \\ge m^{*}_{ s, \\gamma } $ by Proposition REF .", "For $ \\gamma $ -sparse $ \\beta ^{*} $ , Lemma REF provides the estimate $& \\ \\ \\ \\ b_{m}^{2} + 2 c_{m}\\\\& \\le C \\Big (\\Big [\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big ]^{ 1 - \\frac{1}{ 2 \\gamma } }+\\Big [\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big ]^{ \\frac{1}{2} - \\frac{1}{ 4 \\gamma } }\\sqrt{ \\frac{ 16 \\overline{ \\sigma }^{2} ( m + 1 ) \\log p }{n} }\\Big )\\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\text{ for all } m \\ge m^{*}_{ s, \\gamma }$ with probability converging to one.", "For $ m = H m^{*}_{ s, \\gamma } $ with $ H > 0 $ large 
enough, this yields $b_{m}^{2} + 2 c_{m}& \\le C \\sqrt{H}\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - \\frac{1}{ 2 \\gamma } }.$ At the same time, under the assumption on $ \\widehat{ \\sigma }^{2} $ , the right-hand side of condition (REF ) satisfies $\\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2}+C_{ \\tau } \\frac{ m \\log p }{n}+s_{m}& \\ge c H \\mathcal {R}( s, \\gamma ).$ For $ H > 0 $ sufficiently large, condition (REF ) can therefore only be satisfied on an event with probability converging to zero.", "[Proof of Proposition REF (Fast noise estimation)]Theorem 2 in Sun and Zhang [21] states that on the event $\\Omega _{ \\text{Lasso} }=\\Big \\lbrace \\sup _{ j \\le p } |\\langle g_{j}, \\varepsilon \\rangle _{n} |\\le ( 1 - \\alpha ^{*} )\\frac{ \\xi - 1 }{ \\xi + 1 }\\Vert \\varepsilon \\Vert _{n} \\lambda _{0}\\Big \\rbrace ,$ the Scaled Lasso noise estimator $ \\widehat{ \\sigma } $ satisfies $\\max \\Big (1 - \\frac{ \\widehat{ \\sigma } }{ \\Vert \\varepsilon \\Vert _{n} },1 - \\frac{ \\Vert \\varepsilon \\Vert _{n} }{ \\widehat{ \\sigma } }\\Big )& =1 -\\frac{ \\min ( \\widehat{ \\sigma }, \\Vert \\varepsilon \\Vert _{n} ) }{ \\max ( \\widehat{ \\sigma }, \\Vert \\varepsilon \\Vert _{n} ) }\\le \\alpha ^{*}=\\frac{\\lambda _{0}\\mu ( \\Vert \\varepsilon \\Vert _{n} \\lambda _{0}, \\xi )}{ \\Vert \\varepsilon \\Vert _{n} }.$ This implies that on $ \\Omega _{ \\text{Lasso} } $ , $| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |& =( \\widehat{ \\sigma } + \\Vert \\varepsilon \\Vert _{n} )(\\max ( \\widehat{ \\sigma }, \\Vert \\varepsilon \\Vert _{n} )- \\min ( \\widehat{ \\sigma }, \\Vert \\varepsilon \\Vert _{n} ))\\\\& \\le \\Big (1 + \\frac{ \\max ( \\widehat{ \\sigma }, \\Vert \\varepsilon \\Vert _{n} ) }{ \\min ( \\widehat{ \\sigma }, \\Vert \\varepsilon \\Vert _{n} ) }\\Big )\\max ( \\widehat{ \\sigma }, \\Vert \\varepsilon \\Vert _{n} )\\lambda _{0}\\mu ( \\Vert \\varepsilon \\Vert _{n} \\lambda _{0}, \\xi )\\\\& \\le \\Big ( 1 + \\frac{1}{ 1 - \\alpha ^{*} } \\Big )\\max ( \\widehat{ \\sigma }, \\Vert \\varepsilon \\Vert _{n} )\\lambda _{0}\\mu ( \\Vert \\varepsilon \\Vert _{n} \\lambda _{0}, \\xi )\\\\& \\le \\Big (\\frac{1}{ 1 - \\alpha ^{*} }+ \\frac{1}{ ( 1 - \\alpha ^{*} )^{2} }\\Big )\\Vert \\varepsilon \\Vert _{n} \\lambda _{0}\\mu ( \\Vert \\varepsilon \\Vert _{n} \\lambda _{0}, \\xi )\\le \\frac{ 2 \\Vert \\varepsilon \\Vert _{n}^{2} \\alpha ^{*} }{ ( 1 - \\alpha ^{*} )^{2} },$ where, without loss of generality, we have used that $ \\alpha ^{*} < 1 $ , since for $ \\alpha ^{*} > 1 $ , the event $ \\Omega _{ \\text{Lasso} } $ is empty.", "It remains to be shown that Equation (REF ) provides a meaningful bound on an event with probability converging to one.", "Step 1: Bounding $ \\mu ( \\lambda , \\xi ) $ .", "Set $ \\nu = 1 / 2 $ .", "For $ s $ -sparse $ \\beta ^{*} $ , the choice $J = S = \\lbrace j \\le p: | \\beta ^{*}_{j} | \\ne 0 \\rbrace $ yields the immediate estimate $\\mu ( \\Vert \\varepsilon \\Vert _{n} \\lambda _{0}, \\xi )& \\le ( \\xi + 1 )\\kappa ^{ - 2 }( 2 \\xi + 1, S )\\Vert \\varepsilon \\Vert _{n} \\lambda _{0} s.$ For $ \\gamma $ -sparse $ \\beta ^{*} $ , the choice $J = J_{ \\lambda }= \\lbrace j: | \\beta ^{*}_{j} | \\ge \\lambda \\rbrace $ yields the estimate $\\mu ( \\lambda , \\xi )& \\le ( \\xi + 1 )\\max ( 2, \\kappa ^{ - 2 }( 2 \\xi + 1, J_{ \\lambda } ) )\\sum _{ j = 1 }^{p}\\min ( \\lambda , | \\beta ^{*}_{j} | ).$ Without loss of generality, we can assume 
that the $( \\beta ^{*}_{j} )_{ j \\le p }$ are decreasingly ordered.", "We derive that for any $ m \\in \\mathbb {N} $ , it holds that $\\sum _{ j > m } | \\beta ^{*}_{j} |^{2}\\le C m^{ 1 - 2 \\gamma }.$ By the $ \\gamma $ -sparsity of $ \\beta ^{*} $ , we have $\\sum _{ j > m } | \\beta ^{*}_{j} |^{2}& \\le | \\beta ^{*}_{m} | \\sum _{ j > m } | \\beta ^{*}_{j} |\\le | \\beta ^{*}_{m} | C_{ \\gamma }\\Big (\\sum _{ j > m } | \\beta ^{*}_{j} |^{2}\\Big )^{ \\frac{ \\gamma - 1 }{ 2 \\gamma - 1 } }.$ Rearranging yields $| \\beta ^{*}_{m} |^{2} \\ge C_{ \\gamma }^{ - 2 }(\\sum _{ j > m } | \\beta ^{*}_{j} |^{2})^{ 2 \\gamma / ( 2 \\gamma - 1 ) }$ , which implies $\\sum _{ j > m + 1 } | \\beta ^{*}_{j} |^{2}& =\\sum _{ j > m } | \\beta ^{*}_{j} |^{2}-| \\beta ^{*}_{m} |^{2}\\le \\sum _{ j > m } | \\beta ^{*}_{j} |^{2}\\Big (1-C_{ \\gamma }^{ - 2 }\\Big (\\sum _{ j > m } | \\beta ^{*}_{j} |^{2}\\Big )^{ \\frac{1}{ 2 \\gamma - 1 } }\\Big ).$ As in the proof of Proposition REF , the intermediate claim now follows from Lemma 1 in Gao et al.", "[10] by setting $a_{m}: = \\sum _{ j > m } | \\beta ^{*}_{j} |^{2}$ .", "From the above, we obtain that for any $ m \\in \\mathbb {N} $ , $\\sum _{ j = 1 }^{p}\\min ( \\lambda , | \\beta ^{*}_{j} | )& \\le m \\lambda +\\sum _{ j > m } | \\beta ^{*}_{j} |\\le m \\lambda +C_{ \\gamma }\\Big (\\sum _{ j > m } | \\beta ^{*}_{j} |^{2}\\Big )^{ \\frac{ \\gamma - 1 }{ 2 \\gamma - 1 } }\\\\& \\le m \\lambda + C m^{ 1 - \\gamma }.$ For $\\lambda = \\Vert \\varepsilon \\Vert _{n} \\lambda _{0}$ and a choice $ m $ of order $( n / ( \\Vert \\varepsilon \\Vert _{n}^{2} \\log p ) )^{ 1 / ( 2 \\gamma ) }$ , Equations (REF ) and (REF ) translate to the estimate $\\mu ( \\lambda , \\xi )& \\le C ( \\xi + 1 )\\max ( 2, \\kappa ^{ - 2 }( 2 \\xi + 1, J_{ \\lambda } ) )\\Big (\\frac{ \\Vert \\varepsilon \\Vert _{n}^{2} \\log p }{n}\\Big )^{ \\frac{1}{2} - \\frac{1}{ 2 \\gamma } }.$ Step 2: Positive compatibility factor.", "For the bounds in Equations (REF ) and (REF ) to be meaningful, we have to guarantee that the compatibility factor is strictly positive.", "For rows $( X_{i} )_{ i \\le n } \\sim N( 0, \\Gamma )$ i.i.d.", "of the design matrix $ \\mathbf {X} $ , Theorem 7.16 in Wainwright [26] states that $\\Vert \\mathbf {X} \\beta \\Vert _{n}^{2}& \\ge \\frac{1}{8}\\Vert \\sqrt{ \\Gamma } \\beta \\Vert _{2}^{2}-50 \\max _{ j \\le p } \\Gamma _{ j j }\\frac{ \\log p }{n}\\Vert \\beta \\Vert _{1}^{2}\\qquad \\text{ for all } \\beta \\in \\mathbb {R}^{p}$ on an event $ \\Omega _{ \\text{Comp} } $ with probability at least $1 - e^{ - n / 32 } / ( 1 - e^{ - n / 32 } )$ .", "Since we assume unit variance design, the bound in Equation (REF ) implies $\\Vert \\mathbf {X} \\beta \\Vert _{n}^{2}& \\ge \\frac{\\lambda _{ \\min }( \\Gamma ) }{8} \\Vert \\beta \\Vert _{2}^{2}-\\frac{ 50 \\log p }{n} \\Vert \\beta \\Vert _{1}^{2}\\ge \\frac{ c_{ \\lambda } }{ 16 } \\Vert \\beta \\Vert _{2}^{2}$ for all $ \\beta \\in \\mathbb {R}^{p} $ such that $\\frac{ 50 \\log p }{n} \\Vert \\beta \\Vert _{1}^{2}\\le \\frac{ c_{ \\lambda } }{ 16 } \\Vert \\beta \\Vert _{2}^{2}.$ If $\\Vert \\beta _{ J^{c} } \\Vert _{1} \\le \\xi \\Vert \\beta _{J} \\Vert _{1}$ for some $ \\xi > 1 $ and $ J \\subset \\lbrace 1, \\dots , p \\rbrace $ , then $\\Vert \\beta \\Vert _{1}^{2}& =( \\Vert \\beta _{J} \\Vert _{1} + \\Vert \\beta _{ J^{c} } \\Vert _{1} )^{2}\\le ( 1 + \\xi )^{2} \\Vert \\beta _{J} \\Vert _{1}^{2}\\le ( 1 + \\xi )^{2} | J | \\Vert \\beta \\Vert _{2}^{2}.$ Plugging this estimate into the left-hand 
side of condition (REF ) yields that on $ \\Omega _{ \\text{Comp} } $ , $ \\mathbf {X} $ satisfies the restricted eigenvalue condition $\\Vert \\mathbf {X} \\beta \\Vert _{n}^{2}& \\ge \\frac{ c_{ \\lambda } }{ 16 }\\Vert \\beta \\Vert _{2}^{2},\\qquad \\text{ for all } \\beta \\in \\mathbb {R}^{p}:\\Vert \\beta _{ J^{c} } \\Vert _{1} \\le \\xi \\Vert \\beta _{J} \\Vert _{1}$ and all sets $ J \\subset \\lbrace 1, \\dots , p \\rbrace $ with $| J | \\le c_{ \\lambda } / 800 ( 1 + \\xi )^{ - 2 } n / \\log p$ .", "However, due to the estimate $\\Vert \\beta _{J} \\Vert _{1}^{2}& \\le | J | \\Vert \\beta \\Vert _{2}^{2}\\le \\frac{ 16 | J | }{ c_{ \\lambda } }\\Vert \\mathbf {X} \\beta \\Vert _{n}^{2}\\qquad \\text{ for all } \\beta \\in \\mathbb {R}^{p}:\\Vert \\beta _{ J^{c} } \\Vert _{1} \\le \\xi \\Vert \\beta _{J} \\Vert _{1},$ this implies that the compatibility factor $ \\kappa ^{2}( \\xi , J ) $ is larger than $ c_{ \\lambda } / 16 $ .", "Under $ s $ -sparsity, the set $S = \\lbrace j \\le p: | \\beta ^{*}_{j} | \\ne 0 \\rbrace $ immediately satisfies the assumption on $ J $ above for $ n $ large enough.", "Under $ \\gamma $ -sparsity, let $ \\Omega _{ \\text{Conv} } $ be the event of probability one on which $ \\Vert \\varepsilon \\Vert _{n}^{2} $ converges to the (unconditional) variance $\\text{Var}( \\varepsilon _{1} ) = \\underline{ \\sigma }^{2} > 0$ .", "Since $ \\Vert \\beta ^{*} \\Vert _{1} \\le C $ with some constant $ C > 0 $ , for $ \\lambda = \\Vert \\varepsilon \\Vert _{n} \\lambda _{0} $ , $| J_{ \\lambda } |& \\le \\frac{C}{ C_{ \\lambda _{0} } }\\sqrt{ \\frac{n}{ \\Vert \\varepsilon \\Vert _{n}^{2} \\log p } }\\le \\frac{ c_{ \\lambda } }{ 800 ( 1 + \\xi )^{2} }\\frac{n}{ \\log p }$ on $ \\Omega _{ \\text{Conv} } $ for $ n $ sufficiently large.", "We conclude that on $\\Omega _{ \\text{Conv} } \\cap \\Omega _{ \\text{Comp} }$ , for any $ \\xi > 1 $ , the compatibility factor $ \\kappa ^{2}( 2 \\xi + 1, J_{ \\lambda } ) $ is larger than $ c_{ \\lambda } / 16 $ for $ n $ large enough.", "Step 3: Bound on the combined event.", "Finally, for $\\lambda _{0} = C_{ \\lambda _{0} } ( \\xi + 1 ) / ( \\xi - 1 )\\sqrt{ \\log (p) / n },$ we have from Step 2 and Lemma REF (ii) that $\\mathbb {P}( \\Omega _{ \\text{Lasso} }^{c} )& =\\mathbb {P} \\Big (\\Big \\lbrace \\sup _{ j \\le p } | \\langle g_{j}, \\varepsilon \\rangle _{n} |>( 1 - \\alpha ^{*} )\\frac{ \\xi - 1 }{ \\xi + 1 }\\Vert \\varepsilon \\Vert _{n} \\lambda _{0}\\Big \\rbrace \\cap \\Omega _{ \\text{Conv} }\\cap \\Omega _{ \\text{Comp} }\\Big )+o(1)\\\\& \\le \\mathbb {P} \\Big (\\Big \\lbrace \\sup _{ j \\le p } | \\langle g_{j}, \\varepsilon \\rangle _{n} |>\\frac{ \\xi - 1 }{ \\xi + 1 }\\frac{ \\underline{ \\sigma } }{2} \\lambda _{0}\\Big \\rbrace \\cap \\Omega _{ \\text{Conv} }\\cap \\Omega _{ \\text{comp} }\\Big )+o(1)\\\\& \\le \\mathbb {P} \\Big (\\Big \\lbrace \\sup _{ j \\le p } | \\langle g_{j}, \\varepsilon \\rangle _{n} |>C_{ \\varepsilon }\\sqrt{ \\frac{ \\overline{ \\sigma }^{2} \\log p }{n} }\\Big \\rbrace \\Big )+o(1)\\xrightarrow[ ]{ n \\rightarrow \\infty } 0.$ We conclude that $\\Omega _{ \\text{Lasso} }\\cap \\Omega _{ \\text{Conv} }\\cap \\Omega _{ \\text{Comp} }$ is an event with probability converging to one on which by Equations (REF ), (REF ), (REF ) and Step 2, $| \\widehat{ \\sigma }^{2} - \\Vert \\varepsilon \\Vert _{n}^{2} |& \\le \\frac{ 2 \\Vert \\varepsilon \\Vert _{n}^{2} \\alpha ^{*} }{ ( 1 - \\alpha ^{*} )^{2} }\\le C \\Vert \\varepsilon \\Vert _{n}\\lambda _{0}\\mu ( \\Vert 
\\varepsilon \\Vert _{n} \\lambda _{0}, \\xi )\\\\& \\le C\\begin{dcases}\\Big (\\frac{ \\overline{ \\sigma }^{2} \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) },& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\frac{ \\overline{ \\sigma }^{2} s \\log p }{n},& \\beta ^{*} \\ \\gamma \\text{-sparse}\\end{dcases}$ for $ n $ sufficiently large.", "This finishes the proof.", "[Proof of Theorem REF (Two-step procedure)] The proof follows along the same arguments that we have applied in the derivation of Proposition REF and Proposition REF .", "From Proposition REF , we already know that for some $ G > 0 $ , the sequential stopping time satisfies $\\tau \\ge \\tilde{m}_{ s, \\gamma , G}$ .", "For $ G^{\\prime } > G $ sufficiently large, we now show that $\\tau _{ \\text{two-step} } \\ge \\tilde{m}_{ s, \\gamma , G^{\\prime } }$ with probability converging to one.", "Assuming that $ \\tau _{ \\text{two-step} } < \\tilde{m}_{ s, \\gamma , G^{\\prime } } $ , we obtain that $\\exists m < \\tilde{m}_{ s, \\gamma , G^{\\prime } }:r_{m}^{2} + \\frac{ C_{ \\text{AIC} } m \\log p }{n}\\le r_{ \\tau }^{2} + \\frac{ C_{ \\text{AIC} } \\tau \\log p }{n},$ which is equivalent to $\\exists m < \\tilde{m}_{ s, \\gamma , G^{\\prime } }:b_{m}^{2} + 2 c_{m} - s_{m}+\\frac{ C_{ \\text{AIC} } m \\log p }{n}\\le b_{ \\tau }^{2} + 2 c_{ \\tau } - s_{ \\tau }+\\frac{ C_{ \\text{AIC} } \\tau \\log p }{n}.$ Combining the reasoning from the proof of Proposition REF with the bound from Lemma REF , the left-hand side of condition (REF ) is larger than $\\begin{dcases}\\frac{ c_{ \\lambda } }{ 16 } \\underline{ \\beta }^{2},& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\frac{ G^{\\prime } }{16}\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) },& \\beta ^{*} \\ \\gamma \\text{-sparse}\\end{dcases}$ with probability converging to one and $ G^{\\prime } $ sufficiently large.", "For the right-hand side of condition (REF ), we can assume that $ \\tau < m^{*}_{ s, \\gamma } $ from Equation (REF ).", "Otherwise, we may replace $ \\tau $ with $ m^{*}_{ s, \\gamma } $ .", "Note that in the setting of Remark REF , we can replace $ \\tau $ with $ m^{*}_{ s, \\gamma } $ from the start, which yields the result stated there.", "Using that $\\tilde{m}_{ s, \\gamma , G }\\le \\tau <\\tilde{m}_{ s, \\gamma , G^{\\prime } }\\le m_{ s, \\gamma }^{*}$ with probability converging to one, together with Lemmas REF and REF , the right-hand side converges to zero under $ s $ -sparsity with probability converging to one and is smaller than $ C \\mathcal {R}( s, \\gamma ) $ with probability converging to one and $ C $ independent of $ G^{\\prime } $ under $ \\gamma $ -sparsity.", "Therefore, $ \\tau _{ \\text{two-step} } < \\tilde{m}_{ s, \\gamma , G^{\\prime } } $ can only be true on an event with probability converging to zero.", "Similar to Proposition REF , we can also show that $\\tau _{ \\text{two-step} } \\le H m^{*}_{ s, \\gamma }$ with probability converging to one for $ H > 0 $ large enough.", "If $ \\tau _{ \\text{two-step} } > H m^{*}_{ s, \\gamma } $ , analogously to condition (REF ), we have $&\\exists m > H m^{*}_{ s, \\gamma }:\\\\&b_{m}^{2} + 2 c_{m} - s_{m}+\\frac{ C_{ \\text{AIC} } m \\log p }{n}\\le b_{ m^{*}_{ s, \\gamma } }^{2} + 2 c_{ m^{*}_{ s, \\gamma } }-s_{ m^{*}_{ s, \\gamma } }+\\frac{ C_{ \\text{AIC} } m^{*}_{ s, \\gamma } \\log p }{n}.$ Using the bounds from Lemmas REF , REF and REF , on an event with probability converging to one, the left-hand side of condition (REF ) is larger 
than $H / 2 \\tilde{ \\mathcal {R} }( s, \\gamma )$ for $ H $ large enough with $\\tilde{ \\mathcal {R} }( s, \\gamma ):=\\begin{dcases}\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) s \\log p }{n},& \\beta ^{*} \\ s \\text{-sparse}, \\\\\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - \\frac{1}{ 2 \\gamma } },& \\beta ^{*} \\ \\gamma \\text{-sparse}, \\\\\\end{dcases}$ whereas the right-hand side is smaller than $C \\tilde{ \\mathcal {R} }( s, \\gamma )$ with $ C $ independent of $ H $ .", "Therefore, $\\tau _{ \\text{two-step} } > H m^{*}_{ s, \\gamma }$ can only be satisfied on an event with probability converging to zero.", "This finishes the proof." ], [ "Proofs for auxiliary results", "[Proof of Lemma REF (Bounds for the cross term)]For (i), without loss of generality, $ b_{m}^{2} > 0 $ for all $ m \\ge 0 $ .", "We proceed via a supremum-out argument: We have $| c_{m} |& =| \\langle ( I - \\widehat{ \\Pi }_{m} ) f^{*}, \\varepsilon \\rangle _{n} |\\le b_{m} \\sup _{ h \\in \\mathcal {H}_{m} }| \\langle h, \\varepsilon \\rangle _{n} |$ with $\\mathcal {H}_{m}:=\\lbrace \\Pi f^{*} / ( \\Vert \\Pi f^{*} \\Vert _{n} ):\\Pi \\text{ is a projection orthogonal to } m \\text{ of the }g_{j}, j \\le p\\rbrace $ .", "Since $ | \\mathcal {H}_{m} | \\le p^{m} $ , we obtain $& \\ \\ \\ \\ \\mathbb {P} \\Big \\lbrace \\sup _{ m \\ge 0 }\\frac{ | c_{m} | }{ \\sqrt{ ( m + 1 ) b_{m}^{2} } }\\ge \\sqrt{ \\frac{ 4 \\overline{ \\sigma }^{2} \\log p }{n} }\\Big \\rbrace \\le \\sum _{ m = 0 }^{ \\infty }\\sum _{ h \\in \\mathcal {H}_{m} }\\mathbb {P} \\Big \\lbrace | \\langle h, \\varepsilon \\rangle _{n} |\\ge \\sqrt{ \\frac{ 4 \\overline{ \\sigma }^{2} ( m + 1 ) \\log p }{n} }\\Big \\rbrace \\\\& \\le 2 \\sum _{ m = 0 }^{ \\infty }p^{m}\\exp \\Big (\\frac{ - 2 n \\overline{ \\sigma }^{2} ( m + 1 ) \\log p }{ n \\overline{ \\sigma }^{2} }\\Big )\\le 2 \\sum _{ m \\ge 0 }p^{ - ( m + 1 ) }=\\frac{2}{ 1 - p^{-1} } - 2\\xrightarrow[ ]{ n \\rightarrow \\infty } 0,$ using a union bound and [assSubGaussianErrors] (SubGE).", "For (ii), we argue analogously.", "From the definition of Algorithm REF and the Gram-Schmidt orthogonalization, we have $r_{ m - 1 }^{2} - r_{m}^{2}& =\\Big \\langle ( I - \\widehat{ \\Pi }_{ m - 1 } ) Y,\\frac{( I - \\widehat{ \\Pi }_{ m - 1 } )g_{ \\widehat{j}_{m} }}{\\Vert ( I - \\widehat{ \\Pi }_{ m - 1 } )g_{ \\widehat{j}_{m} }\\Vert _{n}}\\Big \\rangle _{n}^{2}\\qquad \\text{ for all } m \\ge 1.$ This yields $\\Delta ( r_{m}^{2} ) \\le 2 b_{ m - 1 } + 2 Z_{ m - 1 }^{2}$ , where $Z_{m}:& =\\Big | \\Big \\langle \\varepsilon ,\\frac{( I - \\widehat{ \\Pi }_{m} ) g_{ \\widehat{j}_{ m + 1 } }}{\\Vert ( I - \\widehat{ \\Pi }_{m} ) g_{ \\widehat{j}_{ m + 1 } }\\Vert _{n}}\\Big \\rangle _{n} \\Big |\\le \\sup _{ h \\in \\tilde{ \\mathcal {H} }_{m} }| \\langle \\varepsilon , h \\rangle _{n} |,\\qquad m \\ge 0$ with $\\tilde{ \\mathcal {H} }_{m}: = \\lbrace \\Pi g_{k} / ( \\Vert \\Pi g_{k} \\Vert _{n} ):k \\le p,\\Pi \\text{ is a projection orthogonal to } m \\text{ of the }g_{j}, j \\le p\\rbrace $ .", "Since $ | \\tilde{ \\mathcal {H} }_{m} | \\le p^{ m + 1 } $ , $& \\ \\ \\ \\ \\mathbb {P} \\Big \\lbrace \\sup _{ m \\ge 1 }\\frac{ Z_{m} }{ \\sqrt{ m + 1 } }\\ge \\sqrt{ \\frac{ 4 \\overline{ \\sigma }^{2} \\log p }{n} }\\Big \\rbrace \\le \\sum _{ m = 0 }^{ \\infty }\\sum _{ h \\in \\mathcal {H}_{m} }\\mathbb {P} \\Big \\lbrace | \\langle \\varepsilon , h \\rangle _{n} |\\ge \\sqrt{ \\frac{ 4 \\overline{ \\sigma }^{2} ( m + 1 ) \\log p }{n} }\\Big 
\\rbrace \\\\& \\le 2\\sum _{ m = 0 }^{ \\infty }p^{ m + 1 }\\exp \\Big (\\frac{ - 2 n \\overline{ \\sigma }^{2} ( m + 1 ) \\log p }{ n \\overline{ \\sigma }^{2} }\\Big )\\le 2\\sum _{ m = 0 }^{ \\infty }p^{ - ( m + 1 ) }=\\frac{2}{ 1 - p^{-1} } - 2\\xrightarrow[ ]{ n \\rightarrow \\infty } 0$ as in (i).", "This finishes the proof.", "[Proof of Lemma REF (Uniform bounds in high probability)] From assumption [assSubGaussianDesign] (SubGD), it is immediate that the $X_{1}^{ (j) }, j \\le p$ are subgaussian with parameter $ \\rho ^{2} $ .", "Therefore, $\\langle g_{j}, g_{k} \\rangle _{n} - \\langle g_{j}, g_{k} \\rangle _{ L^{2} }& =\\frac{1}{n} \\sum _{ i = 1 }^{n}\\big (X_{i}^{ (j) } X_{i}^{ (k) }-\\mathbb {E} ( X_{i}^{ (j) } X_{i}^{ (k) } )\\big )\\qquad j, k \\in \\mathbb {N}$ is an average of centered subexponential variables with parameters $ ( C \\rho ^{4}, C \\rho ^{2} ) $ , i.e., for $Z:=X_{i}^{ (j) } X_{i}^{ (k) }- \\mathbb {E} X_{i}^{ (j) } X_{i}^{ (k) }$ , $\\mathbb {E} e^{ u Z }& \\le e^{ u^{2} C \\rho ^{4} / 2 }\\qquad \\text{ for all } | u | \\le \\frac{1}{ C \\rho ^{2} }.$ From Bernstein's inequality, see Theorem 2.8.1 in Vershynin [25], we obtain that for $ t > 0 $ , $& \\ \\ \\ \\ \\mathbb {P} \\Big \\lbrace \\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^{2} }|\\ge t\\Big \\rbrace \\le \\sum _{ j, k \\le p }\\mathbb {P} \\Big \\lbrace |\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^{2} }|\\ge t\\Big \\rbrace \\\\ & \\le 2 p^{2}\\exp \\Big (- c n\\min \\Big [\\frac{ t^{2} }{ \\rho ^{4} },\\frac{ t }{ \\rho ^{2} }\\Big ]\\Big )=2 \\exp \\Big (2 \\log p-c n \\min \\Big [\\frac{ t^{2} }{ \\rho ^{4} },\\frac{ t }{ \\rho ^{2} }\\Big ]\\Big ).$ Setting $t = C_{g} \\sqrt{ \\rho ^{4} \\log (p) / n }$ with $ C_{g} > 0 $ sufficiently large yields the statement in (i), since we have assumed that $ \\log p = o(n) $ .", "By (i), we have that via a union bound, $& \\ \\ \\ \\ \\mathbb {P} \\Big \\lbrace \\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} |\\ge t\\Big \\rbrace \\\\& \\le \\mathbb {P} \\Big \\lbrace \\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} |\\ge t,\\sup _{ j \\le p } \\Vert g_{j} \\Vert _{n} < \\frac{3}{2}\\Big \\rbrace +\\mathbb {P} \\Big \\lbrace \\sup _{ j \\le p } \\Vert g_{j} \\Vert _{n} \\ge \\frac{3}{2}\\Big \\rbrace \\\\& \\le \\sum _{ j = 1 }^{p}\\mathbb {P} \\Big \\lbrace | \\langle \\varepsilon , g_{j} \\rangle _{n} |\\ge t,\\sup _{ j \\le p } \\Vert g_{j} \\Vert _{n} < \\frac{3}{2}\\Big \\rbrace +o(1)\\\\& \\le 2 p \\exp \\Big ( \\frac{ - c n t^{2} }{ \\overline{ \\sigma }^{2} } \\Big )+o(1)=2\\exp \\Big (\\log p - \\frac{ c n t^{2} }{ \\overline{ \\sigma }^{2} }\\Big )+o(1),$ where the last inequality follows from [assSubGaussianErrors] (SubGE) by conditioning on the design, applying Hoeffding's inequality, see Theorem 2.6.2 in Vershynin [25], and estimating $\\Vert g_{j} \\Vert _{n} < 3 / 2$ in the denominator of the exponential.", "By choosing $t = C_{ \\varepsilon } \\sqrt{ \\overline{ \\sigma }^{2} \\log (p) / n }$ with $ C_{ \\varepsilon } > 0 $ large enough, we then obtain the statement in (ii).", "For $ t > 0 $ , a union bound yields $\\mathbb {P} \\Big \\lbrace \\sup _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\frac{\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op}}}{\\rho ^{2}}\\ge t\\Big \\rbrace & \\le \\sum _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\mathbb {P} \\Big \\lbrace \\frac{\\Vert \\widehat{ \\Gamma }_{J} 
- \\Gamma _{J} \\Vert _{ \\text{op}}}{\\rho ^{2}}\\ge t\\Big \\rbrace .$ For any fixed $ J $ with $| J | = m \\le c_{ \\text{iter} } n / \\log p,$ we can choose a $ 1 / 4 $ -net $ \\mathcal {N} $ of the unit ball in $\\mathbb {R}^{m} $ with $ | \\mathcal {N} | \\le 9^{m} $ , see Corollary 4.2.13 in Vershynin [25].", "By an approximation argument, $\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op} }& \\le 2 \\max _{ v \\in \\mathcal {N} }|\\langle ( \\widehat{ \\Gamma }_{J} - \\Gamma _{J} ) v, v\\rangle |\\\\& =2 \\max _{ v \\in \\mathcal {N} }|\\frac{1}{n}\\sum _{ i = 1 }^{n}\\langle X_{i}^{ (J) }, v \\rangle ^{2}-\\mathbb {E} \\langle X_{i}^{ (J) }, v \\rangle ^{2}|,$ with $X_{i}^{ (J) } = ( X_{i}^{ (j) } )_{ j \\in J }\\in \\mathbb {R}^{ | J | }.$ As in (i), the $\\langle X_{i}^{ (J) }, v \\rangle ^{2}-\\mathbb {E} \\langle X_{i}^{ (J) }, v \\rangle ^{2}$ , $ i = 1, \\dots , n $ , are independent subexponential random variables with parameters $ ( C \\rho ^{4}, C \\rho ^{2} ) $ , i.e., by a union bound and Bernstein's inequality, $\\mathbb {P} \\Big \\lbrace \\frac{\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op}}}{\\rho ^{2}}\\ge t\\Big \\rbrace & \\le \\sum _{ v \\in \\mathcal {N} }\\mathbb {P} \\Big \\lbrace |\\frac{1}{n}\\sum _{ i = 1 }^{n}\\langle X_{i}^{ (J) }, v \\rangle -\\mathbb {E} \\langle X_{i}^{ (J) }, v \\rangle |\\ge \\frac{ t^{2} \\rho ^{2} }{2}\\Big \\rbrace \\\\& \\le 2 \\cdot 9^{m}\\exp \\big ( - c n \\min ( t^{2}, t ) \\big ).$ Together, this yields $\\mathbb {P} \\Big \\lbrace \\sup _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\frac{\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op}}}{\\rho ^{2}}\\ge t\\Big \\rbrace & \\le \\sum _{ m = 1 }^{ \\lfloor c_{ \\text{iter} } n / \\log p \\rfloor }\\sum _{ J: | J | = m }\\mathbb {P} \\Big \\lbrace \\frac{\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op}}}{\\rho ^{2}}\\ge t\\Big \\rbrace \\\\& \\le \\sum _{ m = 1 }^{ \\lfloor c_{ \\text{iter} } n / \\log p \\rfloor }\\binom{p}{m}2 \\cdot 9^{m}\\exp \\big ( - c n \\min ( t^{2}, t ) \\big )\\\\& \\le \\frac{ 2 c_{ \\text{iter} } n }{ \\log p }\\exp \\Big (\\frac{ c_{ \\text{iter} } n }{ \\log p }\\log ( 9 p )-c n \\min ( t^{2}, t )\\Big ).$ Setting $t = c_{ \\text{iter} } C_{ \\Gamma }$ with $ C_{ \\Gamma } > 0 $ large enough yields the result.", "Set $c_{ \\text{iter} } < c_{ \\lambda } / ( C_{ \\Gamma } \\rho ^{2} )$ , with $ c_{ \\lambda } $ from Assumption [assCovB] (CovB), and consider $Q_{n}: = \\big \\lbrace \\forall | J | \\le c_{ \\text{iter} } n / \\log p:\\widehat{ \\Gamma }_{J}^{-1} \\text{ exists}\\big \\rbrace .$ For $ n $ large enough, we have $\\mathbb {P}( Q_{n} )& =\\mathbb {P} \\Big \\lbrace \\inf _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\lambda _{ \\min }( \\widehat{ \\Gamma }_{J} )> 0\\Big \\rbrace \\\\& \\ge \\mathbb {P} \\Big \\lbrace \\inf _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\Big (\\lambda _{ \\min }( \\Gamma _{J} )- \\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op} }\\Big )> 0\\Big \\rbrace \\\\& \\ge \\mathbb {P} \\Big \\lbrace \\inf _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\Big (c_{ \\text{iter} } C_{ \\Gamma } \\rho ^{2}- \\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op} }\\Big )> 0\\Big \\rbrace \\xrightarrow[ ]{ n \\rightarrow \\infty } 1,$ where we have used (iii) and $\\lambda _{ \\min }( \\widehat{ \\Gamma }_{J} )& \\ge \\lambda _{ \\min }( \\Gamma _{J} )-\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ 
\\text{op} }$ by Weyl's inequality.", "Now, let $F_{n}:=\\lbrace \\sup _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op} }\\le c_{ \\text{iter} } C_{ \\Gamma } \\rho ^{2}\\rbrace $ be the event from (iii).", "For $ n $ large enough and $C_{ \\Gamma ^{-1} } > 1 / (c_{ \\lambda }- c_{ \\text{iter} } C_{ \\Gamma } \\rho ^{2}),$ we then have $& \\ \\ \\ \\ \\mathbb {P} \\Big (\\Big \\lbrace \\sup _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert \\le C_{ \\Gamma ^{-1} }\\Big \\rbrace \\cap F_{n} \\cap Q_{n}\\Big )\\\\& \\ge \\mathbb {P} \\Big (\\Big \\lbrace \\Big (1-\\frac{ c_{ \\text{iter} } C_{ \\Gamma } \\rho ^{2} }{ c_{ \\lambda } }\\Big )\\sup _{ | J | \\le c_{ \\text{iter} } n / \\log p }\\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} }\\le \\frac{1}{ c_{ \\lambda } }\\Big \\rbrace \\cap F_{n} \\cap Q_{n}\\Big )\\\\& \\ge \\mathbb {P} \\Big (\\Big \\lbrace \\forall | J | \\le c_{ \\text{iter} } n / \\log p:\\big (1-\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op} }\\Vert \\Gamma _{J}^{-1} \\Vert _{ \\text{op} }\\big )\\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} }\\le \\Vert \\Gamma _{J}^{-1} \\Vert _{ \\text{op} }\\Big \\rbrace \\cap F_{n} \\cap Q_{n}\\Big )\\\\& \\ge \\mathbb {P}( F_{n} \\cap Q_{n} )\\xrightarrow[ ]{ n \\rightarrow \\infty } 1,$ where we have used Banach's Lemma for the inverse in the last inequality, which yields that for fixed $ J $ , $\\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} }& =\\Vert \\widehat{ \\Gamma }_{J}^{-1}-\\Gamma _{J}^{-1} + \\Gamma _{J}^{-1}\\Vert _{ \\text{op} }\\le \\frac{\\Vert \\Gamma _{J}^{-1} \\Vert _{ \\text{op} }}{1 - \\Vert \\Gamma _{J}^{-1}( \\widehat{ \\Gamma }_{J} - \\Gamma _{J} )\\Vert _{ \\text{op} }}$ as long as $\\Vert \\Gamma _{J}^{-1} ( \\widehat{ \\Gamma }_{J} - \\Gamma _{J} )\\Vert _{ \\text{op} }<1.$ Otherwise, the inequality $\\big (1-\\Vert \\widehat{ \\Gamma }_{J} - \\Gamma _{J} \\Vert _{ \\text{op} }\\Vert \\Gamma _{J}^{-1} \\Vert _{ \\text{op} }\\big )\\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} }\\le \\Vert \\Gamma _{J}^{-1} \\Vert _{ \\text{op} }$ is trivially true, since the left-hand side is negative.", "Corollary 7.1 (Reappearing terms) Under Assumptions [assSubGaussianErrors] (SubGE), [assSubGaussianDesign] (SubGD) and [assCovB] (CovB), the following statements hold: For any $ J \\subset \\lbrace 1, \\dots , p \\rbrace $ , $ j^{\\prime } \\in J $ , $ k \\notin J $ , we have $| \\langle ( I - \\Pi _{J} ) g_{k}, g_{ j^{\\prime } } \\rangle _{n} |& \\le ( 1 + C_{ \\text{Cov} } ) \\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^{2} }|.$ For $ k \\in J $ , the left-hand side vanishes.", "With probability converging to one, we have $\\sup _{ | J | \\le M_{n}, k \\notin J }\\langle \\varepsilon , ( I - \\widehat{ \\Pi }_{J} ) g_{k} \\rangle _{n}\\le C\\sqrt{ \\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n} }.$ [Proof] Note that for $j^{\\prime } \\in J, k \\notin J,$ by the characterization of the projections in Equation (REF ) and Assumption [assCovB] (CovB), we have that $& \\ \\ \\ \\ | \\langle g_{k} - \\Pi _{J} g_{k}, g_{ j^{\\prime } } \\rangle _{n} |=\\Big |\\Big \\langle g_{k}- \\sum _{ j \\in J }\\big (\\Gamma _{J}^{-1}\\langle g_{k}, g_{J} \\rangle _{ L^{2} }\\big )_{j}g_{j},g_{ j^{\\prime } }\\Big \\rangle _{n}\\Big |\\\\& =\\Big |\\Big \\langle g_{k}- \\sum _{ j \\in J }\\big (\\Gamma 
_{J}^{-1}\\langle g_{k}, g_{J} \\rangle _{ L^{2} }\\big )_{j}g_{j},g_{ j^{\\prime } }\\Big \\rangle _{n}\\\\& -\\underbrace{\\Big (\\langle g_{k}, g_{ j^{\\prime } } \\rangle _{ L^{2} }- \\sum _{ j \\in J }\\big (\\Gamma _{J}^{-1}\\langle g_{k}, g_{J} \\rangle _{ L^{2} }\\big )_{j}\\langle g_{j}, g_{ j^{\\prime } } \\rangle _{ L^{2} }\\Big )}_{= \\langle ( I - \\Pi _{J} ) g_{k}, g_{ j^{\\prime } }\\rangle _{ L^{2} }= 0}\\Big |\\\\& \\le \\Vert (1,\\Gamma _{J}^{-1}\\langle g_{k}, g_{J} \\rangle _{ L^{2} })^{ \\top }\\Vert _{1}\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^{2} }|\\\\& \\le ( 1 + C_{ \\text{Cov} } )\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^{2} }|,$ where the second equality follows by the properties of the projection.", "For $ k \\notin J $ , we have $|\\langle \\varepsilon , ( I - \\widehat{ \\Pi }_{J} ) g_{k}\\rangle _{n}|& \\le | \\langle \\varepsilon , g_{k} \\rangle _{n} |+| \\langle \\varepsilon , ( \\widehat{ \\Pi }_{J} - \\Pi _{J} ) g_{k}\\rangle _{n} |+| \\langle \\varepsilon , \\Pi _{J} g_{k} \\rangle _{n} |\\\\& =| \\langle \\varepsilon , g_{k} \\rangle _{n} |+|\\langle \\varepsilon , \\widehat{ \\Pi }_{J} ( I - \\Pi _{J} ) g_{k}\\rangle _{n}|+| \\langle \\varepsilon , \\Pi _{J} g_{k} \\rangle _{n} |.$ The supremum over the first term in Equation (REF ) can be treated immediately by Lemma REF (ii).", "The same is true for the supremum over the last term, since $\\sup _{ | J | \\le M_{n}, k \\notin J }| \\langle \\varepsilon , \\Pi _{J} g_{k} \\rangle _{n} |& =\\sup _{ | J | \\le M_{n}, k \\notin J }|\\langle \\varepsilon ,g_{J}^{ \\top }\\Gamma _{J}^{-1}\\langle g_{k} , g_{J} \\rangle _{ L^{2} }\\rangle _{n}|\\\\& \\le \\sup _{ | J | \\le M_{n}, k \\notin J }\\Vert \\Gamma _{J}^{-1} \\langle g_{k}, g_{J} \\rangle \\Vert _{1}\\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} |\\\\& \\le C_{ \\text{Cov} } \\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} |$ by the characterization of $ \\Pi _{J} $ in Equation (REF ) and Assumption [assCovB] CovB.", "Finally, the supremum over the middle term in Equation (REF ) can be written as $& \\ \\ \\ \\ \\sup _{ | J | \\le M_{n}, k \\notin J }\\langle \\varepsilon , g_{J} \\rangle _{n}^{ \\top }\\widehat{ \\Gamma }_{J}^{-1}\\langle ( I - \\Pi ) g_{k}, g_{J} \\rangle _{n}\\\\& \\le \\sup _{ | J | \\le M_{n} }\\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} }\\sup _{ | J | \\le M_{n}, k \\notin J }\\Vert \\langle ( I - \\Pi _{J} ) g_{k}, g_{J} \\rangle _{n} \\Vert _{2}\\sup _{ j, k \\le p }\\Vert \\langle \\varepsilon , g_{J} \\rangle _{n} \\Vert _{2}\\\\& \\le \\sup _{ | J | \\le M_{n} }\\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} }M_{n} ( 1 + C_{ \\text{Cov} } )\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^{2} }|\\sup _{ j \\le p }| \\langle \\varepsilon , g_{j} \\rangle _{n} |,$ where we have used (i) for the second inequality.", "This term can now also be treated by Lemma REF (i), (ii) and (iv).", "Lemma 7.2 (Lower bound for order statistics) Let $ Z_{1}, \\dots , Z_{p} \\sim N( 0, \\sigma ^{2} ) $ i.i.d.", "Then, the order statistic $ Z_{ ( p - m + 1 ) } $ satisfies $Z_{ ( p - m + 1 ) }\\ge c \\sqrt{ \\sigma ^{2} \\log p }$ with probability converging to one for $ p \\rightarrow \\infty $ , as long as $ m \\le c p^{ \\varrho } $ for some $ \\varrho \\in ( 0, 1 ) $ .", "[Proof]Without loss of generality, let $ p / m \\in 
\\mathbb {N} $ .", "Split the sample into $ m $ groups of size $ p / m $ and for a natural number $ k \\le m $ , let $ Z^{ (k) } $ denote the maximal value of the $ Z_{j}, j \\le p $ which belong to the $ k $ -th group.", "Then, by a union bound, $& \\ \\ \\ \\ \\mathbb {P} \\Big \\lbrace Z_{ ( p - m + 1 ) } \\ge \\sqrt{ \\sigma ^{2} \\log \\Big ( \\frac{p}{m} \\Big ) }\\Big \\rbrace \\ge \\mathbb {P} \\Big \\lbrace \\min _{ k \\le m } Z^{ (k) }\\ge \\sqrt{ \\sigma ^{2} \\log \\Big ( \\frac{p}{m} \\Big ) }\\Big \\rbrace \\\\& \\ge 1 -m\\mathbb {P} \\Big \\lbrace Z^{ (1) } < \\sqrt{ \\sigma ^{2} \\log \\Big ( \\frac{p}{m} \\Big ) }\\Big \\rbrace =1 -m\\Big (1 -\\mathbb {P} \\Big \\lbrace Z^{ (1) } \\ge \\sqrt{ \\sigma ^{2} \\log \\Big ( \\frac{p}{m} \\Big ) }\\Big \\rbrace \\Big ).$ By independence, we can further estimate $& \\ \\ \\ \\ \\mathbb {P} \\Big \\lbrace Z^{ (1) } \\ge \\sqrt{ \\sigma ^{2} \\log \\Big ( \\frac{p}{m} \\Big ) }\\Big \\rbrace =1 -\\Big (1 -\\mathbb {P} \\Big \\lbrace Z_{1} \\ge \\sqrt{ \\sigma ^{2} \\log \\Big ( \\frac{p}{m} \\Big ) }\\Big \\rbrace \\Big )^{ \\frac{p}{m} }\\\\& \\ge 1 -\\Big (1 - \\frac{c}{ \\sqrt{ ( p / m ) \\log ( p / m ) } }\\Big )^{ \\frac{p}{m} }\\ge 1 - e^{ - c \\sqrt{ \\frac{p}{m} / \\log ( \\frac{p}{m} ) } },$ using the lower Gaussian tail bound from, e.g., Proposition 2.1.2 in Vershynin [25].", "Together, this yields $\\mathbb {P} \\Big \\lbrace Z_{ ( p - m + 1 ) } \\ge \\sqrt{ \\sigma ^{2} \\log \\Big ( \\frac{p}{m} \\Big ) }\\Big \\rbrace & \\ge 1 - m e^{ - c \\sqrt{ \\frac{p}{m} / \\log ( \\frac{p}{m} ) } }\\xrightarrow[ ]{ p \\rightarrow \\infty } 1.$ On the event on the left-hand side, the order statistic satisfies $Z_{ ( p - m + 1 ) }\\ge c \\sqrt{ ( 1 - \\varrho ) \\sigma ^{2} \\log p }$ ." ], [ "Proofs for supplementary results", "[Proof of Lemma REF (Bound for the population stochastic error)]We define $Q_{m} = \\langle Y - \\Pi _{m} f^{*}, g_{ \\widehat{J}_{m} } \\rangle _{n}= \\langle \\varepsilon , g_{ \\widehat{J}_{m} } \\rangle _{n}+ \\langle ( I - \\Pi _{m} ) f^{*}, g_{ \\widehat{J}_{m} } \\rangle _{n}.$ From Corollary REF (i), it follows that $\\Vert Q_{m} \\Vert _{2} ^{2}& \\le 2\\Big (m \\sup _{ j \\le p }\\langle \\varepsilon , g_{j} \\rangle _{n}^{2}+( 1 + C_{ \\text{Cov} } )^{2} \\Vert \\beta ^{*} \\Vert _{1}^{2}m\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^{2}( \\mathbb {P} ) }|^{2}\\Big )\\in \\mathbb {R}^{m}$ Note that under $ s $ -sparsity, we can ignore the second term in the parentheses when $ m > \\tilde{m} $ , since then, $( I - \\Pi _{ m } ) f^{*} = 0$ .", "We can express the function $ \\widehat{F}^{ (m) } - \\Pi _{m} f^{*} $ in terms of $ Q_{m} $ via $\\widehat{F}^{ (m) } - \\Pi _{m} f^{*}=g_{ \\widehat{J}_{m} }^{ \\top }\\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}\\langle Y - \\Pi _{m} f^{*}, g_{ \\widehat{J}_{m} } \\rangle _{n}=( \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1} Q_{m} )^{ \\top }g_{\\widehat{J}_{m}}.$ Due to the mean zero design, we have $\\Vert \\widehat{F}^{ (m) } - \\Pi _{m} f^{*} \\Vert ^{2}_{ L^{2} }& =\\text{Cov}( \\widehat{F}^{ (m) } - \\Pi _{m} f^{*} )=Q_{m}^{ \\top }\\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}\\Gamma _{ \\widehat{J}_{m} }\\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}Q_{m}\\\\& =Q_{m}^{ \\top }\\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}(\\widehat{ \\Gamma } _{ \\widehat{J}_{m} }- \\widehat{ \\Gamma } _{ \\widehat{J}_{m} }+ \\Gamma _{ \\widehat{J}_{m} })\\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1}Q_{m}\\\\& 
\\le \\Vert Q_{m} \\Vert _{2}^{2}\\Vert \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1} \\Vert _{ \\text{op} }+ \\Vert Q_{m} \\Vert _{2}^{2}\\Vert \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }^{-1} \\Vert _{ \\text{op} }^{2}\\Vert \\widehat{ \\Gamma }_{ \\widehat{J}_{m} }- \\Gamma _{ \\widehat{J}_{m} }\\Vert _{ \\text{op} }.$ Together, Equations (REF ) and (REF ) yield the desired result on the intersection of the events from Lemma REF (i)-(iv), the probability of which converges to one.", "[Proof of Proposition REF (Bound for the population bias)]We present the proof under Assumption [assSparse] (Sparse) (ii).", "Under [assSparse] (Sparse) (i), the reasoning is analogous.", "The details are discussed in Step 5.", "Step 1: Sketch of the arguments.", "For $ 1 \\le k \\le p $ , we consider the two residual dot product terms $\\mu _{ J, k }:& =\\langle ( I - \\Pi _{J} ) f^{*}, g_{k} \\rangle _{ L^{2} }=\\sum _{ j \\notin J }\\beta _{j}\\langle g_{j}, ( I - \\Pi _{J} ) g_{k} \\rangle _{ L^{2} },\\\\\\widehat{ \\mu }_{ J, k }:& =\\Big \\langle ( I - \\widehat{ \\Pi }_{J} ) Y, \\frac{ g_{k} }{ \\Vert g_{k} \\Vert _{n} }\\Big \\rangle _{n}=\\Big \\langle \\varepsilon , \\frac{ ( I - \\widehat{ \\Pi }_{J} ) g_{k} }{ \\Vert g_{k} \\Vert _{n} }\\Big \\rangle _{n}+\\Big \\langle f^{*},\\frac{ ( I - \\widehat{ \\Pi }_{J} ) g_{k} }{ \\Vert g_{k} \\Vert _{n} }\\Big \\rangle _{n}.$ Note that the choice of the next component in Algorithm REF is based on $ \\widehat{ \\mu }_{ J, k } $ and $ \\mu _{ J, k } $ is its population counterpart.", "In Step 4, we show that $\\mathbb {P}( A_{n}( M_{n} )^{c} ) & \\xrightarrow[ ]{ n \\rightarrow \\infty } 0,\\text{ where }\\\\A_{n}(m)& =\\Big \\lbrace \\sup _{ | J | \\le m, j \\le p }| \\widehat{ \\mu }_{ J, j } - \\mu _{ J, j } |\\le \\tilde{C} \\sqrt{ \\frac{ ( \\sigma ^{2} + \\rho ^{4} ) \\log p }{n} }\\Big \\rbrace $ for some constant $ C > 0 $ .", "Then, for any $ \\xi \\in ( 0, 1 ) $ , we set $B_{n}(m):=\\Big \\lbrace \\inf _{ l \\le m } \\sup _{ j \\le p }| \\mu _{ \\widehat{J}_{l}, j } |> \\frac{ 2 \\tilde{C} }{ 1 - \\xi }\\sqrt{ \\frac{ ( \\sigma ^{2} + \\rho ^{4} ) \\log p }{n} }\\Big \\rbrace .$ In Step 2 of the proof, we show that on $ A_{n}(m) \\cap B_{n}(m) $ , $| \\mu _{ \\widehat{J}_{l}, \\widehat{j}_{ l + 1 } } |\\ge \\xi \\sup _{ j \\le p }| \\mu _{ \\widehat{J}_{l}, j } |\\qquad \\text{ for all } l \\le m,$ which implies $\\Vert ( I - \\Pi _{ l + 1 } ) f^{*} \\Vert _{ L^2( \\mathbb {P} ) }^{2}& \\le \\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^2( \\mathbb {P} ) }^{2}\\Big (1-c \\big (\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^2( \\mathbb {P} ) }^{2}\\big )^{ 1 / ( 2 \\gamma - 1 ) }\\Big )\\\\& \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\qquad \\text{ for all } l \\le m - 1.$ Lemma 1 from Gao et al.", "[10] states that any sequence $ ( a_{m} )_{ m \\in \\mathbb {N}_{0} } $ for which there exist $ A, c > 0 $ and $ \\alpha \\in ( 0, 1 ] $ with $a_{0} \\le A\\qquad \\text{ and } \\qquad a_{ m + 1 } \\le a_{m} ( 1 - c a_{m}^{ \\alpha } )\\qquad \\text{ for all } m = 0, 1, 2, \\dots $ satisfies $a_{m}& \\le \\max \\lbrace 2^{ 1 / \\alpha ^{2} } ( c \\alpha )^{ - 1 / \\alpha }, A \\rbrace m^{ - 1 / \\alpha }\\qquad \\text{ for all } m = 0, 1, 2, \\dots $ Applying this to $( \\Vert ( I - \\Pi _{m} ) f \\Vert _{ L^{2} }^{2} )_{ m \\in \\mathbb {N}_{0} }$ then yields $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2( \\mathbb {P} ) }^{2}\\le C m^{ 1 - 2 \\gamma }\\qquad \\text{ on } A_{n}(m) \\cap B_{n}(m).$ In Step 3, we independently show that $\\Vert ( 
I - \\Pi _{m} ) f^{*} \\Vert _{ L^2( \\mathbb {P} ) }^{2}& \\le C\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) }\\qquad \\text{on } B_{n}(m)^{c}.$ This establishes that on $ A_{n}( M_{n} ) $ , $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2( \\mathbb {P} ) }^{2}& \\le C\\Big (m^{ 1 - 2 \\gamma }+\\Big (\\frac{ ( \\overline{ \\sigma }^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) }\\Big )\\qquad \\text{ for all } m \\le M_{n}.$ Finally, the monotonicity of $m \\mapsto \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2( \\mathbb {P} ) }^{2}$ yields that the estimate in Equation (REF ) also holds for $ m > M_{n} $ as $ m^{ 1 - 2 \\gamma } $ then becomes a lower order term.", "Step 2: Analysis on $ A_{n}(m) \\cap B_{n}(m) $ .", "On $ A_{n}(m) \\cap B_{n}( m ) $ , for any $ l \\le m $ , we have $| \\mu _{ \\widehat{J}_{l}, \\widehat{j}_{l + 1} } |& \\ge | \\widehat{ \\mu }_{ \\widehat{J}_{l}, \\widehat{j}_{l + 1} } |-|\\widehat{ \\mu }_{ \\widehat{J}_{l}, \\widehat{j}_{l + 1} }- \\mu _{ \\widehat{J}_{l}, \\widehat{j}_{l + 1} }|\\ge | \\widehat{ \\mu }_{ \\widehat{J}_{l}, \\widehat{j}_{l + 1} } |-\\sup _{ | J | \\le m, j \\le p }| \\widehat{ \\mu }_{ J, j } - \\mu _{ J, j } |\\\\& \\ge \\sup _{ j \\le p }| \\widehat{ \\mu }_{ \\widehat{J}_{l}, j } |- \\tilde{C} \\sqrt{ \\frac{ ( \\sigma ^{2} + \\rho ^{4} ) \\log p }{n} }\\ge \\sup _{ j \\le p }| \\mu _{ \\widehat{J}_{l}, j } |- 2 \\tilde{C} \\sqrt{ \\frac{ ( \\sigma ^{2} + \\rho ^{4} ) \\log p }{n} }\\\\& \\ge \\xi \\sup _{ j \\le p }| \\mu _{ \\widehat{J}_{l}, j } |,$ where for the third inequality, we have used the definition of Algorithm REF and $ A_{n}(m) $ .", "For the final inequality, we have used the definition of $ B_{n}(m) $ .", "Using the orthogonality of the projection $ \\Pi _{m} $ , we can always estimate $\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}& =\\Big \\langle ( I - \\Pi _{l} ) f^{*},\\sum _{ j = 1 }^{p} \\beta ^{*}_{j} g_{j}\\Big \\rangle _{ L^{2} }=\\sum _{ j \\notin \\widehat{J}_{l} }\\beta ^{*}_{j}\\langle ( I - \\Pi _{l} ) f^{*}, g_{j} \\rangle _{ L^{2} }\\\\& \\le \\sup _{ j \\le p }\\langle ( I - \\Pi _{l} ) f^{*}, g_{j} \\rangle _{ L^{2} }\\sum _{ j \\notin \\widehat{J}_{l} }| \\beta ^{*}_{j} |=\\sup _{ j \\le p } | \\mu _{ \\widehat{J}_{l}, j } |\\sum _{ j \\notin \\widehat{J}_{l} }| \\beta ^{*}_{j} |.$ Further, $\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}& =( \\beta ^{*} - \\beta ( \\Pi _{l} f^{*} ) )^{ \\top }\\Gamma ( \\beta ^{*} - \\beta ( \\Pi _{l} f^{*} ) )\\\\& \\ge \\lambda _{ \\min }( \\Gamma )\\Vert ( \\beta ^{*} - \\beta ( \\Pi _{l} f^{*} ) ) \\Vert _{2}^{2}\\ge c_{ \\lambda }\\sum _{ j \\notin \\widehat{J}_{l} }| \\beta ^{*}_{j} |^{2},$ where in the last step, we have used that $\\beta ( \\Pi _{l} f^{*} )_{j} = 0$ for all $ j \\notin \\widehat{J}_{l} $ .", "Together with the $ \\gamma $ -sparsity Assumption, Equations (REF ) and (REF ) yield $\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}& \\le \\sup _{ j \\le p } | \\mu _{ \\widehat{J}_{l}, j } |\\sum _{ j \\notin \\widehat{J}_{l} }| \\beta _{j}^{*} |\\le \\sup _{ j \\le p } | \\mu _{ \\widehat{J}_{l}, j } |\\Big (\\sum _{ j \\notin \\widehat{J}_{l} }| \\beta _{j}^{*} |^{2}\\Big )^{ ( \\gamma - 1 ) / ( 2 \\gamma - 1 ) }\\\\& \\le C \\sup _{ j \\le p } | \\mu _{ \\widehat{J}_{l}, j } |\\big (\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}\\big )^{ ( \\gamma - 1 ) / ( 2 \\gamma - 1 ) }.$ Since the inequality in (REF ) holds on $ A_{n}(m) \\cap B_{n}(m) $ , we have that for any $ l \\le m - 1 $ , $\\Vert ( I - \\Pi _{ l + 1 } ) f^{*} \\Vert _{ L^{2} }^{2}& \\le \\Vert ( I - \\Pi _{l} ) f^{*}-\\mu _{ \\widehat{J}_{l}, \\widehat{j}_{ l + 1 } }g_{ \\widehat{j}_{ l + 1 } }\\Vert _{ L^{2} }^{2}=\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}-\\mu _{ \\widehat{J}_{l}, \\widehat{j}_{ l + 1 } }^{2}\\\\& \\le \\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}-\\xi ^{2}\\sup _{ j \\le p } \\mu _{ \\widehat{J}_{l}, j }^{2}\\\\& \\le \\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}-c \\big (\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}\\big )^{ 2 \\gamma / ( 2 \\gamma - 1 ) }\\\\& \\le \\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}\\Big (1-c \\big (\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}\\big )^{ 1 / ( 2 \\gamma - 1 ) }\\Big ).$ Step 3: Analysis on $ A_{n}(m) \\cap B_{n}(m)^{c} $ .", "Equation (REF ) implies that $\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}& \\le C \\Big (\\sup _{ j \\le p } | \\mu _{ \\widehat{J}_{l}, j } |\\Big )^{ 2 - 1 / \\gamma }.$ On $ B_{n}(m)^{c} $ , we therefore obtain by the monotonicity of $m \\mapsto \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}$ that $& \\ \\ \\ \\ \\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}=\\inf _{ l \\le m }\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}\\\\& \\le C \\Big (\\min _{ l \\le m }\\sup _{ j \\le p }| \\mu _{ \\widehat{J}_{l}, j } |\\Big )^{ 2 - 1 / \\gamma }\\le C\\Big (\\frac{ ( \\sigma ^{2} + \\rho ^{4} ) \\log p }{n}\\Big )^{ 1 - 1 / ( 2 \\gamma ) }.$ Step 4: $ \\mathbb {P}( A_{n}( M_{n} )^{c} ) \\rightarrow 0 $ .", "Since for $ k \\in J $ , $ \\widehat{ \\mu }_{ J, k } = \\mu _{ J, k } = 0, $ we only need to consider the case $ k \\notin J $ .", "For $ | J | \\le M_{n} $ , we may write $\\widehat{ \\mu }_{ J, k } - \\mu _{ J, k }& =\\Big \\langle \\varepsilon ,\\frac{ ( I - \\widehat{ \\Pi }_{J} ) g_{k} }{ \\Vert g_{k} \\Vert _{n} }\\Big \\rangle _{n}\\\\& +\\sum _{ j \\notin J }\\beta _{j}\\Big [\\frac{\\langle g_{j}, ( I - \\widehat{ \\Pi }_{J} ) g_{k}\\rangle _{n}}{ \\Vert g_{k} \\Vert _{n} }- \\langle g_{j}, ( I - \\Pi _{J} ) g_{k} \\rangle _{ L^{2} }\\Big ].$ From Lemma REF (i) and Corollary REF (ii), we obtain an event with probability converging to one on which $\\Big \\langle \\varepsilon ,\\frac{ ( I - \\widehat{ \\Pi }_{J} ) g_{k} }{ \\Vert g_{k} \\Vert }\\Big \\rangle _{n}& \\le 2\\langle \\varepsilon , ( I - \\widehat{ \\Pi }_{J} ) g_{k}\\rangle _{n}\\le C \\sqrt{ \\frac{ ( \\sigma ^{2} + \\rho ^{4} ) \\log p }{n} }.$ Further, for $ j, k \\notin J $ , we can estimate $& \\ \\ \\ \\ \\frac{\\langle g_{j}, ( I - \\widehat{ \\Pi }_{J} ) g_{k}\\rangle _{n}}{ \\Vert g_{k} \\Vert _{n} }-\\langle g_{j}, ( I - \\Pi _{J} ) g_{k} \\rangle _{ L^{2} }\\\\& \\le \\Big | \\frac{1}{ \\Vert g_{k} \\Vert _{n} } - 1 \\Big |\\Vert g_{j} \\Vert _{n} \\Vert g_{k} \\Vert _{n}+|\\langle g_{j}, ( I - \\widehat{ \\Pi }_{J} ) g_{k} \\rangle _{n}-\\langle g_{j}, ( I - \\Pi _{J} ) g_{k}\\rangle _{ L^{2} }|.$ From Lemma REF (i), we obtain an event with probability converging to one on which $& \\ \\ \\ \\ \\Big | \\frac{1}{ \\Vert g_{k} \\Vert _{n} } - 1 \\Big |\\Vert g_{j} \\Vert _{n} \\Vert g_{k} \\Vert _{n}\\le | \\Vert g_{k} \\Vert _{n} - 1 | \\Vert g_{j} \\Vert _{n}\\\\& \\le | \\Vert g_{k} \\Vert _{n}^{2} - 1 | \\Vert g_{j} \\Vert _{n}\\le C \\sqrt{ \\frac{ \\rho ^{4} \\log p }{n} }.$ Additionally, $& \\ \\ \\ \\ |\\langle g_{j}, ( I - \\widehat{ \\Pi }_{J} ) g_{k} \\rangle _{n}- \\langle g_{j}, ( I - \\Pi _{J} ) g_{k} \\rangle _{ L^2 }|\\\\& 
\\le |\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle _{ L^2 }|+| \\langle g_{j}, ( \\widehat{ \\Pi }_{J} - \\Pi _{J} ) g_{k}\\rangle _{n} |+|\\langle g_{j}, \\Pi _{J} g_{k} \\rangle _{n}- \\langle g_{j}, \\Pi _{J} g_{k} \\rangle _{ L^2 }|\\\\& \\le ( C_{ \\text{Cov} } + 1 )\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}- \\langle g_{j}, g_{k} \\rangle |+| \\langle g_{j}, \\widehat{ \\Pi }_{J} ( I - \\Pi _{J} ) g_{k}\\rangle _{n} |$ by Assumption [assCovB] (CovB).", "The first term can be treated by Lemma REF (i) again.", "Using the representation of $ \\widehat{ \\Pi }_{J} $ in Equation (REF ), the second term can be estimated against $& \\ \\ \\ \\ | \\langle ( I - \\Pi _{J} ) g_{j},\\widehat{ \\Pi }_{J} ( I - \\Pi _{J} ) g_{k}\\rangle _{n} |+| \\langle \\Pi _{J} g_{j},\\widehat{ \\Pi }_{J} ( I - \\Pi _{J} ) g_{k}\\rangle _{n} |\\\\& =|\\langle ( I - \\Pi _{J} ) g_{j}, g_{J} \\rangle _{n}^{ \\top }\\widehat{ \\Gamma }_{J}^{-1}\\langle ( I - \\Pi _{J} ) g_{k}, g_{J} \\rangle _{n}|+| \\langle \\Pi _{J} g_{j}, ( I - \\Pi _{J} ) g_{k}\\rangle _{n} |\\\\& \\le \\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} }\\Vert \\langle ( I - \\Pi _{J} ) g_{j}, g_{J} \\rangle _{n}^{ \\top }\\Vert _{2}^{2}+|\\langle g_{J}^{ \\top } \\Gamma _{J}^{-1} \\langle g_{j}, g_{J}\\rangle ,( I - \\Pi _{J} ) g_{k}\\rangle _{n}|\\\\& \\le \\Vert \\widehat{ \\Gamma }_{J}^{-1} \\Vert _{ \\text{op} } M_{n}\\sup _{ j \\in J, k \\notin J }| \\langle ( I - \\Pi _{J} ) g_{k}, g_{j} \\rangle _{n} |^{2}+C_{ \\text{Cov} }\\sup _{ j \\in J, k \\notin J }| \\langle ( I - \\Pi _{J} ) g_{k}, g_{j} \\rangle _{n} |.$ Analogously to before, the remaining terms can now be treated by Lemma REF (iv) and Corollary REF (i).", "The result now follows by intersecting all the events and taking the supremum in Equation (REF ).", "Step 5: $ s $ -sparse setting.", "Under Assumption [assSparse] (Sparse) (i), the general argument from Step 1 is the same.", "However, we need to consider the events $A_{n}(m):& =\\Big \\lbrace \\sup _{ | J | \\le m, k \\le p }| \\widehat{ \\mu }_{ J, k } - \\mu _{ J, k } |\\le C \\sqrt{\\frac{( \\overline{ \\sigma }^{2} + \\Vert \\beta ^{*} \\Vert _{1} \\rho ^{4} )s \\log p}{n}}\\Big \\rbrace ,\\\\B_{n}(m):& =\\Big \\lbrace \\inf _{ l \\le m } \\sup _{ l \\le p }| \\mu _{ \\widehat{J}_{l}, j } |>\\frac{ 2 C }{ 1 - \\xi }\\sqrt{\\frac{( \\overline{ \\sigma }^{2} + \\Vert \\beta ^{*} \\Vert _{1} \\rho ^{4} )s \\log p}{n}}\\Big \\rbrace .$ On $ A_{n}(m) \\cap B_{n}(m)^{c} $ , the analysis is exactly the same as before.", "In Step 4, we can also argue the same.", "We merely have to account for the coefficients of $ f^{*} $ by a factor $ \\Vert \\beta ^{*} \\Vert _{1} $ , instead of shifting them into the constant.", "On $ A_{n}(m) \\cap B_{n}(m) $ , we obtain that for any $ l \\le m $ , $\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}& \\le \\sup _{ | J | \\le l, j \\le p } | \\mu _{ J, j } |\\sum _{ j \\notin \\widehat{J}_{l} }| \\beta ^{*}_{j} |\\le \\sup _{ | J | \\le l, j \\le p } | \\mu _{ J, j } |\\Big (s\\sum _{ j \\notin \\widehat{J}_{l} }| \\beta ^{*}_{j} |^{2}\\Big )^{ \\frac{1}{2} }.$ Since additionally, $\\sum _{ j \\notin \\widehat{J}_{l} }| \\beta ^{*}_{j} |^{2}\\le c \\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}$ , we have $\\sup _{ | J | \\le l, j \\le p } \\mu _{ J, j }^{2}& \\ge \\frac{c}{s}\\Vert ( I - \\Pi _{l} ) f^{*} \\Vert _{ L^{2} }^{2}.$ Recursively, this yields $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}& \\le \\Vert ( I - \\Pi _{ m 
- 1 } ) f^{*}-\\mu _{ \\widehat{J}_{ m - 1 }, \\widehat{j}_{m} }g_{ \\widehat{j}_{m} }\\Vert _{ L^{2} }^{2}=\\Vert ( I - \\Pi _{ m - 1 } ) f^{*} \\Vert _{ L^{2} }^{2}-\\mu _{ \\widehat{J}_{ m - 1 }, \\widehat{j}_{m} }^{2}\\\\& \\le \\Vert ( I - \\Pi _{ m - 1 } ) f^{*} \\Vert _{ L^{2} }^{2}-\\xi ^{2}\\sup _{ j \\le p } \\mu _{ \\widehat{J}_{ m - 1 }, j }^{2}\\le \\Vert ( I - \\Pi _{ m - 1 } ) f^{*} \\Vert _{ L^{2} }^{2}\\Big ( 1 - \\frac{c}{s} \\Big )\\\\& \\le \\Vert f^{*} \\Vert _{ L^{2} }^{2}\\exp \\Big ( \\frac{ - c m }{s} \\Big ).$ Finally, as long as $ S \\not\\subset \\widehat{J}_{m} $ , $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}& \\ge \\beta ( ( I - \\Pi _{m} ) f^{*} )^{ \\top }\\Gamma \\beta ( ( I - \\Pi _{m} ) f^{*} )\\ge c_{ \\lambda } \\underline{ \\beta }^{2}.$ However, for $ m = C s $ with some $ C > 0 $ and $ n \\in \\mathbb {N} $ large enough, $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^{2} }^{2}& \\le \\frac{ c_{ \\lambda } }{4} \\underline{ \\beta }^{2}+\\sqrt{\\frac{( \\overline{ \\sigma }^{2} + \\Vert \\beta ^{*} \\Vert _{1} \\rho ^{4} )s \\log p}{n}}\\le \\frac{ c_{ \\lambda } }{2} \\underline{ \\beta }^{2}$ with probability converging to one.", "This yields the last claim of Proposition REF .", "[Proof of Proposition REF (Fast norm change for the bias)] For a fixed $ m \\le M_{n} $ , let $ \\tilde{\\beta }$ be the coefficients of $ ( I - \\Pi _{m} ) f^{*} $ .", "For $ \\gamma $ -sparse $ \\beta ^{*} $ , we then have $& \\ \\ \\ \\ |\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{n}^{2}-\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}|=|\\sum _{ j, k = 1 }^{p}\\tilde{\\beta }_{j}\\tilde{\\beta }_{k}(\\langle g_{j}, g_{k} \\rangle _{n}-\\langle g_{j}, g_{k} \\rangle _{ L^2 })|\\\\& \\le \\Vert \\tilde{\\beta }\\Vert _{1}^{2}\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}-\\langle g_{j}, g_{k} \\rangle _{ L^2 }|\\\\& \\le ( C_{ \\text{Cov} } + 1 )\\Big (\\sum _{ j \\notin \\widehat{J}_{m} } | \\beta ^{*}_{j} |\\Big )^{2}\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}-\\langle g_{j}, g_{k} \\rangle _{ L^2 }|\\\\& \\le ( C_{ \\text{Cov} } + 1 )C_{ \\gamma }\\Big (\\sum _{ j \\notin \\widehat{J}_{m} } | \\beta ^{*}_{j} |^{2}\\Big )^{ \\frac{ 2 \\gamma - 2 }{ 2 \\gamma - 1 } }\\sup _{ j, k \\le p }|\\langle g_{j}, g_{k} \\rangle _{n}-\\langle g_{j}, g_{k} \\rangle _{ L^2 }|,$ where the second inequality follows from the uniform Baxter inequality in (REF ).", "Additionally, we have $\\Vert ( I - \\Pi _{m} ) f^{*} \\Vert _{ L^2 }^{2}& =\\tilde{\\beta }^{ \\top } \\Gamma \\tilde{\\beta }\\ge \\lambda _{ \\min }( \\Gamma )\\Vert \\tilde{\\beta }\\Vert _{2}^{2}\\ge c_{ \\lambda }\\sum _{ j \\notin \\widehat{J}_{m} }| \\beta ^{*}_{j} |^{2}.$ Plugging this inequality into Equation (REF ) yields the result.", "For $ s $ -sparse $ \\beta ^{*} $ , the statement in Proposition REF is obtained analogously by using the inequality $\\Vert \\tilde{\\beta }\\Vert _{1}^{2}\\le ( s + m ) \\Vert \\tilde{\\beta }\\Vert _{2}^{2}$ , which follows from the fact that the projection $ ( I - \\Pi _{m} ) $ adds at most $ m $ components to the support of $ \\beta ^{*} $ .", "Lemma 8.1 (Uniform Baxter's inequality, Ing [12]) Under Assumption [assCovB] (CovB), for any $ J \\subset \\lbrace 1, \\dots , p \\rbrace $ with $ | J | \\le M_{n} $ , we have $\\Vert \\beta ( ( I - \\Pi _{J} ) f^{*} ) \\Vert _{1}& \\le ( C_{ \\text{Cov} } + 1 )\\sum _{ j \\notin J } | \\beta ^{*}_{j} |,$ where $ \\beta ( ( I - \\Pi _{J} ) f^{*} ) $ denotes the coefficient vector of the population residual term $ ( I - \\Pi 
_{J} ) f^{*} $ .", "[Proof] For any $ J \\subset \\lbrace 1, \\dots , p \\rbrace $ as above, we have $\\Vert \\beta ( ( I - \\Pi _{J} ) f^{*} ) \\Vert _{1}& =\\Vert \\beta ^{*} - \\beta ( \\Pi _{J} f^{*} ) \\Vert _{1}\\le \\Vert \\beta ^{*}_{J} - \\beta ( \\Pi _{J} f^{*} ) \\Vert _{1}+\\sum _{ j \\notin J } | \\beta ^{*}_{j} |\\\\& \\le ( C_{ \\text{Cov} } + 1 )\\sum _{ j \\notin J } | \\beta ^{*}_{j} |,$ where for the last inequality, we have used that $& \\ \\ \\ \\ \\beta ^{*}_{J} - \\beta ( \\Pi _{J} f^{*} )=\\Gamma _{J}^{-1} \\Gamma _{J}(\\beta ^{*}_{J}-\\Gamma _{J}^{-1} \\langle f^{*}, g_{J} \\rangle _{ L^{2} })\\\\& =\\Gamma _{J}^{-1}\\Big (\\Gamma _{J} \\beta ^{*}_{J}-\\sum _{ j = 1 }^{p}\\beta ^{*}_{j}\\langle g_{j}, g_{J} \\rangle _{ L^{2} }\\Big )=- \\sum _{ j \\notin J }\\beta ^{*}_{j}\\Gamma _{J}^{-1}\\langle g_{j}, g_{J} \\rangle _{ L^{2} }$ and condition (REF ) in its formulation from Section REF ." ], [ "Simulation study", "In this section, we provide additional simulation results.", "We begin by displaying the boxplots of the stopping times for the different scenarios from Section .", "In order to indicate whether stopping happened before or after the classical oracle $m^{ \\mathfrak {o} }=\\operatornamewithlimits{arg\\!\\min }_{ m \\ge 0 } \\Vert \\widehat{F}^{ (m) } - f^{*} \\Vert _{n}^{2}$ , we report the difference $\\tau - m^{ \\mathfrak {o} }$ or $\\tau _{ \\text{two-step} } - m^{ \\mathfrak {o} }$ .", "Figures REF -REF correspond to the Figures REF , REF , REF and REF respectively.", "The results clearly indicate that for the true empirical noise level, the sequential early stopping time matches the classical oracle very closely.", "For the estimated noise level with $\\lambda _{0} = \\sqrt{ \\log (p) / n }$ , this is still true for the very sparse signals.", "For the estimated noise level with $\\lambda _{0} = \\sqrt{ 0.5 \\log (p) / n }$ , the stopping times systematically overestimate the classical oracle in the sparse signals, which is then corrected by the two-step procedure.", "Figure: Stopping times for early stopping with the true empirical noise level.", "Figure: Stopping times for early stopping with the estimated empirical noise level for $\\lambda _{0} = \\sqrt{ \\log (p) / n }$ .", "Figure: Stopping times for early stopping with the estimated empirical noise level for $\\lambda _{0} = \\sqrt{ 0.5 \\log (p) / n }$ .", "Figure: Stopping times for the two-step procedure.", "Figure: Boxplots for the relative efficiencies and the deviation $\\tau - m^{ \\mathfrak {o} }$ of the stopping time from the classical oracle for the estimated noise level with $\\lambda _{0} = \\sqrt{ \\log (p) / n }$ in the correlated design setting.", "Figure: Boxplots for the noise estimation errors with $\\lambda _{0} = \\sqrt{ \\log (p) / n }$ together with the classical oracle risk in the correlated design setting.", "For a correlated design simulation, we use the same setting as in Section but instead of $\\Gamma = I_{p}$ , we consider the covariance matrix $\\Gamma :=\\begin{pNiceMatrix}[columns-width=.5cm]1 & a & b & & \\\\a & 1 & a & \\ddots & \\\\b & a & \\ddots & \\ddots & b \\\\& \\ddots & \\ddots & 1 & a \\\\& & b & a & 1 \\\\\\end{pNiceMatrix}\\in \\mathbb {R}^{ p \\times p }\\qquad \\text{ for } a = 0.4, b = 0.1.$", "This allows for substantial serial correlation over the whole set of covariates.", "Since $ a + b \\le 1 / 2 $ , the estimate $v^{ \\top } \\Gamma v& =\\sum _{ j = 1 }^{p} v_{j}^{2}+2 a \\sum _{ j = 1 }^{ p - 1 } v_{j} v_{ j + 1 }+2 b \\sum _{ j = 1 }^{ p - 2 } v_{j} v_{ j + 2 }>( 1 - 2 ( a + b ) ) \\Vert v \\Vert _{2}^{2}$ for all $ v \\in \\mathbb {R}^{p} $ guarantees that $ \\Gamma $ is a well-defined positive definite covariance matrix.", "Coincidentally, this also guarantees that the cumulative coherence satisfies $\\mu (m) \\le 1 / 2$ for all $ m \\ge 1 $ .", "By Example REF (b), we can therefore assume that Assumption [assCovB] (CovB) is satisfied.", "The medians of the classical oracles $ m^{ \\mathfrak {o} } $ are given by $( 5, 10, 28, 15, 53, 72 )$ in the same order as the signals are displayed in Figure REF .", "The medians of the balanced oracle $ m^{ \\mathfrak {b} } $ are given by $( 6, 13, 54, 15, 55, 80 )$ .", "Here, both the early stopping procedure with the noise estimate for $\\lambda _{0} = \\sqrt{ \\log (p) / n }$ and the two-step procedure match the benchmark results for the full Akaike selection from Ing [12] and the 5-fold cross-validated LassoCV from Scikit-learn [18].", "Figure: Boxplots for the relative efficiencies and the deviation $\\tau _{ \\text{two-step} } - m^{ \\mathfrak {o} }$ of the two-step procedure from the classical oracle for an estimated noise level with $\\lambda _{0} = \\sqrt{ 0.5 \\log (p) / n }$ in the correlated design setting.", "Figure: Boxplots for the relative efficiencies for the Akaike criterion from Ing with $C_{ \\text{HDAIC} } = 2$ and the Lasso based on 5-fold cross-validation in the correlated design setting.", "Figure: Boxplots for the relative efficiencies and the deviation $\\tau - m^{ \\mathfrak {o} }$ of the stopping time from the classical oracle for the estimated noise level with $\\lambda _{0} = \\sqrt{ \\log (p) / n }$ in the classification setting.", "Figure: Boxplots for the noise estimation errors with $\\lambda _{0} = \\sqrt{ \\log (p) / n }$ together with the classical oracle risk in the classification setting.", "For the classification setting in Example REF (b), we maintain the correlated design and sample Bernoulli distributed labels $ Y_{i}, i = 1, \\dots , n $ , with $\\mathbb {P} \\lbrace Y_{i} = 1 \\rbrace =\\max \\Big (\\min \\Big (0.5+\\sum _{ j = 1 }^{p}\\tilde{\\beta }^{ (r) }_{j} X_{i}^{ (j) },1\\Big ),0\\Big ),\\qquad r \\in \\lbrace 3, 2, 1, 15, 60, 90 \\rbrace ,$ where the $ \\tilde{\\beta }^{ (r) } $ are rescaled versions of the coefficients $ \\beta ^{ (r) } $ from Section .", "In particular, $\\tilde{\\beta }^{ (3) } = 0.03 \\beta ^{ (3) }, \\qquad \\tilde{\\beta }^{ (2) } = 0.03 \\beta ^{ (2) }, \\qquad \\tilde{\\beta }^{ (1) } = 0.1 \\beta ^{ (1) },\\\\\\tilde{\\beta }^{ ( 15 ) } = 0.1 \\beta ^{ ( 15 ) }, \\qquad \\tilde{\\beta }^{ ( 60 ) } = 0.1 \\beta ^{ ( 60 ) }, \\qquad \\tilde{\\beta }^{ ( 90 ) } = 0.1 \\beta ^{ ( 90 ) }.$", "The rescaling guarantees that with high probability, the values of the linear signals are in between $ [ 0, 1 ] $ and the linear model is indeed a good approximation for the simulated data.", "In Algorithm REF , we add an intercept column $ X^{0} = 1 \\in \\mathbb {R}^{n} $ to the design.", "In this setting, estimating the noise level becomes essential, since it depends on the coefficients themselves.", "The medians of the empirical noise levels $ \\Vert \\varepsilon \\Vert _{n}^{2} $ are given by $( 0.15, 0.18, 0.28, 0.12, 0.19, 0.21 )$ in the order in which the signals are displayed.", "Figure: Boxplots for the relative efficiencies and the deviation $\\tau _{ \\text{two-step} } - m^{ \\mathfrak {o} }$ of the two-step procedure from the classical oracle for an estimated noise level with $\\lambda _{0} = \\sqrt{ 0.5 \\log (p) / n }$ in the classification setting.", "Figure: Boxplots for the relative efficiencies for the Akaike criterion from Ing with $C_{ \\text{HDAIC} } = 0.5$ and the Lasso based on 5-fold cross-validation in the classification setting.", "We present the same plots as in the correlated regression setting.", "The median oracles $ m^{ ( \\mathfrak {o} ) } $ and $ m^{ ( \\mathfrak {b} )} $ are given by $( 2, 3, 6, 13, 14, 8 )$ and $( 7, 5, 12, 29, 32, 28 )$ respectively.", "For both the two-step procedure and the full Akaike selection, we use a constant $ C_{ \\text{AIC} } = C_{ \\text{HDAIC} } = 0.5 $ , which is two times the maximal variance of a Bernoulli variable.", "[Acknowledgments] The author is very grateful for the discussions with Markus Reiß and Martin Wahl that were indispensable during the preparation of this paper.", "The research of the author has been partially funded by the Deutsche Forschungsgemeinschaft (DFG) – Project-ID 318763901-SFB1294." ] ]
2210.07850
[ [ "On heat equations associated with fractional harmonic oscillators" ], [ "Abstract We establish some fixed-time decay estimates in Lebesgue spaces for the fractional heat propagator $e^{-tH^{\\beta}}$, $t, \\beta>0$, associated with the harmonic oscillator $H=-\\Delta + |x|^2$.", "We then prove some local and global wellposedness results for nonlinear fractional heat equations." ], [ "Introduction", "Consider the heat equation associated with the fractional harmonic oscillator, namely ${\\left\\lbrace \\begin{array}{ll}\\partial _t u(t,x) + H^{\\beta }u(t,x)=0\\\\u(0,x)= u_0(x),\\end{array}\\right.}", "\\quad (t, x) \\in \\mathbb {R}^{+}\\times \\mathbb {R}^d,$ where $H^{\\beta }=(-\\Delta +|x|^2)^{\\beta }$ , $\\beta >0$ , and $u(t,x)\\in \\mathbb {C}$ .", "Strictly speaking, the corresponding fractional heat semigroup $e^{-t H^{\\beta }}$ is defined in terms of the spectral decomposition of the standard Hermite operator $H=H^{1}=-\\Delta +|x|^2$ .", "To be precise, recall that $ H = \\sum _{k=0}^\\infty (2k+d) P_k, $ where $P_k$ stands for the orthogonal projection of $ L^2(\\mathbb {R}^d) $ onto the eigenspace corresponding to the eigenvalue $(2k+d)$ – see Section REF below for further details.", "As a consequence of the spectral theorem, we can consider the family of fractional powers of $H$ defined by $ H^\\beta = \\sum _{k=0}^\\infty (2k+d)^\\beta P_k, \\quad \\beta >0.", "$ The heat semigroup $e^{-t H^{\\beta }}$ is then defined accordingly by $e^{-t H^{\\beta }}f = \\sum _{k=0}^\\infty e^{-t(2k+d)^{\\beta }} P_kf, \\quad f \\in L^2(\\mathbb {R}^d).", "$ While there is a wealth of literature on the semigroup $e^{-t (-\\Delta )^\\beta }$ (see e.g., [19], [31]), stimulated by the very wide range of physics-inspired models involving the fractional Laplacian [15], [11], the current research of the semigroup $e^{-tH^\\beta }$ is rather limited, even in fundamental settings such as the Lebesgue spaces.", "This is particularly striking in view of the role played by the Hermite operator $H$ and its fractional powers $H^\\beta $ in several aspects of quantum physics and mathematical analysis [27], [17].", "The purpose of this note is to advance the knowledge of the fractional heat semigroup, in the wake of a research program initiated by the authors in [3].", "In particular, our main result is a set of fixed-time decay estimates for $e^{-tH^\\beta }$ in the Lebesgue space setting.", "Theorem 1.1 For $1 \\le p,q\\le \\infty $ and $\\beta >0,$ set $\\quad \\sigma _\\beta \\frac{d}{2\\beta } \\Big |\\frac{1}{p}-\\frac{1}{q}\\Big |.$ If $p,q \\in (1, \\infty ),$ or $(p,q)=(1, \\infty ),$ or $p=1$ and $q \\in [2, \\infty ),$ or $p\\in (1, \\infty )$ and $q=1,$ then there exists a constant $C>0$ such that $\\Vert e^{-tH^\\beta } f\\Vert _{L^q} \\le {\\left\\lbrace \\begin{array}{ll}C e^{-td^\\beta } \\Vert f\\Vert _{L^p} & \\text{if} \\quad t\\ge 1\\\\C t^{-\\sigma _\\beta } \\Vert f\\Vert _{L^p} & \\text{if} \\quad 0<t\\le 1.\\end{array}\\right.", "}$ If $0<\\beta \\le 1,$ then the above estimate holds for $p,q \\in [1, \\infty ].$ To the best of our knowledge, the dissipative estimate in Theorem REF is new even for the Hermite operator ($\\beta =1$ ).", "We also stress that the time decay at infinity in (REF ) is sharp for any choice of Lebesgue exponents.", "Moreover, since the power of $t$ is never positive for small time, we infer that there is a singularity near the origin for $p\\ne q$ .", "It is worth emphasizing that the fractional Hermite propagator $e^{-tH^\\beta }$ is not a Fourier 
multiplier, hence we cannot rely on the arguments typically used to establish $L^p-L^q$ space-time estimates for the fractional heat propagator $e^{-t(-\\Delta )^\\beta }$ – see for instance [19].", "In fact, we will resort to techniques of pseudodifferential calculus to deal with the operators $e^{-tH^\\beta }$ and $e^{-tH}$ (cf. [20]), and also to Bochner's subordination formula in order to express the heat semigroup $e^{-tH^{\\beta }}$ , $0<\\beta \\le 1$ , in terms of solutions of the heat equation $ e^{-tH}$ (see (REF )).", "As an application of Theorem REF , we investigate the wellposedness of ${\\left\\lbrace \\begin{array}{ll}\\partial _t u(t,x) + H^{\\beta }u(t,x)= |u(t,x)|^{\\gamma -1} u(t,x)\\\\u(0,x)= u_0(x),\\end{array}\\right.} \\quad (t, x) \\in \\mathbb {R}^{+}\\times \\mathbb {R}^d,$ with $u(t,x)\\in \\mathbb {C}$ , $\\beta >0$ and $\\gamma >1$ .", "First, let us highlight that, due to the occurrence of the quadratic potential $|x|^2$ , the problem (REF ) has no scaling symmetry.", "Nevertheless, the companion fractional heat equation ${\\left\\lbrace \\begin{array}{ll}\\partial _t u(t,x) + (-\\Delta )^{\\beta }u(t,x)= |u(t,x)|^{\\gamma -1}u(t,x)\\\\u(0,x)= u_0(x),\\end{array}\\right.} \\quad \\quad (t, x) \\in \\mathbb {R}^{+}\\times \\mathbb {R}^d ,$ is invariant under the following scaling transformation.", "For $\\lambda >0$ , set $ u_{\\lambda } (t,x) = \\lambda ^{\\frac{2\\beta }{\\gamma -1}} u(\\lambda ^{2\\beta } t, \\lambda x) \\quad \\text{and} \\quad u_{0, \\lambda }(x) = \\lambda ^{\\frac{2\\beta }{\\gamma -1}} u_0(\\lambda x). $", "If $u(t,x)$ is a solution of (REF ) with initial datum $u_0(x)$ , then $u_{\\lambda }(t,x)$ is also a solution of (REF ) with initial datum $u_{0, \\lambda }(x)$ .", "The $L^p$ norm is invariant under the above scaling only when $p=p^\\beta _c := \\frac{d (\\gamma -1)}{2\\beta }$ ; the short computation behind this claim is spelled out at the end of this introduction.", "Motivated by this remark, we shall say that (REF ) is $L^p-{\\left\\lbrace \\begin{array}{ll} \\text{sub-critical} & \\text{if} \\quad 1\\le p <p^\\beta _c\\\\\\text{critical} & \\text{if} \\quad p=p^\\beta _c\\\\\\text{super-critical} & \\text{if} \\quad p>p^\\beta _c.\\end{array}\\right. }$", "Concerning the wellposedness of (REF ), our result can be stated as follows.", "Theorem 1.2 Assume that $u_0 \\in L^p(\\mathbb {R}^d), 1<p< \\infty $ and $\\beta >0.$ (Local well-posedness) If $p>p_c^{\\beta },$ then there exists a $T>0$ such that (REF ) has a solution $u \\in C([0, T ], L^p (\\mathbb {R}^d)).$ Moreover, $u$ extends to a maximal interval $[0, T_{\\max })$ such that either $T_{\\max }=\\infty $ or $T_{\\max } < \\infty $ and $\\displaystyle \\lim _{t\\rightarrow T_{\\max }} \\Vert u(t)\\Vert _{L^p} = \\infty .$ (Lower blow-up rate) Consider $p>p_c^{\\beta }$ and suppose that $T_{\\max }<\\infty ,$ where $T_{\\max }$ is the existence time of the resulting maximal solution of (REF ).", "Then $ \\Vert u(t)\\Vert _{L^p} \\ge C \\left( T_{\\max }- t \\right) ^{\\frac{d}{2p\\beta }-\\frac{1}{\\gamma -1}}, \\quad \\text{for all} \\ t \\in [0, T_{\\max }). $", "(Global existence) If $p=p_c^{\\beta }$ and $\\Vert u_0\\Vert _{L^{p_c^{\\beta }}}$ is sufficiently small, then $T_{\\max }=\\infty .$ Let us briefly recall the literature to better frame our results.", "Weissler [31] proved local wellposedness for (REF ) in $L^p$ for super-critical indices $p> p^1_c\\ge 1$ .", "Concerning the sub-critical regime $p< p_c^1$ , there is no general theory of existence, see [31], [6].", "Actually, Haraux-Weissler [14] proved that if $1<p^1_c < \\gamma +1$ then there is a global solution of (REF ) (with zero initial data) in $L^p(\\mathbb {R}^d)$ for $1\\le p < p^1_c$ , but no such solution exists when $\\gamma +1< p^1_c$ .", "In the critical case where $p=p^1_c$ , it is proved that the solution exists globally in time for small initial data.", "Some results in the same vein have been proved for the fractional heat equation (REF ) by Miao, Yuan and Zhang in [19].", "Remark 1.1 Let us discuss some aspects of the previous results.", "In particular, we highlight some intriguing related problems that we plan to explore in future work.", "- The sign in the power-type non-linearity (focusing or defocusing) will not play any role in our analysis.", "Therefore, we have chosen to consider the defocusing case for the sake of concreteness.", "- Using properties of Hermite functions and interpolation, in [32] Wong proved that $\\Vert e^{-tH}f\\Vert _{L^2(\\mathbb {R})} \\lesssim (\\sinh t)^{-1}\\Vert f\\Vert _{L^p(\\mathbb {R})}$ for $t>0$ and $1\\le p \\le 2$ .", "We note that Theorem REF recaptures and improves Wong's result.", "- It is known that (REF ) is ill-posed on Lebesgue spaces in the sub-critical regime, see [14].", "There is reason to believe that the same conclusion holds for (REF ).", "However, a thorough analysis of this problem is beyond the scope of this note.", "- It is expected that Theorem REF could be useful in dealing with other types of non-linearities in (REF ), such as exponential and inhomogeneous non-linearities (which are also extensively studied in the literature).", "- In Section we discuss another application of Theorem REF , namely Strichartz estimates for the fractional heat semigroup.", "Our approach there relies on a standard technique (i.e., the $TT^{\\star }$ method and real interpolation), whereas a refined phase-space analysis of $H^\\beta $ is expected to lead to better estimates."
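, "As announced above, we spell out the elementary computation behind the definition of $p^\\beta _c$ : for $\\lambda >0$ , $\\Vert u_{0, \\lambda }\\Vert _{L^p}^{p} = \\lambda ^{\\frac{2\\beta p}{\\gamma -1}} \\int _{\\mathbb {R}^d} |u_0(\\lambda x)|^{p}\\, dx = \\lambda ^{\\frac{2\\beta p}{\\gamma -1}-d} \\Vert u_0\\Vert _{L^p}^{p},$ which is independent of $\\lambda $ precisely when $\\frac{2\\beta p}{\\gamma -1}=d$ , that is, when $p=p^\\beta _c$ ."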
], [ "Preliminary results", "Notation.", "The symbol $X \\lesssim Y$ means that the underlying inequality holds with a suitable positive constant factor: $ X \\lesssim Y \\quad \\Longrightarrow \\quad \\exists \\, C>0\\,:\\,X \\le C Y.", "$" ], [ "On the fractional harmonic oscillator $H^{\\beta }$", "Let us briefly review some facts concerning the spectral decomposition of the Hermite operator $H=-\\Delta + |x|^2$ on $\\mathbb {R}^d$ .", "Let $\\Phi _{\\alpha }(x)$ , $\\alpha \\in \\mathbb {N}^d$ , be the normalized $d$ -dimensional Hermite functions, that is $ \\Phi _\\alpha (x) = \\Pi _{j=1}^d h_{\\alpha _j}(x_j), \\quad h_k(x) = (\\sqrt{\\pi }2^k k!", ")^{-1/2} (-1)^k e^{\\frac{1}{2}x^2} \\frac{d^k}{dx^k} e^{-x^2}.$ The Hermite functions $ \\Phi _\\alpha $ are eigenfunctions of $H$ with eigenvalues $(2|\\alpha | + d)$ , where $|\\alpha |= \\alpha _{1}+ ...+ \\alpha _d$ .", "Moreover, they form an orthonormal basis of $ L^2(\\mathbb {R}^d)$ .", "The spectral decomposition of $ H $ is thus given by $ H = \\sum _{k=0}^\\infty (2k+d) P_k, \\qquad P_kf = \\sum _{|\\alpha |=k} \\langle f,\\Phi _\\alpha \\rangle \\Phi _\\alpha , $ where $\\langle \\cdot , \\cdot \\rangle $ is the inner product in $L^2(\\mathbb {R}^d)$ .", "In general, given a bounded function $m \\colon \\mathbb {N}\\rightarrow \\mathbb {C}$ , the spectral theorem allows us to define the operator $m(H)$ such that $ m(H)f= \\sum _{\\alpha \\in \\mathbb {N}^d} m(2|\\alpha | +d) \\langle f, \\Phi _{\\alpha } \\rangle \\Phi _{\\alpha } = \\sum _{k=0}^\\infty m(2k+d)P_kf, \\quad f \\in L^2(\\mathbb {R}^d).$ In view of the Plancherel theorem for the Hermite expansions, $m(H)$ is bounded on $L^{2}(\\mathbb {R}^d)$ .", "We refer to [27] for further details, in particular for Hörmander multiplier-type results for $m(H)$ on $L^p(\\mathbb {R}^d)$ ." 
], [ "Some relevant function spaces", "For the benefit of the reader we review some basic facts of time-frequency analysis – see for instance [13], [7], [1] for comprehensive treatments.", "Recall that the short-time Fourier transform of a temperate distribution $f \\in \\mathcal {S}^{\\prime }(\\mathbb {R}^d)$ with respect to a window function $0\\ne g \\in {\\mathcal {S}}(\\mathbb {R}^d)$ (Schwartz space) is defined by $V_{g}f(x,\\xi )= \\langle f,g \\rangle = \\int _{\\mathbb {R}^{d}} f(t) \\overline{g(t-x)} e^{-2\\pi i \\xi \\cdot t}dt, \\ (x, \\xi ) \\in \\mathbb {R}^{2d},$ where the brackets $\\langle \\cdot ,\\cdot \\rangle $ denote the extension to $\\mathcal {S}^{\\prime }(\\mathbb {R}^d)\\times \\mathcal {S}(\\mathbb {R}^d)$ of the $L^2$ inner product.", "Modulation spaces, introduced by Feichtinger [9], have proved to be extremely useful in a wide variety of contexts, ranging from analysis of PDEs to mathematical physics – among the most recent contributions, see e.g., [8], [18], [2], [21], [10].", "Modulation spaces are defined as follows.", "For $1 \\le p,q \\le \\infty $ we have $ M^{p,q}(\\mathbb {R}^d)= \\left\\lbrace f \\in \\mathcal {S}^{\\prime }(\\mathbb {R}^d): \\Vert f\\Vert _{M^{p,q}} \\left\\Vert \\Vert V_gf(x,\\xi )\\Vert _{L^p_x} \\right\\Vert _{L_\\xi ^q}< \\infty \\right\\rbrace .", "$ We recall from [3] some bounds for the fractional heat semigroup on modulation spaces.", "Theorem 2.1 Let $\\beta >0$ , $0< p_1,p_2,q_1,q_2\\le \\infty $ and set $\\frac{1}{\\tilde{p}}\\max \\Big \\lbrace \\frac{1}{p_2}-\\frac{1}{p_1},0\\Big \\rbrace ,\\ \\ \\frac{1}{\\tilde{q}}\\max \\Big \\lbrace \\frac{1}{q_2}-\\frac{1}{q_1},0\\Big \\rbrace ,\\quad \\sigma _\\beta \\frac{d}{2\\beta } \\Big (\\frac{1}{\\tilde{p}}+\\frac{1}{\\tilde{q}}\\Big ).$ Then $\\Vert e^{-tH^\\beta } f\\Vert _{M^{p_2,q_2}} \\le {\\left\\lbrace \\begin{array}{ll}C e^{-td^\\beta } \\Vert f\\Vert _{M^{p_1,q_1}} & \\text{if} \\quad t\\ge 1\\\\C t^{-\\sigma _\\beta } \\Vert f\\Vert _{M^{p_1,q_1}} & \\text{if} \\quad 0<t\\le 1,\\end{array}\\right.", "}$ where $C>0$ is a universal constant.", "We briefly recall some properties of the Shubin classes $\\Gamma ^s$ , which play a central role as symbol classes in the theory of pseudodifferential operators – we refer to [20] for additional details.", "For $s\\in \\mathbb {R}$ we define $\\Gamma ^s$ as the space of functions $a\\in C^\\infty (\\mathbb {R}^{2d})$ satisfying the following condition: for every $\\tilde{\\alpha } \\in \\mathbb {N}^{2d}$ there exists $C_{\\tilde{\\alpha }}>0$ such that $|\\partial ^{\\tilde{\\alpha }} a(z)|\\le C_{\\tilde{\\alpha }} (1+|z|)^{s-|\\tilde{\\alpha }|},\\qquad z\\in \\mathbb {R}^{2d},$ This space becomes a Fréchet space endowed with the obvious seminorms.", "It is important for our purposes to recall that the fractional Hermite propagator is a pseudodifferential operator with symbol in a suitable Shubin class, as proved in [3].", "Proposition 2.1 Let $\\beta >0$ .", "The fractional Hermite operator $H^{\\beta } = (-\\Delta +|x|^2)^{\\beta }$ is a pseudodifferential operator with Weyl symbol $a_{\\beta } \\in \\Gamma ^{2\\beta }$ .", "More precisely, we have $a_\\beta (x,\\xi ) = (|x|^2+|\\xi |^2)^\\beta + r(x,\\xi ), \\quad |x|+|\\xi |\\ge 1,$ where $r \\in \\Gamma ^{2\\beta -2}$ .", "We also recall some facts concerning the so-called Shubin-Sobolev (also known as Hermite-Sobolev) spaces $Q^s$ , $s\\in \\mathbb {R}$ – see [23], [12] for further details.", "In particular, $Q^s$ is the space of $f\\in \\mathcal {S}^{\\prime 
}(\\mathbb {R}^d)$ such that $\\Vert f\\Vert ^2_{Q^s}=\\Vert H^{s/2}f\\Vert ^2_{L^2}=\\sum _{k=0}^{\\infty } \\Vert P_k f\\Vert ^2_{L^2}(2k+d)^{s}<\\infty .$ In view of the characterisation $Q^s=M^{2,2}_{v_s}$ (see for instance [7]), Hölder's inequality and the inclusion relations of Shubin-Sobolev spaces (see e.g., [7]), it is well known that, for every $1\\le p,q\\le \\infty $ , if $s$ is large enough, $Q^s\\hookrightarrow M^{p,q}\\hookrightarrow M^\\infty \\hookrightarrow Q^{-s}.$" ], [ "Proof of Part (", "It is known that $L^p \\hookrightarrow M^{p,\\infty } \\text{ and } M^{q,1} \\hookrightarrow L^q \\text{ for } 1\\le p,q \\le \\infty ,$ see e.g., [7], [25].", "In light of these embeddings and Theorem REF , for $t>1$ we obtain the desired estimate $ \\Vert e^{-tH^\\beta } f\\Vert _{L^q}\\lesssim e^{-td^\\beta }\\Vert f\\Vert _{L^p}, \\quad \\forall p, q \\in [1, \\infty ].$", "Let us consider now the case where $0<t \\le 1$ .", "In view of Proposition REF we think of $H^{\\beta }$ as a pseudodifferential operator with Weyl symbol $a_{\\beta } \\in \\Gamma ^{2\\beta }$ , where $a_\\beta (x,\\xi ) = (|x|^2+|\\xi |^2)^\\beta + r(x,\\xi ), \\quad |x|+|\\xi |\\ge 1,$ for a suitable $r \\in \\Gamma ^{2\\beta -2}$ .", "We may further rewrite $a_\\beta (x,\\xi ) = a(x, \\xi ) + r^{\\prime }(x,\\xi ), \\quad x, \\xi \\in \\mathbb {R}^d,$ for some $r^{\\prime } \\in \\Gamma ^{2\\beta -2}$ , where $a\\in \\Gamma ^{2\\beta }$ satisfies $ a(x,\\xi ) \\ge (1+|x|+|\\xi |)^{2\\beta }, \\quad x, \\xi \\in \\mathbb {R}^d.$ Note that the same conclusion holds for the Kohn-Nirenberg symbol of $H^{\\beta }$ (see [20]).", "Therefore, we assume hereinafter that the above functions $a(x,\\xi ),~r^{\\prime }(x,\\xi )$ denote the Kohn-Nirenberg symbols of the corresponding operators.", "It follows from [20] that the heat semigroup $e^{-tH^\\beta }$ has a Kohn-Nirenberg symbol with the following structure (the cited result is stated for the Weyl quantization, but again one can easily check that the same conclusion holds for the Kohn-Nirenberg quantization): $b_t(x,\\xi )=e^{-t a(x,\\xi )}+e^{-t a(x,\\xi )} \\sum _{j=1}^{J-1} \\sum _{l=1}^{2j} t^l u_{l,j}(x,\\xi ) + r_t^{\\prime \\prime }(x,\\xi ),$ where $J\\ge 1$ is arbitrarily chosen, $u_{l,j} \\in \\Gamma ^{2\\beta l-2j}$ and $r_t^{\\prime \\prime }$ satisfies $\\big |\\partial _x^{\\alpha } \\partial _{\\xi }^{\\gamma }r_t^{\\prime \\prime }(x,\\xi )\\big | \\le C_{\\alpha , \\gamma } (1+|x|+|\\xi |)^{-2J-|\\alpha |-|\\gamma |}$ for every $\\alpha , \\gamma \\in \\mathbb {N}^{d}$ , with a constant $C_{\\alpha , \\gamma }$ independent of $t \\in (0,1)$ .", "Since $r_t^{\\prime \\prime }(x, D) \\colon Q^{-J} \\rightarrow Q^J$ for $J$ large enough, in view of the embeddings above we have $\\Vert r_t^{\\prime \\prime }(x, D) f\\Vert _{L^q} \\le C \\Vert f\\Vert _{L^p}.$", "Let us focus now on the remaining part of the symbol, $C_t(x,\\xi )=e^{-t a(x,\\xi )}+e^{-t a(x,\\xi )} \\sum _{j=1}^{J-1} \\sum _{l=1}^{2j} t^l u_{l,j}(x,\\xi ),$ so that $b_t=C_t+r_t^{\\prime \\prime }$ .", "By virtue of the Leibniz rule, the chain rule and (REF ), one can verify the estimates $ \\big |\\partial _x^{\\alpha } \\partial _{\\xi }^{\\gamma }[e^{\\frac{t}{4} \\langle x \\rangle ^{2\\beta }} C_t(x,\\xi )]\\big | \\le C_{\\alpha , \\gamma } (1+|\\xi |)^{-|\\gamma |},$ where $\\langle \\cdot \\rangle = (1+ |\\cdot |^2)^{1/2}$ .", "In fact, it suffices to observe that $\\partial _x^{\\alpha }e^{\\frac{t}{4} \\langle x\\rangle ^{2\\beta }}$ is a finite linear combination of terms of the type $e^{\\frac{t}{4} \\langle x\\rangle ^{2\\beta }}~~\\partial ^{\\alpha _1} [t \\langle x\\rangle 
^{2\\beta }] \\cdots \\partial ^{\\alpha _k} [t \\langle x\\rangle ^{2\\beta }],$ with $|\\alpha _1| +\\cdots +|\\alpha _k|=|\\alpha |$ , so that $\\big |\\partial _x^{\\alpha }e^{\\frac{t}{4} \\langle x\\rangle ^{2\\beta }}\\big | \\lesssim e^{\\frac{t}{2}\\langle x\\rangle ^{2\\beta }} ~\\langle x\\rangle ^{-|\\alpha |}.$ Similarly, since $a \\in \\Gamma ^{2\\beta }$ satisfies (REF ), we have $\\big |\\partial _x^{\\alpha } \\partial _{\\xi }^{\\gamma }a(x,\\xi )\\big | \\lesssim a(x,\\xi ) \\, (1+|x|+|\\xi |)^{-|\\alpha |-|\\gamma |},$ so that, arguing as above, $\\big |\\partial _x^{\\alpha } \\partial _{\\xi }^{\\gamma }e^{-t a(x,\\xi )}\\big | \\lesssim e^{-\\frac{t}{2} a(x,\\xi )} \\, (1+|x|+|\\xi |)^{-|\\alpha |-|\\gamma |},$ hence we infer $\\big |\\partial _x^{\\alpha } \\partial _{\\xi }^{\\gamma }[t^l \\, u_{l,j}(x,\\xi )]\\big | \\lesssim t^l \\, a(x,\\xi )^l \\, (1+|x|+|\\xi |)^{-2j-|\\alpha |-|\\gamma |}.$ The claimed bound thus follows by the Leibniz rule.", "To summarize, for every $p \\in (1,\\infty )$ we have $\\Vert e^{\\frac{t}{4} \\langle x\\rangle ^{2\\beta }} C_t(x,D)f\\Vert _{L^p} \\lesssim \\Vert f\\Vert _{L^p},\\quad 0<t<1,$ by the $L^p$ boundedness of pseudodifferential operators with symbol in Hörmander's class $S_{1,0}^0$ – see for instance [24].", "For $1\\le q \\le p \\le \\infty $ we have, by Hölder's inequality, $\\Vert e^{-\\frac{t}{4} \\langle x\\rangle ^{2\\beta }} f\\Vert _{L^q} \\le C t^{-\\frac{d}{2\\beta } \\left(\\frac{1}{q}-\\frac{1}{p}\\right)} \\Vert f\\Vert _{L^p}.$ Hence we obtain, for $1\\le q \\le \\infty ,~1<p<\\infty ,~q\\le p$ , $\\Vert C_t(x, D) f\\Vert _{L^q} \\le C t^{-\\frac{d}{2\\beta } \\left(\\frac{1}{q}-\\frac{1}{p}\\right)} \\Vert f\\Vert _{L^p}, \\quad 0<t<1.$ On the other hand, we also have $ \\big |C_t(x,\\xi )\\big | \\le C e^{-\\frac{t}{2} |\\xi |^{2\\beta }},\\quad 0<t<1,$ and the integral kernel of the operator $C_t(x,D)$ given by $K(x,y)=(2 \\pi )^{-d} \\int _{\\mathbb {R}^d} e^{i(x-y) \\cdot \\xi } C_t(x,\\xi ) \\, d\\xi $ is readily seen to satisfy $\\big |K(x,y)\\big | \\le C t^{- \\frac{d}{2\\beta }}.$ This gives the desired continuity result $L^1 \\rightarrow L^{\\infty }$ , while the remaining bounds follow by interpolation with the above $L^p \\rightarrow L^q$ estimates.", "Remark 3.1 Note that some endpoint cases can be obtained in a straightforward way.", "For instance, from $L^1 \\rightarrow L^{\\infty }$ continuity we also obtain $L^1 \\rightarrow L^2$ bounds as follows: if $f\\in L^1(\\mathbb {R}^d)\\cap L^2(\\mathbb {R}^d)$ then $\\big |\\langle e^{-t H^{\\beta }}f, e^{-t H^{\\beta }}f \\rangle \\big |=\\big |\\langle e^{-2t H^{\\beta }}f, f \\rangle \\big | \\le C t^{-\\frac{d}{2\\beta }} \\Vert f\\Vert _{L^1}^2,$ so that $\\Vert e^{-t H^{\\beta }}f\\Vert _{L^2} \\le C t^{-\\frac{d}{4\\beta }} \\Vert f\\Vert _{L^1}, \\quad 0<t<1.$ By interpolation with $L^1 \\rightarrow L^{\\infty }$ one also gets the desired estimate $L^1 \\rightarrow L^q$ for $2\\le q \\le \\infty $ .", "Remark 3.2 Some endpoint cases (e.g., if $p,q \\in \\lbrace 1,\\infty \\rbrace $ ) are not covered in the results above.", "A deeper investigation of the kernel $K(x,y)$ of $C_t(x,D)$ could likely give some result in this connection (for example $L^1 \\rightarrow L^1,~~L^{\\infty } \\rightarrow L^{\\infty }$ ), but this will not be essential for the applications to the nonlinear problem in Theorem REF .", "Nevertheless, the dispersive estimate $L^1 \\rightarrow L^{\\infty }$ is covered." 
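For completeness, the interpolation step closing Remark 3.1 is the following routine Riesz–Thorin computation, recorded here since it produces exactly the rate $t^{-\sigma}$ with $\sigma = \frac{d}{2\beta}\big|\frac{1}{p}-\frac{1}{q}\big|$ for $p=1$:

```latex
% Interpolate L^1 -> L^2 (rate t^{-d/4\beta}) with L^1 -> L^\infty (rate t^{-d/2\beta}):
% for 2 <= q <= \infty pick \theta with 1/q = (1-\theta)/2, i.e. \theta = 1 - 2/q; then
\|e^{-tH^{\beta}}f\|_{L^q}
  \le \big(C t^{-\frac{d}{4\beta}}\big)^{1-\theta}\big(C t^{-\frac{d}{2\beta}}\big)^{\theta}\,\|f\|_{L^1}
  = C\, t^{-\frac{d}{4\beta}(1+\theta)}\,\|f\|_{L^1}
  = C\, t^{-\frac{d}{2\beta}\left(1-\frac{1}{q}\right)}\,\|f\|_{L^1},
  \qquad 0<t<1 .
```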
], [ "Proof of Part (", "In order to prove the second claim in Theorem REF , some preparatory work is needed.", "First, we recast $e^{-tH}$ as the Weyl transform of a function on $\\mathbb {C}^d$ , which allows us to think of $e^{-tH}$ as a pseudodifferential operator.", "Recall that the Weyl transform $W(F)$ of a function $F \\colon \\mathbb {C}^d \\rightarrow \\mathbb {C}$ is defined by $ W(F)\\phi (\\xi )=(2\\pi )^{-d} \\int _{\\mathbb {R}^d}\\int _{\\mathbb {R}^d} e^{i(\\xi -\\eta )\\cdot y} b\\left(\\frac{\\xi +\\eta }{2},y\\right) \\, \\phi (\\eta ) \\, dy d\\eta , $ for $\\phi \\in L^2(\\mathbb {R}^d)$ , where the symbol $b(\\xi , \\eta )$ is the full inverse Fourier transform of $F$ in both variables.", "In particular, the Weyl transform $W(F)$ is a pseudodifferential operator in the Weyl calculus with symbol $b$ .", "Let us highlight that the Weyl symbol of the Hermite semigroup $e^{-tH}$ is given by the function $a_t(x, \\xi ) =C_d(\\cosh t)^{-d} \\, e^{-(\\tanh t)(|x|^2+|\\xi |^2)}$ , see [28].", "Thus, $e^{-tH}f(x) = C_d (\\cosh t)^{-d}(2\\pi )^{-d} \\underset{=I}{\\underbrace{\\int _{\\mathbb {R}^d}\\int _{\\mathbb {R}^d} e^{i(x-\\eta )\\cdot y} \\, e^{-(\\tanh t)|y|^2} \\, e^{-(\\tanh t)(|\\frac{x+\\eta }{2}|^2)} \\, f(\\eta ) \\, dy d\\eta }}.$ In order to bound the above integral $I$ , we first recast the latter expression in terms of convolution.", "Recall that the Fourier transform of the Gaussian function $f(y)= e^{-\\pi a |y|^2}$ with $a>0$ is given by $\\hat{f}(x)= a^{-d/2} e^{-\\pi |x|^2/a}$ , and note that $\\frac{|x-\\eta |^2}{4} - \\frac{|x|^2}{2} - \\frac{|\\eta |^2}{2} = - \\frac{|x + \\eta |^2}{4}.", "$ As a result, we have $\\left(\\tanh t \\right) ^{d/2} I= \\int _{\\mathbb {R}^d} \\, e^{-(\\frac{1}{4 \\tanh t}-\\frac{\\tanh t}{4}) |x- \\eta |^2} e^{- \\frac{\\tanh t}{2} (|x|^2 + |\\eta |^2) } f(\\eta ) \\, d\\eta = e^{-\\frac{\\tanh t}{2} |x|^2} \\left( e^{- \\frac{1}{2\\sinh 2t} |\\cdot |^2} \\ast g\\right) (x),$ where we set $g(\\cdot )= e^{-\\frac{\\tanh t}{2}|\\cdot |^2} \\, f(\\cdot )$ .", "Note that $(\\cosh t)^{-d} \\left(\\tanh t \\right) ^{-d/2}= \\left( \\sinh (2t) \\right)^{-d/2}$ , hence $e^{-tH}f(x)= \\tilde{C}_d \\left( \\sinh (2t) \\right)^{-d/2} e^{-\\frac{\\tanh t}{2} |x|^2} \\left( e^{- \\frac{1}{2\\sinh 2t} |\\cdot |^2} \\ast g\\right) (x).$ Lemma 3.1 Let $1\\le p,q \\le \\infty $ and $t>0$ .", "Then $ \\Vert e^{-tH} f\\Vert _{L^q} \\le C (\\tanh t)^{-\\frac{d}{2} \\left|\\frac{1}{q} - \\frac{1}{p}\\right| } \\, \\Vert f\\Vert _{L^{p}},$ for some constant $C>0$ that depends only on $d$ .", "Using Mehler's formula for the Hermite functions (see e.g., [27]), the kernel $ K_t(x,y) $ of the semigroup $ e^{-tH} $ is explicitly given by $ K_t(x,y) = c_d (\\sinh 2t)^{-d/2} e^{-\\frac{1}{4} (\\coth t) |x-y|^2} e^{-\\frac{1}{4} (\\tanh t)|x+y|^2}.", "$ For $ 1 < p < q< \\infty , $ set $ \\alpha = d (1/p-1/q)$ .", "Then we have $K_t(x,y) \\\\= c_d (\\sinh 2t)^{-d/2} (\\tanh t)^{(d-\\alpha )/2} |x-y|^{\\alpha -d} ( (\\coth t)|x-y|^2)^{(d-\\alpha )/2} e^{-\\frac{1}{4} (\\coth t) |x-y|^2} e^{-\\frac{1}{4} (\\tanh t)|x+y|^2},$ from which we obtain the estimate $ K_t(x,y) \\le C (\\cosh t)^{-d} (\\tanh t)^{-\\alpha /2} |x-y|^{\\alpha -d}.", "$ Since the Riesz potential $ R_\\alpha f(x) = c_\\alpha \\int _{\\mathbb {R}^d} f(y) |x-y|^{\\alpha -d} dy$ is bounded from $ L^p $ to $ L^q $ for $1<p<q<\\infty $ , we get $ \\Vert e^{-tH} f\\Vert _{L^q} \\le C (\\cosh t)^{-d} (\\tanh t)^{-\\alpha /2} \\Vert f\\Vert _{L^p} $ for $1 < p < q< \\infty $ .", "To prove 
the remaining cases, we use the identity (REF ).", "We consider the case $ 1\\le q\\le p\\le \\infty $ first.", "Set $\\frac{1}{q} = \\frac{1}{p} + \\frac{1}{\\tilde{q}} $ and note that $ \\Vert e^{- \\frac{\\tanh t}{2} |\\cdot |^2}\\Vert _{L^{\\tilde{q}}} \\sim (\\tanh t)^{-d/ 2\\tilde{q}}= (\\tanh t)^{\\frac{d}{2} \\left(\\frac{1}{p} - \\frac{1}{q} \\right)}.$", "By (REF ) and invoking Hölder and Young's inequalities, we obtain $\\Vert e^{-tH}f\\Vert _{L^{q}} & \\lesssim (\\sinh 2t)^{-d/2} \\, \\Vert e^{- \\frac{\\tanh t}{2} |\\cdot |^2}\\Vert _{L^{\\tilde{q}}} \\, \\Vert e^{- \\frac{1}{2\\sinh 2t} |\\cdot |^2} \\ast g \\Vert _{L^{p}}\\\\& \\lesssim (\\sinh 2t)^{-d/2} \\, (\\tanh t)^{\\frac{d}{2} \\left(\\frac{1}{p} - \\frac{1}{q} \\right)} \\, \\Vert e^{- \\frac{1}{2\\sinh 2t} |\\cdot |^2}\\Vert _{L^1} \\Vert g \\Vert _{L^p}\\\\& \\lesssim (\\tanh t)^{-\\frac{d}{2} \\left( \\frac{1}{q} - \\frac{1}{p} \\right)} \\Vert f \\Vert _{L^p}.$ Consider finally the case $p=1$ : let $1\\le q\\le \\infty $ and note that $ \\Vert e^{- \\frac{1}{2\\sinh 2t} |\\cdot |^2}\\Vert _{L^{q}} \\approx (\\sinh (2t))^{d/ 2q}.$ By (REF ) and Young's inequality, we have $\\Vert e^{-tH}f\\Vert _{L^{q}} & \\lesssim (\\sinh 2t)^{-d/2} \\Vert e^{- \\frac{1}{2\\sinh 2t} |\\cdot |^2} \\ast g \\Vert _{L^{q}}\\\\& \\lesssim (\\sinh 2t)^{-d/2} \\Vert e^{- \\frac{1}{2\\sinh 2t} |\\cdot |^2}\\Vert _{L^{q}} \\Vert g \\Vert _{L^1}\\\\& \\lesssim (\\sinh 2t)^{-\\frac{d}{2} \\left( 1- \\frac{1}{q} \\right)} \\Vert f \\Vert _{L^1}\\\\& \\lesssim (\\cosh t)^{-d \\left( 1- \\frac{1}{q} \\right)} \\, (\\tanh t)^{-\\frac{d}{2} \\left( 1- \\frac{1}{q} \\right)} \\Vert f \\Vert _{L^1}.$ This completes the proof.", "Note that Lemma REF essentially gives the desired fixed-time estimate of Theorem REF (REF ) for $\\beta =1$ – see also Remark REF below.", "In order to deal with the case $0<\\beta <1$ , Bochner’s subordination formula and the properties of the probability density function (see (REF )) will play a crucial role.", "To be precise, Bochner’s subordination formula allows us to express the heat semigroup $e^{-t\\sqrt{H}}$ in terms of solutions of the heat equation: $e^{-t\\sqrt{H}}f(x)=\\pi ^{-1/2}\\int _0^{\\infty } e^{-y} \\, e^{-\\frac{t^2}{4y} H} f(x) \\, y^{-1/2} \\, dy,$ which ultimately follows from the identity $ e^{-a}=\\pi ^{-1/2}\\int _0^{\\infty } e^{-y} \\, e^{-\\frac{a^2}{4y}} \\, y^{-1/2} \\, dy \\quad (a>0).$ The Macdonald function $K_\\nu (z)$ is defined, for $z > 0$ , by $ K_{\\nu }(z)=2^{-\\nu -1} \\, z^{\\nu } \\, \\int _0^{\\infty } e^{-y-\\frac{z^2}{4y}} \\, y^{-\\nu -1} \\, dy.$ A straightforward change of variables shows that $ z^{\\nu } K_{\\nu }(z)=2^{\\nu -1} \\, \\int _0^{\\infty } e^{-y-\\frac{z^2}{4y}} \\, y^{\\nu -1} \\, dy=z^{\\nu } K_{-\\nu }(z).$", "Then $z^{\\nu }K_\\nu (z)$ converges to $2^{\\nu -1} \\Gamma (\\nu )$ as $z \\rightarrow 0$ .", "Moreover, it is known that $K_\\nu (z)$ has exponential decay at infinity (see [16]).", "Consider now the Gaussian kernel of the form $ g_t(x)=(4\\pi t)^{-d/2} \\, e^{-\\frac{|x|^2}{4t}},~t>0,~x\\in \\mathbb {R}^d.$ We set $p_t(x,y)=p_t(x-y)$ , where $ p_t(x)=\\int _0^{\\infty } g_s(x) \\, \\eta _t(s) \\, ds,$ $g_s$ is the Gaussian kernel defined above, and $\\eta _t\\ge 0$ is the density function of the distribution of the $\\beta $ -stable subordinator at time $t$ , see e.g., [4], [5].", "Therefore, $\\eta _t(s)=0$ for $s\\le 0$ and, for $0<\\beta <1$ , we have $\\int _0^{\\infty } e^{-us} \\, \\eta _t(s) \\, ds=e^{-tu^{\\beta }}, \\quad u\\ge 0.$ The fractional heat semigroup $e^{-tH^{\\beta 
}}$ is thus given in terms of solutions of the heat equation: $e^{-tH^{\\beta }}f(x)=\\int _0^{\\infty } e^{-sH} f(x) \\, \\eta _t(s) \\, ds.$", "We are now ready to complete the proof of Theorem REF .", "The case $t>1$ follows from the proof of Part (REF ) of Theorem REF , as it holds for all $p,q \\in [1, \\infty ]$ .", "We then assume $0<t\\le 1$ from now on.", "In view of the identity (REF ) and Lemma REF for the case $ \\beta = 1 $ , we obtain $ \\Vert e^{-tH^\\beta }f \\Vert _{L^q} \\le C \\left[\\int _0^\\infty (\\tanh s)^{-\\alpha /2} \\eta _t(s) ds\\right] \\, \\Vert f\\Vert _{L^p},$ where we set $ \\alpha = d| 1/p-1/q|$ .", "Splitting the integral above into two parts, the integral taken over $ [1,\\infty )$ is bounded, up to a constant (since $\\tanh s\\ge \\tanh 1$ there), by $ \\int _0^\\infty \\eta _t(s) ds = 1.$", "The remaining integral, over $(0,1]$ , is bounded, up to a constant (since $\\tanh s\\gtrsim s$ there), by $ \\int _0^\\infty s^{-\\alpha /2} \\eta _t(s) ds = \\frac{1}{\\Gamma (\\alpha /2)} \\int _0^\\infty \\Big ( \\int _0^\\infty e^{-us} u^{\\alpha /2-1} du \\Big ) \\eta _t(s) ds.$", "Changing the order of integration, and using (REF ), for a suitable constant $C>0$ we obtain $ \\int _0^\\infty s^{-\\alpha /2} \\eta _t(s) ds \\le C \\int _0^\\infty u^{\\alpha /2-1} e^{-t u^\\beta } du.$", "Finally, the change of variables $ v = u^\\beta $ gives the estimate $ \\int _0^\\infty u^{\\alpha /2-1} e^{-t u^\\beta } du \\le C \\int _0^\\infty v^{(\\alpha /2\\beta ) -1} e^{-tv} dv = C_{\\alpha , \\beta } t^{-(\\alpha /2\\beta )}.$ This completes the proof for the case $0<t\\le 1$ .", "Remark 3.3 We would also like to have a representation in the vein of (REF ) for the fractional heat propagator $e^{-tH^\\beta }$ with $\\beta >1$ in terms of the Weyl transform.", "On the other hand, we have a convolution formula for the classical fractional heat propagator $e^{-t(-\\Delta )^\\beta }$ .", "Regretfully, we do not know how to obtain fixed-time estimates for $\\beta > 1$ via the Weyl transform at present.", "Remark 3.4 Using the fact that $ e^{-tH} $ commutes with the Fourier transform, i.e., $ \\widehat{e^{-tH}f} = e^{-tH}\\hat{f},$ one obtains $\\Vert e^{-tH}f\\Vert _{\\mathcal {F}L^q} \\le C (\\tanh t)^{-\\frac{d}{2} \\left|\\frac{1}{q} - \\frac{1}{p}\\right| } \\, \\Vert f\\Vert _{\\mathcal {F}L^p},$ where the Fourier-Lebesgue space $\\mathcal {F}L^p(\\mathbb {R}^d)$ is defined by $ \\mathcal {F}L^p(\\mathbb {R}^d)= \\left\\lbrace f\\in \\mathcal {S}^{\\prime }(\\mathbb {R}^d): \\Vert f\\Vert _{\\mathcal {F}L^{p}}=\\Vert \\hat{f}\\Vert _{L^{p}}< \\infty \\right\\rbrace .$" ], [ "Part (", "Fix $M_1 \\ge \\Vert u_0\\Vert _{L^p}.$ The proof strategy is quite standard.", "Let $T>0$ and set $Y_T=L^{\\infty }\\left( (0,T), L^p(\\mathbb {R}^d)\\right) \\cap L^{\\infty }\\left((0,T), L^{p \\gamma }(\\mathbb {R}^d)\\right),$ endowed with the norm $\\Vert u\\Vert _{Y_T}=\\max \\left\\lbrace \\sup _{0<t<T} \\Vert u(t)\\Vert _{L^p}, \\sup _{0<t<T}t^{\\frac{d( \\gamma -1)}{2p \\gamma \\, \\beta }} \\Vert u(t)\\Vert _{L^{p\\gamma }} \\right\\rbrace .$", "Moreover, consider $B_{M+1}=\\lbrace u \\in Y_{T}:\\Vert u\\Vert _{Y_T}\\le M+1\\rbrace $ , where $M>0$ is chosen in such a way that $\\Vert e^{-tH^\\beta }u_0\\Vert _{Y_T} \\le C M_1 \\le M$ .", "Note that $M$ depends only on $\\Vert u_0\\Vert _{L^p}$ – in particular, it is independent of $t$ .", "Consider the mapping $\\Phi \\colon B_{M +1} \\rightarrow Y_T$ defined by $\\Phi [u](t)=e^{-tH^\\beta }u_0+\\int _0^t e^{-(t-\\tau )H^\\beta }\\left(|u(\\tau )|^{\\gamma -1} \\, u(\\tau ) \\right) \\, d\\tau .$ We shall show that in fact $\\Phi $ is a mapping from $B_{M +1}$ into
$B_{M +1}$ .", "Indeed, consider $u \\in B_{M +1}$ .", "By Theorem REF , for $q \\in \\lbrace p, p\\gamma \\rbrace $ , we have $\\left\\Vert \\int _0^t e^{-(t-\\tau )H^\\beta }\\left(|u(\\tau )|^{\\gamma -1} \\, u(\\tau ) \\right) \\, d\\tau \\right\\Vert _{L^q}& \\le C \\int _0^t (t-\\tau )^{-\\frac{d}{2\\beta }[\\frac{1}{p}-\\frac{1}{q}]} \\, \\Vert u(\\tau )\\Vert _{L^{p\\gamma }}^{\\gamma } d\\tau \\\\& \\le C (M+1)^{\\gamma }\\int _0^t (t-\\tau )^{-\\frac{d}{2\\beta }[\\frac{1}{p}-\\frac{1}{q}]} \\, \\tau ^{-\\frac{d(\\gamma -1)}{2p\\beta }} d\\tau \\\\&= \\begin{multlined}[t] C (M+1)^{\\gamma } \\, t^{1-\\frac{d}{2\\beta }[\\frac{1}{p}-\\frac{1}{q}]-\\frac{d(\\gamma -1)}{2p\\beta }} \\\\ \\times \\int _0^1 (1-\\tau )^{-\\frac{d}{2\\beta }[\\frac{1}{p}-\\frac{1}{q}]} \\, \\tau ^{-\\frac{d(\\gamma -1)}{2p\\beta }} d\\tau .\\end{multlined}$ Since $q=p$ or $q = p\\gamma ,~\\gamma >1$ and $p>p_c^\\beta ,$ we have $ \\int _0^1 (1-\\tau )^{-\\frac{d}{2\\beta }[\\frac{1}{p}-\\frac{1}{q}]} \\, \\tau ^{-\\frac{d(\\gamma -1)}{2p\\beta }} d\\tau <\\infty .$ Therefore, we infer $ t^{\\frac{d}{2\\beta }[\\frac{1}{p}-\\frac{1}{q}]}\\left\\Vert \\int _0^t e^{-(t-\\tau )H^\\beta }\\left(|u(\\tau )|^{\\gamma -1} \\, u(\\tau ) \\right) \\, d\\tau \\right\\Vert _{L^q}\\le C (M+1)^{\\gamma } \\, T^{1-\\frac{d(\\gamma -1)}{2p\\beta }}.$ If we take $q =p$ or $q = p\\gamma $ in (REF ), then $\\left\\Vert \\int _0^t e^{-(t-\\tau )H^\\beta }\\left(|u(\\tau )|^{\\gamma -1} \\, u(\\tau ) \\right) \\, d\\tau \\right\\Vert _{L^p}\\le C_1 (M+1)^{\\gamma } \\, T^{1-\\frac{d(\\gamma -1)}{2p\\beta }}$ or $t^{\\frac{d(\\gamma -1)}{2p\\gamma \\beta }} \\, \\left\\Vert \\int _0^t e^{-(t-\\tau )H^\\beta }\\left(|u(\\tau )|^{\\gamma -1} \\, u(\\tau ) \\right) \\, d\\tau \\right\\Vert _{L^{p\\gamma }}\\le C_2 (M+1)^{\\gamma } \\, T^{1-\\frac{d(\\gamma -1)}{2p\\beta }}.$ As a result, we conclude that $\\Vert \\Phi [u]\\Vert _{Y_T}\\le M+\\max \\lbrace C_1,C_2\\rbrace \\, (M+1)^{\\gamma } \\, T^{1-\\frac{d({\\gamma }-1)}{2p\\beta }}.", "$ Moreover, for a sufficiently small $T > 0$ , we have $\\max \\lbrace C_1,C_2\\rbrace \\, (M+1)^{\\gamma } \\, T^{1-\\frac{d(\\gamma -1)}{2p\\beta }} \\le 1.$ This shows that $\\Phi $ is a mapping from $B_{M +1}$ into $B_{M +1}$ , as claimed.", "We now prove that $\\Phi \\colon B_{M +1} \\rightarrow Y_T$ is a contraction mapping.", "Recall that $\\left| |u|^{\\gamma -1}u- |v|^{\\gamma -1}v\\right| \\lesssim _{\\gamma } \\left( |u|^{\\gamma -1} + |v|^{\\gamma -1} \\right) |u-v|.$ By (REF ) and Hölder inequality, we have $ \\Vert |u|^{\\gamma -1}u-|v|^{\\gamma -1}v\\Vert _{L^p}\\le \\gamma \\left(\\Vert u\\Vert _{L^{p\\gamma }}^{\\gamma -1}+\\Vert v\\Vert _{L^{p\\gamma }}^{\\gamma -1}\\right) \\, \\, \\Vert u-v\\Vert _{L^{p\\gamma }}.$ In light of the previous computation, for $u, v \\in B_{M +1}$ and $q\\in \\lbrace p, p\\gamma \\rbrace $ we thus have $ \\Vert \\Phi [u](t)-\\Phi [v](t)\\Vert _{L^q}\\ & \\le \\gamma \\int _0^t (t-\\tau )^{-\\frac{d}{2\\beta } \\left( \\frac{1}{p} -\\frac{1}{q} \\right)} \\left(\\Vert u(\\tau )\\Vert _{L^{p\\gamma }}^{\\gamma -1}+\\Vert v (\\tau )\\Vert _{L^{p\\gamma }}^{\\gamma -1}\\right) \\, \\, \\Vert u(\\tau )-v(\\tau )\\Vert _{L^{p\\gamma }} d\\tau \\nonumber \\\\& \\le C_3 (M+1)^{\\gamma -1} \\, t^{1-\\frac{d}{2\\beta }[\\frac{1}{p}-\\frac{1}{q}]-\\frac{d( \\gamma -1)}{2p\\beta }} \\Vert u-v\\Vert _{Y_T}$ for a constant $C_3 > 0$ .", "By taking $q = p$ or $q = p\\gamma $ in (REF ), we similarly obtain $\\Vert \\Phi [u](t)-\\Phi [v](t)\\Vert _{Y_T}\\le 
C_4 (M+1)^{\\gamma -1} \\, T^{1-\\frac{d(\\gamma -1)}{2p\\beta }} \\Vert u-v\\Vert _{Y_T}$ for a constant $C_4 > 0$ .", "Since $1-\\frac{d(\\gamma -1)}{2p\\beta } > 0$ , for a sufficiently small $T > 0$ we have $C_4 (M+1)^{\\gamma -1} \\, T^{1-\\frac{d(\\gamma -1)}{2p\\beta }} \\le \\frac{1}{2}.$ We have thus proved that the mapping $\\Phi $ is a contraction for sufficiently small $T$ .", "By the Banach fixed point theorem, there exists a unique fixed point $u$ of the mapping $\\Phi $ in $B_{M +1}$ and, in light of Duhamel's principle, the latter is a solution of (REF ).", "Remark 4.1 We should also mention that the result of Part (REF ) can be alternatively derived from the abstract theorem of Weissler [30].", "To this end, we define $K_t(u)= e^{-tH^{\\beta }} (|u|^{\\gamma -1}u)$ .", "Then for $t>0,$ $K_t:L^p(\\mathbb {R}^d) \\rightarrow L^p(\\mathbb {R}^d)$ is locally Lipschitz and $\\Vert K_t(u)-K_t(v)\\Vert _{L^p} & \\lesssim t^{-\\frac{d}{2\\beta } \\left( \\frac{\\gamma }{p} - \\frac{1}{p}\\right)} \\Vert |u|^{\\gamma -1}u - |v|^{\\gamma -1}v\\Vert _{L^{\\frac{p}{\\gamma }}}\\\\& \\lesssim t^{-\\frac{d}{2\\beta } \\left( \\frac{\\gamma }{p} - \\frac{1}{p}\\right)}\\left( \\Vert u\\Vert _{L^p}^{\\gamma -1} + \\Vert v\\Vert _{L^p}^{\\gamma -1} \\right) \\Vert u-v\\Vert _{L^p}\\\\& \\lesssim t^{-\\frac{d}{2\\beta } \\left( \\frac{\\gamma }{p} - \\frac{1}{p}\\right)} M^{\\gamma -1} \\Vert u-v\\Vert _{L^p},$ for $\\Vert u\\Vert _{L^p}\\le M$ and $\\Vert v\\Vert _{L^p} \\le M.$ Since $p> \\frac{d(\\gamma -1)}{2\\beta }$ , we have $t^{-\\frac{d}{2\\beta } \\left( \\frac{\\gamma }{p} - \\frac{1}{p}\\right)} \\in L^1_{\\mathrm {loc}}(0, \\infty ).$ Note that $t\\mapsto \\Vert K_t(0)\\Vert _{L^p}$ vanishes identically, hence belongs to $L^1_{\\mathrm {loc}} (0, \\infty )$ , and $e^{-sH^{\\beta }}K_{t}= K_{t+s}$ for $t,s >0$ .", "Then (REF ) follows by [30]."
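A one-line check, recorded here since it is the crux of the smallness argument above (and using the critical exponent $p_c^{\beta} = d(\gamma-1)/(2\beta)$, cf. Remark 4.1):

```latex
% Why the contraction exponent is positive:
1-\frac{d(\gamma -1)}{2p\beta }>0
\;\iff\; p>\frac{d(\gamma -1)}{2\beta }=p_c^{\beta },
% so T^{1-\frac{d(\gamma-1)}{2p\beta}} \to 0 as T \to 0, which makes both the
% self-mapping and the contraction estimates small for short times.
```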
], [ "Part (", "Let $u_0 \\in L^p(\\mathbb {R}^d)$ be such that $T_{\\max }<\\infty $ , and let $u\\in C\\left( [0, T_{\\max }), L^p(\\mathbb {R}^d) \\right)$ be the maximal solution of (REF ).", "Fix $s\\in [0, T_{\\max })$ and let $w(t)= u(t+s), \\quad t\\in [0, T_{\\max }-s ), $ with $w(0)=u(s).$ Then, as in the proof of Part (REF ), we claim that $\\Vert u(s)\\Vert _{L^p} + K M^{\\gamma } (T_{\\max }-s)^{1- \\frac{d(\\gamma -1)}{2p\\beta }}> M, \\quad \\forall M>0,$ for some constant $K>0$ .", "If this were not the case, there would exist $M>0$ such that $ \\Vert u(s)\\Vert _{L^p}+K M^{\\gamma } (T_{\\max }-s)^{1- \\frac{d(\\gamma -1)}{2p\\beta }} \\le M, $ and $w$ would be defined on $[0, T_{\\max }-s]$ – in particular, $u(T_{\\max })$ would be well defined, a contradiction.", "Hence, (REF ) is verified, for any $t\\in [0, T_{\\max })$ fixed and for all $M>0$ .", "Set then $M= 2 \\Vert u(t)\\Vert _{L^p}$ .", "By (REF ), we infer $ \\Vert u(t)\\Vert _{L^p} + K2^{\\gamma } \\Vert u(t)\\Vert _{L^p}^{\\gamma } \\left( T_{\\max }-t \\right)^{1- \\frac{d(\\gamma -1)}{2p\\beta }}> 2 \\Vert u(t)\\Vert _{L^p}, \\quad \\forall t\\in [0, T_{\\max }).", "$ Hence, we have $ \\Vert u(t)\\Vert _{L^p} \\ge C \\left( T_{\\max }- t \\right) ^{\\frac{d}{2p\\beta }-\\frac{1}{\\gamma -1}}\\quad \\text{for all} \\ t \\in [0, T_{\\max }).", "$" ], [ "Part (", "Given $\\gamma >1$ , one can choose $r$ in such a way that $\\frac{2 \\beta }{d \\gamma (\\gamma -1)} < \\frac{1}{r} < \\frac{2 \\beta }{d(\\gamma -1)}.$ Let $r$ be fixed once for all and set $\\delta =\\frac{1}{\\gamma -1}-\\frac{d}{2r \\beta }.$ We observe that $\\delta +1-\\frac{d(\\gamma -1)}{2r\\beta }-\\delta \\gamma =0.$ Suppose that $\\rho > 0$ and $M > 0$ satisfy the inequality $\\rho +KM^{\\gamma } \\le M,$ where $K = K(\\gamma , d, r) > 0$ is a constant and can explicitly be computed.", "We claim that if $\\sup _{t>0} t^{\\delta } \\Vert e^{-tH^{\\beta }}u_0\\Vert _{L^r} \\le \\rho $ then there is a unique global solution $u$ of (REF ) such that $\\sup _{t>0} t^{\\delta } \\Vert u(t)\\Vert _{L^{r}} \\le M.$ In order to prove our claim, consider $X= \\left\\lbrace u\\colon (0, \\infty )\\rightarrow L^{r}(\\mathbb {R}^d):\\sup _{t>0} t^{\\delta } \\Vert u(t)\\Vert _{L^r}< \\infty \\right\\rbrace ,$ $X_{M}= \\left\\lbrace u\\in X: \\sup _{t>0} t^{\\delta } \\Vert u(t)\\Vert _{L^r} \\le M \\right\\rbrace ,\\quad d(u,v)= \\sup _{t>0} t^{\\delta } \\Vert u(t) -v(t)\\Vert _{L^r}.$ It is easy to realize that $(X_M,d)$ is a complete metric space.", "Consider now the mapping $\\mathcal {J}_{u_0}(u)(t)= e^{-tH^{\\beta }} u_0+\\int _0^t e^{- (t-s) H^{\\beta }} (|u(s)|^{\\gamma -1} u(s)) ds.$ Let $u_0$ and $v_0$ satisfy (REF ) and choose $u,v \\in X_M$ .", "Clearly, we have $ t^{\\delta } \\Vert \\mathcal {J}_{u_0}u(t)- \\mathcal {J}_{v_0}v(t)\\Vert _{L^r} \\le t^{\\delta } \\Vert e^{-tH^{\\beta }}(u_0-v_0)\\Vert _{L^r} \\\\ + t^{\\delta }\\int _0^t\\Vert e^{- (t-s) H^{\\beta }} (|u(s)|^{\\gamma -1} u(s)-|v(s)|^{\\gamma -1} v(s))\\Vert _{L^r} ds.", "$ Using Theorem REF with exponents $(p, q)=(r/\\gamma , r),$ (REF ) and Hölder's inequality, we obtain $\\Vert e^{- (t-s) H^{\\beta }} (|u(s)|^{\\gamma -1} u(s)-&|v(s)|^{\\gamma -1} v(s))\\Vert _{L^r} \\\\& \\lesssim (t-s)^{-\\frac{d(\\gamma -1)}{2r\\beta }} \\Vert |u(s)|^{\\gamma -1} u(s)-|v(s)|^{\\gamma -1} v(s)\\Vert _{L^{\\frac{r}{\\gamma }}}\\\\& \\lesssim (t-s)^{-\\frac{d(\\gamma -1)}{2r\\beta }} \\gamma \\left( \\Vert u(s)\\Vert _{L^r}^{\\gamma -1} + \\Vert v(s)\\Vert _{L^r}^{\\gamma -1} \\right) 
\\Vert u(s)-v(s)\\Vert _{L^r}\\\\& \\lesssim (t-s)^{-\\frac{d(\\gamma -1)}{2r\\beta }} \\gamma s^{-\\delta \\gamma } M^{\\gamma -1} d(u,v).$ Using this inequality, we get $ t^{\\delta } \\Vert \\mathcal {J}_{u_0}u(t)- \\mathcal {J}_{v_0}v(t)\\Vert _{L^r} & \\le & t^{\\delta } \\Vert e^{-tH^{\\beta }}(u_0-v_0)\\Vert _{L^r} + t^{\\delta } \\gamma M^{\\gamma -1} d(u,v )\\int _0^t (t-s)^{-\\frac{d(\\gamma -1)}{2r\\beta }} s^{-\\delta \\gamma } ds\\nonumber \\\\&\\le & t^{\\delta } \\Vert e^{-tH^{\\beta }}(u_0-v_0)\\Vert _{L^r} +K \\, M^{\\gamma -1} d(u,v ),$ where $K=t^{\\delta } \\gamma \\, \\int _0^t (t-s)^{-\\frac{d(\\gamma -1)}{2r\\beta }} s^{-\\delta \\gamma } ds$ is a finite positive constant independent of $t$ .", "Indeed, since $\\delta \\gamma <1,~~\\frac{d(\\gamma -1)}{2r\\beta }< 1,$ we see that $ \\int _0^t (t-s)^{-\\frac{d(\\gamma -1)}{2r\\beta }} s^{-\\delta \\gamma } ds=t^{1-\\frac{d(\\gamma -1)}{2r\\beta }-\\delta \\gamma }\\int _0^1 (1-s)^{-\\frac{d(\\gamma -1)}{2r\\beta }} s^{-\\delta \\gamma } ds<\\infty ;$ moreover, $1-\\frac{d(\\gamma -1)}{2r\\beta }-\\delta \\gamma =-\\delta $ by (REF ), so the factor $t^{\\delta }$ in the definition of $K$ is exactly cancelled and $K$ does not depend on $t$ .", "Setting $v_0=0$ and $v=0$ in (REF ) we have $t^{\\delta } \\Vert \\mathcal {J}_{u_0}u(t)\\Vert _{L^r}\\le \\rho +KM^{\\gamma }\\le M.$ That is, $\\mathcal {J}_{u_0}$ maps $X_M$ into itself.", "Letting $u_0=v_0$ in (REF ), we note that $d (\\mathcal {J}_{u_0}u(t), \\mathcal {J}_{u_0}v(t)) \\le K \\, M^{\\gamma -1} d(u,v ).$", "Since $KM^{\\gamma -1} < 1$ (as $\\rho +KM^{\\gamma }\\le M$ with $\\rho >0$ ), $\\mathcal {J}_{u_0}$ is a strict contraction on $X_M$ .", "Therefore, $\\mathcal {J}_{u_0}$ has a unique fixed point $u$ in $X_M$ , which is a solution of (REF ).", "Finally, using Theorem REF with exponents $(p,q)=\\left(\\frac{d(\\gamma -1)}{2\\beta }, r \\right)$ , we see that if $\\Vert u_0\\Vert _{L^{p_c^{\\beta }}}$ is sufficiently small then (REF ) is satisfied." ], [ "Concluding remarks", "In this concluding section we illustrate another application of Theorem REF , namely a set of Strichartz estimates for the fractional heat propagator.", "We emphasize that Strichartz estimates are indispensable tools for a thorough study of the well-posedness theory for nonlinear equations – see e.g., [26], [29].", "Since the proof is based on standard machinery, via the $TT^{\\star }$ method and real interpolation (see for instance [19] and [33] and the references therein), we omit the details.", "We say that $(q, p,r)$ is an admissible triplet of indices if $1\\le r\\le p\\le \\infty , \\beta >0$ and $\\frac{1}{q} = \\frac{d}{2\\beta }\\left( \\frac{1}{r}-\\frac{1}{p}\\right).$ Theorem 5.1 Let $I=[0,T)$ with $0<T\\le \\infty $ and $\\beta >0.$ Let $(q, p,r)$ be any admissible triplet and consider $ f \\in L^r(\\mathbb {R}^d)$ .", "Then $e^{-tH^{\\beta }}f \\in L^q(I, L^p(\\mathbb {R}^d)) \\cap C_b(I, L^r(\\mathbb {R}^d))$ and there exists a constant $C>0$ such that $\\Vert e^{-tH^{\\beta }} f\\Vert _{L^q(I, L^p)} \\le C \\Vert f\\Vert _{L^r}.$ Let $p_1^{\\prime },p \\in (1, \\infty ),$ or $(p_1^{\\prime },p)=(1, \\infty ),$ or $p_1^{\\prime }=1$ and $p \\in [2, \\infty ),$ or $p_1^{\\prime }\\in (1, \\infty )$ and $p=1.$ Assume that $(q, p)$ and $(q_1, p_1)$ satisfy $ p_1^{\\prime } \\ne p, 1<q_1^{\\prime }<q< \\infty $ and $\\frac{1}{q_1^{\\prime }} + \\frac{d}{2\\beta } \\left| \\frac{1}{p_1^{\\prime }} - \\frac{1}{p} \\right| = 1+ \\frac{1}{q}.$ Then, there exists a constant $C>0$ such that $\\left\\Vert \\int _0^t e^{- (t-s) H^{\\beta }} F(s) ds \\right\\Vert _{L^q(I, L^p(\\mathbb {R}^d))} \\le C \\Vert F\\Vert _{L^{q_1^{\\prime }}(I, L^{p_1^{\\prime }} (\\mathbb {R}^d))}.$ We note that Pierfelice [22] studied Strichartz estimates for
(REF ) with $H= -\\Delta $ , while Miao-Yuan-Zhang [19] and Zhai [33] obtained Strichartz estimates for the fractional Laplacian $(-\\Delta )^{\\beta }$ .", "Remark 5.1 Taking Theorem REF into account, part (REF ) of Theorem REF can be proved in analogy with [19], and part (REF ) of Theorem REF can be proved in analogy with [33].", "The property (REF ) is weaker than the admissibility of the triplets $(q, p,2)$ and $(q_1, p_1,2)$ .", "The hypothesis (REF ) and the constraint $ p_1^{\\prime } \\ne p,~1<q_1^{\\prime }<q< \\infty $ appear as a consequence of the Hardy-Littlewood-Sobolev inequality.", "The hypothesis on $(p_1^{\\prime },p)$ is needed in order to apply Theorem REF when dealing with truncated decay estimates (see [33])." ], [ "Acknowledgments", "D.G.B. is thankful to DST-INSPIRE (DST/INSPIRE/04/2016/001507) for the research grant.", "R.M. acknowledges the support of DST-INSPIRE [DST/INSPIRE/04/2019/001914] for research grants.", "F.N. and S.I.T. are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).", "S.T. is supported by a J.C. Bose Fellowship from DST, Government of India." ] ]
2210.07691
[ [ "ENTS: An Edge-native Task Scheduling System for Collaborative Edge\n Computing" ], [ "Abstract Collaborative edge computing (CEC) is an emerging paradigm enabling sharing of the coupled data, computation, and networking resources among heterogeneous geo-distributed edge nodes.", "Recently, there has been a trend to orchestrate and schedule containerized application workloads in CEC, while Kubernetes has become the de-facto standard broadly adopted by the industry and academia.", "However, Kubernetes is not preferable for CEC because its design is not dedicated to edge computing and neglects the unique features of edge nativeness.", "More specifically, Kubernetes primarily ensures resource provision of workloads while neglecting the performance requirements of edge-native applications, such as throughput and latency.", "Furthermore, Kubernetes neglects the inner dependencies of edge-native applications and fails to consider data locality and networking resources, leading to inferior performance.", "In this work, we design and develop ENTS, the first edge-native task scheduling system, to manage the distributed edge resources and facilitate efficient task scheduling to optimize the performance of edge-native applications.", "ENTS extends Kubernetes with the unique ability to collaboratively schedule computation and networking resources by comprehensively considering job profile and resource status.", "We showcase the superior efficacy of ENTS with a case study on data streaming applications.", "We mathematically formulate a joint task allocation and flow scheduling problem that maximizes the job throughput.", "We design two novel online scheduling algorithms to optimally decide the task allocation, bandwidth allocation, and flow routing policies.", "The extensive experiments on a real-world edge video analytics application show that ENTS achieves 43\\%-220\\% higher average job throughput compared with the state-of-the-art." 
], [ "Introduction", "Recently, there has been a noticeable shift to migrate the computation-intensive workloads from the remote cloud to near-end edges [1].", "Compared with traditional cloud computing, the emerging edge computing paradigm enjoys outstanding benefits, including reduced response latency and enhanced privacy preservation [2][3].", "A large number of latency-sensitive and mission-critical applications gradually switch to the deployment at the network edge, e.g., virtual reality [4], autonomous driving [5], and personalized healthcare [6].", "Collaborative edge computing (CEC) is a popular and new edge computing paradigm enabling sharing of data, computation, and networking resources among geo-distributed and heterogeneous edge nodes, including edge servers, edge gateways, and mobile phones [7].", "CEC is promising and beneficial because it provides higher reliability and lower latency and facilitates collaboration among different stakeholders [8].", "Task scheduling is a fundamental problem of collaborative edge computing, which refers to the arrangement of the user-generated application tasks to the heterogeneous edge nodes by deciding when, where, and how to offload the tasks and how to manage and utilize the underlying computation, storage, and networking resources [2].", "Many works have investigated the task scheduling problems in collaborative edge computing [9].", "Recently, there has been a trend of scheduling containerized application workloads among the geo-distributed and heterogeneous edge infrastructure [10].", "This is because the container technology provides lightweight resource virtualization and enables fast application development and flexible service deployment over heterogeneous edge nodes.", "There are several solutions to orchestrate containerized applications, such as Swarm [11], Kubernetes [12], and Mesos [13].", "Among them, Kubernetes has established its leadership [14].", "Many works have studied optimizing the Kubernetes scheduler for the cloud environment, where cloud servers with abundant computation resources are interconnected with a high-bandwidth and stable network in a data center [15].", "However, Kubernetes is designed not dedicated to edge computing, neglects the unique features of edge nativeness, and lacks adequate support for edge-native applications [16].", "First, edge-native applications are usually performance-aware, demanding high throughput, low latency, and strict privacy.", "The Kubernetes scheduler is mainly designed to ensure resource provision of workloads, such as the capacity of requested memory and CPU cores.", "It lacks support to meet the performance requirements of edge-native applications.", "Second, edge-native applications are with inner dependencies.", "Many intelligent edge applications are resource-greedy and complex, consisting of lots of inter-dependent components which are usually deployed to multiple edge nodes considering the constraint resource of a single node.", "However, the Kubernetes scheduler fails to consider the application's inner structure.", "Third, the data, computation, and networking resources are heterogeneous and coupled with each other.", "Application deployed on heterogeneous edge nodes experiences distinct performance, and the coupled resources require joint orchestration.", "However, Kubernetes concentrates on orchestrating computation resources without jointly considering the data locality and networking resources, which may lead to underutilized resources and poor performance of 
workloads.", "Though some works [17][18] consider the inner dependencies of workloads and the computation resources among edge nodes for optimizing the application performance, they fail to consider the data locality and resource heterogeneity.", "In this work, we designed and developed ENTS, the first edge-native task scheduling system, to manage the geo-distributed and heterogeneous resources of edge infrastructures and enable efficient task scheduling among distributed edge nodes to optimize application performance.", "ENTS is developed based on Kubernetes, allowing Kubernetes to collaboratively schedule computation and networking resources considering both job profile and resource status.", "Specifically, to parse the inner dependencies of the user-submitted jobs, we adopt a data flow programming model, where each task in a job is programmed as a functional module.", "A profiler is designed to profile the job's execution time on heterogeneous edge nodes.", "The job profile information will later be used to facilitate efficient task scheduling.", "We also develop a network manager to manage the networking resources, which collaborates with the Kubernetes original components to jointly orchestrate the coupled resources under the coordination of a newly designed collaborative online scheduler.", "The scheduler runs the intelligent scheduling algorithms to generate the task scheduling policies to optimize the application performance.", "To showcase the efficacy of ENTS, we formulate a joint task allocation and flow scheduling problem for data streaming applications as a case study.", "The problem is a mixed-integer nonlinear programming (MINLP) problem proven to be NP-hard [19].", "We design two online algorithms to solve the problem, which decide how to partition the job, where to allocate the tasks, and how to allocate the routing path and bandwidth for intermediate data flow to optimize the average job throughput.", "The efficacy of the proposed system is illustrated by developing a real-world testbed for a representative edge video analytics application, namely, object attribute recognition.", "We develop a real-world hybrid testbed with both physical and virtual edge nodes to evaluate the system even at large scale.", "Online jobs continuously arrive and are partitioned and scheduled among the edge nodes.", "We have comprehensively evaluated the performance of the designed system by comparing it with the state-of-the-art regarding different metrics, including average job throughput and average waiting time.", "The evaluation results show that our edge-native task scheduling approach improves the performance significantly.", "The main contributions of this work are as follows: We design and develop ENTS to manage the data, computation, and networking resources in the heterogeneous geo-distributed edge infrastructure.", "ENTS is the first work to jointly manage coupled edge resources for optimizing the performance of edge-native applications.", "We formulate a joint task allocation and flow scheduling problem for data streaming applications and propose two online algorithms to solve the problem.", "We evaluate the performance of proposed solutions in a real-world testbed with a video analytics application.", "The experimental results indicate the superiority of ENTS over the baseline approaches in terms of higher job throughput and lower latency." 
], [ "Background and Motivations", "In this section, we introduce some background knowledge of the Kubernetes scheduler and illustrate the motivations for designing ENTS through some concise examples.", "Figure: Components of Kubernetes SystemFigure: A Motivating Example of Collaborative Task Scheduling" ], [ "Kubernetes Scheduler", "Fig.", "REF depicts the components of Kubernetes with a master-client architecture.", "There is at least one centralized master managing resources and scheduling containerized workloads across multiple worker nodes (clients).", "The pod is the basic unit of Kubernetes to schedule the workload.", "A pod can contain one or more containers.", "There are mainly four components in the master node.", "The API server is an entry point to manage the whole cluster, providing services via Restful APIs.", "Components communicate and interact with each other through the API server.", "Etcd is a key-value pair distributed database that records the cluster status, such as node resource availability, location, states, and namespace.", "The scheduler is responsible for scheduling pods.", "It parses the operational requirements of pods and binds a pod to the best fit node.", "The controller manager is responsible for monitoring the overall state of the cluster.", "It launches a daemon running in a continuous loop and is responsible for collecting cluster information.", "Kubelet is the node agent in the clients.", "It is responsible for reporting events and resource usage and managing containers.", "When scheduling user-submitted workloads, the scheduler first takes a pod pending to be scheduled from the etcd database and then binds the pod to the corresponding client node according to the pre-defined scheduling policies.", "The scheduling policy is sent to Kubelet on the client nodes via the API server.", "After receiving the policies, Kubelet lunches the pods and monitors the pods' execution status.", "Kubernetes scheduler adopts a multi-criteria decision-making algorithm in two stages.", "The first stage is node filtering, where the scheduler will select candidate nodes capable of running the pods by applying a set of filters, such as memory and storage availability.", "Those filters are also known as predicates.", "The second stage is node scoring.", "It scores all the candidates based on one or more strategies, such as LeastRequestedPriority, which allocates pods to the nodes with the least computation resource consumption, and BanlancedResourceAllocation, which balances the resource consumption among edge nodes.", "Those strategies are known as priorities.", "The scheduler will allocate a pod to the node with the highest score." 
], [ "A Motivating Example", "As shown in Fig.", "REF , this section presents a motivating example articulating the benefits of collaborative task scheduling, which jointly considers the coupled data, computation, and networking resources in edge computing scenarios.", "The problem is to allocate the application with dependent tasks, shown in Fig.", "REF (a), to a set of edge nodes, shown in Fig.", "REF (b), such that the job throughput is maximized.", "Fig.", "REF (a) shows the task graph of the job modeled as a directed acyclic graph.", "There are 6 tasks in the job, and the weight of the link between tasks indicates the volume of the dependent data.", "Fig.", "REF (a) also shows the memory demand and workload of each task.", "We assume that the total memory demand and workload are the sum of tasks, i.e., 11 and 55, respectively.", "Note that the job is a streaming application, where input data continuously arrives from the source, i.e., edge node $e4$ .", "The amount of the input data is 5.", "In Fig.", "REF (b), there are 5 edge nodes $\\lbrace e1, e2, e, e4, e5\\rbrace $ .", "The weight of the link between the edge nodes indicates the bandwidth.", "Similarly, the table in Fig.", "REF (b) shows the available memory and computing power of the edge nodes in the network.", "Fig.", "REF (c) shows the job allocation strategy without task partition, where the job is scheduled to node $e1$ and the input data is transmitted from the source node $e4$ to $e1$ indicated by data flow $f_{sa}$ , whose allocated bandwidth is 10 and routing path is $e4 \\rightarrow e2 \\rightarrow e1$ .", "The throughput is calculated by $1/\\max \\lbrace 5/10, 55/200\\rbrace = 2$ .", "Strategy in Fig.", "REF (c) is known as LeastRequestPriority, which are extensively used in Kubernetes.", "Differently, Fig.", "REF (d) partition the job, where task $a$ is allocated to source node $e4$ and the rest tasks are allocated to node $e1$ .", "Hence there are two data flows indicated by $f_{ac}$ and $f_{ab}$ with the same routing path $e4 \\rightarrow e2 \\rightarrow e1$ .", "By default, two data flows equally share the bandwidth of link $<e2, e4>$ .", "The throughput of the job using this strategy is $2.5$ , which is better than strategy in Fig.", "REF (c) as the raw data transmission in (c) becomes the bottleneck.", "Further, Fig.", "REF (e) improves (d) with the throughput 3.3 due to the optimized bandwidth sharing policy, where the bandwidths allocated to flow $f_{ac}$ and $f_{ab}$ are proportional to the amount of dependent data.", "Fig.", "REF (f) shows a throughput of 4 with customized routing policy, where the flow $f_{ac}$ selects the routing path $e4 \\rightarrow e2 \\rightarrow e1$ with the allocated bandwidth 10 and the flow $f_{ab}$ selects the path $e4 \\rightarrow e3 \\rightarrow e1$ with the allocated bandwidth 6.", "From the above examples, we can see that joint consideration of the coupled resources by optimizing the task allocation strategies, the bandwidth allocation, and flow routing policies can improve the application performance.", "In the rest of this paper, we build ENTS system to orchestrate coupled edge resources and design optimal collaborative task scheduling algorithms by jointly considering the data, computing, and networking resources of the geo-distributed edge nodes." 
], [ "System Overview", "This section gives an overview of the design goals and the system components.", "ENTS is designed based on Kubernetes to manage the resources and schedule the workloads over the geo-distributed, large-scale, and heterogeneous edge environment.", "It has two main objectives: 1) Jointly manage and orchestrate the coupled and distributed data, computation, and networking resources; 2) Enable effective distributed task execution to achieve better performance of applications." ], [ "Design Goals", "The design of ENTS obeys the principles as follows.", "Scalability.", "The system can be scaled to a large number of devices and services retaining its high performance.", "Collaboration.", "The different edge nodes can collaborate to manage the distributed and heterogeneous resource regarding data, computation, and networking.", "Universality.", "The system supports execution of various kinds of tasks and workloads." ], [ "System Architecture", "In Fig.", "REF , we show a birds-eye view of ENTS's system architecture and functional workflow.", "The system adopts the server-client architecture and is built based on Kubernetes with a master node to manage the distributed resources and schedule the tasks among edge nodes.", "Kubernetes components are used to manage the computation and storage resources of edge nodes.", "However, Kubernetes lacks support to profile the job's inner-dependency and execution time on heterogeneous edge nodes and orchestrate networking resources.", "Hence, we develop new components to enhance the ability of Kubernetes to orchestrate coupled resources considering the job profile.", "The system follows the principles of service-oriented architecture, where functions of the components are developed as services and can be called with APIs.", "The components of the system are listed below.", "Profiler parses the input job and profiles the execution time of tasks on heterogeneous edge nodes.", "The job profile will be used to support intelligent task scheduling.", "Scheduler accesses the system information, such as CPU and GPU usage, network conditions, and job profile.", "On this basis, it generates the policies of task execution and resource allocation that optimizes job performance.", "Compute controller manages the computation and storage resources at the edge nodes.", "It leverages the Kubernetes components API server and controller manager to orchestrate the computation resources.", "Network controller and manager manage the networking resources of edge nodes, such as bandwidth allocation, routing and forwarding of data flows.", "Messenger handles the message between the edge node and the master.", "We extend the messaging of Kubernetes between the master and clients because it lacks support for orchestrating network resources.", "Kubelet manages pods, containers, and data volumes.", "It is Kubernetes original component, whose primary responsibility is for task execution.", "MetaManager is responsible for monitoring and storing device status and application status.", "Specifically, the device and task monitors are responsible for storing and retrieving metadata (device status and task execution status) to and from a lightweight database.", "Such information will be sent to the master node for supporting task scheduling.", "Figure: Architecture of the ENTS SystemENTS is based on Kubernetes and reuses the key components of Kubernetes.", "It equips Kubernetes with the ability to jointly orchestrate the networking and computation resources to optimize the 
performance of edge-native applications.", "The general workflow of the system is described as follows.", "The profiler first parses the user-submitted job and profiles the execution time of each task of the job on heterogeneous edge nodes.", "The job profile information, including the inter-dependencies of tasks and task execution time, will be used for later decision-making of task scheduling.", "The scheduler generates the task execution policies by jointly considering the job profile information, the data locality, available computation and networking resources of the edge nodes.", "Specifically, the policies decide which node to allocate tasks to, the bandwidth allocation, and the routing path of data flows.", "The policies will be managed by the network controller and the compute controller together, and then be executed by the Kubelet and the network manager on the client nodes.", "The run-time characteristics of tasks and the nodes' status will be sent back to the controller in the master and used for later task scheduling." ], [ "System Design", "This section illustrates the details of the ENTS system workflow, including job profiling, collaborative task scheduling, and distributed task execution." ], [ "Application Development and Profiling", "To easily parse the user-submitted job and facilitate efficient distributed task execution, we adopt the data flow programming model [20], where each task in a job is programmed as a function module.", "Tasks are loosely coupled with intermediate data transmission.", "Note that many modern applications are modeled in such a way.", "Those applications are complex in nature, structured in a microservices architecture style, consisting of a large number of inter-dependent and loosely coupled modules.", "In addition, to support various kinds of workloads, the programming model is non-intrusive with respect to the user's programming language.", "As shown in Fig.", "REF , we only require developers to declare the tasks in the submitted job without intruding on the main functions of the applications.", "Users can use any programming language to implement their applications.", "Compared with programming models that require users to learn many pre-defined operations, such as Hadoop, Spark, and Flink, ENTS is easier to learn and use.", "Users are required to submit the job configuration so that the system can profile the job and perform efficient task scheduling.", "As shown in Fig.", "REF , the configuration explicitly defines the data source, dependencies among the tasks, and the resource demand of each task.", "Particularly, the job consists of 4 tasks.", "The first task $task0$ demands 2GB memory and has subsequent tasks $task1$ and $task2$ .", "After the user submits the job configuration, ENTS will start the profiling.", "The objective of job profiling is to estimate the running time of each task of the submitted job on heterogeneous edge nodes, which will then be used to support the collaborative task scheduling.", "Since profiling may take considerable time, depending on the complexity of the job, we perform it offline.", "Specifically, the profiler will send the job configuration to the edge nodes that meet the resource requirements of the job.", "Each edge node will profile the job by executing the tasks under the requested resource and send the job profile information back to the scheduler.", "Offline profiling is reasonable for those long-running jobs, such as video analytics [21] and virtual reality [4].", "Other methods can be used to
measure the computing capability of edge nodes and estimate the workload of the application in advance, which is more suitable for online application profiling [22][23].", "We will study them in the future and incorporate the mechanisms into ENTS.", "Figure: Code Snippet of User Application", "Figure: Code Snippet of Application Configuration", "Figure: ENTS Task Scheduling Workflow" ], [ "Collaborative Task Scheduling", "After a job is profiled, it will be added to a Job Queue, pending scheduling, as shown in Fig.", "REF .", "The job-related information, including task dependencies and requested resources, the available computation resource of edge nodes, and the status of the network will be sent to the scheduler to support the collaborative task scheduling decisions.", "We will elaborate on the scheduling algorithms in Sec. .", "Figure: Collaborative Task Scheduling Strategy", "The scheduler generates the collaborative task scheduling strategy, which decides where to allocate each task, how much the allocated bandwidth is, and the routing path together with the communication port for each data flow.", "As shown in Fig.", "REF , the job shown in Fig.", "REF is partitioned into 3 parts, where task0 and task1 are allocated to edge nodes $e1$ and $e2$ , respectively.", "Task2 and task3 are both allocated to $e3$ .", "The bandwidths of data flows $f_{01}$ and $f_{02}$ are restricted to 15Mbps and 10Mbps, respectively.", "The source node port and destination port of flow $f_{01}$ are set to be 8089 and 8090, respectively.", "The routing path of flow $f_{13}$ is determined as $\\lbrace e2, e3, e4\\rbrace $ .", "Once the task scheduling strategy has been determined, it will be maintained by the compute controller and the network controller, respectively, and sent to the edge nodes for execution.", "Specifically, the computation resource-related strategies, such as where to allocate the task and how many resources are assigned to the task, will be managed by the compute controller, which interacts with the Kubelet on edge nodes to ensure the start, status monitoring, and stopping of the containerized tasks.", "The networking resource-related strategies, such as port, bandwidth, and routing path of data flow, are managed by the network controller, which interacts with the network manager on edge nodes to ensure the communication and data transmission among edge nodes.", "The two controllers jointly manage the edge resources and ensure the correct execution of the collaborative task scheduling strategies with the coordination of the scheduler."
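As a minimal sketch, a strategy like the one just described could be encoded as follows; the field names are hypothetical illustrations of ours, not ENTS's actual wire format:

```python
# Hypothetical encoding of the running example: task0 -> e1, task1 -> e2,
# task2/task3 -> e3, with the flow caps and ports quoted in the text.
strategy = {
    "placement": {"task0": "e1", "task1": "e2", "task2": "e3", "task3": "e3"},
    "flows": {
        "f01": {"src": ("e1", 8089), "dst": ("e2", 8090), "bandwidth_mbps": 15},
        "f02": {"src": ("e1", None), "dst": ("e3", None), "bandwidth_mbps": 10},
        # ports left as None where unspecified in the example
    },
}

def split_strategy(strategy):
    """Mirror the controller split described above: placement entries go to the
    compute controller (and on to Kubelet); flow entries go to the network
    controller and network manager."""
    return strategy["placement"], strategy["flows"]
```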
], [ "Distributed Task Execution", "When the messenger receives the task execution policy, it will decompose the policies into computation-related and networking-related policies.", "The computation-related policies will be forwarded to and maintained by the Kubelet, while the networking-related ones will be forwarded to and maintained by the network manager.", "Kubeetl and network manager work together to ensure the proper execution of the assigned task.", "One important role of the network manager is to manage and orchestrate the networking resources.", "In this work, we are mainly concerned with the bandwidth allocation and customized routing of the cross-node data flows.", "For cross-node communication, Kubernetes usually adopts a flannel network [24].", "As shown in Fig.", "REF , a data package from Pod1 to Pod3 will first be forward to docker0 and then to the flannel interface.", "The package will go through eth on edge node A and be sent to edge node B, where a reverse process will be performed to analyze the Internal IP of the package and route the package to the destination, i.e., Pod 3.", "To achieve the bandwidth allocation and customized routing of data flow, for each data flow in a scheduled job, the network manager will specify the {source_ip, source_ip_port, bandwidth_limit, destination_ip, destination_ip_port}, as shown in Fig.", "REF .", "Through this information, the network manager leverages the Linux kernel functions, i.e., Traffic Control and Iproute [25], to shape the bandwidth between two edge nodes and customize routing for data packages.", "Traffic control creates Classful Queuing Disciplines (qdisc) to filter and redirect network packages to a particular quality-of-service queue before sending them out.", "The network manager also maintains the routing table of each assigned task.", "As shown in Fig.", "REF , the data package going through port 8009 from edge node 1 will be forwarded to another edge node rather than go directly to the destination, i.e., edge node 2.", "Also, the bandwidth of data flow from Pod1 of edge node 1 will be shaped to 3Mbps.", "Figure: Bandwidth Allocation and Customized Routing of Network ManagerAfter the network configuration takes effect, the kublet will launch the pod according to the assigned computation-related policies, such as CPU and memory requests.", "The device monitor and task monitor will consistently and continuously monitor the status of the devices and the task." ], [ "Collaborative Task Scheduling with Data Streaming Applications", "In this section, we showcase the collaborative task scheduling of ENTS with representative data streaming applications, namely edge video analytics.", "We first introduce the system model.", "Then, we formulate a joint task allocation and flow scheduling problem for a single job scheduling and illustrate the proposed algorithms.", "On this basis, we further propose two online scheduling algorithms to schedule multiple continuous arriving jobs to maximize the average job throughput." 
], [ "System Model", "Edge video analytics [21][26][27] is a killer application of edge computing.", "The network and application model used in formulating the problem is described as follows.", "The communication network is a mesh network of edge nodes connected using a multi-hop path.", "The network is modelled as an undirected graph $G = (V, E)$ , where $V$ is the set of edge nodes, $V = \\lbrace j|1\\le j \\le M\\rbrace $ , and $E$ is the set of links connecting different edge nodes, $E = \\lbrace l_{u,v}|u, v \\in V\\rbrace $ .", "Here, $M$ is the total number of edge nodes.", "The computing capacity, maximum resource and available resource of edge node $j$ is $PS_{j}$ , $R_{max}^{j}$ and $R_{avail}^{j}$ , respectively.", "The bandwidth of link $l$ is represented by $B_{l}$ .", "The network can be heterogeneous in terms of the computation capacity of edge nodes and link bandwidth." ], [ "Application Model", "There will be multiple jobs submitted to the ENTS system by the edge nodes.", "Each job is modeled as a directed acyclic graph $J = (T, P)$ , where $T$ is a set of dependent tasks and $P$ represents the set of dependencies between the tasks in the job.", "$Pd_{i}$ denotes the predecessor tasks of task $T_{i}$ .", "The computation workload and resource demand of task $j$ is $C_{j}$ and $R_{req}^{j}$ .", "The amount of dependent data between task $j$ and task $i$ is $D_{i,j}$ .", "The input data source of job $J$ is assumed to be located at an edge node $s_{J}|s_{J} \\in V $ .", "The objective of the single job scheduling is to maximize the throughput of the job by deciding where to allocate each task of the job, the routing path and bandwidth allocation of each data flow caused by the intermediate data transmission.", "If two dependent tasks are allocated to the same edge node, there will be no intermediate data transmission and thus no data flow.", "The joint task allocation and flow scheduling problem denoted as $P_{1}$ is formulated as follows.", "$\\max \\left\\lbrace TP = \\frac{1}{t_{p}}\\right\\rbrace $ $ t_{p} = \\max \\left\\lbrace \\max _{i \\in T}(t_{comp}^{i}), \\max _{i \\in T, j \\in Pd_{i}}(t_{comm}^{i,j})\\right\\rbrace $ $t_{comp}^{i} = X_{i}^{u} \\cdot \\frac{C_{i}}{PS_{u}}$ $t_{comm}^{i,j} = X_{i}^{u} \\cdot X_{j}^{v} \\cdot \\frac{D_{i,j}}{B_{l_{u,v}}}, j \\in Pd_{i}$ $X_{i}^{u} \\in \\lbrace 0,1\\rbrace , \\forall i, u$ Eq.", "REF indicates the computation time of task $i$ , where $X_{i}^{u}$ is a binary variable.", "$X_{i}^{u}$ equals to 1 if task $i$ is allocated to edge node $u$ , otherwise $X_{i}^{u}$ equals to 0.", "Eq.", "REF shows the transmission time of the intermediate data between dependent task $i$ and $j$ .", "The throughput is $TP = \\frac{1}{t_{p}}$ , where $t_{p}$ is constraint by the maximum transmission and computation time as indicated by Eq.", "REF .", "$P_{1}$ is a mixed Integrated Non-linear problem (MINLP), which is proven to be NP-hard in literature." ], [ "Proposed Solution", "To solve the problem $P_{1}$ , we decompose it into two sub-problems, i.e., allocate each task of the job $P_{2}$ and decide the routing path and bandwidth allocation of all the data flows $P_{3}$ .", "To solve $P_{2}$ , we use a greedy algorithm to allocate each task to the edge node, which can provide the least execution time, including the computation time and the dependent data transmission time.", "To solve $P_{3}$ , we first relax it into a convex problem, which can be solved by convex optimizers, and then derive the solution for $P_{3}$ ." 
], [ "Solving Problem $P_{2}$", "The algorithm to solve $P_{2}$ is shown in Algo.", "REF .", "For each task in the job, the algorithm traverses all the edge nodes with satisfied resource capacity and allocates the task to the edge node with the minimum execution time, including both computation time and intermediate data transmission time (Line 3-13).", "For calculating $t_{comm}^{i,j}$ , we set the bandwidth between two edge nodes as the average bandwidth of all routing links.", "This is reasonable because the intermediate data flow can have multiple choices to avoid network congestion.", "Later, we will adjust the allocated bandwidth and the routing path of the data flows in a more fine-grained way in problem $P_{3}$ .", "Task Allocation network $G = (V, E)$ , job $J = (T, P)$ , the task allocation policy $T_{i,j}$ , the data flows $FL$ Initialize $T_{i,j} \\leftarrow 0$ for all $i, j$ Query the available resource $R_{j}$ of all edge nodes task $T_{i}$ in job $J = (T, P)$ edge node $j$ in network $G = (V, E)$ $R_{avail}^{j} > R_{req}^{i}$ Calculate the computation time $t_{comp}^{i} = C_{i} \\div PS_{j}$ Calculate the intermediate data transmission time $t_{comm}^{i} = \\max t_{comm}^{i,j}$ using Eq.", "(4) Calculate the execution time $t_{exec}^{j} = t_{comp}^{j} + t_{comm}^{j}$ Allocate task $T_{i}$ to node $j^{*} = min_{J} \\lbrace t_{exec}^{j}\\rbrace $ $T_{i,j^{*}} \\leftarrow 1$ Update $R_{j*}$ for node $j^{*}$ Calculate data flow $f_{i} = <source, destination, datasize>$ with $T_{i,j}$ Add $f_{i}$ to data flows $FL$ $T_{i,j}$ , $FL$" ], [ "Solving Problem $P_{3}$", "After solving $P_{2}$ , we get the data flows $FL$ , where we can know the number of data flows $Nf$ , the source, destination, and data volume of each data flow $f_{i}$ .", "We then solve $P_{3}$ to decide the routing path and bandwidth allocation of each data flow.", "The $P_{3}$ is formulated as follows.", "$\\min \\max _{i=1, \\ldots , Nf}\\left\\lbrace \\frac{V_{i}}{b_{i}}\\right\\rbrace $ $\\sum _{i} \\sum _{k: l \\in P_{i}^{k}} b_{i} y_{i}^{k} \\le B_{l}, \\forall l$ $\\sum _{k} y_{i}^{k}=1, \\quad \\forall i$ $y_{i}^{k} \\in \\lbrace 0,1\\rbrace , \\forall i, k$ where $V_{i}$ is the size of flow $f_{i}$ and $b_{i}$ is the bandwidth allocated to flow $f_{i}$ .", "$P_{i}^{k}$ is the collection of all the possible routing paths of flow $f_{i}$ .", "$y_{i}^{k}$ is a binary variable.", "$y_{i}^{k}$ equals to 1 if flow $f_{i}$ chooses the $k^{th}$ routing path of $P_{i}^{k}$ .", "Note that Eq.", "REF indicates that the sum of allocated bandwidth of all data flows going through link $l$ cannot exceed its capacity.", "Eq.", "REF and Eq.", "REF ensure that a data flow can only choose one routing path.", "The problem $P_{3}$ is still a MINLP problem.", "Therefore, we resort to relaxing the integer variable $y_{i}^{k}$ to a real variable $y_{i}^{k} \\ge 0$ .", "We name the relaxed problem $P_{3}-\\textsc {Relax}$ .", "Due to the existence of term $b_{i} \\cdot y_{i}^{k}$ , the $P_{3}-\\textsc {Relax}$ problem is still a non-linear programming problem which is hard to solve.", "In the following, we transform the $P_{3}-\\textsc {Relax}$ problem into an equivalent convex optimization problem." 
], [ "An Equivalent Convex Problem", "First, we introduce an variable $TH$ such that $TH = \\max \\limits _{i=1, \\ldots , Nf}\\left\\lbrace \\frac{V_{i}}{b_{i}}\\right\\rbrace $ .", "Furthermore, we introduce another variable $q_{i}$ such that $q_{i} = TH \\cdot b_{i}$ , and variable $m_{i}^{k} = q_{i} \\cdot y_{i}^{k}$ .", "Then, the equivalent problem $P_{3}-\\textsc {Relax-Cvx}$ is formulated below.", "$\\min TH$ $\\sum _{i} \\sum _{k: l \\in P_{i}^{k}} m_{i}^{k} \\le B_{l} \\cdot TH, \\forall l$ $\\sum _{k} m_{i}^{k}=q_{i}, \\quad \\forall i$ $m_{i}^{k} \\ge 0, \\forall i, k$ $q_{i} \\ge V_{i}, \\forall i$ All constraint in the $P_{3}-\\textsc {Relax-Cvx}$ is affine, and the objective function is convex.", "Therefore, the $P_{3}-\\textsc {Relax-Cvx}$ problem is a convex optimization problem which can be solved using convex optimizers [28].", "However, since we relax the binary integer constraint, the solution may be that some $y_{i}^{k}$ are decimal factions.", "To solve the problem, we route the $i^{th}$ data flow to a path $k^{*}$ such that $m_{i}^{k^{*}} = \\max _{k} m_{i}^{k}$ .", "When the routing path is determined, the optimal bandwidth allocation policies is given by $b_{i}^{*} = \\min \\left\\lbrace \\frac{V_{i}}{\\sum _{i} \\sum _{k: l \\in P_{i}^{k^{*}}} V_{i} y_{i}^{k^{*}}}\\right\\rbrace , l \\in P_{i}^{k^{*}}$ The algorithm to solve $P_{3}$ is shown in Algo.", "REF .", "[t] Joint Routing and Bandwidth Allocation (JRBA) network $G = (V, E)$ , data flows $FL$ , the routing policy $y_{i}^{k}$ , the bandwidth allocation policy $b_{i}$ , and job throughput $JTH$ Solve $P_{3}-\\textsc {Relax-Cvx}$ and get $\\lbrace T^{*}, q_{i}^{*}, m^{k^{*}}_{i}\\rbrace $ flow $f_{i}$ in $FL$ Initialize $y_{i}^{k} \\leftarrow 0$ for all $k$ $k^{*} \\leftarrow arg_{k}\\max m_{i}^{k}$ $y_{i}^{k^{*}} \\leftarrow 1$ Calculate $b_{i}^{*}$ using Eq.", "REF Update $B_{l}$ according to $y_{i}^{k^{*}}$ , $b_{i}^{*}$ $JTH \\leftarrow \\max _{i=1, \\ldots , N}\\left\\lbrace \\frac{V_{i}}{b_{i}}\\right\\rbrace $ $y_{i}^{k}$ , $b_{i}$ , $JTH$" ], [ "Online Scheduling", "Algo.", "REF and Algo.", "REF study the task scheduling for one job.", "However, in a practical ENTS system, jobs constantly arrive and share the resource in the network.", "Our goal is to maximize the average job throughput.", "Motivated by this, we propose two online scheduling algorithms, which run in the ENTS online scheduler and periodically schedule all arrived jobs.", "The online scheduler maintains two job queues: 1) a queue of jobs that are running, denoted by $Q_{run}$ , and 2) a queue of jobs that are waiting to be scheduled, denoted by $Q_{wait}$ .", "The two online scheduling algorithms are: 1) schedule the job in $Q_{wait}$ one by one, and 2) schedule the job in $Q_{wait}$ one by one but readjust the routing and bandwidth sharing strategy by considering all the existing and coming data flows in the edge network.", "The first algorithm (OTFS) is shown in Algo.", "REF .", "For each job in the queue $Q_{wait}$ , the algorithm first sorts the job in descending order of waiting time and schedules the jobs in sequence (Line 6-9).", "During scheduling, the algorithm calls the procedure Task Allocation (Algo.", "REF ) and JRBA (Algo.", "REF ) in turn (Line 9-13).", "The second algorithm (OTFA) is shown in Algo.", "REF .", "Different from OTFS, which makes task scheduling decisions based on the current status of the computation and networking resource in the edge network, OTFA jointly manages the existing data flows and the coming 
data flows.", "It first allocates the computation resources for arriving jobs and then readjusts the networking resources for all data flows (Line 10-15).", "[t] OTFS: Online Task Allocation and Flow Scheduling current time $curT$ , network $G = (V, E)$ , $Q_{wait}$ $J_{finish} \\leftarrow $ all jobs finishing at $curT$ $J_{finish} \\ne \\emptyset $ Release all computing resource and bandwidth allocated to $J_{finish}$ Update $R_{j}$ and $B_{l}$ for network there are jobs arriving at $curT$ Add jobs arriving at $curT$ to $Q_{wait}$ Sort $Q_{wait}$ in descending order of waiting time job $J_{i}$ in $Q_{wait}$ Call the Task Allocation procedure to get $\\lbrace T_{i,j}, FL\\rbrace $ Call the JRBA procedure [t] OTFA: Online Scheduling Task Allocation Joint Flow Adjustment current time $curT$ , network $G = (V, E)$ , $Q_{wait}$ , $Q_{run}$ $J_{finish} \\leftarrow $ all jobs finishing at $curT$ $J_{finish} \\ne \\emptyset $ Release all computing resource and bandwidth allocated to $J_{finish}$ Update $R_{j}$ and $B_{l}$ for network there are jobs arriving at $curT$ Add jobs arriving at $curT$ to $Q_{wait}$ Sort $Q_{wait}$ in descending order of waiting time job $J_{i}$ in $Q_{wait}$ Call the Task Allocation procedure to get $\\lbrace T_{i,j}, FL\\rbrace $ Release all bandwidth allocated to data flows $FL_{run}$ in $Q_{run}$ Add $FL$ to $FL_{run}$ Call the procedure JRBA with $FL_{run}$" ], [ "Benchmarks", "To evaluate the ENTS system, we use a real-world live video analytics application, i.e., object attribute recognition [29], which is extensively used in surveillance of public safety.", "The application graph is shown in Fig.", "REF , where we have 10 functional modules.", "For modules 2 to 9, each of them is implemented with a computing-extensive and resource-greedy DNN model [30][31].", "The application takes the surveillance video as input and recognizes the attributes of pedestrians and vehicles in the video, such as the color of cloth, gender of pedestrians, and type of vehicles.", "Specifically, we use MobileNet-V2 [32] as the backbone network for object detection in module 2.", "For attribute recognition and object re-identification, i.e., module $3-9$ , we use Resnet-50 [33] as the backbone network.", "We use the Kalman filter to track the objects in module 10.", "The resolution of the video is 1920x1080 with 30fps and the size of each video frame is about 6MB.", "The application is implemented with Python." ], [ "Baselines", "We compared the proposed method with three state-of-the-art baselines as follows.", "LeastRequestPriority (LR).", "It schedules the whole job to the edge node with the least resource consumption.", "The LR policy is frequently used in Kubernetes.", "BalancedResourceAllocation (BR).", "It schedules the whole job to the edge node, which can balance the resource consumption among the edge nodes.", "BR is used in Kubernetes to achieve workload balancing.", "Task Partition (TP).", "It partitions the job and schedules each task to the edge nodes with the least execution time, including the transmission time and the computation time.", "We adopt the default shortest path to transfer the intermediate data.", "When multiple data flows go through the same link, all flows equally share the link bandwidth." 
], [ "Metrics", "We employ two metrics as follows.", "Average Job Throughput.", "It is the average throughput of all submitted jobs.", "It is an important metric to measure the performance of the scheduling algorithms.", "Average Waiting Time.", "It is the average waiting time of all submitted jobs, i.e., the time from the job submitted to the job scheduled.", "It is a metric reflecting the effectiveness of the scheduler and system overhead.", "Figure: Test Environment of ENTS" ], [ "Testbed Implementation", "To test the system on a large scale geo-distributed edge environment, we developed a hybrid testbed with both physical and virtual edge nodes, as shown in Fig.", "REF .", "We use virtual machines to emulate virtual edge nodes.", "While numerous virtual edge nodes enable us to test in a large-scale and network-flexible testing environment, the incorporation of physical nodes guarantees the fidelity of the testbed.", "We leverage Linux Traffic Control to configure the network topology and bandwidth among the edge nodes.", "We vary the network link bandwidth, e.g., from $1Mbps$ to $10Mbps$ , to emulate the physical distance among edge nodes.", "The intuition is that the bandwidth should be low if two nodes are far away.", "Similar idea is also adopted in [34].", "Specifically, we randomly generate the network connection among edge nodes with the average node degree as 3.", "We also enable routing and forwarding on each node so that each node is both a compute node and a router.", "We use 4 raspberry pi, 2 Nvidia Jetson Nano, and 2 Nvidia Jetson Xavier NX to represent physical edge nodes.", "A PC equipped with four Intel Cores i9-7100U with 20GB RAM to act as the master node to manage the edge nodes.", "Two servers are leveraged to host virtual machines acting as virtual edge nodes.", "One is equipped with Intel(R) Xeon(R) Gold 6128 CPU with 192GB Memory, another is Intel(R) Core(TM) i9-10900F CPU with 64GB memory.", "The specifications of the physical devices are shown in TAB.", "REF ." ], [ "Results and Analysis", "We test the performance of the ENTS system and the proposed online scheduling algorithms under various situations." 
], [ "Effects of Number of Edge Nodes", "We evaluate the influence of the number of edge nodes on the average job throughput and average waiting time to test the scalability of ENTS.", "In this experiment, a total of 50 jobs are submitted by the edge nodes to the master with the arriving rate following a Poisson distribution with $\\lambda = 0.5/second$ .", "As shown in Fig.", "REF (a), TR, OTFS, and OTFA perform much better than LR and BR, with higher average throughput.", "The average throughput of LR and BR does not exceed 1.", "It is because LR and BR do not partition the job, which leads to the transmission of source video data over a low-bandwidth edge network.", "It becomes the bottleneck of the job throughput.", "Unlike LR and BR, the other three methods, i.e., TP, OTFS, and OTFA, partition the job and enable distributed job execution, avoiding raw data transmission.", "OTFA performs best with the highest throughput among TP, OTFS, and OTFA.", "TP shares the bandwidth equally and assigns the shortest routing path for network flows, which usually leads to traffic congestion when multiple data flows pass through the same network link.", "Instead, OTFS and OTFA optimize the networking resources by enabling optimal bandwidth sharing and routing path selection concerning the end-to-end job throughput.", "OTFA goes further.", "It considers all the available data flows in the network, which can improve the average job throughput compared to OTFS.", "Table: Specifications of Physical DevicesFigure: a) Impact of the number of edge nodes on average throughput with average bandwidth 1Mbps.", "b) Impact of the number of edge nodes on average throughput with average bandwidth 10Mbps.", "c) Impact of the number of edge nodes on average waiting time.", "d) Impact of the number of submitted jobs on average throughput.", "e) Impact of the number of submitted jobs on average waiting time.", "f) Impact of average bandwidth on average throughput.We also observe that the average throughput does not show a linear growth with an increasing number of edge nodes.", "Generally, when the number of edge nodes increases, the network will have more resources and higher job throughput.", "However, the throughput decreases slightly when the number of edge nodes increases from 10 to 20 and 30 to 40.", "It is because of the limited network bandwidth, i.e., 1Mbps with a variance of $0.3$ in our experiment.", "When the number of edge nodes increases, the number of hops and network links between two edge nodes also increases, resulting in more bottleneck communication paths.", "As shown in Fig.", "REF (b), when the average bandwidth of the edge network becomes 10Mbps, such fluctuation of the average throughput will no longer exist.", "More specifically, it shows a linear growth as expected.", "It is because the network bandwidth is not the bottleneck anymore, and there are fewer bottleneck communication paths.", "Fig.", "REF (c) depicts the influence of the number of edge nodes on the waiting time.", "When the number of the edge nodes is below 30, the average waiting time of TP, OTFS, and OTFA is much smaller than that of LR and BR.", "The reason is that the former scheduling policies partition the job and allocate the task into edge nodes with less abundant resources, improving resource utilization and the number of jobs executable among the geo-distributed edge nodes.", "When the number of edge nodes is above 30, the total resource is sufficient, where the average waiting time is dominated by the running efficiency 
of the scheduling algorithms.", "Compared with the LR and BR algorithms, TP, OTFS, and OTFA are required to traverse all the edge nodes for each task and solve the formulated optimization problem, which increases the average waiting time.", "However, we observe that when the number of edge nodes is below 50, the average waiting time is no more than 1 second, and about $2.5$ seconds when the number of edge nodes is 70, which is still at a low level." ], [ "Effects of Number of Submitted Jobs", "We evaluate the average job throughput and waiting time with a changing number of submitted jobs.", "We set the average bandwidth to 1Mbps with a variance of $0.3$ .", "The number of edge nodes is 30.", "The arrival rate of the submitted jobs follows a Poisson distribution with $\lambda = 0.5/second$ .", "As shown in Fig.", "REF (d), when the number of submitted jobs is no more than 30, our methods perform similarly to the baselines.", "In such cases, the edge resources are relatively abundant, and the proposed methods, i.e., OTFS and OTFA, tend to yield similar decisions to the baseline methods.", "However, when there are more jobs, the average throughput of LR and BR declines dramatically.", "This is because multiple jobs compete for limited networking and computation resources.", "Without partitioning the submitted jobs and optimizing the bandwidth allocation and routing paths of flows, LR and BR easily suffer from network congestion and fragmented computation resource usage, degrading the average job throughput significantly.", "OTFA performs the best.", "Compared to TP and OTFS, OTFA considers optimal bandwidth sharing and routing paths for incoming in addition to existing data flows, which further improves the average job throughput through better resource utilization when there are more jobs.", "Fig.", "REF (e) depicts similar trends for the average waiting time.", "When the number of submitted jobs is below 50, the average waiting time of all the methods is low, i.e., no larger than $0.5$ seconds, without apparent fluctuation.", "We can also see that the waiting time of LR and BR is shorter than that of TP, OTFS, and OTFA.", "This is because the latter three approaches have to traverse all the edge nodes for each task, which leads to more time spent scheduling jobs.", "When the number of submitted jobs exceeds 50, TP, OTFS, and OTFA show consistent average waiting times while the waiting time of LR and BR increases significantly.", "The reason is that there are no available resources to schedule the newly arriving jobs.", "The remaining jobs are required to wait in the job queue, which results in an increased average waiting time.", "Compared to TP, OTFS, and OTFA, the other two methods, i.e., BR and LR, do not partition the submitted job, which easily leads to fragmented resource consumption and thus fewer served jobs." 
], [ "Effects of Average Bandwidth", "We also evaluate the performance of the average job throughput with the variance of the average bandwidth of the edge network.", "We set the number of edge nodes as 30 in this experiment and the number of submitted jobs as 50 with the arriving rate following a Poisson distributed with $\\lambda = 0.5/second$ .", "As shown in Fig.", "REF (f), the average throughput of all the methods increases with the average bandwidth.", "More specifically, when the average bandwidth of the edge network is no more than 5Mbps, OTFA outperforms other methods significantly because it jointly considers and optimizes the data locality, the networking, and computing resources of edge nodes.", "However, when the average bandwidth is above 10Mbps, baselines and proposed methods tend to have similar performance.", "It is because the bandwidth is relatively abundant now.", "However, OTFS and OTFA are slightly better than LR and BR, as they optimize the bandwidth allocation and routing selection for data flows in the edge network.", "BR outperforms LR as it aims to achieve balanced resource consumption, enabling the powerful edge nodes to service more jobs.", "In a nutshell, we evaluated and compared the performance of ENTS with the state-of-the-art and proposed online algorithms for scheduling streaming jobs.", "Benefiting from the ability to consider task dependencies and jointly optimize the limited coupled computation and networking resources, ENTS achieves a $43\\%-220\\%$ improvement in average throughput.", "Although the proposed solutions introduce additional overhead in making the scheduling strategies, they can serve more jobs when resources of the edge network are limited, which leads to less averaging waiting time." ], [ "Related Work", "Container scheduler.", "The default scheduler of Kubernetes (K8S) [12] is an online scheduler that implements a greedy multi-criteria decision-making (MCDM) algorithm.", "MCDM scores the available nodes with pre-defined rules and selects the highest scoring node for scheduling.", "This scheduling algorithm performs well in the cloud environment.", "However, it lacks features for container scheduling in the edge environment, such as limited network connections and geo-distributed and resource-constraint edge nodes.", "Furthermore, it is not performance-aware.", "There are several attempts to tailor the Kubernetes for the edge.", "Regarding the resource-constraint edge environment, MicroK8s and K3s [35] aims to simplify K8S and provide lightweight K8S distribution.", "KubeEdge [36] and OpenYurt extend the K8S capability to the edge by enabling the virtual network connection between edge servers and VMs in the cloud.", "However, those solutions do not change the core idea of task scheduling of Kubernetes.", "They are not application performance sensitive.", "Some work tries to improve the scheduling policies for performance-sensitive edge applications.", "Santos et al.", "[37] tried to extend the default task scheduling strategies in Kubernetes with the ability to sense the network status.", "They consider the round trip time information of candidate nodes to minimize the overall response time of an application to be deployed.", "Rossi et al.", "[17] designed a customized scheduler leveraging the Monitor, Analyze, Planning, Execute (MAPE) pattern to deploy applications in a geo-distributed environment.", "Wojciechowski et al.", "[18] proposed NetMARKS, fulfilling the Kubernetes scheduler with the network-aware feature.", "It uses Istio 
service mesh to collect network metrics, facilitating the scheduling of pods on a server and its neighbors and encouraging the co-location of pods [38].", "Though these works consider the network latency between edge nodes, they neglect the heterogeneous computing capabilities of edge nodes and the locality of data sources.", "Besides, they do not orchestrate the networking resources, such as the bandwidth allocation and customized routing of data flows.", "Task scheduling in cloud-edge infrastructure.", "Many works consider dispatching streaming tasks among heterogeneous edge servers and the cloud to minimize the average task completion time [9], [39].", "However, they only consider independent tasks while neglecting the dependencies among tasks.", "Concerning dependent tasks, Sundar et al.", "[40] proposed a heuristic algorithm for scheduling dependent tasks in a generic cloud computing system by greedily optimizing the scheduling of each task subject to its time constraint.", "Wang et al.", "[41] developed a deep reinforcement learning-based task offloading scheme, which leverages an off-policy reinforcement learning algorithm with a sequence-to-sequence neural network to capture the task dependencies of applications.", "Nevertheless, these works fail to consider the orchestration of network flows [19], which often results in network congestion and prolonged task completion times.", "Although some works [42], [43] optimize the average task completion time and jointly consider task allocation and flow scheduling, they do not optimize the application throughput and lack a real-world system implementation.", "To summarize, different from existing works, we jointly consider the data, computing, and networking resources to maximize the throughput of streaming applications, and we propose and develop a holistic system that enables application development, online scheduling, and distributed task execution." 
], [ "Conclusion and Future Work", "In this work, we designed and developed ENTS, the first edge-native task scheduling system, to manage geo-distributed and heterogeneous edge resources in collaborative edge computing.", "ENTS extends Kubernetes with the ability to jointly orchestrate computation and networking resources to optimize the application performance.", "ENTS comprehensively considers both the application characteristics and edge resource status.", "We show the superiority of ENTS with a case study on data streaming applications, in which we formulate a joint task allocation and flow scheduling problem and propose two online scheduling algorithms.", "Experiments on an object attribute recognition application on a large number of edge nodes show ENTS achieves improved performance.", "In the future, we will improve the work from two aspects as follows.", "On the one hand, we will develop more advanced algorithms for collaborative task scheduling.", "Current algorithms do not allocate resources for tasks, such as how much memory and CPU periods should be allocated to the containerized tasks.", "However, regarding optimization of the overall resource usage, we have to jointly consider the task partition and allocation, computing resource allocation, and networking resource allocation.", "On the other hand, we will integrate software-defined networking (SDN) into the network controller.", "We use the Linux kernel functions, i.e., Iproute and Traffic control, to achieve networking resource management for the network manager.", "The objective is consistent with SDN, which provides programming interfaces for conveniently orchestrating networking resources.", "Many works [44], [45], [46] are exploring integrating SDN with edge computing to facilitate the management of various edge nodes." ], [ "Acknowledgement", "This work was supported by the Research Institute for Artificial Intelligence of Things, The Hong Kong Polytechnic University, HK RGC Research Impact Fund No.", "R5060-19, and General Research Fund No.", "PolyU 15220020." ] ]
2210.07842
[ [ "Modelling phylogeny in 16S rRNA gene sequencing datasets using string\n kernels" ], [ "Abstract Motivation: Bacterial community composition is commonly quantified using 16S rRNA (ribosomal ribonucleic acid) gene sequencing.", "One of the defining characteristics of these datasets is the phylogenetic relationships that exist between variables.", "Here, we demonstrate the utility of modelling phylogenetic relationships in two tasks (the two sample test and host trait prediction) using a novel application of string kernels.", "Results: We show via simulation studies that a kernel two-sample test using string kernels is sensitive to the phylogenetic scale of the difference between the two populations and is more powerful than tests using kernels based on popular microbial distance metrics.", "We also demonstrate how Gaussian process modelling can be used to infer the distribution of bacterial-host effects across the phylogenetic tree using simulations and two real host trait prediction tasks." ], [ "The human microbiome", "The microbiome is defined as the microorganisms (including bacteria, fungi and viruses), their genetic material and their interactions that live in or on a host organism.", "The human body is itself a vast and diverse microbial ecosystem, with estimates placing the number of microbial genes per human host at up to ten times larger than the number of human genes .", "Datasets collected via 16S rRNA (ribosomal ribonucleic acid) gene sequencing are driving our rapidly increasing understanding of the role of the microbiome in human health by enabling cost-efficient identification and quantification of bacterial abundance.", "Financial and technical difficulties mean that it is usually not possible to perform whole genome sequencing of the organisms that comprise microbial communities.", "The 16S rRNA gene region is part of the bacterial genome that contains both conserved regions (used to design primers to amplify the sequence) and variable regions (used to identify and quantify organisms), meaning it is well-suited for measuring bacterial community composition.", "Each variable in a 16S rRNA gene dataset represents a distinct organism and is defined by a unique representative sequence.", "These variables (called operational taxonomic units or OTUs) are related to one another via historical evolutionary relationships (phylogeny) that can be represented by a phylogenetic tree, which is inferred from the representative sequences.", "These phylogenetic relationships distinguish 16S rRNA gene sequencing datasets from those generated using other sequencing modalities and so it may be beneficial to apply phylogeny-aware tools when analysing them.", "One popular approach to model phylogenetic relationships is using kernel methods, where the kernel function is derived from ecological distance metrics.", "Here, we present a novel approach using string kernels and demonstrate its utility in the kernel two-sample test and supervised learning using Gaussian processes (GPs)." 
], [ "Previous work on kernel methods for microbiome analysis", "Kernels are a popular method of non-parametric analysis of biological data and can be used to perform both supervised and unsupervised tasks via the specification of a kernel function $k(\\cdot ,\\cdot )$ , which computes inner products (i.e.", "similarities) in a reproducing kernel Hilbert space (RKHS).", "They are particularly well-suited to biological applications as (i) it is straightforward to encode complex prior knowledge via the kernel function's definition of similarity and (ii) kernel functions are well-suited for application to discrete data types (e.g.", "strings and trees) that are ubiquitous in biological settings.", "The most prominent application of kernels in the microbial setting is the Microbiome Regression-Based Kernel Association Test (MiRKAT, ), which tests for association between community composition and an outcome using semi-parametric kernel regression.", "MiRKAT has subsequently been extended in several directions, including to longitudinal data , and multiple outcomes .", "Other similar semi-parametric kernel approaches include the microbiome-based sum of powered score (MiSPU, ) and optimal microbiome-based association test (OMiAT, ).", "For a dichotomous outcome, an alternative approach is to perform a kernel two sample-test using maximum mean discrepancy (MMD, ).", "This is the approach taken by the Adaptive multivariate two-sample test for Microbiome Differential Analysis (AMDA, ), which also includes a preceding permutation step to select a subset of variables for the MMD calculation.", "The choice of kernel function encodes the modelling assumptions in any kernel method.", "Given two observations $x$ and $x^{\\prime }$ , it is standard practice for kernel methods to use the radial basis function (RBF) kernel, $k(x,x^{\\prime }) = \\sigma ^2 \\exp \\left( -\\Vert x-x^{\\prime }\\Vert ^2_2/2l^2 \\right)$ where $\\sigma ^2$ and $l$ are variance and lengthscale hyperparameters.", "However, the RBF and other similar kernels (e.g.", "the Matern family) only consider observed abundances and ignore phylogenetic relationships.", "These semi-parametric methods can model phylogenetic relationships by incorporating the distances between OTUs on the phylogenetic tree in the kernel computation (e.g.", "the UniFrac kernel described in Section ).", "This approach (modelling phylogeny using kernels derived from UniFrac distances) is commonly utilised in the second main application of kernel methods to microbial data, host trait prediction (supervised learning).", "Such kernels have been incorporated into methods such as the generalized linear mixed models , kernel ridge regression and kernelised support vector machines .", "Kernels utilised by previous methods therefore either ignore phylogenetic relationships or rely on the phylogenetic tree to incorporate phylogenetic information.", "Here, we propose an alternative approach that directly utilises the observed sequence data by computing their pairwise similarities using string kernels." 
], [ "Our contributions and structure of the paper", "Here, we present an investigation of string kernels (a kernel function that operates on pairs of strings) as a novel approach to model phylogeny.", "These string kernels operate on the representative sequences that define OTUs.", "We demonstrate the utility of string kernels in the context of two important statistical problems: (i) the kernel two-sample test; and (ii) host trait prediction using GPs.", "Our contributions are the first application of string kernels to model phylogeny in 16S rRNA gene sequencing datasets; demonstrating via simulation studies that phylogeny-aware kernels induce a more appropriate kernel two-sample test than kernels that only model taxa abundance; demonstrating that the resulting test is more powerful using a string kernel than the UniFrac kernel for small sample sizes; and showing how string kernels can be used with GP to infer the distribution of host phenotype effects across the phylogenetic tree.", "This paper is structured as follows.", "Section describes the relevant background on kernel methods (GPs and the two-sample test) before Section outlines their relevance to microbial applications as well as previously applied phylogeny-aware kernels based on the UniFrac distance.", "Section introduces the string kernels used in this study before Section describes the simulation setup that is then used to investigate the two-sample test (Section ) and host trait prediction using GPs (Section ).", "We then demonstrate our host trait prediction approach on a real dataset relating the airway bacterial community to respiratory disease (Section ) before Section summarises our findings and outlines future work." ], [ "Background on kernel methods", "This work studies two types of statistical tasks that can be performed using kernel-based approaches: (i) a two-sample test and (ii) supervised learning tasks such as regression or classification.", "The performance of kernel methods in both tasks is determined by the choice of a symmetric, positive semi-definite kernel function $k(\\cdot ,\\cdot )$ satisfying $k(x,x^{\\prime }) = \\langle \\phi (x), \\phi (x^{\\prime }) \\rangle _\\mathcal {H}\\qquad \\forall x,x^{\\prime }\\in \\mathcal {X}$ for feature map $\\phi : \\mathcal {X} \\rightarrow \\mathcal {H}$ which induces the RKHS $\\mathcal {H}$ .", "Kernels therefore compute inner products in a feature space defined by $\\phi (\\cdot )$ ." 
], [ "The kernel two-sample test", "An important research question in microbial studies is to determine whether two groups of samples are drawn from distinct distributions.", "In most cases the two groups correspond to disease or treatment groups and it is of interest to establish whether the two groups have distinct microbial communities.", "Given two sets of samples $X=\\lbrace x_i\\rbrace _{i=1}^{n_x}$ and $Y=\\lbrace y_i\\rbrace _{i=1}^{n_y}$ , where $x_i \\overset{\\text{i.i.d}}{\\sim }P$ and $y_i \\overset{\\text{i.i.d}}{\\sim } Q$ , the two-sample test considers the following competing hypotheses $H_0: P = Q\\,, \\quad H_1: P \\ne Q \\,,$ where $H_0$ and $H_1$ are the null and alternative hypotheses.", "Given a kernel $k(\\cdot ,\\cdot )$ , the maximum mean discrepancy (MMD, ) is defined as $ \\textnormal {MMD}(P,Q) =&\\, \\Vert \\mathbb {E}_{x \\sim P}[ \\phi (x) ] - \\mathbb {E}_{y \\sim Q}[ \\phi (y) ]\\Vert _\\mathcal {H} \\\\=&\\, \\Vert \\mu _P - \\mu _Q \\Vert _\\mathcal {H} \\,,$ where $\\mu _P$ and $\\mu _Q$ are the kernel mean embeddings of $P$ and $Q$ in $\\mathcal {H}$ .", "The kernel two-sample test uses as the test statistic the biased, minimum variance estimator of (REF ), estimated from the samples in $X$ and $Y$ : $ \\widehat{\\textnormal {MMD}}_k^2(X,Y) = \\dfrac{1}{n_x^2} \\sum _{i,j=1}^{n_x} &k(x_i,x_j) + \\dfrac{1}{n_y^2} \\sum _{i,j=1}^{n_y} k(y_i,y_j) \\\\&- \\dfrac{2}{n_x n_y} \\sum _{i,j=1}^{n_x,n_y} k(x_i,y_j) \\,.", "\\nonumber $ Statistical significance is assessed using a permutation test with $N_\\textnormal {perm}$ permutations, where the p-value is given by $p_{\\textnormal {perm}} = \\dfrac{\\sum _{i=1}^{N_\\textnormal {perm}} \\mathbb {1}(\\widehat{\\textnormal {MMD}}_k(X^*_i,Y^*_i) \\ge \\widehat{\\textnormal {MMD}}_k(X,Y)) + 1}{N_\\textnormal {perm} + 1} \\,,$ where $\\lbrace (X^*_i, Y^*_i)\\rbrace _{i=1}^{N_\\textnormal {perm}}$ is formed by permuting the combined samples of $X$ and $Y$ and $\\mathbb {1}(\\cdot )$ is the indicator function ." 
], [ "Gaussian processes", "Kernel methods can also be used for non-parametric Bayesian supervised learning tasks via a Gaussian process (GP).", "Let $X$ be an $n \\times p$ input matrix (e.g.", "containing OTU counts) and $y=(y_1,\\dots y_n)$ an $n$ -dimensional host phenotype vector.", "For a continuous trait, consider the following regression task $y_i=f(x_i)+\\varepsilon \\;, \\qquad \\varepsilon \\sim \\mathcal {N}(0,\\tau ^2)\\,, \\quad i=1\\,,\\ldots \\,,n \\,,$ where $x_i$ denotes the $i$ -th row of the matrix $X$ and $f(\\cdot )$ is an unknown function.", "To infer this unknown function one can specify a zero-mean GP prior distribution over the function space $ f(\\cdot ) \\sim \\mathcal {GP}(0, k(\\cdot ,\\cdot ))$ which is fully specified by the positive semi-definite kernel function $k(\\cdot ,\\cdot )$ and its hyperparameter $\\theta $ .", "The GP prior (REF ) can be seen as a generalisation of a multivariate Gaussian distribution: when evaluating $f(\\cdot )$ on a finite set of observations e.g.", "$x_1,\\dots x_n$ , the n-dimensional vector $(f(x_1),\\dots f(x_n))$ follows a multivariate Gaussian distribution with mean 0 and covariance matrix $K_{XX}$ , which is the positive semi-definite matrix with elements formed by pairwise evaluations of $k(\\cdot ,\\cdot )$ on the rows of $X$ .", "The Gaussian likelihood of this regression model permits exact computation of the posterior distribution $p(f(\\cdot )\\mid X,y)$ via Bayes rule .", "In addition, the log-marginal likelihood (LML) of the GP regression model can be obtained analytically: $ \\log p \\, (y \\mid X, \\theta ) =& -\\dfrac{1}{2} y^T ({K}_{{X}{X}} + \\tau ^2 {I})^{-1} y \\nonumber \\\\&\\qquad - \\dfrac{1}{2} \\log | ({K}_{{X}{X}} + \\tau ^2 {I}) | - \\dfrac{n}{2} \\log 2\\pi \\,,$ where $I$ is the identity matrix; note that $K_{XX}$ depends on the kernel hyperparameter $\\theta $ .", "For binary traits, we consider regression models of the form $y_i = \\Phi (f(x_i))\\,, \\quad i=1\\,,\\ldots \\,,n \\,,$ where $\\Phi (\\cdot )$ is the cumulative distribution function of the standard Gaussian and $f(\\cdot )$ is now a latent function that cannot be inferred in closed-form due to the probit likelihood.", "In this paper we use the variational GP classifier , which approximates the latent posterior $p(f(\\cdot ) \\mid X, y)$ with a multivariate Gaussian $q(f)=\\mathcal {N}(\\mu ,\\Sigma )$ .", "The optimal $q(f)$ is found by maximising the evidence lower bound (ELBO), $ \\textnormal {ELBO} =&\\, \\mathbb {E}_q[ \\log p \\, (y \\mid f, \\theta ) ] - \\textnormal {KL}( \\, q(f) \\, || \\, p(f) \\, ) \\,,$ with respect to $\\mu $ , $\\Sigma $ and $\\theta $ , where $\\textnormal {KL}( \\, q(f) \\, || \\, p(f) \\, ) = \\int q(f) \\log \\frac{q(f)}{p(f)} \\textnormal {d}f$ is the Kullback-Leibler divergence from $q(f)$ to the prior $p(f)$ .", "Depending on the task either the log-marginal likelihood (REF ) or the ELBO (REF ) can be used for model selection (e.g.", "selection of the kernel and its hyperparameters) , ." ], [ "Kernels for microbiome analysis", "The choice of kernel encodes the modelling assumptions of the kernel two-sample test or the GP model and so has a critical effect on their behaviour.", "We now describe the characteristics of 16S rRNA gene sequencing datasets and how they motivate the use of phylogenetic kernels." 
], [ "16S rRNA gene sequencing", "A 16S rRNA gene sequencing dataset typically consists of three elements: a count matrix $X \\in \\mathbb {Z}_{\\ge 0}^{n \\times p}$ , where $\\mathbb {Z}_{\\ge 0} = \\lbrace 0,1,2,\\ldots \\rbrace $ are the non-negative integers, containing $n$ samples and $p$ operational taxonomic units (OTUs); a phylogenetic tree describing the evolutionary relationships between the $p$ OTUs; and a set of host phenotypes.", "16S rRNA gene sequencing datasets are collected using a series of experimental then computational steps.", "The experimental steps – whose description is beyond the scope of this paper – produce a set of 16S rRNA gene sequences per sample.", "The subsequent computational steps (described in Figure REF ) pool reads from all samples and cluster them to 97% sequence similarity.", "Each cluster of sequences is an OTU, which is assigned its most central member as its representative sequence.", "The representative sequences are then used to (i) assign a taxonomic identification to the OTU using a reference database and (ii) infer a phylogenetic tree describing the evolutionary relationships between the OTUs.", "The OTU count matrix is then constructed such that its $ij^\\textnormal {th}$ element is the number of occurrences of OTU $j$ in sample $i$ .", "Figure: Computational steps required to collect a 16S rRNA gene sequencing dataset using QIIME2 and FastTree2 , .", "Figure created using BioRender.com." ], [ "Motivation for phylogeny-aware kernels", "The phylogenetic relationships present in 16S rRNA gene sequencing datasets have important implications for both the two-sample test and GP regressions.", "In the two-sample test the choice of kernel function determines the properties of the RKHS $\\mathcal {H}$ and so the behaviour of the test statistic (REF ).", "Meanwhile for a GP, the kernel defines the covariance structure of the prior and so has a strong effect on the functions that can be learnt.", "In both cases, using a standard kernel (e.g.", "an RBF or Matern) can lead to misleading results as they ignore the phylogenetic relationships in the data.", "In this section we illustrate this fact for the kernel two-sample test on a very simple example.", "The behaviour of the kernel two-sample test is determined by the kernel mean embeddings $\\mu _P$ and $\\mu _Q$ , which are in turn determined by the kernel function.", "If $\\mu _P$ and $\\mu _Q$ are injective then the corresponding kernel is said to be characteristic and $\\textnormal {MMD}(P,Q)=0$ if and only if $P=Q$ .", "Both the RBF and Matern family of kernels are characteristic, which is often cited as an advantage in the context of a two-sample test.", "However, consider a scenario where $P$ and $Q$ each contain the marginal distributions of two OTUs but where the marginals are swapped between $P$ and $Q$ .", "A non-phylogenetic, characteristic kernel will find $\\textnormal {MMD}(P,Q) \\gg 0 $ irrespective of the similarity of the OTUs (Figure REF (A)).", "Assume now that the two OTUs are very similar – or maybe even indistinguishable.", "A phylogenetic kernel naturally encodes the phylogenetic distance between the two OTUs and the resulting two-sample test is therefore more appropriate for microbial applications (Figure REF (B)).", "Note that such a phylogenetic kernel is not characteristic by definition as the kernel mean embeddings are surjective by design.", "Figure: Visualisation of the kernel mean embeddings of two distributions PP and QQ, where each contain the marginal densities of 
two indistinguishable OTUs.", "A characteristic kernel leads to a large MMD (plot A) but if the kernel models phylogenetic relationships it correctly finds that the distance between $P$ and $Q$ is small (plot B)." ], [ "UniFrac kernel", "Due to the close relationship between similarities and distances it is possible to compute a kernel that corresponds to a distance metric.", "Given a sample-wise distance matrix $\Delta $ for $n$ samples $x_1,\dots x_n$ , the corresponding kernel matrix $K$ such that $K_{ij}=k(x_i,x_j)$ is given by $K=-\frac{1}{2} J \Delta J$ , where $J = I - \frac{1}{n} 1_n 1_n^T$ is the centring matrix and $1_n$ is an $n$ -dimensional vector of ones.", "This is a natural approach to construct kernels for microbial studies, as such distance metrics are ubiquitous for exploratory analyses such as principal coordinate analysis.", "The (unweighted) UniFrac distance between two samples is the ratio of the unshared branch lengths between the two samples to the total branch lengths in the tree, $ d_\text{uf-uw}(x, x^{\prime }) = \dfrac{\sum _{j=1}^p l_j | \mathbb {1}(x^{(j)}>0) - \mathbb {1}(x^{\prime (j)}>0) |}{\sum _{j=1}^p l_j \max (\mathbb {1}(x^{(j)}>0), \mathbb {1}(x^{\prime (j)}>0))} \,,$ where $l_j$ is the branch length between taxon $j$ and the root and $\mathbb {1}(x^{(j)}>0)$ is an indicator function for whether taxon $j$ appears in sample $x$ .", "A weighted variant of the UniFrac distance also exists, where the branch length ratios are weighted by the abundances in the two samples." 
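A minimal numpy sketch of the simplified unweighted UniFrac distance above and the distance-to-kernel conversion is given below. The branch lengths and counts are hypothetical toy values; note also that some implementations apply the double-centring to element-wise squared distances, whereas we follow the formula as stated.

```python
import numpy as np

def unifrac_unweighted(x, xp, branch_lengths):
    """Simplified unweighted UniFrac as above, with l_j the taxon-to-root length."""
    px, pxp = x > 0, xp > 0
    unshared = branch_lengths @ np.abs(px.astype(float) - pxp.astype(float))
    total = branch_lengths @ np.maximum(px, pxp).astype(float)
    return unshared / total

def distance_to_kernel(Delta):
    """Double-centre a distance matrix: K = -1/2 J Delta J, J = I - (1/n) 11^T."""
    n = Delta.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * J @ Delta @ J

# Toy example: 4 samples, 3 taxa, hypothetical branch lengths.
rng = np.random.default_rng(2)
X = rng.poisson(2.0, size=(4, 3))
X[:, 0] += 1                       # ensure no sample is empty (avoids 0/0)
l = np.array([0.3, 0.5, 0.2])
Delta = np.array([[unifrac_unweighted(a, b, l) for b in X] for a in X])
K = distance_to_kernel(Delta)      # UniFrac kernel matrix
```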
], [ "String-based kernels for microbiome analysis", "The aim of this study is to investigate the benefits of explicitly modelling the phylogenetic relationships between OTUs using string kernels.", "This section describes two approaches to constructing phylogenetic kernels using three types of string kernels (Spectrum, Mismatch and Gappy pair), which measure the similarity between the representative sequences of pairs of OTUs.", "Each OTU is defined by a representative DNA sequence of $\sim $ 200 base pairs.", "OTU-wise similarity can therefore be quantified using string kernels, which were developed in natural language processing for text classification and quickly became popular for the classification of protein sequences in combination with support vector machines.", "However, in these sequence classification tasks the samples themselves are strings, while in 16S rRNA gene sequencing datasets samples are count vectors whose dimensions (the OTUs) are related to one another by strings (the representative sequences).", "This distinction means that the string kernels in this study are used to construct an inner product space in which sample similarity is computed.", "For clarity, we use the notation $q(\cdot ,\cdot )$ for a kernel that operates feature-wise (as opposed to sample-wise kernels $k(\cdot ,\cdot )$ ), although there is no real distinction between the two types of kernel.", "An OTU-wise similarity matrix $S$ with elements $(S)_{ij} = q(z_i,z_j)$ , for OTUs with representative sequences $z_i$ and $z_j$ , $i,j=1, \dots , p$ , defines an inner product $\langle x, x^{\prime } \rangle _S = x^{\prime T} S x$ , where $x,x^{\prime }$ are the $p$ -dimensional count vectors containing the abundances of the OTUs whose similarities are encoded in $S$ .", "If these abundances are stored in the rows of an $n\times p$ matrix $X$ then the kernel matrix is given by $X S X^T$ .", "The simplest string kernel is the Spectrum kernel, which is defined by a feature mapping that counts the number of $k$ -mers that appear in string $s$ , $\phi (s) = (h^{\textnormal {spec}}_u(s))_{u \in \mathcal {A}^k} \,,$ where $h^{\textnormal {spec}}_u(\cdot )$ counts the number of occurrences of substring $u$ and $\mathcal {A}^k$ is the set of possible $k$ -mers in alphabet $\mathcal {A}$ .", "When analysing DNA sequences, $\mathcal {A} = \lbrace \textnormal {T}, \textnormal {G}, \textnormal {C}, \textnormal {A}\rbrace $ for the four nucleotides, and so the $k$ -mer feature space $\mathcal {A}^k$ has size $4^k$ .", "The resulting kernel is the inner product $ q(z,z^{\prime }) = \langle \phi (z), \phi (z^{\prime }) \rangle _{\mathcal {A}^k},$ where $z,z^{\prime }$ are the representative sequences of two OTUs.", "Figure REF illustrates the $S=(q(z_i,z_j))_{i,j=1}^p$ matrices for Spectrum kernels with $k$ -mer lengths $k \in \lbrace 10,30\rbrace $ , computed using the 1,189 OTUs in the respiratory disease dataset utilised throughout this study (described in Section ).", "Smaller values of $k$ produce a matrix with many non-zero elements while larger values of $k$ induce a block diagonal structure, with blocks corresponding to clades of closely-related OTUs.", "Figure: Spectrum kernels for $k$ -mer lengths of 10 (A) and 30 (B).", "Coloured bars indicate the Order of each OTU, illustrating how blocks of OTUs with high string similarity correspond to taxonomic classifications.", "The 100 most abundant OTUs from the chronic respiratory disease dataset used in the simulation studies are plotted.", "During replication DNA sequences undergo mutation, mainly in the form of insertions/deletions (indels) and substitutions, but sequences related by such mutations would not be recognised as similar by the Spectrum kernel.", "The Mismatch kernel addresses this by allowing for up to $m$ mismatches per $k$ -mer, where $m$ is an additional hyperparameter whose maximum value is $k-1$ .", "Its feature map is given by $\phi (s) = (h^{\textnormal {mis}}_{u,m}(s))_{u \in \mathcal {A}^k} \,,$ where $h^{\textnormal {mis}}_{u,m}(\cdot )$ counts the number of occurrences of any substring with at most $m$ mismatches with $u$ .", "The Gappy Pair kernel allows for matches between a pair of $k$ -mers with up to $g$ gaps, where $g$ is an additional hyperparameter.", "Its feature map is $\phi (s) = (h^{\textnormal {gap}}_{u,g}(s))_{u \in \mathcal {A}^k} \,,$ where $h^{\textnormal {gap}}_{u,g}(\cdot )$ counts the number of occurrences of any substring that matches $u$ with at most $g$ gaps." 
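A minimal sketch of the Spectrum kernel and the induced sample-wise kernel $X S X^T$ is shown below; the representative sequences and count matrix are hypothetical toy values, and efficient trie-based implementations are discussed in the next section.

```python
import numpy as np
from collections import Counter

def spectrum_kernel(z, z_prime, k):
    """q(z, z') = inner product of the two sequences' k-mer count vectors."""
    counts = Counter(z[i:i + k] for i in range(len(z) - k + 1))
    counts_p = Counter(z_prime[i:i + k] for i in range(len(z_prime) - k + 1))
    return sum(c * counts_p[u] for u, c in counts.items())  # only shared k-mers contribute

# Hypothetical representative sequences for p = 3 OTUs.
seqs = ["ACGTACGTAA", "ACGTACGTTA", "TTGGCCAATT"]
S = np.array([[spectrum_kernel(a, b, k=3) for b in seqs] for a in seqs])

# Sample-wise kernel K = X S X^T for an n x p count matrix X.
X = np.array([[5, 0, 2], [1, 3, 0]])
K = X @ S @ X.T
```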
], [ "Computing String kernels", "Efficient implementations of String kernels rely on tries, a tree data structure whose leaves represent a set of sequences and where all the children of an internal node have the same prefix .", "Tries allow for far more efficient $k$ -mer lookups than a naive search in the size of the $k$ -mer space, which is exponential in $k$ ($|\\mathcal {A}_k|=4^k$ ).", "When using tries the time complexity to compute one element in a Spectrum kernel is $\\mathcal {O}(k(|z|+|z^{\\prime }|))$ for $k$ -mer length $k$ and sequences $z,z^{\\prime }$ with lengths $|z|, |z^{\\prime }|$ , which is linear in $k$ .", "The time complexity of the Mismatch kernel is $\\mathcal {O}(k^{m+1} |\\mathcal {A}_k| (|z|+|z^{\\prime }|))$ , which is an increase of $k^m 4^k$ relative to the Spectrum kernel.", "For a single element of the Gappy pair kernel the running time is $\\mathcal {O}(k^g(|z|+|z^{\\prime }|))$ , which is an increase by a factor of $k^{g-1}$ relative to the Spectrum kernel .", "The empirical compute times for the same respiratory disease dataset used to produce Figure REF are shown in Figure REF , which shows that the Mismatch kernel requires at least 3 orders of magnitude more time than a Spectrum or Gappy pair kernel for the same $k$ -mer length.", "For the Spectrum, Gappy pair kernels and Mismatch kernels with $m\\le 2$ the compute time plateaus once it reaches some value of $k$ (the specific value depends on the type of kernel).", "This is because for all any moderately large $k$ the number of leaves in the trie (which is $4^k$ ) is far larger than the number of $k$ -mers actually present in the two strings $z$ and $z^{\\prime }$ , meaning that large parts of the tree are unpopulated.", "These unpopulated subtrees are pruned before conducting the $k$ -mer search and so increasing the value of $k$ does not increase the size of the search in practice .", "While the time complexity of computing String kernels can be restrictive this is mitigated by a combination of two factors.", "Firstly, the elements of a kernel are independent and so the computational time can be easily reduced using distributed computing infrastructure (so-called embarrassingly parallel computations).", "Secondly, the nature of microbiome dataset analysis means that the definitions of the OTUs (via their representative sequences) are fixed once the initial pre-processing has been completed.", "The entire kernel matrix can therefore be computed in advance and stored for future use, and so a computation time on the order of days is feasible as it only has to be performed once.", "Figure: Empirical computation times for the string similarity matrix SS for 1,189 OTUs with different hyperparameter values.", "Calculations were run on 8 threads of an Intel(R) Xeon(R) CPU using the Kebabs package for R ." 
], [ "Simulating realistic fictitious OTU counts", "Recall that the three components of a 16S rRNA gene sequencing dataset are the phylogenetic tree, OTU count matrix and host phenotypes.", "Simulations used to benchmark statistical tools require a realistic simulation procedure, which in the case of microbial datasets means simulating an underlying evolutionary process.", "This difficult task can be avoided by using the tree of an observed dataset and assuming a parametric generative model for the corresponding OTU counts.", "In both sets of simulations in this paper we use a dataset from the respiratory microbiome of patients with chronic respiratory disease .", "This contains $p=1,189$ OTUS measured in 107 individuals with one of cystic fibrosis (83 samples) and non-cystic fibrosis bronchiectasis (24 samples).", "The collection and preparation of this dataset has been described previously , .", "By utilising this real dataset we also have access to its phylogenetic tree, which is inferred from the representative sequences .", "We follow previous studies and model these observed counts as a Dirichlet-multinomial (DMN) for the purposes of simulating realistic fictitious counts , , , , , .", "Given Maximum-likelihood estimates of the DMN parameters, we can then generate fictitious but realistic OTU counts with real phylogenetic relationships between OTUs." ], [ "Dirichlet-multinomial models of OTU counts", "The $\\textnormal {DMN}(N,\\alpha )$ is a compound distribution over non-negative integers $\\mathbb {Z}_{\\ge 0}$ that is parametrised by a vector of concentrations $\\alpha \\in \\mathbb {R}_+^p$ and $N \\in \\mathbb {Z}^n$ trials, where $p$ is the number of categories .", "A sample $x \\in \\mathbb {Z}_{\\ge 0}^p$ is modelled as $\\theta \\sim \\, \\textnormal {Dirichlet}(\\alpha )\\,, \\qquad x \\sim \\, \\textnormal {Multinomial}(N, \\theta )\\;,$ where $\\theta =\\lbrace \\theta _j\\rbrace _{j=1}^p$ is a vector containing the multinomial probabilities such that $\\sum _{j=1}^p \\theta _j=1$ .", "The number of categories $p$ corresponds to the number of OTUs, while the number of trials $N$ is the total number of reads per sample.", "Here, we model the number of trials $N \\in \\mathbb {Z}^n$ using a negative binomial distribution to emulate the common scenario where different samples contain different numbers of reads (see Figure REF (A)))." 
], [ "Accounting for compositional effects via transformations", "There is a growing consensus that microbiome datasets are compositional in nature , , meaning that each sample $x = (x^{(1)} \\,, \\ldots \\,, x^{(p)}),\\; x^{(j)} > 0, \\quad j=1, \\ldots \\,,p$ lives on the $p$ -simplex.", "Note that the DMN model of OTU counts includes compositional effects as the multinomial probabilities live on the $p$ -simplex and the subsequent multinomial sampling step simulates the observed counts.", "Compositional data can be transformed to Euclidean space using the centre log-ratio (CLR) transform $\\textnormal {clr}(x) = \\left( \\log \\frac{x^{(1)}}{g(x)} \\,, \\ldots \\,, \\log \\frac{x^{(p)}}{g(x)} \\right)$ , where $x^{(j)}$ is the $j^\\textnormal {th}$ element of the composition $x$ and $g(x)=\\left(\\prod _{j=1}^{p} x^{(j)}\\right)^{1/p}$ is the geometric mean of the composition .", "Applying a CLR transform prior to multivariate analysis is a commonly-used approach to account for compositional effects but requires that the resulting quantities are interpreted as log-ratios relative to the sample geometric mean, rather than in terms of absolute abundance .", "We follow that approach here and apply a CLR transform to the observed counts before computing the RBF, Matern32, Linear or String kernels.", "As the CLR transform does not preserve zeros it is not appropriate for use with the UniFrac kernel, as zeroes are required to determine which branches of the phylogenetic tree are shared between a pair of samples.", "We therefore transform counts using $\\log (x+1)$ instead prior to computing the UniFrac kernel." ], [ "Simulation setup", "In this simulation study we consider two probability distributions, $P = \\textnormal {DMN}(N, \\alpha _1) \\,, \\quad Q = \\textnormal {DMN}(N, \\alpha _2) \\,,$ meaning that the difference between $P$ and $Q$ is fully defined by the relationship between the concentrations $\\alpha _1$ and $\\alpha _2$ .", "This simulation study demonstrates that only phylogenetic kernels offer two-sample tests that are sensitive to the phylogenetic scale of the difference between $P$ and $Q$ .", "This is achieved by restricting the phylogenetic scale of the difference between $\\alpha _1$ and $\\alpha _2$ .", "Consider a scenario where each OTU is assigned to one of a set of clusters $\\mathcal {C}$ , where $\\mathcal {C} = \\lbrace c_1 \\,,\\, \\ldots \\,,\\, c_{|\\mathcal {C}|}\\rbrace $ .", "As each OTU is assigned to a single cluster, it is possible to write the elements of $\\alpha _1$ as the union of disjoint subsets $\\bigcup _{k=1}^{|\\mathcal {C}|} \\alpha _1^{(c_k)}$ , where each subset contains the DMN concentrations corresponding to a single cluster of $\\mathcal {C}$ .", "It is then possible to define a set of permutation operations $\\pi _\\mathcal {C}$ which satisfy $ \\alpha _2 = \\pi _\\mathcal {C}(\\alpha _1) \\Rightarrow \\alpha _1^{(c_k)} = \\alpha _2^{(c_k)} \\quad \\forall c_k \\in \\mathcal {C} \\,, \\quad \\forall \\hat{\\pi }_\\mathcal {C} \\in \\pi _\\mathcal {C} \\,.$ This ensures that the set of concentrations assigned to a cluster in $P$ are identical to the concentrations for that cluster in $Q$ .", "The specific OTUs to which a concentration is assigned may differ between $P$ and $Q$ if the cluster contains more than one item.", "If the clustering $\\mathcal {C}$ is constructed based on the phylogenetic distances between OTUs then the difference between $P$ and $Q$ will be restricted to the same phylogenetic scale as the OTU cluster 
assignments.", "Given a phylogenetic tree, for any $\\varepsilon >0$ , there exists a set of OTU clusters $\\mathcal {C}_\\varepsilon = \\lbrace c_1 \\,,\\, \\ldots \\,,\\, c_{|\\mathcal {C}_\\varepsilon |}\\rbrace $ that satisfies $ \\Delta ^\\tau _{ij} \\le \\varepsilon \\Delta ^\\tau _{\\max } \\,, \\quad \\forall i,j \\in c_k \\,, \\quad \\forall c_k \\in \\mathcal {C}_\\varepsilon \\,,$ where $\\Delta ^\\tau _{ij}$ is the distance between OTUs $i$ and $j$ along the branches of the phylogenetic tree and $\\Delta ^\\tau _{\\max }$ is the maximum distance between any two OTUs.", "Figure REF (A-B) illustrates the OTU clusters for a subset of OTUs (panel C) from the chronic respiratory disease dataset for $\\varepsilon \\in \\lbrace 0.03, 0.003\\rbrace $ .", "As the value of $\\varepsilon $ decreases there are a larger number of clusters, each of which contains a smaller number of OTUs.", "By combining the cluster definitions (REF ) with a permutation from $\\pi _\\mathcal {C}$ , it is possible to construct two populations of OTU samples, $P$ and $Q$ , where the differences between $P$ and $Q$ occur on a phylogenetic scale less than $\\varepsilon $ .", "The permutations corresponding to the clustering $\\mathcal {C}_\\varepsilon $ are denoted $\\pi _\\varepsilon $ from this point onwards, which is to say $\\pi _\\varepsilon := \\pi {_\\mathcal {C}}_{\\varepsilon }$ .", "The effect of the permutation on the DMN concentrations is illustrated in Figure REF .", "Figure: A and B : Clusters of OTUs for ε∈{0.03,0.003}\\varepsilon \\in \\lbrace 0.03, 0.003\\rbrace for a subset of the chronic respiratory disease dataset phylogenetic tree.", "Red boxes indicate clusters of OTUs and singleton clusters are not marked.", "These clusters are used to control the degree of phylogenetic differences between populations in the two-sample test.", "C: the region shown in panels A and B in the context of the entire tree.Figure: The difference between the two populations in the two-sample test simulation study is a permutation that restricts swaps to those within a set of clusters 𝒞 ε ={c 1 ,...,c |𝒞 ε | }\\mathcal {C}_\\varepsilon = \\lbrace c_1 \\,,\\, \\ldots \\,,\\, c_{|\\mathcal {C}_\\varepsilon |}\\rbrace .", "Here α i (c k ) \\alpha _i^{(c_k)} is the DMN concentration of the i t hi^\\textnormal {th} OTU in cluster c k c_k.", "In this example the clusters c 1 c_1, c 2 c_2 and c |𝒞 ε | c_{|\\mathcal {C}_\\varepsilon |} have sizes 3, 1 and 2 respectively." 
], [ "Two-sample test simulation study results", "The final simulation setup for the two-sample test is $N \\sim &\\, \\textnormal {NB}(10^5, b), \\\\X =&\\, \\lbrace x_i\\rbrace _{i=1}^{n_x} \\sim \\textnormal {DMN}(N, \\alpha _1), \\\\Y =&\\, \\lbrace y_i\\rbrace _{i=1}^{n_y} \\sim \\textnormal {DMN}(N, \\alpha _2), \\quad \\text{with }\\alpha _2 = \\pi _\\varepsilon (\\alpha _1) \\,, $ where the scale of phylogenetic differences between two populations is controlled by $\\varepsilon $ and $\\textnormal {NB}(a,b)$ is a negative binomial density with mean $a$ and dispersion $b$ .", "We consider $\\varepsilon \\in \\lbrace 0, 10^{-2}, 10^{-1}, 1\\rbrace $ , where $\\varepsilon =0$ corresponds to the null hypothesis and $\\varepsilon =1$ corresponds to a single cluster containing all $p$ OTUs.", "Throughout these experiments $n_x=n_y=n$ , where $n \\in \\lbrace 25,50,100,200\\rbrace $ is the group size and the dispersion parameter $b$ for the total reads per sample takes one value from $b \\in \\lbrace 3,10,30\\rbrace $ (see Figure REF (B)).", "Results in the main text use $b=10$ as this was representative of all values of $b$ that were investigated.", "The aim of the study is to investigate the behaviour of the two-sample test with $\\widehat{\\textnormal {MMD}}_k(X,Y)$ as the test statistic.", "An appropriate kernel induces a two-sample test which has well-calibrated Type I error and high power, but is also sensitive to the value of $\\varepsilon $ .", "In this study we compare the performance of the test using the following kernels: the (i) Spectrum kernel with $k \\in \\lbrace 2, \\ldots , 30\\rbrace $ , (ii) Mismatch kernel with $k \\in \\lbrace 2, \\ldots , 15\\rbrace $ and $m \\in \\lbrace 1,2,3,4,5\\rbrace $ , (iii) Gappy pair kernel with $k \\in \\lbrace 2, \\ldots , 15\\rbrace $ and $g \\in \\lbrace 1,2,3,4,5\\rbrace $ and (iv) weighted and unweighted UniFrac kernel.", "We also include the following abundance-only kernels: (i) RBF and (ii) Matern32 kernels with median heuristic lengthscale , and (iii) linear kernel.", "We generated 100 datasets using (REF )-() and used the fraction in which $H_0$ is rejected is used to evaluate the behaviour of the two-sample test with a given kernel.", "In each replicate we set $\\alpha _1$ to be a permuted version of the Maximum likelihood estimates of the DMN concentrations for the chronic respiratory disease dataset described in Section .", "We use a nominal significance level of 0.1, for which a well-calibrated test rejects $H_0$ close to 10% of the time when data are simulated under the null hypothesis.", "When $\\varepsilon =0$ (i.e.", "$P=Q$ ), if the observed rate of $H_0$ rejections is significantly different from 10% then the Type I error of the test is poorly-calibrated.", "When $\\varepsilon >0$ , $P\\ne Q$ and so a higher rate of $H_0$ rejections indicates higher power.", "Figure REF shows the $H_0$ rejection rate for the Spectrum kernel with $k=30$ (top row), the Unweighted and Weighted UniFrac kernels (middle row) and the three abundance-only kernels (bottom row).", "The results for other string kernels (Mismatch, Gappy pair and Spectrum with other $k$ -mer lengths) are included in the Supplementary Material (Figure REF ).", "We observe that all kernels induce a test with well-calibrated Type I error (left-hand column).", "When $\\varepsilon >0$ the Spectrum kernel has a higher power than both the weighted and unweighted UniFrac kernel for an appropriate choice of $k$ ($k\\ge 20)$ , with the power of the test increasing with $k$ 
.", "A complete set of results for the string kernel hyperparameters can be found in the Supplementary Material (Section REF ).", "Figure: Rate of null hypothesis rejections in the two-sample test simulation study for: (A) Spectrum kernels, (B) UniFrac kernels and (C) abundance-only kernels.", "The solid red line denotes the nominal significance level (0.1) and the dashed lines show its 95% binomial proportion confidence interval.", "Results for the full set of string kernel hyperparameters can be found in Figure .The abundance-only kernels at first glance may seem to be the optimal choice as they have the highest power.", "However, this is actually a drawback as they are overly sensitive to differences between $P$ and $Q$ that may not have biological relevance.", "These kernels do not model any phylogenetic relationships and weight all differences between OTUs equally.", "They are therefore very likely to reject $H_0$ based on differences between very closely-related (and often indistinguishable) OTUs.", "As stated previously, an appropriate two-sample test for microbial applications should be sensitive to the phylogenetic scale on which $P$ and $Q$ differ.", "For a single replicate the DMN concentrations $\\alpha _1$ are fixed, from which $\\alpha _2$ are obtained using $\\pi _\\varepsilon (\\cdot )$ using a sequence of increasing $\\varepsilon $ values.", "Therefore, an appropriate RKHS for microbiome applications should produce larger MMD values when $\\varepsilon =1$ than when $\\varepsilon =0.1$ .", "The two scenarios represented by these values of $\\varepsilon $ are very different, as $\\varepsilon =1$ imposes no phylogenetic restrictions on the differences between the probability distributions $P$ and $Q$ , but $\\varepsilon =0.1$ forces any differences to occur amongst OTUs that are at most 10% of the total phylogenetic variation apart.", "Figure REF (A) shows that the MMD value when $\\varepsilon =0.1$ is far smaller than its value when $\\varepsilon =1$ for the Spectrum $k=30$ and Unweighted UniFrac kernel.", "Figure REF (A) also suggests that the Linear and RBF kernels produce smaller MMD values when $\\varepsilon =0.1$ than when $\\varepsilon =1$ , although not to the same degree.", "We now show that this difference in MMD is unrelated to phylogeny.", "To do so, we compare the MMD when $\\alpha _2$ is computed using a set of clusters with the same sizes as $\\mathcal {C}_\\varepsilon $ , but whose labels are assigned at random (without using the phylogenetic tree).", "The result is a set of permutations with the same properties as $\\pi _\\varepsilon $ but that have no relation to phylogeny.", "Figure REF (B) compares MMD values calculated when $\\alpha _1$ and $\\alpha _2$ are related to one another by permutations with and without phylogenetic information.", "MMDs for the Spectrum ($k=30$ ) and Unweighted UniFrac kernels have distinct MMD distributions between the two scenarios, but abundance-only (Linear and RBF) kernels have identical distributions.", "In conclusion, this simulation study demonstrates that Spectrum kernels offer higher power than UniFrac kernels, while still modelling phylogenetic features of microbial datasets.", "Figure: (A): the ratio between MMD 2 (X,Y)\\textnormal {MMD}^2(X,Y) when ε=0.1\\varepsilon =0.1 and ε=1.0\\varepsilon =1.0 shows that the kernels that only model OTU abundances have similar MMD values for very different phylogenetic scenarios, while phylogenetic kernels (spectrum-30 and unweighted UniFrac) have far lower MMD values when 
, "(B): defining OTU clusters without using phylogeny does not change the MMD values for abundance-only kernels." ], [ "Simulation study II: Host trait prediction using Gaussian processes", "Host trait prediction is another important task in microbial studies.", "The aim of this set of simulations is to identify scenarios under which a phylogenetic kernel improves the training data fit and predictive performance of a Gaussian Process model." ], [ "Simulation setup", "We use the same setup to simulate OTU abundances $X \\in \\mathbb {Z}_{\\ge 0}^{n \\times p}$ as in the previous section, but with a single population with DMN concentrations $\\alpha $ .", "Once again these are a permutation of maximum-likelihood concentration estimates from the chronic respiratory disease dataset.", "We follow previous work and assume that the relative abundance of each OTU in a sample is the relevant quantity when determining host phenotype.", "In this section we simulate both continuous and binary host phenotypes.", "Given the simulated OTU counts, a fictitious continuous host phenotype $y \\in \\mathbb {R}^n$ is generated from the relative abundances $Z \\in [0,1]^{n \\times p}$ , where $Z_{ij} = \\frac{X_{ij}}{\\sum _k{X_{ik}}}$ , using a linear model of the form $ y = \\beta Z + \\eta \\,, \\quad \\eta \\sim \\mathcal {N}(0, \\rho ^2) \\,,$ where $\\beta \\in \\mathbb {R}^p$ are effect sizes.", "The variance of $\\beta Z$ is fixed to 1 throughout and two noise levels, defined by $\\rho \\in \\lbrace 0.3, 0.6\\rbrace $ , were tested, corresponding to signal-to-noise ratios of $\\frac{10}{3}$ and $\\frac{10}{6}$ .", "Similarly, a fictitious binary host phenotype can be generated using the following thresholded version of (REF ): $ y = \\mathbb {1}(\\beta Z + \\eta \\ge 0) \\,, \\quad \\eta \\sim \\mathcal {N}(0, \\rho ^2) \\,,$ where $\\rho ^2=0.1$ ." ], [ "Phylogenetic OTU effect sizes", "The phylogenetic component of the simulation is introduced via the OTU effect sizes $\\beta $ , which are assigned to clusters of OTUs in two scenarios, each of which represents a distinct biological hypothesis: OTU effects are driven by the 16S rRNA gene sequence and so phylogenetically similar OTUs have similar effects; or OTU effects are assigned at random and are unrelated to the tree and the 16S rRNA gene sequence.", "Scenario 1 is achieved by clustering the 1,189 OTUs in the same manner used in the two-sample test simulations with $\\varepsilon =0.1$ , while Scenario 2 assigns clusters at random.", "The distribution of OTU effect sizes in the two scenarios is illustrated in Figure REF .", "Given a set of OTU clusters, ten are sampled without replacement and assigned cluster-level effects $\\tilde{\\beta } \\sim \\mathcal {N}(0, 10 \\, I_{10})$ .", "The OTU-level effects are given by $\\beta _j = {\\left\\lbrace \\begin{array}{ll}\\tilde{\\beta }_k & \\textnormal {if OTU } j \\textnormal { is in cluster } k \\\\0 & \\textnormal {otherwise} \\end{array}\\right.} \\quad j=1, \\ldots , p, \\, k=1, \\ldots , 10\\,,$ which results in a sparse $\\beta $ with ten unique values.", "Figure: Generating OTU effect sizes that are related to phylogeny (plot A) or are unrelated to phylogeny (plot B).", "Unmarked leaves denote OTUs with zero effect size in the phenotype model." 
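A sketch of this phenotype simulation (our own code; `labels` would be the cluster assignments from the eps = 0.1 clustering above under Scenario 1, or a random relabelling under Scenario 2):

import numpy as np

def simulate_phenotype(X, labels, rng, rho=0.3, n_causal=10, binary=False):
    """Linear phenotype from relative abundances with cluster-level OTU effects."""
    Z = X / X.sum(axis=1, keepdims=True)              # relative abundances
    causal = rng.choice(np.unique(labels), size=n_causal, replace=False)
    beta_tilde = rng.normal(0.0, np.sqrt(10.0), size=n_causal)
    beta = np.zeros(X.shape[1])
    for b_k, c in zip(beta_tilde, causal):
        beta[labels == c] = b_k                       # sparse beta, ten unique values
    signal = Z @ beta
    signal = signal / signal.std()                    # fix Var(beta Z) = 1
    y = signal + rng.normal(0.0, rho, size=len(signal))
    return (y >= 0).astype(int) if binary else y      # threshold for binary traits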
], [ "Results", "We generate 100 datasets for each of the six simulation setups described above – two regression models with different level of additive noise as well as one classification model; with effect size generated under Scenarios 1 and 2.", "For each of the datasets, GP models are trained using a linear and a string kernel.", "We include these kernels as the underlying phenotype model is known to be linear and so these two kernels are the optimal choices by design.", "Note that using a linear kernel for GP regression corresponds exactly to Bayesian linear regression.", "For the regression task, we use an exact GP regression, while for binary traits we use a variational GP with probit likelihood .", "Note that the three variants of the String kernel are considered together with hyperparameters selected by maximising the training objective: the log-marginal likelihood for GP regression and the evidence lower bound for the variational GP.", "See the Supplementary Material (Section REF ) for the hyperparameters chosen in each replicate of these simulations, which generally favour larger values of $k$ .", "The GP models are trained on a training set containing 80% of the samples; the remaining 20% is the test set.", "Kernel hyperparameter were selected by optimising the training objective - either log-marginal likelihood (LML) or evidence lower bound (ELBO) - with the optimised objective being used to evaluate the model fit alongside the log-predictive density (LPD) on the test set.", "In the regression case the difference between the LML of two models is a Bayes factor, while for classification the ELBO can be used analogously for model selection .", "Such an analysis can therefore be used to identify whether the factors controlling a host trait are related to the observed 16S rRNA gene sequence or if they are driven by other factors (such as areas of the bacterial genome that have not been sequenced or environmental factors).", "Figure REF (A) shows that the difference in training objective (LML or ELBO) between GP models with a linear kernel and a string kernel is effective at identifying the distribution of OTU effects on the phylogenetic tree.", "The LPD on the held-out data is also able to distinguish between the two scenarios (Figure REF (B)).", "Figure: (A): training objective (LML for GP regression models and ELBO for the variational GP) for GPs with String and Linear kernels.", "Red dots correspond to datasets simulated under Scenario 1 where OTUs effect size are driven by the 16S rRNA gene sequence while blue dots correspond to datasets where effect sizes are unrelated to the phylogenetic tree.", "(B): The corresponding log-predictive densities show similar behaviour." 
], [ "Real data applications - host trait prediction", "We now demonstrate string kernels on two host-trait prediction problems from real datasets.", "The first task ($n=388$ , $p=525$ ) is a regression task predicting vaginal pH from bacterial community composition and the second is a binary classification task ($n=107$ , $p=1,189$ ) classifying between two chronic respiratory diseases using the airway bacterial community .", "Note that the second task uses the same chronic respiratory dataset as the simulations but with the observed OTU counts and host phenotype.", "In the first task the sequences are clustered to 100% identity and so are termed amplicon sequence variants (ASVs) rather than OTUs, which are clustered to 97% identity.", "For these real dataset tasks we use ten-fold cross-validation to estimate the training objectives (log-marginal likelihood for GP regression and ELBO for the variational GP classifier) and log-predictive densities on the held-out samples.", "In each iteration of cross-validation we trained a GP model with a String and Linear kernel for consistency with the simulation study.", "The resulting training objectives are shown in Figure REF (A-B), which indicate that the String kernel is clearly the better model.", "In the regression case (Figure REF (C))) this also corresponds to better predictive performance on the held-out data.", "On the other hand, in the classification task the Linear kernel gives slightly better predictions than the String kernel (Figure REF (D))).", "However, the difference in log-predictive density is very small.", "Figure: Real data applications of host trait prediction using GPs.", "(A,B): predicting vaginal pH from vaginal bacterial community composition .", "(B,C): classifying chronic respiratory disease the airway bacterial community .", "Log-densities estimated using ten-fold cross-validation." 
], [ "Discussion", "These results demonstrate the utility of using kernels to model the phylogenetic relationships present in microbial datasets in two tasks: (i) the kernel two-sample test and (ii) host-trait prediction using GPs.", "Modelling phylogenetic relationships when performing the two-sample test results in a test that is sensitive to the phylogenetic scale of the differences between two populations, unlike tests that used kernels that only model abundance.", "We then showed how GPs with string kernels fit their training data better than linear, RBF and Matern32 kernels in respiratory disease and vaginal pH prediction tasks using real datasets.", "The two-sample test simulations demonstrated that popular characteristic kernels may not be appropriate for two-sample tests with 16S rRNA gene sequencing data, at least under the assumptions of these simulations.", "We considered scenarios where differences between the distributions $P$ and $Q$ occurred through permutations of the underlying $\\alpha $ , when there are many other ways for two populations to differ.", "However, this simulation setup was constructed to demonstrate the undesirable behaviours of the abundance-only kernels in this setting, as well as show that the phylogenetic kernels do not exhibit these behaviours.", "This aim was achieved and these findings are sufficient to warn against using Linear, RBF or Matern32 kernels in a two-sample test on OTU-level data (or at least to exercise caution when performing such tests).", "Our simulation results showed that a kernel two-sample test using a string kernel demonstrates the desirable property of being sensitive to the phylogenetic scale (denoted by $\\epsilon $ ) at which the difference between the two probability distributions $P$ and $Q$ occur.", "However, a method for tuning the String kernel hyperparameters to be sensitive to a desired value of $\\varepsilon $ is still required.", "This is left for future work.", "The host trait prediction simulation study showed that the GP training objective – either log-marginal likelihood (LML) or ELBO – of GP models using a string vs a linear kernel can be used as an indicator of the distribution of OTU effects on host phenotype across the phylogenetic tree.", "As the tree is constructed from the 16S rRNA gene sequences this summary statistic therefore quantifies the degree to which the OTU effects are explained by 16S rRNA gene sequence variation.", "If a GP with a linear kernel has a larger LML than one with a string kernel then the OTU effects must be explained by (i) variation in parts of the microbial sequence that have not been collected or (ii) by non-sequence (e.g.", "environmental) factors.", "However, this approach has only been shown to be effective when the assumptions of the simulation are met.", "The most important of these is that the host phenotypes depends linearly on the relative abundance.", "An interesting option for future work is to investigate the robustness of the results to mis-specification of the phenotype model (when the phenotype model contains non-linear dependencies but the phylogenetic kernel remains linear).", "However, one of the benefits of GPs is their modularity and so it is straightforward to combine string and characteristic kernels to model both phylogeny and nonlinear effects.", "One way to achieve this – also left for future work – is to replace the Euclidean distance in the RBF kernel with the distance between samples in $S$ : $k(x,x^{\\prime }) = \\exp \\left( -(x-x^{\\prime })^T S 
, "The resulting kernel is able to model both non-linear dependencies and phylogeny.", "A final limitation of these experiments is that they focus on modelling the phylogenetic relationships amongst the OTUs and have largely neglected some other important features of OTU count data: sparsity and zero-inflation.", "While the simulation setup ensured these features were present in the simulated OTU tables, they were not explicitly modelled by either the kernel two-sample test or the GP models.", "The aforementioned modularity of GPs also enables the construction of a GP that models both zero-inflation of counts and phylogenetic relationships by combining kernels.", "This modularity is one of the reasons why kernel methods are a popular approach for biological data integration, as their additive and multiplicative properties enable the straightforward combination of heterogeneous data types.", "This study focused on the kernel two-sample test as proposed by Gretton et al., which uses MMD as the test statistic, and on host trait prediction using GP models.", "Semi-parametric kernel regression methods (such as MiRKAT and its extensions) also rely on the properties of the RKHS induced by their choice of kernel.", "Practitioners typically use an RBF kernel in these settings, but phylogenetic kernels are likely to be more appropriate there as well.", "A natural extension of our approach is therefore to investigate the performance of string kernels in the context of semi-parametric kernel regression." ], [ "Acknowledgements", "Jonathan Ish-Horowicz was the recipient of a Wellcome Trust PhD studentship (215359/Z/19/Z).", "Supplementary Materials: Modelling phylogeny in 16S rRNA gene sequencing datasets using string kernels" ], [ "Reads per sample in observed datasets", "In the simulation studies we model the total reads per sample as a negative binomial with mean $a$ and dispersion $b$ .", "Figure REF (A) shows the empirical reads per sample in the two real datasets, while Figure REF (B) shows the negative binomial distributions used to simulate the total reads per sample in these simulations, which fix $a=10^5$ and $b \\in \\lbrace 3,10,30\\rbrace $ .", "Smaller values of $b$ result in datasets where the reads per sample are more dispersed.", "Figure: 16S rRNA gene sequencing datasets commonly exhibit variable numbers of reads per sample (plot A).", "This is emulated in the simulated datasets by modelling the number of reads per sample, $N$ , as being drawn from a negative binomial $\\textnormal {NB}(10^5, \\, b)$ with different values of the dispersion parameter $b$ (plot B)." 
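Under this mean-dispersion parametrisation, NB(a, b) has mean a and variance a + a^2/b, so smaller b spreads the reads per sample more widely. A quick numpy check (the mapping to numpy's (n, p) parametrisation is our own convention):

import numpy as np

rng = np.random.default_rng(1)
a = 1e5                                  # mean reads per sample
for b in (3, 10, 30):                    # dispersion values used in the simulations
    n_nb, p_nb = b, b / (b + a)          # NB(a, b) -> numpy's (n, p) parameters
    reads = rng.negative_binomial(n_nb, p_nb, size=100_000)
    # Empirical mean ~ a and variance ~ a + a**2 / b.
    print(b, round(reads.mean()), round(reads.var(), -3))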
], [ "Type I error and power of all string kernel hyperparameters", "Before applying String kernels it is necessary to select the $k$ -mer length as well as the number of mismatches ($m$ , for the Mismatch kernel) or number of gaps ($g$ , for the Gappy pair kernel).", "Figure REF shows that the String kernels all have well-calibrated Type I error for any choice of hyperparameters.", "However, the power of the test depends critically on the choice of $k$ , with larger values increasing the power of the test.", "The larger the value of $k$ , the more powerful the test for all three variants of the String kernel.", "For the Mismatch and Gappy pair kernels, the effect of $k$ is larger than that of their additional hyperparameter ($m$ or $g$ ).", "In addition, the Mismatch kernel has lower power than the Spectrum or Gappy pair kernel for a fixed value of $k$ , irrespective of the choice of $m$ .", "This dependence of power on $k$ can be explained by considering the role of $k$ -mer length when computing String kernels.", "A String kernel computes $k(x,x^{\\prime })=x S x^{\\prime T}$ , where the the length of $k$ -mer controls the entries of $S$ .", "Small values of $k$ (e.g.", "$k\\le 4$ ) result in an $S$ matrix that has few non-zero entries, effectively modelling all OTUs as highly related to one another (see Figure REF ).", "This means that larger values of $\\varepsilon $ or larger group sizes are required for a statistically significant MMD value, as differences between OTU abundances in $X$ and $Y$ are “smoothed” by the $S$ matrix.", "As $k$ increases $S$ approaches a block-diagonal structure, where the only non-zero entries are those corresponding to clusters of OTUs with very similar sequences.", "These $S$ matrices only smooth differences in $P$ and $Q$ if they occur between closely-related OTUs, resulting in tests with higher power.", "Figure: Null hypothesis rejection rate of string kernels with different hyperparameters at a nominal significance level of 0.1 (red line).", "These results use the CLR transform and b=10b=10 but are representative of all simulation scenarios tested.", "FAME (p=1,189p=1,189)" ], [ "Effect of string kernel hyperparameters in host trait prediction (GPs)", "Figure REF -REF show the number of times each value of $k$ , $m$ and $g$ were chosen in 100 replicates of the GP simulations.", "There is a preference for larger $k$ -mer length and a dependence on the sample size, as when $n=400$ the Gappy pair ($g=3$ ) kernel is more likely to have the largest training objective than when $n=200$ .", "The Mismatch kernel is selected less than the other two string kernel variants almost, suggesting that using a Spectrum or Gappy pair kernel is always the preferred option as they are both cheaper to compute.", "Figure: Number of times different String kernel hyperparameters are selected in 1,000 replicates of the GP classification experiments.", "String kernel hyperparameters are selected using the log-marginal likelihoods of the resulting GP model.", "These plots are for b=10b=10 but are representative of the results with other values.Figure: Number of times different String kernel hyperparameters are selected in 1,000 replicates of the GP regression experiments with σ 2 =0.3\\sigma ^2=0.3.", "String kernel hyperparameters are selected using the log-marginal likelihoods of the resulting GP model.", "These plots are for b=10b=10 but are representative of the results with other values.Figure: Number of times different String kernel hyperparameters are selected in 1,000 
replicates of the GP regression experiments with $\\sigma ^2=0.6$ .", "String kernel hyperparameters are selected using the log-marginal likelihoods of the resulting GP model.", "These plots are for $b=10$ but are representative of the results with other values." ] ]
2210.07696
[ [ "Fields2Cover: An open-source coverage path planning library for unmanned\n agricultural vehicles" ], [ "Abstract This paper describes Fields2Cover, a novel open source library for coverage path planning (CPP) for agricultural vehicles.", "While there are several CPP solutions nowadays, there have been limited efforts to unify them into an open source library and provide benchmarking tools to compare their performance.", "Fields2Cover provides a framework for planning coverage paths, developing novel techniques, and benchmarking state-of-the-art algorithms.", "The library features a modular and extensible architecture that supports various vehicles and can be used for a variety of applications, including farms.", "Its core modules are: a headland generator, a swath generator, a route planner and a path planner.", "An interface to the Robot Operating System (ROS) is also supplied as an add-on.", "In this paper, the functionalities of the library for planning a coverage path in agriculture are demonstrated using 8 state-of-the-art methods and 7 objective functions in simulation and field experiments." ], [ "Introduction", "In developed countries, there is a shortage of skilled workers to operate agricultural machinery [1].", "This shortage can be alleviated with the development of autonomous machinery.", "Unlike manually operated machinery, autonomous vehicle operations need meticulous planning beforehand.", "The problem of determining a path to cover a field is known as coverage path planning (CPP).", "CPP is of high importance for cleaning [2], surveillance robots [3], lawn mowers [4], and agricultural vehicles [5], where it has been addressed in several works.", "Whilst there have been many efforts, most of the (partial) CPP solutions have not been released as open-source software thus hindering more rapid advances in CPP by the scientific community.", "The packages shown in Table REF are the only open-source software to the best of our knowledge.", "Note that the software packages listed in Table REF solve the CPP problem partially, but require several modifications in order to be customized to different unmanned vehicles and applications.", "This paper aims to fill the above mentioned gap by proposing and releasing to the community an open-source CPP library for field coverage.", "The library was designed focusing in four modules that are the core of CPP solutions: a headland generator, a swath generator, a route planner, and a path planner.", "Each module includes at least one state-of-the-art method and one objective function.", "The library currently only supports convex fields.", "Regardless, there is an urgent need for an open source software solution to fill the existing gap in the CPP problem in agriculture.", "The ultimate goal of the library is to ease the state of-the-art algorithm benchmark and to accelerate CPP research and application.", "Table: Comparison between coverage path planning open-source software solutions.Owing to the non-holonomous nature of agricultural vehicles, a region of the field known as headlands must be reserved for turning the vehicle.", "The most basic approach is to allocate a constant width area around the field.", "This strategy allocates a large amount of space to a poor yield area.", "Depending on how the swaths are arranged, some headland areas are parallel to the swaths and hence they are not needed for turning.", "By only constructing headlands along the field edges where turns are made, the area reserved for them can be minimized [5], 
[17].", "Swaths are generated in the inner field, which is the remaining region after subtracting the headlands.", "In two-dimensional planar fields, a reference line can be applied as a guide for the generation of swaths, where each parallel creates a swath [17], [5], [18].", "This line can be chosen for convenience or by an algorithm such as brute force or a meta-heuristic.", "Oksanen [5] describes a driving angle search strategy that requires fewer iterations than brute force search but it does not guarantee finding the global minimum.", "Objective functions such as the number of turns or the sum of swath lengths are used to determine optimality in swath generation [17].", "The distance [17] and time [19] required to cover the field are affected by the order of the swaths.", "A route is the sequence of the swaths to cover.", "The Boustrophedon order, which travels the swaths sequentially from one side of the field to the other, and the snake order, which skips one swath at each turn and returns through the uncovered swaths, are popular preset routing patterns [20].", "Objective functions such as distance, number of rotations, or time necessary to traverse the field [17], [19] are minimized by finding the optimal route through meta-heuristics [21].", "A path is composed of the swaths of a route connected by turns, forming a continuous line along which the vehicle will drive.", "Dubins' [22] or Reeds-Shepp's [23] curves are turns that minimize the path length of the turns.", "These curves are made by either curve segments or straight lines.", "The main problem is that there is an instantaneous change of curvature at the transition point between two segments.", "Techniques such as numerical integrators [24] or clothoids [25] are employed to smooth the turn to avoid the curvature discontinuity.", "Furthermore, to navigate from a swath to the headlands, turns such as non-uniform rational B-spline (NURBS) curves [26] can be adopted.", "CPP problems are composed of numerous sub-problems, several of which have received special attention in literature.", "For example, Spekken [21] presents an approach for calculating the coverage path in undulating terrain that however does not consider turns between rows or headland creation.", "Nilsson [27] and Nørremark [28] divide the CPP problem into two major modules: Field Partitioning/Representation, where the distribution of headlands and swaths in the field is set up, and Route Planning, which determines the optimal order of travelling the swaths within sub-fields.", "In the latter framework, each module has more than one function, increasing the complexity of comparing multiple variations of the module." 
], [ "Existing open-source software", "There have been web applications, such as GAOS [18], that allowed farmers to design or adapt coverage paths with a user-friendly interface.", "Many of such web applications, despite being a great help to the farming community, have been developed in collaboration with companies, restricting the possibility to release the code to the public domain.", "The currently existing open source CPP repositories are listed in Table REF .", "Although seven other projects were found, none of them can be adopted for farming purposes with ground robots.", "As mentioned above, ground robots in agriculture are generally not-holonomous, so turning maneuvers must be planned to move from one swath to another.", "Unfortunately, some packages [6], [10], [12], [15] only compute the route to cover a region.", "These packages are designed for quadrotors [10], [12] or for indoor robots [6].", "However, the code needs to be modified to support path generation for non-holonomous robots.", "A special case of CPP is the Nobleo package [7] which, although the vehicle used is non-holonomous, uses a grid to define the nodes that should be covered at least once.", "In agriculture it is important to reduce the damage caused by the wheels of the vehicle, so it is not recommended to cover the same swath several times [7] or to cross through the main field [6], [10], [8].", "On the other hand, Greenzie [14], which was developed for lawn mowers, is the only package that supports headlands, along with Fields2Cover.", "Unlike arable farming, mowers are constrained to avoid repeated tracks for field traffic, thus the coverage path is created with random sweep angles.", "For this reason, Greenzie does neither provide an optimizer nor an objective function for planning the swaths.", "In contrast, Ipiano [15] provides an interface to change the objective function used by its optimizer, but here no headland support is offered.", "Fields2Cover is the only software solution that provides algorithms to create a coverage path for terrestrial agricultural robots, including optimizers and objective functions to generate the best path, headland support and turn planning." ], [ "Contributions", "This paper introduces the first open-source extensible CPP library for agriculture.", "It provides the following contributions: A publicly-available library (Fields2cover) providing connectable modules to address CPP problems with unmanned agricultural vehicles.", "Those modules can be effortlessly customized for other CPP problems.", "Benchmark tools for quantitative comparison between the CPP algorithms and approaches.", "A quantitative comparison using 38 convex fields between eight state-of-the-art CPP approaches/methods and seven objective functions.", "Experiments with a commercial unmanned agricultural vehicle demonstrating Fields2Cover's capability to provide real-world solutions.", "To the best of our knowledge, this is the first work proposing an open-source library for CPP on agriculture." ], [ "Fields2Cover", "Fields2Cover is designed in four modules (Fig.", "REF ): 1) Headland Generator, 2) Swath Generator, 3) Route Planner and 4) Path Planner.", "The inputs of the CPP problem are the shape of the field and the vehicle specifications, while the output is the coverage path of the field.", "Methods from the same module can be used interchangeably to compare their solutions independently from the rest of the CPP problem." 
], [ "Headland Generator module", "The Headland Generator module currently implements a single method that buffers the border of the field in inward direction by a custom constant width (see Module 1 in Fig.", "REF ).", "The objective function of this module is the area of the remaining field after removing the headlands.", "$A_r = \\frac{A_{\\bar{hl}}}{A_{f}}$ where $A_r$ is the area remaining, $A_{\\bar{hl}}$ the area of the field without headlands, and $A_{f}$ the area of the original field." ], [ "Swath Generator module", "The inner field (i.e., excluding the headlands) is the input of the Swath Generator module (see Module 2 in Fig.", "REF ).", "This region is divided into parallel swaths matching the operating width.", "In the current version, the library only supports parallel non-overlapping swaths.", "Fields2Cover has a brute force algorithm to find the optimal sweep angle by trying discretized angles using a given step size.", "If the computer running the library supports multiple threads, several sweep angles are tried in parallel [29].", "This module currently implements 3 objective functions: Minimize the Number of Swaths.", "This objective function depends on the shape and the area of the field, and the width of the robot.", "The number of swaths is limited by the equation: $0 \\le \\# S_{\\alpha } \\le \\frac{A_{\\bar{hl}}}{R_w},$ where $\\# S_{\\alpha }$ is the number of swaths for a given sweep angle $\\alpha $ , $A_{\\bar{hl}}$ is the area of the field without headland, and $R_w$ is the operational width of the robot.", "The shape of the field that maximizes the minimun number of swaths is the square field, which results in: $min_\\alpha \\ \\# S_{\\alpha }^{\\tiny \\diamondsuit } \\simeq \\frac{\\sqrt{A_{\\bar{hl}}}}{R_w},$ where $\\# S_{\\alpha }^{\\tiny \\diamondsuit }$ is number of swaths in a square field with a given sweep angle $\\alpha $ .", "Therefore, the optimal value of this objective function is less than the square root of the area of the field.", "Maximize the Field Coverage: $A_{cov} = \\frac{A_{\\bar{hl}} \\cap \\lbrace \\cup _i\\ S^i \\rbrace ) }{A_{\\bar{hl}}},$ where $A_{cov}$ is the amount of area covered, $A_{\\bar{hl}}$ is the field without headlands, $S^i$ is the $i^{th}$ swath, $\\cap $ is the intersection operator, and $ (\\cup _i\\ S^i)$ is the union of all the swaths.", "Minimize the Swaths Length: $\\sum _i^{N} length(S^i) = \\sum _{i}^{N} \\sum _{j}^{S^i_p - 1} ||S^i_{p=j+1} - S^i_{p=j}||_2,$ where $\\sum _i length(S^i)$ is the sum of the length of the swaths, $N$ is the number of swaths, $S^i_p$ is the number of points that the $i^{th}$ swath has, $^i_{p=j}$ is the $j^{th}$ point of the $i^{th}$ swath, and $||x||_2$ is the euclidian norm." 
], [ "Route Planner module", "The Route Planner module uses the swaths created earlier to produce the route (see Module 3 in Fig.", "REF ).", "Fields2Cover contains several predefined route patterns, which include the boustrophedon pattern, the snake pattern, the spiral pattern and a custom pattern.", "The Boustrophedon pattern covers the swaths sequentially, and the Snake pattern skips one swath each time to traverse the field in one direction and returns through covering the uncovered swaths.", "The Spiral pattern is a variation of the Snake pattern, that sort the swaths in clusters of a fixed size with the snake pattern.", "The custom pattern requires specification of the swath order by the user.", "To compare different routes, the library provides as objective function the length of the path generated by the Path Planning module.", "In addition, it provides the path length when the minimum turning radius of the vehicle is zero." ], [ "Path Planner module", "The inputs of the Path Planner module (see Module 4 in Fig.", "REF ) are the route (sorted swaths) and the vehicle parameters.", "Once the route is known, the turns to complete the path are computed.", "In the current version of the library, the path planner applies the same type of curves for all the headland turns.", "Fields2Cover currently supports straight curves, the Dubins' curves[22] and the Reeds-Shepp's[23] curves, using the path length as the single objective function." ], [ "ROS wrapper", "Although the Fields2Cover library does not depend on ROS, an interface with ROS is provided as an add-on.", "The fields2cover-roshttps://github.com/Fields2Cover/fields2cover-ros package provides functions that convert Fields2Cover data types into ROS messages.", "Services are created to execute modules directly from ROS topics.", "Launch files are used to script examples of the package.", "RVIZ-support is also provided to visualize the results of the modules.", "Methods, objective functions and parameters can be modified on real time thanks to rqt_reconfigurehttp://wiki.ros.org/rqt_reconfigure." ], [ "Design & Implementation", "Fields2Cover is implemented using C++17, with a Python interface using Swig [30], and released under BSD-3 license.", "The design of Fields2Cover aims to serve both scientists and service providers and is intended to be easily used.", "The reason for making Fields2Cover an open-source library is that doing so encourages the development of additional functionality by providing the code to the community.", "Likewise, Fields2Cover widely employs open-source libraries from third parties to streamline the development process of state-of-the-art algorithms.", "For scientists, priority is given to a flexible design, which allows to extend or modify existing algorithms.", "Additionally, a benchmark against which to compare new solutions is added.", "For service providers, utility concerns the ability to plan the best coverage path for a given objective function in a straightforward manner.", "The modularity of Fields2Cover is key to ensure its usefulness for both cases.", "In addition, the library provides tests, tutorials, and extended documentationhttps://fields2cover.github.io/ to reduce the learning curve." 
], [ "Results", "Several experiments were conducted to demonstrate the functionalities of Fields2Cover.", "Firstly, coverage paths were created for convex fields from the Nilsson's benchmark [27].", "In these simulations, the experiments focus on the optimization of the objective functions and the computation time of those methods.", "Secondly, real field experiments were conducted in an agricultural field with a commercial robot (Fig.", "REF ) of the company AgXeed B.V (The Netherlands).", "The aim of the experiment was to program the coverage trajectory of the robot using the Fields2Cover library and assess whether a designed coverage path is efficiently traversed by the robot.", "The planned path is previously transferred to the robot with Protobuf[31].", "The protobuf message defines the path as timestamps, positions, velocities and orientations.", "It also contains the geometry of the field boundary to prevent the vehicle from leaving the field.", "The sensor data collected during the coverage path, such as the GPS position and the velocity, is returned from the AgBot as a rosbag[32].", "Experiments were done with a laptop MSI GF627RE with Intel(R) Core(TM) i7-7700HQ CPU @ 2.80GHz (4 cores, 8 threads) with Ubuntu 20.04.5.", "Figure: The AgBot 5.115T2, from the company Agreed B.V (The Netherlands), is a differential robot with continuous treads.", "The weight of the robot is 7.8t, the total width of the robot is 2.5 m, the minimum turning radius avoiding excessive soil damage is 2.1 m. For the experiments the operational width of the robot (width of the coupled tool) was assigned the same value as the width of the robot.", "The AgBot 5.115T2 has 4-cylinder Deutz Diesel Engine, stage 5 with 156hp, and an electric drive train with a maximum speed of 13.5km/h.", "Some onboard sensors are 2 cameras, a RTK-GNSS receiver and an IMU." 
], [ "Simulation results", "Three simulation experiments were performed.", "Firstly, the optimal route was computed for three different fields to visually inspect the effects of the objective function.", "Secondly, the coverage path was computed for 38 convex fields with every possible combination of the algorithms provided by the library.", "The combination of algorithms for creating a coverage path were compared using the path length as the objective function.", "Thirdly, the time for computing coverage paths was recorded using several objective functions of the Swath Generator module.", "The relationship between the area of the field and the computation time was found.", "The first decision for coverage path planning of a field is the objective function to be optimised by the swath generator (Brute force algorithm).", "The optimal pitch angle of the swaths may vary with the chosen objective function.", "Therefore, the first experiment provides examples of optimal swaths for the fields REC_A, CIR_B and SAL_B from the Nilsson's benchmark [27], which are shown in table REF .", "The fields were re-scaled to an area of $100m^2$ .", "If the number of swaths is minimized, the number of turns is also reduced.", "For instance, fields CIR_B and SAL_B are covered using a single turn.", "If maximum field coverage is to be achieved, CIR_B needs seven turns while SAL_B needs five.", "Field coverage is typically achieved when swaths are parallel or perpendicular to one of the edges.", "In contrast, the swath-length objective function may produce many short swaths (bottom-left of CIR_B with swath length), that reduce the total length of the swaths.", "Table: Comparison of swaths generated using brute force optimizing one of the three objective functions: sum of swath lengths (minimization problem), number of swaths (minimization problem) and field coverage (maximization problem).", "The parallel lines inside the field are the centers of the generated swaths.Figure: Coverage path length comparison.", "Plots in the same columns correspond to the same objective function that was optimized using the brute force algorithm, while plots in the same row refer to the same route planner algorithm.", "Each dot represents a field from the Nilsson's benchmark .", "Red dots correspond to paths created by a implementing Dubins' curves, blue dots are for Reeds-Shepp's curves, and the black lines are for in-place turns.The second experiment was conducted using 38 convex fields of the Nilsson's benchmark [27], re-scalated to an area of 1 ha (Fig.", "REF ).", "For each field, a headland of $7.5m$ (three times the operational width of the robot) was generated with constant width generator.", "Then, the brute force algorithm generated the optimal swaths for each objective functions shown in Table REF .", "The route planners sorted the swaths with the boustrophedon, snake or spiral (bulk of 6 swaths) pattern.", "Lastly, the path length was used to compare the coverage path computed with Dubins' or Reeds-Shepp's curves.", "The resulting path lengths were compared against the length of paths with in-place turning (minimum turning radius equal to 0), which is the least possible path length for a holonomic vehicle.", "As shown in Fig.", "REF , a percentage between $0.5\\%$ and $50\\%$ of the coverage path was spent on turns.", "When the number of turns is reduced, the distance traveled is reduced accordingly.", "The distance used for turning increases when the boustrophedon pattern is applied since a shorter width between 
swaths requires a larger turn to comply with the minimum turning radius requirement.", "For instance, in the first column of Figure REF , the difference between the path length using Dubins' curves and in-place turns is smaller than in the other columns.", "Field coverage and swath length behaved similarly in terms of coverage path length.", "With any of the objective functions presented, the boustrophedon pattern produced the shortest path with in-place turns, the snake pattern the second shortest, and the spiral pattern the longest.", "The length of the boustrophedon pattern increases when a minimum turning radius is required.", "Figure: Time required to compute a path according to the objective function used.", "The algorithms used are the constant width headland generator, parallel brute force for swath generation, the Boustrophedon route order and Dubins' curves.", "In the last simulated experiment, the computation time of planning a coverage path was measured in relation to the area of the field and the objective function of the swath generator (Fig.", "REF ).", "The constant headland width was set to three times the width of the robot.", "Next, the parallel brute force algorithm optimized the pitch angle of the swaths, which were sorted using a boustrophedon pattern.", "Finally, the path planner used Dubins' curves to create the coverage path.", "Fields2Cover computed a coverage path for a field of 1 ha in less than 3.5 seconds using field coverage as the objective function, while only 0.5 seconds were needed using the number of swaths or the swath length as the objective function.", "Since the computation of those objective functions is proportional to the number of swaths, and the number of swaths is proportional to the width of the field perpendicular to the driving angle, the computation time grows proportionally to the square root of the area of the field.", "The computation time for all the objective functions can be approximated by: $T_c = C_0 \\cdot \\frac{\\sqrt{A_{\\bar{hl}}}}{R_w} + C_1$ where $T_c$ is the computation time, $C_0$ and $C_1$ are constants, $A_{\\bar{hl}}$ is the area of the field, and $R_w$ is the operational width of the robot.", "This relationship only holds when the field is convex, so that it can be covered with a single pattern.", "Field coverage is computationally the most demanding objective function because it computes the difference between the field and the union of the areas of each swath.", "Geometrical operations such as 'difference' and 'union' are more expensive than returning the number of swaths, which is the size of the vector of swaths.", "The computation time analysis focuses on the objective function of the brute force algorithm because it consumes more than 80% of the total time of coverage path planning." 
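The two constants of this timing model can be recovered from logged benchmark runs with an ordinary least-squares fit; a short sketch assuming numpy (`areas` and `times` are hypothetical measurement arrays, not data from the paper):

import numpy as np

def fit_time_model(areas, times, op_width):
    """Least-squares fit of T_c = C0 * sqrt(A) / R_w + C1."""
    x = np.sqrt(np.asarray(areas, dtype=float)) / op_width
    design = np.column_stack([x, np.ones_like(x)])
    (c0, c1), *_ = np.linalg.lstsq(design, np.asarray(times, dtype=float),
                                   rcond=None)
    return c0, c1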
], [ "Field experiment", "A field experiment was conducted using the AgBot shown in (Fig.", "REF ).", "In the extreme case shown in Figure REF , the AgBot covered an elongated narrow area.", "Objective functions like the minimum swath length or the number of turns would produce swaths parallel to the longest edge of the field.", "However, here we show a coverage pattern given a custom angle that allows observing the turns in the field.", "The produced swaths were sorted using the Snake pattern and connected by Dubins' curves.", "The difference between the planned path and the recorded track in Figure REF can be explained by an offset between the controlled point (front part) and the GPS antenna.", "Owing to this offset, the recorded GPS data has wider turns than the planned path.", "Turns made with the snake pattern always skip one swath, except for the turn at the rightmost part of the field where coverage direction changes.", "This turn is sharper, causing wider tracks on the ground, greater soil slippage, and thus more soil damage [33].", "In spite of soil slippage, the AgBot was capable of covering the field with the path designed by the library routines.", "Figure: AgBot covering a narrow area (shape on green).", "The coverage path plan in red and the position of the AgBot in blue.", "The AgBot is halfway the coverage task.", "The starting point is near the left edge of the area." ], [ "Conclusions & Future work", "In this work, we introduced Fields2Cover, a Coverage Path Planning open-source library for agricultural vehicles.", "Fields2Cover was implemented to bundle the research knowledge on this topic and to help other developers to accelerate their projects.", "Currently, it supports the creation of coverage paths for convex fields, with a flexible and simple structure thanks to its modular design.", "The library has four modules, which are: the headlands generator, with a constant width headlands generator; the swath generator, with a brute force optimizer; the route planner, with three types of patterns; and the path planner, with Dubins' and Reed-Shepp's curves.", "The last three modules have their own objective functions specific to their domains.", "Fields2Cover was tested using simulation with a public benchmark and in a real field.", "Fields2Cover is an on-going project, which means the functionality of the library will be expanded in the coming years.", "Future developments are supported and maintained by the first author of this paper, with the collaboration of the open-source community.", "Concave fields, 2.5D terrains and capacitated vehicles provide a good starting point for discussion and further research." ], [ "This publication is part of the project \"Fields2Cover: Robust and efficient coverage paths for autonomous agricultural vehicles\" (with project number ENPPS.LIFT.019.019 of the research programme Science PPP Fund for the top sectors which is (partly) financed by the Dutch Research Council (NWO)." ] ]
2210.07838
[ [ "Flame-state monitoring based on very low number of visible or infrared\n images via few-shot learning" ], [ "Abstract The current success of machine learning on image-based combustion monitoring is based on massive data, which is costly even impossible for industrial applications.", "To address this conflict, we introduce few-shot learning in order to achieve combustion monitoring and classification for the first time.", "Two algorithms, Siamese Network coupled with k Nearest Neighbors (SN-kNN) and Prototypical Network (PN), were tested.", "Rather than utilizing solely visible images as discussed in previous studies, we also used Infrared (IR) images.", "We analyzed the training process, test performance and inference speed of two algorithms on both image formats, and also used t-SNE to visualize learned features.", "The results demonstrated that both SN-kNN and PN were capable to distinguish flame states from learning with merely 20 images per flame state.", "The worst performance, which was realized by PN on IR images, still possessed precision, accuracy, recall, and F1-score above 0.95.", "We showed that visible images demonstrated more substantial differences between classes and presented more consistent patterns inside the class, which made the training speed and model performance better compared to IR images.", "In contrast, the relatively low quality of IR images made it difficult for PN to extract distinguishable prototypes, which caused relatively weak performance.", "With the entrire training set supporting classification, SN-kNN performed well with IR images.", "On the other hand, benefitting from the architecture design, PN has a much faster speed in training and inference than SN-kNN.", "The presented work analyzed the characteristics of both algorithms and image formats for the first time, thus providing guidance for their future utilization in combustion monitoring tasks." ], [ "Introduction", "Nam dui ligula, fringilla a, euismod sodales, sollicitudin vel, wisi.", "Morbi auctor lorem non justo.", "Nam lacus libero, pretium at, lobortis vitae, ultricies et, tellus.", "Donec aliquet, tortor sed accumsan bibendum, erat ligula aliquet magna, vitae ornare odio metus a mi.", "Morbi ac orci et nisl hendrerit mollis.", "Suspendisse ut massa.", "Cras nec ante.", "Pellentesque a nulla.", "Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.", "Aliquam tincidunt urna.", "Nulla ullamcorper vestibulum turpis.", "Pellentesque cursus luctus mauris.", "Nulla malesuada porttitor diam.", "Donec felis erat, congue non, volutpat at, tincidunt tristique, libero.", "Vivamus viverra fermentum felis.", "Donec nonummy pellentesque ante.", "Phasellus adipiscing semper elit.", "Proin fermentum massa ac quam.", "Sed diam turpis, molestie vitae, placerat a, molestie nec, leo.", "Maecenas lacinia.", "Nam ipsum ligula, eleifend at, accumsan nec, suscipit a, ipsum.", "Morbi blandit ligula feugiat magna.", "Nunc eleifend consequat lorem.", "Sed lacinia nulla vitae enim.", "Pellentesque tincidunt purus vel magna.", "Integer non enim.", "Praesent euismod nunc eu purus.", "Donec bibendum quam in tellus.", "Nullam cursus pulvinar lectus.", "Donec et mi.", "Nam vulputate metus eu enim.", "Vestibulum pellentesque felis eu massa." 
], [ "Headings: first level", "Quisque ullamcorper placerat ipsum.", "Cras nibh.", "Morbi vel justo vitae lacus tincidunt ultrices.", "Lorem ipsum dolor sit amet, consectetuer adipiscing elit.", "In hac habitasse platea dictumst.", "Integer tempus convallis augue.", "Etiam facilisis.", "Nunc elementum fermentum wisi.", "Aenean placerat.", "Ut imperdiet, enim sed gravida sollicitudin, felis odio placerat quam, ac pulvinar elit purus eget enim.", "Nunc vitae tortor.", "Proin tempus nibh sit amet nisl.", "Vivamus quis tortor vitae risus porta vehicula.", "See Section ." ], [ "Headings: second level", "Fusce mauris.", "Vestibulum luctus nibh at lectus.", "Sed bibendum, nulla a faucibus semper, leo velit ultricies tellus, ac venenatis arcu wisi vel nisl.", "Vestibulum diam.", "Aliquam pellentesque, augue quis sagittis posuere, turpis lacus congue quam, in hendrerit risus eros eget felis.", "Maecenas eget erat in sapien mattis porttitor.", "Vestibulum porttitor.", "Nulla facilisi.", "Sed a turpis eu lacus commodo facilisis.", "Morbi fringilla, wisi in dignissim interdum, justo lectus sagittis dui, et vehicula libero dui cursus dui.", "Mauris tempor ligula sed lacus.", "Duis cursus enim ut augue.", "Cras ac magna.", "Cras nulla.", "Nulla egestas.", "Curabitur a leo.", "Quisque egestas wisi eget nunc.", "Nam feugiat lacus vel est.", "Curabitur consectetuer.", "$\\xi _{ij}(t)=P(x_{t}=i,x_{t+1}=j|y,v,w;\\theta )= {\\frac{\\alpha _{i}(t)a^{w_t}_{ij}\\beta _{j}(t+1)b^{v_{t+1}}_{j}(y_{t+1})}{\\sum _{i=1}^{N} \\sum _{j=1}^{N} \\alpha _{i}(t)a^{w_t}_{ij}\\beta _{j}(t+1)b^{v_{t+1}}_{j}(y_{t+1})}}$" ], [ "Headings: third level", "Suspendisse vel felis.", "Ut lorem lorem, interdum eu, tincidunt sit amet, laoreet vitae, arcu.", "Aenean faucibus pede eu ante.", "Praesent enim elit, rutrum at, molestie non, nonummy vel, nisl.", "Ut lectus eros, malesuada sit amet, fermentum eu, sodales cursus, magna.", "Donec eu purus.", "Quisque vehicula, urna sed ultricies auctor, pede lorem egestas dui, et convallis elit erat sed nulla.", "Donec luctus.", "Curabitur et nunc.", "Aliquam dolor odio, commodo pretium, ultricies non, pharetra in, velit.", "Integer arcu est, nonummy in, fermentum faucibus, egestas vel, odio." ], [ "Paragraph", "Sed commodo posuere pede.", "Mauris ut est.", "Ut quis purus.", "Sed ac odio.", "Sed vehicula hendrerit sem.", "Duis non odio.", "Morbi ut dui.", "Sed accumsan risus eget odio.", "In hac habitasse platea dictumst.", "Pellentesque non elit.", "Fusce sed justo eu urna porta tincidunt.", "Mauris felis odio, sollicitudin sed, volutpat a, ornare ac, erat.", "Morbi quis dolor.", "Donec pellentesque, erat ac sagittis semper, nunc dui lobortis purus, quis congue purus metus ultricies tellus.", "Proin et quam.", "Class aptent taciti sociosqu ad litora torquent per conubia nostra, per inceptos hymenaeos.", "Praesent sapien turpis, fermentum vel, eleifend faucibus, vehicula eu, lacus." 
], [ "Examples of citations, figures, tables, references", "Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas.", "Donec odio elit, dictum in, hendrerit sit amet, egestas sed, leo.", "Praesent feugiat sapien aliquet odio.", "Integer vitae justo.", "Aliquam vestibulum fringilla lorem.", "Sed neque lectus, consectetuer at, consectetuer sed, eleifend ac, lectus.", "Nulla facilisi.", "Pellentesque eget lectus.", "Proin eu metus.", "Sed porttitor.", "In hac habitasse platea dictumst.", "Suspendisse eu lectus.", "Ut mi mi, lacinia sit amet, placerat et, mollis vitae, dui.", "Sed ante tellus, tristique ut, iaculis eu, malesuada ac, dui.", "Mauris nibh leo, facilisis non, adipiscing quis, ultrices a, dui.", ", and see .", "The documentation for natbib may be found at http://mirrors.ctan.org/macros/latex/contrib/natbib/natnotes.pdf Of note is the command \\citet, which produces citations appropriate for use in inline text.", "For example,    \\cite{hasselmo} investigated\\dots produces Hasselmo, et al.", "(1995) investigated... https://www.ctan.org/pkg/booktabs" ], [ "Figures", "Suspendisse vitae elit.", "Aliquam arcu neque, ornare in, ullamcorper quis, commodo eu, libero.", "Fusce sagittis erat at erat tristique mollis.", "Maecenas sapien libero, molestie et, lobortis in, sodales eget, dui.", "Morbi ultrices rutrum lorem.", "Nam elementum ullamcorper leo.", "Morbi dui.", "Aliquam sagittis.", "Nunc placerat.", "Pellentesque tristique sodales est.", "Maecenas imperdiet lacinia velit.", "Cras non urna.", "Morbi eros pede, suscipit ac, varius vel, egestas non, eros.", "Praesent malesuada, diam id pretium elementum, eros sem dictum tortor, vel consectetuer odio sem sed wisi.", "See Figure REF .", "Here is how you add footnotes.", "Sample of the first footnote.", "Sed feugiat.", "Cum sociis natoque penatibus et magnis dis parturient montes, nascetur ridiculus mus.", "Ut pellentesque augue sed urna.", "Vestibulum diam eros, fringilla et, consectetuer eu, nonummy id, sapien.", "Nullam at lectus.", "In sagittis ultrices mauris.", "Curabitur malesuada erat sit amet massa.", "Fusce blandit.", "Aliquam erat volutpat.", "Aliquam euismod.", "Aenean vel lectus.", "Nunc imperdiet justo nec dolor.", "Figure: Sample figure caption." ], [ "Tables", "Etiam euismod.", "Fusce facilisis lacinia dui.", "Suspendisse potenti.", "In mi erat, cursus id, nonummy sed, ullamcorper eget, sapien.", "Praesent pretium, magna in eleifend egestas, pede pede pretium lorem, quis consectetuer tortor sapien facilisis magna.", "Mauris quis magna varius nulla scelerisque imperdiet.", "Aliquam non quam.", "Aliquam porttitor quam a lacus.", "Praesent vel arcu ut tortor cursus volutpat.", "In vitae pede quis diam bibendum placerat.", "Fusce elementum convallis neque.", "Sed dolor orci, scelerisque ac, dapibus nec, ultricies ut, mi.", "Duis nec dui quis leo sagittis commodo.", "See awesome Table REF .", "Table: Sample table title" ], [ "Lists", " Lorem ipsum dolor sit amet consectetur adipiscing elit.", "Aliquam dignissim blandit est, in dictum tortor gravida eget.", "In ac rutrum magna." ], [ "Conclusion", "Your conclusion here" ], [ "Acknowledgments", "This was was supported in part by......" ] ]
2210.07845
[ [ "Improved automated lesion segmentation in whole-body FDG/PET-CT via\n Test-Time Augmentation" ], [ "Abstract Numerous oncology indications have extensively quantified metabolically active tumors using positron emission tomography (PET) and computed tomography (CT).", "F-fluorodeoxyglucose-positron emission tomography (FDG-PET) is frequently utilized in clinical practice and clinical drug research to detect and measure metabolically active malignancies.", "The assessment of tumor burden using manual or computer-assisted tumor segmentation in FDG-PET images is widespread.", "Deep learning algorithms have also produced effective solutions in this area.", "However, there may be a need to improve the performance of a pre-trained deep learning network without the opportunity to modify this network.", "We investigate the potential benefits of test-time augmentation for segmenting tumors from PET-CT pairings.", "We applied a new framework of multilevel and multimodal tumor segmentation techniques that can simultaneously consider PET and CT data.", "In this study, we improve the network using a learnable composition of test time augmentations.", "We trained U-Net and Swin U-Netr on the training database to determine how different test time augmentation improved segmentation performance.", "We also developed an algorithm that finds an optimal test time augmentation contribution coefficient set.", "Using the newly trained U-Net and Swin U-Netr results, we defined an optimal set of coefficients for test-time augmentation and utilized them in combination with a pre-trained fixed nnU-Net.", "The ultimate idea is to improve performance at the time of testing when the model is fixed.", "Averaging the predictions with varying ratios on the augmented data can improve prediction accuracy.", "Our code will be available at \\url{https://github.com/sepidehamiri/pet\\_seg\\_unet}" ], [ "Introduction", "For computer-assisted cancer detection and treatment, automatic tumor segmentation from medical images is a crucial step.", "Deep learning has recently been effectively used for this problem, improving performance [1].", "However, most deep learning segmentation techniques currently in use are limited to one imaging modality.", "Today's clinics frequently use PET/CT scanners, which combine PET and CT into one device and deliver metabolic and anatomical data.", "The particular challenge of lesion segmentation in FDG-PET resides in the fact that healthy organs, such as the brain, bladder, etc, can have high FDG uptake, making it challenging to avoid false positive segmentations, which can be seen in Fig REF .", "Various studies have been proposed to segment tumors in PET/CT scans autonomously.", "To get constant segmentation masks between PET and CT, Song et al.", "created an adaptive context term for the target function [2].", "In order to get object seeds, Ju et al.", "adopted a random walk approach as an initial preprocessing.", "After that, a graph cut method was applied to segment lung tumors on PET/CT images [3].", "Based on the Markov Random Field optimization issue, Han et al.", "developed a PET/CT segmentation formulation [4].", "All of the aforementioned studies showed that integrating the data from multiple imaging modalities might produce tumor segmentation results that are more precise than the segmentation results obtained from a single image modality.", "We used an annotated oncologic PET/CT data set in this study.", "Between 2014 and 2018 at the University Hospital Tübingen, 501 consecutive 
whole-body FDG-PET/CT data sets of patients with malignant lymphoma, melanoma, and non-small cell lung cancer (NSCLC) and 513 data sets without PET-positive malignant lesions (negative controls) were studied [5].", "Additionally, a whole-body FDG-PET scan was acquired for each patient 60 minutes after an I.V. injection of 300–350 MBq of 18F-FDG.", "PET data were reconstructed using the ordered-subset expectation maximization (OSEM) technique with a Gaussian kernel of 2 mm, 21 subsets, and two iterations on a 400 x 400 matrix.", "Fig. REF shows an example of fused whole-body FDG-PET/CT data.", "Figure: An illustration of fused whole-body FDG-PET/CT data.", "The manually segmented malignant lesions are shown in the green sections." ], [ "Preprocessing", "The dataset was already pre-processed: CT images were resampled to the PET imaging resolution with the same matrix size, and PET data were made uniform by converting image units from activity counts to standardized uptake values (SUV).", "We also applied intensity scaling with minima of 100 (CT) and 0 (PET) and maxima of 250 (CT) and 15 (PET).", "To reduce the model's memory and computation requirements during segmentation, we cropped the images' foreground based on the CT." ], [ "Network Architecture", "The framework used in this study is shown in Fig. REF .", "The framework shows that our approach consists of two steps.", "Learning the appropriate augmentation parameters is the first step.", "We combined several augmentations with the U-Net and Swin networks to determine how each augmentation affects performance.", "We devised an optimal combination of test-time augmentations using the resulting improvements in network performance on augmented data.", "In the second part of the framework, we take a different pre-trained network, nnU-Net, and testing images that need to be segmented.", "Several augmented versions of each testing image were created and then segmented using the pre-trained nnU-Net.", "These segmentations were combined into a single segmentation mask using the corresponding augmentation improvement coefficients computed in the first step." ], [ "Augmentation", "In the training phase, we used several data augmentations, including random flips along the spatial axes 1, 2, and 3, random rotation with a probability of $10\%$ , and random intensity shift with a probability of $50\%$ and an offset range of $10\%$ ." ], [ "Optimal augmentation estimation", "One way to improve existing architectures is to use test-time augmentation (TTA) [6].", "Since a multimodal dataset is used in this study, we propose a learnable composition of TTA.", "We first trained U-Net [7] and Swin U-Netr [8] networks to obtain the coefficients.", "During network testing, we modified the testing images with different augmentations to compute how much the segmentation Dice improves under each augmentation.", "We then adjusted the coefficient of each augmentation so that the transforms with the highest improvement received the largest coefficients.", "This means that if, for example, the original Dice is $75\%$ , and after adding Gaussian noise to the CT and PET data we reach Dice scores of $77\%$ and $76\%$ , respectively, we increase the CT noise coefficient and reduce the PET noise coefficient."
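A minimal sketch of the resulting inference rule, assuming invertible augmentations (here spatial flips) and hypothetical coefficient values: each augmented prediction is mapped back to the original geometry, and the predictions are averaged with weights proportional to the learned contribution coefficients.

```python
import torch

def weighted_tta_predict(model, image, coeffs):
    """Weighted test-time-augmentation prediction for a segmentation model.

    `coeffs` maps a flip axis (None for the identity transform) to its
    contribution coefficient; the weights play the role of the omega_i in
    the combination rule given in the next subsection."""
    total, norm = 0.0, sum(coeffs.values())
    for axis, weight in coeffs.items():
        if axis is None:
            pred = model(image)
        else:
            # Flip the input, predict, then flip the prediction back.
            pred = torch.flip(model(torch.flip(image, dims=[axis])), dims=[axis])
        total = total + weight * pred
    return total / norm

# Hypothetical coefficients favouring the augmentations that improved Dice
# the most during the U-Net / Swin U-Netr calibration step.
coefficients = {None: 1.0, 2: 1.2, 3: 0.8}   # identity and two spatial flips
```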
], [ "validation of optimal augmentations", "To evaluate how well our proposed combination of coefficients works, we needed to test it on a pre-trained model of nnU-Net architecture [9].", "We augmented the nnU-net with the optimal set of coefficients and tested if it improved the segmentation performance.", "$\\frac{1}{n}\\Sigma _{i=1}^m \\omega _i * A_i = \\frac{1}{n} (\\omega _1 A_1 + \\omega _2 A_2 + \\omega _3 A_3 + ... + \\omega _m A_m) \\ni \\Sigma _{i=1}^m \\omega _i=n$ In this equation, $A$ is the augmented function(e.g., rotation, add noise, resize, zoom in and zoom out), and $\\omega $ s are the contribution coefficients obtained in the first stage of the architecture.", "We have created an algorithm that determines the best possible combination of contribution coefficients for TTA.", "Figure: Our framework trains the Swin U-Netr and U-Net to find the optimal coefficient in test time augmentation and then use it as an input of the nnU-Net." ], [ "Implementation Details", "The configuration of the device used to implement this study is NVIDIA GeForce RTX 3080 GPU with Python 3.8.10 and Torch 1.9.0+cu111.", "We used patch-based U-Net and Swin U-Netr [10] with patch sizes of 96, 96, and 96 in 30000 iterations.", "Our method was implemented on the nnU-Net [9] code and MONAI library [11].", "The batch size is 2 for training and 1 for testing.", "We used an Adam optimizer with a weight decay of 1e-5, and the learning rate is set as 1e-4.", "In the first step, we separated $12\\%$ of the total data for evaluation, $10\\%$ for testing, and $78\\%$ for learning." ], [ "Results", "As we explained earlier, avoiding false positive segmentation in FDG-PET is challenging.", "For this reason, in addition to considering the Dice coefficient, we also examined the false positive metrics.", "In training the learnable coefficient phase, U-Net and Swin U-Netr have dice of 0.5039 and 0.5050, respectively.", "Their false positives are 22.9835 and 22.1624, respectively.", "As mentioned before, we used a pre-trained nnU-Net to evaluate the proposed coefficients.", "For evaluation, we used the five preliminary tests presented in the challenge.", "In these 5 cases, the augmented nnU-Net has the following performance: Table: The results of nnU-Net network with the proposed combination of coefficients test time augmentation in the preliminary test set are presented in the AutoPET challengeThis performance is one percent higher than the performance of the raw nnU-Net, while the false positives decreased from 22.9835 to 0.9296 in contrast to U-Net." ], [ "Conclusion", "The proposed method in this study was presented in the automated lesion segmentation in whole-body FDG-PET/CT MICCAI challenge.", "The network was implemented in this study using an optimal composition of TTA.", "For this purpose, we trained U-Net and Swin U-Netr to detect the efficient coefficient to improve segmentation performance.", "Then we used the newly trained U-Net, and Swin U-Netr results with the nnU-Net to prove that our approach improves performance.", "Averaging the predictions with varying ratios on the augmented data can improve prediction accuracy." ] ]
2210.07761
[ [ "Not All Neighbors Are Worth Attending to: Graph Selective Attention\n Networks for Semi-supervised Learning" ], [ "Abstract Graph attention networks (GATs) are powerful tools for analyzing graph data from various real-world scenarios.", "To learn representations for downstream tasks, GATs generally attend to all neighbors of the central node when aggregating the features.", "In this paper, we show that a large portion of the neighbors are irrelevant to the central nodes in many real-world graphs, and can be excluded from neighbor aggregation.", "Taking the cue, we present Selective Attention (SA) and a series of novel attention mechanisms for graph neural networks (GNNs).", "SA leverages diverse forms of learnable node-node dissimilarity to acquire the scope of attention for each node, from which irrelevant neighbors are excluded.", "We further propose Graph selective attention networks (SATs) to learn representations from the highly correlated node features identified and investigated by different SA mechanisms.", "Lastly, theoretical analysis on the expressive power of the proposed SATs and a comprehensive empirical study of the SATs on challenging real-world datasets against state-of-the-art GNNs are presented to demonstrate the effectiveness of SATs." ], [ "Introduction", "Graph neural networks (GNNs) [22], [33], [13], [41], [49], [4], [23], [48], [45], [1], [52], [24], [53], [58] have achieved great success in semi-supervised learning tasks in graph-structured data.", "Among various types of GNNs, graph attention networks have gained popularity with multifarious real-world applications, especially arising from social and collaboration graphs [15], [41], [56], [10], [43], [51], [3], [32].", "In each graph attention layer [41], the node representation is generally learned following a two-step procedure.", "Attention scores (attention coefficients) between each node and all its neighbors are firstly computed by some attention mechanism.", "The node representation for downstream tasks is then computed as a weighted aggregation of all neighbor features.", "Existing GNNs typically include all neighbors in the scope (i.e., the receptive field) for feature aggregation [54].", "However, it might not be always best to do so in attention-based GNNs (Fig.", "REF ).", "Recent studies [36], [61] have shown that incorporating all neighbors in the scope for feature aggregation can possibly lead to deterioration in the predictive performances of GNNs on various graph learning tasks.", "Notably, our study also indicates that most neighbors in widely used social and collaboration graph datasets, such as Cora and Cite [38], are found to be far apart (see Appendix for more details).", "This reveals that most neighbors in real-world graphs are highly dissimilar to the central node and are hence likely irrelevant for feature aggregation.", "Our idea is that adapting the scope of attention appropriately (i.e., the receptive field of graph attention for feature aggregation) can enable attention-based GNNs to learn better representations by attending more to highly relevant neighbors while ignoring the irrelevant ones.", "We conjecture that with better representations, the performance of GNNs on semi-supervised learning tasks would improve.", "Figure: A case study of attention scoresOur present idea is inspired and motivated by studies [6], [20], [9] in cognitive science.", "Humans are known to be capable of determining the number of stimuli to respond to well, such as spoken words and presented images 
within their apprehension span [6].", "The high quality of cognition is maintained by paying Selective Attention [20], [9] to the few stimuli identified in the apprehension span as most relevant to the cognitive goal, and ignoring the irrelevant ones.", "Although the idea of the apprehension span has been tentatively adapted to solve some learning-based tasks in computer vision [18] and natural language processing [30], [14], how to adapt the scope of attention in attention-based GNNs for representation learning of graph data remains under-explored to date.", "In this paper, we present an investigation of Graph selective attention networks that draws an analogy between the scope of attention for feature aggregation in GNNs and the apprehension span in human cognition.", "A series of Selective Attention (SA) mechanisms for graph neural networks is also proposed, and diverse forms of node-node dissimilarity for learning the node-wise scope of attention are investigated.", "Neighbors that are dissimilar to the central node based on the scope of attention are deemed highly irrelevant, and hence excluded from the feature aggregation process.", "Our proposed SA mechanisms also return attention coefficients that are differentiably pruned by the learned scope of attention, allowing highly irrelevant nodes to be ignored in feature aggregation.", "Subsequently, we propose and construct Graph selective attention networks (SATs) capable of learning effective representations that favor highly relevant nodes while ignoring the irrelevant ones (identified using the SA mechanisms).", "The main contributions of the paper are summarized as follows: We propose Selective Attention (SA), which comprises a class of novel attention mechanisms for GNNs.", "SA leverages node-node dissimilarity to learn the node-wise scope of attention, which can exclude irrelevant neighbors from the feature aggregation.", "SA thus endows GNNs with capabilities to learn representations that concentrate on and aggregate features from highly relevant neighbors while ignoring irrelevant neighbors.", "The expressive power of the proposed SA layers is analyzed.", "The theoretical analysis verifies that the expressive power of the proposed SA layers can reach the upper bound of all message-passing GNNs, indicating that SA layers are more powerful than those used in existing attention-based GNNs.", "We further use the proposed SA layers to construct Graph selective attention networks (SATs) for various downstream learning tasks arising from social and collaboration graphs.", "SATs are comprehensively tested on several well-established benchmarking datasets and compared to a number of state-of-the-art GNNs for the tasks of semi-supervised node classification and clustering.", "The results demonstrate that SATs can outperform other strong baselines." ], [ "Related work", "Graph attention networks [41], [12], [10], [25], [42], [15], [46], [3] take advantage of effective attention mechanisms [47], [5] to dynamically learn the normalized node-node correlations regarding node features (attention scores/coefficients), which determine the neighbor importance for the subsequent feature aggregation.", "To further enhance the learning performance, there is a trend to inject graph structures into the computation of attention coefficients [56], [8], [19], [51], [31], [32].", "Compared with existing attention-based GNNs, our method is fundamentally different.", "Our method can ignore irrelevant neighbors by considering diverse forms of dissimilarity between node pairs.",
"Dissimilarity pertaining to graph structure is one possible choice of our approach.", "Besides, our experiments, which analyze the distribution of attention scores, show that injecting the graph structure into the computation of graph attention without considering the node-node dissimilarity cannot exclude those irrelevant neighbors from the feature aggregation." ], [ "Adjusting the scope of GNNs", "In general, the scope (the receptive field) for each node in one GNN layer [54] is its first-order neighbors (including itself), and after $k$ layers, it can capture the information from $k$ -hop neighbors [22], [41], [45], [3].", "There is a rich literature on carefully designing the scope for GNNs.", "This includes approaches (e.g., [13], [4], [36], [55], [57], [26]) that apply sampling strategies to improve the scalability of GNNs.", "Different from these methods, our work is not based on any sampling strategy.", "There are also some approaches [10], [61] adopting the top-$k$ strategy to select the $k$ most relevant neighbors for aggregation.", "Unlike the top-$k$ strategy, our method can learn how many neighbors to ignore for each node adaptively.", "In parallel, there have been several studies (e.g., [49], [27], [56], [16], [29], [60]) on designing adaptive receptive fields, either for each node [29] or for different parts of models [60].", "To the best of our knowledge, none of these methods generate receptive fields by identifying and ignoring irrelevant neighbors." ], [ "Graph selective attention networks", "In this section, we introduce the proposed Graph selective attention networks (SATs).", "The Selective Attention layers adopting different Selective Attention mechanisms are firstly elaborated.", "How to use the proposed Selective Attention layers to construct SATs is then introduced.", "The computational complexity of SATs is finally analyzed." ], [ "Notations", "Let $G = \\lbrace V, E \\rbrace $ denote a graph, where $V$ and $E$ represent the node and edge set.", "In $G$ , there are $N$ nodes, $|E|$ edges, and $C$ classes ($C\\ll N$ ) which the nodes possibly belong to.", "The adjacency matrix of $G$ and the input node feature matrix are denoted as $\\mathbf {A} \\in \\lbrace 0, 1\\rbrace ^{N \\times N}$ and $\\mathbf {X} \\in \\mathbb {R}^{N \\times D}$ .", "Node $i$ and its one-hop neighbors are denoted as $\\mathcal {N}_i$ .", "$\\mathbf {W}^l$ and $\\lbrace \\mathbf {h}^l_i \\rbrace _{i = 1, ... N}$ denote the learnable weight matrix and output representation of node $i$ at $l$ -th layer of SATs, respectively, and $\\mathbf {h}^0$ is set to be the input feature $\\mathbf {X}$ ." 
], [ "Selective Attention layers", "In this subsection, we present the Selective Attention (SA) layer, which is the core for building the Graph selective attention networks (SATs).", "The present graph attention mechanisms attempt to compute attention coefficients to all neighbors of each node.", "However, our preliminary study has shown that most neighbors in widely used graph datasets are quite irrelevant when evaluated by simple criteria (Appendix ).", "Thus, these irrelevant neighbors can be excluded from the neighbor aggregation in an appropriate way.", "Meanwhile, humans flexibly adjust the number of stimuli to respond in their apprehension span [6].", "Specifically, in order to maintain the quality of cognition, human selectively attends to a few stimuli that are most relevant to the cognitive goal but ignores those irrelevant ones.", "Inspired by these, in this paper, we propose Selective Attention, which utilizes diverse forms of node-node dissimilarity to learn the scope of attention for each node.", "Irrelevant neighbors can be ignored in the feature aggregation stage.", "SA therefore endows SATs with the capability of learning representations concentrating on highly relevant neighbors.", "Given a set of node features $\\lbrace \\mathbf {h}^l_i\\rbrace _{i = 1, ... N}$ , $\\mathbf {h}^l_i \\in R^{D^l}$ , the Selective Attention layer maps them to $D^{l+1}$ dimensional vectors $\\lbrace \\mathbf {h}^{l+1}_i\\rbrace _{i = 1, ... N}$ .", "The mapping requires weighted aggregation, with weights computed from both correlations of node features and diverse forms of node-node dissimilarity.", "The feature correlation between two connected nodes is firstly obtained.", "To generate the correlations of node features between connected nodes, we adopt the method in [41]: $f_{ij}=\\frac{\\exp (\\text{LeakyReLU}(\\mathbf {\\vec{a}}^T(\\mathbf {W}^l\\mathbf {h}^l_i\\parallel \\mathbf {W}^l\\mathbf {h}^l_j )))}{\\sum _{k\\in \\mathcal {N}_i} \\exp (\\text{LeakyReLU}(\\mathbf {\\vec{a}}^T(\\mathbf {W}^l\\mathbf {h}^l_i\\parallel \\mathbf {W}^l\\mathbf {h}^l_k )))},$ where $\\mathbf {\\vec{a}}\\in \\mathbb {R}^{2D^{l+1}}$ is a vector of attention parameters, $\\parallel $ stands for the concatenation function for two vectors, and $\\mathbf {W}^l$ is a $D^{l+1}\\times D^l$ parameter matrix for feature mapping.", "Given $f_{ij}$ computed from Eq.", "(REF ), the proposed SA layer can capture the feature correlations between connected nodes.", "As mentioned above, existing graph attention mechanisms [41], [15], [3], [56] do not consider adjusting the scope of attention, resulting in somewhat less informative attention scores.", "To address this, the proposed Selective Attention further considers utilizing diverse forms of node-node dissimilarity to learn the scope of attention for each node in the graph.", "The node-node dissimilarity, which will be introduced later in this subsection, can quantify the extent that a neighbor can be ignored.", "Leveraging the aforementioned feature correlations (Eq.", "(REF )) and node-node dissimilarity, we propose two different strategies derived from the SA concept to compute attention coefficients.", "The first strategy is named as Contractive apprehension span, which is able to exponentially contract the normalized feature correlations (Eq.", "(REF )).", "The attention score obtained via the Contractive apprehension span is defined as follows: $\\begin{aligned}&\\alpha _{ij} = \\frac{f_{ij}\\cdot \\exp {(-\\beta \\mathbf {S}_{ij})}}{\\sum _{k \\in \\mathcal 
{N}_i} f_{ik}\cdot \exp {(-\beta \mathbf {S}_{ik})}},\\\end{aligned}$ where $\mathbf {S}_{ij}$ represents the node-node dissimilarity, and $\beta \in (0, 1]$ is a positive hyperparameter used to control the significance of $\mathbf {S}_{ij}$ .", "As shown in Eq. (REF ), the feature-based attention scores are reduced as $\mathbf {S}_{ij}$ grows.", "The scope of attention for a given node is contracted because the attention coefficients obtained by Eq. (REF ) can be very close to zero.", "Irrelevant neighbors are thus attended to less in the feature aggregation according to $\mathbf {S}_{ij}$ , and the SA layer can therefore pay higher attention to those neighbors that are more relevant to the central node.", "The second strategy is called the Subtractive apprehension span.", "As its name implies, this strategy adjusts the scope of attention by directly subtracting the effect brought by the dissimilarity between node pairs.", "The attention coefficient computed by the Subtractive apprehension span is defined as follows: $\begin{aligned}&\quad \alpha _{ij} = \frac{f_{ij} (1 - \beta \mathbf {T}_{ij})}{\sum _{k \in \mathcal {N}_i} f_{ik} (1 - \beta \mathbf {T}_{ik})},\mathbf {T}_{ij} = \frac{\exp {(\mathbf {S}_{ij})}}{\sum _{k \in \mathcal {N}_i}\exp {(\mathbf {S}_{ik})}},\\\end{aligned}$ where $\beta \in (0, 1]$ is a positive hyperparameter controlling the effect of $\mathbf {T}_{ij}$ .", "Compared with the Contractive apprehension span, the Subtractive apprehension span enables the SA layers to adjust the scope of attention for feature aggregation in a more radical way, as some attention coefficients between connected nodes can be reduced to zero when $\mathbf {T}_{ij}$ is sufficiently high.", "The Subtractive apprehension span allows SA layers to concentrate only on the relevant nodes when aggregating neighboring features for message passing in the GNN.", "With the Selective Attention coefficients, the SA layer can now aggregate the features associated with each node and its neighbors to generate layer outputs, which will be either propagated to the next layer or used as the representations for downstream tasks.", "The described aggregation phase can be formulated as follows: $\mathbf {h}^{l+1}_i = (\alpha _{ii}+\epsilon \cdot \frac{1}{\vert \mathcal {N}_i \vert })\mathbf {W}^l\mathbf {h}^l_i+ \sum _{j \in \mathcal {N}_i, j \ne i} \alpha _{ij} \mathbf {W}^l\mathbf {h}^l_j,$ where $\epsilon \in (0, 1)$ is a learnable parameter that improves the expressive power of the SA layer."
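For concreteness, the sketch below shows how the two spans could modulate GAT-style scores for a single node's neighborhood; the shapes, the LeakyReLU slope, and the dense layout are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def selective_attention(h_i, h_nbrs, S, W, a, beta=0.5, mode="contractive"):
    """Selective Attention coefficients for one central node.

    h_i:    (D,) features of the central node
    h_nbrs: (K, D) features of its K neighbors (the node itself included)
    S:      (K,) node-node dissimilarities S_ij to each neighbor
    W, a:   projection matrix (D' x D) and attention vector (2 * D',)
    """
    z_i, z_nbrs = W @ h_i, h_nbrs @ W.T                       # (D',), (K, D')
    logits = F.leaky_relu(
        torch.cat([z_i.expand_as(z_nbrs), z_nbrs], dim=-1) @ a, 0.2)
    f = torch.softmax(logits, dim=0)                          # GAT-style f_ij
    if mode == "contractive":
        w = f * torch.exp(-beta * S)          # exponentially contract by S_ij
    else:
        T = torch.softmax(S, dim=0)           # normalized dissimilarity T_ij
        w = f * (1.0 - beta * T)              # subtract its effect directly
    return w / w.sum()                        # renormalize over the neighborhood
```

With $\beta $ close to 1, large dissimilarities drive the contractive coefficients toward zero, while the subtractive variant can zero them out exactly, matching the two behaviors described above.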
], [ "Node-node dissimilarity for Selective Attention", "The proposed Selective Attention (SA) allows diverse node-node dissimilarity to be used to compute attention coefficients.", "In this paper, we use the following method to compute dissimilarity ($\\mathbf {S}_{ij}$ ) between each pair of connected nodes in the graph: $\\begin{aligned}&\\mathbf {S}_{ij} = \\sum _k r_k\\cdot \\Psi _k(\\mathbf {c}^k_i, \\mathbf {c}^k_j), \\text{ subject to}\\text{ }\\sum _k r_k = 1,\\end{aligned}$ where $\\Psi (\\cdot , \\cdot )$ is a distance metric, $k$ is the type index of dissimilarity, $\\mathbf {c}^k_i$ is a vector characterizing node $i$ from type $k$ , and $r_k$ is a learnable parameter used to balance the relative significance of different types of node properties, i.e., $\\mathbf {c}^k_i$ .", "In this paper, we focus on two types of dissimilarity, which are node features (inputs) at each layer of the GNN and the structure of the nodes in the graph.", "By replacing $\\Psi (\\cdot , \\cdot )$ as Euclidean distance function, Eq.", "(REF ) can be rewritten as: $\\mathbf {S}_{ij} \\!=\\!", "r_f\\!\\cdot \\!", "\\Vert \\mathbf {Wh}_i \\!-\\!", "\\mathbf {Wh}_j\\Vert ^2 \\!+\\!", "r_p\\!", "\\cdot \\!", "\\Vert \\mathbf {p}_i \\!-\\!", "\\mathbf {p}_j\\Vert ^2,\\text{subject to}\\text{ }r_f \\!+\\!", "r_p \\!=\\!", "1,$ where $\\mathbf {Wh}_i$ is the features of node $i$ in a GNN layer, and $\\mathbf {p}_i$ is defined as a 1-by-$C$ vector of node $i$ in the latent space learned from graph structure.", "As $\\mathbf {p}_i$ is assumed to be learnable, different learning approaches enable $\\mathbf {p}_i$ to capture different properties in the graph structure.", "In this paper, we mainly consider the learnable properties hidden in the graph adjacency.", "Thus, we have: $\\begin{aligned}L(\\mathbf {P}) = \\operatornamewithlimits{arg\\,min}_{\\mathbf {P}\\mathbf {P}^T_{ij}} \\sum _{i,j}(\\mathbf {A}_{ij} - [\\mathbf {P}\\mathbf {P}^T]_{ij})^2,\\end{aligned}$ where $\\mathbf {P}$ is an $N$ -by-$C$ matrix containing all $\\mathbf {p}_i$ s. As shown in Eq.", "(REF ), $\\mathbf {P}$ is learned by matrix factorization, which assumes that the edge adjacency matrix $\\mathbf {A}$ can be reconstructed by $\\mathbf {P}\\mathbf {P}^T$ .", "Through Eq.", "(REF ), similar nodes in the graph will induce a low distance in Eq.", "(REF ), and vice versa.", "Now, we can build Graph selective attention networks (SATs) with the proposed SA layers.", "“To stabilize the learning process”, we follow [40], [41], [15] to use the multi-head attention strategy when constructing SATs.", "SATs either concatenate the node representations generated by multiple attention heads as the input of the next layers, or compute the mean of node features obtained by multiple attention heads as the output representations.", "As we in this paper additionally define learnable latent spaces for each node in the graph, the overall loss function for SATs is slightly different from classical graph attention networks.", "It is conceptually written as follows: $L = L_{task} + L(\\mathbf {P}),$ where $L(\\mathbf {P})$ is the MF method shown in Eq.", "(REF ), and $L_{task}$ is the task-specific loss." 
], [ "Computational complexity of Selective Attention layers", "As each layer in SATs additionally requires node-node dissimilarity to compute Selective Attention coefficients, the computational complexity is slightly different from classical attention-based GNNs.", "Let $D^l$ (or $D^{l + 1}$ ) denote the dimension of the input (or output) vector of the $l$ -th layer.", "The complexity of feature aggregation in the $l$ -th layer is $O(ND^lD^{l+1}+(\\vert E\\vert +e)D^{l+1})$ for one attention head, where $e$ represents the average degree of each node and $N$ is the number of nodes in the input graph.", "This is the same as that of classical graph attention networks [41].", "When there are $K$ attention heads, the complexity is $O(KND^lD^{l+1}+K(\\vert E\\vert +e)D^{l+1})$ .", "Additional computation in SATs is demanded as SATs have to capture node-node dissimilarity for the computation of Selective Attention  coefficients.", "The complexity for learning the node-node dissimilarity in each attention head is $O(e(D^{l+1} + 2C))$ , where $C$ represents the dimension of each $\\mathbf {p}$ in Eq.", "(REF ).", "In this section, the expressive power of the proposed SATs is analyzed.", "The expressive power evaluates whether a GNN can discriminate distinct (sub)structures wherein nodes have different features.", "Thus, it can theoretically reveal whether a GNN is sufficiently powerful for various downstream tasks.", "Recent studies [48], [59], [7] have shown that the feature aggregations in all message-passing GNNs are similar to the injective 1-dimensional Weisfeiler-Lehman test (1-WL test) [44].", "Theoretically, the expressive power of all message-passing GNNs is as most as the 1-WL test [48].", "As the proposed SATs belong to message-passing GNNs, their expressive power can be verified by showing the injectivity of the feature aggregation in SATs.", "To do so, we first prove that either Contractive (Eq.", "(REF )), or Subtractive apprehension span (Eq.", "(REF )) is still unable to distinguish some structures satisfying some conditions, without the improving term shown in Eq.", "(REF ), i.e., $\\epsilon \\cdot \\frac{1}{\\vert \\mathcal {N}_i \\vert }\\mathbf {W}^l\\mathbf {h}^l_i$ .", "Then, we prove that all the SA layers can discriminate all different structures when aggregating neighboring features utilizing either of the two proposed strategies integrated with $\\epsilon \\cdot \\frac{1}{\\vert \\mathcal {N}_i \\vert }\\mathbf {W}^l\\mathbf {h}^l_i$ (Eq.", "(REF )).", "Before the proof, we follow [48], [15] to give the notations for multisets.", "For the nodes in $\\mathcal {N}_i$ , their feature vectors form a multiset $X_i = (\\mathsf {M}_i, \\mu _i)$ , where $\\mathsf {M}_i = \\lbrace s_1, ... 
s_n\\rbrace $ is the underlying set of $X_i$ containing its distinct elements, and $\\mu _i : \\mathsf {M}_i \\rightarrow \\mathbb {N}^\\star $ gives the multiplicity of each distinct element in $\\mathsf {M}_i$ .", "For the neighborhood aggregating function that is solely based on the Contractive apprehension span (Eq.", "(REF )), the following theorem shows that it still cannot distinguish some structures.", "Let $c_i$ denote the feature vector of node $i$ , and $X_i = \\lbrace \\mathsf {M}_i, \\mu _i\\rbrace \\in \\mathcal {X}$ denote a multiset comprising the features from nodes in $\\mathcal {N}_i$ , where $\\mathcal {X}$ represents the countable feature space.", "The aggregation function using the attention scores computed by Eq.", "(REF ) is denoted as $h(c_i, X_i) = \\sum _{x\\in X_i} \\alpha _{c_i x} g(x)$ , where $g(\\cdot )$ is a function defined on $X_i$ and $\\alpha _{c_i x}$ is the attention score between $g(c_i)$ and $g(x)$ .", "For all $g$ , any two nodes 1 and 2 and the Contractive apprehension span in Eq.", "(REF ), $h(c_1, X_1) = h(c_2, X_2)$ holds if and only if $c_1 = c_2, \\mathsf {M}_1 = \\mathsf {M}_2 = \\mathsf {M}$ , and $q\\cdot \\sum _{y=x, y\\in X_1} \\psi (-\\beta \\mathbf {S}_{c_1y}) = \\sum _{y=x, y\\in X_2} \\psi (-\\beta \\mathbf {S}_{c_2y})$ , for $q > 0$ and $x \\in \\mathsf {M}$ , where $\\psi (\\cdot )$ is a function for mapping values to $\\mathbb {R}^+$ .", "Due to space limitations, here we briefly illustrate the method for completing the proof of Theorem .", "The full proof can be checked in supplementary materials.", "The proof of Theorem can be divided into two parts, i.e., the proof of the sufficiency and necessity of the iff conditions [15], [48], [59].", "Given $c_1=c_2$ , $\\mathsf {M}_1=\\mathsf {M}_2$ , and $q \\cdot \\sum _{y=x, y\\in X_1} \\psi (-\\beta \\mathbf {S}_{c_1y}) = \\sum _{y=x, y\\in X_2} \\psi (-\\beta \\mathbf {S}_{c_2y})$ , $h(c_1, X_1) = h(c_2, X_2)$ can be easily verified.", "Thus, the sufficiency of the iff conditions stated in Theorem is proved.", "Given $h(c_1, X_1) = h(c_2, X_2)$ , the necessity of the iff conditions can be proved by showing possible contradictions when $\\mathsf {M}_1 \\ne \\mathsf {M}_2$ , $c_1 \\ne c_2$ , or $q \\cdot \\sum _{y=x, y\\in X_1} \\psi (-\\beta \\mathbf {S}_{c_1y}) \\ne \\sum _{y=x, y\\in X_2} \\psi (-\\beta \\mathbf {S}_{c_2y})$ .", "Theorem shows that the function for feature aggregation ($h$ ) using the attention scores obtained by Eq.", "(REF ) may map different multisets into the same embedding if and only if these multisets share the same central node feature and the same node features whose node-node dissimilarity is proportional.", "For the neighborhood aggregating function that is solely based on the Subtractive apprehension span (Eq.", "(REF )), the following theorem shows that it cannot distinguish some structures.", "Given the same assumptions shown in Theorem and the aggregation function using the attention scores computed by Eq.", "(REF ) is denoted as $h(c_i, X_i) = \\sum _{x\\in X_i} \\alpha _{c_i x} g(x)$ , for all $g$ , any two nodes 1 and 2 and the Subtractive apprehension span in Eq.", "(REF ), $h(c_1, X_1) = h(c_2, X_2)$ holds if and only if $c_1 = c_2$ , $\\mathsf {M}_1 = \\mathsf {M}_2 = \\mathsf {M}$ , and $q\\sum _{y=x, y\\in X_1}[\\sum _{x\\in X_1}\\psi (\\mathbf {S}_{c_1x}) - \\beta \\psi (\\mathbf {S}_{c_1y})]= \\sum _{y=x, y\\in X_2}[\\sum _{x\\in X_2}\\psi ($ $\\mathbf {S}_{c_2x}) - \\beta \\psi (\\mathbf {S}_{c_2y})]$ , for $q > 0$ and $x \\in \\mathsf {M}$ 
, where $\\psi (\\cdot )$ is a function for mapping values to $\\mathbb {R}^+$ .", "The full proof of Theorem can also be checked in supplementary materials.", "Theorem shows that $h$ solely based on Eq.", "(REF ) may map different multisets into the same embedding if and only if the multisets share the same central node feature, and the same node features whose adjusted negative dissimilarity is proportional.", "Theorems and show that the expressive power of SA layers solely utilizing Contractive (Eq.", "(REF )) or Subtractive apprehension span (Eq.", "(REF )) is stronger than that of classical graph attention networks [41], although they cannot discriminate some structures.", "As shown in the two presented theorems, the conditions giving rise to the failure of attention layers solely utilizing the Contractive or Subtractive apprehension span in distinguishing all structures are dependent on both node features and node-node dissimilarity.", "As node features and node-node dissimilarity are generally heterogeneous, it is infrequent for the conditions stated in Theorems and to be simultaneously satisfied.", "Such observation may well explain those attention-based GNNs additionally considering properties other than node features, e.g., injecting structural node embeddings [35] into the computation of attention coefficients can outperform classical graph attention networks.", "However, the expressive power of SATs can be immediately improved to be equivalent to the 1-WL test through the slight modification as Eq.", "(REF ) shows.", "We next prove the proposed Selective Attention mechanisms (Eqs.", "(REF ) to (REF )) can reach the upper bound of the expressive power of all message-passing GNNs by verifying that the aggregation function based on Eq.", "(REF ) can successfully distinguish the structures whose properties meet the conditions stated in Theorems and .", "Assume $\\mathcal {T}$ is the attention-based aggregator shown in Eq.", "(REF ) and utilizes either Contractive (Eq.", "(REF )) or Subtractive apprehension span (Eq.", "(REF )), $\\mathcal {H}$ is a mapping of countable feature space $\\mathcal {X}$ .", "$\\mathcal {T}$ operates on a multiset $H \\in \\mathcal {H}$ .", "A $\\mathcal {H}$ exists so that with the attention-based aggregator in Eq.", "(REF ), $\\mathcal {T}$ can distinguish all different multisets that it previously cannot distinguish.", "Corollary can be proved by following the procedure presented in [48], [15].", "According to Theorem , we assume $X_1 = (\\mathsf {M}, \\mu _1)$ , $X_2 = (\\mathsf {M}, \\mu _2)$ , $c \\in \\mathsf {M}$ , and $q\\cdot \\sum _{y=x, y\\in X_1} \\psi (-\\beta \\mathbf {S}_{c_1y}) = \\sum _{y=x, y\\in X_2} \\psi (-\\beta \\mathbf {S}_{c_2y})$ , for $q > 0$ .", "When $\\mathcal {T}$ uses the attention scores solely according to Eq.", "(REF ) to aggregate node features, we have $\\sum _{x\\in X_1} \\alpha _{cx} g(x) = \\sum _{x\\in X_2} \\alpha _{cx} g(x)$ .", "This means $\\mathcal {T}$ fails to discriminate the structures satisfying the conditions stated in Theorem .", "When $\\mathcal {T}$ uses Eq.", "(REF ) where the attention coefficients are obtained by the Contractive apprehension span (Eq.", "(REF )) to aggregate node features, we have $\\sum _{x\\in X_1} \\alpha _{cx} g(x) - \\sum _{x\\in X_2} \\alpha _{cx} g(x) = \\epsilon (\\frac{1}{|X_1|}-\\frac{1}{|X_2|})\\alpha _{cc} g(c)$ , where $|X_1| = |\\mathcal {N}_1|$ , and $|X_2| = |\\mathcal {N}_2|$ .", "Since $|X_1| \\ne |X_2|$ , $\\sum _{x\\in X_1} \\alpha _{cx} g(x) - \\sum _{x\\in 
X_2} \\alpha _{cx} g(x) \\ne 0$ , which means $\\mathcal {T}$ based on Eqs.", "(REF ) and (REF ) is able to discriminate all the structures that $\\mathcal {T}$ solely based on Eq.", "(REF ) fails to distinguish.", "Following the similar procedure, when the Selective Attention layer (Eq.", "(REF )) utilizes the Subtractive apprehension span (Eq.", "(REF )), we are able to prove that the corresponding aggregation function also can distinguish those distinct structures that the aggregation function only using Subtractive apprehension span fails to discriminate.", "Based on the conducted theoretical analysis, the proposed SATs are the most powerful message-passing GNNs and are consequently more powerful than popular attention-based GNNs, e.g., GATs [41] and HardGAT [10].", "In this paper, we mainly verify that the proposed Graph selective attention networks are the most powerful message-passing GNNs under the condition that the feature space is countable [15], [48], [59].", "Recent studies have shown that a simplex operator for feature aggregations in some GNN layer is injective, i.e., the 1-WL test equivalent in the countable feature space [7].", "But such injectivity might not hold when the simplex operator operates in the uncountable feature space.", "To ensure injectivity when a GNN deals with uncountable features, diverse forms of operators, e.g., mean, max, and min operators are required to collaboratively aggregate neighbor features.", "Thus, the expressive power of the proposed SATs can be equivalent to the 1-WL test in uncountable feature space by appropriately integrating with other effective operators for feature aggregation.", "Table: Dataset statistics" ], [ "Experiment and analysis", "In this section, we evaluate the effectiveness of the proposed Selective Attention by comparing SATs with state-of-the-art approaches for two semi-supervised learning tasks on well-established benchmarking datasets.", "To understand the performance of SATs, we also perform ablation studies and visualize the distribution of learned attention coefficients.", "We also analyze the effect of $\\beta $ and the space consumption of SATs.", "Table: Performance comparison on semi-supervised node classification.", "The results in bold show that SAT outperforms all the baselines and the best baselines are underlined.Table: Performance comparison on semi-supervised node clustering.", "The results in bold show that SAT outperforms all the baselines and the best baselines are underlined.SATs are compared with twelve strong baselines, including MoNet [33], GCN [22], GraphSage [13], JKNet [49], APPNP [23], ARMA [2], GIN [48], Neural Sparse [61], GAT [41], GATv2 [3], CAT [15], and HardGAT [10].", "MoNet, GCN, JKnet, APPNP, and ARMA are five representative GNNs that leverage graph convolutional operators to learn representations.", "GIN and CAT are two state-of-the-art GNNs whose expressive power is equivalent to the 1-WL test.", "GAT and GATv2 are two powerful attention-based GNNs that learn representations by attending to all neighbors of each node in the graph.", "GraphSage, Neural Sparse, and HardGAT are three state-of-the-art GNNs that perform graph learning tasks by aggregating the features from sampled neighbors.", "By comparing with diverse types of GNNs which consider different scopes of neighbors for feature aggregation, the effectiveness of the proposed SATs can be better validated." 
], [ "Datasets", "Six widely used datasets, including Cora, Cite, Pubmed [28], [38], Wiki [28], Uai [28], and CoauthorCS [39] are used for evaluation.", "Cora, Cite, and Pubmed are citation networks widely used for the evaluation of GNNs.", "However, recent studies show that they may be insufficient to evaluate the performance of GNNs due to their limited data size [17], [39].", "By following previous work, we additionally include Wiki, Uai, and CoauthorCS in our experiments.", "The data statistics are summarized in Table REF , and more details can be found in Appendix REF ." ], [ "Evaluation and experimental settings", "Following previous work [41], [15], [24], [22], we mainly consider two learning tasks to test the effectiveness of different approaches, i.e., semi-supervised node classification and semi-supervised node clustering.", "For the training and testing paradigms, we closely follow the established settings reported in the previous studies [22], [41], [23], [50], [15].", "The classification and clustering performances of all approaches are evaluated by Accuracy.", "In the training phase, all the GNNs are implemented with the two-layer network structure, i.e., one hidden layer followed by the output layer.", "All approaches are run 10 times on each testing dataset to obtain the average performance.", "We report the detailed experimental settings in Appendix REF ." ], [ "Results of semi-supervised learning", "Semi-supervised node classification and clustering are closely related to several important applications from real-world graph data, such as social community detection and document classification.", "In our experiments, we use the proposed SATs to perform these two learning tasks in the aforementioned real graph datasets and compare their performance with that of state-of-the-art GNNs.", "The corresponding results are summarized in Tables REF and REF .", "The performance comparisons of semi-supervised node classification are presented in Table REF .", "As the table shows, SATs consistently outperform all the baselines on all the datasets.", "Specifically, SAT archives 0.76%, 0.72%, 0.56%, 2.81%, 0.89%, and 0.69% improvement over the best baselines (which are underlined in Table REF ) in terms of Accuracy.", "It is worth noting that SAT achieves significant performance gain over all attention-based GNNs.", "Specifically, SAT archives 0.76%, 0.72%, 0.75%, 3.11%, 9.35% and 0.69% performance gain over the best attention-based GNNs.", "The better performance of SATs could be attributed to its ability of identifying irrelevant neighbors and only attending to those highly relevant ones.", "This is why the performance improvement of SATs over attention-based GNNs is much more significant on Wiki, Uai, and CoauthorCS, where the density of edges is higher and more irrelevant neighbors exist.", "We will visualize the distribution of attention scores learned by SATs in Section REF to demonstrate their ability to identify and ignore irrelevant neighbors.", "The results of semi-supervised clustering are summarized in Table REF .", "SAT archives the best scores on five datasets, and it outperforms all the attention-based methods on all six datasets.", "Specifically, SAT archives 0.48%, 0.84%, 0.26%, 2.57%, 6.98%, 0.40% performance gain over the best attention-based GNN.", "Similar to the results on the first task, the Accuracy increases reported by SAT over attention-based GNNs are more significant on dense graphs, including Wiki, Uai, and CoauthorCS." 
], [ "Ablation studies", "To further investigate the effectiveness of our model, we conduct ablation studies on the two types of dissimilarity (Eq.", "(REF )) of SAT.", "We consider vanilla GAT, SAT with dissimilarity only regarding node features (C/S-F) (i.e.", "$r_f = 1, r_p=0$ in Eq.", "(REF )), SAT with the dissimilarity of graph structure only (C/S-P) (i.e.", "$r_f=0, r_p=1$ in Eq.", "(REF )), and the complete SAT model.", "The results of semi-supervised node classification and clustering tasks are summarized in Tables REF and REF .", "From the tables, we observe that either using node features or node structural properties to compute node-node dissimilarity can improve the performances of graph attention, which demonstrates that ignoring irrelevant neighbors in graph attention indeed improves the performance.", "Meanwhile, considering both dissimilarities produces better and more stable results.", "Table: Ablation study in semi-supervised node classificationTable: Ablation study in semi-supervised node clustering" ], [ "Visualization of attention scores", "To demonstrate the ability of SA to ignore irrelevant neighbors, we compare the attention scores obtained by the output layer of GAT, CAT, and SATs.", "Here we show the results on Uai and the results on the other datasets are given in Appendix REF .", "In Fig.", "REF , we conduct a case study on a node to show how SA mechanisms influence the values of attention coefficients of neighbors, and then we show the histogram of all attention scores.", "We observe that some attention scores obtained by SAT are close to zero, showcasing that SAT indeed can learn the scope of attention to exclude the irrelevant neighbors from the feature aggregation.", "Such observation generalizes to the whole graph, as the histogram of attention scores (Fig.", "REF ) shows that more attention scores acquired by SAT are close to zero compared with GAT and SAT.", "We also conduct the case study on CAT [15], which utilizes both node features and graph structure to compute attention coefficients for graph analysis, and draw the distribution of its attention scores.", "We do not observe a significant difference in the histogram of attention scores between CAT and GAT, showing that CAT is not able to ignore irrelevant neighbors.", "Figure: Attention scores from GAT, CAT and SATs on Uai" ], [ "The effect of $\\beta $", "SATs use $\\beta $ to control the influence from the node-node dissimilarity when computing attention coefficients.", "The effect of the node-node dissimilarity becomes larger when $\\beta $ goes high.", "Potentially, SATs can learn smaller scopes of attention, which exclude more neighbors, or compute very low attention coefficients for more neighbors when $\\beta $ is set to high values.", "To show this, in Table REF , we list the number of edges with small attention scores (<0.05) when $\\beta $ ranges in [0.1, 0.5, 0.75, 1.0] on all testing datasets.", "As seen in the table, more neighbors are assigned with very small attention scores as $\\beta $ goes higher.", "We show the Accuracy of our models with varying values of $\\beta $ in the range of $[0.01, 1.0]$ in Fig.", "REF .", "Based on the results in Table REF and Fig.", "REF , one possible way to find a better setting of $\\beta $ is to configure it according to the density of the graph.", "Generally, SATs can perform better when $\\beta $ is set to a higher value in dense graphs, such as Wiki, Uai, and CoauthorCS.", "In these graphs, SATs can identify a large number of dissimilar neighbors 
as irrelevant to the feature aggregation, so that they can learn representations by concentrating on the features of a few relevant nodes.", "However, in sparse graphs such as Cora, Cite, and Pubmed, SATs can perform better when a relatively small $\beta $ is used.", "In these graphs, the number of neighbors connected to each central node is small.", "Thus, SATs do not need to identify many neighbors as irrelevant ones.", "Table: The number of neighbors with very low attention coefficients ($\le 0.05$ ) and proportionsFigure: Sensitivity test for $\beta $ ." ], [ "Comparisons on model parameters and space consumption between GAT and SATs", "To show the difference in architecture between GAT and SATs, we compare the number of parameters and the memory usage of the two.", "The results are summarized in Table REF .", "SATs require slightly more parameters, which are mainly due to the learning of the latent spaces (i.e., $\mathbf {p}$ ) used to perform the task of representation learning.", "Thus, SATs use slightly more memory in the training stage.", "Table: Parameter comparison between GAT and SAT.", "In this paper, we have proposed Selective Attention (SA), which generalizes a class of novel attention mechanisms for GNNs.", "Motivated by the analogy between the apprehension span of human cognition and the scope of attention for feature aggregation, SA leverages diverse forms of node-node dissimilarity to adapt the node-wise scope of attention, which can flexibly exclude irrelevant neighbors from the feature aggregation stage.", "SA therefore enables GNNs to learn representations by favoring the features of highly relevant neighbors and ignoring irrelevant ones.", "Given different SA mechanisms, we build Graph selective attention networks (SATs) to learn representations for various tasks arising from real-world graph data.", "SATs have been tested on widely used benchmarking datasets and compared to several strong baselines.", "The notable results obtained validate the effectiveness of the proposed Selective Attention.", "In the future, the proposed SA will be further improved by exploring more forms of node-node dissimilarity that can be used for computing SA coefficients and by developing SA mechanisms that are compatible with multi-view graphs."
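As a concrete illustration of the attention computation that the $\beta $ discussion above refers to, the following is a minimal NumPy sketch (our own, not the released implementation) of a Selective Attention coefficient of the form alpha proportional to softmax(m) times exp(-beta*S), together with a count of near-zero scores as in the table above; the feature correlations m and dissimilarities S are random placeholders.

```python
import numpy as np

def selective_attention_scores(m, S, beta):
    """Attention over one neighborhood: f = softmax of feature correlations m,
    damped by d = exp(-beta * S) for node-node dissimilarities S, then
    renormalized (the alpha = f*d / sum(f*d) form used in the proofs below)."""
    f = np.exp(m - m.max())
    f /= f.sum()
    d = np.exp(-beta * S)
    alpha = f * d
    return alpha / alpha.sum()

rng = np.random.default_rng(0)
m = rng.normal(size=20)          # feature correlations of 20 neighbors (toy)
S = rng.uniform(0, 5, size=20)   # learned dissimilarities (illustrative)
for beta in [0.1, 0.5, 0.75, 1.0]:
    alpha = selective_attention_scores(m, S, beta)
    print(beta, int((alpha < 0.05).sum()), "near-zero scores")
```

Consistent with the table, raising $\beta $ in this toy example pushes more attention scores toward zero.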
], [ "Irrelevance between connected nodes", "In this section, we show a large amount of neighbors are found to highly differ regarding either node features or graph structure.", "The corresponding results obtained from Cora and Cite are exemplified here to demonstrate such phenomenon (Fig.", "REF and REF ).", "Figure: Cumulative histograms of normalized Euclidean distance regarding node features.", "Large values mean connected node pairs have very different features.", "Here most distances are large.Figure: Cumulative histograms of common neighbors between connected node pairs.", "Small values mean connected node pairs differ in terms of graph structure.", "Here most values are 0s or close to 0.The difference between the node features of connected nodes is measured by normalized Euclidean distance, with Fig.", "REF depicting its cumulative histograms.", "It is evident that the distances of most node pairs in both datasets are very close to the largest.", "Specifically, the normalized distance of more than 70% of connected nodes in both datasets is higher than 0.7.", "This indicates that connected nodes have quite different features from their neighbors, and thus are highly irrelevant regarding node features.", "The number of common neighbors is applied to measure the difference in the structure of connected nodes, with Fig.", "REF depicting its cumulative histograms.", "The number of common neighbors is a widely used method to measure the similarity of two connected nodes [37], [34].", "It is observed that around 50% of connected node pairs do NOT have even one common neighbor in Cora and Cite, and most connected node pairs (about 90%) have very few (0, 1 or 2) common neighbors.", "Thus, the structure of connected node pairs is highly different, indicating these nodes are very irrelevant.", "Such observations indicate that a large portion of neighbors are irrelevant to the central node.", "These neighbors are possibly less informative for feature aggregation.", "Thus, they should be ignored by some appropriate attention mechanisms.", "Motivated by the discovered phenomenon that most neighbors are highly irrelevant, we propose Selective Attention to endow graph neural networks with the capability of flexibly ignoring irrelevant neighbors that can be identified by diverse forms of node-node dissimilarity." 
], [ "Data description", "The dataset statistics can be found in Table REF .", "Cora, Cite, and Pubmed are citation networks, where nodes, edges, and node features respectively represent the documents, document-document citations, and the keywords of the documents.", "Wiki and Uai are two web networks, where nodes, edges and node features represent web pages, web-web hyperlinks and descriptive information on these web pages, respectively.", "CoauthorCS is a co-authorship graph, where nodes are authors, which are connected by an edge if they co-author a paper, node features represent keywords for each author’s papers, and class labels indicate the most active fields of study for each author.", "For Cora, Cite and Pubmed, we follow previous works [22], [41] to use 20 nodes from each label as the training set, 500 nodes as the validation set, and 1000 nodes as the test set.", "For Wiki, Uai, and CoauthorCS, as there are no established splits for training and testing, we randomly generate five sets of splits.", "In each of them, 20 nodes of each ground truth class are sampled for training, 500 nodes are sampled for validation, and 1000 nodes are done for testing.", "For each of them, we randomly generate five sets of splits (i.e., training, validation, and testing splits) for the classification tasks, and use all nodes in each dataset for clustering tasks." ], [ "Pre-processing", "From Table REF , we find the dimension of input features are too high in Wiki, Uai and CoauthorCS, which may cause unstable training performances.", "Thus we apply a trainable linear layer without non-linearity activation to reduce the feature dimensions.", "After the linear layer, the dimension of the input to the GNNs are reduced to 512.", "This pre-processing step is applied for both our models and all the baselines." ], [ "Detailed settings of all approaches", "SATs are compared with twelve strong graph neural networks, including MoNet [33], GCN [22], GraphSage [13], JKNet [49], APPNP [23], ARMA [2], GIN [48], Neural Sparse [61], GAT [41], GATv2 [3], CAT [15], and HardGAT [10].", "To perform unbiased comparisons, the source codes released by the authors are used to implement all the mentioned baselines.", "In our experiments, all the baselines use a two-layer network structure to learn node representations, meaning that the output layer of each GNN is followed by one hidden layer.", "As for the tuning of each baseline, we mainly follow the configurations presented in [15], [22], [41].", "The configurations of the proposed SATs are generally same to those of GAT.", "Specifically, 8 attention heads are used in hidden layers, while the number of hidden layer dimension (for one head) is 8 for Cora, Cite, and Pubmed, and 32 for Wiki, Uai, and CoauthorCS.", "For the output layer, 1 attention head is used.", "“LeakyReLU” is used as the non-linearity in the attention mechanism with the negative slope as 0.2.", "All attention-based GNNs are trained with learning rate as 0.005, weight decay as 0.0005, number of training epochs as 1000 and dropout ratio as 0.6.", "For Contractive apprehension span, $\\beta = 1.0$ , while for Subtractive apprehension span, $\\beta = 0.5$ .", "All GNNs are initialized with Glorot initialization [11], and all GNNs are trained to minimize the cross-entropy loss of the training nodes using Adam optimizer [21].", "All the experiments are conducted on an NVIDIA RTX 3090 graphics card.", "The software environment of the experiments is CUDA 11.1, Python 3.8, and PyTorch 1.8.1." 
], [ "Visualization of attention scores", "We show the histograms of attention scores from the output layers of GAT, CAT and SATs in Fig.", "REF -REF .", "It is not surprising that the attention coefficients learned by different variants of SAT are generally more concentrated.", "We observe that SAT is able to learn ignoring on Cora, Cite, Wiki and Uai datasets.", "SAT ignores fewer neighbors in citation networks, as Cora, Cite, and Pubmed are three very sparse graphs, which means there are few nodes having many neighbors, so SAT does not have enough training samples to learn ignoring.", "However, things are different on Wiki and Uai datasets, where there are much more very small attention values, which indicates SAT indeed learns to ignore irrelevant neighbors.", "It is obvious that learning-to-ignore is indeed the reason that SATs achieve state-of-the-art performances on all the testing datasets.", "Figure: Attention scores from GAT, CAT and SAT on CoraFigure: Attention scores from GAT, CAT and SAT on CiteFigure: Attention scores from GAT, CAT and SAT on PubmedFigure: Attention scores from GAT, CAT and SAT on WikiFigure: Attention scores from GAT, CAT and SAT on UaiFigure: Attention scores from GAT, CAT and SAT on CoauthorCS" ], [ "Proof of Theorem ", "The proof of Theorem can be divided into two parts, i.e., the proof of the sufficiency and necessity of the iff conditions [15], [48], [59].", "The sufficiency of the iff conditions stated in Theorem is firstly proved.", "Given a central node $c_i$ , its aggregation function $h(c_i, X_i)$ can be written as: $\\begin{aligned}&h(c_i, X_i) = \\sum _{x\\in X_i} \\alpha _{c_ix}g(x),\\alpha _{c_ix} = \\frac{f_{c_ix}\\cdot d_{c_ix}}{ \\sum _{x\\in X_i} f_{c_ix}\\cdot d_{c_ix}},\\\\&f_{c_ix} = \\frac{\\exp { (m_{c_ix})}}{\\sum _{x\\in X_i}\\exp { (m_{c_ix})}}, d_{c_ix} = \\exp {(-\\beta \\mathbf {S}_{c_ix})},\\end{aligned}$ where $m_{c_ix}$ is the feature correlations between $c_i$ and the neighbor having vectorized features $x$ .", "Given Eq.", "(REF ), for two central nodes $c_1$ and $c_2$ , $h(c_1, X_1)$ and $h(c_2, X_2)$ can be written as: $\\begin{aligned}&h(c_1, X_1) = \\sum _{x\\in X_1}\\alpha _{c_1x}g(x)=\\sum _{x\\in X_1}[\\frac{f_{c_1x}\\cdot d_{c_1x}}{ \\sum _{x\\in X_1} f_{c_1x}\\cdot d_{c_1x}}]\\cdot g(x)\\\\&h(c_2, X_2) = \\sum _{x\\in X_2}\\alpha _{c_2x}g(x)=\\sum _{x\\in X_2}[\\frac{f_{c_2x}\\cdot d_{c_2x}}{ \\sum _{x\\in X_2} f_{c_2x}\\cdot d_{c_2x}}]\\cdot g(x)\\end{aligned}$ If $\\mathsf {M}_1 = \\mathsf {M}_2$ , $h(c_2, X_2)$ can be written as follows: $\\begin{aligned}&h(c_2, X_2) = \\sum _{x\\in \\mathsf {M}_2}[\\frac{f_{c_2x}\\cdot \\sum _{y=x, y\\in X_2}d_{c_2y}}{ \\sum _{x\\in \\mathsf {M}_2} f_{c_2x}\\cdot \\sum _{y=x, y\\in X_2}d_{c_2y}}]\\cdot g(x)\\\\&=\\sum _{x\\in \\mathsf {M}_2}[\\frac{\\frac{\\exp { (m_{c_2x})}}{\\sum _{x\\in X_2}\\exp { (m_{c_2x})}}\\cdot \\sum _{y=x, y\\in X_2}d_{c_2y}}{ \\sum _{x\\in \\mathsf {M}_2} \\frac{\\exp { (m_{c_2x})}}{\\sum _{x\\in X_2}\\exp { (m_{c_2x})}}\\sum _{y=x, y\\in X_2}d_{c_2y}}]\\cdot g(x)\\\\&=\\sum _{x\\in \\mathsf {M}_2}\\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\exp { (-\\beta \\mathbf {S}_{c_2y})}}{\\sum _{x\\in \\mathsf {M}_2}\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\exp { (-\\beta \\mathbf {S}_{c_2y})}}\\cdot g(x).\\end{aligned}$ Given $q \\cdot \\sum _{y=x, y\\in X_1} \\psi (-\\beta \\mathbf {S}_{c_1y}) = \\sum _{y=x, y\\in X_2} \\psi (-\\beta \\mathbf {S}_{c_2y})$ , and let $\\psi (\\cdot ) \\doteq \\exp {(\\cdot )}$ , we have: $h(c_2, X_2) = \\sum _{x\\in 
\\mathsf {M}_1}\\frac{q\\cdot \\exp { (m_{c_2x})}\\sum _{ y=x, y\\in X_1}d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}q\\cdot \\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_1}d_{c_1y}}\\cdot g(x).$ $h(c_1, X_1)=h(c_2, X_2)$ can be derived if $c_1=c_2$ .", "We next prove the necessity of the iff conditions that are stated in Theorem .", "And it can be achieved by showing possible contradictions when the iff conditions are not satisfied.", "If $h(c_1, X_1)=h(c_2, X_2)$ , we have: $\\begin{aligned}&h(c_1, X_1)-h(c_2, X_2) =\\\\&\\sum _{x\\in X_1}[\\frac{f_{c_1x}\\cdot d_{c_1x}}{ \\sum _{x\\in X_1} f_{c_1x}\\cdot d_{c_1x}}]\\cdot g(x)\\\\&- \\sum _{x\\in X_2}[\\frac{f_{c_2x}\\cdot d_{c_2x}}{ \\sum _{x\\in X_2} f_{c_2x}\\cdot d_{c_2x}}]\\cdot g(x)=0.\\end{aligned}$ Firstly assuming $\\mathsf {M}_1 \\ne \\mathsf {M}_2$ , we thus have: $\\begin{aligned}&h(c_1, X_1)-h(c_2, X_2) = \\\\&\\sum _{x\\in \\mathsf {M}_1 \\cap \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}} \\\\&- \\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g(x)\\\\&+\\sum _{x\\in \\mathsf {M}_1 \\setminus \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}]\\cdot g(x)\\\\&-\\sum _{x\\in \\mathsf {M}_2 \\setminus \\mathsf {M}_1}[\\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g(x)=0.\\end{aligned}$ As Eq.", "(REF ) holds for any possible $g(\\cdot )$ , we define a new function $g^\\prime (\\cdot )$ : $\\begin{aligned}&g(x) = g^\\prime (x), \\text{for } x\\in \\mathsf {M}_1\\cap \\mathsf {M}_2\\\\&g(x) = g^\\prime (x) - 1, \\text{for }x \\in \\mathsf {M}_1\\setminus \\mathsf {M}_2\\\\&g(x) = g^\\prime (x) + 1, \\text{for } x\\in \\mathsf {M}_2\\setminus \\mathsf {M}_1\\\\\\end{aligned}$ It is known that Eq.", "(REF ) holds for both $g(\\cdot )$ and $g^\\prime (\\cdot )$ .", "Thus, we have: $\\begin{aligned}&h(c_1, X_1)-h(c_2, X_2) = 0 =\\\\&\\sum _{x\\in \\mathsf {M}_1 \\cap \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}\\\\&- \\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g^\\prime (x)\\\\&+\\sum _{x\\in \\mathsf {M}_1 \\setminus \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}]\\cdot g^\\prime (x)\\\\&-\\sum _{x\\in \\mathsf {M}_2 \\setminus \\mathsf {M}_1}[\\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g^\\prime (x)\\\\\\end{aligned}$ As Eqs.", "(REF ) and (REF ) are equal to zero, we have: $\\begin{aligned}&\\sum _{x\\in \\mathsf {M}_1 \\setminus \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}]\\\\ & + \\sum _{x\\in \\mathsf {M}_2 \\setminus \\mathsf {M}_1}[\\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in 
X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]=0.\\end{aligned}$ It is seen that Eq.", "(REF ) does not hold as $\\exp (\\cdot )$ is positive.", "$\\mathsf {M}_1 \\ne \\mathsf {M}_2$ is therefore not true.", "Now assuming $\\mathsf {M}_1 = \\mathsf {M}_2 = \\mathsf {M}$ and excluding the irrational terms, Eq.", "(REF ) can be rewritten as follows: $\\begin{aligned}&\\sum _{x\\in \\mathsf {M}_1 \\cap \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}\\\\& - \\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g(x)=0.\\end{aligned}$ To ensure the equation above to hold, each term in it has to be zero: $\\begin{aligned}&\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}d_{c_1y}}\\\\& - \\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}d_{c_2y}}=0.\\end{aligned}$ The above equation can be further rewritten as: $\\begin{aligned}&\\frac{\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}=\\\\&\\frac{\\exp { (m_{c_2x})}\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\exp { (m_{c_1x})}\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\!\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}.\\end{aligned}$ Assuming $\\mathsf {M} = \\lbrace s, s_0\\rbrace $ , $c_1 = s_0$ , $c_2 = s$ , the feature correlations between the central node and its neighbors are $m_{c_1x} = 1$ for $x \\in \\mathsf {M}$ , $m_{c_2s} = 1$ , and $m_{c_2s_0} = 2$ .", "When $x = s$ , we have: $\\begin{aligned}&\\frac{\\sum _{s\\in X_1}d_{c_1s}}{\\sum _{s\\in X_2}d_{c_2s}}=\\frac{e [e\\sum _{s\\in X_1}d_{c_1s} + e\\sum _{s_0\\in X_1}d_{c_1s_0}]}{e[e\\sum _{s\\in X_2}d_{c_2s} + e^2\\sum _{s_0\\in X_2}d_{c_2s_0}]}.\\end{aligned}$ $d_{cx} = \\exp { (-\\beta \\mathbf {S}_{cx})}$ can be any positive value as the computation of feature correlation and the learning of node-node dissimilarity ($\\mathbf {S}$ ) are mutually independent.", "Considering $d_{cx} = \\exp { (-\\beta \\mathbf {S}_{cx})}=a > 0$ , we have $\\frac{\\mu _1(s)}{\\mu _2(s)}=\\frac{|X_1|}{|X_2|-n+ne}$ .", "This equation does not hold because LHS (a rational number) does not equal RHS (an irrational number).", "Thus, $c_1 \\ne c_2$ is not true.", "Let $c_1 = c_2 = c$ , Eq.", "(REF ) can be rewritten as follows: $\\begin{aligned}&\\frac{\\sum _{y=x, y\\in X_1}\\!d_{cy}}{\\sum _{y=x, y\\in X_2}\\!d_{cy}}\\!=\\!\\frac{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{cx})}\\!\\sum _{y=x, y\\in X_1}\\!d_{cy}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{cx})}\\!\\sum _{y=x, y\\in X_2}\\!d_{cy}}\\!=\\!const \\!>\\!", "0.\\end{aligned}$ Setting $\\frac{1}{q} = const$ and $d_{cy} = \\exp { (-\\beta \\mathbf {S}_{cy})} = \\psi (-\\beta \\mathbf {S}_{cy})$ , we finally have $q\\sum _{y=x, y\\in X_1}\\!\\psi (-\\beta \\mathbf {S}_{cy})=\\sum _{y=x, y\\in X_2}\\!\\psi (-\\beta \\mathbf {S}_{cy})$ .", "Theorem can also be proved by considering the sufficiency and necessity of the iff conditions stated.", "The sufficiency of the iff conditions in Theorem is firstly proved.", "Given a central node $c_i$ , its aggregation function $h(c_i, X_i)$ can be written as: $\\begin{aligned}&h(c_i, X_i) = \\sum _{x\\in X_i} 
\\alpha _{c_ix}g(x),\\alpha _{c_ix} = \\frac{f_{c_ix}\\cdot d_{c_ix}}{ \\sum _{x\\in X_i} f_{c_ix}\\cdot d_{c_ix}},\\\\&f_{c_ix} = \\frac{\\exp { (m_{c_ix})}}{\\sum _{x\\in X_i}\\exp { (m_{c_ix})}}, d_{c_ix} =1-\\beta \\frac{\\exp {(\\mathbf {S}_{c_ix})}}{\\sum _{x \\in X_i}\\exp {(\\mathbf {S}_{c_ix})}},\\end{aligned}$ Given $c_1 = c_2$ , $\\mathsf {M}_1 = \\mathsf {M}_2 = \\mathsf {M}$ , $q[\\sum _{y=x, y\\in X_1}\\sum _{x\\in X_1}\\psi (\\mathbf {S}_{c_1x})-\\sum _{y=x, y\\in X_1}$ $ \\beta \\psi (\\mathbf {S}_{c_1y}) ]= \\sum _{y=x, y\\in X_2}\\sum _{x\\in X_2}\\psi (\\mathbf {S}_{c_2x}) - \\sum _{y=x, y\\in X_2} \\beta \\\\ \\psi (\\mathbf {S}_{c_2y})$ , and $\\psi (\\cdot ) \\doteq \\exp {(\\cdot )}$ , we have: $\\begin{aligned}&h(c_2, X_2) = \\sum _{x \\in X_2}\\frac{f_{c_2x}\\cdot d_{c_2x}}{ \\sum _{x\\in X_2} f_{c_2x}\\cdot d_{c_2x}} g(x)\\\\&\\!=\\!\\sum _{x \\in \\mathsf {M}_2}\\!\\frac{\\exp {(m_{c_2x})}\\!\\sum _{y=x, y\\in X_2}\\!", "[1-\\beta \\frac{\\exp {(\\mathbf {S}_{c_2y})}}{\\sum _{x \\in X_2}\\!\\exp {(\\mathbf {S}_{c_2x})}}]}{\\sum _{x \\in \\mathsf {M}_2}\\exp {(m_{c_2x})}\\!\\sum _{y=x, y\\in X_2}\\!", "[1\\!-\\!\\beta \\frac{\\exp {(\\mathbf {S}_{c_2y})}}{\\sum _{x \\in X_2}\\exp {(\\mathbf {S}_{c_2x})}}]}\\!g(x)\\\\&\\!=\\!\\sum _{x\\in \\mathsf {M}_1}\\frac{q\\cdot \\exp { (m_{c_1x})}\\sum _{ y=x, y\\in X_1}d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}q\\cdot \\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}d_{c_1y}}g(x) \\!= \\!h(c_1, X_1).\\end{aligned}$ Next, we prove the necessity of the iff conditions stated in Theorem .", "Given $h(c_1, X_1) = h(c_2, X_2)$ , we have: $\\begin{aligned}&h(c_1, X_1)-h(c_2, X_2) \\!=\\!", "\\sum _{x\\in X_1}[\\frac{f_{c_1x}\\cdot d_{c_1x}}{ \\sum _{x\\in X_1} f_{c_1x}\\cdot d_{c_1x}}]\\cdot g(x)\\\\& - \\sum _{x\\in X_2}[\\frac{f_{c_2x}\\cdot d_{c_2x}}{ \\sum _{x\\in X_2} f_{c_2x}\\cdot d_{c_2x}}]\\cdot g(x)=0.\\end{aligned}$ Assuming $\\mathsf {M}_1 \\ne \\mathsf {M}_2$ , we have: $\\begin{aligned}&h(c_1, X_1)-h(c_2, X_2) = \\\\&\\sum _{x\\in \\mathsf {M}_1 \\cap \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}\\\\&- \\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g(x)\\\\&+\\sum _{x\\in \\mathsf {M}_1 \\setminus \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}]\\cdot g(x)\\\\&-\\sum _{x\\in \\mathsf {M}_2 \\setminus \\mathsf {M}_1}[\\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g(x) = 0.\\end{aligned}$ Again we may define $g^\\prime (\\cdot )$ as Eq.", "(REF ) shows.", "It is known that $h(c_1, X_1)-h(c_2, X_2)=0$ holds for both $g(\\cdot )$ and $g^\\prime (\\cdot )$ .", "Thus, we have: $\\begin{aligned}&h(c_1, X_1)-h(c_2, X_2) = \\\\&\\sum _{x\\in \\mathsf {M}_1 \\cap \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}\\\\&- \\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g^\\prime (x)\\\\&+\\sum _{x\\in \\mathsf {M}_1 \\setminus \\mathsf {M}_2}\\!", 
"[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}]\\cdot g^\\prime (x)\\\\&-\\sum _{x\\in \\mathsf {M}_2 \\setminus \\mathsf {M}_1}[\\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]\\cdot g^\\prime (x) = 0.\\\\\\end{aligned}$ As both Eqs.", "(REF ) and (REF ) equal zero, we have: $\\begin{aligned}&\\sum _{x\\in \\mathsf {M}_1 \\setminus \\mathsf {M}_2}\\!", "[\\frac{\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}]\\\\& + \\sum _{x\\in \\mathsf {M}_2 \\setminus \\mathsf {M}_1}[\\frac{\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}]=0.\\end{aligned}$ Like the analysis on proving Theorem , $\\mathsf {M}_1 \\ne \\mathsf {M}_2$ is false.", "Now we are able to assume $\\mathsf {M}_1 = \\mathsf {M}_2 = \\mathsf {M}$ .", "The terms about $x \\in \\mathsf {M}_1 \\setminus \\mathsf {M}_2$ and $x \\in \\mathsf {M}_2 \\setminus \\mathsf {M}_1$ in Eq.", "(REF ) can therefore be eliminated and it is known that the following equation must hold to ensure $h(c_1, X_1)-h(c_2, X_2)=0$ : $\\begin{aligned}&\\frac{\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}=\\\\&\\frac{\\exp { (m_{c_2x})}\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{c_1x})}\\sum _{y=x, y\\in X_1}\\!d_{c_1y}}{\\exp { (m_{c_1x})}\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{c_2x})}\\!\\sum _{y=x, y\\in X_2}\\!d_{c_2y}}.\\end{aligned}$ Assuming $\\mathsf {M} = \\lbrace s, s_0\\rbrace $ , $c_1 = s_0$ , $c_2 = s$ , the feature correlations between the central node and its neighbors are $m_{c_1x} = 1$ for $x \\in \\mathsf {M}$ , $m_{c_2s} = 1$ , and $m_{c_2s_0} = 2$ .", "When $x = s$ , we have: $\\begin{aligned}&\\frac{\\sum _{s\\in X_1}d_{c_1s}}{\\sum _{s\\in X_2}d_{c_2s}}=\\frac{e [e\\sum _{s\\in X_1}d_{c_1s} + e\\sum _{s_0\\in X_1}d_{c_1s_0}]}{e[e\\sum _{s\\in X_2}d_{c_2s} + e^2\\sum _{s_0\\in X_2}d_{c_2s_0}]}.\\end{aligned}$ $d_{cx}=1-\\beta \\frac{\\exp {(\\mathbf {S}_{cx})}}{\\sum _{x \\in X}\\exp {(\\mathbf {S}_{cx})}}$ can be any positive value as the computation of feature correlation and the learning of $\\mathbf {S}$ are mutually independent.", "Considering $d_{cx} = a > 0$ , We have $\\frac{\\mu _1(s)}{\\mu _2(s)}=\\frac{|X_1|}{|X_2|-n+ne}$ .", "This equation does not hold because LHS (a rational number) does not equal RHS (an irrational number).", "Thus, $c_1 \\ne c_2$ is false.", "Since $c_1 = c_2 = c$ , Eq.", "(REF ) can be rewritten as: $\\begin{aligned}&\\frac{\\sum _{y=x, y\\in X_1}\\!", "[\\sum _{x \\in X_1}\\exp {(\\mathbf {S}_{cx})} - \\beta \\exp {(\\mathbf {S}_{cy})}]}{\\sum _{y=x, y\\in X_2}[\\sum _{x \\in X_2}\\exp {(\\mathbf {S}_{cx})} - \\beta \\exp {(\\mathbf {S}_{cy})}]} =\\\\&\\!\\frac{\\sum _{x\\in \\mathsf {M}_1}\\!\\exp { (m_{cx})}\\!\\sum _{y=x, y\\in X_1}[\\sum _{x \\in X_1}\\exp {(\\mathbf {S}_{cx})} - \\beta \\exp {(\\mathbf {S}_{cy})}]}{\\sum _{x\\in \\mathsf {M}_2}\\!\\exp { (m_{cx})}\\!\\sum _{y=x, y\\in X_2}[\\sum _{x \\in X_2}\\exp {(\\mathbf {S}_{cx})} - \\beta \\exp {(\\mathbf {S}_{cy})}]}\\\\&=const > 0.\\end{aligned}$ Letting $const = \\frac{1}{q}$ and $\\exp {(\\cdot )} = \\psi (\\cdot )$ , we finally have $q\\sum _{y=x, y\\in X_1}$ $[\\sum _{x\\in X_1}\\psi (\\mathbf {S}_{cx}) - \\beta \\psi 
(\mathbf {S}_{cy}) ]= \sum _{y=x, y\in X_2}[\sum _{x\in X_2}\psi (\mathbf {S}_{cx}) - \beta \psi (\mathbf {S}_{cy})]$ .", "We may complete this proof by following the procedure presented in [48], [15].", "According to Theorem , we assume $X_1 = (\mathsf {M}, \mu _1)$ , $X_2 = (\mathsf {M}, \mu _2)$ , $c \in \mathsf {M}$ , and $q\cdot \sum _{y=x, y\in X_1} \psi (-\beta \mathbf {S}_{c_1y}) = \sum _{y=x, y\in X_2} \psi (-\beta \mathbf {S}_{c_2y})$ , for $q > 0$ .", "When $\mathcal {T}$ uses the attention scores solely according to Eq.", "(REF ) to aggregate node features, we have $\sum _{x\in X_1} \alpha _{cx} g(x) = \sum _{x\in X_2} \alpha _{cx} g(x)$ .", "This means $\mathcal {T}$ fails to discriminate the structures satisfying the conditions stated in Theorem .", "When $\mathcal {T}$ uses Eq.", "(REF ) where the attention coefficients are obtained by the Contractive apprehension span (Eq.", "(REF )) to aggregate node features, we have $\sum _{x\in X_1} \alpha _{cx} g(x) - \sum _{x\in X_2} \alpha _{cx} g(x) = \epsilon (\frac{1}{|X_1|}-\frac{1}{|X_2|})\alpha _{cc} g(c)$ , where $|X_1| = |\mathcal {N}_1|$ , and $|X_2| = |\mathcal {N}_2|$ .", "Since $|X_1| \ne |X_2|$ , $\sum _{x\in X_1} \alpha _{cx} g(x) - \sum _{x\in X_2} \alpha _{cx} g(x) \ne 0$ , which means $\mathcal {T}$ based on Eqs.", "(REF ) and (REF ) is able to discriminate all the structures that $\mathcal {T}$ solely based on Eq.", "(REF ) fails to distinguish.", "Following a similar procedure, when the Selective Attention layer (Eq.", "(REF )) utilizes the Subtractive apprehension span (Eq.", "(REF )), we are able to prove that the corresponding aggregation function can also distinguish those structures that the aggregation function solely based on Eq.", "(REF ) fails to discriminate." ] ]
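To make the two damping terms that drive the proofs above concrete, here is a minimal NumPy comparison (ours, purely illustrative) of the Contractive form d = exp(-beta*S) and the Subtractive form d = 1 - beta*softmax(S) appearing in the two aggregation functions; the dissimilarity values are arbitrary placeholders.

```python
import numpy as np

def contractive_d(S, beta):
    # d_{cx} = exp(-beta * S_{cx}): decays multiplicatively with dissimilarity.
    return np.exp(-beta * S)

def subtractive_d(S, beta):
    # d_{cx} = 1 - beta * softmax(S)_{cx}: subtracts normalized dissimilarity.
    e = np.exp(S - S.max())
    return 1.0 - beta * e / e.sum()

S = np.array([0.2, 1.0, 3.0, 6.0])  # toy dissimilarities to one central node
for beta in (0.5, 1.0):
    print(beta, contractive_d(S, beta).round(3), subtractive_d(S, beta).round(3))
```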
2210.07715
[ [ "Dirichlet is not just bad and singular in many rational IFS fractals" ], [ "Abstract For $m\\ge 2$, consider $K$ the $m$-fold Cartesian product of the limit set of an IFS of two affine maps with rational coefficients.", "If the contraction rates of the IFS are reciprocals of integers, and $K$ does not degenerate to singleton, we construct vectors in $K$ that lie within the ``folklore set'' as defined by Beresnevich et al., meaning they are Dirichlet improvable but not singular or badly approximable (in fact our examples are Liouville vectors).", "We further address the topic of lower bounds for the Hausdorff and packing dimension of these folklore sets within $K$, however we do not compute bounds explicitly.", "Our class of fractals extends (Cartesian products of) classical missing digit fractals, for which analogous results had recently been obtained." ], [ "The “folklore set” within fractals", "We start by defining classes of real vectors in $\\mathbb {R}^m$ according to their properties regarding rational approximation.", "Denote by $\\Vert \\underline{\\xi }\\Vert $ the distance of $\\underline{\\xi }\\in \\mathbb {R}^m$ to the nearest integer vector with respect to the maximum norm.", "Following Davenport and Schmidt [4], we call $\\underline{\\xi }\\in \\mathbb {R}^m$ Dirichlet improvable if for some $c\\in (0,1)$ the system $ 1\\le q\\le Q, \\qquad \\Vert q\\underline{\\xi }\\Vert < cQ^{-1/m}$ has an integer solution $q$ for all large $Q$ .", "The terminology is rooted in Dirichlet's Theorem, which asserts that (REF ) is soluble for $c=1$ for any $\\underline{\\xi }\\in \\mathbb {R}^m$ .", "We call $\\underline{\\xi }$ singular if (REF ) has a solution for arbitrarily small $c>0$ and $Q\\ge Q_0(c)$ .", "We say $\\underline{\\xi }$ is badly approximable if (REF ) has no solution for all $Q$ and some $c>0$ , or equivalently for some $c^{\\ast }>0$ the estimate $ \\Vert q\\underline{\\xi }\\Vert > c^{\\ast } q^{-1/m}$ holds for any integer $q>0$ .", "The setup (REF ) is usually called uniform approximation, whereas (REF ) is on ordinary approximation.", "We denote these classes of vectors, which are central objects of investigation in Diophantine approximation, by $Di_m, Sing_m$ and $Bad_m$ respectively.", "If $m=1$ , then $Sing_1=\\mathbb {Q}$ easily follows from an observation of Khintchine [7], and Davenport and Schmidt [3] showed $Di_1=Bad_1$ .", "Let us now assume $m\\ge 2$ .", "It is clear that $Sing_m\\cap Bad_m=\\emptyset $ and $Sing_m\\subseteq Di_m$ , on the other hand again Davenport and Schmidt [4] showed that $Bad_m\\subseteq Di_m$ as well.", "These relations motivate to study the “folklore set” as defined in [1] (see also [9], [14]) $\\mathbf {FS}_m= Di_m\\setminus (Sing_m\\cup Bad_m).$ The main result of [1] constitutes that $\\mathbf {FS}_m\\ne \\emptyset $ for $m\\ge 2$ .", "The author provided a very different, constructive proof in [14].", "An advantage of the latter method relevant to us in this paper is that the coordinates of $\\underline{\\xi }$ may be chosen in classical missing digit fractals, like the Cantor middle-third set, see the remarks below Theorem REF below.", "This paper aims to extend this result to a more general class of fractals.", "We study classical fractals given as the attractor of an iterated function system (IFS), see § REF below for details.", "Fishman and Simmons [6] (see also the subsequent papers [12], [15]) studied IFS of $J\\ge 2$ affine maps with rational coefficients, i.e.", "$ f_i(x)= c_i x+ d_i, \\qquad 1\\le i\\le J, \\; 
c_i\\in (-1,1)\\cap \\mathbb {Q},\\; d_i\\in \\mathbb {Q}.$ For our results below, we may restrict to $J=2$ a fortiori, which we will henceforth assume and write $f=f_1, g=f_2$ .", "We can further assume that $f$ and $g$ have different fixed points, which translates into $ \\frac{d_1}{1-c_1} \\ne \\frac{d_2}{1-c_2}.$ Indeed, otherwise the attractor degenerates to just this common fixed point, a singleton." ], [ "Our main result and a conjecture", "Given the results and proof strategy of [14], it is plausible that the very mild, necessary assumption (REF ) suffices for the attractor of any IFS as in (REF ) to contain vectors in $\\mathbf {FS}_m$ .", "Moreover, we believe that the property of not being badly approximable within the definition of $\\mathbf {FS}_m$ can be considerably sharpened.", "Recall $\\underline{\\xi }\\in \\mathbb {R}^m\\setminus \\mathbb {Q}^m$ is called Liouville vector if for arbitrarily large $t$ there is an integer solution to the estimate $\\Vert q\\underline{\\xi }\\Vert \\le q^{-t}.$ In the sequel, throughout we assume $m\\ge 2, \\qquad K=C^m$ where $C$ is the attractor of an IFS as above, and will identify the IFS with $C$ , or $K$ , when this is convenient.", "We would like to show the following idealistic claim.", "Conjecture 1 For an IFS as in (REF ), there exist uncountably many Liouville vectors $\\underline{\\xi }\\in K$ that are Dirichlet improvable but not singular.", "In particular $\\mathbf {FS}_m\\cap K\\ne \\emptyset .$ Conjecture REF may hold for irrational coefficients $c_i, d_i$ in (REF ) as well, however we are not able to apply our method below in this case.", "In the stated form with arbitrary rational coefficients it is still rather challenging, and we did not succeed in proving it.", "Building up on the method from [14], the problem turns out to become easier when the contraction factors $c_i$ are reciprocals of integers (remark: this restriction already occurred in [6] and [12] for different reasons), which just for simplicity of notation we assume to be positive.", "Our main new result in this paper therefore specializes on the class of IFS $ f(x)= \\frac{1}{b_1} \\cdot x + \\frac{r_1}{s_1}, \\qquad g(x)= \\frac{1}{b_2} \\cdot x + \\frac{r_2}{s_2},$ where $b_i\\ge 2, s_i>0, r_i$ are integers with $(r_i,s_i)=1$ , and satisfying (REF ) which becomes $ \\frac{b_1r_1}{(b_1-1)s_1} \\ne \\frac{b_2r_2}{(b_2-1)s_2}.\\qquad \\mathrm {(i)}$ Indeed, we verify Conjecture REF for this smaller class of fractals.", "Theorem 1.1 For an IFS as in (REF ) with restriction (REF ), the conclusion of Conjecture REF holds.", "Theorem REF extends the special case of missing digit fractals from [14], that is for $C$ the set of real numbers whose expansions to some given base $b\\ge 3$ only use digits within a two digit subset of $\\lbrace 0,1,\\ldots ,b-1\\rbrace $ .", "This corresponds to the setting $ b_1=b_2=s_1=s_2=b\\ge 2,\\qquad r_i\\in \\lbrace 0,1,\\ldots ,b-1\\rbrace ,\\; r_1\\ne r_2.$ An example of (REF ) is the two-fold Cartesian product of the famous Cantor middle third set, obtained for the parameter choices $m=2$ , $b=3$ and $r_1=0, r_2=2$ in (REF ).", "Note that the regularity condition (REF ), or equivalently (REF ), follows automatically from the setup (REF ).", "On the other hand, presumably numbers in a general fractal $C$ derived from (REF ) have no pattern with respect to expansion in any base, however results of this type may be hard to prove.", "We once again stress that, maybe rather surprinsingly, (REF ) suffices for the claim of Theorem 
REF , in particular usual regularity assumptions like the open set condition need not be assumed.", "Define the Dirichlet constant of $\underline{\xi }\in \mathbb {R}^m$ as $ \Theta (\underline{\xi }):= \limsup _{Q\rightarrow \infty }\;\;\left( Q^{1/m}\cdot \min _{1\le q\le Q,\; q\in \mathbb {Z}}\Vert q\underline{\xi }\Vert \right).$ Then $\Theta (\underline{\xi })\in [0,1]$ , and $\underline{\xi }$ is Dirichlet improvable iff $\Theta (\underline{\xi })<1$ , and singular iff $\Theta (\underline{\xi })=0$ .", "Our proof shows that, as in [14] for missing digit sets, it is possible to construct $\underline{\xi }$ in a Cantor set as in Theorem REF whose Dirichlet constant differs from a given real number in $[0,1]$ by at most a fixed multiplicative factor that depends on $m,b$ only.", "We emphasize again that the method in [1] relying on Roy's Theorem [10] does not give any indication of how to obtain the desired restriction to fractals as in Conjecture REF .", "Our method is based on the author's previous work in [14].", "If the rational contraction rates of $f,g$ are no longer reciprocals of integers, a supposedly severe problem in our method occurs.", "In the “metaresult” Theorem REF below we establish a weaker conclusion if we permit one contraction rate to be an arbitrary rational.", "Analyzing its proof (especially (REF )), the obstacle when generalizing to both rates being arbitrary rationals becomes apparent.", "Finally, if some coefficients of the IFS are irrational, then the induced Cantor set may not contain any rational vector (see Boes, Darst and Erdős [2] for a rigorous proof of this fact for a similar type of Cantor set), which completely disables our proof strategy.", "In particular the sets (REF ), (REF ) in Theorem REF below become empty."
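As an aside, the Dirichlet constant in the definition above can be approximated by brute force for a concrete vector; the following Python sketch (ours, purely illustrative) computes a finite-$Q$ proxy of the limsup, with an arbitrary test vector — only a proxy, since the true value requires $Q\rightarrow \infty $ .

```python
import numpy as np

def dist_to_nearest_int(v):
    # Max-norm distance ||v|| to the nearest integer vector.
    return np.max(np.abs(v - np.round(v)))

def dirichlet_constant_proxy(xi, Q_max=20000):
    """Finite-Q proxy of Theta(xi) = limsup_Q Q^{1/m} * min_{1<=q<=Q} ||q xi||."""
    xi = np.asarray(xi, dtype=float)
    m = xi.size
    best, proxy = np.inf, 0.0
    for q in range(1, Q_max + 1):
        best = min(best, dist_to_nearest_int(q * xi))
        proxy = max(proxy, q ** (1.0 / m) * best)  # bracket evaluated at Q = q
    return proxy

print(dirichlet_constant_proxy([2 ** 0.5, 3 ** 0.5]))  # arbitrary test vector
```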
], [ "On intrinsic approximation", "Rewriting (REF ) by introducing the nearest integers to $q\\xi _j$ , $1\\le j\\le m$ , and dividing by $q$ , gives rise to rational vectors with common denominator $q\\le Q$ and distance $<cQ^{-1/m}q^{-1}$ to $\\underline{\\xi }$ , in maximum norm.", "Let us define $Di_m^{(K)}\\subseteq Di_m\\cap K$ the set of “intrinsically Dirichlet improvable vectors” with respect to $K$ for which for some $c\\in (0,1)$ these rational vectors can be taken within the Cantor set $K$ , for all large $Q$ .", "The inclusion $Di_m^{(K)}\\subseteq K$ hereby follows from the compactness of $K$ .", "The real vectors that we construct in the proof of Theorem REF indeed have this property of good approximants within $K$ .", "Consequently, if we define likewise the intrinsic sets $Sing_m^{(K)}\\subseteq Sing_m\\cap K, Bad_m^{(K)}\\supseteq Bad_m\\cup K^c$ and intrinsic Liouville vectors, then Theorem REF can be refined by means of the derived smaller “intrinsic folklore sets” as follows.", "Theorem 1.2 With notation and assumptions as in Theorem REF , the smaller set $ \\mathbf {FS}_m^{(K)}:= Di_m^{(K)}\\setminus (Sing_m\\cup Bad_m)$ still contains uncountably many intrinsic Liouville vectors of $K$ .", "In particular $ \\mathbf {FS}_m^{(K)\\ast }:= Di_m^{(K)}\\setminus (Sing_m^{(K)}\\cup Bad_m^{(K)})\\ne \\emptyset .$ However, the sets $\\mathbf {FS}_m^{(K)}$ and $\\mathbf {FS}_m^{(K)\\ast }$ are not very natural objects in the sense that the natural intrinsic Dirichlet function is no longer $Q^{-1/m}$ .", "Hereby we mean the minimal function $\\Phi : \\mathbb {N}\\rightarrow (0,\\infty )$ so that (REF ) with right hand side replaced by $\\Phi (Q)$ admits an intrinsic solution for all $Q\\ge 1$ and any $\\underline{\\xi }\\in \\mathbb {R}^m$ .", "This inspires the following problem.", "Problem 1 Does a variant of (REF ) for altered sets with respect to the natural intrinsic Dirichlet function hold?", "Let $m=1$ , and consider $K=C$ derived from an IFS as in (REF ), and assume the open set condition holds (see [6] for instance).", "Then the natural Dirichlet function is of the form $\\Phi (Q)=c(\\log Q)^{-1/d}$ for $d$ the Hausdorff dimension of $C$ and some $c>0$ , see [6].", "However, the precise value of $c$ seems to be unknown.", "See also [12] for partial results for a similar class of Cantor sets in arbitrary dimension $m\\ge 1$ ." 
], [ "Extensions: on exact approximation and metric theory", "Theorem REF can be generalized in certain directions similar as [14].", "Here we just provide more information on exact approximation and metrical theory.", "Firstly, we can prescribe uniform approximation with respect to a wide class of functions $\\Phi $ (see [14]) up to a fixed factor.", "We want to explicitly state a final result that captures a relaxed claim regarding power functions $\\Phi $ , and even extends to slightly more general settings.", "For given $\\underline{\\xi }\\in \\mathbb {R}^m$ , define its exponent of uniform simultaneous approximation $\\widehat{\\omega }(\\underline{\\xi })$ as the supremum of $t$ such that $1\\le q\\le Q, \\qquad \\Vert q\\underline{\\xi }\\Vert \\le Q^{-t}$ has an integer solution $q$ for all large $Q$ .", "Then $\\widehat{\\omega }(\\underline{\\xi })\\ge 1/m$ for any $\\underline{\\xi }\\in \\mathbb {R}^m$ by Dirichlet's Theorem and $\\widehat{\\omega }(\\underline{\\xi })\\le 1$ for any $\\underline{\\xi }\\in \\mathbb {R}^m\\setminus \\mathbb {Q}^m$ by the observation of Khintchine [7] recalled in the introduction.", "We call $\\underline{\\xi }\\in \\mathbb {R}^m$ totally irrational if it does not lie in a rational hyperplane of $\\mathbb {R}^m$ , a natural restriction, and refer to the spectrum of $\\widehat{\\omega }$ for $K$ for the set of all values $\\widehat{\\omega }(\\underline{\\xi })$ that occur for totally irrational arguments $\\underline{\\xi }\\in K$ .", "For $u_1\\ge 1$ an integer, define an IFS $ f(x)= \\frac{u_1}{b_1} + \\frac{r_1}{s_1}, \\qquad g(x)= \\frac{1}{b_2} + \\frac{r_2}{s_2},$ where clearly we can assume $u_1<b_1$ and $(u_1,b_1)=1$ .", "We must assume the analogue (generalization) of (REF ), which simply becomes $ \\frac{b_1r_1u_1}{(b_1-1)s_1} \\ne \\frac{b_2r_2}{(b_2-1)s_2}.\\qquad \\mathrm {(ii)}$ The fractals $K$ from Theorem REF just respresent the special case $u_1=1$ and are thus contained in the following result.", "Theorem 1.3 Let $K$ be derived from an IFS as in (REF ) with property (REF ).", "Then given $\\omega \\in [\\frac{1}{m}, \\frac{1}{m-1}]\\cup \\lbrace 1\\rbrace $ , there exist totally irrational Liouville vectors in $K$ with $\\widehat{\\omega }(\\underline{\\xi })=\\omega $ .", "In particular, the spectrum of $\\widehat{\\omega }$ for $K$ contains $[\\frac{1}{m}, \\frac{1}{m-1}]\\cup \\lbrace 1\\rbrace $ .", "Remark 1 To obtain the claim for $\\omega =1$ only, we can take any IFS as in (REF ).", "The accordingly modified claims of Theorem REF on intrinsic approximation apply as well.", "For $\\omega <\\frac{1}{m-1}$ the condition of $\\underline{\\xi }$ being totally irrational is automatically satisfied, so in particular in context of Theorem REF .", "Theorem REF complements results by Kleinbock, Moshchevitin and Weiss [8], who provide the weaker conclusion that the spectrum of $\\widehat{\\omega }$ has non-empty intersection with $[\\frac{1}{m-1},1]$ , but for a larger class of fractals (still their result seems not to cover IFS as in (REF ) but with irrational coefficients $c_i, d_i$ ).", "It is likely that small twists of the proofs enable one to extend the claim to the entire interval $[1/m,1]$ , as for $m=2$ .", "Note that upon dropping the totally irrational condition, this an easy consequence of the original result.", "We may consider $\\tilde{m}:=\\lceil \\omega ^{-1}\\rceil $ and obtain totally irrational vectors $\\tilde{\\underline{\\xi }}\\in \\mathbb {R}^{\\tilde{m}}$ as in Theorem REF .", "Then it suffices to take 
the remaining $m-\tilde{m}\ge 0$ coordinates of $\underline{\xi }$ arbitrarily within the $\mathbb {Q}$ -span of $\lbrace \tilde{\xi }_1, \ldots , \tilde{\xi }_{\tilde{m}},1\rbrace \in \mathbb {R}^{\tilde{m}+1}$ .", "Note further that in the special case of missing digit Cantor sets, the spectrum of $\widehat{\omega }$ is indeed $[1/m,1]$ , as follows easily from the more general results in [11].", "Regarding the metrical theory of the sets $\mathbf {FS}_m \cap K$ , our method suggests that lower bounds on the Hausdorff dimension and possibly also the packing dimension can be derived, by a similar strategy as in [14]: If $b_1=b_2$ , we can arbitrarily redefine the digits of the words $ {w} _j$ defined in § REF below at positions in the same type of long intervals as in [14], without the induced real vectors leaving the set.", "When $b_1\ne b_2$ , one needs to be more careful in view of certain restrictions that enter in order to preserve an analogue of (REF ) below, but the basic idea remains the same.", "What remains to be done is to estimate from below the dimensions of the subsets of $\mathbf {FS}_m \cap K\subseteq \mathbb {R}^m$ induced by these digital patterns.", "We believe this should be feasible by a similar strategy as in [14], at least for the Hausdorff dimension, possibly again with the aid of a principle from Falconer's book [5].", "We conjecture that this bound is independent of the shifts, and therefore, if $b_1=b_2$ , agrees with the bounds in the case of missing digit sets from [14]." ], [ "A remark on more general fractals", "We finally notice that the attractors of certain IFS consisting of rational, affine maps defined on $\mathbb {R}^m$ do not contain any element in $\mathbf {FS}_m$ .", "Indeed, consider for example $K$ the attractor of $f( \underline{x} )= \frac{1}{2} \underline{x} , \qquad g( \underline{x} )= \frac{1}{3} \underline{x} + \frac{1}{4}\cdot (1,1,\ldots ,1)^{t}.$ Then $K$ lies in the one-dimensional rational subspace defined by $x_1=x_2=\cdots =x_m$ , so any $\underline{\xi }\in K\setminus \mathbb {Q}^m$ has Dirichlet exponent $\widehat{\omega }(\underline{\xi })=1$ ; more precisely, $1\le q\le Q, \qquad \Vert q\underline{\xi }\Vert \le Q^{-1}$ has an integer solution $q$ for all $Q>1$ .", "In particular, all elements of $K$ are (very) singular as soon as $m\ge 2$ .", "So it seems reasonable to consider Cartesian products of one-dimensional objects, as in our results, to avoid obstructions of this kind."
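The obstruction just described is easy to confirm numerically: iterating the two maps of the example above along any symbol sequence keeps every point on the diagonal. A short Python check (ours, purely illustrative):

```python
import random

def random_attractor_point(m=3, iters=60, seed=0):
    """Iterate the IFS f(x) = x/2, g(x) = x/3 + (1/4)*(1,...,1) from the
    origin with a random symbol sequence (chaos-game style)."""
    random.seed(seed)
    x = [0.0] * m
    for _ in range(iters):
        if random.random() < 0.5:
            x = [xi / 2 for xi in x]
        else:
            x = [xi / 3 + 0.25 for xi in x]
    return x

for s in range(3):
    p = random_attractor_point(seed=s)
    print(p, max(p) - min(p) < 1e-12)  # all coordinates agree: on the diagonal
```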
], [ "Notation and some IFS theory", "We use $A\\asymp B$ to denote $A\\ll B\\ll A$ , with Vinogradov's notation $A\\ll B$ meaning that $A\\le cB$ for some fixed $c>0$ .", "We write $\\mathbb {P}$ for the set of prime numbers.", "We write $\\lfloor x\\rfloor $ for the largest integer less than or equal to $x\\in \\mathbb {R}$ and $\\lceil x\\rceil $ for the smallest integer larger or equal to $x$ .", "Denote as usual by $v_p(d)\\in \\mathbb {Z}$ the multiplicity of the prime $p$ in a rational number $d$ , which is negative if $p$ divides the denominator (assuming $d$ is reduced), where we let $v_p(0)=\\infty $ .", "Any IFS consisting of contracting maps $f, g$ on $\\mathbb {R}$ induces an attractor $C\\subseteq \\mathbb {R}$ defined as the set of points $\\xi = {w} (0)=\\lim _{k\\rightarrow \\infty } w_1\\circ w_2 \\circ \\cdots w_k(0),$ obtained from infinite words $ {w} =w_1w_2\\cdots $ with $w_i\\in \\lbrace f,g\\rbrace $ .", "We call $w_i$ digits and the digit string $ {w} =w_1w_2\\cdots $ an address of $\\xi \\in C$ , and omit the symbol $\\circ $ occasionally.", "Any $\\xi \\in C$ has at least one address, but possibly it is not unique.", "Further we recall that a sequence of words $( {w} _k)_{k\\ge 1}$ converges to a word $ {w} $ by definition if any finite prefix of $ {w} $ coincides with the according string of $ {w} _k$ for all large enough $k$ .", "It is easy to see that $ {w} _k\\rightarrow {w} $ implies $ {w} _k(0)\\rightarrow {w} (0)$ in $\\mathbb {R}$ .", "We write $| {w} |$ for the length of a finite word $ {w} $ .", "If $ {w} _1$ is a finite word and $ {w} _2$ any word, we just write $ {w} _1 {w} _2$ for the concetanation of the words, i.e.", "reading first the digits of $ {w} _1$ followed on the right by the digit string of $ {w} _2$ ." ], [ "Proof of Theorem ", "The proof generalizes the ideas from [14], however some technical hurdles have to be mastered.", "Start with $c\\in (0,1)$ small enough.", "The goal is to construct a Liouville vector $\\underline{\\xi }\\in \\mathbb {R}^m$ with Dirichlet constant $\\Theta (\\underline{\\xi })\\asymp c$ , see definition (REF ), where the implied constants only depend on the IFS coefficients only.", "If we choose $c$ smaller than the inverse of the implied constant for the upper estimate, then $\\underline{\\xi }$ is Dirichlet improvable, and the other claimed properties not singular and not badly approximable are obvious.", "We will first construct suitable $\\underline{\\xi }\\in K$ in § REF .", "Then we proceed to present some auxiliary results in § REF -§ REF whose proofs only require elementary number theory and basic IFS theory.", "The core of the proof in § REF , REF is to finally verify $\\Theta (\\underline{\\xi })\\asymp c$ , however given the auxiliary results this works very similar as for the special cases of missing digit fractals in [14]." 
], [ "Construction of suitable $\\underline{\\xi }$", "We essentially follow the construction for Cartesian products of missing digit fractals in [14], where reading a base $b$ digit 1 resp.", "0 here becomes reading contraction $f$ resp.", "$g$ .", "In [14] the components of $\\underline{\\xi }$ have base $b$ digit 1 at isolated positions between long strings of 0 digits.", "In our more general setup, we take blocks of $N$ consecutive digits $f$ between long strings of digits $g$ instead, for large enough $N$ depending on the IFS.", "This will be reflected in the periodic suffix words $ {p} , {q} $ defined below.", "Let $N$ be a large integer, a lower bound to be defined in § REF below.", "Define the ultimately periodic words $ {p} =f^{N}g^{\\infty }, \\qquad {q} =g^N {p} =g^N f^N g^{\\infty }.$ Let $(M_i)_{i\\ge 1}$ be any fast increasing (lacunary) sequence of positive integers.", "We define $m+1$ increasing positive integer sequences $({f}_{k})_{k\\ge 0}$ and $({g}_{j,k})_{k\\ge 0}$ , for $1\\le j\\le m$ .", "The first is simply given by $ {f}_{k}= Nk, \\qquad \\; k\\ge 0.$ For ${g}$ sequences, define the initial terms as ${g}_{j,0}=0, \\qquad 1\\le j\\le m$ and complete the sequence for $j=1$ via $ {g}_{1,k}=M_k, \\qquad k\\ge 1.$ For the remaining $j$ and $k\\ge 1$ , we choose ${g}_{j,k}$ so that they satisfy $ b_1^{ {f}_{k} } b_2^{ {g}_{j,k} } \\asymp (b_1^{ {f}_{k} } b_2^{ {g}_{1,k} } )^j, \\qquad 2\\le j\\le m-1,$ and $ b_1^{ {f}_{k} } b_2^{ {g}_{m,k} } \\asymp c^m(b_1^{ {f}_{k} } b_2^{ {g}_{1,k} } )^m,$ with some absolute implied constant depending on the IFS only but not on $k$ .", "It is easy to see this can be done, we may take $ {g}_{j,k}= \\left\\lfloor \\frac{\\log (b_1^{(j-1){f}_{k}} b_2^{j{g}_{1,k}}) }{ \\log b_2}\\right\\rfloor , \\quad (2\\le j\\le m-1), \\quad {g}_{m,k}= \\left\\lfloor \\frac{\\log (c^m b_1^{(m-1){f}_{k}} b_2^{m{g}_{1,k}}) }{ \\log b_2}\\right\\rfloor .$ Then it is easy to see that if $(M_i)_{i\\ge 1}$ increase fast enough then $\\eta _{j,k}:=\\; {g}_{j,k} - {g}_{j,k-1} > 0, \\qquad 1\\le j\\le m, \\; k\\ge 1.$ Define $m$ sequences of finite words with initial terms $ {v} _{j,0}=g^N$ for $1\\le j\\le m$ and $ {v} _{j,k}= g^N f^N g^{\\eta _{j,1}} f^N g^{\\eta _{j,2}} \\cdots g^{\\eta _{j,k-1}}f^N g^{\\eta _{j,k}}, \\qquad 1\\le j\\le m, \\; k\\ge 1.$ Define also their prefix sequences when omitting the last $N$ digits so that $ {t} _{j,0}=\\emptyset $ and $ {t} _{j,k}= g^N f^N g^{\\eta _{j,1}} f^N g^{\\eta _{j,2}} \\cdots g^{\\eta _{j,k-1}}f^N g^{\\eta _{j,k}-N}, \\qquad 1\\le j\\le m, \\; k\\ge 1.$ By the fast increase of $M_k$ we have $ | {v} _{1,k}|<| {v} _{2,k}|<\\cdots < | {v} _{m-1,k}|< | {v} _{m,k}|< | {v} _{1,k+1}|, \\qquad k\\ge 1,$ and likewise for $ {t} _{.,.", "}$ .", "Derive sequences of ultimately periodic infinite words by $ {w} _{j,k}= {t} _{j,k} {q} = {v} _{j,k} {p} = g^N f^N g^{\\eta _{j,1}} f^N g^{\\eta _{j,2}} \\cdots g^{\\eta _{j,k}} f^N g^{\\infty }, \\qquad 1\\le j\\le m,\\; k\\ge 0.$ The initial terms are $ {w} _{j,0}= {q} $ and they converge to infinite words $ {w} _{j}= g^N f^N g^{\\eta _{j,1}} f^N g^{\\eta _{j,2}} \\cdots , \\qquad 1\\le j\\le m.$ Then finally we define the components of $\\underline{\\xi }=(\\xi _1,\\ldots ,\\xi _m)$ to be $\\xi _j= {w} _j(0)= \\lim _{k\\rightarrow \\infty } {w} _{j,k}(0), \\qquad 1\\le j\\le m.$ ${Observation}$ : For $k\\ge 0$ , the integers ${f}_{k}$ resp.", "${g}_{j,k}$ count the occurrences of $f$ resp.", "$g$ in the words $ {t} _{j,k}$ .", "In particular $ | {v} _{j,k}|-N= | {t} _{j,k}|= 
{f}_{k}+ {g}_{j,k}, \\qquad 1\\le j\\le m, \\; k\\ge 0.$" ], [ "A rational in $C$ with denominator divisible by large {{formula:f8008c1b-e7f2-4784-a8a8-13407b49fdea}} powers ", "For the proof of the crucial Lemma REF in § REF below, we first require the existence of rational numbers in $C$ whose denominator is divisible by an arbitrarily large given power of $b_1b_2$ .", "We establish this auxiliary result in this section.", "Lemma 2.1 Let $C$ be induced by an IFS as in (REF ) with restriction (REF ).", "Let $\\ell \\ge 0$ be an integer.", "Then for any large enough positive integer $N$ , then writing $ {q} =g^{N}f^{N}g^{\\infty }$ as in § REF , the rational number $ {q} (0)=r/s$ in $C$ written in reduced form has the property that $b_1^{\\ell }b_2^{\\ell }$ divides $s$ .", "The proof is easy in certain cases, however for the general case we need some preparation.", "The following very elementary claim is stated for convenience of the reader.", "Proposition 2.2 If $p$ is a prime number and $e_1, e_2$ are rational numbers with $v_p(e_1)\\ne v_p(e_2)$ , then $v_p(e_1+e_2)= \\min \\lbrace v_p(e_1), v_p(e_2) \\rbrace $ .", "The claim is well-known and we omit its short proof.", "Proposition 2.3 Let $f,g$ be strict contractions on $\\mathbb {R}$ , inducing different fixed points $f^{\\infty }(0)\\ne g^{\\infty }(0)$ .", "Then for any integer $N\\ge 1$ , we have $f^Ng^{\\infty }(0)\\ne g^{\\infty }(0)$ .", "It suffices to show the claim for all large enough $N$ , as if there is equality $f^N g^{\\infty }(0)= g^{\\infty }(0)$ for some $N$ , then the equality holds for all positive integer multiples of $N$ as well.", "It is easily checked that for any element $e\\in \\mathbb {R}$ the sequence $f^N(e)$ tends to the fixed point $f^{\\infty }(0)$ of $f$ as $N\\rightarrow \\infty $ .", "Hence if the fixed points of $f, g$ are different and thus have positive distance, for large enough $N$ there is positive distance between $f^N(e)$ and the fixed point of $g$ .", "It then suffices to take $e=g^{\\infty }(0)$ the fixed point of $g$ .", "The claim can be generalized to any metric space by the same argument.", "Proposition 2.4 Let $F(x)= x/b+r/s$ be any affine contraction with rational coefficients.", "Let $u/v\\ne br/((b-1)s)$ be a rational number not equal to the fixed point of $F$ .", "Then for an arbitrarily large integer $t\\ge 0$ , for large enough $N\\ge N_0(t)$ we have that $F^N(u/v)$ is a rational number and after reduction has denominator divisible by $b^{t}$ .", "For a formal variable $x$ and an integer $N\\ge 1$ , it is easily checked that $F^N(x)= \\frac{x}{b^N}+ \\frac{r}{s}(1+b^{-1}+\\cdots +b^{-N+1}).$ Inserting $x=u/v$ and simplifying we get $ F^N(u/v)= \\frac{1}{b^N}\\cdot \\frac{ us(b-1)+rbv(b^N-1)}{vs(b-1)}.$ Denote the numerator by $H_N= us(b-1)+rbv(b^N-1).$ Take $N\\ge t$ and $p$ any prime divisor of $b$ .", "Reducing $H_N$ modulo $p^{t}$ gives residue class $us(b-1)-rbv$ , which is a constant independent of $N$ and $p$ .", "Our assumption $u/v\\ne br/((b-1)s)$ is equivalent to this constant being non-zero.", "Then $z_p:= v_p(us(b-1)-rbv)<\\infty $ is finite and we may assume without loss of generality that $t>\\max _{p|b} z_p$ , as increasing $t$ sharpens the claim we aim to prove.", "Then by Proposition REF we infer $v_p(H_N) \\le \\max _{p|b} z_p < t\\le t v_p(b),\\qquad p|b.$ Since the denominator in (REF ) in given form contains every such prime factor dividing $b$ at least $Nv_p(b)\\ge N$ times, after reduction of (REF ) there are still at least $(N-t)v_p(b)$ factors $p$ left 
in the denominator.", "Since this holds for any $p|b$ , we deduce that $b^{N-t}$ divides the reduced denominator.", "Thus it suffices to take $N=2t$ .", "Given an IFS as in (REF ), we introduce classes of prime numbers by ${P}_1= \lbrace p\in \mathbb {P}: \; p|b_1 \rbrace , \quad {P}_2= \lbrace p\in \mathbb {P}: \; p|b_2 \rbrace , \quad {P}_3={P}_1\setminus {P}_2= \lbrace p\in \mathbb {P}: \; p|b_1,\; p\nmid b_2 \rbrace ,$ and finally let $ {P}_4= {P}_1 \cup {P}_2= {P}_3 \cup {P}_2$ be the prime divisors of $b_1b_2$ .", "We can finally prove our lemma.", "We first apply Proposition REF for $t=2\ell , \qquad u/v= g^{\infty }(0), \qquad F=f, \qquad b=b_1,$ to get that for some large enough integer $N_1$ the integer $b_1^{2\ell }$ divides the denominator of the rational number $f^{N_1}g^{\infty }(0)$ after reduction.", "Write $A/B=f^{N_1}g^{\infty }(0)$ in reduced form.", "We may increase $\ell $ and hence $N_1$ if necessary so that all primes in ${P}_1$ occur in $B$ in strictly higher multiplicity than in $s_2$ .", "Thus we have $ v_p(B)\ge \max \lbrace 2\ell v_p(b_1) , v_p(s_2)+1\rbrace , \qquad p\in {P}_1.$ Moreover $f^{N_1}g^{\infty }(0)\ne g^{\infty }(0)$ by Proposition REF .", "Then we again apply Proposition REF with $t=2\ell , \qquad u/v= f^{N_1}g^{\infty }(0), \qquad F=g, \qquad b=b_2,$ which gives that for large enough $N_2$ the denominator of $r/s=g^{N_2}f^{N_1}g^{\infty }(0)$ in reduced form is divisible by $b_2^{2\ell }$ .", "In particular, we have $ v_p(s)\ge 2\ell v_p(b_2), \qquad p\in {P}_2.$ Finally let $p\in {P}_3$ .", "Then by (REF ), ${P}_3\subseteq {P}_1$ and Proposition REF applied iteratively with $e_1=\frac{r_2}{s_2}, \qquad e_2= \frac{g^{i-1}(A/B)}{b_2}$ for $1\le i\le N_2$ , corresponding to $g^{N_2}(A/B)=r/s$ , we easily see that the multiplicity of any $p\in {P}_3$ in the reduced denominator does not decrease (in fact it remains equal).", "So, in short, (REF ) implies $ v_p(s)\ge v_p(B)\ge 2\ell v_p(b_1), \qquad p\in {P}_3.$ By (REF ), combining (REF ), (REF ) we see that $v_p(s)\ge 2\ell \max \lbrace v_p(b_1), v_p(b_2)\rbrace , \qquad p\in {P}_4.$ Hence $v_p(s)\ge \ell (v_p(b_1)+v_p(b_2))= \ell v_p(b_1b_2)$ for every $p\in {P}_4$ , and the claim $(b_1^{\ell }b_2^{\ell })|s$ follows immediately.", "Finally, it is easy to see that we may increase $N_1$ or $N_2$ if necessary so that we can take $N=\max \lbrace N_1, N_2\rbrace $ ."
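The statement of the lemma can be checked experimentally for sample parameters; the following Python sketch (ours) computes $ {q} (0)=g^Nf^Ng^{\infty }(0)$ exactly, starting from the fixed point of $g$ , and extracts the power of $b_1b_2$ in the reduced denominator. The concrete parameters $b_i, r_i, s_i$ and the choice $N=12$ are our own, picked to satisfy (i).

```python
from fractions import Fraction

# Sample IFS of the rational shape above: f(x) = x/2 + 1/3, g(x) = x/6 + 1/5.
# These parameters satisfy (i), since 2/3 != 6/25.
b1, r1, s1 = 2, 1, 3
b2, r2, s2 = 6, 1, 5
f = lambda x: x / b1 + Fraction(r1, s1)
g = lambda x: x / b2 + Fraction(r2, s2)

x = Fraction(b2 * r2, (b2 - 1) * s2)   # g^infty(0), the fixed point of g
N = 12
for _ in range(N):
    x = f(x)                           # f^N g^infty(0)
for _ in range(N):
    x = g(x)                           # q(0) = g^N f^N g^infty(0)

den, ell = x.denominator, 0
while den % (b1 * b2) == 0:            # power of b1*b2 dividing the denominator
    den //= b1 * b2
    ell += 1
print(x.denominator, ell)
```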
], [ "Rational approximations to $\\xi _j$ and estimates ", "In this paragraph we define sequences of rational approximations $p_{j,k}/q_{j,k}$ to $\\xi _j$ from § REF , and estimate their denominators (in reduced form) and how close they are to $\\xi _j$ .", "We will ultimately obtain integers $q$ with small evaluations of $\\Vert q\\underline{\\xi }\\Vert $ from these sequences.", "The next easy result will be involved in our crucial Corollary REF below.", "Proposition 2.5 Let $u/v$ be any rational number.", "Let ${h}$ be any finite word on the alphabet $\\lbrace f,g\\rbrace $ with in total $h_1$ occurrences of digit $f$ and $h_2$ occurrences of digit $g$ , so that $|{h}|=h_1+h_2$ .", "Then the number ${h}(u/v)$ is rational and after reduction has denominator dividing $b_1^{h_1}b_2^{h_2}\\cdot s_1s_2v$ .", "For a formal variable $x$ , formally applying the $|{h}|$ contractions in a row, each time either $h^{(i)}=f(x)=x/b+r_1/s_1$ or $h^{(i)}=g(x)=x/b+r_2/s_2$ and simplifying to standard fraction form, an easy inductive argument yields that ${h}(x)= \\frac{x+M}{b_1^{h_1}b_2^{h_2} s_1s_2},$ for some integer $M$ , depending on the IFS and ${h}$ .", "When $x=u/v$ is rational then obviously the outcome ${h}(x)$ can be written as a rational number $(u+vM)/(b_1^{h_1}b_2^{h_2}s_1s_2v)$ , so after reduction the denominator divides $b_1^{h_1}b_2^{h_2}s_1s_2 v$ .", "The next claim directly implies that the rational numbers $ {w} _{j,k}(0)$ constructed in § REF are almost reduced in the form as obtained from formal symbolic computation.", "This is required for the proof of the lower bound for the Dirichlet constant in § REF .", "Lemma 2.6 With the assumptions of Lemma REF , choose an integer $\\ell $ satisfying $\\ell > \\max _{i=1,2} \\max _{p\\in \\mathbb {P},\\; p|b_i} \\frac{\\max \\lbrace v_p(s_1), v_p(s_2) \\rbrace }{v_p(b_i)}.$ Given such $\\ell $ , take $N$ and let $ {q} =g^Nf^N g^{\\infty }$ as in Lemma REF .", "Let ${h}$ and $h_1, h_2$ be as in Proposition REF .", "Then, the number ${h} {q} (0)\\in C$ is rational and written in reduced form has denominator divisible by $b_1^{h_1}b_2^{h_2}$ .", "We have to show that not much cancellation occurs when formally expanding ${h} {q} (0)$ as a rational number.", "Start with $ {q} (0)=r/s$ written in reduced form.", "By choice of $\\ell $ , the integer $s$ contains any prime divisor of $b_1b_2$ (i.e.", "in class ${P}_4$ from § REF ) more often than $s_1$ and $s_2$ .", "Thus a very similar argument as in the proof of Lemma REF , based on iterative application of Proposition REF , shows the following: Let ${h}=h^{(1)}h^{(2)}\\cdots h^{(|{h}|)}$ .", "If we read the digit $h^{(i)}=f$ at some position $i$ , we get another factor $b_1$ in the reduced denominator while the multiplicity of remaining prime factors of $b_2$ (not dividing $b_1$ ) cannot decrease, in fact remain equal.", "Vice versa, reading $h^{(i)}=g$ instead gives a factor $b_2$ in the reduced denominator whereas the multiplicity of remaining primes dividing $b_1$ but not $p_2$ (i.e.", "in class ${P}_3$ from § REF ) cannot decrease.", "We omit the details.", "Repeating this argument for $i=|{h}|, |{h}|-1, \\ldots , 1$ to transform $r/s$ into ${h}(r/s)={h} {q} (0)$ , and by definition of $h_1, h_2$ , the claim follows directly.", "Remark 2 The proof in fact yields that $sb_1^{h_1}b_2^{h_2}$ divides the reduced denominator.", "In the sequel we always let $r, s$ be given as in (REF ), and coprime, and $N$ as in Lemma REF .", "Derive the words $ {p} = f^Ng^{\\infty }$ and $ {q} =g^N f^N 
g^{\\infty }$ and $ {t} _{j,k}, {v} _{j,k}, {w} _{j,k}, {w} _j$ , the integers ${f}_k, {g}_{j,k}$ and finally $\\underline{\\xi }$ as in § REF .", "Definition 1 For $1\\le j\\le m, k\\ge 1$ and $i\\ge 1$ , let $\\tau _{j,k,i}\\in \\lbrace b_1^{-1},b_2^{-1}\\rbrace $ be the contraction factor induced by the $i$ -th digit $w_{j,k,i}\\in \\lbrace f,g\\rbrace $ of $ {w} _{j,k}$ .", "It follows from the observation concluding § REF that $ P_{j,k}:=\\prod _{i=1}^{| {t} _{j,k}|} \\tau _{j,k,i}^{-1}= b_1^{{f}_{k}}b_2^{{g}_{j,k} }\\in \\mathbb {Z},\\qquad 1\\le j\\le m,\\; k\\ge 1.$ Then $P_{j,k}$ corresponds to the prefix factor for the denominator of $ {w} _{j,k}(0)$ , and essentially plays the role of the integer $a_{(k-1)m+j}$ with the notation in [14] for the special case of missing digit fractals.", "For simplicity, let $P_{k}:= P_{1,k}, \\qquad P_k^{\\ast }:= P_{m,k}.$ Then (REF ), (REF ) are equivalent to $ P_{j,k}\\asymp P_{k}^{j}, \\quad (1\\le j\\le m-1),\\qquad \\quad P_{k}^{\\ast }=P_{m,k}\\asymp c^m P_{k}^{m}.$ Recall $ {w} _{j,k}= {v} _{j,k} {p} $ are ultimately periodic words, for $1\\le j\\le m, k\\ge 1$ .", "Hence $ {w} _{j,k}(0)\\in \\mathbb {Q}$ , so in the sequel we write $ {w} _{j,k}(0)= {v} _{j,k}{p}(0)= {t} _{j,k} {q} (0)= p_{j,k}/q_{j,k}$ where we assume the right hand side fractions are reduced.", "Combining Proposition REF with Lemma REF for ${h}= {t} _{j,k}$ so that ${h}{q}(0)={h}(r/s)= {w} _{j,k}(0)$ , and $u/v=r/s$ , we immediately get the following claim.", "Corollary 2.7 For $1\\le j\\le m$ and $k\\ge 1$ , we have $q_{j,k}=S_{j,k}P_{j,k},$ for integers $S_{j,k}$ dividing the constant $S:= ss_1s_2.$ In particular $q_{j,k}\\asymp P_{j,k}, \\qquad 1\\le j\\le m,\\; k\\ge 1,$ with implied constants depending on the IFS only but not on $k$ .", "The following is easy to see.", "Proposition 2.8 We have the chain of divisibility $P_{1,k}| P_{2,k}| \\cdots | P_{m,k}| P_{1,k+1}, \\qquad k\\ge 1.$ The claim is an obvious consequence of ${f}_{k}\\le {f}_{k+1}, \\qquad {g}_{1,k}\\le {g}_{2,k}\\le \\cdots \\le {g}_{m,k}\\le {g}_{1,k+1}.$ These inequalities in turn follow from (REF ), (REF ), (REF ), and the fast increase of the $M_i$ .", "The next lemma determines, up to a constant, the distance from the rational approximations $ {w} _{j,k}(0)=p_{j,k}/q_{j,k}$ to their limit $\\xi _j$ .", "Lemma 2.9 With the above notation, we have $|\\xi _j- \\frac{p_{j,k}}{q_{j,k}}| \\asymp \\prod _{i=1}^{| {v} _{j,k+1}|} \\tau _{j,k,i} \\asymp P_{j,k+1}^{-1}, \\qquad \\qquad 1\\le j\\le m,\\; k\\ge 1.$ The implied constants depend on the IFS only but not on $k$ .", "By construction, for any $1\\le j\\le m$ and $k\\ge 1$ the words $ {w} _{j,k}$ and $ {w} _{j,k+1}$ agree up to (including) position $| {v} _{j,k+1}|$ .", "Denote the suffixes of $ {w} _{j,k}$ resp.", "$ {w} _{j,k+1}$ starting from position $| {v} _{j,k+1}|+1$ by $\\sigma _{j,k}$ resp.", "$\\nu _{j,k}$ .", "By (REF ), these are given by $\\sigma _{j,k}= g^{\\infty }, \\qquad \\nu _{j,k}={p}= f^Ng^{\\infty }.$ Note that this is independent of $j, k$ , hence the distance between the associated rationals $\\sigma _{j,k}(0)$ and $\\nu _{j,k}(0)$ is constant as well.", "By Proposition REF it is non-zero, thus $ |\\sigma _{j,k}(0)-\\nu _{j,k}(0)|\\asymp 1.$ Now since the digits of $ {w} _{j,k}$ and $ {w} _{j,k+1}$ agree up to position $| {v} _{j,k+1}|$ , starting from $\\sigma _{j,k}$ and $\\nu _{j,k}$ and reading consecutively the digits at places $| {v} _{j,k+1}|, | {v} _{j,k+1}|-1, \\ldots ,1$ to arrive at $p_{j,k}/q_{j,k}$ resp.",
"$p_{j,k+1}/q_{j,k+1}$ , we read the same contractions for both numbers $p_{j,k}/q_{j,k}$ and $p_{j,k+1}/q_{j,k+1}$ .", "Hence the distance decreases by the contraction factor $1/b_1$ resp.", "$1/b_2$ in each step, depending on if we read for both $f$ or for both $g$ .", "Now by (REF ), the word $ {v} _{j,k+1}$ consists of ${f}_{k+1}$ occurrences of $f$ and ${g}_{j,k+1}+N$ occurrences of $g$ .", "Hence by (REF ) and as $N$ is fixed, the total distance between two consecutive rationals becomes $|\\frac{p_{j,k}}{q_{j,k}}-\\frac{p_{j,k+1}}{q_{j,k+1}}|&= | {w} _{j,k}(0)- {w} _{j,k+1}(0)|\\\\&=| {v} _{j,k+1}\\sigma _{j,k}(0)- {v} _{j,k+1}\\nu _{j,k}(0)| \\\\&= \\prod _{i=1}^{| {v} _{j,k+1}|} \\tau _{j,k,i}\\cdot |\\sigma _{j,k}(0)-\\nu _{j,k}(0)| \\\\&= b_1^{- {f}_{k+1} } b_2^{-{g}_{j,k+1}-N}\\cdot |\\sigma _{j,k}(0)-\\nu _{j,k}(0)|\\\\&\\asymp b_1^{- {f}_{k+1} } b_2^{-{g}_{j,k+1}-N}\\\\&\\asymp b_1^{- {f}_{k+1} } b_2^{-{g}_{j,k+1}}\\\\&=P_{j,k+1}^{-1}.$ Finally as $\\xi _j$ is the limit of $p_{j,k}/q_{j,k}$ as $k\\rightarrow \\infty $ , we conclude $|\\xi _j- \\frac{p_{j,k}}{q_{j,k}}|= \\lim _{u\\rightarrow \\infty }|\\frac{p_{j,k}}{q_{j,k}}-\\frac{p_{j,k+u}}{q_{j,k+u}}|=|\\sum _{i=k}^{\\infty } (\\frac{p_{j,i}}{q_{j,i}}-\\frac{p_{j,i+1}}{q_{j,i+1}})|\\asymp |\\frac{p_{j,k}}{q_{j,k}}-\\frac{p_{j,k+1}}{q_{j,k+1}}| \\asymp P_{j,k+1}^{-1}$ since we may assume the lengths $| {v} _{j,k}|$ grow rapidly with $k$ by choosing $(M_i)_{i\\ge 1}$ fast increasing, so the first term in the sum is dominating.", "We can now finally present the core of the proof, that is to show $\\Theta (\\underline{\\xi })\\asymp c$ for $\\underline{\\xi }$ constructed above.", "All implied constants will be understood to depend on the IFS only.", "We will assume $m\\ge 3$ in the proof below.", "Otherwise if $m=2$ we have to slightly modify the construction from § REF according to [14], concretely we may alter ${g}_{m,k}={g}_{2,k}$ from (REF ), (REF ) to $(b_1^{ {f}_{k}-{f}_{k-1} } b_2^{ {g}_{2,k}-{g}_{1,k} })^{2} \\asymp c^{-2}b_1^{ {f}_{k} } b_2^{ {g}_{2,k} }, \\qquad {g}_{2,k}=\\left\\lfloor \\frac{ \\log (c^{-2}b_1^{ (k-2)N }b_2^{2{g}_{1,k} }) }{ \\log b_2 } \\right\\rfloor ,$ where we used (REF ) in the right hand side.", "While the preparatory results from § REF , REF above are unaffected, a few modifications in § REF , REF below occur but can be handled very similar to [14], we omit the details.", "We finally assume $ \\frac{\\log M_{i+1}}{\\log M_i} \\rightarrow \\infty ,$ for the sequence $(M_i)_{i\\ge 1}$ of § REF .", "Recall $S$ defined in Corollary REF for the proof below." 
], [ "Proof $\\Theta (\\underline{\\xi })\\ll c$ and Liouville property ", "Combining (REF ) with Lemma REF , we get that $P_{k+1}$ is much larger than $P_k$ , more precisely $ P_{k}=P_{k+1}^{o(1)}, \\qquad k\\rightarrow \\infty .$ Let $Q>1$ be large.", "Let $P_0:=1$ and $k\\ge 0$ be the unique integer such that $S \\cdot P_{k}\\le Q < S \\cdot P_{k+1}.$ Case 1: $Q< S \\cdot P_k^{\\ast }$ .", "Then $Q\\ll P_k^{\\ast } \\asymp c^m P_k^{m}$ by (REF ).", "Let $q=S \\cdot P_k\\asymp P_k.$ Then $q\\cdot (p_{1,k}/q_{1,k})=S P_k\\cdot (p_{1,k}/q_{1,k})$ as well as $q\\cdot (p_{j,k-1}/q_{j,k-1})=S P_k \\cdot (p_{j,k-1}/q_{j,k-1})$ for $2\\le j\\le m$ are integers by Corollary REF (Proposition REF suffices) and Proposition REF .", "For $j=1$ , by Lemma REF and (REF ), for arbitrarily large $t>0$ and $k\\ge k_0(t)$ we get $\\Vert q\\xi _1\\Vert \\le \\Vert q\\xi _1- \\frac{ qp_{1,k}}{q_{1,k}}\\Vert \\le q|\\xi _1-\\frac{p_{1,k}}{q_{1,k}}| \\ll P_kP_{k+1}^{-1} \\ll Q^{-t}.$ For $j>1$ , we first combine Lemma REF and (REF ) to get $|\\xi _j-\\frac{p_{j,k-1}}{q_{j,k-1}}| \\asymp P_{j,k}^{-1}\\asymp P_k^{-j}, \\qquad 2\\le j\\le m-1,\\; k\\ge 1$ and for $j=m$ similarly $|\\xi _m-\\frac{p_{m,k-1}}{q_{m,k-1}}| \\asymp P_{m,k}^{-1}\\asymp c^{-m} P_k^{-m}, \\qquad k\\ge 1.$ By combining this with the bound for $Q$ , for $2\\le j\\le m-1$ we get $\\Vert q\\xi _j\\Vert \\le \\Vert q\\xi _j- \\frac{ qp_{j,k-1}}{q_{j,k-1}}\\Vert \\le q|\\xi _j-\\frac{p_{j,k-1}}{q_{j,k-1}}| \\ll P_kP_k^{-j}\\ll P_k^{-1}\\ll cQ^{-1/m},$ and similarly for $j=m$ by our assumption $m\\ge 3$ we get $\\Vert q\\xi _m\\Vert \\le \\Vert q\\xi _m- \\frac{ qp_{m,k-1}}{q_{m,k-1}}\\Vert \\le q|\\xi _m-\\frac{p_{m,k-1}}{q_{m,k-1}}| \\ll P_kc^{-m}P_k^{-m}\\ll P_k^{-1}\\ll cQ^{-1/m}.$ Thus indeed $\\Vert q\\underline{\\xi }\\Vert = \\max _{1\\le j\\le m} \\Vert q\\xi _j\\Vert \\ll cQ^{-1/m },$ as desired.", "Case 2: $Q\\ge S \\cdot P_k^{\\ast }$ .", "Here a crude estimation suffices.", "Let $q= S\\cdot P_k^{\\ast } \\asymp P_k^{\\ast }.$ Then again by Corollary REF and Proposition REF , all $q\\cdot (p_{j,k}/q_{j,k})$ for $1\\le j\\le m$ are integers.", "From (REF ) and from Lemma REF for $\\varepsilon >0$ and large indices $k\\ge k_0(\\varepsilon )$ that $\\max _{1\\le j\\le m} \\vert \\xi _j- \\frac{ p_{j,k}}{q_{j,k}}\\vert \\le P_{k+1}^{-(1-\\varepsilon /2)}$ and further by (REF ), (REF ) and since $m>1$ we conclude for arbitrary $\\varepsilon >0$ and large $Q$ that $\\Vert q\\underline{\\xi }\\Vert &\\le \\max _{1\\le j\\le m} \\Vert q\\xi _j- \\frac{ qp_{j,k}}{q_{j,k}}\\Vert \\le q \\max _{1\\le j\\le m} |\\xi _j-\\frac{p_{j,k}}{q_{j,k}}| \\\\ &\\ll P_{k}^{\\ast }\\cdot P_{k+1}^{-1+\\varepsilon /2}\\ll c^m P_k^{m}P_{k+1}^{-1+\\varepsilon /2} \\\\ &\\ll P_{k+1}^{-1+\\varepsilon }\\ll Q^{-1+\\varepsilon }\\ll cQ^{-1/m}.$ Thus indeed $\\Theta (\\underline{\\xi })\\ll c$ .", "The Liouville property follows easily from the estimates of case 2 and (REF )." 
], [ "Proof $\\Theta (\\underline{\\xi })\\gg c$", "Fix a large integer $k\\ge 1$ and define $Q=P_k^{\\ast }-1.$ We will show that for every integer $1\\le q\\le Q$ , we have $ \\Vert q\\underline{\\xi }\\Vert \\gg P_k^{-1} \\gg cQ^{-1/m},$ where the right estimate comes from (REF ).", "Once this is shown, the claim is obvious.", "By Corollary REF , we may write $q_{j,k}=P_{j,k} S_{j,k}$ for $1\\le j\\le m$ , for $S_{j,k}$ divisors of $S$ , hence again $|S_{j,k}|\\ll 1$ , and the fraction $p_{j,k}/q_{j,k}$ is reduced.", "Let $P_{0,k}:=1$ .", "Now for a given positive integer $q$ , let $h=h(q)$ be the largest integer so that $P_{h,k}|q$ , which is well-defined and by our choice of $Q$ and Proposition REF satisfies $0\\le h<m$ .", "We may then write $q= tP_{h,k}$ for $t$ an integer so that $t\\nmid (P_{h+1,k}/P_{h,k})$ , where the latter ratio is an integer by Proposition REF .", "Assume first $h<m-1$ .", "Then $q\\cdot \\frac{p_{h+1,k}}{q_{h+1,k}}= \\frac{ tp_{h+1,k}\\cdot P_{h,k} }{ S_{h+1,k}P_{h+1,k} }$ is a rational number with denominator dividing $P_{h+1,k}/P_{h,k}$ in lowest terms.", "Moreover it is not an integer by $t\\nmid (P_{h+1,k}/P_{h,k})$ and as $(p_{h+1,k},q_{h+1,k})=1$ .", "It follows from (REF ) that $\\Vert q\\cdot \\frac{p_{h+1,k}}{q_{h+1,k}}\\Vert \\ge S_{h+1,k}^{-1}\\cdot \\frac{ P_{h,k} }{ P_{h+1,k}}\\gg \\frac{ P_{h,k} }{ P_{h+1,k}}\\gg P_k^{-1}.$ Since the error term $q|\\xi _{h+1}-\\frac{p_{h+1,k}}{q_{h+1,k}}|\\ll qq_{j,k+1}^{-1}\\ll Qq_{j,k+1}^{-1}\\ll P_k^{\\ast } P_{k+1}^{-1}$ is of smaller order $o(P_k^{-1})$ by Lemma REF and (REF ), again by (REF ) we also have $\\Vert q\\underline{\\xi }\\Vert = \\max _{1\\le j\\le k} \\Vert q\\xi _j\\Vert \\ge \\Vert q \\xi _{h+1}\\Vert \\ge \\Vert q\\cdot \\frac{p_{h+1,k}}{q_{h+1,k}}\\Vert - q|\\xi _{h+1}-\\frac{p_{h+1,k}}{q_{h+1,k}}|\\gg P_k^{-1}\\gg cQ^{-1/m}.$ A similar argument applies for $h=m-1$ .", "Here again by (REF ) we even get the stronger estimate $\\Vert q\\cdot \\frac{p_{m,k}}{q_{m,k}}\\Vert = \\frac{ tP_{m-1,k} }{ S_{m,k}P_k^{\\ast }}\\ge S_{m,k}^{-1 } \\frac{P_{m-1,k}}{P_{m,k}}\\gg \\frac{P_{m-1,k}}{P_{m,k}}\\gg P_k^{-1},$ where we used that the expression is non-zero because $t< P_{m,k}/P_{m-1,k} = P_k^{\\ast }/P_{m-1,k}$ and $(p_{m,k}, q_{m,k})=1$ .", "Hence (REF ) holds and consequently $\\Theta (\\underline{\\xi })\\gg c$ ." 
], [ "Proof of Theorem ", "First consider $\\omega \\in [\\frac{1}{m}, \\frac{1}{m-1}]$ .", "We first explain the deduction in the special case of Theorem REF .", "The main twist is to redefine the length of $ {v} _{m,k}$ as $| {v} _{m,k}|= \\lfloor M_k/\\omega \\rfloor $ , where we apply the small twist $| {v} _{m,k}|= \\lfloor M_k/\\omega \\rfloor +k$ if $\\omega =\\frac{1}{m-1}$ to guarantee that $\\underline{\\xi }$ is totally irrational in this case.", "Note that (REF ) follows.", "We need to replace the asymptotics (REF ) for $j=m$ by $ P_{m,k}\\asymp P_k^{1/\\omega }.$ Doing so consistently, we get $\\widehat{\\omega }(\\underline{\\xi })\\ge \\omega $ by the method of § REF and $\\widehat{\\omega }(\\underline{\\xi })\\le \\omega $ by the method of § REF , with minor adjustments, especially using $P_k^{-1}\\asymp P_k^{\\ast -\\omega }$ , which is equivalent to (REF ).", "If the IFS is as in (REF ) in general form, the construction is precisely as above, but some twists occur in the proof.", "The preliminary results up to including Proposition REF are proved analogously, upon letting $P_{j,k}:= b_1^{{f}_{k}}b_2^{{g}_{j,k} }\\in \\mathbb {Z},\\qquad 1\\le j\\le m,\\; k\\ge 1.$ On the other hand, we now have a mismatch between $P_{j,k}$ and $\\prod \\tau _{j,k,i}^{-1}$ since the contraction factors now satisfy $\\tau _{j,k,i}\\in \\lbrace u_1b_1^{-1}, b_2^{-1} \\rbrace $ .", "Therefore in place of Lemma REF we now get $ |\\xi _j- \\frac{p_{j,k}}{q_{j,k}}| \\asymp \\prod _{i=1}^{| {v} _{j,k+1}|} \\tau _{j,k,i}\\asymp u_1^{{f}_{k+1}}P_{j,k+1}^{-1}, \\qquad 1\\le j\\le m,\\; k\\ge 1.$ However, it is easy to see from (REF ), (REF ) and (REF ) that ${f}_k\\asymp k$ and ${g}_{j,k}\\gg M_k$ , so ${f}_k= o( {g}_{j,k})$ as $k\\rightarrow \\infty $ again by (REF ), constituting the fast increase of the $M_k$ .", "Hence for large $k$ the factor $u_1^{{f}_{k+1}}$ is negligible compared to $P_{j,k+1}$ , in other words (REF ) yields $|\\xi _j- \\frac{p_{j,k}}{q_{j,k}}| = P_{j,k+1}^{-1+o(1)}, \\qquad k\\rightarrow \\infty .$ This again suffices to obtain $\\widehat{\\omega }(\\underline{\\xi })\\ge \\omega $ by the method of § REF .", "The reverse inequality is derived by the exact same line of arguments as in § REF again, replacing (REF ) by (REF ).", "Now let $\\omega =1$ .", "Start with a fast increasing (lacunary) integer sequence $(a_i)_{i\\ge 1}$ and slightly perturb it by adding $m$ absolutely bounded sequences $(v_i^{(j)})_{i\\ge 1}$ , $1\\le j\\le m$ , $|v_i^{(j)}|\\ll 1$ , to form $m$ sequences $(a_i^{(j)})_{i\\ge 1}$ , $1\\le j\\le m$ .", "Then take $\\underline{\\xi }$ whose components $\\xi _j$ have addresses $g^{a_1^{(j)}}fg^{a_2^{(j)}}fg^{a_3^{(j)}}\\cdots $ .", "The fast increase of the $a_i^{(j)}$ yields $\\widehat{\\omega }(\\underline{\\xi })\\ge 1$ by very similar arguments as in § REF via looking basically at rational approximations of the form $g^{a_1^{(j)}}fg^{a_2^{(j)}}\\cdots g^{a_n^{(j)}}fg^{\\infty }(0)$ , and the reverse estimate is trivial unless $\\underline{\\xi }\\in \\mathbb {Q}^m$ .", "By some inductive variational argument, it can be shown that certain choices of $v_i^{(j)}$ will guarantee that the arising real vector is totally irrational, similar to [11].", "See [13] and corresponding proofs for more details in the special case of missing digit Cantor sets, which readily generalize to our setting." ] ]
2210.07742
[ [ "Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set\n Active Learning" ], [ "Abstract Unlabeled data examples awaiting annotations contain open-set noise inevitably.", "A few active learning studies have attempted to deal with this open-set noise for sample selection by filtering out the noisy examples.", "However, because focusing on the purity of examples in a query set leads to overlooking the informativeness of the examples, the best balancing of purity and informativeness remains an important question.", "In this paper, to solve this purity-informativeness dilemma in open-set active learning, we propose a novel Meta-Query-Net,(MQ-Net) that adaptively finds the best balancing between the two factors.", "Specifically, by leveraging the multi-round property of active learning, we train MQ-Net using a query set without an additional validation set.", "Furthermore, a clear dominance relationship between unlabeled examples is effectively captured by MQ-Net through a novel skyline regularization.", "Extensive experiments on multiple open-set active learning scenarios demonstrate that the proposed MQ-Net achieves 20.14% improvement in terms of accuracy, compared with the state-of-the-art methods." ], [ "Introduction", "The success of deep learning in many complex tasks highly depends on the availability of massive data with well-annotated labels, which are very costly to obtain in practice [1].", "Active learning (AL) is one of the popular learning frameworks to reduce the high human-labeling cost, where a small number of maximally-informative examples are selected by a query strategy and labeled by an oracle repeatedly [2].", "Numerous query (i.e., sample selection) strategies, mainly categorized into uncertainty-based sampling [3], [4], [5] and diversity-based sampling [6], [7], [8], have succeeded in effectively reducing the labeling cost while achieving high model performance.", "Despite their success, most standard AL approaches rely on a strict assumption that all unlabeled examples should be cleanly collected from a pre-defined domain called in-distribution (IN), even before being labeled [9].", "This assumption is unrealistic in practice since the unlabeled examples are mostly collected from rather casual data curation processes such as web-crawling.", "Notably, in the Google search engine, the precision of image retrieval is reported to be 82$\\%$ on average, and it is worsened to 48$\\%$ for unpopular entities [10], [11].", "That is, such collected unlabeled data naturally involves open-set noise, which is defined as a set of the examples collected from different domains called out-of-distribution (OOD) [12].", "Figure: Motivation of MQ-Net: (a) shows the purity-informativeness dilemma for query selection in open-set AL; (b) shows the AL performances of a standard AL method (HI-focused), LL , and an open-set AL method (HP-focused), CCAL , along with our proposed MQ-Net for the ImageNet dataset with a noise ratio of 10%10\\%; (c) shows the trends with a noise ratio of 30%30\\%.In general, standard AL approaches favor the examples either highly uncertain in predictions or highly diverse in representations as a query for labeling.", "However, the addition of open-set noise makes these two measures fail to identify informative examples; the OOD examples also exhibit high uncertainty and diversity because they share neither class-distinctive features nor other inductive biases with IN examples [14], [15].", "As a result, an active learner is confused and likely to 
query the OOD examples to a human annotator for labeling.", "Human annotators would disregard the OOD examples because they are unnecessary for the target task, thereby wasting the labeling budget.", "Therefore, the problem of active learning with open-set noise, which we call open-set active learning, has emerged as a new important challenge for real-world applications.", "Recently, a few studies have attempted to deal with the open-set noise for active learning [13], [16].", "They commonly try to increase the purity of examples in a query set, which is defined as the proportion of IN examples, by effectively filtering out the OOD examples.", "However, whether focusing on the purity is needed throughout the entire training period remains a question.", "In Figure REF (a), let us consider an open-set AL task with a binary classification of cats and dogs, where the images of other animals, e.g., horses and wolves, are regarded as OOD examples.", "It is clear that the group of high purity and high informativeness (HP-HI) is the most preferable for sample selection.", "However, when comparing the group of high purity and low informativeness (HP-LI) and that of low purity and high informativeness (LP-HI), the preference between these two groups of examples is not clear, but rather contingent on the learning stage and the ratio of OOD examples.", "Thus, we coin a new term “purity-informativeness dilemma” to call attention to the best balancing of purity and informativeness.", "Figures REF (b) and REF (c) illustrate the purity-informativeness dilemma.", "The standard AL approach, LL [5], puts more weight on the examples of high informativeness (denoted as HI-focused), while the existing open-set AL approach, CCAL [13], puts more weight on those of high purity (denoted as HP-focused).", "The HP-focused approach improves the test accuracy more significantly than the HI-focused one at earlier AL rounds, meaning that pure and easy examples are more beneficial early on.", "In contrast, the HI-focused approach beats the HP-focused one at later AL rounds, meaning that highly informative examples should be selected even at the expense of purity.", "Furthermore, comparing a low OOD (noise) ratio in Figure REF (b) and a high OOD ratio in Figure REF (c), the shift from HP-dominance to HI-dominance tends to occur later at a higher OOD ratio, which renders this dilemma more difficult.", "In this paper, to solve the purity-informativeness dilemma in open-set AL, we propose a novel meta-model Meta-Query-Net (MQ-Net) that adaptively finds the best balancing between the two factors.", "A key challenge is that the best balancing is unknown in advance.", "The meta-model is trained to assign higher priority to in-distribution examples than to OOD examples, as well as to more informative examples among the in-distribution ones.", "The input to the meta-model, which includes the target and OOD labels, is obtained for free from each AL round's query set by leveraging the multi-round property of AL.", "Moreover, the meta-model is optimized more stably through a novel regularization inspired by the skyline query [17], [18] popularly used in multi-objective optimization.", "As a result, MQ-Net can guide the learning of the target model by providing the best balancing between purity and informativeness throughout the entire training period.", "Overall, our main contributions are summarized as follows: We formulate the purity-informativeness dilemma, which hinders the usability of open-set AL in real-world applications.", "As our answer
to the dilemma, we propose a novel AL framework, MQ-Net, which keeps finding the best trade-off between purity and informativeness.", "Extensive experiments on CIFAR10, CIFAR100, and ImageNet show that MQ-Net consistently improves the classifier accuracy, by up to $20.14\\%$ , as the OOD ratio changes from $10\\%$ to $60\\%$ ." ], [ "Active Learning and Open-set Recognition", " Active Learning is a learning framework to reduce the human labeling cost by finding the most informative examples given unlabeled data [9].", "One popular direction is uncertainty-based sampling.", "Typical approaches have exploited prediction probability, e.g., softmax confidence [19], [3], margin [20], and entropy [21].", "Some approaches obtain uncertainty by Monte Carlo Dropout on multiple forward passes [22], [23], [24].", "LL [5] predicts the loss of examples by jointly learning a loss prediction module with a target model.", "Meanwhile, diversity-based sampling has also been widely studied.", "To incorporate diversity, most methods use a clustering [6], [25] or coreset selection algorithm [7].", "Notably, CoreSet [7] finds the set of examples having the highest distance coverage on the entire unlabeled data.", "BADGE [8] is a hybrid of uncertainty- and diversity-based sampling which uses $k$ -means++ clustering in the gradient embedding space.", "However, this family of approaches is not appropriate for open-set AL since it does not consider how to handle the OOD examples for query selection.", "Open-set Recognition (OSR) is a detection task to recognize the examples outside of the target domain [12].", "Closely related to this purpose, OOD detection has been actively studied [26].", "Recent work can be categorized into classifier-dependent, density-based, and self-supervised approaches.", "The classifier-dependent approach leverages a pre-trained classifier and introduces several scoring functions, such as Uncertainty [27], ODIN [28], Mahalanobis distance (MD) [29], and Energy [30].", "Recently, ReAct [31] shows that rectifying penultimate activations can enhance most of the aforementioned classifier-dependent OOD scores.", "The density-based approach learns an auxiliary generative model like a variational auto-encoder to compute likelihood-based OOD scores [32], [33], [34].", "Most self-supervised approaches leverage contrastive learning [35], [36], [37].", "CSI shows that contrasting with distributionally-shifted augmentations can considerably enhance the OSR performance [35].", "The OSR performance of classifier-dependent approaches degrades significantly if the classifier performs poorly [38].", "Similarly, the performance of density-based and self-supervised approaches heavily depends on the amount of clean IN data [34], [35].", "Therefore, open-set active learning cannot be resolved by simply applying these OSR approaches, since it is difficult to obtain high-quality classifiers and sufficient IN data at early AL rounds."
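For concreteness, a minimal sketch of one of the classifier-dependent scores listed above, the energy score of [30]; the temperature $T$ is a standard knob and $T=1$ is the common default.

```python
import torch

# A minimal sketch of the energy-based OOD score from [30]: the negative free
# energy of the classifier logits, -E(x; f) = T * logsumexp(f(x)/T).
# Larger values indicate in-distribution; T is a temperature (default T = 1).
def energy_score(logits: torch.Tensor, T: float = 1.0) -> torch.Tensor:
    return T * torch.logsumexp(logits / T, dim=-1)
```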
], [ "Open-set Active learning", " Two recent approaches have attempted to handle the open-set noise for AL [13], [16].", "Both approaches try to increase purity in query selection by effectively filtering out the OOD examples.", "CCAL [13] learns two contrastive coding models each for calculating informativeness and OODness of an example, and combines the two scores using a heuristic balancing rule.", "SIMILAR [16] selects a pure and core set of examples that maximize the distance on the entire unlabeled data while minimizing the distance to the identified OOD data.", "However, we found that CCAL and SIMILAR are often worse than standard AL methods, since they always put higher weights on purity although informativeness should be emphasized when the open-set noise ratio is small or in later AL rounds.", "This calls for developing a new solution to carefully find the best balance between purity and informativeness." ], [ "Problem Statement", " Let $\\mathcal {D}_{IN}$ and $\\mathcal {D}_{OOD}$ be the IN and OOD data distributions, where the label of examples from $\\mathcal {D}_{OOD}$ does not belong to any of the $k$ known labels $Y=\\lbrace y_i\\rbrace _{i=1}^{k}$ .", "Then, an unlabeled set is a mixture of IN and OOD examples, ${U}=\\lbrace {X}_{IN}, {X}_{OOD}\\rbrace $ , i.e., ${X}_{IN} \\sim \\mathcal {D}_{IN}$ and ${X}_{OOD} \\sim \\mathcal {D}_{OOD}$ .", "In the open-set AL, a human oracle is requested to assign a known label $y$ to an IN example $x \\in {X}_{IN}$ with a labeling cost $c_{IN}$ , while an OOD example $x \\in {X}_{OOD}$ is marked as open-set noise with a labeling cost $c_{OOD}$ .", "AL imposes restrictions on the labeling budget $b$ every round.", "It starts with a small labeled set ${S}_{L}$ , consisting of both labeled IN and OOD examples.", "The initial labeled set ${S}_L$ improves by adding a small but maximally-informative labeled query set ${S}_{Q}$ per round, i.e., ${S}_L\\!\\leftarrow \\!", "{S}_L\\!\\cup \\!", "{S}_{Q}$ , where the labeling cost for ${S}_{Q}$ by the oracle does not exceed the labeling budget $b$ .", "Hence, the goal of open-set AL is defined to construct the optimal query set ${S}_{Q}^{*}$ , minimizing the loss for the unseen target IN data.", "The difference from standard AL is that the labeling cost for OOD examples is introduced, where the labeling budget is wasted when OOD examples are misclassified as informative ones.", "Formally, let $C(\\cdot )$ be the labeling cost function for a given unlabeled set; then, each round of open-set AL is formulated to find the best query set ${S}_{Q}^{*}$ as ${\\begin{array}{c}{S}_Q^{*} = \\operatornamewithlimits{argmin}_{{S}_Q\\!", ":\\ C({S}_Q)\\le b} ~\\mathbb {E}_{(x,y) \\in {T}_{IN}} \\Big [ \\ell _{cls} \\big ( f(x; \\Theta _{{S_L}\\cup {S}_Q}), y \\big ) \\Big ], \\\\[-0.2pt]\\text{where}~~C({S}_Q)=\\sum _{x \\in {S}_Q} \\big ( \\mathbb {1}_{[x \\in {X}_{IN}]} c_{IN}+ \\mathbb {1}_{[x \\in {{X}_{OOD}}]} c_{OOD} \\big ).\\end{array}}$ Here, $f(\\cdot ;\\Theta _{{S}_{L}\\cup {S}_{Q}})$ denotes the target model trained on only IN examples in ${S}_{L}\\cup {S}_{Q}$ , and $\\ell _{cls}$ is a certain loss function, e.g., cross-entropy, for classification.", "For each AL round, all the examples in ${S}_{Q}^{*}$ are removed from the unlabeled set ${U}$ and then added to the accumulated labeled set ${S}_{L}$ with their labels.", "This procedure repeats for the total number $r$ of rounds." 
], [ "Purity-Informativeness Dilemma", " An ideal approach for open-set AL would be to increase both purity and informativeness of a query set by completely suppressing the selection of OOD examples and accurately querying the most informative examples among the remaining IN examples.", "However, the ideal approach is infeasible because overly emphasizing purity in query selection does not promote example informativeness and vice versa.", "Specifically, OOD examples with low purity scores mostly exhibit high informativeness scores because they share neither class-distinctive features nor other inductive biases with the IN examples [14], [15].", "We call this trade-off in query selection as the purity-informativeness dilemma, which is our new finding expected to trigger a lot of subsequent work.", "To address this dilemma, we need to consider the proper weights of a purity score and an informative score when they are combined.", "Let $\\mathcal {P}(x)$ be a purity score of an example $x$ which can be measured by any existing OOD scores, e.g., negative energy [30], and $\\mathcal {I}(x)$ be an informativeness score of an example $x$ from any standard AL strategies, e.g., uncertainty [3] and diversity [25].", "Next, supposing $z_{x}=\\langle \\mathcal {P}(x), \\mathcal {I}(x) \\rangle $ is a tuple of available purity and informativeness scores for an example $x$ .", "Then, a score combination function $\\Phi (z_{x})$ , where $z_{x}=\\langle \\mathcal {P}(x), \\mathcal {I}(x) \\rangle $ , is defined to return an overall score that indicates the necessity of $x$ being included in the query set.", "Given two unlabeled examples $x_i$ and $x_j$ , if $\\mathcal {P}(x_i)>\\mathcal {P}(x_j)$ and $\\mathcal {I}(x_i)>\\mathcal {I}(x_j)$ , it is clear to favor $x_i$ over $x_j$ based on $\\Phi (z_{x_i})>\\Phi (z_{x_j})$ .", "However, due to the purity-informativeness dilemma, if $\\mathcal {P}(x_i)>\\mathcal {P}(x_j)$ and $\\mathcal {I}(x_i)<\\mathcal {I}(x_j)$ or $\\mathcal {P}(x_i)<\\mathcal {P}(x_j)$ and $\\mathcal {I}(x_i)>\\mathcal {I}(x_j)$ , it is very challenging to determine the dominance between $\\Phi (z_{x_i})$ and $\\Phi (z_{x_j})$ .", "In order to design $\\Phi (\\cdot )$ , we mainly focus on leveraging meta-learning, which is a more agnostic approach to resolve the dilemma other than several heuristic approaches, such as linear combination and multiplication." ], [ "Meta-Query-Net", " We propose a meta-model, named Meta-Query-Net (MQ-Net), which aims to learn a meta-score function for the purpose of identifying a query set.", "In the presence of open-set noise, MQ-Net outputs the meta-score for unlabeled examples to achieve the best balance between purity and informativeness in the selected query set.", "In this section, we introduce the notion of a self-validation set to guide the meta-model in a supervised manner and then demonstrate the meta-objective of MQ-Net for training.", "Then, we propose a novel skyline constraint used in optimization, which helps MQ-Net capture the obvious preference among unlabeled examples when a clear dominance exists.", "Next, we present a way of converting the purity and informativeness scores estimated by existing methods for use in MQ-Net.", "Note that training MQ-Net is not expensive because it builds a light meta-model on a small self-validation set.", "The overview of MQ-Net is illustrated in Figure REF .", "Figure: Overview of MQ-Net." 
], [ "Training Objective with Self-validation Set", "The parameters $\\textbf {w}$ contained in MQ-Net $\\Phi (\\cdot ;\\textbf {w})$ is optimized in a supervised manner.", "For clean supervision, validation data is required for training.", "Without assuming a hard-to-obtain clean validation set, we propose to use a self-validation set, which is instantaneously generated in every AL round.", "In detail, we obtain a labeled query set $S_Q$ by the oracle, consisting of a labeled IN set and an identified OOD set in every round.", "Since the query set $S_Q$ is unseen for the target model $\\Theta $ and the meta-model $\\textbf {w}$ at the current round, we can exploit it as a self-validation set to train MQ-Net.", "This self-validation set eliminates the need for a clean validation set in meta-learning.", "Given the ground-truth labels in the self-validation set, it is feasible to guide MQ-Net to be trained to resolve the purity-informativeness dilemma by designing an appropriate meta-objective.", "It is based on the cross-entropy loss for classification because the loss value of training examples has been proven to be effective in identifying high informativeness examples [5].", "The conventional loss value by a target model $\\Theta $ is masked to be zero if $x \\in X_{OOD}$ since OOD examples are useless for AL, $\\ell _{mce}(x) = \\mathbb {1}_{[l_x = 1]}\\ell _{ce}\\big (f(x; \\Theta ), y\\big ),$ where $l$ is a true binary IN label, i.e., 1 for IN examples and 0 for OOD examples, which can be reliably obtained from the self-validation set.", "This masked loss, $\\ell _{mce}$ , preserves the informativeness of IN examples while excluding OOD examples.", "Given a self-validation data $S_Q$ , the meta-objective is defined such that MQ-Net parameterized by $\\textbf {w}$ outputs a high (or low) meta-score $\\Phi (z_{x}; \\textbf {w})$ if an example $x$ 's masked loss value is large (or small), ${\\begin{array}{c}\\!\\!\\!\\!\\!\\!\\mathcal {L}(S_Q) \\!=\\!\\sum _{i\\in S_Q} \\sum _{j\\in S_Q} \\max \\!\\Big (0, - \\text{Sign}\\big (\\ell _{mce}(x_i),\\ell _{mce}(x_j)\\big ) \\cdot \\big (\\Phi (z_{x_i}; \\textbf {w})-\\Phi (z_{x_j}; \\textbf {w})+ \\eta \\big ) \\Big ) \\\\s.t.", "~~\\forall x_i, x_j,~~ \\mbox{if} ~~ \\mathcal {P}(x_i)>\\mathcal {P}(x_j) ~~ \\mbox{and} ~~ \\mathcal {I}(x_i)>\\mathcal {I}(x_j),~~ \\mbox{then} ~~\\Phi (z_{x_i}; \\textbf {w})>\\Phi (z_{x_j}; \\textbf {w}),\\end{array}}$ where $\\eta >0$ is a constant margin for the ranking loss, and $\\text{Sign}(a, b)$ is an indicator function that returns $+1$ if $a>b$ , 0 if $a=b$ , and $-1$ otherwise.", "Hence, $\\Phi (z_{x_i}; \\textbf {w})$ is forced to be higher than $\\Phi (z_{x_j}; \\textbf {w})$ if $\\ell _{mce}(x_i) > \\ell _{mce}(x_j)$ ; in contrast, $\\Phi (z_{x_i}; \\textbf {w})$ is forced to be lower than $\\Phi (z_{x_j}; \\textbf {w})$ if $\\ell _{mce}(x_i) < \\ell _{mce}(x_j)$ .", "Two OOD examples do not affect the optimization because they do not have any priority between them, i.e., $\\ell _{mce}(x_i) = \\ell _{mce}(x_j)$ .", "In addition to the ranking loss, we add a regularization term named the skyline constraint (i.e., the second line) in the meta-objective Eq.", "(REF ), which is inspired by the skyline query which aims to narrow down a search space in a large-scale database by keeping only those items that are not worse than any other [17], [18].", "Specifically, in the case of $\\mathcal {P}(x_i)>\\mathcal {P}(x_j)$ and $\\mathcal {I}(x_i)>\\mathcal {I}(x_j)$ , the condition $\\Phi (z_{x_i}; \\textbf 
{w}) > \\Phi (z_{x_j}; \\textbf {w})$ must hold in our objective, and hence we impose this proposition as the skyline constraint.", "This simple yet intuitive regularization is very helpful for achieving a meta-model that better judges the importance of purity or informativeness.", "We provide an ablation study on the skyline constraint in Section REF ." ], [ "Architecture of MQ-Net", "MQ-Net is parameterized by a multi-layer perceptron (MLP), a widely-used deep learning architecture for meta-learning [39].", "A challenge here is that the proposed skyline constraint in Eq.", "(REF ) does not hold with a standard MLP model.", "To satisfy the skyline constraint, the meta-score function $\\Phi (\\cdot ; \\textbf {w})$ should be a monotonic non-decreasing function because the output (meta-score) of MQ-Net for an example $x_i$ must be higher than that for another example $x_j$ if the two factors (purity and informativeness) of $x_i$ are both higher than those of $x_j$ .", "The MLP model consists of multiple matrix multiplications with non-linear activation functions such as ReLU and Sigmoid.", "In order for the MLP model to be monotonically non-decreasing, all the parameters in $\\textbf {w}$ for $\\Phi (\\cdot ; \\textbf {w})$ should be non-negative, as proven by Theorem REF .", "Theorem 4.1 For any MLP meta-model $\\textbf {w}$ with non-decreasing activation functions, a meta-score function $\\Phi (z; \\textbf {w})\\!: \\mathbb {R}^d \\rightarrow \\mathbb {R}$ satisfies the skyline constraint if $\\textbf {w}\\succeq 0$ and $z (\\in \\mathbb {R}^d) \\succeq 0$ , where $\\succeq $ is the component-wise inequality.", "An MLP model is involved with matrix multiplication and composition with activation functions, which are characterized by three basic operators: (1) addition: $h(z)=f(z)+g(z)$ , (2) multiplication: $h(z)=f(z) \\times g(z)$ , and (3) composition: $h(z)=f\\circ g(z)$ .", "These three operators are guaranteed to be non-decreasing if the parameters of the MLP model are all non-negative, because non-negative weights make every decomposed scalar operation in the MLP non-decreasing.", "Combining the three operators, the MLP model $\\Phi (z;\\textbf {w})$ , where $\\textbf {w}\\succeq 0$ , naturally becomes a monotonic non-decreasing function for each input dimension.", "Refer to Appendix for the complete proof.", "In implementation, non-negative weights are guaranteed by applying a ReLU function to meta-model parameters.", "Since the ReLU function is differentiable, MQ-Net can be trained with the proposed objective in an end-to-end manner.", "With this simple modification, the skyline constraint is preserved successfully without introducing any complex loss-based regularization term.", "The only remaining condition is that each input of MQ-Net must be a vector of non-negative entries."
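A minimal PyTorch sketch of this construction, assuming the 2-layer MLP with hidden dimension 64 used in the experiments; applying ReLU to the raw parameters before each affine map enforces the non-negativity required by Theorem 4.1, and the pairwise loss mirrors the ranking term of the meta-objective with a margin `eta` whose value is an assumption here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of MQ-Net as a monotone 2-layer MLP: ReLU is applied to the raw
# parameters before each affine map, so the effective weights and biases are
# non-negative and, by Theorem 4.1, the meta-score satisfies the skyline
# constraint for non-negative inputs z = <P(x), I(x)>.
class MQNet(nn.Module):
    def __init__(self, in_dim=2, hidden=64):
        super().__init__()
        self.w1 = nn.Parameter(torch.rand(hidden, in_dim))
        self.b1 = nn.Parameter(torch.rand(hidden))
        self.w2 = nn.Parameter(torch.rand(1, hidden))
        self.b2 = nn.Parameter(torch.rand(1))

    def forward(self, z):          # z: (batch, 2) with non-negative entries
        h = F.relu(F.linear(z, F.relu(self.w1), F.relu(self.b1)))
        return F.linear(h, F.relu(self.w2), F.relu(self.b2)).squeeze(-1)

def meta_ranking_loss(phi, masked_loss, eta=0.1):
    # Pairwise ranking term of the meta-objective: penalize pairs whose
    # meta-score order disagrees with the masked-loss order; OOD-OOD pairs
    # contribute nothing because Sign(0, 0) = 0.
    sign = torch.sign(masked_loss[:, None] - masked_loss[None, :])
    diff = phi[:, None] - phi[None, :]
    return F.relu(-sign * (diff + eta)).sum()
```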
], [ "Meta-input Conversion", " MQ-Net receives $z_x = \\langle \\mathcal {P}(x), \\mathcal {I}(x) \\rangle $ and then returns a meta-score for query selection.", "All the scores for the input of MQ-Net should be positive to preserve the skyline constraint, i.e., $z \\succeq 0$ .", "Existing OOD and AL query scores are converted to the meta-input.", "The methods used for calculating the scores are orthogonal to our framework.", "The OOD score $\\mathcal {O}(\\cdot )$ is conceptually the opposite of purity and varies in its scale; hence, we convert it to a purity score by $\\mathcal {P}(x)={\\rm Exp}({\\rm Normalize}(-\\mathcal {O}(x)))$ , where ${\\rm Normalize}(\\cdot )$ is the z-score normalization.", "This conversion guarantees the purity score to be positive.", "Similarly, for the informativeness score, we convert an existing AL query score $\\mathcal {Q}(\\cdot )$ to $\\mathcal {I}(x)={\\rm Exp}({\\rm Normalize}(\\mathcal {Q}(x)))$ .", "For the z-score normalization, we compute the mean and standard deviation of $\\mathcal {O}(x)$ or $\\mathcal {Q}(x)$ over the unlabeled examples.", "Such mean and standard deviation are iteratively computed before the meta-training, and used for the z-score normalization at that round." ], [ "Overall Procedure", " For each AL round, a target model is trained via stochastic gradient descent (SGD) on mini-batches sampled from the IN examples in the current labeled set $S_L$ .", "Based on the current target model, the purity and informative scores are computed by using certain OOD and AL query scores.", "The querying phase is then performed by selecting the examples $S_{Q}$ with the highest meta-scores within the labeling budget $b$ .", "The query set $S_Q$ is used as the self-validation set for training MQ-Net at the current AL round.", "The trained MQ-Net is used at the next AL round.", "The alternating procedure of updating the target model and the meta-model repeats for a given number $r$ of AL rounds.", "The pseudocode of MQ-Net can be found in Appendix ." 
], [ "Experiment Setting", "Datasets.", "We perform the active learning task on three benchmark datasets; CIFAR10 [40], CIFAR100 [40], and ImageNet [41].", "Following the `split-dataset' setup in open-world learning literature [13], [16], [42], we divide each dataset into two subsets: (1) the target set with IN classes and (2) the noise set with OOD classes.", "Specifically, CIFAR10 is split into the target set with four classes and the noise set with the rest six classes; CIFAR100 into the two sets with 40 and 60 classes; and ImageNet into the two sets with 50 and 950 classes.", "The entire target set is used as the unlabeled IN data, while only a part of classes in the noise set is selected as the unlabeled OOD data according to the given noise ratio.", "In addition, following OOD detection literature [27], [32], we also consider the `cross-dataset' setup, which mixes a certain dataset with two external OOD datasets collected from different domains, such as LSUN [43] and Places365 [44].", "For sake of space, we present all the results on the cross-dataset setup in Appendix .", "Algorithms.", "We compare MQ-Net with a random selection, four standard AL, and two recent open-set AL approaches.", "[leftmargin=9pt, noitemsep] Standard AL: The four methods perform AL without any processing for open-set noise: (1) CONF [3] queries the most uncertain examples with the lowest softmax confidence in the prediction, (2) CORESET [7] queries the most diverse examples with the highest coverage in the representation space, (3) LL [5] queries the examples having the largest predicted loss by jointly learning a loss prediction module, and (4) BADGE [8] considers both uncertainty and diversity by querying the most representative examples in the gradient via $k$ -means++ clustering [45].", "Open-set AL: The two methods tend to put more weight on the examples with high purity: (1) CCAL [13] learns two contrastive coding models for calculating informativeness and OODness, and then it combines the two scores into one using a heuristic balancing rule, and (2) SIMILAR [16] selects a pure and core set of examples that maximize the distance coverage on the entire unlabeled data while minimizing the distance coverage to the already labeled OOD data.", "For all the experiments, regarding the two inputs of MQ-Net, we mainly use CSI [35] and LL [5] for calculating the purity and informativeness scores, respectively.", "For CSI, as in CCAL, we train a contrastive learner on the entire unlabeled set with open-set noise since the clean in-distribution set is not available in open-set AL.", "The ablation study in Section REF shows that MQ-Net is also effective with other OOD and AL scores as its input.", "Implementation Details.", "We repeat the three steps—training, querying, and labeling—of AL.", "The total number $r$ of rounds is set to 10.", "Following the prior open-set AL setup [13], [16], we set the labeling cost $c_{IN}=1$ for IN examples and $c_{OOD}=1$ for OOD examples.", "For the class-split setup, the labeling budget $b$ per round is set to 500 for CIFAR10/100 and $1,000$ for ImageNet.", "Regarding the open-set noise ratio $\\tau $ , we configure four different levels from light to heavy noise in $\\lbrace 10\\%, 20\\%, 40\\%, 60\\% \\rbrace $ .", "In the case of $\\tau =0\\%$ (no noise), MQ-Net naturally discards the purity score and only uses the informativeness score for query selection, since the self-validation set does not contain any OOD examples.", "The initial labeled set is randomly selected 
uniformly at random from the entire unlabeled set within the labeling budget $b$ .", "For the architecture of MQ-Net, we use a 2-layer MLP with a hidden dimension size of 64 and the ReLU activation function.", "We report the average results of five runs with different class splits.", "We did not use any pre-trained networks.", "See Appendix for more implementation details with training configurations.", "All methods are implemented with PyTorch 1.8.0 and executed on a single NVIDIA Tesla V100 GPU.", "The code is available at https://github.com/kaist-dmlab/MQNet.", "Figure: Test accuracy over AL rounds for CIFAR10, CIFAR100, and ImageNet with varying open-set noise ratios." ], [ "Results over AL Rounds", " Figure REF illustrates the test accuracy of the target model over AL rounds on the two CIFAR datasets.", "MQ-Net achieves the highest test accuracy in most AL rounds, thereby reaching the best test accuracy at the final round in every case for various datasets and noise ratios.", "Compared with the two existing open-set AL methods, CCAL and SIMILAR, MQ-Net shows a steeper improvement in test accuracy over rounds by resolving the purity-informativeness dilemma in query selection.", "For example, the performance gap between MQ-Net and the two open-set AL methods gets larger after the sixth round, as shown in Figure REF (b), because CCAL and SIMILAR mainly depend on purity in query selection, which supplies less informative examples to the classifier.", "For a better classifier, informative examples should be favored at later AL rounds because the labeled set already contains a sufficient number of IN examples.", "In contrast, MQ-Net keeps improving the test accuracy even in later AL rounds by finding the best balancing between purity and informativeness in its query set.", "More analysis of MQ-Net associated with the purity-informativeness dilemma is discussed in Section REF .", "Table: Last test accuracy ($\\%$) at the final round for CIFAR10, CIFAR100, and ImageNet.", "The best results are in bold, and the second best results are underlined."
], [ "Results with Varying Noise Ratios", " Table REF summarizes the last test accuracy at the final AL round for three datasets with varying levels of open-set noise.", "Overall, the last test accuracy of MQ-Net is the best in every case.", "This superiority concludes that MQ-Net successfully finds the best trade-off between purity and informativeness in terms of AL accuracy regardless of the noise ratio.", "In general, the performance improvement becomes larger with the increase in the noise ratio.", "On the other hand, the two open-set AL approaches are even worse than the four standard AL approaches when the noise ratio is less than or equal to 20$\\%$ .", "Especially, in CIFAR10 relatively easier than others, CCAL and SIMILAR are inferior to the non-robust AL method, LL, even with 40$\\%$ noise.", "This trend confirms that increasing informativeness is more crucial than increasing purity when the noise ratio is small; highly informative examples are still beneficial when the performance of a classifier is saturated in the presence of open-set noise.", "An in-depth analysis on the low accuracy of the existing open-set AL approaches in a low noise ratio is presented in Appendix .", "Figure: Visualization of the query score distribution of MQ-Net on CIFAR100.", "xx- and yy-axis indicate the normalized informativeness and purity scores, respectively.", "The background color represents the query score of MQ-Net; the red is high, and the blue is low.", "Gray points represent unlabeled data, and blue and red points are the IN and OOD examples in the query set, respectively.", "The slope of the tangent line on the lowest scored example in the query set is displayed together; the steeper the slope, the more informativeness is emphasized in query selection." ], [ "Answers to the Purity-Informativeness Dilemma", " The high robustness of MQ-Net in Table REF and Figure REF is mainly attributed to its ability to keep finding the best trade-off between purity and informativeness.", "Figure REF (a) illustrates the preference change of MQ-Net between purity and informativeness throughout the AL rounds.", "As the round progresses, MQ-Net automatically raises the importance of informativeness rather than purity; the slope of the tangent line keeps steepening from $-0.74$ to $-1.21$ .", "This trend implies that more informative examples are required to be labeled when the target classifier becomes mature.", "That is, as the model performance increases, ‘fewer but highly-informative’ examples are more impactful than ‘more but less-informative’ examples in terms of improving the model performance.", "Figure REF (b) describes the preference change of MQ-Net with varying noise ratios.", "Contrary to the trend over AL rounds, as the noise ratio gets higher, MQ-Net prefers purity more over informativeness.", "Table: Efficacy of the skyline constraint." 
], [ "Ablation Studies", "Various Combination of Meta-input.", "MQ-Net can design its purity and informativeness scores by leveraging diverse metrics in the existing OOD detection and AL literature.", "Table REF shows the final round test accuracy on CIFAR10 for the four variants of score combinations, each of which is constructed by a combination of two purity scores and two informativeness scores; each purity score is induced by the two recent OOD detection methods, ReAct [31] and CSI [35], while each informativeness score is converted from the two existing AL methods, CONF and LL.", "“CONF-ReAct” denotes a variant that uses ReAct as the purity score and CONF as the informativeness score.", "Overall, all variants perform better than standard and open-set AL baselines in every noise level.", "Refer to Table REF for detailed comparison.", "This result concludes that MQ-Net can be generalized over different types of meta-input owing to the learning flexibility of MLPs.", "Interestingly, the variant using CSI as the purity score is consistently better than those using ReAct.", "ReAct, a classifier-dependent OOD score, performs poorly in earlier AL rounds.", "A detailed analysis of the two OOD detectors, ReAct and CSI, over AL rounds can be found in Appendix .", "Efficacy of Self-validation Set.", "MQ-Net can be trained with an independent validation set, instead of using the proposed self-validation set.", "We generate the independent validation set by randomly sampling the same number of examples as the self-validation set with their ground-truth labels from the entire data not overlapped with the unlabeled set used for AL.", "As can be seen from Table REF , it is of interest to see that our self-validation set performs better than the random validation set.", "The two validation sets have a major difference in data distributions; the self-validation set mainly consists of the examples with highest meta-scores among the remaining unlabeled data per round, while the random validation set consists of random examples.", "We conclude that the meta-score of MQ-Net has the potential for constructing a high-quality validation set in addition to query selection.", "Efficacy of Skyline Constraint.", "Table REF demonstrates the final round test accuracy of MQ-Net with or without the skyline constraint.", "For the latter, a standard 2-layer MLP is used as the meta-network architecture without any modification.", "The performance of MQ-Net degrades significantly without the skyline constraint, meaning that the non-constrained MLP can easily overfit to the small-sized self-validation set, thereby assigning high output scores on less-pure and less-informative examples.", "Therefore, the violation of the skyline constraint in optimization makes MQ-Net hard to balance between the purity and informativeness scores in query selection.", "Efficacy of Meta-objective.", "MQ-Net keeps finding the best balance between purity and informativeness over multiple AL rounds by repeatedly minimizing the meta-objective in Eq.", "(REF ).", "To validate its efficacy, we compare it with two simple alternatives based on heuristic balancing rules such as linear combination and multiplication, denoted as $\\mathcal {P}(x)+\\mathcal {I}(x)$ and $\\mathcal {P}(x)\\cdot \\mathcal {I}(x)$ , respectively.", "Following the default setting of MQ-Net, we use LL for $\\mathcal {P}(x)$ and CSI for $\\mathcal {I}(x)$ .", "Table REF shows the AL performance of the two alternatives and MQ-Net for the split-dataset setup on CIFAR10 with the 
noise ratios of $20\\%$ and $40\\%$ .", "MQ-Net beats the two alternatives after the second AL round, where MQ-Net starts balancing purity and informativeness with its meta-objective.", "This result implies that our meta-objective successfully finds the best balance between purity and informativeness by emphasizing informativeness over purity at the later AL rounds.", "Table: Efficacy of the meta-objective in MQ-Net.", "We show the AL performance of two alternative balancing rules compared with MQ-Net for the split-dataset setup on CIFAR10 with the open-set noise ratios of $20\\%$ and $40\\%$ .", "Table: Effect of varying the labeling cost $c_{OOD}$ .", "We measure the last test accuracy for the split-dataset setup on CIFAR10 with an open-set noise ratio of $40\\%$ .", "The best values are in bold." ], [ "Effect of Varying OOD Labeling Cost", "The labeling cost for OOD examples could vary across data domains.", "To validate the robustness of MQ-Net under diverse labeling scenarios, we conduct an additional study of adjusting the labeling cost $c_{OOD}$ for the OOD examples.", "Table REF summarizes the performance change with four different labeling costs (i.e., 0.5, 1, 2, and 4).", "The two standard AL methods, CONF and CORESET, and two open-set AL methods, CCAL and SIMILAR, are compared with MQ-Net.", "Overall, MQ-Net consistently outperforms the four baselines regardless of the labeling cost.", "Meanwhile, CCAL and SIMILAR are more robust to the higher labeling cost than CONF and CORESET; CCAL and SIMILAR, which favor high-purity examples, query more IN examples than CONF and CORESET, so they are less affected by the labeling cost, especially when it is high." ], [ "Conclusion", " We propose MQ-Net, a novel meta-model for open-set active learning that deals with the purity-informativeness dilemma.", "MQ-Net finds the best balancing between the two factors, being adaptive to the noise ratio and target model status.", "A clean validation set for the meta-model is obtained for free by exploiting the procedure of active learning.", "A ranking loss with the skyline constraint optimizes MQ-Net to make the output a legitimate meta-score that preserves the obvious ordering between two examples.", "MQ-Net is shown to yield the best test accuracy throughout the entire course of active learning rounds, thereby empirically proving the correctness of our solution to the purity-informativeness dilemma.", "Overall, we expect that our work will raise the practical usability of active learning with open-set noise." ], [ "Acknowledgement", "This work was supported by Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No.", "2020-0-00862, DB4DL: High-Usability and Performance In-Memory Distributed DBMS for Deep Learning).", "The experiment was conducted by the courtesy of NAVER Smart Machine Learning (NSML) [46].",
"Meta-Query-Net: Resolving Purity-Informativeness Dilemma in Open-set Active Learning (Supplementary Material)" ], [ "Complete Proof of Theorem ", "Let $z_{x}=\lbrace z_x^{\langle 1 \rangle }, \ldots , z_x^{\langle d \rangle }\rbrace $ be the $d$ -dimensional meta-input for an example $x$ , consisting of $d$ available purity and informativeness scores.", "We use only two scores ($d=2$ ) in MQ-Net, one for purity and another for informativeness.", "A non-negative-weighted MLP $\Phi _{\textbf {w}}$ can be formulated as $h^{[l]}=\sigma \big ( W^{[l]} \cdot h^{[l-1]} + b^{[l]} \big ),~ l \in \lbrace 1,\ldots ,L\rbrace ,$ where $h^{[0]}=z_{x}$ , $h^{[L]}\in \mathbb {R}$ , $W^{[l]}\succeq 0$ , and $b^{[l]} \succeq 0$ ; $L$ is the number of layers and $\sigma $ is a non-linear activation function.", "We prove Theorem REF by mathematical induction, as follows: (1) the first layer's output satisfies the skyline constraint by Lemmas REF and REF ; and (2) the $k$ -th layer's output ($k\ge 2$ ) also satisfies the skyline constraint if the $(k-1)$ -th layer's output satisfies the skyline constraint.", "Therefore, we conclude that the skyline constraint holds for any non-negative-weighted MLP $\Phi (z;\textbf {w}): \mathbb {R}^d \rightarrow \mathbb {R}$ by Theorem REF .",
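"For concreteness, such a non-negative-weighted MLP can be instantiated as in the minimal PyTorch sketch below; the softplus reparameterization used to keep the effective weights and biases non-negative is our illustrative choice, not a detail taken from the main text.", "
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkylineMLP(nn.Module):
    # Two-layer MLP whose effective weights and biases are non-negative,
    # so the skyline constraint holds: z_i >= z_j (coordinate-wise)
    # implies Phi(z_i) >= Phi(z_j).
    def __init__(self, d_in: int = 2, d_hidden: int = 64):
        super().__init__()
        # Unconstrained parameters, mapped through softplus in forward()
        # so that every effective W^[l] and b^[l] is >= 0.
        self.w1 = nn.Parameter(0.1 * torch.randn(d_hidden, d_in))
        self.b1 = nn.Parameter(torch.zeros(d_hidden))
        self.w2 = nn.Parameter(0.1 * torch.randn(1, d_hidden))
        self.b2 = nn.Parameter(torch.zeros(1))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # ReLU is a non-decreasing activation, as required by the proof.
        h = F.relu(F.linear(z, F.softplus(self.w1), F.softplus(self.b1)))
        return F.linear(h, F.softplus(self.w2), F.softplus(self.b2)).squeeze(-1)
```
",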
"Lemma A.1 Let $g^{[1]}(z_{x})=W^{[1]}\cdot z_{x}+b^{[1]}$ be a non-negative-weighted single-layer MLP with $m$ hidden units and an identity activation function, where $W^{[1]}\in \mathbb {R}^{m\times d}\succeq 0$ and $b^{[1]}\in \mathbb {R}^m\succeq 0$ .", "Given the meta-inputs of two different examples $z_{x_i}$ and $z_{x_j}$ , the function $g^{[1]}(z_{x})$ satisfies the skyline constraint as $z_{x_i} \succeq z_{x_j} \Rightarrow g^{[1]}(z_{x_i}) \succeq g^{[1]}(z_{x_j}).$", "Let $g^{[1]}(z_{x})$ be denoted by $g(z_{x})$ and $W^{[1]}$ by $W$ for notational simplicity.", "Consider each dimension's scalar output of $g(z_{x})$ , denoted by $g^{\langle p \rangle }(z_{x})$ , where $p$ is the index of the output dimension.", "Similarly, let $W^{\langle p, n\rangle }$ be the scalar element of the matrix $W$ in the $p$ -th row and $n$ -th column.", "By the definition of matrix multiplication, the scalar output $g^{\langle p \rangle }(z_x)$ is the sum of the scalar linear terms $W^{\langle p, n\rangle }\cdot z_{x}^{\langle n \rangle }$ .", "By this property, we show that $g^{\langle p \rangle }(z_{x_i})-g^{\langle p \rangle }(z_{x_j})\ge 0$ if $z_{x_i}\succeq z_{x_j}$ by $\begin{split}g^{\langle p \rangle }(z_{x_i})-g^{\langle p \rangle }(z_{x_j}) &= W^{\langle p,\cdot \rangle } \cdot z_{x_i} - W^{\langle p,\cdot \rangle } \cdot z_{x_j} = \sum _{n=1}^d{\big (W^{\langle p,n\rangle }\cdot z^{\langle n \rangle }_{x_i} - W^{\langle p,n\rangle }\cdot z^{\langle n \rangle }_{x_j} \big )}\\&= \sum _{n=1}^d{\big (W^{\langle p,n\rangle }\cdot (z^{\langle n \rangle }_{x_i} -z^{\langle n \rangle }_{x_j}) \big )} \ge 0. \end{split}$", "Since this holds for every output dimension $p$ , $g(z_{x_i})-g(z_{x_j}) \succeq 0$ if $z_{x_i} \succeq z_{x_j}$ .", "This concludes the proof.", "Lemma A.2 Let $h(z_{x})=\sigma (g(z_{x}))$ where $\sigma $ is a non-decreasing non-linear activation function.", "If the skyline constraint holds for $g(\cdot )\in \mathbb {R}^d$ , then the function $h(z_{x})$ also satisfies the skyline constraint as $z_{x_i} \succeq z_{x_j} \Rightarrow h(z_{x_i}) \succeq h(z_{x_j}).$", "Applying a non-decreasing function does not change the order of its inputs.", "Therefore, $\sigma (g(z_{x_i}))-\sigma (g(z_{x_j}))\succeq 0$ if $g(z_{x_i}) \succeq g(z_{x_j})$ .", "Lemma A.3 Let $h^{[k]}(z_x)=\sigma \big (W^{[k]}\cdot h^{[k-1]}(z_{x})+b^{[k]}\big )$ be the $k$ -th layer of a non-negative-weighted MLP $(k\ge 2)$ , where $W^{[k]}\in \mathbb {R}^{m^\prime \times m}\succeq 0$ and $b^{[k]}\in \mathbb {R}^{m^\prime }\succeq 0$ .", "If $h^{[k-1]}(\cdot )\in \mathbb {R}^m$ satisfies the skyline constraint, then the function $h^{[k]}(z_x)$ also satisfies the skyline constraint as $z_{x_i} \succeq z_{x_j} \Rightarrow h^{[k]}(z_{x_i}) \succeq h^{[k]}(z_{x_j}).$", "Let $W^{[k]}$ be denoted by $W$ , $h^{[k]}(z_{x})$ by $h(z_{x})$ , and $h^{[k-1]}(z_{x})$ by $h_{input}(z_{x})$ for notational simplicity.", "Since an intermediate layer uses $h_{input}(z_{x})$ as its input rather than $z_{x}$ , Eq.", "(REF ) changes to $\begin{split}g^{\langle p \rangle }(z_{x_i})-g^{\langle p \rangle }(z_{x_j}) &= \sum _{n=1}^m{\big (W^{\langle p,n\rangle }\cdot \big (h_{input}^{\langle n \rangle }(z_{x_i}) -h_{input}^{\langle n \rangle }(z_{x_j})\big ) \big )} \ge 0, \end{split}$ where $g^{\langle p \rangle }(z_{x_i})$ is the $p$ -th dimension's output before applying the non-linear activation $\sigma $ .", "Since $h_{input}(\cdot )$ satisfies the skyline constraint, we have $h_{input}^{\langle n \rangle }(z_{x_i})\ge h_{input}^{\langle n \rangle }(z_{x_j})$ when $z_{x_i} \succeq z_{x_j}$ , and hence $g^{\langle p \rangle }(z_{x_i}) \ge g^{\langle p \rangle }(z_{x_j})$ for all $p\in \lbrace 1,\ldots ,m^\prime \rbrace $ .", "By Lemma REF , $h^{\langle p \rangle }(z_{x_i})-h^{\langle p \rangle }(z_{x_j})=\sigma (g^{\langle p \rangle }(z_{x_i}))-\sigma (g^{\langle p \rangle }(z_{x_j})) \ge 0$ for all $p$ .",
"Therefore, $z_{x_i} \succeq z_{x_j} \Rightarrow h^{[k]}(z_{x_i}) \succeq h^{[k]}(z_{x_j})$ .", "Theorem A.4 For any non-negative-weighted MLP $\Phi (z; \textbf {w}): \mathbb {R}^d \rightarrow \mathbb {R}$ with $\textbf {w}\succeq 0$ , the skyline constraint holds such that $z_{x_i}\succeq z_{x_j} \Rightarrow \Phi (z_{x_i}) \ge \Phi (z_{x_j})$ for all $z_{x_i}, z_{x_j} \in \mathbb {R}^d$ with $z_{x_i}, z_{x_j} \succeq 0$ .", "By mathematical induction, where Lemmas REF and REF constitute the base step and Lemma REF is the inductive step, any non-negative-weighted MLP satisfies the skyline constraint.", "Algorithm 1: AL Procedure with MQ-Net.", "Input: ${S}_L$ : labeled set, ${U}$ : unlabeled set, $R$ : number of rounds, $b$ : labeling budget, $C$ : cost function, $\Theta $ : parameters of the target model, $\textbf {w}$ : parameters of MQ-Net.", "Output: final target model $\Theta _{*}$ .", "1: $\textbf {w} \leftarrow $ Initialize the meta-model parameters; 2: for $r=1$ to $R$ do; 3: /* Training the target model parameterized by $\Theta $ */; 4: $\Theta \leftarrow $ Initialize the target model parameters; 5: $\Theta \leftarrow {\rm TrainingClassifier}({S}_L, \Theta )$ ; 6: /* Querying for the budget $b$ */; 7: ${S}_Q \leftarrow \emptyset $ ; 8: while $C({S}_Q) \le b$ do; 9: if $r=1$ then; 10: ${S}_Q\leftarrow {S}_Q \cup \operatornamewithlimits{arg\,max}_{x\in U}(\mathcal {P}(x)+\mathcal {I}(x))$ ; 11: else; 12: ${S}_Q\leftarrow {S}_Q \cup \operatornamewithlimits{arg\,max}_{x\in U}(\Phi (x; \textbf {w}))$ ; 13: ${S}_L\leftarrow {S}_L \cup {S}_Q$ ; ${U}\leftarrow {U}\setminus {S}_Q$ ; 14: /* Training MQ-Net $\Phi $ parameterized by $\textbf {w}$ */; 15: for $t=1$ to meta-train-steps do; 16: Draw a mini-batch $\mathcal {M}$ from $S_Q$ ; 17: $\textbf {w} \leftarrow \textbf {w} -\alpha \nabla _{\textbf {w}}\big ( \mathcal {L}_{meta}(\mathcal {M}) \big )$ ; 18: return $\Theta $ ;" ], [ "Detailed Procedure of MQ-Net", "Mini-batch Optimization.", "Mini-batch examples are sampled from the labeled query set $S_Q$ , which contains both IN and OOD examples.", "Since the meta-objective in Eq.", "(REF ) is a ranking loss, a mini-batch $\mathcal {M}$ is a set of meta-input pairs such that $\mathcal {M}=\lbrace (z_{x_i}, z_{x_j})|~x_i,x_j\in S_Q\rbrace $ where $z_x=\langle \mathcal {P}(x), \mathcal {I}(x) \rangle $ .", "To construct a paired mini-batch $\mathcal {M}$ of size $M$ , we randomly sample $2M$ examples from $S_Q$ and pair the $i$ -th example with the $(M+i)$ -th one for all $i \in \lbrace 1,\ldots ,M\rbrace $ .", "Then, the loss for mini-batch optimization of MQ-Net is defined as $\mathcal {L}_{meta}(\mathcal {M}) = \sum _{(i,j)\in \mathcal {M} }\max \Big (0, - \text{Sign}\big (\ell _{mce}(x_i),\ell _{mce}(x_j)\big ) \cdot \big (\Phi (z_{x_i}; \textbf {w})-\Phi (z_{x_j}; \textbf {w})+\eta \big ) \Big ), \quad \text{subject to } \textbf {w}\succeq 0.$",
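"To make the paired optimization concrete, the following is a minimal PyTorch sketch of the pair construction and the ranking loss; the sign convention $\text{Sign}(a,b)=\text{sign}(a-b)$ and the averaging over pairs (instead of summing) are our assumptions, and the non-negativity of $\textbf {w}$ would be enforced by a reparameterized meta-network such as the SkylineMLP sketched in the proof section above.", "
```python
import torch

def meta_ranking_loss(phi_i, phi_j, loss_i, loss_j, eta=0.1):
    # Hinge-style ranking loss over meta-score pairs; loss_* are the
    # mce losses of the paired queried examples, phi_* their meta-scores.
    sign = torch.sign(loss_i - loss_j)  # assumed reading of Sign(a, b)
    return torch.clamp(-sign * (phi_i - phi_j + eta), min=0.0).mean()

def sample_pairs(z, losses, M):
    # Draw 2M examples from S_Q and pair the i-th with the (M+i)-th.
    idx = torch.randperm(z.size(0))[: 2 * M]
    return z[idx[:M]], z[idx[M:]], losses[idx[:M]], losses[idx[M:]]
```
",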
"Algorithm Pseudocode.", "Algorithm 1 describes the overall active learning procedure with MQ-Net.", "For each AL round, a target model $\Theta $ is trained via stochastic gradient descent (SGD) using IN examples in the labeled set $S_L$ (Lines 3–5).", "This trained target model is saved as the final target model of the current round.", "Next, the querying phase is performed according to the order of meta-query scores from $\Phi $ given the budget $b$ (Lines 6–13).", "Then, the meta-training phase is performed, and the meta-model $\textbf {w}$ is updated via SGD using the labeled query set $S_Q$ as a self-validation set (Lines 14–17).", "Lines 3–17 repeat for the given number $R$ of rounds.", "In the first round, because there is no meta-model trained in the previous round, the query set is constructed by choosing the examples whose sum of purity and informativeness scores is the largest (Lines 9–10)." ], [ "Split-dataset Setup", "Training Configurations.", "We train ResNet-18 using SGD with a momentum of 0.9, a weight decay of 0.0005, and a batch size of 64.", "The initial learning rate of $0.1$ is decayed by a factor of 0.1 at 50$\%$ and 75$\%$ of the total training iterations.", "In the setup of open-set AL, the number of IN examples available for training differs depending on the query strategy.", "We hence use a fixed number of training iterations instead of epochs for fair optimization.", "The number of training iterations is set to 20,000 for CIFAR10/100 and 30,000 for ImageNet.", "We set $\eta $ to 0.1 for all cases.", "We train MQ-Net for 100 epochs using SGD with a weight decay of 0.0005 and a mini-batch size of 64.", "An initial learning rate of $0.01$ is decayed by a factor of 0.1 at 50$\%$ of the total training iterations.", "Since MQ-Net is not yet trained at the querying phase of the first AL round, we simply use the linear combination of purity and informativeness as the query score, i.e., $\Phi (x)=\mathcal {P}(x)+\mathcal {I}(x)$ .", "For calculating the CSI-based purity score, we train a contrastive learner for CSI for 1,000 epochs under the LARS optimizer with a batch size of 32.", "Following CCAL [47], we use the distance from each unlabeled example to the closest OOD example in the labeled set, measured in the representation space of the contrastive learner, as the OOD score.", "The hyperparameters for the other algorithms are favorably configured following the original papers." ], [ "Cross-dataset Setup", "Datasets.", "Each of CIFAR10, CIFAR100, and ImageNet is mixed with OOD examples sampled from an OOD dataset combined from two different domains—LSUN [43], an indoor scene understanding dataset of 59M images with 10 classes, and Places365 [44], a large collection of place scene images with 365 classes.", "Images from LSUN and Places365 are resized to 32$\times $ 32 after random cropping when mixed with CIFAR10 and CIFAR100.", "For ImageNet, as in the split-dataset setup in Section REF , we use 50 randomly-selected classes as IN examples, namely ImageNet50.", "Implementation Details.", "For the cross-dataset setup, the budget $b$ is set to $1,000$ for CIFAR-10 and ImageNet50 and $2,000$ for CIFAR-100, following the literature [5].", "Regarding the open-set noise ratio, we also configure four different levels from light to heavy noise in $\lbrace 10\%, 20\%, 40\%, 60\% \rbrace $ .", "The initial labeled set is selected uniformly at random from the entire unlabeled set within the labeling budget $b$ .", "For instance, when $b$ is $1,000$ and $\tau $ is $20\%$ , 800 IN examples and 200 OOD examples are expected to be selected as the initial set.",
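"The purity score just described can be sketched in a few lines; the Euclidean metric below is our assumption, and a cosine distance on normalized contrastive features would be a natural alternative.", "
```python
import torch

def contrastive_purity_score(unlabeled_feats, labeled_ood_feats):
    # Distance from each unlabeled example to its closest labeled OOD
    # example in the contrastive representation space; a larger distance
    # means the example is more likely IN (purer).
    dists = torch.cdist(unlabeled_feats, labeled_ood_feats)  # (N, K)
    return dists.min(dim=1).values
```
"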
], [ "Results over AL Rounds", " Figure: Test accuracy over AL rounds for the three cross-datasets, CIFAR10, CIFAR100, and ImageNet, with varying open-set noise ratios.Figure REF shows the test accuracy of the target model throughout AL rounds on the three cross-datasets.", "Overall, as analyzed in Section REF , MQ-Net achieves the highest test accuracy in most AL rounds, thereby reaching the best test accuracy at the final round in every case of various datasets and noise ratios.", "Compared with the two existing open-set AL methods, CCAL and SIMILAR, MQ-Net shows a steeper improvement in test accuracy over rounds by resolving the purity-informativeness dilemma in query selection, which shows that MQ-Net keeps improving the test accuracy even in a later AL round by finding the best balancing between purity and informativeness in its query set.", "Together with the results in Section REF , we confirm that MQ-Net is robust to the two different distributions—`split-dataset' and `cross-dataset'—of open-set noise." ], [ "Results with Varying Noise Ratios", " Table REF summarizes the last test accuracy at the final AL round for three cross-datasets with varying levels of open-set noise.", "Overall, the last test accuracy of MQ-Net is the best in every case, which shows that MQ-Net keeps finding the best trade-off between purity and informativeness in terms of AL accuracy regardless of the noise ratio.", "The performance improvement becomes larger as the noise ratio increases.", "Meanwhile, CCAL and SIMILAR are even worse than the four standard AL approaches when noise ratio is less than or equal to 20$\\%$ .", "This trend indicates that focusing on informativeness is more beneficial than focusing on purity when the noise ratio is small.", "Table: Last test accuracy (%\\%) at the final round for three cross-datasets: CIFAR10, CIFAR100, and ImageNet50 mixed with the merger of LSUN and Places365.", "The best results are in bold, and the second best results are underlined." 
], [ "In-depth Analysis of CCAL and SIMILAR in a Low-noise Case", " In the low-noise case, the standard AL method, such as CONF, can query many IN examples even without careful consideration of purity.", "As shown in Table REF , with 10$\\%$ noise, the ratio of IN examples in the query set reaches 75.24$\\%$ at the last AL round in CONF.", "This number is farily similar to 88.46$\\%$ and 90.24$\\%$ in CCAL and SIMILAR, respectively.", "In contrast, with the high-noise case (60$\\%$ noise), the difference between CONF and CCAL or SIMILAR becomes much larger (i.e., from 16.28$\\%$ to 41.84$\\%$ or 67.84$\\%$ ).", "That is, considering mainly on purity (not informativeness) may not be effective with the low-noise case.", "Therefore, especially in the low-noise case, the two purity-focused methods, SIMILAR and CCAL, have the potential risk of overly selecting less-informative IN examples that the model already shows high confidence, leading to lower generalization performance than the standard AL methods.", "In contrast, MQ-Net outperforms the standard AL baselines by controlling the ratio of IN examples in the query set to be very high at the earlier AL rounds but moderate at the later AL rounds; MQ-Net achieves a higher ratio of IN examples in the query set than CONF at every AL round, but the gap keeps decreasing.", "Specifically, with 10$\\%$ noise, the ratio of IN examples in the query set reaches 94.76$\\%$ at the first AL round in MQ-Net, which is higher than 87.52$\\%$ in CONF, but it becomes 75.80$\\%$ at the last AL round, which is very similar to 75.24$\\%$ in CONF.", "This observation means that MQ-Net succeeds in maintaining the high purity of the query set and avoiding the risk of overly selecting less-informative IN examples at the later learning stage.", "Table: Test accuracy and ratio of IN examples in a query set for the split-dataset setup on CIFAR10 with open-set noise of 10%\\% and 60%\\%.", "“%\\%IN in S Q S_Q” means the ratio of IN examples in the query set.Table: OOD detection performance (AUROC) of two different OOD scores over AL rounds with MQ-Net." ], [ "In-depth Analysis of Various Purity Scores", " The OSR performance of classifier-dependent OOD detection methods, e.g., ReAct, degrades significantly if the classifier performs poorly [38].", "Also, the OSR performance of self-supervised OOD detection methods, e.g., CSI, highly depends on the sufficient amount of clean IN examples [34], [35].", "Table REF shows the OOD detection performance of two OOD detectors, ReAct and CSI, over AL rounds with MQ-Net.", "Notably, at the earlier AL rounds, CSI is better than ReAct, meaning that self-supervised OOD detection methods are more robust than classifier-dependent methods when the amount of labeled data is small.", "Thus, the versions of MQ-Net using CSI as the purity score is better than those using ReAct, as shown in Section REF ." 
], [ "AL Performance with More Rounds", "Figure REF shows the test accuracy over longer AL rounds for the split-dataset setup on CIFAR10 with an open-set noise ratio of 40$\\%$ .", "Owing to the ability to find the best balance between purity and informativeness, MQ-Net achieves the highest accuracy on every AL round.", "The purity-focused approaches, CCAL and SIMILAR, lose their effectiveness at the later AL rounds, compared to the informativeness-focused approaches, CONF, CORESET, and BADGE; the superiority of CONF, CORESET, and BADGE over CCAL and SIMILAR gets larger as the AL round proceeds, meaning that fewer but highly-informative examples are more beneficial than more but less-informative examples for model generalization as the model performance converges.", "However, with low (e.g., 20%) open-set noise cases, most OOD examples are selected as a query set and removed from the unlabeled set in a few additional AL rounds, because the number of OOD examples in the unlabeled set is originally small.", "Thus, the situation quickly becomes similar to the standard AL setting." ], [ "Standard Deviations of Main Results", "Table REF repeats Table REF with the addition of the standard deviations.", "Note that the standard deviations are very small, and the significance of the empirical results is sufficiently high." ], [ "Limitation and Potential Negative Societal Impact", "Limitation.", "Although MQ-Net outperforms other methods on multiple pairs of noisy datasets under the open-set AL settings, there are some issues that need to be further discussed.", "First, the performance gap between standard AL without open-set noise and open-set AL still exists.", "That is, we could not completely eliminate the negative effect of open-set noise.", "Second, although we validated MQ-Net with many OOD datasets, its effectiveness may vary according to the types of the OOD datasets.", "Formulating the effectiveness of MQ-Net based on the characteristics of a given pair of IN and OOD datasets can be an interesting research direction.", "Third, we regarded the OOD examples in a query set to be completely useless in training, but recent studies have reported that the OOD examples are helpful for model generalization [48], [49], [50], [15].", "Therefore, analyzing how to use OOD examples for model generalization and sample selection in AL can also be an interesting research direction.", "Potential Negative Societal Impact.", "As in all active learning approaches, since MQ-Net requires human oracles to label each of queried data examples, the oracle can see these examples in a database, even if the proportion of the revealed examples could be very small.", "Then, if the oracle is not trustworthy, there may be a leak of information.", "Therefore, in active learning, not specifically confined to MQ-Net, this privacy breach issue should be carefully considered, especially if the database contains private or sensitive information.", "Figure: Test accuracy over longer AL rounds for the split-dataset setup on CIFAR10 with an open-set noise ratio of 40%\\%.", "500 examples are selected as a query set in each AL round." ] ]
2210.07805
[ [ "Hybrid Decentralized Optimization: First- and Zeroth-Order Optimizers\n Can Be Jointly Leveraged For Faster Convergence" ], [ "Abstract Distributed optimization has become one of the standard ways of speeding up machine learning training, and most of the research in the area focuses on distributed first-order, gradient-based methods.", "Yet, there are settings where some computationally-bounded nodes may not be able to implement first-order, gradient-based optimization, while they could still contribute to joint optimization tasks.", "In this paper, we initiate the study of hybrid decentralized optimization, studying settings where nodes with zeroth-order and first-order optimization capabilities co-exist in a distributed system, and attempt to jointly solve an optimization task over some data distribution.", "We essentially show that, under reasonable parameter settings, such a system can not only withstand noisier zeroth-order agents but can even benefit from integrating such agents into the optimization process, rather than ignoring their information.", "At the core of our approach is a new analysis of distributed optimization with noisy and possibly-biased gradient estimators, which may be of independent interest.", "Experimental results on standard optimization tasks confirm our analysis, showing that hybrid first-zeroth order optimization can be practical." ], [ "Introduction", "One key enabler of the extremely rapid recent progress of machine learning has been distributed optimization: the ability to efficiently optimize over large quantities of data, and large parameter counts, among multiple nodes or devices, in order to share the computational load, and therefore reduce end-to-end training time.", "Distributed machine learning has become commonplace, and it is not unusual to encounter systems which distribute model training among tens or even hundreds of nodes.", "By and large, the standard distribution strategy in the context of machine learning tasks has been data-parallel [1], using first-order gradient estimators.", "We can formalize this as follows: considering a classical empirical risk minimization setting, we have a set of samples $S$ from a distribution, and wish to minimize the function $f: \\mathbb {R}^d\\rightarrow \\mathbb {R}$ , which is the average of losses over samples from $S$ .", "In other words, we wish to find $x^\\star = \\textnormal { argmin }_{x} \\sum _{s \\in S} f_s (x) / |S|$ .", "Assuming that we have $n$ compute nodes which can process samples in parallel, data-parallel SGD consists of iterations in w`hich each node computes gradient estimator for a batch of samples, and then nodes then exchange this information, either globally, via all-to-all communication, or pair-wise.", "Specifically, in this paper we will focus on the highly-popular decentralized optimization case, in which nodes interact in randomly chosen pairs, exchanging model information, following each local optimization step.", "There is already a vast amount of literature on decentralized optimization in the case where nodes have access to first-order, gradient-based estimators.", "While this setting is prevalent, it does not cover the interesting case where, among the set of nodes, a fraction only have access to weaker, zeroth-order gradient estimators, corresponding to less computationally-capable devices, but which may still possess useful local data and computation.", "In this paper, we initiate the study of hybrid decentralized optimization in the latter setting.", 
"Specifically, we aim to answer the following key question: Can zeroth-order estimators be integrated in a decentralized setting, and can they boost convergence?", "Roughly, we show that the answer to this question is affirmative.", "To arrive at it, we must overcome a number of non-trivial technical obstacles, and the answer must be qualified by key parameters, such as the first-order/zeroth-order split in the population, and the estimator variance and bias.", "More precisely, a key difficulty we must overcome in the algorithm and in the analysis is the fact that, under standard implementations, zeroth-order estimators are biased, breaking one of the key analytic assumptions in existing work on decentralized optimization, e.g.", "[2], [3], [4], [5], [6].", "Our analysis approach overcomes this obstacle, and provides the first convergence bounds for hybrid decentralized optimization via a novel potential argument.", "Roughly, assuming a $d$ -dimensional, $L$ -smooth and $\\ell $ -strongly-convex finite-sum objective function $f$ , and a population of $n$ nodes, in which $n_1$ have first-order stochastic gradient estimators of variance $\\sigma _1$ , and $n_0$ have zeroth-order estimators of variance $\\sigma _0$ , then our analysis shows that the “stochastic noise” in the convergence of our hybrid decentralized optimization algorithm in this population is given, up to constants, by the following three quantities: $\\frac{\\eta (d n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n^2}, \\frac{\\eta (d n_0\\sigma _0^2+n_1 \\sigma _1^2)}{n^2}, \\textnormal {~~ and~~ }\\frac{\\eta ^2 L d n_0}{n}.$ In this expression, $\\eta $ is the learning rate, and the quantities $\\varsigma _1$ and $\\varsigma _0$ are bounds on the average variance of first-order and zeroth-order estimators at the nodes, respectively, given by the way in which the data is split among these two types of agents.", "Intuitively, the first term is the variance due to the (random) data split, whereas the second term is the added variance due to noise in the two types of gradient estimators.", "(The zeroth-order terms are scaled by the dimension, as is common in this case.)", "The third term bounds the bias induced by the zeroth-order gradient estimators.", "Using this characterization, we show that there exist reasonable parameter settings such that, if zeroth-order nodes do not have extremely high variance, they may in fact be useful for convergence, especially since the third bias term can be controlled via the learning rate $\\eta $ .", "Our analysis approach should be of independent interest: first, we provide a simple and general way of characterizing convergence in a population mixing first- and zeroth-order agents, which can be easily parametrized given population and estimator properties.", "(For instance, we can directly cover the case when the zeroth-order estimators are unbiased [7] as in this case the bias term becomes zero.)", "Second, we do so in a very general communication model which allows agents to interact at different rates (due to randomness), covering both the pair-wise interaction model [8], [6] and the global matching interactions model [2], [3], [4], [5].", "A key remaining question is whether the above characterization can be validated for practical setting.", "For this, we implemented our algorithm, and examined the convergence under various optimization tasks, population relative sizes, and estimator implementations.", "Specifically, we implemented three different types of zeroth-order estimators: a standard 
biased one, e.g.", "[9], a de-biased estimator [7], and the novel gradient-free estimator of [10], and examined their behavior when mixed with first-order estimators.", "In brief, our results show that, even for fairly high-dimensional and complex tasks, such as fine-tuning the classification layer of a deep neural network, our approach continues to converge.", "Importantly, we observe that our approach allows a system to incorporate information from the zeroth-order agents in an efficient and robust manner, showing higher convergence speed relative to the case where only first-order information is considered for optimization." ], [ "Related Work.", "The study of decentralized optimization algorithms dates back to [11], and is related to the study of gossip algorithms for information dissemination [12], [13].", "The distinguishing feature of this setting is that optimization occurs jointly, but in the absence of a coordinator node.", "Several classic first-order algorithms have been ported and analyzed in the gossip setting, such as subgradient methods for convex objectives [14], [15], [16] or ADMM [17], [18].", "References [2], [19], [20] consider SGD-type algorithms in the non-convex setting, while references [21], [4], [6] analyzed the use of quantization in the gossip setting.", "By contrast, zeroth-order optimization has been relatively less investigated: [22] proposes a distributed deterministic zeroth-order Frank-Wolfe-type algorithm, whereas recent work by [23] investigated the rates which can be achieved by decentralized zeroth-order algorithms, proposing a multi-stage method which can match the rate of centralized algorithms in some parameter regimes.", "Relative to the latter reference, we focus on simpler decentralized algorithms, which can be easily interfaced with first-order optimizers, and perform a significantly more in-depth experimental validation.", "Stochastic zeroth-order optimization has been classically applied for gradient-free optimization of convex functions, e.g.", "[9], and has been extended to tackling high-dimensionality and saddle-point constraints, e.g. [24].", "(The area has tight connections to bandit online optimization, under time-varying objective functions, e.g.", "[25], [26], [27]; however, our results are not immediately relevant to this direction, as we are interested in interactions with agents possessing first-order information as well.)", "In this paper, we also investigate Monte-Carlo techniques for unbiasing [7], improved single-point function evaluation for better gradient estimation [28], as well as the very recent forward-mode unbiased estimator of [10]." 
], [ "The System Model", "We consider a standard model for the decentralized optimization setting, which is similar to [4], [5], [2], [6].", "Specifically, we have $n \\ge 2$ agents, of which $n_0$ agents have zeroth-order gradient oracles, and $n_1$ have first-order gradient oracles.", "(We describe the exact optimization setup in the next section.)", "Beyond their oracle type, the agents are assumed to be anonymous for the purposes of the protocol.", "The execution will proceed in discrete steps, or rounds, where in each step, two agents are chosen to interact, uniformly at random.", "Specifically, when chosen, each agent performs some local computation, e.g.", "obtains some gradient information from their local oracle.", "Then, the two agents exchange parameter information, and update their local models, after which they are ready to proceed to the next round.", "Notice that this random interaction model is asynchronous, in the sense that the number of interactions taken by agents up to some point in time may be different, due to randomness.", "The basic unit of time used in the analysis, which we call fine-grained time, will be the total number of interactions among agents up to some given point in the execution.", "To express global progress, we will consider parallel time, which is the average number of interactions up to some point, and can be obtained by dividing by $n$ the total number of interactions.", "This corresponds to the intuition that $\\Theta (n)$ interactions may occur in parallel.", "In experiments, we will examine the convergence of the local model at a fixed node.", "This model is an instantiation of the classic population model of distributed computing [8], in an optimization setting.", "The model is similar to the one adopted by [6] for analyzing asynchronous decentralized SGD, and is more general than the ones adopted by [4], [5], [2], [3] for decentralized analysis, since the latter assume that nodes are paired via perfect global random matchings in each round.", "(Our analysis would easily extend to global matching, yielding virtually the same results.)" ], [ "Optimization Setup", "We assume each node $i$ has a local data distribution $\\mathcal {D}^i$ , and that the loss function corresponding to the samples at node $i$ , denoted by $f^i(x):\\mathbb {R}^d \\rightarrow \\mathbb {R}$ can be approximated using its stochastic form $F^i(x, \\xi ^i)$ for each parameter $x \\in \\mathbb {R}^d$ and (randomly chosen) sample $\\xi ^i \\sim \\mathcal {D}^i$ , where $f^i(x) = _{\\xi ^i \\sim \\mathcal {D}^i}\\big [ F^i(x, \\xi ^i) \\big ]$ .", "For simplicity of notation, we assume that nodes in the set $N_0=\\lbrace 1,2, ..., n_0\\rbrace $ are zeroth-order nodes and the nodes in the set $N_1 = [n]/N_0$ are first-order nodes.", "Let $n_0$ and $n_1$ be the sizes of the sets $N_0$ and $N_1$ correspondingly.", "In this setup nodes communicate to solve a distributed stochastic optimization problem, i.e.", "$f^* = \\underset{x \\in \\mathbb {R}^d}{min} \\Big [ f(x) := \\frac{1}{n_0} \\sum _{i \\in N_0} f^i(x) + \\frac{1}{n_1} \\sum _{i \\in N_1}f^i(x)\\Big ].$ This means that we wish to optimize the function $f$ which corresponds to the loss over all data samples.", "Since in the analysis we will wish to throttle the ratio of zeroth-order to first-order agents, we split the entire data among zeroth-order nodes, and we do the same thing for the first-order nodes.", "(Our analysis can be extended to settings where this is not the case, but this will allow us for instance to study 
what happens when either $n_0$ or $n_1$ goes to zero, without changing our objective function.)", "We make the following assumptions on the optimization objectives.", "Assumption (Strong convexity). We assume that the function $f$ is strongly convex with parameter $\ell >0$ , i.e., for all $x, y \in \mathbb {R}^d$ : $(x - y)^T(\nabla f(x) - \nabla f(y)) \ge \ell \Vert x - y\Vert ^2.$", "Assumption (Smooth gradient). All the stochastic gradients $\nabla F^i$ are $L$ -Lipschitz for some constant $L > 0$ , i.e., for all $\xi ^i \sim \mathcal {D}^i$ and $x, y \in \mathbb {R}^d$ : $\Vert \nabla F^i(x, \xi ^i) - \nabla F^i(y, \xi ^i) \Vert \le L \Vert x - y\Vert .$", "If in addition the $F^i$ are convex functions, then $\Vert \nabla F^i(x, \xi ^i) - \nabla F^i(y, \xi ^i) \Vert ^2 \le 2L \big (F^i(x, \xi ^i) - F^i(y, \xi ^i) - \langle x-y, \nabla F^i(y, \xi ^i) \rangle \big ).$", "Using Assumption REF , one can easily find that the gradients of $f$ and of each $f^i$ , $\forall i \in [n]$ , also satisfy the above inequalities.", "Further, we make the following assumptions about the data split and the stochastic gradient estimators.", "Assumption (Balanced data distribution). The average variance of the $\nabla f^i(x)$ , for both zeroth- and first-order nodes, is bounded by global constants, i.e., for all $x \in \mathbb {R}^d$ : $\frac{1}{n_0}\sum _{i \in N_0} \Vert \nabla f^i(x) - \nabla f(x) \Vert ^2 \le \varsigma _0^2 \quad \text{and} \quad \frac{1}{n_1}\sum _{i \in N_1} \Vert \nabla f^i(x) - \nabla f(x) \Vert ^2 \le \varsigma _1^2.$", "Assumption (Unbiasedness and bounded variance). For each $i$ , $\nabla F^i(x, \xi ^i)$ is an unbiased estimator of $\nabla f^i(x)$ and its variance is bounded by a constant $s_i^2$ , i.e., for all $x \in \mathbb {R}^d$ : $\mathbb {E}_{\xi ^i \sim \mathcal {D}^i}[\nabla F^i(x, \xi ^i)] = \nabla f^i(x) \quad \text{and} \quad \mathbb {E}_{\xi ^i}\Vert \nabla F^i(x, \xi ^i) - \nabla f^i(x)\Vert ^2 \le s_i^2.$", "Each node has access to an estimator $G^i(x)$ that estimates the local gradient $\nabla f^i(x)$ at point $x$ .", "For nodes which can perform the gradient computation over a batch of data, i.e., first-order nodes, $G^i(x)$ is $\nabla F^i(x, \xi ^i)$ , where $\xi ^i \sim \mathcal {D}^i$ ." 
], [ "Zeroth-order Optimization", "We now provide a brief introduction relative to standard basic facts and assumptions concerning zeroth-order optimization.", "Let the function $f^i_\\nu (x):=_u[f^i(x+\\nu u)],$ $u\\sim N(0, I_d)$ be the smoothed version of each function $f^i(x)$ .", "Then, node $i$ can estimate the gradient of $f^i_\\nu $ by only evaluating some points of $f^i$ .", "[Zeroth-order estimator]definitionzerothEstimator $G^i_\\nu (x, u, \\xi ^i)=\\frac{F^i(x+\\nu u, \\xi ^i)-F^i(x, \\xi ^i)}{\\nu }u,$ where $u \\sim N(0, I_d)$ and $\\xi ^i \\sim \\mathcal {D}^i$ .", "Note that under Assumption REF , one can easily prove that $G^i_\\nu (x, u, \\xi ^i)$ is an unbiased estimator of $\\nabla f^i_\\nu $ since $ _{u, \\xi ^i} [ G^i_\\nu (x, u, \\xi ^i) ] = _u [ \\frac{f^i(x+\\nu u) - f^i(x)}{\\nu }u ] = \\nabla f^i_\\nu (x).$ As a technical note, in our analysis we will set $\\nu :=\\frac{\\eta }{c}$ , where $\\eta $ is the learning rate and $c$ is a constant to be defined later.", "Therefore, for simplicity we can define $G^i(x):=G^i_\\nu (x, u, \\xi ^i)$ , where $G^i_\\nu (x, u, \\xi )$ is as defined in Definition REF and $\\nu =\\frac{\\eta }{c}$ .", "Since zeroth-order nodes cannot perform gradient computation directly, we use this $G^i(x)$ as their gradient estimator.", "We restate the following well-known fact: [[9], Theorem 1.1 in [24]]lemmanestrov For a Gaussian random vector $u\\sim N(0,I_d) $ we have that $[\\Vert u\\Vert ^k] \\le (d+k)^{k/2}$ for any $k \\ge 2$ .", "Moreover, the following statements hold for any function $f$ whose gradient is Lipschitz continuous with constant $L$ .", "a) The gradient of $f_{\\nu }$ is Lipschitz continuous with constant $L_{\\nu }$ such that $L_{\\nu } \\le L$ .", "b) For any $x \\in \\mathbb {R}^d$ , $|f_{\\nu }(x)-f(x)| &\\le \\frac{\\nu ^2}{2} L d,\\\\\\Vert \\nabla f_{\\nu }(x) - \\nabla f^i(x)\\Vert &\\le \\frac{\\nu }{2}L (d+3)^{\\frac{3}{2}}.$ c) For any $x \\in \\mathbb {R} ^n$ , $ \\frac{1}{\\nu ^2}_u[\\lbrace f(x+\\nu u)-f(x)\\rbrace ^2\\Vert u\\Vert ^2] \\le \\frac{ \\nu ^2}{2}L^2(d+6)^3 + 2(d+4)\\Vert \\nabla f(x)\\Vert ^2.$" ], [ "Algorithm Description.", "We now describe a decentralized optimization algorithm, designed to be executed by a population of $n$ nodes, interacting in pairs chosen uniformly at random as per our model.", "We assume that $n_1$ of the nodes have access to first-order estimators and $n_0$ of them have access to zeroth-order estimators, hence $n=n_1+n_0$ .", "Two copies of the training data are distributed, once among the first-orders and once among the zeroth-orders.", "Thus, each first- and zeroth-order node has access to $\\frac{1}{n_1}$ , $\\frac{1}{n_0}$ of the entire training data, respectively.", "We assume that each node $i$ has access to a local stochastic estimator of the gradient, which we denote by $G^i$ , and maintains a model estimate $X^i$ , as well as the global learning rate $\\eta $ .", "Without loss of generality, we assume that the models are initialized to the same randomly-chosen point.", "Specifically, upon every interaction, the interacting agents $i$ and $j$ perform the following steps: [h] HDO pseudocode for each interaction between randomy chosen nodes $i$ and $j$  *[h]Nodes perform local steps.", "$X^i \\leftarrow X^i - \\eta G^i(X^i)$ ; $X^j \\leftarrow X^j - \\eta G^j(X^j)$ ;  *[h]Nodes average their local models.", "${avg} \\leftarrow (X^i + X^j) / 2$ ; $X^i \\leftarrow avg$ ; $X^j \\leftarrow avg$ ;" ], [ "Discussion.", "On the face of it, the algorithm is 
 ], [ "Algorithm Description.", "We now describe a decentralized optimization algorithm, designed to be executed by a population of $n$ nodes interacting in pairs chosen uniformly at random, as per our model.", "We assume that $n_1$ of the nodes have access to first-order estimators and $n_0$ of them have access to zeroth-order estimators; hence $n=n_1+n_0$ .", "Two copies of the training data are distributed, one among the first-order nodes and one among the zeroth-order nodes.", "Thus, each first-order and zeroth-order node has access to a fraction $\frac{1}{n_1}$ and $\frac{1}{n_0}$ of the entire training data, respectively.", "We assume that each node $i$ has access to a local stochastic estimator of the gradient, which we denote by $G^i$ , and maintains a model estimate $X^i$ , as well as the global learning rate $\eta $ .", "Without loss of generality, we assume that the models are initialized to the same randomly-chosen point.", "Specifically, upon every interaction, the interacting agents $i$ and $j$ perform the following steps.", "Algorithm (HDO pseudocode for each interaction between randomly chosen nodes $i$ and $j$ ): /* Nodes perform local steps. */ $X^i \leftarrow X^i - \eta G^i(X^i)$ ; $X^j \leftarrow X^j - \eta G^j(X^j)$ ; /* Nodes average their local models. */ ${avg} \leftarrow (X^i + X^j) / 2$ ; $X^i \leftarrow avg$ ; $X^j \leftarrow avg$ ." ], [ "Discussion.", "On the face of it, the algorithm is straightforward: upon each interaction, each node first performs a local model update based on its estimator, and then the two nodes average their local models.", "Importantly, we do not distinguish between estimator types in the interactions, and nodes are immediately ready to proceed to the next round.", "Yet, this extremely simple structure in the algorithm comes at the cost of a very careful analysis, which will have to show that the above algorithmic pattern works, in spite of the fact that the nodes have different estimators, with different variances and (potentially) bias properties.",
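"The per-interaction update is simple enough to simulate directly; the following is a minimal NumPy sketch, where X is the list of local models and G the list of per-node estimators (both names are ours).", "
```python
import numpy as np

def hdo_interaction(X, G, eta, rng):
    # One HDO step: two distinct nodes are chosen uniformly at random,
    # each takes a local step with its own (first- or zeroth-order)
    # estimator, and the two models are then averaged.
    i, j = rng.choice(len(X), size=2, replace=False)
    X[i] = X[i] - eta * G[i](X[i])
    X[j] = X[j] - eta * G[j](X[j])
    avg = (X[i] + X[j]) / 2.0
    X[i], X[j] = avg, avg.copy()
```
"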
], [ "The Convergence of the HDO Algorithm", "This section is dedicated to proving the following result.", "Theorem (Main). Assume an objective function $f : \mathbb {R}^d \rightarrow \mathbb {R}$ , equal to the average loss over all data samples, whose optimum $x^*$ we are trying to find using Algorithm REF .", "Let $n_0$ be the number of zeroth-order nodes, and $n_1$ be the number of first-order agents.", "Given the data split described in the previous section, let $f^i$ be the local objective function of node $i$ .", "Assume that the functions $f$ and $f^i$ satisfy Assumptions REF , REF , REF and REF .", "Let the total number of steps in the algorithm $T$ be large enough such that $\frac{T}{\log T } = \Omega (\frac{n(d+n)(L+1)(\frac{1}{\ell }+1)}{\ell })$ , and let the learning rate be $\eta = \frac{4n\log T }{T \ell }.$", "Assume that zeroth-order nodes use estimators with $\nu =\frac{\eta }{\sqrt{d}}$ .", "For $1 \le t \le T$ , let the sequence of weights $w_t$ be given by $w_t = (1-\frac{\eta \ell }{2n})^{-t}$ and let $S_T = \sum _{t=1}^{T} w_t$ .", "Finally, define $\mu _t = \sum _{i = 1}^n X^i_t/n$ to be the mean over local model parameters, and $y_T=\sum _{t=1}^T \frac{w_t \mu _{t-1}}{S_T}$ its weighted running average.", "Then, we can show that HDO provides the following convergence rate: $\mathbb {E}[f(y_T) - f(x^*)]+\frac{\ell \,\mathbb {E}\Vert \mu _{T}-x^*\Vert ^2}{8} = O\Bigg (\frac{L \Vert \mu _0-x^*\Vert ^2}{T\log T } +\frac{\log (T) (d n_0\varsigma _0^2+n_1 \varsigma _1^2)}{T \ell n}+\frac{\log (T)(d n_0\sigma _0^2+n_1 \sigma _1^2)}{T \ell n} +\frac{\log (T) d n_0}{T \ell n}\Bigg ).$" ], [ "Speedup.", "We first emphasize that, in the above bound, the time $T$ refers to the total number of interactions among agents, as opposed to parallel time, corresponding to $T / n$ .", "Notice that this rate is reminiscent of that of sequential SGD for the strongly-convex case.", "However, there are some distinctions: our notion of time is different, as we are counting the total number of gradient oracle queries by the nodes, and there are some additional trailing terms, whose meaning we discuss below.", "We interpret this formula from the perspective of an arbitrary local model.", "For this, notice that the notion of parallel time corresponding to the number of total interactions $T$ , which is by definition $T_p = T / n$ , corresponds (up to constants) to the average number of interactions and gradient oracle queries performed by each node up to time $T$ .", "Thus, for any single model, convergence with respect to its number of performed SGD steps $T_p$ would be $O( \log (nT_p) / (n T_p))$ (assuming all parameters are constant), which would correspond to $\Omega (\frac{n}{\log (nT_p)})=\Omega (\frac{n}{\log (T)})$ speedup compared to a variant of sequential SGD.", "Notice that this is quite favorable to our algorithm, since we are considering biased zeroth-order estimators for some of the nodes in the population.", "Hence, assuming that $T$ is polynomial in $n$ , we get an almost-linear speedup of $\Omega \Big (\frac{n}{\log (n)}\Big )$ ." ], [ "Impact of Zeroth-Order Nodes.", "Notice that our convergence bound cleanly separates into terms which come from zeroth-order nodes and terms which come from first-order nodes.", "For $n_0=0$ , we get asymptotically the same bound as we would get if all nodes performed pure first-order SGD steps.", "Similarly, when $n_0=n$ we should be able to achieve asymptotically-optimal convergence for biased zeroth-order estimators.", "Further, notice that, if the bias is negligible, then the last term in the upper bound disappears, and we obtain a trade-off between two populations with different variances.", "We can also observe the following theoretical threshold: we asymptotically match the convergence rate in the case with all nodes performing SGD steps, as long as $d n_0 = O(n)$ (assuming all other parameters are constant)." ], [ "Proof Overview.", "The convergence proof, given in full in the Appendix, can be split conceptually into two steps.", "The first aims to bound the variance of the local models $X^i_t$ for each time step $t$ and node $i$ with respect to the mean $\mu _t = \sum _i X^i_t$ .", "It views this variance as a potential $\Gamma _t$ , which, as we show, has supermartingale-like behavior for a small enough learning rate: specifically, this quantity tends to increase due to gradient steps, but is pushed towards the mean $\mu _t$ by the averaging process.", "The key technical component here is Lemma REF , which provides a careful bound for the evolution of the potential at a step, by modelling optimization as a dynamic load balancing process: each interaction corresponds to a weight generation step (in which gradient estimators are generated) and a load balancing step, in which the “loads” of the two nodes (corresponding to their model values) are balanced through averaging.", "In the second step of the proof, we first bound the rate at which the mean $\mu _t$ converges towards $x^*$ , where we crucially (and carefully) leverage the variance bound obtained above.", "The main challenge in this part is dealing with biased zeroth-order estimators.", "In fact, even dealing with biased first-order estimators is not trivial, since, for example, they are the main reason for the usage of error feedback when stochastic gradients are compressed using biased quantization [29].", "This is our second key technical result.", "With this in hand, we can complete the proof by applying a standard argument which characterizes the rate at which $\mathbb {E}[f(y_T)-f(x^*)]$ and $\mathbb {E}[\Vert \mu _t-x^*\Vert ^2]$ converge towards 0.",
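"Before the detailed analysis, the trade-off described under the impact of zeroth-order nodes can be probed numerically: the helper below evaluates, up to constants, the three stochastic-noise quantities from the introduction for a candidate population; any concrete inputs would be illustrative guesses rather than measured constants.", "
```python
def hdo_noise_terms(eta, d, n0, n1, sigma0, sigma1, varsigma0, varsigma1, L):
    # Data-split variance, estimator variance, and zeroth-order bias,
    # matching the three quantities stated in the introduction.
    n = n0 + n1
    split_var = eta * (d * n0 * varsigma0**2 + n1 * varsigma1**2) / n**2
    estim_var = eta * (d * n0 * sigma0**2 + n1 * sigma1**2) / n**2
    zo_bias = eta**2 * L * d * n0 / n
    return split_var, estim_var, zo_bias
```
"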
], [ "Notation and Preliminaries.", "In this section, we provide a more in-depth sketch of the analysis of the HDO protocol.", "We begin with some notation.", "Recall that $n$ is the number of nodes, split into first-order ($n_1$ ) and zeroth-order ($n_0$ ).", "We will analyze a sequence of time steps $t = 1, 2,\\ldots , T$ , each corresponding to an individual interaction between two nodes, which are usually denoted by $i$ and $j$ .", "Step 1: Parameter Concentration.", "Next, let $X_t$ be a vector of model estimates at time step $t$ , that is $X_t=(X_t^1, X_t^2, ..., X_t^n)$ .", "Also, let $\\mu _t=\\frac{1}{n} \\sum \\limits _{i=1}^n X_t^i$ , be an average estimate at time step $t$ .", "The following potential function measures the variance of the models: $\\Gamma _t=\\frac{1}{n} \\sum _{i=1}^n \\Vert X_t^i-\\mu _t \\Vert ^2.$ With this in place, one of our key technical results is to provide a supermartingale-type bound on the evolution of the potential $\\Gamma _t$ , in terms $\\eta $ , and average second moment of estimators at step $t$ , defined as $M_t^G := \\frac{1}{n}\\sum _i \\big \\Vert G^i(X_t^i)\\big \\Vert ^2$ .", "lemmaGammaBoundPerStepHelper For any time step $t$ : $\\big [ \\Gamma _{t+1} \\big ] \\le \\big ( 1 - \\frac{1}{2n}\\big )\\big [ \\Gamma _t \\big ] + \\frac{4}{n}\\eta ^2\\big [M_t^G\\big ].$ Notice that, if we had a universal second moment bound on the estimators, that is, for any vector $X$ and node $i$ $\\big \\Vert G^i(X)\\big \\Vert ^2 \\le M$ , for some $M > 0$ , then we would be able to unroll the recursion, and, for any $t \\ge 0$ upper bound $E[\\Gamma _t]$ by $\\eta ^2 M^2$ .", "In the absence of such upper bound we must derive the following upper bound on $\\big [M_t^G\\big ]$ : lemmaMGBound Assume $\\nu : =\\frac{\\eta }{c}$ is fixed, where $\\eta $ and $c$ are the learning rate and a constant respectively.", "Then, for any time step t we have: $\\big [ M_t^G\\big ] &\\le 6(d+4)L^2 [\\Gamma _t] + \\frac{6(d+4)n_0\\varsigma _0^2+3n_1 \\varsigma _1^2}{n}\\\\&+6(2d+9)L[f(\\mu _t)-f(x^*)] \\\\&+ \\frac{2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2}{n} \\\\&+ \\eta ^2 \\frac{n_0}{2nc^2}L^2(d+6)^3.$ First, we check how this upper bound affects the upper bound given by Lemma REF .", "For small enough $\\eta $ , the term containing $[\\Gamma _t]$ (which comes from the upper bound on $\\big [ M_t^G\\big ]$ ) can be upper bounded by $\\frac{1}{4n} [\\Gamma _t]$ , and hence it will just change the factor in front of $[\\Gamma _t]$ to $(1-1/4n)$ .", "Second, since the above bound contains the term with $[f(\\mu _t)-f(x^*)]$ we are not able to bound the potential $\\Gamma $ per step, instead, for weights $w_t = (1-\\frac{\\eta \\ell }{2n})^{-t}$ , we can upper bound $\\sum _{t=1}^{T} w_t [\\Gamma _{t-1}]$ (please see Lemma REF in the Appendix).", "The crucial property is that the upper bound on the weighted sum $\\sum _{t=1}^{T} w_t [\\Gamma _{t-1}]$ , is $O(\\eta ^2 \\sum _{t=1}^{T} w_t [f(\\mu _{t-1})-f(x^*)])+\\sum _{t=1}^{T} w_t O(\\eta ^2).$ (for simplicity, above we assumed that all other parameters are constant.)" ], [ "Step 2: Convergence of the Mean and Risk Bound.", "The above result allows us to characterize how well the individual parameters are concentrated around their mean.", "In turn, this will allow us to provide a recurrence for how fast the parameter average is moving towards the optimum.", "To help with the intuition, we provide the lemma which is simplified version of the one given in the additional material (please see Lemma REF 
): Lemma 1 For small enough $\eta $ and $t \ge 1$ we have that: $\mathbb {E}\Vert \mu _{t} -x^* \Vert ^2 \le (1-\frac{\ell \eta }{2n})\mathbb {E}\Vert \mu _{t-1}-x^*\Vert ^2 - \Omega (\frac{\eta }{n}) \,\mathbb {E}\big [f(\mu _{t-1})-f(x^*)\big ] + O( \frac{\eta }{n})\,\mathbb {E}[\Gamma _{t-1}]+O(\frac{\eta ^2}{n^2}).$", "Note that $O$ and $\Omega $ hide all other parameters (we assume that all other parameters are constant).", "As mentioned, the main challenge in the proof of this lemma is taking care of the biased zeroth-order estimators.", "Recall that $w_t = (1-\frac{\eta \ell }{2n})^{-t}$ , by definition.", "We proceed by multiplying both sides of the above inequality by $w_{t}$ and then summing up for $1 \le t \le T$ .", "Then, once we plug in the upper bound on $\sum _{t=1}^{T} w_t \,\mathbb {E}[\Gamma _{t-1}]$ , for small enough $\eta $ the term $O(\frac{\eta }{n}) \cdot O(\eta ^2 \sum _{t=1}^{T} w_t \,\mathbb {E}[f(\mu _{t-1})-f(x^*)])$ vanishes, as it is dominated by the term $-\sum _{t=1}^T \Omega (\frac{\eta }{n}) \,\mathbb {E}\big [f(\mu _{t-1})-f(x^*)\big ]$ .", "We get the final convergence bound after some simple calculations involving division of both sides by $S_T=\sum _{t=1}^T w_t$ , and using $\eta =\frac{4n \log T}{T \ell }$ together with the lower bound on $T$ (in turn, this makes sure that $\eta $ is small enough, so that all the upper bounds we mentioned hold)." ], [ "Experimental Setup and Goals.", "In this section, we validate our results numerically by implementing HDO and examining its behavior in different scenarios.", "To investigate the convergence behavior of our algorithm in a setting that matches our analysis, we examine it on real-world classification tasks from LibSVM [30]; in addition, to investigate a more realistic scenario, we consider a setting in which nodes jointly fine-tune the last (classification) layer of a ResNet50 deep neural network (DNN) on the CIFAR-10 dataset.", "Our main goal is examining whether, under reasonable parameter settings, zeroth-order nodes can be used to enhance the optimization process.", "Further details regarding the models and datasets are presented in the Appendix .", "Our full experimental setup, including code, is available at the following URL: https://anonymous.4open.science/r/Hybrid-Decentralized-Optimization-BCE1.", "Specifically, we simulate HDO's execution for varying numbers of nodes, varying ratios between first- and zeroth-order nodes, and various “strengths” of the zeroth-order gradient estimators.", "We are interested in the convergence behavior of the algorithm, relative to the total number of optimization steps, measured as either the loss value over time, or, alternatively, as the accuracy on the hidden validation set.", "Figure: Convergence vs. total number of nodes for mono-type populations on the Flowers dataset." 
], [ "Results.", "In the first experiment, described in Figure REF , we examine the performance of individual zeroth-order gradient estimators over time, as a function of the number of random vectors used for the the gradient estimation.", "We use the CIFAR-10 finetuning task as an example.", "We choose values 5, 10 and 15 for the number of random vectors, and compare against the unbiased forward-only estimator recently proposed by [10].", "(We have found the performance of the latter estimator to be similar to a de-biased regular one [7], and we therefore omit results for explicit de-biasing so as to not overcrowd the figure.)", "The results clearly show an accuracy-vs-steps advantage for higher number of random vectors, and for the unbiased zeroth-order estimators vs. biased ones.", "Since the computational overhead of unbiasing estimators is fairly low, we will adopt unbiased zeroth-order estimators in the following experiments.", "In the second experiment, executed on the Flowers classification task [30], and described in Figure REF , we examine the convergence speed, in terms of training loss at a fixed node, for various sizes of homogeneous populations, containing only one type of estimators.", "The node is chosen so that its number of interactions is in the median, taken among all nodes.", "Specifically, we compare the convergence of the node in a system with 1, 6, or 12 zeroth-order (ZO) nodes, each using 50 random vectors for estimation, relative to a system with 1 or 6 first-order (FO) nodes.", "The results confirm the intuition, as well as our analysis: first-order nodes always outperform the same number of zeroth-order ones; however, zeroth-order nodes can in fact outperform first-order ones if their number is larger (see relationship between 1 FO and 6 ZO, and between 6 FO and 12 ZO).", "Figure: Comparison between the hybrid- and mono-type estimator population on the Flowers dataset, zoomed-in on the initial iterations.In our third experiment (Figure REF ), we fix a population size $n = 16$ , and examine the impact of the ratio between $n_1$ , the number of first-order nodes, and $n_0$ , the number of zeroth-order nodes, on the convergence of the algorithm.", "Here, we return to the task of final-layer CNN fine-tuning on CIFAR-10.", "As expected, the population formed exclusively of first-order agents has the fastest convergence, while the convergence order follows the intuition that more first-orders in the population provide faster convergence, which is backed up by our analysis.", "Our next experiment, presented in Figure REF , aims to examine whether a hybrid system, formed of both FO and ZO agents, can provide a convergence boost relative to a homogeneous system, formed by only one type of agents.", "(We examine the Flowers classification task, but results are identical for CIFAR-10 classification as well.)", "For this reason, we deploy either 2 FO or ZO nodes, 6 ZO nodes, or a hybrid system formed of 2 FO and 6 ZO nodes.", "The results show that 1) a larger population of 6 ZO nodes can outperform smaller populations of 2 FO or ZO nodes; and that 2) this larger uniform population is itself outperformed by a hybrid population.", "Results are obtained over 5 parallel runs, providing us with the confidence intervals shown in the figure." 
], [ "Discussion, Limitations, and Future Work", "We have provided a first analysis of the convergence of decentralized gradient-based methods in a population of nodes which mixes first- and zeroth-order gradient estimators.", "Our analysis shows that, even when biased or very noisy, information from zeroth-order agents can still be successfully incorporated into a given protocol, and can in fact provide convergence improvements.", "The above experimental results clearly validate our analysis and the initial premise of our paper, by showing that first-order and zeroth-order estimators can in fact be successfully hybrid-ized in a decentralized population of agents.", "This is good news for environments combining computational devices with heterogeneous computational powers, and shows that one can leverage some agents' local data even if the agents do not have the ability to extract gradients.", "Specifically, a practical embodiment of our approach could be a decentralized learning system in which some more computationally-powerful agents perform backpropagation, acting as first-order agents, whereas a larger fraction of the nodes, with computationally-bounded devices, only estimate gradient information based on forward-passes over their local data, and share this information with the overall system during pair-wise interactions.", "Our analysis can also be generalized to tackle more general underlying interaction graph topologies: we omitted this here for brevity, however the potential analysis can be extended to general regular interaction graphs, where the convergence of the algorithm will depend on the eigenvalue gap of the given graph.", "Another possible extension which we plan to investigate is that of additional gradient estimators estimators, and of larger-scale practical deployments to validate the applicability of our approach in practical settings.", "The goal of our experimental simulation has been to validate the practical feasibility of our approach, and we estimate that this goal has been achieved." ], [ "Acknowledgement", "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 805223 ScaleML).", "The authors would like to acknowledge Eugenia Iofinova for useful discussions during the inception of this project." ], [ "Experimental setup", "In this section, we describe our experimental setup in detail.", "We begin by carefully describing the way in which we simulated HybridSGD in the sequential form.", "Then, we proceed by explaining different types of gradient estimators that we used in our experiments, together with their implementation methods.", "Finally, we detail the datasets, tasks, and models used for our experiments." 
], [ "Simulation", "We attempt to simulate a realistic decentralized deployment scenario sequentially, as follows.", "We assume $n$ nodes, each of which initially has a model copy.", "Each node has an oracle to estimate the gradient of loss function with respect to its model.", "It is assumed that $n_1$ nodes have access to first-order oracle and $n_0$ nodes have access to zeroth-order oracle, which can be biased or unbiased.", "Two copies of training dataset are distributed among first- and zeroth-order nodes so that each first-/zeroth-order node has access to $\\frac{1}{n_1}$ / $\\frac{1}{n_0}$ of training data.", "At each simulation step, we select two nodes uniformly at random and make them interact with each other.", "During the interaction, first each node takes a SGD step and then they share their models and adapt the averaged model as their new model.", "To track the performance of our algorithm, we select the node which has taken median number of steps among the population, and evaluate its model on an unseen validation dataset.", "We measure Validation loss and accuracy of the model with respect to its local steps.", "Moreover, we keep track of the training loss by updating it after each interaction using $\\alpha _{t} = 0.95*\\alpha _{t-1} + 0.05*\\beta _{t}$ where $\\alpha $ and $\\beta $ are the training loss and average interaction loss respectively." ], [ "Estimator types", "First-order Using this estimator node $i$ can estimate the gradient $\\nabla f^i(X^i)$ by computing $\\nabla F^i(X^i, \\xi ^i)$ , where $f^i$ and $F^i$ are the node's local loss function and its stochastic estimator respectively.", "The computation is done using Pytorch built-in .backward() method.", "Unbiased Zeroth-order We implemented this estimator using forward-mode differentiation technique inspired by .", "Using this method, for a randomly chosen vector $u\\sim N(0,I_d)$ , node $i$ can compute $F^i(X^i, \\xi ^i)$ and $u.\\nabla F^i(X^i, \\xi ^i)$ in a single forward pass.", "It will then use $(u.\\nabla F^i(X^i, \\xi ^i))u$ as its gradient estimator.", "Note that the node does not need to compute $\\nabla F^i(X^i, \\xi ^i)$ , hence it is a zeroth-order estimation of the gradient.", "Moreover, $E_{u \\sim N(0, I_d)}[(u.\\nabla F^i(X^i, \\xi ^i))u] = \\nabla F^i(X^i, \\xi ^i)$ which means the gradient estimator is unbiased.", "Biased Zeroth-order For a fixed $\\nu $ and a randomly chosen $u \\sim N(0, I_d)$ , node $i$ can estimate the gradient $\\nabla F^i(X^i, \\xi ^i)$ simply by computing $\\frac{F^i(X^i+\\nu u, \\xi ^i) - F^i(X^i, \\xi ^i)}{\\nu }u$ or $\\frac{F^i(X^i+\\nu u, \\xi ^i) - F^i(X^i-\\nu u, \\xi ^i)}{2\\nu }u$ .", "The computations consist of evaluating only function values, thus they are called zeroth-order estimators.", "However, both of them are biased estimators as their expected values would be equal to the gradient of the smoothed-version of the function, $\\nabla F^i_\\nu (X^i, \\xi ^i)$ , which is close but not necessarily equal to $\\nabla F^i(X^i, \\xi )$ .", "Note that the approximation of zeroth-order estimators can be improved by increasing the number of randomly chosen vectors and averaging the results.", "For the biased zeroth-order estimators, we use the batch matrix multiplication to compute the function values for all the randomly chosen vectors using constant GPU calls.", "Moreover, for unbiased zeroth-order estimators, we simulate the forward-mode differentiation by computing the gradient followed by computing the dot products of the gradient and the randomly 
], [ "Datasets and Models", "We used PyTorch to manage the training process in our algorithm.", "We tested our algorithm on three sets of experiments: 1) training a linear model on the Year Prediction dataset [32]; 2) training the last (classification) layer of a ResNet50 deep neural network on the CIFAR-10 [33] and Flowers [34] datasets; and 3) training a convolutional neural network, as a non-convex objective, on the Fashion MNIST [35] dataset.", "In the following, we describe the model architectures and the training hyper-parameters, as well as the way in which we prepared the datasets.", "Training a linear model For this task, we used the Year Prediction dataset, a collection of songs, each with 90 features mapped to its release year, an integer from 1922 to 2011.", "We applied standardization to the samples' inputs and min-max normalization to their outputs.", "We used a random $2^{17}/10000$ split for the train/validation datasets.", "As the model, we used a linear layer of size $(90, 1)$ .", "To train the model, we used MSELoss with $\\text{batch size} = 128$ .", "The learning rate was set to 0.001 unless stated otherwise.", "Training the classification layer of a ResNet50 deep neural network We did this task on two well-known image classification datasets, CIFAR-10 and Flowers.", "For each dataset, we trained a raw ResNet50 model with the number of output features equal to the number of classes of the dataset; 10 for CIFAR-10 and 102 for Flowers.", "We trained the models for 150 epochs using the SGD optimizer and an initial learning rate of $0.001$ .", "We decayed the learning rate by a factor of 10 every 50 epochs.", "After the training process, we passed each dataset through its trained model and stored the inputs of the last layer, paired with their labels, as a new dataset.", "Then, to run our experiments, we used these extracted datasets to train a randomly initialized linear model of size $(2048, \\text{number of classes})$ , as if we were tuning the classification layer of our ResNet50 model.", "We used the default train/test data split of each dataset for our training/validation datasets.", "We always used CrossEntropyLoss as the criterion and $\\text{batch size} = 128$ for these datasets.", "During the linear tuning, the learning rate was set to 0.001 unless stated otherwise.", "Training a Convolutional Neural Network as a Non-Convex Objective For this task we used the well-known Fashion MNIST dataset with its default train/test split as our training/validation datasets, with no augmentation.", "As the model, we used a full CNN architecture with two basic blocks followed by three fully connected layers and a dropout layer; for the details of the architecture, we encourage readers to look at our source code.", "We used CrossEntropyLoss as the criterion, and the batch size was set to 128." ], [ "Experimental results", "In the following, we present additional experiments validating the results reported in the paper.", "The alignment of our results across different datasets supports the robustness of our statements.", "Figure: The impact of the number of random vectors on the biased and unbiased zeroth-order estimators on CIFAR-10.", "Figure: Training loss versus the number of nodes for an all-biased population on the CIFAR-10 dataset.", "Figure: Training loss vs. $n$ for an all-biased population, training a full CNN on Fashion MNIST."
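As a companion to the Simulation subsection above, the following is a minimal Python sketch of the sequential pairwise-interaction loop, including the moving average $\alpha _t = 0.95\,\alpha _{t-1} + 0.05\,\beta _t$ used to track the training loss; the node interface (a `model` vector and an `estimate_grad` oracle returning a gradient estimate and a loss) is an assumption of ours, and the helper names are not those of the released code.

```python
import random

def simulate_hdo(nodes, num_interactions, eta, eval_fn=None):
    """Sequential simulation of pairwise model-averaging interactions.

    `nodes` is a list of objects with a `model` attribute (a parameter vector)
    and an `estimate_grad(model)` method (FO or ZO oracle over the node's shard)
    returning a (gradient_estimate, loss) pair.
    """
    ema_loss, steps = None, [0] * len(nodes)
    for t in range(num_interactions):
        i, j = random.sample(range(len(nodes)), 2)  # uniformly random pair
        losses = []
        for k in (i, j):  # each node takes one local SGD step
            g, loss = nodes[k].estimate_grad(nodes[k].model)
            nodes[k].model = nodes[k].model - eta * g
            losses.append(loss)
            steps[k] += 1
        # Share and adopt the averaged model.
        avg = 0.5 * (nodes[i].model + nodes[j].model)
        nodes[i].model = nodes[j].model = avg
        # Smoothed training loss: alpha_t = 0.95*alpha_{t-1} + 0.05*beta_t.
        beta = sum(losses) / 2
        ema_loss = beta if ema_loss is None else 0.95 * ema_loss + 0.05 * beta
        # Periodically evaluate the node with the median number of local steps.
        if eval_fn is not None and t % 100 == 0:
            median_node = sorted(range(len(nodes)), key=steps.__getitem__)[len(nodes) // 2]
            eval_fn(nodes[median_node].model, ema_loss)
    return nodes, ema_loss
```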
], [ "Zeroth-order Stochastic Gradient Properties.", "Lemma 2 Let $G^i_{\\nu }(x, u, \\xi ^i)$ be computed by REF .", "Then, under Assumptions REF and REF we have: $\\mathbb {E}_{u,\\xi ^i} \\Vert G^i_{\\nu }(x, u, \\xi ^i)\\Vert ^2 \\le \\tfrac{1}{2} \\nu ^2 L^2 (d+6)^3+ 2 (d+4) \\left[\\Vert \\nabla f^i(x)\\Vert ^2+s_i^2\\right],$ $ \\mathbb {E}_{u, \\xi ^i} \\Vert G^i_\\nu (x, u, \\xi ) - \\nabla f^i(x) \\Vert ^2 \\le \\frac{3\\nu ^2}{2} L^2 (d+6)^3 + 4(d+4)\\left[\\Vert \\nabla f^i(x)\\Vert ^2 + s_i^2\\right].$ Firstly, by plugging in $F^i(x, \\xi ^i)$ in REF under Assumptions REF and REF , we obtain $\\mathbb {E} \\big \\Vert G^i_{\\nu }(x, u, \\xi )\\big \\Vert ^2 \\le \\tfrac{1}{2} \\nu ^2 L^2 (d+6)^3+ 2 (d+4)\\Vert \\nabla F^i(x, \\xi ^i) \\Vert ^2$ Then by getting an expectation and eliminating the randomness of the right-hand side with respect to $\\xi ^i$ , we get $\\mathbb {E}_{u, \\xi ^i} \\big \\Vert G^i_{\\nu }(x, u, \\xi ^i)\\big \\Vert ^2 &\\le \\tfrac{1}{2} \\nu ^2 L^2 (d+6)^3+ 2 (d+4)\\Vert \\nabla F^i(x, \\xi ^i) \\Vert ^2 \\\\&\\overset{Assumption \\ref {asmp:unbiasedness_bounded_local_variance_of_F}}{\\le } \\tfrac{1}{2} \\nu ^2 L^2 (d+6)^3+ 2 (d+4) \\left[\\Vert f^i(x)\\Vert ^2+s_i^2\\right].$ Secondly, using REF we have: $\\big \\Vert G_\\nu (x, u, \\xi ) - \\nabla f_\\nu (x) \\big \\Vert ^2 &= \\big \\Vert G_{\\nu }(x, u, \\xi )\\big \\Vert ^2 + \\big \\Vert \\nabla f_\\nu (x) \\big \\Vert ^2 - 2\\langle \\big (G_\\nu (x, u, \\xi ) \\big ), \\nabla f_\\nu (x)\\rangle \\\\&\\overset{\\ref {E(G_v)}}{=} \\big \\Vert G_{\\nu }(x, u, \\xi )\\big \\Vert ^2 +\\underbrace{ \\big \\Vert \\nabla f_\\nu (x) \\big \\Vert ^2 -2 \\big \\Vert \\nabla f_\\nu (x) \\big \\Vert ^2}_{- \\big \\Vert \\nabla f_\\nu (x) \\big \\Vert ^2 \\le 0}\\le \\big \\Vert G_{\\nu }(x, u, \\xi )\\big \\Vert ^2 \\\\&\\overset{\\ref {eqn:Gv_second_moment_upper_bound}}{\\le } \\tfrac{1}{2} \\nu ^2 L^2 (d+6)^3+ 2 (d+4) \\left[\\Vert f^i(x)\\Vert ^2+s_i^2\\right].$ Finally, together with REF and the inequality above we can deduce: $_{u, \\xi ^i} \\big \\Vert G^i_\\nu (x, u, \\xi ^i) - \\nabla f^i(x) \\Vert ^2 &\\le 2_{u, \\xi ^i} \\big \\Vert G^i_\\nu (x, u, \\xi ^i) - \\nabla f^i_\\nu (x) \\big \\Vert ^2 + 2 \\big \\Vert \\nabla f^i_\\nu (x) - \\nabla f^i(x) \\big \\Vert ^2 \\\\&\\overset{}{\\le } \\nu ^2 L^2 (d+6)^3+ 4 (d+4) \\left[\\Vert f^i(x)\\Vert ^2+s_i^2\\right] + 2 \\big \\Vert \\nabla f^i_\\nu (x) - \\nabla f^i(x) \\big \\Vert ^2 \\\\&\\overset{\\ref {rand_smth_close_grad}}{\\le } \\nu ^2 L^2 (d+6)^3+ 4 (d+4) \\left[\\Vert f^i(x)\\Vert ^2+s_i^2\\right] + \\frac{\\nu ^2}{2}L^2 (d+3)^3\\\\&\\le \\frac{3\\nu ^2}{2} L^2 (d+6)^3+ 4 (d+4) \\left[\\Vert f^i(x)\\Vert ^2+s_i^2\\right].$" ], [ "Definitions", "For the sake of simplicity, we now define some notations for the frequently-used expressions in the proof.", "Definition 3 (Gamma) $\\Gamma _{t} := \\frac{1}{n}\\sum _i\\Vert X_{t}^i-\\mu _{t}\\Vert ^2.$ Definition 4 (Average second-moment of estimator) $M_t^G := \\frac{1}{n}\\sum _i \\big \\Vert G^i(X_t^i)\\big \\Vert ^2.$ Definition 5 (Expectation conditioned step) $_t[Y] := [Y|X^1_t,X^2_t,...,X^n_t].$ Definition 6 (Biasedness of estimators) For node $i$ , using $G^i(x)$ as its gradient estimator we define $b_i$ as the upper bound for its biasedness, i.e.", "$\\big \\Vert \\nabla f^i(x) - \\big [G^i(x)\\big ]\\big \\Vert \\le b_i.$ Note that for an unbiased estimator we have $b_i = 0$ .", "Moreover, for zeroth-order estimators $\\big [G^i(x)\\big ]= \\nabla f^i(x)$ .", "Hence, according to REF , 
$\\big \\Vert \\nabla f^i(x) - \\mathbb {E}\\big [G^i(x)\\big ]\\big \\Vert $ is bounded for a fixed $\\nu $ .", "Therefore, $b_i$ is well-defined in our setup.", "We further define the average biasedness of estimators as $B:=\\frac{1}{n} \\sum _i b_i.$ Definition 7 (Variance of estimators) For node $i$ , using $G^i(X_t^i)$ as its gradient estimator at step $t$ , we define $(\\sigma _t^i)^2$ as the upper bound on its variance, i.e.", "$\\mathbb {E}\\big \\Vert \\nabla f^i(X_t^i) - G^i(X_t^i)\\big \\Vert ^2 \\le (\\sigma _t^i)^2.$ Note that for the first-order nodes, i.e.", "$G^i(x)=\\nabla F^i(x)$ , using REF we have $\\sigma _t^i:=s_i$ .", "Moreover, for the zeroth-order nodes, i.e.", "when $G^i(X^i_t)$ is computed using REF , according to REF we have $(\\sigma _t^i)^2:= \\frac{3\\nu ^2}{2} L^2 (d+6)^3 + 4(d+4)\\left[\\Vert \\nabla f^i(X_t^i)\\Vert ^2 + s_i^2\\right]$ , which is well-defined considering that $\\nu $ is fixed in our setup.", "We further define the average variance of estimators as $(\\bar{\\sigma }_t)^2:= \\frac{1}{n}\\sum _i(\\sigma _t^i)^2.$" ], [ "Useful Inequalities", "Lemma 8 (Young) For any pair of vectors $x, y$ and $\\alpha >0$ we have $\\langle x,y \\rangle \\le \\frac{\\Vert x\\Vert ^2}{2\\alpha } + \\frac{\\alpha \\Vert y\\Vert ^2}{2}.$ Lemma 9 (Cauchy-Schwarz) For any vectors $x_1, x_2, ..., x_n \\in \\mathbb {R}^d$ we have $\\Vert \\sum _{i=1}^n x_i \\Vert ^2 \\le n \\sum _{i=1}^n \\Vert x_i\\Vert ^2.$" ], [ "The Complete Convergence Proof", "In this part, we assume that there exist $n_0$ zeroth-order nodes and $n_1$ first-order nodes, all having access to a shared dataset, hence a shared objective function $f$ that they want to minimize.", "Lemma 10 For any time step $t$ and constants $\\alpha _0 \\ge \\alpha _1 > 0$ let $M_t^f(\\alpha _0, \\alpha _1)=\\frac{\\alpha _0}{n}\\sum _{i \\in N_0} \\big \\Vert \\nabla f^i(X_t^i)\\big \\Vert ^2 + \\frac{\\alpha _1}{n}\\sum _{i \\in N_1} \\big \\Vert \\nabla f^i(X_t^i)\\big \\Vert ^2$ .", "We have that: $\\mathbb {E}[M_t^f(\\alpha _0, \\alpha _1)] \\le 3L^2 \\alpha _0 \\,\\mathbb {E}[\\Gamma _t] + \\frac{3\\alpha _0 n_0\\varsigma _0^2+3\\alpha _1 n_1\\varsigma _1^2}{n}+6 (\\alpha _0+\\alpha _1) L\\,\\mathbb {E}[f(\\mu _t)-f(x^*)].$ $\\begin{split}\\frac{\\alpha _0}{n}\\sum _{i \\in N_0} \\big \\Vert \\nabla f^i(X_t^i)\\big \\Vert ^2&= \\frac{\\alpha _0}{n}\\sum _{i \\in N_0} \\big \\Vert \\nabla f^i(X_t^i) - \\nabla f^i(\\mu _t) + \\nabla f^i(\\mu _t)-\\nabla f(\\mu _t)+\\nabla f(\\mu _t) - \\nabla f(x^*)\\big \\Vert ^2\\\\&\\overset{\\text{Assumptions }\\ref {asmp:lipschitz} \\text{ and } \\ref {asmp:global_variance}, \\text{Cauchy-Schwarz}}{\\le } \\frac{3L^2 \\alpha _0}{n} \\sum _{i \\in N_0}\\Vert X_t^i- \\mu _t\\Vert ^2 + \\frac{3\\alpha _0 n_0\\varsigma _0^2}{n}+6 \\alpha _0 L\\big (f(\\mu _t)-f(x^*)\\big ).\\end{split}$ Similarly, in the case of first-order nodes we get: $\\begin{split}\\frac{\\alpha _1}{n}\\sum _{i \\in N_1} \\big \\Vert \\nabla f^i(X_t^i)\\big \\Vert ^2 \\le \\frac{3L^2 \\alpha _1}{n} \\sum _{i \\in N_1}\\Vert X_t^i- \\mu _t\\Vert ^2 + \\frac{3\\alpha _1 n_1\\varsigma _1^2}{n}+6 \\alpha _1 L\\big (f(\\mu _t)-f(x^*)\\big ).\\end{split}$ By summing up the above inequalities, using the fact that $\\alpha _0 \\ge \\alpha _1$ (together with the definition of $\\Gamma _t$ ), and taking expectations, we get the proof of the lemma.", "* $\\begin{split}\\mathbb {E}_t\\big [ M_t^G\\big ] &= \\frac{1}{n}\\sum _i \\mathbb {E}_t \\big \\Vert G^i(X_t^i)\\big \\Vert ^2 \\overset{ (\\ref {eqn:Gv_second_moment_upper_bound})}{\\le } \\frac{1}{n}\\sum _{i \\in N_0} (\\tfrac{1}{2} \\nu ^2 L^2 (d+6)^3+ 2 (d+4) \\left[\\mathbb {E}_t\\Vert \\nabla f^i(X_t^i)\\Vert ^2+s_i^2\\right])
\\\\&\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad + \\frac{1}{n} \\sum _{i \\in N_1} (\\mathbb {E}_t\\Vert \\nabla f^i(X_t^i)\\Vert ^2+s_i^2)\\\\&\\le \\mathbb {E}_t[M_t^f(2(d+4),1)] + \\frac{2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2}{n} + \\eta ^2 \\frac{n_0}{2nc^2}L^2(d+6)^3.\\end{split}$ Next, we take expectation with respect to $X_t^1, X_t^2, ..., X_t^n$ and use Lemma REF to get: $\\begin{split}\\mathbb {E}\\big [ M_t^G\\big ] \\le 6(d+4)L^2 \\,\\mathbb {E}[\\Gamma _t] &+ \\frac{6(d+4)n_0\\varsigma _0^2+3n_1 \\varsigma _1^2}{n}+6(2d+9)L\\,\\mathbb {E}[f(\\mu _t)-f(x^*)]\\\\&+ \\frac{2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2}{n} + \\eta ^2 \\frac{n_0}{2nc^2}L^2(d+6)^3.\\end{split}$ This finishes the proof of the lemma.", "Lemma 11 Assume $\\nu :=\\frac{\\eta }{c}$ is fixed, where $\\eta $ and $c$ are the learning rate and a constant, respectively.", "Then, for any time step $t$ we have $\\begin{split}\\mathbb {E}[(\\bar{\\sigma }_t)^2] \\le 12(d+4)L^2 \\,\\mathbb {E}[\\Gamma _t] &+ \\frac{12n_0(d+4)\\varsigma _0^2+3n_1 \\varsigma _1^2}{n}+6(4d+17)L\\,\\mathbb {E}[f(\\mu _t) - f(x^*)] \\\\&+\\frac{4(d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2}{n} + \\eta ^2 \\frac{3 n_0}{2nc^2}L^2(d+6)^3.\\end{split}$ $\\begin{split}\\mathbb {E}_t[(\\bar{\\sigma }_t)^2] &= \\frac{1}{n}\\sum _i\\mathbb {E}_t[(\\sigma _t^i)^2] \\overset{(\\ref {eqn:Gv_variance_upper_bound})}{\\le } \\frac{1}{n}\\sum _{i \\in N_0}(\\frac{3\\nu ^2}{2} L^2 (d+6)^3 + 4(d+4)\\left[\\mathbb {E}_t\\Vert \\nabla f^i(X_t^i)\\Vert ^2 + s_i^2\\right])\\\\&\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad + \\frac{1}{n} \\sum _{i \\in N_1} (\\mathbb {E}_t\\Vert \\nabla f^i(X_t^i)\\Vert ^2+s_i^2)\\\\&= \\mathbb {E}_t[M^f_t(4(d+4),1)] + \\frac{4(d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2}{n} + \\eta ^2 \\frac{3 n_0}{2nc^2}L^2(d+6)^3.\\end{split}$ Next, we take expectation with respect to $X_t^1, X_t^2, ..., X_t^n$ and use Lemma REF to get: $\\begin{split}\\mathbb {E}[(\\bar{\\sigma }_t)^2] \\le 12(d+4)L^2 \\,\\mathbb {E}[\\Gamma _t] &+ \\frac{12n_0(d+4)\\varsigma _0^2+3n_1 \\varsigma _1^2}{n}+6(4d+17)L\\,\\mathbb {E}[f(\\mu _t) - f(x^*)] \\\\&+\\frac{4(d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2}{n} + \\eta ^2 \\frac{3 n_0}{2nc^2}L^2(d+6)^3.\\end{split}$ This finishes the proof of the lemma.", "Lemma 12 Assume $\\nu :=\\frac{\\eta }{c}$ is fixed, where $\\eta $ and $c$ are the learning rate and a constant, respectively.", "Then, for any time step $t$ we have $B \\le \\eta \\frac{n_0}{2cn}L(d+3)^{\\frac{3}{2}}.$ $\\begin{split}B &= \\frac{1}{n}\\sum _i b_i = \\frac{1}{n}\\sum _i \\Vert \\nabla f^i(X_t^i) - \\mathbb {E}[G^i(X_t^i)]\\Vert = \\frac{1}{n}\\sum _{i \\in N_0} \\Vert \\nabla f^i(X_t^i) - \\nabla f^i_\\nu (X_t^i) \\Vert \\\\&\\overset{(\\ref {smth_approx})}{\\le } \\frac{\\nu n_0}{2n}L(d+3)^{\\frac{3}{2}} = \\eta \\frac{n_0}{2cn}L(d+3)^{\\frac{3}{2}}.\\end{split}$ * First, we can expand $\\mathbb {E}_t\\big [ \\Gamma _{t+1} \\big ]$ as $\\mathbb {E}_t\\big [ \\Gamma _{t+1} \\big ] = \\mathbb {E}_t\\Big [ \\frac{1}{n}\\sum _i\\Vert X_{t+1}^i-\\mu _{t+1}\\Vert ^2\\Big ].$ Observe that in this case $\\mu _{t+1}=\\mu _t-\\eta (G_{t}^i(X_t^i)+G_{t}^j(X_t^j))/n$ and $X_{t+1}^i=X_{t+1}^j=(X_t^i+X_t^j)/2-\\eta (G_{t}^i(X_t^i)+G_{t}^j(X_t^j))/2$ .", "Hence, $\\begin{split}&\\mathbb {E}_t\\big [ \\Gamma _{t+1} \\big ] = \\frac{1}{n^2(n-1)}\\sum _i\\sum _{i \\ne j}\\mathbb {E}_t\\Bigg [ \\begin{aligned}[t] &2\\big \\Vert (X_t^i+X_t^j)/2 - \\big ( \\frac{n-2}{2n}\\big )\\eta (G^i(X_t^i) + G^j(X_t^j)) - \\mu _t\\big \\Vert ^2\\\\&+\\sum _{k \\ne i, j}\\big \\Vert X_t^k - \\mu _t + \\frac{\\eta }{n}(G^i(X_t^i) + G^j(X_t^j))\\big \\Vert ^2\\Bigg ]\\end{aligned}\\\\&=\\frac{1}{n^2(n-1)}\\sum _i\\sum _{i \\ne j}\\mathbb {E}_t\\Bigg [2\\Big 
(\\begin{aligned}[t]& \\Vert (X_t^i+X_t^j)/2 - \\mu _t\\Vert ^2 + \\big ( \\frac{n-2}{2n}\\big )^2\\eta ^2\\big \\Vert G^i(X_t^i) + G^j(X_t^j)\\big \\Vert ^2\\\\&-\\big ( \\frac{n-2}{n}\\big )\\eta \\Big \\langle G^i(X_t^i) + G^j(X_t^j), (X_t^i+X_t^j)/2 - \\mu _t\\Big \\rangle \\Big )\\\\&+ \\sum _{k \\ne i, j}\\Big ( \\Vert X_t^k - \\mu _t\\Vert ^2 + \\big (\\frac{1}{n}\\big )^2\\eta ^2\\big \\Vert G^i(X_t^i) + G^j(X_t^j)\\big \\Vert ^2\\\\&+\\frac{2}{n}\\eta \\Big \\langle G^i(X_t^i) + G^j(X_t^j), X_t^k-\\mu _t\\Big \\rangle \\Big )\\Bigg ]\\end{aligned}\\\\&=\\frac{1}{n^2(n-1)}\\sum _i\\sum _{i \\ne j}_t\\Bigg [\\begin{aligned}[t]&\\sum _k\\Vert X_t^k-\\mu _t\\Vert ^2 - \\Vert X_t^i-\\mu _t\\Vert ^2/2 - \\Vert X_t^j-\\mu _t\\Vert ^2/2 + \\langle X_t^i -\\mu _t, X_t^j-\\mu _t\\rangle \\\\&+\\underbrace{\\Big (\\frac{(n-2)^2}{2n^2}+\\frac{n-2}{n^2}\\Big )}_{\\frac{n-2}{2n}\\le \\frac{1}{2}}\\eta ^2\\big \\Vert G^i(X_t^i) + G^j(X_t^j)\\big \\Vert ^2\\\\&-\\underbrace{\\Big (\\frac{n-2}{n} + \\frac{2}{n}\\Big )}_{1}\\eta \\Big \\langle G^i(X_t^i) + G^j(X_t^j), X_t^i+X_t^j - 2\\mu _t\\Big \\rangle \\Bigg ]\\end{aligned}\\\\&\\le (1 - \\frac{1}{n})_t\\big [ \\Gamma _t \\big ] -\\frac{1}{n^2(n-1)}\\eta \\sum _i\\sum _{i \\ne j}_t \\Big \\langle G^i(X_t^i) + G^j(X_t^j), X_t^i+X_t^j - 2\\mu _t\\Big \\rangle \\\\&+\\frac{1}{n^2(n-1)}\\sum _i\\sum _{i \\ne j}_t \\langle X_t^i -\\mu _t, X_t^j-\\mu _t\\rangle +\\frac{1}{2n^2(n-1)}\\eta ^2\\sum _i\\sum _{i \\ne j} _t \\big \\Vert G^i(X_t^i)+G^j(X_t^j)\\big \\Vert ^2\\\\&\\le (1-\\frac{1}{n})_t\\big [ \\Gamma _t \\big ] \\begin{aligned}[t]& + \\underbrace{\\frac{1}{n^2(n-1)}\\sum _i\\sum _{i \\ne j}_t \\langle X_t^i -\\mu _t, X_t^j-\\mu _t\\rangle }_{P_1:=} + \\underbrace{\\frac{2}{n^2}\\eta ^2 \\sum _i _t\\big \\Vert G^i(X_t^i)\\big \\Vert ^2}_{\\frac{2}{n}\\eta _t\\big [M_t^G\\big ]}\\\\&- \\underbrace{\\frac{1}{n^2(n-1)}\\eta \\sum _i\\sum _{i \\ne j}_t \\Big \\langle G^i(X_t^i) + G^j(X_t^j), X_t^i+X_t^j - 2\\mu _t\\Big \\rangle }_{P_2:=} \\end{aligned}.\\end{split}$ Now we upper bound each of $P_1$ and $P_2$ as following $ P_1 = \\frac{1}{n^2(n-1)}\\sum _i\\sum _{i \\ne j}_t \\langle X_t^i -\\mu _t, X_t^j-\\mu _t\\rangle = \\frac{-1}{n^2(n-1)}\\sum _i_t\\Vert X_t^i - \\mu _t\\Vert ^2 = \\frac{-1}{n(n-1)}_t\\big [ \\Gamma _t \\big ]$ $\\begin{split}P_2 &= \\frac{1}{n^2(n-1)}\\eta \\sum _i\\sum _{i \\ne j}_t \\Big \\langle G^i(X_t^i) + G^j(X_t^j), X_t^i+X_t^j - 2\\mu _t\\Big \\rangle \\\\&= \\frac{2}{n^2(n-1)}\\eta \\Big ( \\sum _i\\sum _{i \\ne j} _t \\Big \\langle G^i(X_t^i), X_t^j - \\mu _t\\Big \\rangle +(n-1)\\sum _i_t\\Big \\langle G^i(X_t^i), X_t^i - \\mu _t\\Big \\rangle \\Big )\\\\&=\\frac{2(n-2)}{n^2(n-1)} \\sum _i _t \\Big \\langle \\eta G^i(X_t^i), X_t^i - \\mu _t\\Big \\rangle \\overset{\\text{Young}}{\\le } \\frac{1}{n^2}\\sum _i\\Big ( 2\\eta ^2_t\\big \\Vert G^i(X_t^i)\\big \\Vert ^2 + \\frac{1}{2}_t\\big \\Vert X_t^i-\\mu _t\\big \\Vert ^2\\Big )\\\\& = \\frac{2}{n}\\eta ^2_t\\big [M_t^G\\big ] + \\frac{1}{2n}_t \\big [ \\Gamma _t \\big ].\\end{split}$ By using (REF ) and (REF ) in inequality (REF ) we get $\\begin{split}_t\\big [ \\Gamma _{t+1} \\big ] &\\le (1-\\frac{1}{n})_t\\big [ \\Gamma _t \\big ] - \\frac{1}{n(n-1)}_t\\big [ \\Gamma _t \\big ] + \\frac{2}{n}\\eta ^2 _t\\big [M_t^G\\big ] + \\frac{2}{n}\\eta ^2_t\\big [M_t^G\\big ] + \\frac{1}{2n}_t \\big [ \\Gamma _t \\big ]\\\\&\\le \\big ( 1 - \\frac{1}{2n}\\big )_t\\big [ \\Gamma _t \\big ] + \\frac{4}{n}\\eta ^2_t\\big [M_t^G\\big ].\\end{split}$ Finally, by taking the 
expectation with respect to $X_t^1, X_t^2, ..., X_t^n$ we will have $ \\big [ \\Gamma _{t+1} \\big ] \\le \\big ( 1 - \\frac{1}{2n}\\big )\\big [ \\Gamma _t \\big ] + \\frac{4}{n}\\eta ^2\\big [M_t^G\\big ]$ Lemma 13 For any time step $t$ and fixed learning rate $\\eta \\le \\frac{1}{14 L (d+4)^\\frac{1}{2}}$ $\\big [\\Gamma _{t+1}\\big ] \\le \\big ( 1 - \\frac{1}{4n})\\big [ \\Gamma _t \\big ] &+ \\frac{12\\eta ^2(2(d+4)n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n^2}+\\frac{24\\eta ^2(2d+9)L[f(\\mu _t)-f(x^*)]}{n} \\\\&+ \\frac{4\\eta ^2(2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^2} + \\frac{2\\eta ^4 n_0 L^2(d+6)^3}{n^2c^2}.$ From Lemma REF we get that: $\\big [ \\Gamma _{t+1} \\big ] \\le \\big ( 1 - \\frac{1}{2n}\\big )\\big [ \\Gamma _t \\big ] + \\frac{4}{n}\\eta ^2\\big [M_t^G\\big ].$ Now, by using Lemma REF in the inequality above we have $\\big [ \\Gamma _{t+1} \\big ] &\\le \\big ( 1 - \\frac{1}{2n}\\big )\\big [ \\Gamma _t \\big ] + \\frac{4}{n}\\eta ^2\\big [M_t^G\\big ]\\\\ \\le & \\big ( 1 - \\frac{1}{2n})\\big [ \\Gamma _t \\big ] + \\frac{4\\eta ^2}{n}\\Bigg (6(d+4)L^2 \\Gamma _t+ \\frac{6(d+4)n_0\\varsigma _0^2+3n_1 \\varsigma _1^2}{n}+6(2d+9)L(f(\\mu _t)-f(x^*)) \\\\&\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad + \\frac{2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2}{n} + \\eta ^2 \\frac{n_0}{2nc^2}L^2(d+6)^3\\Bigg ) \\\\&=\\big ( 1 - \\frac{1}{2n}+\\frac{24\\eta ^2 L^2 (d+4)}{n})\\big [ \\Gamma _t \\big ] + \\frac{12\\eta ^2(2(d+4)n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n^2}+\\frac{24\\eta ^2(2d+9)L[f(\\mu _t)-f(x^*)]}{n} \\\\&\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad + \\frac{4\\eta ^2(2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^2} + \\frac{2\\eta ^4 n_0 L^2(d+6)^3}{n^2c^2}.$ We get the proof of the lemma by using $\\eta \\le \\frac{1}{14 L (d+4)^\\frac{1}{2}}$ in the above inequality.", "Next, we define the following weights: for any step $t \\ge 0$ , let $w_t = (1-\\frac{\\eta \\ell }{2n})^{-t}$ .", "This allows us to prove the following lemma: Lemma 14 for any $T \\ge 0$ and $\\eta \\le \\frac{1}{10\\ell }$ : $\\sum _{t=1}^{T} &w_t [\\Gamma _{t-1}] \\le 120\\eta ^2(2d+9)L \\sum _{t=1}^{T-1} w_{t} [f(\\mu _{t-1})-f(x^*)] \\\\&+ \\Bigg (\\frac{60\\eta ^2(2(d+4)n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n}+\\frac{20\\eta ^2(2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n} + \\frac{10\\eta ^4 n_0 L^2(d+6)^3}{nc^2} \\Bigg ) \\sum _{t=1}^{T-1} w_{t}.$ Let $P_t=\\frac{24\\eta ^2(2d+9)L[f(\\mu _t)-f(x^*)]}{n}$ and let $Q=\\frac{12\\eta ^2(2(d+4)n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n^2}+\\frac{4\\eta ^2(2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^2} + \\frac{2\\eta ^4 n_0 L^2(d+6)^3}{n^2c^2}.$ Then the above lemma gives us that for any $t \\ge 0$ : $[\\Gamma _{t+1}] \\le (1-\\frac{1}{4n})[\\Gamma _t]+P_t+Q$ .", "After unrolling the recursion, we get that for any $t > 1$ , $[\\Gamma _t] \\le \\sum _{i=0}^{t-1} (P_i+Q)(1-\\frac{1}{4n})^{t-1-i}$ .", "Hence, $\\begin{split}\\sum _{t=1}^T w_t [\\Gamma _{t-1}] &\\le \\sum _{t=2}^T w_t \\Bigg (\\sum _{i=0}^{t-2} (P_i+Q)(1-\\frac{1}{4n})^{t-2-i} \\Bigg ) = \\sum _{t=0}^{T-2} (P_t+Q) \\sum _{i=t+2}^T w_i (1-\\frac{1}{4n})^{i-2-t} \\\\ &=(1-\\frac{\\eta \\ell }{2n})^{-1} \\sum _{t=0}^{T-2} (P_t+Q) \\sum _{i=t+2}^T w_{t+1} (1-\\frac{\\eta \\ell }{2n})^{-(i-(t+2))}(1-\\frac{1}{4n})^{i-(t+2)}\\\\&= (1-\\frac{\\eta \\ell }{2n})^{-1} \\sum _{t=0}^{T-2} w_{t+1}(P_t+Q) \\sum 
_{j=0}^{T-(t+2)} \\left( \\frac{1-\\frac{1}{4n}}{1-\\frac{\\eta \\ell }{2n}}\\right)^j.\\end{split}$ For $\\frac{1}{10\\ell }\\ge \\eta $ , we have $r:=\\frac{1-\\frac{1}{4n}}{1-\\frac{\\eta \\ell }{2n}} \\le 1$ .", "Hence, we can write $\\sum _{j=0}^{T-(t+2)} \\left( \\frac{1-\\frac{1}{4n}}{1-\\frac{\\eta \\ell }{2n}}\\right)^j = \\sum _{j=0}^{T-(t+2)} r^j = \\frac{1-r^{T-(t+1)}}{1-r} \\overset{t \\le T-2}{\\le } \\frac{1}{1-r}.$ By using the above inequality in (REF ) we have $\\sum _{t=1}^T w_t [\\Gamma _{t-1}] &\\le (1-\\frac{\\eta \\ell }{2n})^{-1}\\frac{1}{1-\\frac{1-\\frac{1}{4n}}{1-\\frac{\\eta \\ell }{2n}}} \\sum _{t=0}^{T-2} w_{t+1}(P_t+Q)\\\\&= \\frac{1}{\\frac{1}{4n} - \\frac{\\eta \\ell }{2n}} \\sum _{t=0}^{T-2} w_{t+1}(P_t+Q)=\\frac{1}{\\frac{1}{4n} - \\frac{\\eta \\ell }{2n}} \\sum _{t=1}^{T-1} w_{t}(P_{t-1}+Q).$ Finally, since $\\frac{1}{10\\ell }\\ge \\eta $ we get $\\frac{1}{\\frac{1}{4n} - \\frac{\\eta \\ell }{2n}} \\le 5n$ and the proof of lemma is finished.", "Lemma 15 For $\\eta \\le \\frac{\\sqrt{\\ell c n}}{2\\sqrt{Ln_0}(d+3)^\\frac{3}{4}}$ , we have that $\\Big \\Vert \\mu _{t+1} -x^* \\Big \\Vert ^2 &\\le (1-\\frac{\\ell \\eta }{n} + \\eta ^2 \\frac{4B}{n})\\Vert \\mu _t-x^*\\Vert ^2 - (4\\frac{\\eta }{n} - \\eta ^2\\frac{16L(12d+52)}{n^2} - \\eta ^4 \\frac{64BL}{n^3})\\big [f(\\mu _t)-f(x^*)\\big ]\\\\&+ \\left( 2\\frac{L+\\ell }{n}\\eta + \\eta ^2L^2 \\frac{96d+456}{n^2} + \\eta ^4 \\frac{32BL^2}{n^3}\\right)[\\Gamma _t]\\\\&+ \\frac{\\eta ^2((96d+448)\\varsigma _0^2 n_0+88\\varsigma _1^2 n_1)}{n^3}+\\frac{8\\eta ^2((d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^3}+\\frac{12\\eta ^4 n_0 L^2 (d+6)^3}{n^3c^2}+\\eta ^2 \\frac{2B}{n}.$ Let $F_t$ be the amount by which $\\mu _t$ decreases at step $t$ .", "So, $F_t$ is a sum of $\\frac{\\eta }{n}G^i(X_t^i)$ and $\\frac{\\eta }{n}G^j(X_t^j)$ for agents $i$ and $j$ , which interact at step $t$ .", "Also, let $F^{\\prime }_t$ be the amount by which $\\mu _t$ would decrease if all the agents were contributing at that step using their true local gradients.", "That is $F^{\\prime }_t = \\frac{2\\eta }{n^2}\\sum _i \\nabla f^i(X_t^i)$ .", "To make the calculations more clear, lets define $_t[Y] := [Y|X^1_t,X^2_t,...,X^n_t]$ .", "$\\Big \\Vert \\mu _{t+1} -x^* \\Big \\Vert ^2&= \\Big \\Vert \\mu _t-F_t-x^* \\Big \\Vert ^2 = \\Big \\Vert \\mu _t-F_t-x^* - F^{\\prime }_t+F^{\\prime }_t \\Big \\Vert ^2 \\\\&=\\Big \\Vert \\mu _t-x^*-F^{\\prime }_t \\Big \\Vert ^2+ \\Big \\Vert F^{\\prime }_t-F_t \\Big \\Vert ^2 + 2 \\Big \\langle \\mu _t-x^*- F^{\\prime }_t, F^{\\prime }_t-F_t \\Big \\rangle \\\\&=\\Big \\Vert \\mu _t-x^*-F^{\\prime }_t \\Big \\Vert ^2+ _{X^1_t,X^2_t,...,X^n_t} \\Big [ _t \\Big \\Vert F^{\\prime }_t-F_t \\Big \\Vert ^2\\Big ] \\\\&+ 2 _{X^1_t,X^2_t,...,X^n_t} \\Big [ _t \\Big \\langle \\mu _t-x^*- F^{\\prime }_t, F^{\\prime }_t-F_t \\Big \\rangle \\Big ]\\\\$ This means that in order to upper bound $\\Big \\Vert \\mu _{t+1} -x^* \\Big \\Vert ^2$ , we need to upper bound $\\Big \\Vert \\mu _t-x^*- F^{\\prime }_t \\Big \\Vert ^2$ , $_t \\Big \\Vert F^{\\prime }_t-F_t \\Big \\Vert ^2$ , and $_t \\Big \\langle \\mu _t-x^*- F^{\\prime }_t, F^{\\prime }_t-F_t \\Big \\rangle $ .", "For the first one, when $X_1, X_2, ..., X_n$ are fixed, we have that $\\begin{split}\\Big \\Vert \\mu _t-x^*- F^{\\prime }_t \\Big \\Vert ^2 &= \\Big \\Vert \\mu _t-x^*- \\frac{2\\eta }{n^2} \\sum _i \\nabla f^i(X_t^i)\\Big \\Vert ^2\\\\&= \\big \\Vert \\mu _t-x^*\\big \\Vert ^2 + 4\\frac{\\eta ^2}{n^2} \\underbrace{\\Big 
\\Vert \\frac{1}{n} \\sum _i \\nabla f^i(X_t^i)\\Big \\Vert ^2}_{R_1:=} -4\\frac{\\eta }{n} \\underbrace{\\Big \\langle \\mu _t-x^*, \\frac{1}{n} \\sum _i \\nabla f^i(X_t^i) \\Big \\rangle }_{R_2:=}\\end{split}$ $ \\begin{split}R_1 &= \\Big \\Vert \\frac{1}{n}\\sum _i \\nabla f^i(X_t^i)\\Big \\Vert ^2 = \\Big \\Vert \\frac{1}{n} \\sum _i \\nabla f^i(X_t^i) - \\nabla f^i(\\mu _t) + \\nabla f^i(\\mu _t) - \\nabla f^i(x^*)\\Big \\Vert ^2\\\\&\\overset{\\text{Cauchy-Schwarz}}{\\le } \\frac{2}{n} \\sum _i \\Big \\Vert \\nabla f^i(X_t^i) - \\nabla f^i(\\mu _t)\\Big \\Vert ^2 + 2\\Big \\Vert \\frac{1}{n} \\sum _i \\nabla f^i(\\mu _t) - \\nabla f^i(x^*) \\Big \\Vert ^2\\\\&\\le \\frac{2L^2}{n}\\sum _i \\big \\Vert X_t^i - \\mu _t\\big \\Vert ^2 + \\frac{4L}{n}\\sum _i \\big (f^i(\\mu _t) - f^i(x^*)\\big )\\\\&= \\frac{2L^2}{n}\\sum _i \\big \\Vert X_t^i - \\mu _t\\big \\Vert ^2 + 4L\\big [f(\\mu _t) - f(x^*) \\big ]\\end{split}$ $\\begin{split}R_2 &= \\Big \\langle \\mu _t-x^*, \\frac{1}{n} \\sum _i \\nabla f^i(X_t^i) \\Big \\rangle = \\frac{1}{n} \\sum _i \\Big \\langle \\mu _t-X_t^i+X_t^i-x^*, \\nabla f_t^i(X_t^i) \\Big \\rangle \\\\&= \\frac{1}{n} \\sum _i \\Big [\\Big \\langle \\mu _t-X_t^i, \\nabla f^i(X_t^i) \\Big \\rangle + \\Big \\langle X_t^i-x^*, \\nabla f^i(X_t^i) \\Big \\rangle \\Big ]\\end{split}$ Using L-smoothness property (Assumption REF ) with $y=X_t^i$ and $x=x^*$ we have $\\Big \\langle \\mu _t-X_t^i, \\nabla f^i(X_t^i) \\Big \\rangle \\ge f^i(\\mu _t) - f^i(X_t^i) - \\frac{L}{2}\\Vert \\nabla f^i(\\mu _t) - \\nabla f^i(X_t^i) \\Vert ^2.$ Additionally, we use the $\\ell $ -strong convexity (Assumption REF ), to get $\\Big \\langle X_t^i-x^*, \\nabla f^i(X_t^i) \\Big \\rangle \\ge (f^i(X_t^i) - f^i(x^*)) + \\frac{\\ell }{2}\\Vert X_t^i-x^*\\Vert ^2.$ Now by plugging (REF ) and (REF ) in inequality (REF ) we get that $\\begin{split}R_2 &\\ge \\frac{1}{n} \\sum _i \\Big [ f^i(\\mu _t) - f^i(X_t^i) - \\frac{L}{2}\\Vert \\nabla f^i(\\mu _t) - \\nabla f^i(X_t^i) \\Vert ^2 + f^i(X_t^i) - f^i(x^*) + \\frac{\\ell }{2}\\Vert X_t^i-x^*\\Vert ^2 \\Big ]\\\\&= \\big [f(\\mu _t)-f(x^*)] - \\frac{L}{2n}\\sum _i \\Vert X_t^i - \\mu _t\\Vert ^2 + \\frac{\\ell }{2n}\\sum _i\\Vert X_t^i - x^*\\Vert ^2\\\\&\\ge \\big [f(\\mu _t)-f(x^*)] - \\frac{L+\\ell }{2n}\\sum _i \\big \\Vert X_t^i - \\mu _t\\big \\Vert ^2 + \\frac{\\ell }{4}\\Vert \\mu _t - x^*\\Vert ^2.\\end{split}$ Now we plug (REF ) and (REF ) back into (REF ) and take expectation into the account to get $\\begin{split}&\\Big \\Vert \\mu _t-x^*- F^{\\prime }_t \\Big \\Vert ^2 \\begin{aligned}[t]&\\le \\big \\Vert \\mu _t-x^*\\big \\Vert ^2 + 4\\frac{\\eta ^2}{n^2} \\Big ( \\frac{2L^2}{n}\\sum _i \\big \\Vert X_t^i - \\mu _t\\big \\Vert ^2 + 4L\\big [f(\\mu _t) - f(x^*) \\big ]\\Big )\\\\&-4\\frac{\\eta }{n} \\Big ( \\big [f(\\mu _t)-f(x^*)\\big ] - \\frac{L+\\ell }{2n}\\sum _i \\big \\Vert X_t^i - \\mu _t\\big \\Vert ^2 + \\frac{\\ell }{4}\\Vert \\mu _t - x^*\\Vert ^2 \\Big )\\end{aligned}\\\\&= (1-\\frac{\\ell \\eta }{n})\\Vert \\mu _t-x^*\\Vert ^2 - (4\\frac{\\eta }{n}-16L\\frac{\\eta ^2}{n^2})\\big [f(\\mu _t)-f(x^*)\\big ] + \\big (2\\frac{L+\\ell }{n}\\eta + 8\\frac{L^2}{n^2}\\eta ^2\\big )[\\Gamma _t].\\end{split}$ For the second one we have that: $ \\begin{split}&_t \\Big \\Vert F^{\\prime }_t-F_t \\Big \\Vert ^2 = \\frac{1}{n(n-1)}\\sum _i\\sum _{i \\ne j} _t\\Big \\Vert \\frac{2\\eta }{n^2}\\sum _r\\nabla f^r(X_t^r) - \\frac{\\eta }{n}(G^i(X_t^i) + G^j(X_t^j))\\Big \\Vert ^2\\\\&\\le \\frac{4\\eta ^2}{n^3}\\sum _i _t\\Big \\Vert 
\\frac{1}{n}\\sum _r\\nabla f^r(X_t^r) - G^i(X_t^i)\\Big \\Vert ^2 = \\frac{4\\eta ^2}{n^3}\\sum _i _t\\Big \\Vert \\frac{1}{n}\\sum _r\\nabla f^r(X_t^r) - \\nabla f^i(X_t^i) + \\nabla f^i(X_t^i) - G^i(X_t^i)\\Big \\Vert ^2\\\\&\\le \\frac{8\\eta ^2}{n^3}\\sum _i \\Big (_t\\Big \\Vert \\frac{1}{n}\\sum _r\\nabla f^r(X_t^r) - \\nabla f^i(X_t^i)\\Big \\Vert ^2 + _t[(\\sigma _t^i)^2] \\Big ) \\\\ &\\le \\frac{8\\eta ^2}{n^3}\\Big (\\sum _i _t\\Big \\Vert \\frac{1}{n-1}\\sum _{r \\ne i} [\\nabla f^r(X_t^r) - \\nabla f^i(X_t^i)]\\Big \\Vert ^2 + _t[(\\sigma _t^i)^2] \\Big )\\\\&\\le \\frac{8\\eta ^2}{n^3(n-1)} \\sum _i \\sum _{r \\ne i}_t\\Big \\Vert \\nabla f^r(X_t^r) - \\nabla f^i(X_t^i)\\Big \\Vert ^2 + \\frac{8\\eta ^2}{n^2}_t[(\\bar{\\sigma }_t)^2] \\\\&\\le \\frac{8\\eta ^2}{n^3(n-1)} \\sum _i \\sum _{r \\ne i}_t\\Big \\Vert \\begin{aligned}[t] &[\\nabla f^r(X_t^r) - \\nabla f^r(\\mu _t)] + [\\nabla f^r(\\mu _t) - \\nabla f(\\mu _t)] \\\\ &+ [\\nabla f(\\mu _t) - \\nabla f^i(\\mu _t)] + [\\nabla f^i(\\mu _t) - \\nabla f^i(X_t^i)]\\Big \\Vert ^2 + \\frac{8\\eta ^2}{n^2}_t[(\\bar{\\sigma }_t)^2] \\end{aligned}\\\\&\\le \\frac{8\\eta ^2}{n^3(n-1)} \\sum _i 8(n-1) \\Big (_t\\Big \\Vert \\nabla f^i(X_t^i) - \\nabla f^i(\\mu _t) \\Big \\Vert ^2+ _t\\Big \\Vert \\nabla f^i(\\mu _t) - \\nabla f(\\mu _t)\\Big \\Vert ^2\\Big ) + \\frac{8\\eta ^2}{n^2}_t[(\\bar{\\sigma }_t)^2]\\\\&\\le \\frac{64\\eta ^2}{n^3} \\sum _i _t\\Big \\Vert \\nabla f^i(X_t^i) - \\nabla f^i(\\mu _t) \\Big \\Vert ^2 + \\frac{64\\eta ^2}{n^3} \\sum _i _t\\Big \\Vert \\nabla f^i(\\mu _t) - \\nabla f(\\mu _t)\\Big \\Vert ^2 + \\frac{8\\eta ^2}{n^2}_t[(\\bar{\\sigma }_t)^2]\\\\&\\le \\frac{64L^2\\eta ^2}{n^3} \\sum _i _t\\Vert X_t^i - \\mu _t\\Vert ^2 + \\frac{64\\eta ^2(\\varsigma _0^2 n_0+\\varsigma _1^2 n_1)}{n^3}+ \\frac{8\\eta ^2}{n^2}_t[(\\bar{\\sigma }_t)^2] \\\\ &= \\frac{64L^2\\eta ^2}{n^2} _t[\\Gamma _t] + \\frac{64\\eta ^2(\\varsigma _0^2 n_0+\\varsigma _1^2 n_1)}{n^3}+ \\frac{8\\eta ^2}{n^2}_t[(\\bar{\\sigma }_t)^2].\\\\\\end{split}$ Next, we remove conditioning and use Lemma REF to get $\\begin{split}\\Big \\Vert F^{\\prime }_t-F_t \\Big \\Vert ^2 &=\\Bigg [_t \\Big \\Vert F^{\\prime }_t-F_t \\Big \\Vert ^2\\Bigg ] \\le \\frac{64L^2\\eta ^2}{n^2} [\\Gamma _t] + \\frac{64\\eta ^2(\\varsigma _0^2 n_0+\\varsigma _1^2 n_1)}{n^3} + \\frac{8\\eta ^2}{n^2}[(\\bar{\\sigma }_t)]^2 \\\\&\\le \\frac{64L^2\\eta ^2}{n^2} [\\Gamma _t] + \\frac{64\\eta ^2(\\varsigma _0^2 n_0+\\varsigma _1^2 n_1)}{n^3} \\\\&\\quad \\quad \\quad \\quad \\quad + \\frac{8\\eta ^2}{n^2}\\Bigg (12(d+4)L^2 [\\Gamma _t] + \\frac{12n_0(d+4)\\varsigma _0^2+3n_1 \\varsigma _1^2}{n}+6(4d+17)L(f(\\mu _t) - f(x^*)) \\\\&\\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad \\quad +\\frac{4(d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2}{n} + \\eta ^2 \\frac{3 n_0}{2nc^2}L^2(d+6)^3 \\Bigg ) \\\\ &=\\frac{\\eta ^2L^2(96d+448)}{n^2} [\\Gamma _t] + \\frac{\\eta ^2((96d+448)\\varsigma _0^2 n_0+88\\varsigma _1^2 n_1)}{n^3} \\\\&\\quad \\quad \\quad + \\frac{48\\eta ^2(4d+17)L[f(\\mu _t) - f(x^*)]}{n^2} +\\frac{8\\eta ^2((d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^3} + \\frac{12\\eta ^4 n_0 L^2 (d+6)^3}{n^3c^2}.\\end{split}$ Now consider the last one.", "We have: $\\begin{split}_t \\Big \\langle \\mu _t-x^*- F^{\\prime }_t, F^{\\prime }_t-F_t \\Big \\rangle & =\\Big \\langle \\mu _{t} - F^{\\prime }_t - x^*, _t(F^{\\prime }_t - F_t) \\Big \\rangle \\\\& \\overset{\\text{Cauchy-Schwarz}}{\\le } \\Vert \\mu _{t} - x^* - F^{\\prime 
}_t\\Vert \\cdot \\big \\Vert _t(F^{\\prime }_t-F_t)\\big \\Vert \\\\& \\le \\Big [\\Vert \\mu _{t} - x^* \\Vert + \\Vert F^{\\prime }_t\\Vert \\Big ] \\cdot \\underbrace{\\big \\Vert _t(F^{\\prime }_t-F_t)\\big \\Vert }_{R_3:=}.\\end{split}$ $\\begin{split}R_3&=\\big \\Vert _t(F^{\\prime }_t-F_t)\\big \\Vert = \\big \\Vert \\frac{2\\eta }{n^2}\\sum _i\\nabla f^i(X_t^i) - \\frac{2\\eta }{n^2} _t\\big [G^i(X_t^i)\\big ]\\big \\Vert \\\\& \\le \\frac{2\\eta }{n^2} \\sum _i \\big \\Vert \\nabla f^i(X_t^i) - _t\\big [G^i(X_t^i)\\big ]\\big \\Vert \\le \\frac{2\\eta ^2}{n^2} \\sum _i b_i = \\frac{2\\eta ^2}{n}B\\end{split}$ By using the inequality above in (REF ) and taking expectation from both sides we get $\\begin{split}&\\Big \\langle \\mu _t-x^* - F^{\\prime }_t, F^{\\prime }_t-F_t \\Big \\rangle = _{X^1_t,X^2_t,...,X^n_t} \\bigg [ _t \\Big \\langle \\mu _t-x^* - F^{\\prime }_t, F^{\\prime }_t-F_t \\Big \\rangle \\bigg ]\\\\&\\le \\frac{2\\eta ^2}{n}B \\big ( \\Vert \\mu _{t} - x^* \\Vert + \\Vert F^{\\prime }_t\\Vert \\big ) \\overset{(*)}{\\le } 2\\frac{\\eta ^2}{n}B\\big ( \\Vert \\mu _{t}-x^*\\Vert ^2 + \\tfrac{1}{4} + \\Vert F^{\\prime }_t\\Vert ^2 + \\tfrac{1}{4} \\big )\\\\&\\overset{(\\ref {eq:R1})}{\\le } 2\\frac{\\eta ^2}{n}B\\big ( \\Vert \\mu _{t}-x^*\\Vert ^2 + \\frac{4\\eta ^2}{n^2}\\Big \\Vert \\frac{1}{n}\\sum _i \\nabla f^i(X_t^i)\\Big \\Vert ^2 + 0.5 \\big )\\\\&\\le 2\\frac{\\eta ^2}{n}B\\big ( \\Vert \\mu _{t}-x^*\\Vert ^2 + \\frac{4\\eta ^2}{n^2}\\Big [\\frac{2L^2}{n}\\sum _i \\big \\Vert X_t^i - \\mu _t\\big \\Vert ^2 + 4L\\big [f(\\mu _t) - f(x^*) \\big ]\\Big ] + 0.5 \\big )\\\\&\\le \\eta ^2 \\frac{2B}{n}\\Vert \\mu _{t}-x^*\\Vert ^2 + \\eta ^4 \\frac{16BL^2}{n^3}[\\Gamma _t] + \\eta ^4 \\frac{32BL}{n^3}\\big [f(\\mu _t) - f(x^*) \\big ] + \\eta ^2 \\frac{B}{n}.\\end{split}$ To get the (*) inequality, we first used Young's inequality twice with $\\alpha = \\frac{1}{2}$ , to get that $\\Vert \\mu _{t} - x^* \\Vert + \\Vert F^{\\prime }_t\\Vert \\le (\\Vert \\mu _{t} - x^* \\Vert )^2 + \\frac{1}{4} + (\\Vert F^{\\prime }_t\\Vert )^2+\\frac{1}{4}$ and then applied Jensen's inequality to get $(\\Vert \\mu _{t} - x^* \\Vert )^2 + \\frac{1}{4} + (\\Vert F^{\\prime }_t\\Vert )^2+\\frac{1}{4}\\le \\Vert \\mu _{t} - x^* \\Vert ^2 + \\frac{1}{4} + \\Vert F^{\\prime }_t\\Vert ^2+\\frac{1}{4}.$ Then following by (REF ), (REF ) and (REF ), the latter inequality () would be: $\\Big \\Vert \\mu _{t+1} -x^* \\Big \\Vert ^2 &=\\Big \\Vert \\mu _t-x^*-F^{\\prime }_t \\Big \\Vert ^2 + _{X^1_t,X^2_t,...,X^n_t} \\Big [ _t \\Vert F^{\\prime }_t-F_t \\Vert ^2\\Big ]\\\\&+ 2 _{X^1_t,X^2_t,...,X^n_t} \\Big [ _t \\Big \\langle \\mu _t-x^*- F^{\\prime }_t, F^{\\prime }_t-F_t \\Big \\rangle \\Big ]\\\\\\le &(1-\\frac{\\ell \\eta }{n})\\Vert \\mu _t-x^*\\Vert ^2 - (4\\frac{\\eta }{n}-16L\\frac{\\eta ^2}{n^2})\\big [f(\\mu _t)-f(x^*)\\big ] + \\big (2\\frac{L+\\ell }{n}\\eta + 8\\frac{L^2}{n^2}\\eta ^2\\big )[\\Gamma _t]\\\\&\\quad +\\frac{\\eta ^2L^2(96d+448)}{n^2} [\\Gamma _t] + \\frac{\\eta ^2((96d+448)\\varsigma _0^2 n_0+88\\varsigma _1^2 n_1)}{n^3} \\\\&\\quad + \\frac{48\\eta ^2(4d+17)L[f(\\mu _t) - f(x^*)]}{n^2} +\\frac{8\\eta ^2((d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^3} + \\frac{12\\eta ^4 n_0 L^2 (d+6)^3}{n^3c^2}.\\\\&\\quad + \\eta ^2 \\frac{4B}{n}\\Vert \\mu _{t}-x^*\\Vert ^2 + \\eta ^4 \\frac{32BL^2}{n^3}[\\Gamma _t] + \\eta ^4 \\frac{64BL}{n^3}\\big [f(\\mu _t) - f(x^*) \\big ] + \\eta ^2 \\frac{2B}{n}.$ Hence: $&\\Big \\Vert \\mu _{t+1} -x^* \\Big \\Vert ^2 
\\le (1-\\frac{\\ell \\eta }{n} + \\eta ^2 \\frac{4B}{n})\\Vert \\mu _t-x^*\\Vert ^2 - (4\\frac{\\eta }{n} - \\eta ^2\\frac{16L(12d+52)}{n^2} - \\eta ^4 \\frac{64BL}{n^3})\\big [f(\\mu _t)-f(x^*)\\big ]\\\\&+ \\left( 2\\frac{L+\\ell }{n}\\eta + \\eta ^2L^2 \\frac{96d+456}{n^2} + \\eta ^4 \\frac{32BL^2}{n^3}\\right)[\\Gamma _t]\\\\&+ \\frac{\\eta ^2((96d+448)\\varsigma _0^2 n_0+88\\varsigma _1^2 n_1)}{n^3}+\\frac{8\\eta ^2((d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^3}+\\frac{12\\eta ^4 n_0 L^2 (d+6)^3}{n^3c^2}+\\eta ^2 \\frac{2B}{n}.$ We get the proof of the Lemma by plugging $\\eta \\le \\frac{\\sqrt{\\ell cn}}{2\\sqrt{Ln_0}(d+3)^\\frac{3}{4}}$ and $B \\le \\frac{\\eta n_0}{2cn}L(d+3)^{\\frac{3}{2}}$ (Lemma REF ) in the above inequality.", "* Let $a_t^2:=\\Vert \\mu _t-x^*\\Vert ^2$ , $e_t:=\\big [f(\\mu _t)-f(x^*)\\big ]$ and $C_1&:=4\\frac{\\eta }{n} - \\eta ^2\\frac{16L(12d+52)}{n^2} - \\eta ^4 \\frac{64BL}{n^3} \\ge 4\\frac{\\eta }{n} - \\eta ^2\\frac{16L(12d+52)}{n^2} - \\eta ^5 \\frac{32n_0L^2(d+3)^{\\frac{3}{2}}}{d^{\\frac{1}{2}}n^4},\\\\C_2&:= 2\\frac{L+\\ell }{n}\\eta + \\eta ^2L^2 \\frac{96d+456}{n^2} + \\eta ^4 \\frac{32BL^2}{n^3} \\le 2\\frac{L+\\ell }{n}\\eta + \\eta ^2L^2 \\frac{96d+456}{n^2} + \\eta ^5 \\frac{16n_0L^3(d+3)^{\\frac{3}{2}}}{d^{\\frac{1}{2}} n^4} \\\\C_3&:=\\frac{\\eta ^2((96d+448)\\varsigma _0^2 n_0+88\\varsigma _1^2 n_1)}{n^3}+\\frac{8\\eta ^2((d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^3}+\\frac{12\\eta ^4 n_0 L^2 (d+6)^3}{n^3c^2}+\\frac{2B \\eta ^2}{n} \\\\ &\\le \\frac{\\eta ^2((96d+448)\\varsigma _0^2 n_0+88\\varsigma _1^2 n_1)}{n^3}+\\frac{8\\eta ^2((d+4)\\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n^3}+\\frac{12\\eta ^4 n_0 L^2 (d+6)^3}{n^3d}+\\frac{L (d+3)^{\\frac{3}{2}}n_0 \\eta ^3}{d^{\\frac{1}{2}} n^2}.$ Where, in the above inequalities, we used Lemma REF .", "Therefore, the recursion from Lemma REF can be written as $a_{t}^2 \\le (1-\\frac{\\ell \\eta }{2n}) a_{t-1}^2 - C_1 e_{t-1} + C_2 [\\Gamma _{t-1}] + C_3.$ Then, we multiply the above recursion by $w_t = (1-\\frac{\\eta \\ell }{2n})^{-t}$ and $w_t a_{t}^2 \\le w_t\\Big ((1-\\frac{\\ell \\eta }{2n}) a_{t-1}^2 - C_1 e_{t-1} + C_2 [\\Gamma _{t-1}] + C_3\\Big ) =w_{t-1} a_{t-1}^2-w_t C_1 e_{t-1} + w_t C_2 [\\Gamma _{t-1}] + w_t C_3.$ By summing the above inequality for $t \\in \\lbrace 1, 2, ..., T\\rbrace $ and cancelling and rearrange terms we get: $w_T a_T^2 \\le w_0 a_0^2 - C_1 \\sum _{t=1}^T w_t e_{t-1} + C_2 \\sum _{t=1}^T w_t [\\Gamma _{t-1}] + C_3 \\sum _{t=1}^T w_t.$ By using Lemma REF in the inequality above we get that $\\begin{split}w_T &a_T^2 \\le w_0 a_0^2 \\begin{aligned}[t]&-C_1 \\sum _{t=1}^T w_t e_{t-1} + 120 C_2 \\eta ^2(2d+9)L \\sum _{t=1}^{T-1} w_{t} [f(\\mu _{t-1})-f(x^*)] \\\\&\\hspace{-2.84526pt}+ C_2 \\Big (\\frac{60\\eta ^2(2(d+4)n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n}+\\frac{20\\eta ^2(2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n} + \\frac{10\\eta ^4 n_0 L^2(d+6)^3}{nc^2} \\Big ) \\sum _{t=1}^{T-1} w_{t} \\\\ &\\hspace{-2.84526pt} + C_3 \\sum _{t=1}^T w_t\\end{aligned}\\\\\\le & w_0 a_0^2 \\begin{aligned}[t]&-\\underbrace{(C_1 - 120 C_2 \\eta ^2(2d+9)L)}_{D_1:=} \\sum _{t=1}^T w_t e_{t-1}\\\\&\\hspace{-17.07164pt} + \\underbrace{\\bigg (C_2 \\Big (\\frac{60\\eta ^2(2(d+4)n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n}+\\frac{20\\eta ^2(2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n} + \\frac{10\\eta ^4 n_0 L^2(d+6)^3}{nc^2} \\Big ) + C_3\\bigg )}_{D_2:=} \\sum _{t=1}^T w_t.", 
"\\end{aligned}\\end{split}$ Let $S_T:=\\sum _{t=1}^T w_t$ and $y_T:= \\sum _{t=1}^T \\frac{w_t \\mu _{t-1}}{S_T}$ , then by convexity of $f$ we have that $[f(y_T) - f(x^*)] \\le \\frac{1}{S_T}\\sum _{t=1}^T w_t e_{t-1}.$ By choosing small enough $\\eta $ , so that have $D_1 \\ge 0$ , we can combine inequalities (REF ) and (REF ) to get $\\begin{split}[f(y_T) - f(x^*)]+\\frac{w_T a_T^2}{S_T D_1} \\le \\frac{w_0 a_0^2}{S_T D_1} + \\frac{D_2}{D_1}\\end{split}$ Our first goal is to lower bound $D_1$ , we aim to choose upper bound on $\\eta $ so that $D_1=\\Omega (\\frac{\\eta }{n})$ .", "For this it will be enough to set $\\eta = O(\\frac{1}{d (L+\\ell +1)})$ .", "Since $C_1 \\ge 4\\frac{\\eta }{n} - \\eta ^2\\frac{16L(12d+52)}{n^2} - \\eta ^4 \\frac{32n_0L^2(d+3)^{\\frac{3}{2}}}{cn^4}$ , we have $C_1 = \\Omega (\\frac{\\eta }{n})$ .", "Plus, $C_2=O(1/n)$ and hence $120 C_2 \\eta ^2(2d+9)L = O(\\frac{\\eta }{n})$ , thus $D_1=\\Omega (\\frac{\\eta }{n})$ , as desired.", "Also: $S_t = \\sum _{t=1}^T w_t = \\sum _{t=1}^T (1-\\frac{\\eta \\ell }{2n})^{-t} \\ge (1-\\frac{\\eta \\ell }{2n})^{-T} \\ge e^{\\frac{\\eta \\ell T}{2n}}.$ By setting $\\eta = \\frac{4n\\log (T)}{T \\ell }$ , we get $S_t \\ge T^2$ .", "Therefore, we have $ \\frac{w_0 a_0^2}{S_T D_1} = O \\left( \\frac{w_0 a_0^2 \\ell }{T\\log (T)} \\right)=O \\left( \\frac{L \\Vert \\mu _0-x^*\\Vert ^2}{T\\log (T)} \\right).$ Next, we upper bound $D_2$ .", "Since $\\eta =O(\\frac{1}{d(L+\\ell +1)})$ we get that $C_2=O((L+\\ell )\\eta /n)$ .", "Additionally, $\\frac{60\\eta ^2(2(d+4)n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n}&+\\frac{20\\eta ^2(2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n} + \\frac{10\\eta ^4 n_0 L^2(d+6)^3}{n d} \\\\&= O\\Bigg (\\frac{\\eta ^2(d n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n}+\\frac{\\eta ^2(d n_0\\sigma _0^2+n_1 \\sigma _1^2)}{n} + \\frac{\\eta ^2 n_0}{n}\\Bigg ).$ By using $\\eta = O(\\frac{1}{(L+\\ell +1)n})$ in the equation above, we get $C_2\\Bigg (\\frac{60\\eta ^2(2(d+4)n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n}&+\\frac{20\\eta ^2(2(d+4) \\sum _{i \\in N_0} s_i^2+\\sum _{i \\in N_1} s_i^2)}{n} + \\frac{10\\eta ^4 n_0 L^2(d+6)^3}{n d}\\Bigg ) \\\\&= O\\Bigg (\\frac{\\eta ^2(d n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n^3}+\\frac{\\eta ^2(d n_0\\sigma _0^2+n_1 \\sigma _1^2)}{n^3} + \\frac{\\eta ^2 n_0}{n^3}\\Bigg ).$ Finally, we have that $C_3 = O\\Bigg (\\frac{\\eta ^2(d n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n^3}+\\frac{\\eta ^2(d n_0\\sigma _0^2+n_1 \\sigma _1^2)}{n^3} + \\frac{\\eta ^2 n_0}{n^3}+\\frac{\\eta ^3 d L n_0}{n^2}\\Bigg ).$ If we put together the above inequalities we get that $D_2&=O\\Bigg (\\frac{\\eta ^2(d n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n^3}+\\frac{\\eta ^2(d n_0\\sigma _0^2+n_1 \\sigma _1^2)}{n^3} +\\frac{\\eta ^3 L d n_0}{n^2}\\Bigg ).$ By plugging $D_1=\\Omega (\\frac{\\eta }{n})$ and $\\eta = \\frac{4n\\log (T)}{T \\ell }$ , in the equation above we get that $\\frac{D_2}{D_1} &= O\\Bigg (\\frac{\\eta (d n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{n^2}+\\frac{\\eta (d n_0\\sigma _0^2+n_1 \\sigma _1^2)}{n^2} +\\frac{\\eta ^2 L d n_0}{n}\\Bigg )\\\\ &= O\\Bigg (\\frac{\\log (T)(d n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{T \\ell n}+\\frac{\\log (T)(d n_0\\sigma _0^2+n_1 \\sigma _1^2)}{T \\ell n} +\\frac{\\log (T) d n_0}{T \\ell n}\\Bigg ).$ We also have that $D_1 \\le C_1 \\le \\frac{4\\eta }{n}$ , and $\\frac{w_T}{S_T}=\\frac{(1-\\frac{\\eta \\ell }{2n})^{-T}}{\\sum _{t=1}^T (1-\\frac{\\eta \\ell }{2n})^{-t}} \\ge (1-\\frac{\\eta \\ell }{2n}) \\Big 
(1-\\frac{\\eta \\ell }{2n})^{-1}-1 \\Big )=\\frac{\\eta \\ell }{2n}.$ Hence, $\\frac{w_T}{S_T D_1} \\ge \\frac{\\ell }{8}.$ By putting together the above inequalities we get the final convergence bound: $\\begin{split}[f(y_T) - f(x^*)]&+\\frac{\\ell \\Vert \\mu _{T}-x^*\\Vert ^2}{8}=[f(y_T) - f(x^*)]+\\frac{\\ell a_T^2}{8} \\le [f(y_T) - f(x^*)]+\\frac{w_T a_T^2}{S_T D_1} \\le \\frac{w_0 a_0^2}{S_T D_1} + \\frac{D_2}{D_1} \\\\&=O\\Bigg (\\frac{L \\Vert \\mu _0-x^*\\Vert ^2}{T\\log (T)} +\\frac{\\log (T)(d n_0\\varsigma _0^2+n_1 \\varsigma _1^2)}{T \\ell n}+\\frac{\\log (T)(d n_0\\sigma _0^2+n_1 \\sigma _1^2)}{T \\ell n} +\\frac{\\log (T) d n_0}{T \\ell n}\\Bigg ).\\end{split}$ Finally, we need to gather all the upper bounds on $\\eta $ and compute a lower bound on $T$ so that these upper bounds are satisfied.", "We need $\\eta =O(\\frac{1}{d (L+\\ell +1)})$ and $\\eta =O(\\frac{1}{n (L+\\ell +1)})$ in the proof of this theorem; the lemmas we used require $\\eta \\le \\frac{1}{10\\ell }$ and $\\eta \\le \\frac{\\sqrt{\\ell c n}}{2\\sqrt{Ln_0}(d+3)^\\frac{3}{4}}=O\\Big (\\frac{\\ell }{Ld}\\Big ).$ Considering $\\ell \\le L$ , we get that we need $\\eta =O\\Big (\\frac{1}{(d+n)(L+1)(\\frac{1}{\\ell }+1)}\\Big )$ .", "Thus, we need $\\frac{T}{\\log (T)} = \\Omega (\\frac{n(d+n)(L+1)(\\frac{1}{\\ell }+1)}{\\ell }).$" ] ]
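As a quick numerical sanity check of the step $S_T=\sum_{t=1}^T w_t \ge T^2$ used in the proof above, the following short Python snippet evaluates both sides; with $\eta =\frac{4n\log (T)}{T\ell }$ the ratio $\frac{\eta \ell }{2n}$ simplifies to $\frac{2\log (T)}{T}$ independently of $n$ and $\ell $, and the values of $T$ below are arbitrary illustrative choices.

```python
import math

def S_T(T):
    # w_t = (1 - eta*ell/(2n))^(-t); with eta = 4n*log(T)/(T*ell) this ratio
    # equals 2*log(T)/T, so S_T depends only on T.
    x = 2.0 * math.log(T) / T
    return sum((1.0 - x) ** (-t) for t in range(1, T + 1))

for T in [100, 1000, 10000]:
    s = S_T(T)
    print(f"T={T:6d}  S_T={s:.3e}  T^2={T**2:.3e}  S_T >= T^2: {s >= T ** 2}")
```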
2210.07703
[ [ "A polynomial-time approximation scheme for the maximal overlap of two\n independent Erd\\H{o}s-R\\'enyi graphs" ], [ "Abstract For two independent Erd\\H{o}s-R\\'enyi graphs $\\mathbf G(n,p)$, we study the maximal overlap (i.e., the number of common edges) of these two graphs over all possible vertex correspondence.", "We present a polynomial-time algorithm which finds a vertex correspondence whose overlap approximates the maximal overlap up to a multiplicative factor that is arbitrarily close to 1.", "As a by-product, we prove that the maximal overlap is asymptotically $\\frac{n}{2\\alpha-1}$ for $p=n^{-\\alpha}$ with some constant $\\alpha\\in (1/2,1)$." ], [ "Introduction", "In this paper we study the random optimization problem of maximizing the overlap for two independent Erdős-Rényi graphs over all possible vertex correspondence.", "More precisely, we fix two vertex sets $V=\\lbrace v_1,\\dots , v_n\\rbrace ,\\mathsf {V}=\\lbrace \\mathsf {v}_1,\\dots ,\\mathsf {v}_n\\rbrace $ with edge sets $E_0=\\lbrace (v_i,v_j):1\\le i<j\\le n\\rbrace ,\\ \\mathsf {E}_0=\\lbrace (\\mathsf {v}_i,\\mathsf {v}_j):1\\le i<j\\le n\\rbrace \\,.$ For a parameter $p\\in (0,1)$ , we consider two Erdős-Rényi random graphs $ G = (V,E)$ and $\\mathsf {G}= (\\mathsf {V},\\mathsf {E})$ where $E \\subset E_0$ and $\\mathsf {E}\\subset \\mathsf {E}_0$ are obtained from keeping each edge in $E_0$ and $\\mathsf {E}_0$ (respectively) with probability $p$ independently.", "For convenience, we abuse the notation and denote by $G$ and $\\mathsf {G}$ the adjacency matrices for the two graphs respectively.", "That is to say, $G_{i, j} = 1$ if and only if $(v_i, v_j)\\in E$ and $\\mathsf {G}_{i, j} = 1$ if and only if $(\\mathsf {v}_i, \\mathsf {v}_j) = 1$ for all $1\\le i<j\\le n$ .", "Let $S_n$ be the collection of all permutations on $[n]=\\lbrace 1, \\ldots , n\\rbrace $ .", "For $\\pi \\in S_n$ , define $\\mathrm {Overlap}(\\pi )\\stackrel{\\text{def}}{=}\\sum _{1\\le i<j\\le n} G_{i,j}\\mathsf {G}_{\\pi (i),\\pi (j)}$ to be the number of edges $(v_i, v_j)\\in E$ with $(\\mathsf {v}_{\\pi (i)}, \\mathsf {v}_{\\pi (j)})\\in \\mathsf {E}$ .", "Our main result is a polynomial-time approximation scheme (PTAS) for this optimization problem.", "Theorem 1.1 Assume $p=n^{-\\alpha }$ for some $\\alpha \\in (1/2,1)$ .", "For any fixed $\\varepsilon >0$ , there exist a constant $C=C(\\varepsilon )$ and an algorithm (see Algorithm REF ) with running time $O(n^C)$ which takes $G,\\mathsf {G}$ as input and outputs a permutation $\\pi ^*\\in S_n$ , such that $\\mathbb {P}\\left[\\frac{\\mathrm {Overlap}(\\pi ^*)}{n}\\ge \\frac{1-\\varepsilon }{2\\alpha -1}\\right]=1-o(1),\\quad \\text{as }n\\rightarrow \\infty .$ In addition, $\\max _{\\pi \\in S_n} \\frac{\\mathrm {Overlap}(\\pi )}{n}$ converges to $\\frac{1}{2\\alpha - 1}$ in probability as $n\\rightarrow \\infty $ .", "Remark 1.2 For $\\alpha \\in (0, 1/2)$ , a straightforward computation yields that with probability tending to 1 as $n\\rightarrow \\infty $ , the overlap is asymptotic to $\\frac{n^2 p^2}{2}$ for all $\\pi \\in S_n$ .", "Thus, this regime is trivial if one's goal is to approximate the maximal overlap in the asymptotic sense.", "For the “critical” case when $\\alpha $ is near $1/2$ , the problem seems more delicate and our current method falls short in approximating the maximal overlap asymptotically.", "Remark 1.3 One can find Hamiltonian cycles in both graphs with an efficient algorithm (see [35] as well as references therein) and obviously this leads to 
an overlap with $n$ edges.", "Also, a simple first moment computation as in the proof of Proposition REF shows that the maximal overlap is of order $n$ .", "Therefore, the whole challenge in this paper is to nail down the asymptotic constant $1/(2\\alpha - 1)$ .", "Remark 1.4 Theorem REF provides an algorithmic lower bound on the maximal overlap which is asymptotically sharp (i.e., the correct order with the correct leading constant), and we emphasize that prior to our work the asymptotic maximal overlap was not known even from a non-constructive perspective.", "In fact, the problem of the asymptotic maximal overlap was proposed by Yihong Wu and Jiaming Xu as an intermediate step in analyzing the hardness of matching correlated random graphs, and Yihong Wu and Jiaming Xu have discussed this problem extensively with us.", "It is fair to say that we have come up with a fairly convincing roadmap using the second moment method, but the technical obstacles seem at least somewhat daunting, and we did not manage to complete the proof.", "The current paper was initiated with the original goal of demonstrating an information-computation gap for this random optimization problem (as we believed back then), but it turned out to evolve in a somewhat unexpected way: not only can the maximal overlap be approximated by a polynomial-time algorithm, but it also seems (as of now) that this may even be the more tractable approach even if one's goal is just to derive the asymptotic maximal overlap.", "Finally, the very stimulating discussions we had with Yihong Wu and Jiaming Xu, while not explicitly used in the current paper, have been hugely inspiring to us.", "We record this short “story” here as, on the one hand, it may be somewhat enlightening to the reader, and, on the other hand, we would like to take this opportunity to thank Yihong Wu and Jiaming Xu in the warmest terms."
], [ "Background and related results", "Our motivation is twofold: on the one hand, we wish to accumulate insights for understanding the computational phase transition for matching two correlated random graphs; on the other hand, as mentioned in Remark REF we wished to study the computational transition for the random optimization problem of the maximal overlap, which is a natural and important combinatorial optimization problem known to be NP-hard to solve in the worst-case scenario.", "We will further elaborate both of these two points in what follows.", "Recently, there has been extensive study on the problem of matching the vertex correspondence between two correlated graphs and the closely related problem of detecting the correlation between two graphs.", "From the applied perspective, some versions of graph matching problems have emerged from various applied fields such as social network analysis [33], [34], computer vision [6], [4], computational biology [39], [42] and natural language processing [21].", "From the theoretical perspective, graph matching problems seem to provide another important set of examples with the intriguing information-computation gap.", "The study of information-computation gaps for high-dimensional statistical problems as well as random optimization problems is currently an active and challenging topic.", "For instance, there is a huge literature on the problem of recovering communities in stochastic block models (see the monograph [1] for an excellent account on this topic) together with some related progress on the optimization problem of extremal cuts for random graphs [11], [31]; there is also a huge literature on the hidden clique problem and the closely related problem of submatrix detection (see e.g.", "[43] for a survey and see e.g.", "[19], [26] and references therein for a more or less up-to-date review).", "In addition, it is worth mentioning that in the last few decades various framework has been proposed to provide evidence on computational hardness for such problems with random inputs; see surveys [47], [2], [37], [24], [18] and references therein.", "As for random graph matching problems, so far most of the theoretical study was carried out for Erdős-Rényi graph models since Erdős-Rényi graph is arguably the most canonical random graph model.", "Along this line, much progress has been made recently, including information-theoretic analysis [8], [7], [22], [45], [44], [12], [13] and proposals for various efficient algorithms [36], [46], [25], [23], [17], [38], [3], [14], [5], [9], [10], [32], [16], [20], [15], [27], [28].", "As of now, it seems fair to say that we are still relatively far away from being able to completely understand the phase transition for computational complexity of graph matching problems—we believe, and we are under the impression that experts on the topic also believe, that graph matching problems do exhibit information-computation gaps.", "Thus, (as a partial motivation for our study) it is plausible that understanding the computational aspects for maximizing the overlap should shed lights on the computational transition for graph matching problems.", "As hinted earlier, it is somewhat unexpected that a polynomial time approximation scheme for the maximal overlap problem exists while the random graph matching problem seems to exhibit an information-computation gap.", "As a side remark, before finding this approximation algorithm, we have tried to demonstrate the computational hardness near the information threshold via 
the overlap gap property, but computations suggested that this problem does not exhibit the overlap gap property.", "In addition, we point out that efficient algorithms have been discovered to approximate ground states for some spin glass problems [41], [30], which take advantage of the so-called full replica symmetry breaking property.", "From what we can tell, [41], [30] seem to be rare natural examples for which approximation algorithms were discovered for random instances whereas the worst-case problems are known to be hard to solve.", "In a way, our result contributes yet another example of this type, and possibly of a different nature since we do not see full replica symmetry breaking in our algorithm." ], [ "An overview of our method", "The main contribution of our paper is to propose and analyze the algorithm as in Theorem REF .", "The basic idea for our algorithm is very simple: roughly speaking, we wish to match vertices of two graphs sequentially such that the increment on the number of common edges (within matched vertices) per matched pair is $(2\alpha -1)^{-1}$ .", "An obvious issue is that $(2\alpha -1)^{-1}$ may not be an integer.", "Putting this aside for a moment, we first explain our idea in the simple case when $p=n^{-\frac{3}{4}+\delta }$ with some arbitrarily small $\delta >0$ .", "In this case, our goal is to construct some $\pi \in S_n$ such that $\mathrm {Overlap}(\pi )\ge (2-\varepsilon )n$ for an arbitrarily small $\varepsilon >0$ .", "Assume we have already determined $\pi (1),\ldots ,\pi (k)$ for some $k\in \left[\frac{\varepsilon }{3} n,\left(1-\frac{\varepsilon }{3}\right)n\right]$ (i.e., assume we have already matched $ v_1,\dots ,v_{k}\in V$ to $\mathsf {v}_{\pi (1)},\dots ,\mathsf {v}_{\pi (k)}\in \mathsf {V}$ ).", "We wish to find some $\ell \in [n]\setminus \lbrace \pi (1),\dots ,\pi (k)\rbrace $ such that $\sum _{1\le j\le k}G_{j,k+1}\mathsf {G}_{\pi (j), \ell }\ge 2\,.$ Provided that this is feasible, we may set $\pi (k+1)= \ell $ .", "Therefore, the crux is to show that (REF ) is feasible for most of the steps.", "Ignoring the correlations between different steps for now, we can then regard $\sum _{1\le j\le k} G_{j,k+1}\mathsf {G}_{\pi (j), \ell }$ as a binomial variable $\mathbf {B}(k, p^2)$ for each $\ell \in [n]\setminus \lbrace \pi (1),\dots ,\pi (k)\rbrace $ , and thus (REF ) holds with probability of order $(k p^2)^2 \gtrsim n^{\delta - 1}$ .", "Since there are at least $\varepsilon n$ potential choices for $\ell $ (again ignoring the potential correlations for now), it should hold with overwhelming probability that there exists at least one $\ell $ satisfying (REF ).", "This completes the heuristic underlying the success of such a simple iterative algorithm.", "The main technical contribution of this work is to address the two issues we have ignored so far: the integer issue and the correlations between iterations.", "In order to address the integer issue, it seems necessary to match $\chi $ vertices in each step with an increment of $\zeta $ common edges, for some suitably chosen $\chi , \zeta $ with $\zeta /\chi \approx (2\alpha - 1)^{-1}$ .", "Thus, one might think that a natural analogue of (REF ) should be the following: there exist $\ell _1, \ldots , \ell _{\chi } \in [n]\setminus \lbrace \pi (1),\dots ,\pi (k)\rbrace $ such that $\sum _{1\le j\le k}\sum _{1 \le i\le \chi } G_{j,k+i}\mathsf {G}_{\pi (j), \ell _i}\ge \zeta \,.$ However, (REF ) 
is not really feasible since, in order for (REF ) to hold, it is necessary that for some $i\in \lbrace 1,\dots ,\chi \rbrace $ , there exists $\ell \in [n]\setminus \lbrace \pi (1),\dots ,\pi (k)\rbrace $ such that $\sum _{1\le j\le k} G_{j,k+i}\mathsf {G}_{\pi (j), \ell } \ge \lceil (2\alpha - 1)^{-1} \rceil $ (where $\lceil x\rceil $ denotes the minimal integer that is at least $x$ ).", "A simple first moment computation suggests that the preceding requirement cannot be satisfied for most steps.", "In light of the above discussions, we see that in addition to common edges between the matched vertices and the “new” vertices, it will be important to also take advantage of common edges that are within “new” vertices, which requires us to also carefully choose a collection of vertices from $ V$ (otherwise, typically there will be no edges within a constant number of fixed vertices).", "In order to implement this, it turns out that we may carefully construct a rooted tree $\mathbf {T}$ with $\chi $ non-leaf vertices and $\zeta $ edges, and we wish that the collection of added common edges per step contains an isomorphic copy of $\mathbf {T}$ .", "To be more precise, let $\operatorname{M} = \lbrace j\in [n]: v_j \mbox{ has been matched}\rbrace $ and let $\pi (\operatorname{M}) = \lbrace \pi (j): j\in \operatorname{M}\rbrace $ , and in addition let $k$ be the minimal integer in $[n]\setminus \operatorname{M}$ .", "We then wish to strengthen (REF ) to the following: there exist $\xi = \zeta + 1 - \chi $ integers $i_1, \ldots , i_{\xi } \in \operatorname{M}$ , $\chi $ integers $j_1 = k, j_2, \ldots , j_{\chi } \in [n] \setminus \operatorname{M} $ and $\chi $ integers $\ell _1, \ldots , \ell _{\chi }\in [n]\setminus \lbrace \pi (1),\dots ,\pi (k)\rbrace $ such that the following hold: the subgraph of $G$ induced on $\lbrace v_{i_1}, \ldots , v_{i_{\xi }}\rbrace \cup \lbrace v_{j_1}, \ldots , v_{j_\chi }\rbrace $ contains $\mathbf {T}$ as a subgraph with leaf vertices $\lbrace v_{i_1}, \ldots , v_{i_{\xi }}\rbrace $ ; the subgraph of $\mathsf {G}$ induced on $\lbrace \mathsf {v}_{\pi (i_1)}, \ldots , \mathsf {v}_{\pi (i_{\xi })}\rbrace \cup \lbrace \mathsf {v}_{\ell _1}, \ldots , \mathsf {v}_{\ell _\chi }\rbrace $ contains $\mathbf {T}$ as a subgraph with leaf vertices $\lbrace \mathsf {v}_{\pi (i_1)}, \ldots , \mathsf {v}_{\pi (i_{\xi })}\rbrace $ .", "(In this case, we then set $\pi (j_i) = \ell _i$ for $1\le i\le \chi $ .)", "In the above, we require the non-leaf vertices of the isomorphic copies of $\mathbf {T}$ to be contained in “new” vertices in order to make sure that the edges we add per iteration are disjoint.", "To ensure the existence of isomorphic copies of $\mathbf {T}$ , we need to impose some carefully chosen balanced conditions on $\mathbf {T}$ ; see Lemma REF , (REF ) and (REF ) below.", "Figure: A single step in the iterative matching algorithm: the black vertices are matched ones, and the blue vertices are the ones matched in the current step (where a tree is added on both sides with leaves being black vertices).", "The existence of $\mathbf {T}$ , once appropriate balanced conditions are imposed, is not that hard to show in light of [3], [28].", "However, the major challenge is that we still need to treat the correlations between different iterative steps.", "This is indeed fairly delicate, and most of the difficult arguments in this paper (see Section REF ) are devoted 
to addressing this challenge.", "Finally, we point out that in order to approximate $(2\alpha - 1)^{-1}$ to high precision, we will have to choose the integers $\chi , \zeta $ fairly large.", "Since our algorithm is required to check all $\chi $ -tuples over unmatched vertices, we only obtain an algorithm with polynomial running time where the power may grow with the precision of approximation.", "It remains an interesting question whether a polynomial time algorithm with a fixed power can find a matching that approximates the maximal overlap up to a constant that is arbitrarily close to 1." ], [ "Upper bound of the maximal overlap", "In this subsection, we prove the upper bound of Theorem REF in the next proposition.", "Proposition 2.1 For any constant $\rho >\frac{1}{2\alpha -1}$ , we have $\mathbb {P}\left[\max _{\pi \in S_n}\mathrm {Overlap}(\pi )\ge \rho n\right]\rightarrow 0\,,\quad \text{as }n\rightarrow \infty \,.$ By a union bound, we see that the left hand side of (REF ) is bounded by $\sum _{\pi \in S_n}\mathbb {P}\left[\mathrm {Overlap}(\pi )\ge \rho n\right]=n!\mathbb {P}\left[\mathbf {B}\ge \rho n\right],$ where $\mathbf {B}$ is distributed as a binomial variable $\mathbf {B}(\binom{n}{2},p^2)$ .", "By standard large deviation estimates for binomial variables (see e.g. [29]), we have $\mathbb {P}\left[\mathbf {B}\ge \rho n\right]\le \exp \left(-\rho n\log \left(\frac{\rho n}{\binom{n}{2}p^2}\right)+\rho n\right)=\exp \left(-\rho (2\alpha -1+o(1))n\log n+O(n)\right),$ which is of smaller magnitude than $\exp (-n\log n)\ll (n!)^{-1}$ .", "This completes the proof.", "We introduce some asymptotic notation which will be used later.", "For non-negative sequences $f_n$ and $g_n$ , we write $f_n \lesssim g_n$ if there exists a constant $C>0$ such that $f_n \le C g_n$ for all $n\ge 1$ .", "We write $f_n \asymp g_n$ if $f_n \lesssim g_n$ and $g_n \lesssim f_n$ ." 
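, "As a numerical sanity check of the first moment computation above, the following minimal Python sketch evaluates the logarithm of the union bound $n!\,\mathbb {P}[\mathbf {B}\ge \rho n]$ using the stated large deviation estimate; the parameter values below are hypothetical and only serve to exercise the bound, which should be (increasingly) negative once $\rho >\frac{1}{2\alpha -1}$ and $n$ is moderately large:

```python
import math

def union_bound_log(n, alpha, rho):
    # log of the first-moment bound n! * P[B >= rho*n], where
    # B ~ Bin(C(n,2), p^2) with p = n^(-alpha), via the standard
    # large deviation estimate log P[B >= t] <= -t*log(t/mu) + t.
    p = n ** (-alpha)
    mu = math.comb(n, 2) * p ** 2          # mean of the binomial
    t = rho * n
    log_tail = -t * math.log(t / mu) + t   # Chernoff-type tail bound
    return math.lgamma(n + 1) + log_tail   # log(n!) + log of the tail

# Hypothetical parameters: for alpha = 0.6 the threshold is
# 1/(2*alpha - 1) = 5, so any rho > 5 should drive the bound to -infinity.
for n in (10**3, 10**4, 10**5):
    print(n, union_bound_log(n, alpha=0.6, rho=5.5))
```
"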
], [ "Preliminaries of the algorithm", "The rest of the paper is devoted the description and analysis of the algorithm in Theorem REF .", "Since $\\alpha \\in (1/2,1)$ , we may pick an integer $k\\ge 1$ such that $\\frac{1}{2\\alpha -1}\\in (k,k+1]$ .", "Henceforth we will fix $\\varepsilon >0$ together with a positive constant $\\eta $ sufficiently close to 0.", "More precisely, we choose $\\eta >0$ such that $&\\frac{1-2\\eta }{2\\alpha +2\\eta -1}>\\frac{1-\\varepsilon }{2\\alpha -1}\\,,\\quad \\frac{1}{2\\alpha +2\\eta -1}>k\\,,\\\\\\mbox{and }&\\bigg \\lfloor \\frac{(k+1)(2{(\\alpha +\\eta )}-1)-1}{2-2{(\\alpha +\\eta )}}\\bigg \\rfloor =\\bigg \\lfloor \\frac{(k+1)(2{\\alpha }-1)-1}{2-2{\\alpha }}\\bigg \\rfloor \\,,$ where $\\lfloor x\\rfloor $ is the greatest integer $\\le x$ and $\\lceil x\\rceil $ is the minimal integer $\\ge x$ .", "As described in Section REF , our matching algorithm works by searching for isomorphic copies of certain well-chosen tree in an iterative manner.", "We next construct this tree with a number of “balanced conditions” which will play an essential role in the analysis of the algorithm later.", "Before doing so, we introduce some notation conventions.", "For any simple graph $\\mathbf {H}$ , let $V(\\mathbf {H}),E(\\mathbf {H})$ be the set of vertices and edges of $\\mathbf {H}$ , respectively.", "Throughout the paper, we will use bold font for a graph (e.g., $\\mathbf {T}$ ) to emphasize its structural information; we use normal font for subgraphs of $G$ (e.g., $ {T}$ ) and we use mathsf font for subgraphs of $\\mathsf {G}$ (e.g., $\\mathsf {T}$ ).", "In addition, we use sans-serif font such as $\\operatorname{L},\\operatorname{Q}$ to denote the subsets of $\\lbrace 1,\\cdots ,n\\rbrace $ which serve as the subscripts of vertices in $G$ and $\\mathsf {G}$ .", "Lemma 2.2 There exist an irrational number $\\alpha _\\eta $ and two integers $\\chi ,\\zeta $ such that there is a rooted tree $\\mathbf {T}$ with leaf set $\\mathbf {L}$ and non-leaf set $\\mathbf {Q}$ such that the following hold: (i) $\\alpha <\\alpha _\\eta <\\alpha +\\eta , |\\mathbf {Q}|=\\chi , |E(\\mathbf {T})|=\\zeta $ and $0<\\chi -(2\\alpha _\\eta -1)\\zeta <1-\\alpha _\\eta $ .", "(ii) For each $u\\in \\mathbf {Q}$ which is adjacent to some leaves, $u$ is adjacent to exactly $k$ leaves.", "(iii) For any subgraph $\\mathbf {F}\\subsetneq \\mathbf {T}$ with $\\mathbf {L} \\subset \\mathbf {F}$ , it holds that $|V(\\mathbf {T})\\setminus V(\\mathbf {F})|<\\alpha _\\eta |E(\\mathbf {T})\\setminus E(\\mathbf {F})|\\,.$ (iv) For any subtree $\\mathbf {T}_0\\subsetneq \\mathbf {T}$ , it holds that $|V(\\mathbf {T}_0)\\cap \\mathbf {Q}|-(2\\alpha _\\eta -1)|E(\\mathbf {T}_0)|>\\chi -(2\\alpha _\\eta -1)\\zeta \\,.$ Recall that $k=\\lceil \\frac{1}{2\\alpha -1}\\rceil -1$ .", "First, we claim there exist positive integers $n_1\\le n_2\\le \\cdots \\le n_l$ such that $2\\alpha -1<\\frac{1+\\frac{1}{n_1}+\\frac{1}{n_1n_2}+\\cdots +\\frac{1}{n_1n_2\\cdots n_l}}{k+1+\\frac{1}{n_1}+\\frac{1}{n_1n_2}+\\cdots +\\frac{1}{n_1n_2\\cdots n_{l-1}}}<2(\\alpha +\\eta )-1\\,.$ In order to see this, pick an irrational number $\\beta $ such that $ 2\\alpha -1<\\frac{1+\\beta }{k+1+\\beta }<2(\\alpha +\\eta )-1\\,.$ Then we can express $\\beta $ as the infinite sum $\\beta =\\frac{1}{n_1}+\\frac{1}{n_1n_2}+\\frac{1}{n_1n_2n_3}+\\cdots $ with positive integers $n_1\\le n_2\\le \\cdots $ determined by the following inductive procedure: assuming $n_1,\\dots ,n_{k-1}$ have been fixed, we choose $n_k$ such that 
$\frac{1}{n_j} < n_1\cdots n_{j-1}\left(\beta -\frac{1}{n_1}-\cdots -\frac{1}{n_1\cdots n_{j-1}}\right) < \frac{1}{n_j-1}$ (note that the irrationality of $\beta $ ensures the strict inequalities above).", "It is straightforward to verify that this yields the desired expression.", "With this at hand, we obtain the desired relation (REF ) by an appropriate truncation.", "Writing $\ell =n_1n_2\cdots n_l$ , we set $\chi =\ell \left(1+\frac{1}{n_1}+\cdots +\frac{1}{n_1n_2\cdots n_l}\right) \mbox{ and } \zeta =\ell \Big (k + 1+\frac{1}{n_1}+\cdots +\frac{1}{n_1n_2\cdots n_{l-1}}\Big )\,.$ Thus, we have $\zeta =k\ell +\chi -1$ .", "We define a rooted tree $\mathbf {T}$ with $(l+2)$ generations as follows: the 0-th generation contains a single root; for $0\le i\le l-1$ , each vertex in the $i$ -th generation has $n_{l-i}$ children; each vertex in the $l$ -th generation has $k$ children (which are leaves).", "It is clear that $\mathbf {T}$ has $\chi $ non-leaf vertices and $\zeta $ edges.", "In addition, $\mathbf {T}$ satisfies (ii).", "Choose $ \widetilde{\alpha }$ such that $\chi /\zeta = 2\widetilde{\alpha }-1$ .", "Then by (REF ), we see that (i) holds for any $\alpha _\eta \in (\alpha ,\widetilde{\alpha })$ sufficiently close to $\widetilde{\alpha }$ .", "In the rest of the proof, we will show that for some irrational $\alpha _\eta <\widetilde{\alpha }$ sufficiently close to $\widetilde{\alpha }$ , (iii) and (iv) also hold.", "We remark that the proof is somewhat technical and may be skipped on a first reading.", "We begin with (REF ).", "We say $\mathbf {T}_0$ is an entire subtree of $\mathbf {T}$ if $\mathbf {T}_0$ is a subtree of $\mathbf {T}$ such that $\mathbf {T}_0$ contains all neighbors of $u$ in $\mathbf {T}$ for any vertex $u$ with degree larger than 1 in $\mathbf {T}_0$ .", "For a subgraph $\mathbf {F} \subset \mathbf {T}$ , we consider the following procedure: remove all the vertices and edges in $\mathbf {F}$ from $\mathbf {T}$ , so that what remains is a union of vertices and edges, where some edges may be incomplete since one or both of their endpoints may have been removed; for each such incomplete edge and for each of its endpoints that has been removed, we add a distinct vertex to replace the removed endpoint.", "Figure: The red part on the left is the subgraph $\mathbf {F}$ .", "The black part on the right is $\mathbf {T}\setminus _\star \mathbf {F}$ .", "There are three entire subtrees (black) in $\mathbf {T}\setminus _\star \mathbf {F}$ .", "At the end of this procedure, we obtain a collection of trees that are pairwise edge-disjoint and vertex-disjoint, and we denote this collection of trees by $\mathbf {T}\setminus _\star \mathbf {F}$ .", "In addition, each tree in $\mathbf {T}\setminus _\star \mathbf {F}$ can be regarded as a subtree of $\mathbf {T}$ (if we identify each added vertex with the corresponding vertex in $\mathbf {T}$ ), and one can verify that each such subtree is an entire subtree of $\mathbf {T}$ .", "Therefore, in order to prove (REF ) it suffices to show that for any entire subtree $\mathbf {T}_0 \subset \mathbf {T}$ , where $\mathbf {Q}_0$ is the collection of vertices with degree larger than 1 in $\mathbf {T}_0$ , we have $|\mathbf {Q}_0|\le \alpha |E(\mathbf {T}_0)|$ (note that $\alpha <\alpha _\eta $ ) and in addition the equality holds only when the tree is 
a singleton.", "For a subtree $\mathbf {T}_0$ of $\mathbf {T}$ , assume its root $\mathbf {v}$ (i.e., the vertex in $V(\mathbf {T}_0)$ closest to the root of $\mathbf {T}$ ) lies in the $i$ -th generation of $\mathbf {T}$ .", "We define the height of $\mathbf {T}_0$ by $h(\mathbf {T}_0)={\left\lbrace \begin{array}{ll}l+1-i,\quad &\text{ if }|V(\mathbf {T}_0)|\le 2\text{ or }\mathbf {v}\in \mathbf {Q}_0\,,\\l-i,\quad &\text{ if }|V(\mathbf {T}_0)|>2\text{ and }\mathbf {v}\notin \mathbf {Q}_0\,.\end{array}\right.}$ Since (REF ) obviously holds if $|V(\mathbf {T}_0)|\le 2$ , we only need to deal with the other cases.", "Let $m = |\lbrace i: n_i=1\rbrace |$ .", "From (REF ) we see $\frac{1}{n_1}+\cdots +\frac{1}{n_1\cdots n_{l-1}}<\frac{(k+1)(2\widetilde{\alpha }-1)-1}{2-2\widetilde{\alpha }}<\frac{1}{n_1}+\cdots +\frac{1}{n_1\cdots n_l}\,.$ Since $n_1=\cdots =n_m=1$ and $n_t\ge 2$ for $t>m$ , we get from (REF ) that $m= \Big \lfloor \frac{(k+1)(2\widetilde{\alpha }-1)-1}{2-2\widetilde{\alpha }}\Big \rfloor =\Big \lfloor \frac{(k+1)(2{\alpha }-1)-1}{2-2{\alpha }}\Big \rfloor \,.$ We now prove (REF ) for any entire subtree $\mathbf {T}_0$ with $h(\mathbf {T}_0) \le m+1$ .", "If $1/2<{\alpha }\le 3/4$ , then we have $k\ge 2$ and $m=0$ .", "Thus, $\mathbf {T}_0$ must be a star graph with $k$ or $k+1$ leaves, implying that $|\mathbf {Q}_0|-\alpha |E(\mathbf {T}_0)| \le 1-k\alpha <0\,.$ If $3/4<\alpha <1$ , then $k=1$ and $\frac{2m+3}{2m+4}< \alpha \le \frac{2m+5}{2m+6}$ .", "Thus, $\mathbf {T}_0$ must be a chain of length at most $m+2$ , implying that $|\mathbf {Q}_0|-\alpha |E(\mathbf {T}_0)|\le (m+1)-(m+2)\alpha <0\,.$ For an entire subtree $\mathbf {T}_0$ with $h(\mathbf {T}_0) \ge m+2$ , we prove (REF ) by proving it for generalized entire subtrees, where $\mathbf {T}_0$ is a generalized entire subtree if $\mathbf {T}_0$ contains all the children (in $\mathbf {T}$ ) of $u$ for each $u\in \mathbf {Q}_0$ .", "We prove this stronger version of (REF ) by induction (it is not uncommon that proving a stronger statement by induction makes the proof easier).", "For the base case when $h(\mathbf {T}_0) = m+2$ , the claim follows since $|\mathbf {Q}_0|-\alpha |E(\mathbf {T}_0)|\le {\left\lbrace \begin{array}{ll}1-2\alpha ,\quad &1/2<\alpha \le 3/4\,,\\(2m+3)-(2m+4)\alpha ,\quad &3/4<\alpha <1\,,\end{array}\right.}$ which is negative by the preceding discussion.", "Now suppose that (REF ) holds whenever $\mathbf {T}_0$ is a generalized entire subtree with $h(\mathbf {T}_0) = h$ , and we next consider a generalized entire subtree $\mathbf {T}_0$ with $h(\mathbf {T}_0)=h+1$ .", "Denote the non-leaf vertex of $\mathbf {T}_0$ at the $(l-h-1)$ -th generation (in $\mathbf {T}$ ) by $\mathbf {o}_0$ (this is well-defined since $\mathbf {o}_0$ uniquely exists as long as $\mathbf {T}_0$ contains more than 2 vertices).", "In addition, denote the subtrees of $\mathbf {T}_0$ rooted at children of $\mathbf {o}_0$ by $\mathbf {T}_1,\dots ,\mathbf {T}_s$ , where $\mathbf {T}_i$ contains all the descendants (in $\mathbf {T}_0$ ) of its root.", "Then $\mathbf {T}_i$ is a generalized entire subtree with height $h$ for $i = 1, \ldots , s$ .", "Thus, by our induction hypothesis we get (denoting by $\mathbf {Q}_i$ the non-leaf vertices in $\mathbf {T}_i$ ) $|\mathbf {Q}_i|\le \alpha |E(\mathbf {T}_i)| \mbox{ for all } 1\le i\le s\,.$ Note that 
$|\mathbf {Q}_0|=1+|\mathbf {Q}_1|+\ldots +|\mathbf {Q}_s|$ and $|E(\mathbf {T}_0)|\ge s+|E(\mathbf {T}_1)|+\cdots +|E(\mathbf {T}_s)|$ .", "Combined with (REF ) and the fact that $s\ge 2$ , this yields that $|\mathbf {Q}_0|-\alpha |E(\mathbf {T}_0)|<0$ , completing the inductive step for the proof of (REF ) (so this proves (REF )).", "We next prove (REF ).", "It suffices to show that for any subtree $\mathbf {T}_0\subsetneq \mathbf {T}$ , $\frac{|V(\mathbf {T}_0)\cap \mathbf {Q}|}{|E(\mathbf {T}_0)|}>\frac{|\mathbf {Q}|}{|E(\mathbf {T})|}=\frac{\chi }{\zeta }=2\widetilde{\alpha }-1\,.$ Indeed, (REF ) implies that $|V(\mathbf {T}_0)\cap \mathbf {Q}|-(2\widetilde{\alpha }-1)|E(\mathbf {T}_0)|>0=\chi -(2\widetilde{\alpha }-1)\zeta \mbox{ for all } \mathbf {T}_0\subsetneq \mathbf {T}\,.$ Writing $A$ for the set of $\alpha _\eta $ for which (REF ) holds, we get from the preceding inequality that $\widetilde{\alpha }\in A$ .", "By the finiteness of $\mathbf {T}$ , we have that $A$ is an open set and thus we can pick an irrational $\alpha _\eta \in A$ with $\alpha _\eta <\widetilde{\alpha }$ and sufficiently close to $\widetilde{\alpha }$ .", "Define $\mathbb {T}=\lbrace \mathbf {T}_v: v\in \mathbf {T}\rbrace $ , where $\mathbf {T}_v$ is the subtree rooted at $v$ containing all descendants of $v$ in $\mathbf {T}$ .", "We first verify (REF ) for subtrees in $\mathbb {T}$ .", "Denoting $n_0=1$ , we see that for any $0\le j\le l$ and any vertex $v$ in the $(l-j)$ -th generation, $\frac{|V(\mathbf {T}_v)\cap \mathbf {Q}|}{|E(\mathbf {T}_v)|}=\frac{\frac{1}{n_0}+\frac{1}{n_1}+\cdots +\frac{1}{n_1\cdots n_j}}{k+\frac{1}{n_0}+\frac{1}{n_1}+\cdots +\frac{1}{n_1\cdots n_{j-1}}}\stackrel{\operatorname{def}}{=}2\alpha _j-1\,.$ We now show $\alpha _j>\widetilde{\alpha }$ for any $0\le j\le l-1$ .", "This is true for $j=0$ by the choice of $\eta $ in (REF ).", "Suppose for contradiction that the inequality fails for some $j\ge 1$ .", "Then we may pick the minimal $j_0\in \lbrace 1,2,\dots ,l-1\rbrace $ such that $\alpha _{j_0}\le \widetilde{\alpha }$ .", "By minimality of $j_0$ , we have that $\alpha _{j_0-1}>\widetilde{\alpha }$ and thus $n_{j_0}>\frac{1}{2\widetilde{\alpha }-1}$ .", "Since $\lbrace n_j\rbrace _{j=0}^{l}$ is non-decreasing, we get $n_j>\frac{1}{2\widetilde{\alpha }-1}$ for all $j\ge j_0$ , and thus $\alpha _j<\widetilde{\alpha }$ for any $j>j_0$ .", "This implies $\widetilde{\alpha }=\alpha _l<\widetilde{\alpha }$ , arriving at a contradiction and thus verifying (REF ) for subtrees in $\mathbb {T}$ .", "Finally, we reduce the general case to subtrees in $\mathbb {T}$ as follows: for any subtree $\mathbf {T}_0\subset \mathbf {T}$ , we denote its root by $\mathbf {o}_0$ (this is the closest vertex in $\mathbf {T}_0$ to the root of $\mathbf {T}$ ) and consider $\mathbf {T}_{\mathbf {o}_0}\in \mathbb {T}$ .", "It is clear that $\mathbf {T}_0$ can be obtained from $\mathbf {T}_{\mathbf {o}_0}$ by deleting some vertices and edges.", "Further, the numbers of vertices and edges deleted are the same, and we denote this common number by $N$ .", "We then have $\frac{|V(\mathbf {T}_0)\cap \mathbf {Q}|}{|E(\mathbf {T}_0)|}\ge \frac{|V(\mathbf {T}_{\mathbf {o}_0})\cap \mathbf {Q}|-N}{|E(\mathbf {T}_{\mathbf {o}_0})|-N}\ge \frac{|V(\mathbf {T}_{\mathbf {o}_0})\cap \mathbf {Q}|}{|E(\mathbf {T}_{\mathbf {o}_0})|}\ge {2\widetilde{\alpha }-1}\,,$ with 
equality if and only if $N=0$ and $\mathbf {T}_{\mathbf {o}_0}=\mathbf {T}$ (that is, $\mathbf {T}_0=\mathbf {T}$ ).", "This completes the proof of (REF ).", "In what follows, we fix $\alpha _\eta ,\chi ,\zeta $ and the tree $\mathbf {T}$ given in Lemma REF , and let $\xi = \zeta -\chi +1$ .", "We label the vertices of $\mathbf {T}$ by $1,2,\dots ,\chi ,\chi +1,\dots ,\chi + \xi $ , such that the root is labeled by 1, $\mathbf {Q}=\lbrace 1,2,\dots ,\chi \rbrace $ and $\mathbf {L}=\lbrace \chi +1,\dots ,\chi + \xi \rbrace $ .", "Recall that $E(\mathbf {T})=\lbrace (i,j):1\le i<j\le \chi + \xi , i\mbox{ is adjacent to } j \mbox{ in }\mathbf {T}\rbrace \,.$ For the later proofs, it will be convenient to consider Erdős-Rényi graphs with edge density $p_\eta =n^{-\alpha _\eta }$ , and we can reduce our problem to this case by monotonicity.", "Indeed, since $\alpha _\eta >\alpha $ we have $p_\eta < p$ for all $n$ , and thus $\mathbf {G}(n,p)$ stochastically dominates $\mathbf {G}(n,p_\eta )$ .", "In addition, since monotonicity in $p$ naturally holds for $\mathrm {Overlap}(\pi )$ , we may change the underlying distribution from $\mathbf {G}(n,p)$ to $\mathbf {G}(n,p_\eta )$ and prove the lower bound for the latter.", "In what follows we use $G$ and $\mathsf {G}$ to denote two independent Erdős-Rényi graphs $\mathbf {G}(n,p_\eta )$ (as well as the respective adjacency matrices); we suppress the dependence on $\eta $ in the notation for convenience.", "We denote by $\mathbb {P}$ the joint law of $(G, \mathsf {G})$ .", "Finally, we remark that the irrationality assumption on $\alpha _\eta $ in Lemma REF is purely technical; it will be useful in later proofs (for the purpose of ruling out equalities)." 
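, "To make the tree construction in the proof of Lemma REF concrete, the following minimal Python sketch builds $\mathbf {T}$ generation by generation and checks the counting identity $\zeta =k\ell +\chi -1$ together with the value of $\widetilde{\alpha }$ determined by $\chi /\zeta =2\widetilde{\alpha }-1$ ; the parameters $n_1\le \cdots \le n_l$ and $k$ below are hypothetical and only serve to exercise these identities:

```python
import math
from fractions import Fraction

def build_tree(ns, k):
    # Build the rooted tree T from the proof of Lemma 2.2: for
    # 0 <= i <= l-1 every generation-i vertex has n_{l-i} children,
    # and every generation-l vertex has k leaf children.
    l = len(ns)
    children = {0: []}          # vertex 0 is the root
    frontier = [0]
    nxt = 1
    for i in range(l):          # internal generations 0, ..., l-1
        fanout = ns[l - 1 - i]  # generation-i vertices have n_{l-i} children
        new_frontier = []
        for v in frontier:
            for _ in range(fanout):
                children[v].append(nxt)
                children[nxt] = []
                new_frontier.append(nxt)
                nxt += 1
        frontier = new_frontier
    leaves = []
    for v in frontier:          # generation l: attach k leaves to each vertex
        for _ in range(k):
            children[v].append(nxt)
            children[nxt] = []
            leaves.append(nxt)
            nxt += 1
    return children, leaves

ns, k = [1, 2, 3], 1                   # hypothetical parameters
children, leaves = build_tree(ns, k)
chi = len(children) - len(leaves)      # number of non-leaf vertices
zeta = len(children) - 1               # number of edges of a tree
ell = math.prod(ns)
assert zeta == k * ell + chi - 1       # the identity zeta = k*ell + chi - 1
print('chi =', chi, ', zeta =', zeta,
      ', tilde alpha =', Fraction(chi + zeta, 2 * zeta))
```
"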
], [ "Description of the algorithm", "We first introduce some notations.", "For any nonempty subset $\\operatorname{S}\\subset [n]$ and any integer $1\\le m\\le |\\operatorname{S}|$ , define (recalling (REF )) $\\mathfrak {A}(\\operatorname{S},m)=\\lbrace (i_1,\\dots ,i_m) \\in S^m: i_1,\\dots ,i_m\\text{ are distinct}\\rbrace \\,.$ In addition, for $m\\in \\lbrace \\chi ,\\xi \\rbrace $ , we sample a random total ordering $\\prec _m$ on the set $\\mathfrak {A}([n],m)$ uniformly from all possible orderings and fix it (This can be done efficiently, since it is equivalent to sampling a uniform permutation on a set with cardinality polynomial in $n$ , which can be done in poly-time in $n$ ).", "We will omit the subscript when it is clear in the context.", "For any tuple $\\operatorname{L} = (t_1, \\ldots , t_m)$ and any permutation $\\pi $ , we write $\\pi (\\operatorname{L}) = (\\pi (t_1), \\ldots , \\pi (t_m))$ .", "For a $\\xi $ -tuple $\\operatorname{L}=(t_{\\chi +1},\\dots ,t_{\\chi + \\xi })\\in \\mathfrak {A}([n], \\xi )$ and a $\\chi $ -tuple $\\operatorname{Q}=(t_1,\\dots ,t_\\chi )\\in \\mathfrak {A}([n],\\chi )$ with $\\operatorname{L}\\cap \\operatorname{Q}=\\emptyset $ (i.e., the coordinates of $\\operatorname{L}$ are disjoint from the coordinates of $\\operatorname{Q}$ ), we let $ L =\\lbrace v_i:i\\in \\operatorname{L}\\rbrace $ , $Q =\\lbrace v_i:i\\in \\operatorname{Q}\\rbrace $ and define $\\lbrace L\\bowtie _{G} Q\\rbrace = \\lbrace G_{t_i,t_j}= 1 \\mbox{ for all } (i,j)\\in E(\\mathbf {T})\\rbrace \\,.$ Let $\\lbrace L\\lnot \\bowtie _{G}Q\\rbrace $ be the complement of $\\lbrace L\\bowtie _{G} Q\\rbrace $ .", "In addition, similar notations of $\\mathsf {L}$ , $\\mathsf {Q}$ , $\\lbrace \\mathsf {L}\\bowtie _{\\mathsf {G}}\\mathsf {Q}\\rbrace ,\\lbrace \\mathsf {L}\\lnot \\bowtie _{\\mathsf {G}}\\mathsf {Q}\\rbrace $ apply for the graph $\\mathsf {G}$ , where the $G_{i, j}$ 's in $(\\ref {eq:def-TsimT^{\\prime }})$ are replaced by corresponding $\\mathsf {G}_{i, j}$ 's.", "For any $\\xi $ -tuple $\\operatorname{L}\\subset \\mathfrak {A}([n], \\xi )$ and any subset $\\operatorname{U}\\subset [n]$ , let $\\lbrace L\\bowtie _{G} U\\rbrace $ be the event that $\\lbrace L\\bowtie _{G} Q\\rbrace $ occurs for at least one $\\chi $ -tuple $\\operatorname{Q}\\subset \\mathfrak {A}(\\operatorname{U}\\setminus \\operatorname{L},\\chi )$ where $U = \\lbrace v_i:i\\in \\operatorname{U}\\rbrace $ .", "Further, for $r\\in \\operatorname{U}$ , let $\\lbrace L\\bowtie _{G,r} U\\rbrace $ denote the event that there exists $\\operatorname{Q} = (t_1, \\ldots , t_{\\chi }) \\in \\mathfrak {A}(\\operatorname{U} \\setminus \\operatorname{L}, \\chi )$ such that the event $\\lbrace L\\bowtie _{G}Q\\rbrace $ holds and $t_1 = r$ .", "Similar notations of $\\lbrace \\mathsf {L}\\bowtie _\\mathsf {G}\\mathsf {U}\\rbrace ,\\lbrace \\mathsf {L}\\bowtie _{\\mathsf {G},r} \\mathsf {U}\\rbrace $ apply for $\\mathsf {G}$ .", "Moreover, we define a mapping $\\Pi = \\Pi _\\pi : [n]\\rightarrow \\mathsf {V}$ corresponding to $\\pi $ such that $\\Pi (i)=\\mathsf {v}_{\\pi (i)}$ , and we define a mapping $I_{G}:[n]\\rightarrow V$ by $I_{G}(i)= v_i$ (we also define $I_{\\mathsf {G}}$ similarly).", "Now we are ready to describe our matching algorithm.", "Roughly speaking, our algorithm proceeds iteratively in the following greedy sense: in the $s$ -th step, assuming we have already matched a set $\\operatorname{M}_{s-1}$ (i.e., we have determined the value of $\\pi ^*(i)$ for $i\\in \\operatorname{M}_{s-1}$ ), we 
then pick some $u\\in \\operatorname{R}_{s-1}=[n]\\setminus \\operatorname{M}_{s-1}$ and try to find a triple of tuples $(L,Q,\\mathsf {Q})$ with $\\begin{split}L&=\\lbrace (v_{t_{\\chi +1}},\\cdots ,v_{t_{\\chi + \\xi }}):\\,(t_{\\chi +1}\\cdots ,t_{\\chi + \\xi }) \\in \\mathfrak {A}(\\operatorname{M}_{s-1}, \\xi )\\rbrace ,\\\\Q&=\\lbrace (v_{t_{1}},\\cdots , v_{t_{\\chi }}):\\,(t_1,\\dots ,t_\\chi )\\in \\mathfrak {A}(\\operatorname{R}_{s-1},\\chi )\\rbrace ,\\\\\\mathsf {Q}&=\\lbrace (\\mathsf {v}_{t_1^{\\prime }},\\cdots ,\\mathsf {v}_{t_\\chi ^{\\prime }}):\\,(t_1^{\\prime },\\dots ,t_\\chi ^{\\prime })\\in \\mathfrak {A}([n]\\setminus \\pi ^*(\\operatorname{M}_{s-1}),\\chi )\\rbrace \\,.\\end{split}$ such that $t_1=u$ and both $\\lbrace L\\bowtie _{G} Q\\rbrace $ and $\\lbrace \\Pi \\left( I_{G}^{-1}(L)\\right)\\bowtie _{\\mathsf {G}} \\mathsf {Q}\\rbrace $ happen.", "If such $L,Q,\\mathsf {Q}$ exist, we let $\\pi ^*(t_j)=t_j^{\\prime }$ and $\\Pi (t_i)=\\mathsf {v}_{t_j^{\\prime }}$ for $1\\le j\\le \\chi $ (that is, we determine the values of $\\pi ^*$ at $t_1, \\ldots , t_{\\chi }$ in this step); else we just choose the value $\\pi ^*(u)$ arbitrarily (that is, we just determine the value of $\\pi ^*$ at $u$ in an arbitrary manner).", "This completes the construction for the $s$ -th step.", "We expect that in most steps we are able to find such triples and thus the increment for the overlap is at least $\\zeta $ .", "Since there are around ${n}/{\\chi }$ steps in total, this would imply that at the end of the procedure $\\mathrm {Overlap}(\\pi ^*)\\approx {\\zeta }n/\\chi \\approx \\frac{n}{2\\alpha -1}$ , as desired.", "While the above algorithm may indeed achieve our goal, we will further incorporate the following technical operations in order to facilitate the analysis of the algorithm: Fixing a large integer $\\kappa _0$ with $\\kappa _0>4\\zeta /\\eta $ , in each step we will first exclude the vertices which have been “used” for at least $\\kappa _0$ times.", "In each step, when seeking the desired triple $(L,Q,\\mathsf {Q})$ , we will check the tuples in the order given by $\\prec $ .", "We next present our full algorithm formally.", "Initially, we set $\\pi ^*(i)=i$ for all $1\\le i\\le \\eta n$ and let $\\operatorname{M}_0=\\lbrace 1,2,\\dots ,\\lfloor \\eta n\\rfloor \\rbrace $ .", "Let $\\operatorname{R}_0=[n]\\setminus \\operatorname{M}_0$ .", "Set $\\operatorname{EXP}_0= \\operatorname{SUC}_0= \\operatorname{FAIL}_0=\\emptyset $ .", "As we will see in the formal definition below, we will define $\\operatorname{EXP}_s= \\operatorname{SUC}_s\\cup \\operatorname{FAIL}_s\\subset \\mathfrak {A}([n], \\xi )$ to be the set of $\\xi $ -tuples which are explored during the $s$ -th step, where $ \\operatorname{SUC}_s$ (respectively $ \\operatorname{FAIL}_s$ ) denotes the collection of tuples which were successfully matched (respectively, failed to be matched).", "Our algorithm then proceeds as follows:   Greedy Matching Algorithm loaalgorithmGreedy Matching Algorithm [1] Define $\\pi ^*(i),1\\le i\\le \\eta n$ and $\\operatorname{M}_0,\\operatorname{R}_0, \\operatorname{EXP}_0, \\operatorname{SUC}_0, \\operatorname{FAIL}_0$ as above.", "$s=1,2,\\dots $ $|\\operatorname{R}_{s-1}|\\ge \\eta n$ Set $I_s=0$ ; set a triple $ \\operatorname{MT}_s= \\mathtt {null}$ ; set $\\mathtt {M}_s=\\operatorname{M}_{s-1}$ .", "$u\\in \\operatorname{M}_{s-1}$ $u$ appears in at least $\\kappa _0$ tuples in $\\lbrace \\operatorname{MT}_t: 1\\le t\\le s-1\\rbrace $ Delete $u$ from $\\mathtt {M}_s$ 
.", "Set $\\operatorname{EXP}_{s}=\\operatorname{SUC}_{s}=\\operatorname{FAIL}_{s}=\\emptyset $ .", "Find the minimal element $u_s\\in \\operatorname{R}_{s-1}$ .", "Let $\\operatorname{CAND}_s$ be the collection of $\\operatorname{L}\\in \\mathfrak {A}(\\mathtt {M}_s, \\xi )$ so that $\\lbrace I_{G}(\\operatorname{L})\\bowtie _{G,u_s} \\operatorname{R}_{s-1}\\rbrace $ holds.", "Label elements in $\\operatorname{CAND}_s$ by $\\operatorname{L}_1,\\dots ,\\operatorname{L}_l$ such that $\\operatorname{L}_1\\prec \\cdots \\prec \\operatorname{L}_l$ .", "Label all $\\chi $ -tuples in $\\mathfrak {A}([n]\\setminus \\pi ^*(\\operatorname{M}_{s-1}),\\chi )$ by $\\operatorname{Q}_1,\\dots ,\\operatorname{Q}_m$ so that $\\operatorname{Q}_1\\prec \\cdots \\prec \\operatorname{Q}_m$ .", "$i=1,2,\\dots ,l$ $j=1,2,\\dots ,m$ Check whether $\\lbrace \\Pi (\\operatorname{L}_i)\\bowtie _{\\mathsf {G}} I_\\mathsf {G}(\\operatorname{Q}_j)\\rbrace $ happens where $\\Pi = \\Pi _{\\pi ^*}$ .", "$\\lbrace \\Pi (\\operatorname{L}_i)\\bowtie _{\\mathsf {G}} I_\\mathsf {G}(\\operatorname{Q}_j)\\rbrace $ happens Find a $\\chi $ -tuple $\\operatorname{Q}\\in \\mathfrak {A}(\\operatorname{R}_{s-1},\\chi )$ such that $\\lbrace I_{G}(\\operatorname{L}_i)\\bowtie _{G, u_s} I_{G}(\\operatorname{Q})\\rbrace $ holds.", "The existence of $\\operatorname{Q}$ is guaranteed by definition of $\\operatorname{L}_i$ .", "Set $I_s=1$ and $\\operatorname{MT}_s=(\\operatorname{L}_i,\\operatorname{Q},\\operatorname{Q}_j)$ .", "Add $\\operatorname{L}_i$ into $\\operatorname{SUC}_{s}$ .", "Then break the for cycle, and break the for cycle again.", "$j=m$ Add $\\operatorname{L}_i$ into $\\operatorname{FAIL}_{s}$ .", "This means we have checked all possible tuples $\\operatorname{Q}_j,j=1,2,\\dots ,m$ but failed to find the desired one.", "Set $\\operatorname{EXP}_{s}=\\operatorname{SUC}_{s}\\cup \\operatorname{FAIL}_{s}$ .", "$I_s=1$ Recall $\\operatorname{MT}_s=(\\operatorname{L}_i,\\operatorname{Q}, \\operatorname{Q}_j)$ .", "Set $\\pi ^*$ on coordinates of $\\operatorname{Q}$ so that $\\pi ^*(\\operatorname{Q}) = \\operatorname{Q}_j$ .", "Set $\\operatorname{M}_{s}$ as the union of $\\operatorname{M}_{s-1}$ and all coordinates in $\\operatorname{Q}$ ; set $\\operatorname{R}_{s}=[n]\\setminus \\operatorname{M}_{s}$ .", "Set $\\pi ^*(u_s)$ as the minimal element in $[n]\\setminus \\pi ^*(\\operatorname{M}_{s-1})$ .", "Set $\\operatorname{M}_{s}=\\operatorname{M}_{s-1}\\cup \\lbrace u_s\\rbrace ,\\operatorname{R}_{s}=[n]\\setminus \\operatorname{M}_{s}$ .", "set $\\pi ^*(u)$ for $u\\in \\operatorname{R}_s$ arbitrarily such that $\\pi ^*$ becomes a permutation on $[n]$ .", "Break the for cycle.", "$\\pi ^*$ .", "Remark 2.3 $\\operatorname{CAND}_s$ stands for the candidate tuples which may be successfully matched in the $s$ -th step, while $\\operatorname{EXP}_s$ denotes for the set of tuples in $\\operatorname{CAND}_s$ that have been checked during the algorithm.", "In most part of this paper, we do not distinguish $\\operatorname{CAND}_s$ and $\\operatorname{EXP}_s$ , and they are indeed equal when $I_s=0$ (i.e., we have checked all candidate tuples but failed to match up).", "However, one should keep in mind that typically it holds $|\\operatorname{EXP}_{s}|\\ll |\\operatorname{CAND}_s|$ .", "Heuristically, this is because the checking procedure is according to a uniform ordering $\\prec $ and it stops as long as any successful matching triple is found.", "We shall make this precise and incorporate it as an ingredient for proofs in 
Section REF ." ], [ "Analysis of the algorithm", "In this subsection we prove that Algorithm REF satisfies the condition of Theorem REF .", "To this end, we need to analyze the conditional probability for the event $\lbrace I_s=1\rbrace $ given the behavior of Algorithm REF in previous steps.", "For notational convenience, in what follows we write $\pi = \pi ^*$ and write $\operatorname{R}^{\prime }_s = [n] \setminus \pi (\operatorname{M}_s)$ .", "Since in each step we always have $|\operatorname{R}_{s+1}|\ge |\operatorname{R}_s|-\chi $ , the algorithm runs for at least $S=\lfloor \frac{1-2\eta }{\chi }n\rfloor $ steps.", "We will only consider the first $S$ steps.", "For each $1\le s\le S$ , define $\mathcal {F}_{s-1}$ to be the $\sigma $ -field generated by $\operatorname{M}_t, \operatorname{SUC}_t, \operatorname{FAIL}_t, \operatorname{CAND}_t, I_t, \operatorname{MT}_t$ for $t = 1, \ldots , s-1$ as well as $\pi (i)$ for $i\in \operatorname{M}_{s-1}$ (here we denote the matching triple $\operatorname{MT}_t$ by $(\operatorname{L}_t,\operatorname{Q}_t,\operatorname{Q}^{\prime }_t)$ if $I_t=1$ , and $\operatorname{MT}_t=\mathtt {null}$ if $I_t=0$ ).", "Then $\mathcal {F}_{s-1}$ contains all the information generated by Algorithm REF in the first $s-1$ steps.", "Thus conditioning on a realization of $\mathcal {F}_{s-1}$ is equivalent to conditioning on a realization of the first $s-1$ steps of Algorithm REF .", "We further let $\mathcal {F}_{s-1/2}$ be the $\sigma $ -field generated by $\mathcal {F}_{s-1}$ and $ \operatorname{CAND}_{s}$ .", "With slight abuse of notation, we will write $\mathbb {P}[\cdot \mid \mathcal {F}_{s-1}]$ (respectively $\mathbb {P}[\cdot \mid \mathcal {F}_{s-1/2}]$ ) for the conditional probability given some particular realization of $\mathcal {F}_{s-1}$ (respectively some realization of $\mathcal {F}_{s-1/2}$ ).", "Let $\begin{aligned}\mathsf {Fail}_{s-1}=& \bigcup _{1\le t\le s-1}\Big \lbrace \big (\Pi (\operatorname{L}),I_\mathsf {G}(\operatorname{Q}^{\prime })\big ):\operatorname{L}\in \operatorname{FAIL}_{t},\,\operatorname{Q}^{\prime }\in \mathfrak {A}(\operatorname{R}_t^{\prime },\chi )\Big \rbrace \\\bigcup &\bigcup _{1\le t\le s-1,\atop I_t=1}\Big \lbrace \big (\Pi (\operatorname{L}_{t}),I_\mathsf {G}(\operatorname{Q}^{\prime })):\operatorname{Q}^{\prime }\in \mathfrak {A}(\operatorname{R}_t^{\prime },\chi ),\operatorname{Q}^{\prime }\prec \operatorname{Q}^{\prime }_t\Big \rbrace \end{aligned}$ and $\operatorname{Suc}_{s-1}=\bigcup _{1\le t\le s-1,\atop I_t=1}\lbrace (I_G(\operatorname{L}_t),I_G(\operatorname{Q}_t))\rbrace \,,\quad \mathsf {Suc}_{s-1}= \bigcup _{1\le t\le s-1,\atop I_t=1}\lbrace (\Pi (\operatorname{L}_t),I_{\mathsf {G}}(\operatorname{Q}_t^{\prime }))\rbrace \,.$ Since the two graphs are independent, we have that for any event $\mathcal {B}$ measurable with respect to $\mathsf {G}$ , $\mathbb {P}[\mathcal {B}\mid \mathcal {F}_{s-1/2}]=\mathbb {P}[\mathcal {B}\mid \mathcal {F}_{s-1}]=\mathbb {P}[\mathcal {B}\mid \mathcal {A}_s^1,\mathcal {A}_s^2]$ with $\mathcal {A}_s^1=\bigcap _{(\mathsf {L},\mathsf {Q})\in \mathsf {Fail}_{s-1}}\lbrace \mathsf {L}\lnot \bowtie _\mathsf {G}\mathsf {Q}\rbrace \,,\quad \mathcal {A}_s^2=\bigcap _{(\mathsf {L},\mathsf {Q})\in \mathsf {Suc}_{s-1}}\lbrace \mathsf {L}\bowtie _{\mathsf {G}} \mathsf {Q}\rbrace \,.$ Now we investigate the 
event $\\lbrace I_s=0\\rbrace $ conditioned on some realization of $\\mathcal {F}_{s-1}$ .", "As in the algorithm, we first pick $u_s\\in \\operatorname{R}_{s-1}$ and find all $\\xi $ -tuples $\\operatorname{L}\\in \\operatorname{CAND}_s$ (recall that $I_G(\\operatorname{L})\\bowtie _{G,u_s} R_{s-1}$ for $\\operatorname{L}\\in \\operatorname{CAND}_s$ ).", "Note that whenever $\\mathcal {F}_{s-1}$ is given, this procedure is measurable with respect to $G$ .", "Provided with $\\operatorname{CAND}_s$ , we have that $I_s=0$ is equivalent to $\\sum _{\\operatorname{L} \\in \\operatorname{CAND}_s} \\mathbf {1}_{\\lbrace \\Pi (\\operatorname{L})\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\rbrace }=0 \\mbox{ where } \\mathsf {R}_{s-1}=I_\\mathsf {G}(\\operatorname{R}_{s-1}^{\\prime })\\,.$ Since our aim is to lower-bound the preceding sum, it turns out more convenient to consider the sum over tuples with additional properties.", "To this end, we will define certain $s$ -good tuples ($s$ -good is measurable with respect to $\\mathcal {F}_{s-1}$ ; see Definition REF below) and consider $X_s\\stackrel{\\operatorname{def}}{=}\\sum _{\\operatorname{L} \\in \\operatorname{CAND}_s\\atop \\operatorname{L}\\text{ is }s\\text{-good}} \\mathbf {1}_{\\lbrace \\Pi (\\operatorname{L})\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\rbrace }\\,.$ Clearly, $\\lbrace I_s=0\\rbrace \\subset \\lbrace X_s = 0\\rbrace $ .", "For any $1\\le s\\le S$ , we shall define two good events $\\mathcal {G}_s^i,i=1,2$ , where $\\mathcal {G}_s^1$ is measurable with respect to $\\mathcal {F}_{s-1}$ and $\\mathcal {G}_s^2$ is measurable with respect to $\\mathcal {F}_{s-1/2}$ .", "The precise definitions of $\\mathcal {G}_s^1,\\mathcal {G}_s^2$ will be given in the next section.", "Roughly speaking, $\\mathcal {G}_s^1$ states that the cardinality of $\\operatorname{Fail}_{s-1}$ is not unusually large, while $\\mathcal {G}_s^2$ says that the number of $s$ -good tuples in $\\operatorname{CAND}_s$ is not too small and the intersecting profile for pairs of $s$ -good tuples in $\\operatorname{CAND}_s$ behaves in a typical way.", "The following three propositions, whose proofs are postponed until Section , are key ingredients for the proof of Theorem REF .", "Proposition 2.4 For $1\\le s\\le S$ , any realization of $\\mathcal {F}_{s-1}$ and any $\\xi $ -tuple $\\operatorname{L}\\in \\mathfrak {A}(\\operatorname{M}_{s-1},\\xi )$ , $\\mathbb {P}[\\Pi (\\operatorname{L})\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {A}_s^1,\\mathcal {A}_s^2]\\le \\mathbb {P}[\\Pi (\\operatorname{L})\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}]\\,.$ Furthermore, for any realization on the good event $\\mathcal {G}_s^1$ and $s$ -good $\\xi $ -tuple $\\operatorname{L}\\in \\mathfrak {A}(\\operatorname{M}_{s-1},\\xi )$ , it holds uniformly that $\\mathbb {P}[\\Pi (\\operatorname{L})\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {A}_s^1,\\mathcal {A}_s^2]\\ge \\big [1-o(1)\\big ] \\mathbb {P}[\\Pi (\\operatorname{L})\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}]\\,.$ In addition, under such additional assumptions, for some positive constants $c_1, c_2$ depending only on $\\eta , \\alpha _\\eta $ and $\\mathbf {T}$ , we have $c_1n^\\chi p_\\eta ^\\zeta \\le \\mathbb {P}[\\Pi (\\operatorname{L})\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {A}_s^1,\\mathcal {A}_s^2]\\le c_2n^\\chi p_\\eta ^\\zeta \\,.$ Proposition 2.5 For any realization of $\\mathcal {F}_{s-1/2}$ on $\\mathcal {G}_s^1 \\cap \\mathcal {G}_s^2$ , it holds uniformly for $1\\le s \\le S$ 
that $\\begin{aligned}\\mathbb {E}\\left[X_s^2\\mid \\mathcal {F}_{s-1/2}\\right]\\le \\big [1+o(1)\\big ]\\big (\\mathbb {E}\\left[X_s\\mid \\mathcal {F}_{s-1/2}\\right]\\big )^2\\,.\\end{aligned}$ Proposition 2.6 The good events are typical, i.e.", "for $1\\le s\\le S$ , it holds uniformly that $\\mathbb {P}\\big [\\mathcal {G}_s^1\\big ]=1-o(1)\\,,\\quad \\mathbb {P}\\big [\\mathcal {G}_s^2\\big ]=1-o(1)\\,.$ [Proof of Theorem REF ] It is clear that this algorithm runs in polynomial time, so it remains to show (REF ).", "For each $1\\le s\\le S$ , whenever the realization of $\\mathcal {F}_{s-1/2}$ satisfies $\\mathcal {G}_s^1\\cap \\mathcal {G}_s^2$ , we can apply the second moment method and obtain that $\\mathbb {P}[I_s=1\\mid \\mathcal {F}_{s-1/2}]\\ge \\mathbb {P}[X_s>0\\mid \\mathcal {F}_{s-1/2}]\\ge \\frac{\\left(\\mathbb {E}[X_s\\mid \\mathcal {F}_{s-1/2}]\\right)^2}{\\mathbb {E}[X_s^2\\mid \\mathcal {F}_{s-1/2}]}\\,.$ Thus, from Proposition REF we get $\\mathbb {P}[I_s=0\\mid \\mathcal {F}_{s-1/2}]=o(1)$ , uniformly for all $1\\le s\\le S$ and all realization of $\\mathcal {F}_{s-1/2}$ satisfying $\\mathcal {G}_s^1\\cap \\mathcal {G}_s^2$ .", "Combined with Proposition REF , it yields that $\\begin{aligned}&\\ \\mathbb {E}\\Big [\\sum _{s=1}^S\\mathbf {1}_{\\lbrace I_s=0\\rbrace }\\Big ] = \\sum _{s=1}^S\\Big (\\mathbb {E}\\big [\\mathbf {1}_{\\lbrace I_s=0\\rbrace \\cap (\\mathcal {G}_s^1\\cap \\mathcal {G}_s^2)^c}\\big ]+\\mathbb {E}\\big [\\mathbf {1}_{\\lbrace I_s=0\\rbrace \\cap \\mathcal {G}_s^1\\cap \\mathcal {G}_s^2}\\big ]\\Big )\\\\\\le &\\ \\sum _{s=1}^S\\Big (\\mathbb {P}\\big [(\\mathcal {G}_s^1\\cap \\mathcal {G}_s^2)^c\\big ]+\\mathbb {E}\\big [\\mathbb {P}[I_s=0\\mid \\mathcal {F}_{s-1/2}]\\mathbf {1}_{\\mathcal {G}_s^1\\cap \\mathcal {G}_s^2}\\big ]\\Big )=o(n)\\,.\\end{aligned}$ Define the increment of the target quantity $\\mathrm {Overlap}(\\pi )$ in the $s$ -th step by $\\mathcal {O}_s=\\sum _{i<j, (i,j)\\in \\mathfrak {A}(\\operatorname{M}_{s+1},2)\\setminus \\mathfrak {A}(\\operatorname{M}_s,2)} G_{i,j}\\mathsf {G}_{\\pi (i),\\pi (j)}\\,.$ Then $\\lbrace I_s= 1\\rbrace \\subset \\lbrace \\mathcal {O}_s\\ge \\zeta \\rbrace $ for all $1\\le i\\le S$ .", "Therefore, $\\mathrm {Overlap}(\\pi )\\ge &\\ \\sum _{s=1}^S \\mathcal {O}_s\\ge \\zeta \\sum _{s=1}^S \\mathbf {1}_{\\lbrace I_s=1\\rbrace }=\\zeta S-\\zeta \\sum _{s=1}^S\\mathbf {1}_{\\lbrace I_s=0\\rbrace }\\\\\\ge &\\ \\frac{1-2\\eta -o(1)}{2\\alpha _\\eta -1}n-\\zeta \\sum _{s=1}^S\\mathbf {1}_{\\lbrace I_s=0\\rbrace }\\,.$ Since $\\frac{1-2\\eta }{2\\alpha _\\eta -1}>\\frac{1-\\varepsilon }{2\\alpha -1}$ by the choice of $\\eta $ in (REF ) and Lemma REF (i), we conclude from Markov's inequality that $\\begin{aligned}\\mathbb {P}\\left[\\mathrm {Overlap}(\\pi )\\le \\frac{1-\\varepsilon }{2\\alpha -1}n\\right]\\le \\frac{\\zeta \\cdot \\mathbb {E}\\left[\\sum _{i=1}^S\\mathbf {1}_{\\lbrace I_s=0\\rbrace }\\right]}{\\left(\\frac{1-2\\eta -o(1)}{2\\alpha _\\eta -1}-\\frac{1-\\varepsilon }{2\\alpha -1}\\right)n}\\stackrel{(\\ref {eq:sum-is-o(n)})}{=}o(1)\\,,\\end{aligned}$ completing the proof of the theorem." 
], [ "Complimentary proofs", "In this section, we prove Propositions REF , REF and REF .", "We need some more definitions and notations.", "For a subgraph $\\mathbf {H} \\subset \\mathbf {T}$ , we define its capacity by $ \\operatorname{Cap}(\\mathbf {H})=|V(\\mathbf {H})\\cap \\mathbf {Q}|-\\alpha _\\eta |E(\\mathbf {H})|\\,.$ For a subtree $\\mathbf {T}_0\\subset \\mathbf {T}$ , we say it is dense if $\\operatorname{Cap}(\\mathbf {F})> \\operatorname{Cap}(\\mathbf {T})$ for any subgraph $\\mathbf {F}\\subsetneq \\mathbf {T}_0$ with $V(\\mathbf {T}_0)\\cap \\mathbf {L} \\subset \\mathbf {F}$ .", "In addition, we choose a subgrah $\\mathbf {F}_0\\subset \\mathbf {T_0}$ with $V(\\mathbf {T}_0)\\cap \\mathbf {L}\\subset V(\\mathbf {F}_0)$ such that $\\operatorname{Cap}(\\mathbf {F}_0)$ is minimal (among all such choices for $\\mathbf {F}_0$ ).", "We define a quantity $\\mathbb {D}(\\mathbf {T}_0)$ by $\\log _n \\mathbb {D}(\\mathbf {T}_0)=\\operatorname{Cap}(\\mathbf {T}_0)-\\operatorname{Cap}(\\mathbf {F}_0)\\,.$ Throughout this section, we will frequently deal with the conditional probability that $\\mathsf {G}$ contains a subgraph $\\mathsf {T}\\cong \\mathbf {T}$ with fixed leaves given the existence of certain subgraph $\\mathsf {H}$ in $\\mathsf {G}$ .", "To this end, we decompose such event according to the intersecting pattern of $\\mathsf {T}\\cap \\mathsf {H}$ .", "For each possible realization $\\mathsf {F}$ of $\\mathsf {T}\\cap \\mathsf {H}$ , take $\\mathbf {F}\\subset \\mathbf {T}$ such that $\\mathbf {F}$ is the image of $\\mathsf {F}$ under the isomorphism from $\\mathsf {T}$ to $\\mathbf {T}$ .", "Then it is straightforward from Markov's inequality that $\\mathbb {P}[\\exists \\mathsf {T}\\subset \\mathsf {G} \\mbox{ such that } \\mathsf {T} \\cong \\mathbf {T} \\mbox{ and }\\mathsf {T}\\cap \\mathsf {H}=\\mathsf {F}\\mid \\mathsf {H}\\subset \\mathsf {G}]\\le n^{|V(\\textbf {T})\\setminus V(\\textbf {F})|}p_\\eta ^{|E(\\textbf {T})\\setminus E(\\textbf {F})|}\\,.$ (In the above, $\\mathsf {H}$ is a fixed subgraph with vertex set in $\\mathsf {V}$ , and the event $\\mathsf {H}\\subset \\mathsf {G}$ means that every edge in $\\mathsf {H}$ is contained in $\\mathsf {G}$ .)", "Thus, in order to upper-bound $\\mathbb {P}[\\exists \\mathsf {T}\\subset \\mathsf {G} \\mbox{ such that } \\mathsf {T} \\cong \\mathbf {T} \\mid \\mathsf {H}\\subset \\mathsf {G}]$ , we may sum over the right hand side of (REF ) over all possible $\\mathsf {F}$ .", "Finally, we introduce some notations.", "For $1\\le s\\le S$ , fix a realization of $\\mathcal {F}_{s-1}$ .", "Recall the definitions of $\\operatorname{Fail}_{s-1}$ , $\\operatorname{Suc}_{s-1}$ , $\\mathsf {Suc}_{s-1}$ and $\\mathcal {A}_s^1,\\mathcal {A}_s^2$ in (REF ), (REF ) and (REF ).", "Clearly $\\mathcal {A}_s^1$ is decreasing and $\\mathcal {A}_s^2$ is increasing.", "We let (below $L\\cup Q$ means the set of all vertices appearing in $L$ and $Q$ ) $O_{s-1}= \\bigcup _{(L,Q)\\in \\operatorname{Suc}_{s-1}}\\lbrace v:v\\in L\\cup Q\\rbrace \\text{\\quad and\\quad } \\mathsf {O}_{s-1}= \\bigcup _{(\\mathsf {L},\\mathsf {Q})\\in \\mathsf {Suc}_{s-1}}\\lbrace \\mathsf {v}:\\mathsf {v}\\in \\mathsf {L}\\cup \\mathsf {Q}\\rbrace .$ Then $\\mathsf {O}_{s-1}$ is the set of vertices involved in the event $\\mathcal {A}_s^2$ .", "From the algorithm we see $O_{s-1}\\subset M_{s-1}=I_G(\\operatorname{M}_{s-1})\\,,\\quad \\mathsf {O}_{s-1}\\subset \\mathsf {M}_{s-1}=\\Pi (\\operatorname{M}_{s-1})\\,.$ We define ${GO}_{s-1}$ (respectively 
$\mathsf {GO}_{s-1}$ ) as the graph on $M_{s-1}$ (respectively $\mathsf {M}_{s-1}$ ) which is the union of the trees that certify the events $\lbrace L\bowtie _{G} Q\rbrace $ for $(L,Q)\in \operatorname{Suc}_{s-1}$ (respectively $\lbrace \mathsf {L}\bowtie _{\mathsf {G}} \mathsf {Q}\rbrace $ for $(\mathsf {L},\mathsf {Q})\in \mathsf {Suc}_{s-1}$ ).", "Then it is clear that the event $\mathcal {A}_s^2$ is equivalent to $\mathsf {GO}_{s-1}\subset \mathsf {G}$ .", "For a given realization of $\mathcal {F}_{s-1}$ , we see that $O_{s-1},\mathsf {O}_{s-1}, {GO}_{s-1}$ and $\mathsf {GO}_{s-1}$ are deterministic, and in addition ${GO}_{s-1}\cong \mathsf {GO}_{s-1}$ through $\Pi \circ I_G^{-1}$ restricted to $M_{s-1}$ .", "In addition, the rule of Algorithm REF implies $\Delta ( {GO}_s)\le \kappa _0+\Delta (\mathbf {T})\,,$ where $\Delta (\cdot )$ is the maximal vertex degree.", "This is because, except for the first time a vertex is “used”, it always participates as a leaf vertex and thus contributes only 1 to the degree.", "For simplicity, we will write $\widehat{\mathbb {P}}$ for the conditional probability on $\mathsf {G}$ given by $\mathbb {P}\big [\cdot \mid \mathcal {A}_s^2\big ]=\mathbb {P}\big [\cdot \mid \mathsf {GO}_{s-1}\subset \mathsf {G}\big ].$" ], [ "Proof of Proposition REF", "We continue to fix a realization of $\mathcal {F}_{s-1}$ as above and also fix a $\xi $ -tuple $\operatorname{L}\in \mathfrak {A}(\operatorname{M}_{s-1},\xi )$ .", "We first show the upper bound (REF ).", "Note that $\widehat{\mathbb {P}}$ is a product measure which admits the FKG inequality.", "As a result, $\begin{split}&\ \mathbb {P}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1}|\mathcal {A}_s^1,\mathcal {A}_s^2\right]=\widehat{\mathbb {P}}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1}|\mathcal {A}_s^1\right]\\\le &\ \widehat{\mathbb {P}}[\Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1}]=\mathbb {P}[\Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1}]\,,\end{split}$ where the inequality holds because $\lbrace \Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1}\rbrace $ is increasing and $\mathcal {A}_s^1$ is decreasing, and the last equality follows from independence (recall (REF )).", "This gives the upper bound.", "Now we turn to the lower bound (REF ).", "The precise definitions of $\mathcal {G}_s^1$ and $s$ -good tuples will be given later in this subsection; for now we simply assume that $\mathcal {F}_{s-1}$ satisfies $\mathcal {G}_s^1$ and that $\operatorname{L}$ is $s$ -good.", "Let $\mathcal {U}$ be the event that there is a unique $\chi $ -tuple $\operatorname{Q}\in \mathfrak {A}(\operatorname{R}_{s-1}^{\prime },\chi )$ such that $\Pi (\operatorname{L})\bowtie _{\mathsf {G}} I_\mathsf {G}(\operatorname{Q})$ .", "Then we have $&\ \mathbb {P}[\Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1}\mid \mathcal {A}_s^1,\mathcal {A}_s^2]\ge \mathbb {P}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1},\mathcal {U} \mid \mathcal {A}_s^1,\mathcal {A}_s^2\right]\nonumber \\=&\ \frac{1}{\widehat{\mathbb {P}}[\mathcal {A}_s^1]}\sum _{\operatorname{Q}\in \mathfrak {A}(\operatorname{R}_{s-1}^{\prime },\chi )}{\widehat{\mathbb {P}}\left[\Pi (\operatorname{L})\bowtie _{\mathsf {G}}I_\mathsf {G}(\operatorname{Q}),\mathcal {U},\mathcal {A}_s^1\right]}\,.$ For each 
term in the preceding sum, note that $\mathcal {U}$ is decreasing conditioned on $\Pi (\operatorname{L})\bowtie _{\mathsf {G}} I_\mathsf {G}(\operatorname{Q})$ and thus we have ${\widehat{\mathbb {P}}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q}),\mathcal {U},\mathcal {A}_s^1\right]}= &\ \widehat{\mathbb {P}}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q}),\mathcal {U}\right]\cdot {\widehat{\mathbb {P}}\left[\mathcal {A}_s^1|\ \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q}),\mathcal {U}\right]}\nonumber \\=&\ {\mathbb {P}}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q}),\mathcal {U}\right]\cdot \widehat{\mathbb {P}}\left[\mathcal {A}_s^1\mid \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q}),\mathcal {U}\right]\nonumber \\\ge &\ {\mathbb {P}}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q}),\mathcal {U}\right] \cdot \widehat{\mathbb {P}}\left[\mathcal {A}_s^1\mid \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})\right],$ where the second equality follows from independence and the last inequality follows from the FKG inequality ($\widehat{\mathbb {P}}[\cdot \mid \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})]$ is still a product measure).", "Thus, to prove (REF ), the key is to show the following two lemmas.", "Lemma 3.1 For any $\operatorname{L}\in \mathfrak {A}(\operatorname{M}_{s-1}, \xi )$ , it holds that $\mathbb {P}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q}),\mathcal {U}\right]\ge \big [1-o(1)\big ]\mathbb {P}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})\right].$ Lemma 3.2 On some good event $\mathcal {G}_s^1$ (defined in Definition REF below), we have that for any $s$ -good $\operatorname{L}\in \mathfrak {A}(\operatorname{M}_{s-1}, \xi )$ , ${\widehat{\mathbb {P}}\left[\mathcal {A}_s^1\mid \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})\right]}\ge \big [1-o(1)\big ]\widehat{\mathbb {P}}\left[\mathcal {A}_s^1\right].$ We now continue to complete the proof of (REF ).", "Note that ${\mathbb {P}}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1}\right]\le \sum _{ \operatorname{Q}\in \mathfrak {A}(\operatorname{R}_{s-1}^{\prime },\chi )}\mathbb {P}\left[\Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})\right].$ Combined with (REF ), (REF ) and Lemmas REF and REF , this verifies (REF ).", "We next prove (REF ).", "The upper bound follows easily from Markov's inequality.", "As for the lower bound, by Lemma REF we have $&\mathbb {P}[\Pi (\operatorname{L})\bowtie _\mathsf {G}\mathsf {R}_{s-1}]\ge \sum _{\operatorname{Q}\in \mathfrak {A}(\operatorname{R}_{s-1}^{\prime },\chi )}\mathbb {P}\left[\Pi (\operatorname{L})\bowtie _{\mathsf {G}} I_\mathsf {G}(\operatorname{Q}),\mathcal {U}\right] \nonumber \\\ge & \big [1-o(1)\big ]\sum _{\operatorname{Q}\in \mathfrak {A}(\operatorname{R}_{s-1}^{\prime },\chi )}\mathbb {P}\left[\Pi (\operatorname{L})\bowtie _{\mathsf {G}} I_\mathsf {G}(\operatorname{Q})\right]\gtrsim n^\chi p_\eta ^\zeta \,.$ Combined with (REF ) and (REF ), this yields (REF ), completing the 
proof of Proposition REF .", "It remains to prove Lemmas REF and REF .", "[Proof of Lemma REF ] (REF ) is equivalent to $ \mathbb {P}\left[\mathcal {U}\mid \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})\right]\ge 1-o(1)$ .", "We now show $\mathbb {P}\left[\mathcal {U}^c\mid \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})\right]=o(1)$ by a union bound.", "Let $\mathsf {T}$ denote the tree in $\mathsf {G}$ that certifies $\Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})$ (so in particular $V(\mathsf {T})=\lbrace \mathsf {v}: \mathsf {v}\in \Pi (\operatorname{L})\cup I_\mathsf {G}(\operatorname{Q})\rbrace $ and $\mathsf {L}=\Pi (\operatorname{L})$ is the leaf set of $\mathsf {T}$ ).", "On the event $\mathcal {U}^c$ , there exists another tree $\mathsf {T}^{\prime }\cong \mathsf {T}$ with $V(\mathsf {T}) \ne V(\mathsf {T}^{\prime })$ such that $\mathsf {T}^{\prime } \subset \mathsf {G}$ and the leaf set of $\mathsf {T}^{\prime }$ is also $\mathsf {L}$ .", "Therefore, $\mathsf {T}$ and $\mathsf {T}^{\prime }$ intersect at some subgraph $\mathsf {F}$ with $\mathsf {L}\subset \mathsf {F}\subsetneq \mathsf {T}$ .", "By a union bound, we see $&\ \mathbb {P}[\mathcal {U}^c\mid \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})] \nonumber \\\le & \sum _{\mathsf {L}\subset \mathsf {F}\subsetneq \mathsf {T}}\mathbb {P}[\mbox{there exists } \mathsf {T}^{\prime }\subset \mathsf {G}\mbox{ such that } \mathsf {T}^{\prime } \cong \mathsf {T} \mbox{ and } \mathsf {T}^{\prime }\cap \mathsf {T}=\mathsf {F}\mid \mathsf {T}\subset \mathsf {G}] \nonumber \\\stackrel{(\mathrm {REF})}{\le }&\sum _{ \mathbf {L}\subset \mathbf {F}\subsetneq \mathbf {T}}n^{|V(\mathbf {T})\setminus V(\mathbf {F})|}p_\eta ^{|E(\mathbf {T})\setminus E(\mathbf {F})|}=\sum _{\mathbf {L}\subset \mathbf {F}\subsetneq \mathbf {T}}n^{|V(\mathbf {T})\setminus V(\mathbf {F})|-\alpha _\eta |E(\mathbf {T})\setminus E(\mathbf {F})|}\,, $ which is $o(1)$ by (REF ), as desired.", "The rest of this subsection is devoted to the proof of Lemma REF .", "Recalling the definition of $\mathsf {GO}_{s-1}$ , we see $\widehat{\mathbb {P}}$ is a product measure on the space $\Omega =\lbrace 0,1\rbrace ^{\mathsf {E}_0 \setminus E(\mathsf {GO}_{s-1})}$ .", "As before, denote by $\mathsf {T}\cong \mathbf {T}$ the subtree of $\mathsf {G}$ that certifies $\lbrace \Pi (\operatorname{L})\bowtie _\mathsf {G}I_\mathsf {G}(\operatorname{Q})\rbrace $ .", "For $\omega \in \Omega $ , we write $\omega =\omega _1\oplus \omega _2$ , where $\omega _1\in \Omega _1 = \lbrace 0, 1\rbrace ^{\mathsf {E}_0 \setminus (E(\mathsf {GO}_{s-1}) \cup E(\mathsf {T}))}$ and $\omega _2 \in \Omega _2 = \lbrace 0, 1\rbrace ^{E(\mathsf {T})}$ (here $\oplus $ means concatenation).", "It is then clear from the definition that $\widehat{\mathbb {P}}[\omega _1\oplus \omega _2]=\widehat{\mathbb {P}}[\omega _1]\widehat{\mathbb {P}}[\omega _2] \mbox{ and } \lbrace \Pi (\operatorname{L})\bowtie _{\mathsf {G}} I_\mathsf {G}(\operatorname{Q})\rbrace = \lbrace \omega _2=(1,\ldots ,1)\rbrace \,.$ For $i\in \lbrace 0, 1\rbrace $ , define ${A}_s^{i}=\left\lbrace \omega _1\in \Omega _1:\omega _1\oplus \lbrace i,\dots ,i\rbrace \in \mathcal {A}_s^1\right\rbrace \,.$ The fact 
", "The rest of this subsection is devoted to the proof of Lemma REF .", "Recalling the definition of $\\mathsf {GO}_{s-1}$ , we see $\\widehat{\\mathbb {P}}$ is a product measure on the space $\\Omega =\\lbrace 0,1\\rbrace ^{\\mathsf {E}_0 \\setminus E(\\mathsf {GO}_{s-1})}$ .", "Denote $\\mathsf {T}\\cong \\mathbf {T}$ as the subtree of $\\mathsf {G}$ that certifies $\\lbrace \\Pi (\\operatorname{L})\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})\\rbrace $ as before.", "For $\\omega \\in \\Omega $ , we write $\\omega =\\omega _1\\oplus \\omega _2$ , where $\\omega _1\\in \\Omega _1 = \\lbrace 0, 1\\rbrace ^{\\mathsf {E}_0 \\setminus (E(\\mathsf {GO}_{s-1}) \\cup E(\\mathsf {T}))}$ and $\\omega _2 \\in \\Omega _2 = \\lbrace 0, 1\\rbrace ^{E(\\mathsf {T})}$ (here $\\oplus $ means concatenation).", "It is then clear from the definition that $\\widehat{\\mathbb {P}}[\\omega _1\\oplus \\omega _2]=\\widehat{\\mathbb {P}}[\\omega _1]\\widehat{\\mathbb {P}}[\\omega _2] \\mbox{ and } \\lbrace \\Pi (\\operatorname{L})\\bowtie _{\\mathsf {G}} I_\\mathsf {G}(\\operatorname{Q})\\rbrace = \\lbrace \\omega _2=(1,\\ldots ,1)\\rbrace \\,.$", "For $i\\in \\lbrace 0, 1\\rbrace $ , define ${A}_s^{i}=\\left\\lbrace \\omega _1\\in \\Omega _1:\\omega _1\\oplus \\lbrace i,\\dots ,i\\rbrace \\in \\mathcal {A}_s^1\\right\\rbrace \\,.$", "The fact that $\\mathcal {A}_s^1$ is decreasing implies that ${A}_s^{0}$ is also decreasing and ${A}_{s}^{1}\\subset {A}_s^{0}$ .", "The left hand side of (REF ) can be expressed as $\\widehat{\\mathbb {P}}\\big [{A}_s^{1}\\big ]$ .", "For $\\widehat{\\mathbb {P}}\\big [\\mathcal {A}_s^1\\big ]$ we have $\\widehat{\\mathbb {P}}\\left[\\mathcal {A}_s^1\\right]=&\\ \\sum _{\\omega _1\\in \\Omega _1}\\sum _{\\omega _2\\in \\Omega _2}\\mathbf {1}_{\\mathcal {A}_s^1}( \\omega _1\\oplus \\omega _2)\\widehat{\\mathbb {P}}[\\omega _1\\oplus \\omega _2]\\\\\\le &\\ \\sum _{\\omega _1\\in \\Omega _1}\\mathbf {1}_{\\mathcal {A}_s^1}(\\omega _1\\oplus \\lbrace 0,\\dots ,0\\rbrace )\\widehat{\\mathbb {P}}[\\omega _1]\\sum _{\\omega _2\\in \\Omega _2}\\widehat{\\mathbb {P}}[\\omega _2]\\\\=&\\ \\widehat{\\mathbb {P}}\\left[\\big \\lbrace \\omega _1:\\omega _1\\oplus \\lbrace 0,\\dots ,0\\rbrace \\in \\mathcal {A}_s^1\\big \\rbrace \\right]=\\widehat{\\mathbb {P}}\\left[{A}_s^{0}\\right]\\,.$", "Provided with this, we see (REF ) reduces to $\\frac{\\widehat{\\mathbb {P}}[{A}_s^{1}]}{\\widehat{\\mathbb {P}}[{A}_s^{0}]}=\\widehat{\\mathbb {P}}\\big [{A}_s^{1}\\mid {A}_s^{0}\\big ]\\ge 1-o(1)\\,.$", "To this end, we note that for $\\omega _1 \\in A^0_s \\setminus A^1_s$ , the event $\\mathcal {A}_s^1$ holds when the edges in $E(\\mathsf {T})$ are all closed, while it fails after opening these edges.", "There are two possible scenarios for this: (i) opening edges in $\\mathsf {T}$ changes the realization of $\\mathcal {F}_{s-1}$ and hence alters the event $\\mathcal {A}_s^1$ ; (ii) opening edges in $\\mathsf {T}$ does not change the realization of $\\mathcal {F}_{s-1}$ , but some of these edges participate in the tuple which certifies the failure of $\\mathcal {A}_s^1$ .", "Denote by $\\mathcal {E}_s$ the event that after opening edges in $E(\\mathsf {T})$ , there exists a subgraph $\\mathsf {T}^{\\prime } \\subset \\mathsf {G}$ with an isomorphism $\\phi :\\mathbf {T}\\rightarrow \\mathsf {T}^{\\prime }$ such that $E(\\mathsf {T}^{\\prime }) \\cap E(\\mathsf {T}) \\ne \\emptyset \\mbox{ and } \\phi (\\mathbf {L}) = \\Pi (\\operatorname{L}^{\\prime }) \\mbox{ for some }\\operatorname{L}^{\\prime }\\in \\bigcup _{1\\le t \\le s-1}\\operatorname{EXP}_{t}\\,.$", "Clearly, $\\mathcal {E}_s$ is an increasing event.", "We claim that both scenarios imply $\\mathcal {E}_s$ .", "Indeed, if $\\mathcal {E}_s$ does not hold, then after opening edges in $E(\\mathsf {T})$ , the realization of $\\mathcal {F}_{s-1}$ remains the same by the rule of Algorithm REF .", "So, in particular, $\\mathsf {Fail}_{s-1}$ remains the same.", "In addition, from the definition of $\\mathcal {A}_s^1$ , we see that under the event $\\mathcal {E}_s^c$ , whether $\\mathcal {A}_s^1$ happens does not depend on the openness/closedness of edges in $E(\\mathsf {T})$ .", "This proves the claim, and now it suffices to show $\\widehat{\\mathbb {P}}\\big [\\mathcal {E}_s\\mid {A}_s^{0}\\big ]\\le \\widehat{\\mathbb {P}}[\\mathcal {E}_s]=o(1)\\,,$ where the first inequality follows from FKG.", "We divide $\\mathcal {E}_s$ according to the intersecting patterns of $\\mathsf {T}^{\\prime }$ with $\\mathsf {T}$ and $ \\mathsf {GO}_{s-1}$ .", "Denote by $\\mathcal {P}$ the pairs $(\\mathbf {F}_1,\\mathbf {F}_2)$ with $\\mathbf {F}_1,\\mathbf {F}_2\\subset \\mathbf {T}$ such that $\\mathbf {F}_1$ is a proper subtree of $\\mathbf {T}$ with at least one edge, $V(\\mathbf {F}_1)\\cap V(\\mathbf {F}_2)=\\emptyset $ and $ \\mathbf {L} \\subset V(\\mathbf {F}_1)\\cup V( \\mathbf {F}_2)$ .", "For a
pair $(\\mathbf {F}_1,\\mathbf {F}_2)\\in \\mathcal {P}$ , define $\\mathcal {E}_s(\\mathbf {F}_1,\\mathbf {F}_2)$ to be the event that (REF ) holds and that $\\phi \\big (E(\\mathbf {F}_1)\\big )\\cap E(\\mathsf {T})\\ne \\emptyset \\,,\\quad \\phi \\left(\\mathbf {F}_1\\cup \\mathbf {F}_2\\right)= \\mathsf {T}^{\\prime }\\cap (\\mathsf {GO}_{s-1}\\cup \\mathsf {T})\\,.$", "Recalling (REF ), we obtain $\\mathcal {E}_s\\subset \\mathcal {E}_s(\\mathbf {T},\\emptyset )\\cup \\bigcup _{(\\mathbf {F}_1,\\mathbf {F}_2)\\in \\mathcal {P}}\\mathcal {E}_s(\\mathbf {F}_1,\\mathbf {F}_2)\\,.$", "(Indeed, $\\mathbf {F}_1$ can be taken as any component of $\\phi ^{-1}(\\mathsf {T}^{\\prime }\\cap (\\mathsf {T}\\cup \\mathsf {GO}_{s-1}))$ whose image under $\\phi $ contains at least one edge in $E(\\mathsf {T})$ .)", "We want to exclude the case $\\mathcal {E}_s(\\mathbf {T},\\emptyset )$ using the assumption that $\\operatorname{L}$ is $s$ -good.", "To this end, we make the following definition.", "Definition 3.3 For a $\\xi $ -tuple $\\operatorname{L}\\in \\mathfrak {A}(\\operatorname{M}_{s-1},\\xi )$ , we say it is $s$ -bad, if for some $T \\cong \\mathbf {T}$ with leaf set $L=I_G(\\operatorname{L})$ and $V(T)\\cap M_{s-1}=L$ , we have that the graph $T\\cup GO_{s-1}$ contains a subgraph $T^*\\cong \\mathbf {T}$ satisfying that $E(T^*)\\cap E(T)\\ne \\emptyset $ and that the leaf set of $T^*$ equals $L^*=I_G(\\operatorname{L}^*)$ for some $\\operatorname{L}^*\\in \\bigcup _{1\\le t\\le s-1}\\operatorname{EXP}_{t}$ (note that in this definition, the statement for `some' $T\\cong \\mathbf {T}$ is equivalent to that for `any' $T\\cong \\mathbf {T}$ ).", "Otherwise we say $\\operatorname{L}\\in \\mathfrak {A}(\\operatorname{M}_{s-1},\\xi )$ is $s$ -good.", "From the definition and the fact that $GO_{s-1}\\cong \\mathsf {GO}_{s-1}$ through $\\Pi \\circ I_G^{-1}$ , we see that $\\mathcal {E}_s\\cap \\mathcal {E}_s(\\mathbf {T},\\emptyset )=\\emptyset $ under the assumption that $\\operatorname{L}$ is $s$ -good.", "Thus, (REF ) can be strengthened to $\\mathcal {E}_s\\subset \\bigcup _{(\\mathbf {F}_1,\\mathbf {F}_2)\\in \\mathcal {P}}\\mathcal {E}_s(\\mathbf {F}_1,\\mathbf {F}_2)\\,.$", "For a pair $(\\mathbf {F}_1,\\mathbf {F}_2)\\in \\mathcal {P}$ , let $\\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)$ be the number of $\\xi $ -tuples $\\operatorname{L}^{\\prime }\\in \\bigcup _{1\\le t\\le s-1}\\operatorname{EXP}_{t}$ for which there exist two embeddings $\\phi _i:\\mathbf {F}_i\\rightarrow \\mathsf {T}\\cup \\mathsf {GO}_{s-1}$ , $i=1,2$ , such that $\\phi _1\\big (E(\\mathbf {F}_1)\\big )\\cap E(\\mathsf {T})\\ne \\emptyset \\text{ and }\\phi _1\\big (V(\\mathbf {F}_1)\\cap \\mathbf {L}\\big )\\cup \\phi _2\\big (V(\\mathbf {F}_2)\\cap \\mathbf {L}\\big )= \\Pi (\\operatorname{L}^{\\prime })\\,.$", "(Note that $ \\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)$ depends on $\\mathsf {T}$ although we did not include $\\mathsf {T}$ in the notation.)", "Then $ \\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)$ is the number of possible choices for the leaf set of $\\mathsf {T}^{\\prime }$ which may potentially certify the event $\\mathcal {E}_s(\\mathbf {F}_1,\\mathbf {F}_2)$ .", "For each fixed choice, we see from (REF ) that the probability that this indeed certifies the event $\\mathcal {E}_s(\\mathbf {F}_1,\\mathbf {F}_2)$ is upper-bounded by $\\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2)=n^{|V(\\mathbf {T})|-|V(\\mathbf {F}_1)|-|V(\\mathbf {F}_2)|}p_\\eta ^{|E(\\mathbf {T})|-|E(\\mathbf {F}_1)|-|E(\\mathbf {F}_2)|}\\,.$
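", "To make the exponent counting in $\\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2)$ concrete, consider the purely hypothetical case where $\\mathbf {F}_1$ is a single edge of $\\mathbf {T}$ and $\\mathbf {F}_2=\\emptyset $ : then $\\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2)=n^{|V(\\mathbf {T})|-2}p_\\eta ^{|E(\\mathbf {T})|-1}\\,,$ reflecting that each of the $|V(\\mathbf {T})|-2$ vertices of $\\mathsf {T}^{\\prime }$ outside the prescribed intersection can be chosen in at most $n$ ways, while each of the $|E(\\mathbf {T})|-1$ remaining edges is open with probability $p_\\eta $ .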
", "Then by a union bound, for any pair $(\\mathbf {F}_1,\\mathbf {F}_2)\\in \\mathcal {P}$ it holds that $\\widehat{\\mathbb {P}}\\left[\\mathcal {E}_s(\\mathbf {F}_1,\\mathbf {F}_2)\\right]\\le \\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)\\times \\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2)\\,.$", "Let $\\mathcal {P}_0$ be the collection of pairs $(\\mathbf {F}_1,\\mathbf {F}_2) \\in \\mathcal {P}$ which maximize the right hand side of (REF ).", "We claim that for any $(\\mathbf {F}_1,\\mathbf {F}_2) \\in \\mathcal {P}_0$ the components of $\\mathbf {F}_2$ are dense trees which intersect $\\mathbf {L}$ .", "We now prove this claim by contradiction, and we divide the proof into two cases.", "If $\\mathbf {F}_2$ contains a tree component disjoint from $\\mathbf {L}$ , we can simply remove this component from $\\mathbf {F}_2$ to get a pair $(\\mathbf {F}_1,\\mathbf {F}_2^{\\prime })\\in \\mathcal {P}$ with $\\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2^{\\prime })> \\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2),\\quad \\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2^{\\prime })\\ge \\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)\\,,$ contradicting the maximality.", "If $\\mathbf {F}_2$ contains a tree component $\\mathbf {T}_0$ which intersects $\\mathbf {L}$ but is not dense, we may find some $\\mathbf {F}_0\\subsetneq \\mathbf {T}_0$ which contains $V(\\mathbf {T}_0)\\cap \\mathbf {L}$ such that $\\operatorname{Cap}(\\mathbf {F}_0)< \\operatorname{Cap}(\\mathbf {T}_0)$ (this is feasible since $\\alpha _\\eta $ is irrational).", "We define $\\mathbf {F}^{\\prime }_2$ by replacing $\\mathbf {T}_0$ with $\\mathbf {F}_0$ in $\\mathbf {F}_2$ and get a pair $(\\mathbf {F}_1,\\mathbf {F}_2^{\\prime })\\in \\mathcal {P}$ .", "Again, it satisfies $\\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2^{\\prime })>\\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2) \\mbox{ and } \\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2^{\\prime })\\ge \\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)\\,,$ contradicting the maximality.", "This completes the verification of the claim.", "Recall (REF ).", "We are now ready to define our good event $\\mathcal {G}_s^1$ .", "Definition 3.4 Fix some large constant $\\kappa _1=\\kappa _1(\\eta ,\\alpha _\\eta ,\\mathbf {T})$ which will be determined later.", "We define $\\mathcal {G}_s^1$ to be the event that for any $(\\mathbf {F}_1,\\mathbf {F}_2)\\in \\mathcal {P}_0$ and for any $\\mathsf {T}\\subset \\mathsf {G}$ , $\\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)\\le {\\left\\lbrace \\begin{array}{ll}\\kappa _1n\\times (np_\\eta )^{|E(\\mathbf {T})|-|E(\\mathbf {F}_2)|},\\quad &\\text{if }V(\\mathbf {F}_1)\\cap \\mathbf {L} = \\emptyset \\,,\\\\\\kappa _1\\mathbb {D}(\\mathbf {F}_1)\\times (np_\\eta )^{|E(\\mathbf {T})|-|E(\\mathbf {F}_1)|-|E(\\mathbf {F}_2)|},&\\text{if }V(\\mathbf {F}_1)\\cap \\mathbf {L} \\ne \\emptyset \\,.\\end{array}\\right.}$", "Since $|\\mathcal {P}|$ is uniformly bounded in $n$ , it suffices to show that on $\\mathcal {G}_s^1$ , $\\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)\\times \\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2) = o(1) \\mbox{ for any }(\\mathbf {F}_1, \\mathbf {F}_2)\\in \\mathcal {P}_0\\,.$", "To this end, we may write $(\\mathbf {F}_1,\\mathbf {F}_2)=(\\mathbf {T}_0,\\mathbf {T}_1\\cup \\cdots \\cup \\mathbf {T}_l)$ , where $\\mathbf {T}_i$ is a subtree of $\\mathbf {T}$ for $0\\le i\\le l$ .", "The proof is divided into two cases.", "In the case that $V(\\mathbf {T}_0)\\cap \\mathbf {L} = \\emptyset $ ,
the target product is upper-bounded by (recalling (REF )) $&\\kappa _1n^{\\chi +\\zeta }p_\\eta ^{2\\zeta }\\times n^{-|V(\\mathbf {T}_0)|+1}p_\\eta ^{-|E(\\mathbf {T}_0)|}\\times \\prod _{i=1}^{l}n^{-|V(\\mathbf {T}_i)\\cap \\mathbf {Q}|-|E(\\mathbf {T}_i)|}p_\\eta ^{-2|E(\\mathbf {T}_i)|}\\\\=&\\ \\kappa _1n^{\\chi -(2\\alpha _\\eta -1)\\zeta }\\times (np_\\eta )^{-|E(\\mathbf {T}_0)|}\\times \\prod _{i=1}^l n^{-|V(\\mathbf {T}_i)\\cap \\mathbf {Q}|+(2\\alpha _\\eta -1)|E(\\mathbf {T}_i)|}\\,.$", "Note that the second term (i.e., $(np_\\eta )^{-|E(\\mathbf {T}_0)|}$ ) is no more than $n^{\\alpha _\\eta -1}$ since $E(\\mathbf {T}_0)\\ne \\emptyset $ and that the third term is bounded by 1 from (REF ).", "Thus, we see the expression above is upper-bounded by $\\kappa _1n^{\\chi -(2\\alpha _\\eta -1)\\zeta }\\times n^{\\alpha _\\eta -1}$ , which is $o(1)$ by Lemma REF (i).", "This proves the desired bound in this case.", "In the case that $V(\\mathbf {T}_0)\\cap \\mathbf {L} \\ne \\emptyset $ , we may write $\\mathbb {D}(\\mathbf {T}_0)=n^{\\operatorname{Cap}(\\mathbf {T}_0)- \\operatorname{Cap}(\\mathbf {F}_0)}=n^{\\operatorname{Cap}(\\mathbf {T}_0)-\\sum _{j=1}^m \\operatorname{Cap}(\\mathbf {T}_j^{\\prime })}\\,,$ where $\\mathbf {F}_0 \\subset \\mathbf {T}_0$ is the union of disjoint trees $\\mathbf {T}_1^{\\prime }, \\ldots , \\mathbf {T}_m^{\\prime }$ with $V(\\mathbf {T}_0)\\cap \\mathbf {L} \\subset V(\\mathbf {F}_0)$ such that $ \\operatorname{Cap}(\\mathbf {F}_0)$ is minimized.", "As a result, we see the product $ \\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)\\times \\operatorname{Prob}(\\mathbf {F}_1,\\mathbf {F}_2)$ is upper-bounded by $\\kappa _1n^{\\chi +\\zeta }p_\\eta ^{2\\zeta }\\times n^{-\\sum _{j=1}^m \\operatorname{Cap}(\\mathbf {T}_j^{\\prime })}\\times (np_\\eta )^{-|E(\\mathbf {T}_0)|}\\times \\prod _{i=1}^{l}n^{-|V(\\mathbf {T}_i)\\cap \\mathbf {Q}|-|E(\\mathbf {T}_i)|}p_\\eta ^{-2|E(\\mathbf {T}_i)|}\\,.$", "If some $\\mathbf {T}_j^{\\prime }$ is not a singleton, then (REF ) is upper-bounded by (recalling that the product above is upper-bounded by 1) $&\\ \\kappa _1n^{\\chi +\\zeta }p_\\eta ^{2\\zeta }\\times n^{-\\sum _{j=1}^m \\operatorname{Cap}(\\mathbf {T}_j^{\\prime })}\\times (np_\\eta )^{-\\sum _{j=1}^m|E(\\mathbf {T}_j^{\\prime })|}\\\\\\le &\\ \\kappa _1n^{\\chi -(2\\alpha _\\eta -1)\\zeta }\\prod _{j=1}^m n^{-|V(\\mathbf {T}_j^{\\prime })\\cap Q(\\mathbf {T})|+(2\\alpha _\\eta -1)|E(\\mathbf {T}_j^{\\prime })|}\\,,$ which is $o(1)$ by Lemma REF (i) and (iv).", "If each $\\mathbf {T}_j^{\\prime }$ is a singleton, then (REF ) is upper-bounded by $\\kappa _1n^{\\chi +\\zeta }p_\\eta ^{2\\zeta }\\times (np_\\eta )^{-1} \\le \\kappa _1n^{\\chi -(2\\alpha _\\eta -1)\\zeta }\\times n^{\\alpha _\\eta -1}\\,,$ which is also $o(1)$ by Lemma REF (i).", "This completes the proof in this case, and thus finally completes the proof of Lemma REF .
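", "We also record the elementary exponent identity behind these manipulations: since the computations above identify $n^{|V|}p_\\eta ^{|E|}$ with $n^{|V|-\\alpha _\\eta |E|}$ (that is, $p_\\eta =n^{-\\alpha _\\eta }$ ), we have $n^{\\chi +\\zeta }p_\\eta ^{2\\zeta }=n^{\\chi +\\zeta -2\\alpha _\\eta \\zeta }=n^{\\chi -(2\\alpha _\\eta -1)\\zeta }\\quad \\mbox{and}\\quad (np_\\eta )^{-1}=n^{\\alpha _\\eta -1}\\,,$ which is how the factor $n^{\\chi -(2\\alpha _\\eta -1)\\zeta }$ arises in both cases treated above.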
], [ "Proof of Proposition ", "We continue to fix some $1\\le s\\le S$ and a realization of $\\mathcal {F}_{s-1}$ .", "We further fix a realization of $\\operatorname{CAND}_s$ , and hence we also get the set of $s$ -good $\\xi $ -tuples in $\\operatorname{CAND}_s$ , denoted as $\\operatorname{GC}_s=\\lbrace \\operatorname{L}_1,\\dots ,\\operatorname{L}_l\\rbrace $ .", "For any nonempty subset $\\mathbf {R} \\subset \\mathbf {L}$ , denote $\\mathbf {Span}(\\mathbf {R})$ for the subtree of $\\mathbf {T}$ spanned by $\\mathbf {R}$ (i.e., the minimal subtree that contains $\\mathbf {R}$ ).", "Throughout this section, it will be convenient to partition pairs in $\\operatorname{GC}_s\\times \\operatorname{GC}_s$ according to their intersecting patterns.", "More precisely, for two $\\xi $ -tuples $\\operatorname{L} = (t_1, \\ldots , t_\\xi )$ and $\\operatorname{L}^{\\prime } = (t^{\\prime }_1, \\ldots , t^{\\prime }_\\xi )$ , we let $\\operatorname{Loc}(\\operatorname{L},\\operatorname{L}^{\\prime }) = \\lbrace 1\\le i\\le \\xi : t^{\\prime }_i \\in \\lbrace t_1, \\ldots , t_\\xi \\rbrace \\rbrace $ .", "For any subset $\\mathbf {R}\\subset \\mathbf {L}$ (Recall that $\\mathbf {L}=\\lbrace \\chi +1,\\dots ,\\chi +\\xi \\rbrace $ ), we let $\\operatorname{IP}_s(\\mathbf {R})=\\big \\lbrace (\\operatorname{L}_i,\\operatorname{L}_j) \\in \\operatorname{GC}_s \\times \\operatorname{GC}_s :\\operatorname{Loc}(\\operatorname{L}_i,\\operatorname{L}_j)=\\lbrace r-\\chi , r\\in \\mathbf {R}\\rbrace \\big \\rbrace \\,.$ We are now ready to define our good event $\\mathcal {G}_s^2$ .", "Definition 3.5 Fix some positive constants $\\kappa _2,\\kappa _3$ depending only on $\\eta ,\\alpha _\\eta $ and $\\mathbf {T}$ which will be determined later.", "The good event $\\mathcal {G}_s^2$ is the intersection of the following two events: (i) $l=|\\operatorname{GC}_s|\\ge \\kappa _2(np_\\eta )^\\zeta $ .", "(ii) For any nonempty subset $\\mathbf {R}\\subset \\mathbf {L}$ , $|\\operatorname{IP}_s(\\mathbf {R})| \\le \\kappa _3(np_\\eta )^\\zeta \\times \\mathbb {D}(\\mathbf {Span}(\\mathbf {R}))\\times (np_\\eta )^{|E(\\mathbf {T})\\setminus E(\\mathbf {Span}(\\mathbf {R}))|}\\,.$ We next assume that the realization $\\mathcal {F}_{s-1/2}=\\sigma (\\mathcal {F}_s\\cup \\operatorname{CAND}_s)$ satisfies $\\mathcal {G}_s^1\\cap \\mathcal {G}_s^2$ and prove (REF ).", "For simplicity, we write $\\mathsf {L}_i=\\Pi (\\operatorname{L}_i)$ .", "Since $\\mathcal {F}_{s-1}$ satisfies $\\mathcal {G}_s^1$ , from Proposition REF we see for any $\\mathsf {L}_i\\in \\Pi \\left(\\operatorname{GC}_s\\right)$ , $\\mathbb {P}[\\mathsf {L}_i\\bowtie _{\\mathsf {G}} \\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1/2}]=\\mathbb {P}[\\mathsf {L}_i\\bowtie _{\\mathsf {G}}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\ge \\big [1-o(1)\\big ]\\mathbb {P}[\\mathsf {L}_i\\bowtie _{\\mathsf {G}} \\mathsf {R}_{s-1}]\\ge c_1n^\\chi p_\\eta ^\\zeta \\,.$ Combined with (i) in $\\mathcal {G}_s^2$ , this yields a lower bound on the right hand side of (REF ): $\\begin{split}\\big (\\mathbb {E}\\left[X_s\\mid \\mathcal {F}_{s-1/2}\\right]\\big )^2=\\left(\\sum _{i=1}^l \\mathbb {P}[\\mathsf {L}_i\\bowtie _{\\mathsf {G}} \\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1/2}]\\right)^2\\gtrsim l^2\\big (n^{\\chi }p_\\eta ^{\\zeta }\\big )^2\\gtrsim \\big (n^{\\chi +\\zeta }p_\\eta ^{2\\zeta }\\big )^2\\,.\\end{split}$ For the left hand side of (REF ), we may expand it out and break the sum into several parts as follows: $\\nonumber \\ \\mathbb {E}\\left[X_s^2\\mid 
\\mathcal {F}_{s-1/2}\\right]=&\\sum _{i,j=1}^l \\mathbb {P}[\\mathsf {L}_i\\bowtie _{\\mathsf {G}} \\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _{\\mathsf {G}}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1/2}]\\\\=&\\sum _{(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\emptyset )}\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1} ,\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}] \\\\+&\\sum _{\\emptyset \\ne \\mathbf {R}\\subset \\mathbf {L}}\\sum _{(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\mathbf {R})}\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\,.$", "We estimate the probabilities in the sum above by the following two lemmas.", "Lemma 3.6 For any $(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\emptyset )$ , it holds that $\\begin{aligned}&\\ \\mathbb {P}[\\mathsf {L}_i\\bowtie _{\\mathsf {G}} \\mathsf {R}_{s-1}, \\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\\\\\le &\\ \\big [1+o(1)\\big ]\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\,.\\end{aligned}$", "Lemma 3.7 For any nonempty subset $\\mathbf {R} \\subset \\mathbf {L}$ , let $\\mathbf {F}_{\\mathbf {R}}$ be the subgraph of $\\mathbf {T}$ with $V(\\mathbf {F}_{\\mathbf {R}})\\cap \\mathbf {L} = \\mathbf {R}$ such that $ \\operatorname{Cap}(\\mathbf {F}_{\\mathbf {R}})$ is minimized among all such subgraphs.", "Then for any $(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\mathbf {R})$ , it holds that $\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\lesssim \\big (n^\\chi p_\\eta ^\\zeta \\big )^2\\times n^{- \\operatorname{Cap}(\\mathbf {F}_{\\mathbf {R}})}.$", "[Proof of Proposition REF ] From Lemma REF , (REF ) is upper-bounded by $\\big [1+o(1)&\\big ]\\sum _{(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\emptyset )}\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\\\\\le \\big [1+o(1)&\\big ]\\sum _{i,j=1}^l\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1} \\mid \\mathcal {F}_{s-1}]\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1} \\mid \\mathcal {F}_{s-1}]$ which is $\\big [1+o(1)\\big ]\\big (\\mathbb {E}\\left(X_s\\mid \\mathcal {F}_{s-1}\\right)\\big )^2$ .", "In addition, for each nonempty subset $\\mathbf {R}\\subset \\mathbf {L}$ , by (ii) in $\\mathcal {G}_s^2$ and Lemma REF we see that $\\sum _{(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\mathbf {R})}\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]$ is bounded by a constant times $\\nonumber &\\ (np_\\eta )^\\zeta \\times \\mathbb {D}(\\mathbf {Span}(\\mathbf {R}))\\times (np_\\eta )^{|E(\\mathbf {T})\\setminus E(\\mathbf {Span}(\\mathbf {R}))|}\\times \\big (n^\\chi p_\\eta ^\\zeta \\big )^2\\times n^{- \\operatorname{Cap}(\\mathbf {F}_{\\mathbf {R}})}\\\\=&\\ \\big (n^{\\chi +\\zeta }p_\\eta ^{2\\zeta }\\big )^2\\times n^{-|E(\\mathbf {Span}(\\mathbf
{R}))|+|V(\\mathbf {Span}(\\mathbf {R}))\\cap \\mathbf {Q}|}\\times n^{-2 \\operatorname{Cap}(\\mathbf {F}_{\\mathbf {R}})}\\,, $ where we used the fact that $\\mathbb {D}(\\mathbf {Span}(\\mathbf {R}))=n^{\\operatorname{Cap}(\\mathbf {Span}(\\mathbf {R}))-\\operatorname{Cap}(\\mathbf {F}_{\\mathbf {R}})}$ .", "Suppose $\\mathbf {F}_{\\mathbf {R}}$ is a union of subtrees $\\mathbf {T}_1,\\dots ,\\mathbf {T}_r$ of $\\mathbf {T}$ .", "We note that $|E(\\mathbf {Span}(\\mathbf {R}))|-|V(\\mathbf {Span}(\\mathbf {R}))\\cap \\mathbf {Q}| =|\\mathbf {R}| - 1 \\ge |\\mathbf {R}| - r = \\sum _{i=1}^r\\big (|E(\\mathbf {T}_i)|-|V(\\mathbf {T}_i)\\cap \\mathbf {Q}|\\big )\\,.$", "Thus, the term $n^{-|E(\\mathbf {Span}(\\mathbf {R}))|+|V(\\mathbf {Span}(\\mathbf {R}))\\cap \\mathbf {Q}|}\\times n^{-2 \\operatorname{Cap}(\\mathbf {F}_{\\mathbf {R}})}$ in (REF ) is bounded by $\\prod _{i=1}^r n^{-|E(\\mathbf {T}_i)|+|V(\\mathbf {T}_i)\\cap \\mathbf {Q}|-2 \\operatorname{Cap}(\\mathbf {T}_i)}=\\prod _{i=1}^r n^{-|V(\\mathbf {T}_i)\\cap \\mathbf {Q}|+(2\\alpha _\\eta -1)|E(\\mathbf {T}_i)|}\\,,$ which is $o(1)$ from Lemma REF (iv).", "This shows that (REF ) is $o\\Big (\\big (n^{\\chi +\\zeta }p_\\eta ^{2\\zeta }\\big )^2\\Big )$ for each $\\mathbf {R}$ .", "Since the number of possible $\\mathbf {R}$ is uniformly bounded in $n$ , we complete the proof by combining with (REF ).", "It remains to prove Lemma REF and Lemma REF .", "We note that for any two tuples $\\mathsf {L}_i,\\mathsf {L}_j\\in \\Pi \\left(\\operatorname{GC}_s\\right)$ , it holds that $&\\ \\mathbb {P}[\\mathsf {L}_i\\bowtie _{\\mathsf {G}} \\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]=\\widehat{\\mathbb {P}}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {A}_s^1] \\nonumber \\\\\\le &\\ \\widehat{\\mathbb {P}}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}]= \\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}]\\,,$ where the inequality follows from the FKG inequality and the last equality follows from independence.", "Then it is clear that $\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1},\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1} \\mid \\mathcal {F}_{s-1}]$ is upper-bounded by $&\\sum _{\\operatorname{Q}\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )}\\mathbb {P}[\\mathsf {L}_i\\bowtie _{\\mathsf {G}} I_\\mathsf {G}(\\operatorname{Q}),\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}] \\nonumber \\\\=&\\sum _{\\operatorname{Q}\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )}\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})]\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})]\\,.$", "By Proposition REF and (REF ), we have $\\sum _{\\operatorname{Q}\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )}\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})]\\le \\big [1+o(1)\\big ]\\mathbb {P}[\\mathsf {L}_i\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\lesssim n^\\chi p_\\eta ^\\zeta \\,.$", "Thus it remains to show that for each $\\operatorname{Q}\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )$ , we have $\\mathbb {P}[\\mathsf
{L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1} \\mid \\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})]\\le {\\left\\lbrace \\begin{array}{ll}\\big [1+o(1)\\big ]\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}]\\,,&\\text{ if }(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\emptyset )\\,;\\\\O\\big (n^\\chi p_\\eta ^\\zeta \\times n^{- \\operatorname{Cap}(\\mathbf {F}_{\\mathbf {R}})}\\big )\\,,&\\text{ if }(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\mathbf {R}), \\mathbf {R}\\ne \\emptyset \\,.\\end{array}\\right.}$", "Fix such a tuple $\\operatorname{Q}$ and denote by $\\mathsf {T} \\subset \\mathsf {G}$ the subtree that certifies $\\lbrace \\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})\\rbrace $ .", "Similarly, for each $\\operatorname{Q}^{\\prime }\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )$ , we let $\\mathsf {T}^{\\prime } \\subset \\mathsf {G}$ be the subtree that certifies $\\lbrace \\mathsf {L}_j\\bowtie _{\\mathsf {G}} I_\\mathsf {G}(\\operatorname{Q}^{\\prime })\\rbrace $ .", "Then from a union bound we get $&\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1} \\mid \\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})]\\le \\ \\sum _{\\operatorname{Q}^{\\prime }\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )}\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q}^{\\prime })\\mid \\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})] \\nonumber \\\\=&\\ \\sum _{\\begin{array}{c}\\operatorname{Q}^{\\prime }\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )\\\\E(\\mathsf {T})\\cap E(\\mathsf {T}^{\\prime })=\\emptyset \\end{array}}\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q}^{\\prime })]+\\sum _{\\begin{array}{c}\\operatorname{Q}^{\\prime }\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )\\\\E(\\mathsf {T})\\cap E(\\mathsf {T}^{\\prime })\\ne \\emptyset \\end{array}}\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q}^{\\prime })\\mid \\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})]\\,,$ where the equality follows from independence.", "For any subset $\\mathbf {R} \\subset \\mathbf {L}$ (possibly $\\mathbf {R} = \\emptyset $ ), we define $\\mathcal {P}(\\mathbf {R})$ as the collection of nonempty subgraphs $\\mathbf {F} \\subset \\mathbf {T}$ with $V(\\mathbf {F})\\cap \\mathbf {L}=\\mathbf {R}$ .", "[Proof of Lemma REF ] For the case $(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\emptyset )$ , we have $\\mathsf {L}_i\\cap \\mathsf {L}_j=\\emptyset $ .", "The first sum in (REF ) is bounded by $\\big [1+o(1)\\big ]\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]$ for the same reason as (REF ).", "As for the second sum in (REF ), it can be upper-bounded by $\\sum _{\\mathbf {F}\\in \\mathcal {P}(\\emptyset )}\\sum _{\\begin{array}{c}{\\operatorname{Q}^{\\prime }\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )}\\\\ \\mathsf {T} \\cap \\mathsf {T}^{\\prime }\\cong \\mathbf {F}\\end{array}}\\mathbb {P}[\\mathsf {L}_j\\bowtie _{\\mathsf {G}}I_\\mathsf {G}(\\operatorname{Q}^{\\prime })\\mid \\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})]\\,.$", "For each $\\mathbf {F}\\in \\mathcal {P}(\\emptyset )$ , it is readily seen that the second summation above is bounded by
$n^{|\\mathbf {Q}\\setminus V(\\mathbf {F})|}p_\\eta ^{|E(\\mathbf {T})\\setminus E(\\mathbf {F})|}=n^\\chi p_\\eta ^\\zeta \\times n^{-|V(\\mathbf {F})|}p_\\eta ^{-|E(\\mathbf {F})|}=o\\big (n^\\chi p_\\eta ^\\zeta \\big )=o\\big (\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}\\mathsf {R}_{s-1}\\mid \\mathcal {F}_{s-1}]\\big )\\,,$ where we used the fact that $|V(\\mathbf {F})|>|E(\\mathbf {F})|\\ge \\alpha _\\eta |E(\\mathbf {F})|$ for any $\\mathbf {F}\\in \\mathcal {P}(\\emptyset )$ , since $\\mathbf {F}$ is a nonempty forest and $\\alpha _\\eta <1$ .", "Since the cardinality of $\\mathcal {P}(\\emptyset )$ is uniformly bounded in $n$ , this concludes Lemma REF .", "[Proof of Lemma REF ] For the case $(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\mathbf {R})$ with $\\mathbf {R}\\ne \\emptyset $ , similarly, the first sum in (REF ) remains $O(n^\\chi p_\\eta ^\\zeta )$ .", "Note that the second sum can be bounded as $\\sum _{\\mathbf {F}\\in \\mathcal {P}(\\mathbf {R})}\\sum _{\\begin{array}{c}{\\operatorname{Q}^{\\prime }\\in \\mathfrak {A}(\\operatorname{R}_{s-1}^{\\prime },\\chi )}\\\\ \\mathsf {T}\\cap \\mathsf {T}^{\\prime }\\cong \\mathbf {F}\\end{array}}\\mathbb {P}[\\mathsf {L}_j\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q}^{\\prime })\\mid \\mathsf {L}_i\\bowtie _\\mathsf {G}I_\\mathsf {G}(\\operatorname{Q})]\\le O\\big (n^\\chi p_\\eta ^\\zeta \\times \\max _{\\mathbf {F}\\in \\mathcal {P}(\\mathbf {R})}n^{-\\operatorname{Cap}(\\mathbf {F})}\\big )\\,.$", "Since $\\operatorname{Cap}(\\mathbf {F}_\\mathbf {R}) = \\min _{\\mathbf {F}\\in \\mathcal {P}(\\mathbf {R})} \\operatorname{Cap}(\\mathbf {F})$ , this yields Lemma REF ." ], [ "Proof of Proposition ", "We start with some notation.", "For any simple graph $\\mathbf {H}$ and $\\mathbf {R} \\subset V(\\mathbf {H})$ with $|\\mathbf {R}|=r\\ge 1$ , and any $r$ -tuple $\\operatorname{I} \\in \\mathfrak {A}([n],r)$ , we define $\\operatorname{EXT}(\\operatorname{I};\\mathbf {R},\\mathbf {H})$ to be the collection of subgraphs $H$ in $ G$ satisfying the following condition: there is an isomorphism $\\phi :\\mathbf {H}\\rightarrow H$ such that $\\phi (\\mathbf {R})= ( v_i)_{i\\in \\operatorname{I}}$ (note that here the equality is understood in the sense of tuples, not just in the sense of sets).", "In addition, we let $\\operatorname{Ext}(\\operatorname{I};\\mathbf {R},\\mathbf {H}) = |\\operatorname{EXT}(\\operatorname{I};\\mathbf {R},\\mathbf {H})|$ .", "For a pair $(\\mathbf {R},\\mathbf {H})$ , we say it is a dense pattern, if for any subgraph $\\mathbf {H}^{\\prime }\\subsetneq \\mathbf {H}$ with $\\mathbf {R}\\subset V(\\mathbf {H}^{\\prime })$ , $|V(\\mathbf {H})\\setminus V(\\mathbf {H}^{\\prime })|<\\alpha _\\eta |E(\\mathbf {H})\\setminus E(\\mathbf {H}^{\\prime })|.$", "We say $(\\mathbf {R}, \\mathbf {H})$ is a sparse pattern, if for any subgraph $\\mathbf {H}^{\\prime }\\subset \\mathbf {H}$ with $\\mathbf {R}\\subset V(\\mathbf {H}^{\\prime })$ and $E(\\mathbf {H}^{\\prime }) \\ne \\emptyset $ , $|V(\\mathbf {H}^{\\prime })\\setminus \\mathbf {R}|>\\alpha _\\eta |E(\\mathbf {H}^{\\prime })|.$", "Note that dense and sparse patterns are mutually exclusive (this can be checked by contradiction, taking $\\mathbf {H}^{\\prime } = \\mathbf {R}$ in (REF ) and $\\mathbf {H}^{\\prime } = \\mathbf {H}$ in (REF )), but they are not mutually complementary; in addition, the sparse pattern corresponds to the $\\alpha _\\eta $ -safe extension in [40].
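", "As a toy illustration of these two definitions (purely hypothetical and not used below), let $\\mathbf {H}$ be a single edge on $\\lbrace \\mathbf {u},\\mathbf {v}\\rbrace $ with $\\mathbf {R}=\\lbrace \\mathbf {u}\\rbrace $ : taking $\\mathbf {H}^{\\prime }$ to be the isolated vertex $\\mathbf {u}$ in (REF ) shows that $(\\mathbf {R},\\mathbf {H})$ is dense only if $1<\\alpha _\\eta $ , while the only admissible choice $\\mathbf {H}^{\\prime }=\\mathbf {H}$ in (REF ) shows that it is sparse precisely when $1>\\alpha _\\eta $ ; thus in the regime $\\alpha _\\eta <1$ a pendant edge is a sparse pattern, and the mutual exclusivity of the two notions is visible directly.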
", "Recall (REF ) for the definition of $\\mathbb {D}(\\mathbf {T}_0)$ for $\\mathbf {T}_0\\subset \\mathbf {T}$ .", "We introduce yet another good event $\\mathcal {G}$ as follows: Definition 3.8 Fix a large constant $\\kappa _4=\\kappa _4(\\alpha _\\eta ,\\mathbf {T})$ which will be determined later.", "Define $\\mathcal {G}$ as the event that for any $\\emptyset \\ne \\mathbf {R}\\subset \\mathbf {L}$ and any tree $\\mathbf {T}_0 \\subset \\mathbf {T}$ with $V(\\mathbf {T}_0)\\cap \\mathbf {L} =\\mathbf {R}$ , $\\max _{\\operatorname{I}\\in \\mathfrak {A}([n],|\\mathbf {R}|)}\\operatorname{Ext}(\\operatorname{I};\\mathbf {R},\\mathbf {T}_0)\\le \\kappa _4\\mathbb {D}(\\mathbf {T}_0)\\,.$", "Lemma 3.9 For some large constant $\\kappa _4$ , it holds that $\\mathbb {P}[\\mathcal {G}^c]=o(1)$ .", "Since the number of pairs $(\\mathbf {R},\\mathbf {T}_0)$ is bounded in $n$ , we only need to prove (REF ) holds with probability tending to 1 for any fixed pair.", "For a fixed pair $(\\mathbf {R},\\mathbf {T}_0)$ , we take a subgraph $\\mathbf {F}_0 \\subset \\mathbf {T}_0$ with $V(\\mathbf {F}_0)\\cap \\mathbf {L} = \\mathbf {R}$ such that $ \\operatorname{Cap}(\\mathbf {F}_0)$ is minimized.", "Note that subgraphs in $\\operatorname{EXT}(\\operatorname{I};\\mathbf {R},\\mathbf {T}_0)$ can be “constructed” as follows: Step 1. choose a subgraph $F\\cong \\mathbf {F}_0$ in $G$ with fixed leaves in $\\mathbf {R}$ ; Step 2. add vertices and edges to $F$ and get a final subgraph $T\\cong \\mathbf {T}_0$ in $G$ .", "Since both $\\mathbf {F}_0$ and $\\mathbf {T}_0\\setminus _\\star \\mathbf {F}_0$ are forests, we may assume $\\mathbf {T}_1,\\ldots ,\\mathbf {T}_l$ are the components of $\\mathbf {F}_0$ , and $\\mathbf {T}_1^{\\prime },\\ldots ,\\mathbf {T}_m^{\\prime }$ are the components of $\\mathbf {T}_0\\setminus _\\star \\mathbf {F}_0$ .", "Denote $\\mathbf {R}_i=V(\\mathbf {T}_i)\\cap \\mathbf {R}$ for $1\\le i\\le l$ and $\\mathbf {R}_j^{\\prime }=V(\\mathbf {F}_0)\\cap V(\\mathbf {T}_j^{\\prime })$ for $1\\le j\\le m$ .", "By the minimality of $\\operatorname{Cap}(\\mathbf {F}_0)$ and the irrationality of $\\alpha _\\eta $ , we see that $(\\mathbf {R}_i,\\mathbf {T}_i)$ is a dense pattern for each $1\\le i\\le l$ and $(\\mathbf {R}_j^{\\prime },\\mathbf {T}_j^{\\prime })$ is a sparse pattern for each $1\\le j\\le m$ .", "It then suffices to show the following two items.", "For any fixed dense pattern $(\\mathbf {R},\\mathbf {H})$ , there exists a large constant $\\kappa $ such that with probability $1-o(1)$ , $\\max _{\\operatorname{I}\\in \\mathfrak {A}([n],|\\mathbf {R}|)}\\operatorname{Ext}(\\operatorname{I};\\mathbf {R},\\mathbf {H})\\le \\kappa \\,.$", "For any fixed sparse pattern $(\\mathbf {R},\\mathbf {H})$ , there exists a large constant $\\kappa $ such that with probability $1-o(1)$ , $\\max _{\\operatorname{I}\\in \\mathfrak {A}([n],|\\mathbf {R}|)}\\operatorname{Ext}(\\operatorname{I};\\mathbf {R},\\mathbf {H})\\le \\kappa n^{|V(\\mathbf {H})\\setminus \\mathbf {R}|-\\alpha _\\eta |E(\\mathbf {H})|}\\,.$", "By [40] (where the condition $\\alpha _\\eta $ -safe is equivalent to the sparsity of $(\\mathbf {R},\\mathbf {H})$ ), we see that (REF ) holds.", "Thus, it remains to prove (REF ).", "To this end, we will use the following intuition: for a dense pattern $(\\mathbf {R},\\mathbf {H})$ , if $\\operatorname{Ext}(\\operatorname{I};\\mathbf {R},\\mathbf {H})$ is large for some $\\operatorname{I}\\in \\mathfrak {A}([n],|\\mathbf {R}|)$ , then there exists a subgraph $K\\subset G$ with bounded size such that $|V(K)|-\\alpha _\\eta |E(K)|<-1$ .", "We next make this precise.", "Since $(\\mathbf
{R},\\mathbf {H})$ is dense, we have $-\\delta \\stackrel{\\operatorname{def}}{=}\\max _{\\begin{array}{c}\\mathbf {H}^{\\prime }\\subsetneq \\mathbf {H},\\mathbf {R}\\subset V(\\mathbf {H}^{\\prime })\\end{array}}\\big (|V(\\mathbf {H})\\setminus V(\\mathbf {H}^{\\prime })|-\\alpha _\\eta |E(\\mathbf {H})\\setminus E(\\mathbf {H}^{\\prime })|\\big )<0\\,.$", "If $\\operatorname{Ext}(\\operatorname{I};\\mathbf {R},\\mathbf {H})$ exceeds a large integer $\\kappa $ for some $\\operatorname{I}\\in \\mathfrak {A}([n],|\\mathbf {R}|)$ , then there exist $\\kappa $ subgraphs $H_1,\\dots , H_\\kappa \\cong \\mathbf {H}$ in $G$ , such that each isomorphism from $\\mathbf {H}$ to $H_i$ maps $\\mathbf {R}$ to $(v_i)_{i\\in \\operatorname{I}}$ .", "Let $K_i= H_1\\cup H_2\\cup \\cdots \\cup H_i$ for $1\\le i\\le \\kappa $ .", "We note that whenever $K_{i+1}\\setminus K_i\\ne \\emptyset $ , each of its components is isomorphic to some $\\mathbf {H}\\setminus _\\star \\mathbf {H}^{\\prime }$ with $\\mathbf {H}^{\\prime }\\subset \\mathbf {H}$ and $\\mathbf {R}\\subset V(\\mathbf {H}^{\\prime })$ .", "Denoting $P(K_i)=|V(K_i)|-\\alpha _\\eta |E(K_i)|$ , we then deduce from (REF ) that $P(K_{i+1})\\le P(K_i)$ for any $i\\ge 1$ , with equality if and only if $K_{i+1}= K_i$ .", "If $K_{i+1}\\ne K_i$ , then $P(K_{i+1})\\le P(K_i)-\\delta $ .", "In addition, for each fixed $K_i$ , there exists a large integer $N=N(K_i)$ depending only on the size of $K_i$ such that $K_{i+N}\\ne K_i$ .", "Combined with the fact that $P(K_1)=|V(H)|-\\alpha _\\eta |E(H)|$ is bounded in $n$ , we see $P(K_\\kappa )<-1$ for $\\kappa $ large enough.", "Clearly, we also have $|V(K_\\kappa )|\\le \\kappa |V(\\mathbf {H})|$ .", "Therefore, $\\begin{aligned}&\\ \\mathbb {P}\\big [\\max _{\\operatorname{I}\\in \\mathfrak {A}([n],|\\mathbf {R}|)}\\operatorname{Ext}(\\operatorname{I};\\mathbf {R},\\mathbf {H})\\ge \\kappa \\big ]\\\\\\le &\\ \\mathbb {P}\\big [\\exists \\ {K}\\subset G \\text{ with }|V(K)|\\le \\kappa |V(\\mathbf {H})|\\text{ and }P(K)<-1\\big ]\\\\\\le & \\sum _{\\begin{array}{c}|V(\\mathbf {K})|\\le \\kappa |V(\\mathbf {H})|\\\\ P(\\mathbf {K})<-1\\end{array}} n^{|V(\\mathbf {K})|}p_\\eta ^{|E(\\mathbf {K})|}=\\sum _{\\begin{array}{c}|V(\\mathbf {K})|\\le \\kappa |V(\\mathbf {H})|\\\\ P(\\mathbf {K})<-1\\end{array}} n^{P(\\mathbf {K})}=O\\big (n^{-1}\\big )\\,.\\end{aligned}$", "This proves (REF ), and thus completes the proof of the lemma.", "We will prove Proposition REF by induction.", "Assuming for some fixed $1\\le s\\le S$ it holds uniformly that for all $1\\le t\\le s-1$ , $\\mathbb {P}\\big [(\\mathcal {G}_t^1)^c\\big ]+\\mathbb {P}\\big [(\\mathcal {G}_t^2)^c\\big ]=o(1)\\,,$ we will show that (REF ) holds for $t=s$ (this will in particular cover the base case $t=1$ , for which the assumption is vacuous).", "Recall definition (REF ) of $X_t$ for $1\\le t\\le s-1$ ; the induction hypothesis (REF ) is used in the following lemma: Lemma 3.10 Assuming (REF ) for $1\\le t\\le s-1$ , we have that $\\mathbb {P}[X_t<\\log n]=o(1)\\text{ uniformly for all }1\\le t\\le s-1\\,.$", "From the induction hypothesis we see $\\mathbb {P}[X_t<\\log n]$ is no more than $ \\mathbb {P}\\big [(\\mathcal {G}_t^1)^c\\big ]+\\mathbb {P}\\big [(\\mathcal {G}_t^2)^c\\big ]+\\mathbb {P}\\big [X_t<\\log n\\mid \\mathcal {G}_t^1\\cap \\mathcal {G}_t^2\\big ]=o(1)+\\mathbb {P}\\big [X_t<\\log n\\mid \\mathcal {G}_t^1\\cap \\mathcal {G}_t^2\\big ]\\,.$", "Note that $\\mathbb {E}[X_t\\mid \\mathcal {G}_t^1\\cap \\mathcal {G}_t^2]\\gg \\log n$ ; applying the Paley-Zygmund inequality and then making use of Proposition REF , we get that the second term above is also $o(1)$ .
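", "To spell out this step schematically (with the conditioning abbreviated as in the display above): writing $m_t=\\mathbb {E}[X_t\\mid \\mathcal {G}_t^1\\cap \\mathcal {G}_t^2]$ and applying the Paley-Zygmund inequality at the vanishing threshold ratio $\\theta _t=\\log n/m_t=o(1)$ , we get $\\mathbb {P}\\big [X_t\\ge \\log n\\mid \\mathcal {G}_t^1\\cap \\mathcal {G}_t^2\\big ]\\ge (1-\\theta _t)^2\\,\\frac{m_t^2}{\\mathbb {E}[X_t^2\\mid \\mathcal {G}_t^1\\cap \\mathcal {G}_t^2]}=1-o(1)\\,,$ where the last equality uses the second moment bound supplied by Proposition REF .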
", "The estimates are uniform for all $1\\le t\\le s-1$ and the result follows.", "To show the desired result, we cannot condition on the full information of $\\mathcal {F}_{s-1}$ .", "Instead, we turn to $\\mathcal {I}_{s-1}=\\sigma \\lbrace \\operatorname{M}_{s-1}, \\pi (\\operatorname{M}_{s-1}),\\operatorname{MT}_t,1\\le t\\le s-1\\rbrace $ .", "In contrast to $\\mathcal {F}_{s-1}$ , the information of $\\bigcup _{1\\le t\\le s-1}\\operatorname{EXP}_t$ is not fully contained in $\\mathcal {I}_{s-1}$ .", "Recall the set $\\mathtt {M}_s$ in Algorithm REF and note that $\\mathtt {M}_s$ can be determined from $\\mathcal {I}_{s-1}$ .", "In addition, by our choice $\\kappa _0>4\\zeta /\\eta $ and the fact that each $\\operatorname{MT}_t$ contains no more than $2\\zeta $ elements in $[n]$ , it holds deterministically that $|\\mathtt {M}_s|\\ge {\\eta n}/2$ .", "Denote $V_{R}=\\lbrace v_i:i\\in \\operatorname{R}_{s-1}\\rbrace $ (recall $\\operatorname{R}_{s-1} = [n]\\setminus \\operatorname{M}_{s-1}$ ) and $ V_{\\mathtt {M}}=\\lbrace v_i:i\\in \\mathtt {M}_{s}\\rbrace $ .", "For any $v\\in V$ , let $N_{R}(v)$ and $N_{\\mathtt {M}}(v)$ be the numbers of neighbors of $v$ in $V_{R}$ and $V_{\\mathtt {M}}$ , respectively.", "The next lemma describes the properties of the conditional distribution of $G$ given $\\mathcal {I}_{s-1}$ , which will be useful later.", "Lemma 3.11 Recall $E_0$ as in (REF ).", "For any realization of $\\mathcal {I}_{s-1}$ , the graph $G$ conditioned on $\\mathcal {I}_{s-1}$ is given by $ {GO}_{s-1}\\cup {GO}^\\dagger _{s-1}$ , where $ {GO}^\\dagger _{s-1}$ is a graph on $V$ with edges in $ E_0\\setminus E( {GO}_{s-1})$ which is stochastically dominated by an Erdős-Rényi graph on $(V, E_0\\setminus E({GO}_{s-1}))$ with edge density $p_\\eta $ (i.e., each edge in $E_0\\setminus E({GO}_{s-1})$ is preserved with probability $p_\\eta $ ).", "It is clear that $ {GO}_{s-1} \\subset G$ under such conditioning, and thus it remains to understand the behavior of the remaining graph ${GO}^\\dagger _{s-1}$ .", "For any $e \\in E_0\\setminus E( {GO}_{s-1})$ and any realization $\\omega _{\\setminus e}$ for edges in $ E_0\\setminus E( {GO}_{s-1})$ except $e$ , note that if $\\omega _{\\setminus e}$ together with $e\\in E$ (as well as $ {GO}_{s-1} \\subset G$ ) yields the realization $\\mathcal {I}_{s-1}$ , then $\\omega _{\\setminus e}$ together with $e\\notin E$ also yields the realization $\\mathcal {I}_{s-1}$ .", "Therefore, $\\mathbb {P}[e\\notin E \\mid \\mathcal {I}_{s-1}, \\omega _{\\setminus e}] \\ge 1 - p_\\eta $ , completing the proof.", "In what follows, we fix a realization of $\\mathcal {I}_{s-1}$ .", "Definition 3.12 For a triple $(\\mathbf {R},\\mathbf {H},\\mathbf {v})$ with $\\mathbf {R}\\subset V(\\mathbf {H}), \\mathbf {v}\\in V(\\mathbf {H})\\setminus \\mathbf {R}$ and a vertex $v\\in V_{R} =\\lbrace v_i:i\\in [n]\\setminus \\operatorname{M}_{s-1}\\rbrace $ , we say a subgraph $H \\subset G$ is an $(\\mathbf {R},\\mathbf {H},\\mathbf {v})$ -attaching graph rooted at $v$ if there is an isomorphism $\\phi :\\mathbf {H}\\rightarrow H$ , such that $\\phi (\\mathbf {R})\\subset M_{s-1},\\phi (\\mathbf {v})= v$ , and any two vertices in $\\phi (\\mathbf {R})$ have graph distance at most $\\zeta $ on $ {GO}_{s-1}$ .", "Lemma 3.13 There exists a large constant $\\kappa =\\kappa (\\eta ,\\alpha _\\eta ,\\mathbf {T},\\kappa _0)$ such that $\\mathbb {P}[\\mathcal {G}_\\kappa |\\mathcal {I}_{s-1}]
= 1-o(1)$ where $\\mathcal {G}_{\\kappa } = \\mathcal {G}_{\\kappa , s}$ is the following event: for any triple $(\\mathbf {R},\\mathbf {H},\\mathbf {v})$ where $(\\mathbf {R},\\mathbf {H})$ is a dense pattern with $\\mathbf {H}$ being a subtree of $\\mathbf {T}$ and $\\mathbf {R}=V(\\mathbf {H})\\cap \\mathbf {L}$ , and for any vertex $ v\\in V_{ R}$ , the number of $(\\mathbf {R},\\mathbf {H},\\mathbf {v})$ -attaching graphs rooted at $v$ is bounded by $\\kappa $ .", "The proof is similar to that of Lemma REF , and we begin by introducing $P_\\zeta (K)$ as an analogue of $P(K)$ in the proof of Lemma REF .", "For a subgraph $K\\subset G$ , we draw an adjunctive edge between any pair of vertices $u, v\\in V(K)$ if and only if $u, v$ have graph distance at most $\\zeta $ on $ {GO}_{s-1}$ .", "This gives an adjunctive graph on $V(K)$ , and we denote the collection of its components by $\\mathfrak {C}_\\zeta (K)$ .", "Then we define $P_\\zeta (K)=|\\mathfrak {C}_\\zeta (K)|-\\alpha _\\eta |E(K)\\setminus E(GO_{s-1})|$ .", "Our proof is essentially by contradiction; that is, we will show that if $\\mathcal {G}_\\kappa $ fails then a rare event must occur.", "To this end, let $H_1,\\dots , H_\\kappa \\cong \\mathbf {H}$ be distinct $(\\mathbf {R},\\mathbf {H},\\mathbf {v})$ -attaching graphs rooted at some $v\\in V_{R}$ , let $\\phi _i:\\mathbf {H}\\rightarrow H_i$ be the isomorphism as in Definition REF and let $ K_{i}= H_1\\cup \\cdots \\cup H_i$ for $1\\le i\\le \\kappa $ .", "We claim that for $1\\le i < \\kappa $ , $P_\\zeta (K_{i+1}) \\le P_\\zeta (K_{i}) - \\delta \\mathbf {1}_{\\lbrace K_{i+1}\\not\\subset K_i\\cup {GO}_{s-1}\\rbrace }\\,,$ where $\\delta >0$ is a constant which does not depend on $i$ or $\\kappa $ .", "We now fix $i$ and prove (REF ).", "To this end, we consider components $C_1,\\dots , {C}_r$ of $K_{i+1}\\setminus _\\star K_i$ (recall the definition of $H_1\\setminus _\\star H_2$ for two simple graphs as in the proof of Lemma REF , and we view $ C_j,1\\le j\\le r$ as subgraphs of $H_{i+1}$ ).", "Let $N_j$ be the number of components in $\\mathfrak {C}_\\zeta ( {K}_{i+1})$ intersecting $ {C}_j$ but not containing $v$ , and let $E_j = |E(C_j) \\setminus E(GO_{s-1})|$ .", "Then, we have $P_\\zeta (K_{i+1})-P_\\zeta (K_i)\\le \\sum _{j=1}^r (N_j-\\alpha _\\eta E_j)\\,.$", "For each ${C}_j$ , let ${F}_j$ be the subgraph on vertices $V(H_{i+1})\\setminus \\left(V( {C}_j) \\setminus (V( {K}_i)\\cup V({GO}_{s-1}))\\right)$ with edges $E(H_{i+1})\\setminus (E({C}_j) \\setminus (E(K_i)\\cup E({GO}_{s-1})))$ .", "We further write $F_j= {F}_{j,0}\\cup {T}_{j, 1}\\cup \\ldots \\cup {T}_{j, k_j}$ , where $ {F}_{j, 0}$ is the union of components of $ {F}_j$ which intersect $\\phi _{i+1}(\\mathbf {R})$ , and $ {T}_{j, l}$ 's (for $l=1,\\dots ,k_j$ ) are the remaining tree components (here possibly $k_j = 0$ ).", "Writing $\\mathbf {F}_{j,0} =\\phi _{i+1}^{-1}( {F}_{j,0})$ , we have $E_j=|E( {H}_{i+1})|-|E( {F}_{j,0})|-\\sum _{l=1}^{k_j}|E( {T}_{j, l})|=|E(\\mathbf {H})\\setminus E(\\mathbf {F}_{j,0})|-\\sum _{l=1}^{k_j} |E( {T}_{j, l})|\\,.$", "For the estimation of $N_j$ , we claim that $N_j\\le k_j + |V( {H}_{i+1})|-|V( {F}_{j,0})|-\\sum _{l=1}^{k_j}|V( {T}_{j, l})|=|V(\\mathbf {H})\\setminus V(\\mathbf {F}_{j,0})|-\\sum _{l=1}^{k_j}|E( {T}_{j,l})|\\,.$", "Indeed, since $V( {T}_{j, l})$ belongs to a single component in $\\mathfrak {C}_\\zeta ( {K}_{i+1})$ for $1\\le l\\le k_j$ and also the vertices in $ {F}_{j,0}$ are in a single component in $\\mathfrak {C}_\\zeta ( {K}_{i+1})$ (by the
definition of $(\\mathbf {R},\\mathbf {H},\\mathbf {v})$ -attaching and the fact that $\\phi _{i+1}(\\mathbf {R})\\subset V( {F}_{j,0})$ ), we get that the number of components in $\\mathfrak {C}_\\zeta ( {K}_{i+1})$ which intersect $ {C}_j$ is at most $1+k_j+|V( {H}_{i+1})|-|V( {F}_{j,0})|-\\sum _{l=1}^{k_j}|V( {T}_{j,l})|$ (this is because the number of components on $V(F_j)$ is at most $1+k_j$ and each vertex outside $V( F_j)$ induces at most one component).", "Moreover, since $N_j$ does not count the component in $\\mathfrak {C}_\\zeta ( {K}_{i+1})$ which contains $v$ , we can remove one of the above components (possibly $F_{j,0}$ or one of the $T_{j,l}$ 's) when counting $N_j$ .", "This verifies (REF ).", "Combined with (REF ), it yields that $N_j-\\alpha _\\eta E_j\\le &\\ |V(\\mathbf {H})\\setminus V(\\mathbf {F}_{j,0})|-\\alpha _\\eta |E(\\mathbf {H})\\setminus E(\\mathbf {F}_{j,0})|-(1-\\alpha _\\eta )\\sum _{l=1}^{k_j}|E(T_{j, l})|\\,,$ which is $\\le -\\delta \\mathbf {1}_{\\lbrace \\mathbf {F}_{j}\\ne \\mathbf {H}\\rbrace }$ for some $\\delta >0$ since $(\\mathbf {R},\\mathbf {H})$ is a dense pattern (recall (REF ) and $\\mathbf {R}\\subset V(\\mathbf {F}_{j,0})$ ).", "Letting $\\mathbf {F}_j = \\phi _{i+1}^{-1}(F_j)$ , we then conclude (REF ) by the observation that $\\lbrace {K}_{i+1}\\not\\subset {K}_i\\cup {GO}_{s-1}\\rbrace \\subset \\bigcup _{j=1}^r \\lbrace \\mathbf {F}_j\\ne \\mathbf {H}\\rbrace , \\mbox{ and thus } \\mathbf {1}_{\\lbrace {K}_{i+1}\\not\\subset {K}_i\\cup {GO}_{s-1}\\rbrace }\\le \\sum _{j=1}^r\\mathbf {1}_{\\lbrace \\mathbf {F}_j\\ne \\mathbf {H}\\rbrace }\\,.$", "In addition, for each $i$ , we claim that there exists a large constant $N$ depending only on the size of $K_i$ , such that $K_{j+1}\\subset K_j\\cup GO_{s-1}$ cannot hold simultaneously for all $j\\in \\lbrace i,i+1,\\dots ,i+N\\rbrace $ .", "We prove this by contradiction.", "Suppose otherwise; then we see that $H_{i+1},\\dots ,H_{i+N}$ must all be contained in $K_i\\cup GO_{s-1}$ .", "Furthermore, since each $H_j$ is connected, these graphs must be contained in the $\\zeta $ -neighborhood of $v$ in $K_i\\cup GO_{s-1}$ .", "Note that the number of vertices in this $\\zeta $ -neighborhood is at most $\\sum _{j=0}^\\zeta (\\Delta (GO_{s-1}))^j$ where $\\Delta (GO_{s-1})$ is the maximal degree and is uniformly bounded.", "Recalling (REF ), we arrive at a contradiction if $N$ is chosen sufficiently large.", "Combining the preceding claim and (REF ), we can choose $\\kappa = \\kappa (\\zeta , N, \\delta , \\chi )$ sufficiently large such that $P_\\zeta (K_\\kappa ) < -1$ on $\\mathcal {G}_\\kappa ^c$ .", "We next bound the enumeration for $K_\\kappa $ given $\\mathcal {I}_{s-1}$ .", "To this end, we note that for each component in $\\mathfrak {C}_\\zeta (K_\\kappa )$ the number of choices is $O(n)$ (where the $O$ -term depends on $(\\zeta , N, \\delta , \\chi )$ ); this is because for each such component, once a vertex is fixed, the number of choices for the remaining vertices is $O(1)$ by connectivity and by (REF ).", "Then in light of Lemma REF (i), we can show that $\\mathbb {P}[\\mathcal {G}_\\kappa ^c\\mid \\mathcal {I}_{s-1}] \\rightarrow 0$ via a union bound over all possible choices of $K_\\kappa $ (which is similar to that for (REF )).", "This completes the proof.
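", "In analogy with (REF ), the union bound implicit in the last step can be displayed as follows (under the conventions of the proof above, and using the stochastic domination of Lemma REF for the edges outside $GO_{s-1}$ ): $\\mathbb {P}\\big [\\mathcal {G}_\\kappa ^c\\mid \\mathcal {I}_{s-1}\\big ]\\le \\sum _{K}O\\big (n^{|\\mathfrak {C}_\\zeta (K)|}\\big )\\,p_\\eta ^{|E(K)\\setminus E(GO_{s-1})|}=\\sum _{K}O\\big (n^{P_\\zeta (K)}\\big )=O\\big (n^{-1}\\big )\\,,$ where the sum runs over the boundedly many possible shapes of $K=K_\\kappa $ with $P_\\zeta (K)<-1$ .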
.", "For $\\mathcal {G}_s^1$ , recall Definition REF and the notations therein.", "We claim that $\\mathcal {G}\\cap \\mathcal {G}_{\\kappa }\\subset \\mathcal {G}_s^1$ .", "Provided with this, we obtain from Lemmas REF and REF that $\\mathbb {P}\\big [(\\mathcal {G}_s^1)^c\\big ]\\le \\mathbb {P}\\big [\\mathcal {G}^c\\big ]+\\mathbb {P}\\big [(\\mathcal {G}_\\kappa )^c\\big ]=o(1)+\\mathbb {E}\\Big [\\mathbb {P}\\big [(\\mathcal {G}_\\kappa )^c\\mid \\mathcal {I}_{s-1}\\big ]\\Big ]=o(1)\\,.$ We next prove the claim.", "To this end, for each fixed pair $(\\mathbf {F}_1,\\mathbf {F}_2)=(\\mathbf {T}_0,\\mathbf {T}_1\\cup \\cdots \\cup \\mathbf {T}_l)\\in \\mathcal {P}_0$ , denote $\\mathbf {R}_i=V(\\mathbf {T}_i)\\cap \\mathbf {L}$ for $0\\le i\\le l$ .", "From the definition of $\\mathcal {P}_0$ , we see $\\mathbf {T}_i$ is a dense tree and thus $(\\mathbf {R}_i,\\mathbf {T}_i)$ is a dense pattern for each $1\\le i\\le l$ .", "Fix an arbitrary subgraph $\\mathsf {T}\\cong \\mathbf {T}$ in $\\mathsf {G}$ with leaf set $\\mathsf {L}$ and $V(\\mathsf {T}) \\cap \\mathsf {M}_{s-1} = \\mathsf {L}$ .", "Recall the definition of $\\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)$ (with respect to $\\mathsf {T}$ ) below (REF ): it counts the number of $\\xi $ -tuples $\\operatorname{L}^{\\prime }\\in \\bigcup _{1\\le t \\le s-1}\\operatorname{EXP}_{t}$ which satisfy that there are two embeddings $\\phi _1:\\mathbf {T}_0\\rightarrow \\mathsf {T}\\cup \\mathsf {GO}_{s-1}$ and $ \\phi _2:\\mathbf {T}_1\\cup \\cdots \\cup \\mathbf {T}_l\\rightarrow \\mathsf {T}\\cup \\mathsf {GO}_{s-1}$ such that $\\phi _1\\big (E(\\mathbf {T}_0)\\big )\\cap E(\\mathsf {T})\\ne \\emptyset \\text{ and }\\phi _1(\\mathbf {R}_0)\\cup \\phi _2(\\mathbf {R}_1\\cup \\cdots \\cup \\mathbf {R}_l)= I_G(\\operatorname{L}^{\\prime })\\,.$ Note that $GO_{s-1}\\cong \\mathsf {GO}_{s-1}$ through $\\Pi \\circ I_G^{-1}$ .", "Combined with the fact that $\\operatorname{L}^{\\prime }\\in \\bigcup _{1\\le t\\le s-1}\\operatorname{EXP}_t$ implies $\\operatorname{L}^{\\prime }\\bowtie _{G} [n]$ , it yields that each such tuple corresponds to an embedding $\\psi :\\mathbf {T}\\rightarrow G$ which satisfies the following conditions: (i) $\\psi (\\mathbf {R}_0)$ is contained in the $\\zeta $ -neighborhood of $L=I_G\\circ \\Pi ^{-1}(\\mathsf {L})$ on $GO_{s-1}$ ; (ii) for each $1\\le i\\le l$ , we have $\\psi (\\mathbf {R}_i)\\subset GO_{s-1}$ and the diameter of $\\psi (\\mathbf {R}_i)$ with respect to the graph metric on ${GO}_{s-1}$ is at most $\\zeta $ .", "Thus, we can bound $\\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)$ by the number of such embeddings.", "To this end, note that on $\\mathcal {G}$ we have $\\Delta (G)\\le \\kappa _4np_\\eta \\,.$ For the number of possible realizations of $\\psi \\big (V(\\mathbf {T}_0)\\big )$ , when $\\mathbf {R}_0=\\emptyset $ it is bounded by $O\\big (n\\times (np_\\eta )^{|E(\\mathbf {T}_0)|}\\big )$ using (REF ); when $\\mathbf {R}_0\\ne \\emptyset $ it is bounded by $O\\big (\\mathbb {D}(\\mathbf {T}_0)\\big )$ using (REF ) and (REF ) (More precisely, the number of ways of choosing $\\psi (\\mathbf {R}_0)$ is $O(1)$ by (REF ) and for each fixed $\\psi (\\mathbf {R}_0)$ , the number of ways of choosing the rest of $\\psi \\big (V(\\mathbf {T}_0)\\big )$ is $O\\big (\\mathbb {D}(\\mathbf {T}_0)\\big )$ by (REF )).", "In addition, for any edge $(\\mathbf {u},\\mathbf {v})\\in E(\\mathbf {T})\\setminus ( E(\\mathbf {T}_0)\\cup \\cdots \\cup E(\\mathbf {T}_l))$ , once $\\psi (\\mathbf {u})$ 
is fixed, the number of choices for $\\psi (\\mathbf {v})$ is $O(np_\\eta )$ by (REF ).", "For each $1\\le i\\le l$ and any $\\mathbf {v}\\in V(\\mathbf {T}_i)$ , once $\\psi (\\mathbf {v})$ is fixed, each realization of $\\psi \\big (V(\\mathbf {T}_i)\\big )$ corresponds to a $(\\mathbf {R}_i,\\mathbf {T}_i,\\mathbf {v})$ -attaching graph rooted at $\\psi (\\mathbf {v})$ , and thus the number of such realizations is at most $\\kappa $ by $\\mathcal {G}_\\kappa $ .", "Provided with the preceding observations, we may bound the number of these embeddings (which in turn bounds $\\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)$ ) by first choosing $\\psi \\big (V(\\mathbf {T}_0)\\big )$ and then choosing the $\\psi $ -values for the remaining vertices on $\\mathbf {T}$ inductively.", "More precisely, the ordering for choosing vertices satisfies the following properties (see Figure REF for an illustration): (i) we first determine the $\\psi $ -value of vertices in $V(\\mathbf {T}_0)$ ; (ii) for the remaining vertices, the vertex whose $\\psi $ -value is to be chosen is neighboring to some vertex whose $\\psi $ -value has been chosen; (iii) whenever we choose $\\psi (\\mathbf {v})$ for any $\\mathbf {v}\\in V(\\mathbf {T}_i)$ , we also choose $\\psi \\big (V(\\mathbf {T}_i)\\big )$ immediately (which has at most $\\kappa $ choices as noted above).", "Therefore, we conclude that $\\operatorname{Enum}(\\mathbf {F}_1,\\mathbf {F}_2)\\lesssim {\\left\\lbrace \\begin{array}{ll}n\\times (np_\\eta )^{|E(\\mathbf {T}_0)|}\\times (np_\\eta )^{|E(\\mathbf {T})\\setminus (E(\\mathbf {T}_0)\\cup \\cdots \\cup E(\\mathbf {T}_l))|},\\quad &\\text{if }\\mathbf {R}_0=\\emptyset \\,,\\\\\\mathbb {D}(\\mathbf {T}_0)\\times (np_\\eta )^{|E(\\mathbf {T})\\setminus (E(\\mathbf {T}_0)\\cup \\cdots \\cup E(\\mathbf {T}_l))|},\\quad & \\text{if }\\mathbf {R}_0\\ne \\emptyset \\,,\\end{array}\\right.}$ which implies (REF ) with an appropriate choice of $\\kappa _1$ .", "This proves the claim.", "Figure: In order to enumerate, as described above we choose the blue, black and red parts of $\\mathbf {T}$ in order.", "Next we investigate $\\mathcal {G}_s^2$ as in Definition REF .", "For Item (i), we first note that by the sub-Gaussian concentration of Bernoulli variables, it holds that each vertex in $G$ has degree at least $(1-\\eta /8)np_\\eta $ except with exponentially small probability.", "Furthermore, from Lemma REF we also obtain that under any conditioning of $\\mathcal {I}_{s-1}$ , except with exponentially small probability we have that for any $v\\in V_R$ , the total number of edges between $v$ and vertices in $V\\setminus V_{\\mathtt {M}}$ is no more than $(1+\\eta /8)(n-|\\mathtt {M}_s|)p_\\eta $ .", "Therefore, with probability $1-o(1)$ for each vertex $v\\in V_R$ the number of neighbors in $V_{\\mathtt {M}}$ is at least $(1-\\eta /8)np_\\eta -(1+\\eta /8)(n-|\\mathtt {M}_s|)p_\\eta \\ge ( |\\mathtt {M}_s|-\\eta n/4)p_\\eta \\ge \\eta np_\\eta /4\\,.$", "A similar argument applies for $V_R$ and we get that with probability $1-o(1)$ , each $v\\in V_R$ has at least $\\eta np_\\eta /4$ neighbors in $V_R$ .", "Provided with these degree lower bounds, the number of $\\xi $ -tuples in $\\operatorname{CAND}_s$ is at least of order $(np_\\eta )^{\\zeta }$ .
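", "To sketch this count: each tuple in $\\operatorname{CAND}_s$ is certified by an embedded copy of $\\mathbf {T}$ rooted at $v_{u_s}$ , and such a copy can be built edge by edge; by the degree bounds just established, each of the $\\zeta =|E(\\mathbf {T})|$ edges offers at least $\\eta np_\\eta /4-O(1)$ choices for its new endpoint (within $V_{\\mathtt {M}}$ or $V_R$ as appropriate, the $O(1)$ excluding previously used vertices), so the number of certified $\\xi $ -tuples is at least of order $(\\eta np_\\eta /4)^{\\zeta }\\gtrsim (np_\\eta )^{\\zeta }\\,.$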
", "In order to get $\\operatorname{GC}_s$ , we still need to remove those $s$ -bad tuples in $\\operatorname{CAND}_s$ , and it suffices to show that the total number of removed tuples is typically $o\\big ((np_\\eta )^\\zeta \\big )$ .", "By Markov's inequality, we only need to show that $\\mathbb {E}| \\operatorname{CAND}_s\\setminus \\operatorname{GC}_s|= o\\big ((np_\\eta )^\\zeta \\big )$ .", "Taking conditional expectation with respect to $\\mathcal {I}_{s-1}$ and averaging over the conditioning, we see $\\mathbb {E}|\\operatorname{CAND}_s\\setminus \\operatorname{GC}_s|= \\mathbb {E}\\Big [\\sum _{\\operatorname{L}\\in \\mathfrak {A}(\\mathtt {M}_s,\\xi )} \\mathbb {P}\\big [\\operatorname{L}\\in \\operatorname{CAND}_s\\setminus \\operatorname{GC}_s\\mid \\mathcal {I}_{s-1}\\big ]\\Big ]\\,.$", "Denote by $\\mathfrak {A}_s$ the set of tuples in $\\mathfrak {A}(R_{s-1},\\chi )$ whose first coordinate equals $u_s$ (which is the minimal number in $R_{s-1}$ ); from the definition of $\\operatorname{CAND}_s$ and a union bound we see $\\mathbb {P}\\big [\\operatorname{L}\\in \\operatorname{CAND}_s\\setminus \\operatorname{GC}_s\\mid \\mathcal {I}_{s-1}\\big ]\\le \\sum _{\\operatorname{Q}\\in \\mathfrak {A}_s}\\mathbb {P}\\big [I_G(L)\\bowtie _G Q,\\operatorname{L}\\notin \\operatorname{GC}_s\\mid \\mathcal {I}_{s-1}\\big ]\\,,$ where $L,Q$ denote $I_G(\\operatorname{L}),I_G(\\operatorname{Q})$ , respectively.", "We further denote by $T_{L,Q}$ the tree that certifies the event $\\lbrace L\\bowtie _{G} Q\\rbrace $ .", "From Definition REF , $\\operatorname{L}\\notin \\operatorname{GC}_s$ implies that there exists a subgraph $T^*\\cong \\mathbf {T}$ in $T_{L,Q}\\cup GO_{s-1}$ with leaf set $L^*$ , such that $E(T^*)\\cap E(T_{L,Q})\\ne \\emptyset $ and $I_G^{-1}(L^*)\\in \\operatorname{EXP}_{t^*}$ for some $1\\le t^*\\le s-1$ .", "We write $\\operatorname{d}_{GO_{s-1}}$ for the graph distance on $GO_{s-1}$ ; it is then readily seen that the two tuples $\\operatorname{L}=I_G^{-1}(L)=(l_1,\\dots ,l_\\xi )$ and $\\operatorname{L}^*=(l_1^*,\\dots ,l_\\xi ^*)$ must satisfy the following two properties: (i) For any $i\\in [\\xi ]$ , there exists $j\\in [\\xi ]$ such that $\\operatorname{d}_{GO_{s-1}}(v_{l_i^*},v_{l_j})\\le \\zeta $ .", "(ii) For at least two indices $p\\in [\\xi ]$ , there exists $q\\in [\\xi ]$ such that $\\operatorname{d}_{GO_{s-1}}(v_{l_p},v_{l_q^*})\\le \\zeta $ .", "We denote by $\\mathtt {L}(\\operatorname{L})$ the collection of all $\\xi $ -tuples $\\operatorname{L}^*\\in \\mathfrak {A}(\\operatorname{M}_{s-1},\\xi )$ that satisfy Property (i) above.", "For any triple $(\\operatorname{L},\\operatorname{Q},\\operatorname{L}^*)$ with $\\operatorname{L}^*\\in \\mathtt {L}(\\operatorname{L})$ , we divide the event $\\operatorname{L}^*\\in \\operatorname{EXP}_{t^*}$ for some $1\\le t^*\\le s-1$ into two cases according to whether $t^* \\in \\mathtt {t}(\\operatorname{L}) \\mbox{ where } \\mathtt {t}(\\operatorname{L}) = \\lbrace t^{\\prime }: d_{GO_{s-1}}(v_{u_{t^{\\prime }}},v_{l_i})\\le \\zeta \\text{ for some }i\\in [\\xi ]\\rbrace \\,.$", "For the case that $t^*\\notin \\mathtt {t}(\\operatorname{L})$ , the tree that certifies $\\operatorname{L}^*\\in \\operatorname{CAND}_{t^*}$ cannot be fully contained in $T_{L,Q}\\cup GO_{s-1}$ .", "From Lemma REF and the proof of Lemma REF (see e.g., (REF )), we see that such a case happens with probability $o(1)$ uniformly under any conditioning on $\\mathcal {I}_{s-1}$ together with $\\lbrace T_{L,Q}\\subset G\\rbrace $ .", "For the case that $t^*\\in \\mathtt {t}(\\operatorname{L})$ , we apply a union bound and thus we get $\\mathbb {P}[L\\bowtie _{G} Q,\\operatorname{L}\\notin \\operatorname{GC}_s\\mid \\mathcal {I}_{s-1}]\\le &\\ o(1)\\times \\mathbb {P}[L\\bowtie _{G} Q\\mid \\mathcal
{I}_{s-1}]\\\\+&\\ \\sum _{\\operatorname{L}^*\\in \\mathtt {L}(\\operatorname{L})}\\sum _{t^* \\in \\mathtt {t}(\\operatorname{L})}\\mathbb {P}[L\\bowtie _{G} Q,\\operatorname{L}^*\\in \\operatorname{EXP}_{t^*}\\mid \\mathcal {I}_{s-1}]\\,.$", "From Lemma REF we see $\\mathbb {P}[L\\bowtie _G Q\\mid \\mathcal {I}_{s-1}]\\le p_\\eta ^\\zeta $ for any realization of $\\mathcal {I}_{s-1}$ .", "Thus the first term above is $o(p_\\eta ^\\zeta )$ .", "To treat the remaining terms, we note that (a) $\\operatorname{L}^*\\in \\operatorname{EXP}_{t^*}$ implies that $ \\operatorname{L}^*\\in \\bigcup _{1\\le t\\le s-1}\\operatorname{SUC}_{t}$ or $\\operatorname{L}^*\\in \\operatorname{FAIL}_{t^*}\\setminus \\bigcup _{1\\le t\\le s-1}\\operatorname{SUC}_{t}$ ; (b) if $\\operatorname{L}^*\\in \\bigcup _{1\\le t\\le s-1} \\operatorname{SUC}_{t}$ , then from Property (ii) above, $\\mbox{ there are two indices $i,j\\in \\operatorname{L}$ such that $\\operatorname{d}_{GO_{s-1}}(v_{i},v_{j})\\le 3\\zeta $};\\\\$ (c) $\\lbrace \\operatorname{L}^*\\in \\operatorname{FAIL}_{t^*}\\rbrace = \\lbrace I_{t^*}=0\\rbrace \\cup \\lbrace I_{t^*}=1, \\operatorname{L}^*\\prec \\operatorname{L}_{t^*}\\rbrace $ (recall that $\\operatorname{MT}_t=(\\operatorname{L}_t,\\operatorname{Q}_t,\\operatorname{Q}_t^{\\prime })$ for $1\\le t\\le s-1$ with $I_t=1$ ).", "Combining these together, we see that for each pair $(\\operatorname{L},\\operatorname{Q})$ , any $\\operatorname{L}^*\\in \\mathtt {L}(\\operatorname{L})$ and any $t^*\\in \\mathtt {t}(\\operatorname{L})$ , it holds that $\\mathbb {P}[L\\bowtie _{G} Q,\\operatorname{L}^*\\in \\operatorname{EXP}_{t^*}\\mid \\mathcal {I}_{s-1}]$ is bounded by $p_\\eta ^\\zeta $ times $\\begin{aligned}&\\ \\mathbf {1}_{\\lbrace \\operatorname{L}\\text{ satisfies }(\\ref {eq-L-condition-b})\\rbrace }+\\mathbb {P}[X_{t^*}<\\log n\\mid \\mathcal {I}_{s-1},L\\bowtie _G Q]\\\\+&\\ \\mathbb {P}[X_{t^*}\\ge \\log n,\\operatorname{L}^*\\notin \\bigcup _{1\\le t\\le s-1}\\operatorname{SUC}_{t},\\operatorname{L}^*\\prec \\operatorname{L}_{t^*}\\mid \\mathcal {I}_{s-1},L\\bowtie _{G} Q]\\,.\\end{aligned}$", "In order to bound the second term in (REF ), note that if $\\lbrace X_{t^*}<\\log n\\rbrace $ holds under $\\mathcal {I}_{s-1}\\cap \\lbrace L\\bowtie _G Q\\rbrace $ , then it also holds under $\\mathcal {I}_{s-1}$ together with any other configuration on the set $E(T_{L, Q})$ .", "Thus, $\\mathbb {P}[X_{t^*}<\\log n\\mid \\mathcal {I}_{s-1},L\\bowtie _{G} Q]\\le \\mathbb {P}[X_{t^*}<\\log n\\mid \\mathcal {I}_{s-1}]\\,.$", "In order to bound the third term in (REF ), denote by $\\operatorname{Avai}_t$ the (random) set of tuples in $\\operatorname{CAND}_t$ that can be successfully matched, $1\\le t\\le s-1$ .", "Then $|\\operatorname{Avai}_t|=X_t$ and the conditional law of $\\prec $ given $\\mathcal {I}_{s-1}\\cap \\lbrace L\\bowtie _{G} Q\\rbrace $ is uniform conditioned on the following event: $\\lbrace \\operatorname{L}_t\\text{ is $\\prec $-minimal among }\\operatorname{Avai}_t, \\mbox{ for all } 1\\le t\\le s-1 \\text{ with }I_t=1\\rbrace \\,.$", "In particular, under such conditioning, for $\\operatorname{L}^*\\notin \\bigcup _{1\\le t\\le s-1}\\operatorname{SUC}_{t}$ it holds with conditional probability $1/X_{t^*}$ that $\\operatorname{L}^* \\prec \\operatorname{L}_{t^*}$ .", "As a result, the third term is bounded by $(\\log n)^{-1}=o(1)$ .", "Summing over all $(\\operatorname{L},\\operatorname{Q})$ and combining all the arguments above, we get $\\sum _{\\operatorname{L}\\in \\mathfrak
{A}(\\mathtt {M}_s,\\xi )} &\\mathbb {P}\\big [\\operatorname{L}\\in \\operatorname{CAND}_s\\setminus \\operatorname{GC}_s\\mid \\mathcal {I}_{s-1}\\big ] \\nonumber \\\\\\le \\sum _{\\operatorname{L}\\in \\mathfrak {A}(\\mathtt {M}_s,\\xi )}\\sum _{\\operatorname{Q}\\in \\mathfrak {A}_s}\\sum _{\\operatorname{L}^*\\in \\mathtt {L}(\\operatorname{L})}\\sum _{t^*\\in \\mathtt {t}(\\operatorname{L})}&p_\\eta ^\\zeta \\Big (o(1)+\\mathbf {1}_{\\lbrace \\operatorname{L}\\text{ satisfies }(\\ref {eq-L-condition-b})\\rbrace }+\\mathbb {P}[X_{t^*}<\\log n\\mid \\mathcal {I}_{s-1}]\\Big )\\,.$", "We now bound the number of effective terms in the summation of (REF ).", "Clearly, $|\\mathfrak {A}(\\mathtt {M}_s,\\xi )|\\le n^\\xi $ and $|\\mathfrak {A}_s|\\le n^{\\chi -1}$ .", "In addition, by (REF ) we see both $|\\mathtt {L}(\\operatorname{L})|$ and $|\\mathtt {t}(\\operatorname{L})|$ are uniformly bounded for $\\operatorname{L}\\in \\mathfrak {A}(\\mathtt {M}_s,\\xi )$ .", "Furthermore, the number of $\\operatorname{L}\\in \\mathfrak {A}(\\mathtt {M}_s,\\xi )$ that satisfy (REF ) is of order $O\\big (n^{\\xi -1}\\big )$ , and for each $1\\le t^*\\le s-1$ the number of tuples $\\operatorname{L}\\in \\mathfrak {A}(\\mathtt {M}_s,\\xi )$ such that $\\mathtt {t}(\\operatorname{L})$ contains $t^*$ is also of order $O\\big (n^{\\xi -1}\\big )$ (since some vertex in $L$ must be close to $v_{u_{t^*}}$ on the graph $GO_{s-1}$ ).", "Altogether, we see that the right-hand side of (REF ) is upper-bounded by (recall that $\\zeta =\\xi +\\chi -1$ ) $O\\big (n^\\zeta \\big )\\times o\\big (p_\\eta ^\\zeta \\big )+O\\big (n^{\\zeta -1}\\big )\\times p_\\eta ^\\zeta +O\\big (n^{\\zeta -1}\\big )\\times p_\\eta ^\\zeta \\times \\sum _{1\\le t^*\\le s-1}\\mathbb {P}[X_{t^*}<\\log n\\mid \\mathcal {I}_{s-1}]\\,.$", "Averaging over $\\mathcal {I}_{s-1}$ and recalling (REF ) and (REF ), we obtain that $\\mathbb {E}|\\operatorname{CAND}_s\\setminus \\operatorname{GC}_s| =o\\big ((np_\\eta )^\\zeta \\big )+O\\big (n^{\\zeta -1}p_\\eta ^\\zeta \\big )\\sum _{1\\le t^*\\le s-1}\\mathbb {P}[X_{t^*}<\\log n]\\stackrel{(\\ref {eq;good-step-est})}{=}o\\big ((np_\\eta )^\\zeta \\big )\\,,$ as desired.", "This implies that Item (i) of $\\mathcal {G}_s^2$ in Definition REF holds with probability $1-o(1)$ .", "Finally, we treat Item (ii) of $\\mathcal {G}_s^2$ in Definition REF and it suffices to show that it holds on the event $\\mathcal {G}$ .", "Since each tuple in $\\operatorname{GC}_s$ corresponds to a subgraph $T\\cong \\mathbf {T}$ in $G$ rooted at $v_{u_s}$ , (REF ) implies that $|\\operatorname{GC}_s|=O\\big ((np_\\eta )^\\zeta \\big )$ .", "For each fixed $\\operatorname{L}_i\\in \\operatorname{GC}_s$ and a nonempty subset $\\mathbf {R} \\subset \\mathbf {L}$ , we bound the number of $\\operatorname{L}_j\\in \\operatorname{GC}_s$ such that $(\\operatorname{L}_i,\\operatorname{L}_j)\\in \\operatorname{IP}_s(\\mathbf {R})$ as follows: each such $\\operatorname{L}_j$ corresponds to a subgraph ${T}\\cong \\mathbf {T}$ of ${G}$ with leaves in $\\mathbf {R}$ mapped to a fixed subset in $V(G)$ under the isomorphism.", "In order to bound the enumeration for such ${T}$ , we use the following two-step procedure to choose ${T}$ : (i) choose a subgraph ${T}_0\\cong \\mathbf {Span}(\\mathbf {R})$ of ${G}$ with leaves in $\\mathbf {R}$ mapped to a fixed subset in $V(G)$ under the isomorphism; (ii) choose ${T}\\cong \\mathbf {T}$ of ${G}$ with $\\mathbf {Span}(\\mathbf {R})$ mapped to $T_0$ under the isomorphism.", "Thus, by Definition REF and by (REF ), the
number of choices for such $T$ is $O\\big (\\mathbb {D}(\\mathbf {Span}(\\mathbf {R}))\\times (np_\\eta )^{|E(\\mathbf {T})\\setminus E(\\mathbf {Span}(\\mathbf {R}))|}\\big )$ on $\\mathcal {G}$ , and thus Item (ii) of $\\mathcal {G}_s^2$ holds with probability $1-o(1)$ .", "This completes the proof of Proposition REF ." ] ]
2210.07823
[ [ "Wide Binaries as a Modified Gravity test: prospects for detecting\n triple-system contamination" ], [ "Abstract Several recent studies have shown that velocity differences of very wide binary stars, measured to high precision with GAIA, can potentially provide an interesting test for modified-gravity theories which attempt to emulate dark matter; in essence, MOND-like theories (with external field effect included) predict that wide binaries (wider than $\\sim 7$ kAU) should orbit $\\sim 15\\%$ faster than Newtonian for similar orbit parameters; such a shift is readily detectable in principle in the sample of 9,000 candidate systems selected from GAIA EDR3 by Pittordis and Sutherland (2022).", "However, the main obstacle at present is the observed ``fat tail\" of candidate wide-binary systems with velocity differences at $\\sim 1.5 - 6 \\times$ circular velocity; this tail population cannot be bound pure binary systems, but is likely to be dominated by triple or quadruple systems with unresolved or undetected additional star(s).", "While this tail can be modelled and subtracted, obtaining an accurate model for the triple population is crucial to obtain a robust test for modified gravity.", "Here we explore prospects for observationally constraining the triple population: we simulate a population of hierarchical triples ``observed\" as in PS22 at random epochs and viewing angles; then evaluate various possible methods for detecting the third star, including GAIA astrometry, RV drift, and several imaging methods from direct Rubin images, speckle imaging and coronagraphic imaging.", "Results are encouraging, typically 90 percent of the triple systems in the key regions of parameter space are detectable; there is a moderate ``dead zone\" of cool brown-dwarf companions at $\\sim 25-100$ AU separation which are not detectable with any of our baseline methods.", "A large but feasible observing campaign can clarify the triple/quadruple population and make the gravity test decisive." 
], [ "Introduction", "A number of recent studies have shown that velocity differences of wide stellar binaries offer an interesting test for modified-gravity theories similar to MoND, which attempt to eliminate the need for dark matter (see e.g.", "[12], [13] [14], [20], [24] and [11]).", "Such theories require a substantial modification of standard GR below a characteristic acceleration threshold $a_0 \\sim 1.2 \\times 10^{-10} \\, {\\rm m \\, s^{-2}}$ (see review by [7]).", "A key advantage of wide binaries is that at separations $7 \\, {\\rm kAU}$ , the relative accelerations are below this threshold, so MoND-like theories predict significant deviations from GR; while wide binaries should contain neglible dark matter, so DM theories predict no change from GR/Newtonian gravity.", "Thus in principle the predictions of DM vs modified gravity in wide binaries are unambiguously different, unlike the case for galaxy-scale systems where the DM distribution is uncertain.", "Wide binaries in general have been studied since the 1980s ([27], [2]), but until recently the precision of ground-based proper motion measurements was a serious limiting factor: wide binaries could be reliably selected based on similarity of proper motions, see e,g, [28], [19], [18], [16], [5], [3].", "However, the typical proper motion precision $\\sim 1 \\, {\\rm mas}\\, {\\rm yr}^{-1} $ from ground-based or Hipparcos measurements was usually not good enough to actually measure the internal velocity differences, except for a limited number of nearby systems.", "The launch of the GAIA spacecraft [8] in 2014 offers a spectacular improvement in precision; the proper motion precision of order $30 \\, \\mu {\\rm as} \\, {\\rm yr}^{-1} $ corresponds to transverse velocity precision $0.0284 {\\rm \\, km \\, s^{-1} }$ at distance 200 parsecs, around one order of magnitude below wide-binary orbital velocities, so velocity differences can be measured to good precision over a substantial volume; and this will steadily improve with future GAIA data extending eventually to a 10-year baseline.", "Recent studies of WBs from GAIA include e.g.", "[6] and [15].", "In earlier papers in this series, [21] (hereafter Pittordis2018) compared simulated WB orbits in MoND versus GR, to investigate prospects for the test in advance of GAIA DR2.", "This was applied to a sample of candidate WBs selected from GAIA DR2 data by [22] (hereafter Pittordis2019), and an expanded sample from GAIA EDR3 by [23] (hereafter Pittordis2022).", "To summarise results, simulations show that (with MoND external field effect included), wide binaries at $10 \\, {\\rm kAU}$ show orbital velocities typically 15 to 20 percent faster in MOND than GR, at equal separations and masses.", "This leads to a substantially larger fraction of “faster\" binaries with observed velocity differences between 1.0 to 1.5 times the Newtonian circular-orbit value.", "In Newtonian gravity, changing the eccentricity distribution changes the shape of the distribution mainly at lower velocities, but has little effect on the distribution at the high end from 1.0 to 1.5 times circular velocity.", "Therefore, the predicted shift from MOND is distinctly different from changing the eccentricity distribution within Newtonian gravity; so given a large and pure sample of several thousand WBs with precise 2D velocity difference measurements, we could decisively distinguish between GR and MOND predictions.", "The main limitation at present is that Pittordis2019 and Pittordis2022 showed the presence of a 
“fat tail\" of candidate binaries with velocity differences $\\sim 1.5$ to $6 \\times $ the circular-orbit velocity; these systems are too fast to be pure bound binaries in either GR or MOND, and a likely explanation [1] is higher-order multiples e.g.", "triples where either one star in the observed “binary\" is itself an unresolved closer binary, or the third star is at resolvable separation but is too faint to be detected by GAIA; the third star on a closer orbit thus substantially boosts the velocity difference of the two observed stars in the wide “binary\".", "In Pittordis2022 we made a simplified model of this triple population, then fitted the full distribution of velocity differences for WB candidates using a mix of binary, triple and flyby populations.", "These fits found that GR is significantly preferred over MOND if the rather crude PS22 triple model is correct, but we do not know this at present.", "Allowing much more freedom in the triple modelling is computationally expensive due to many degrees of freedom, and is likely to lead to significant degeneracy between gravity modifications and varying the triple population.", "Therefore, observationally constraining the triple population, or eliminating most of it by additional observations, is the next key step to make the WB gravity test more secure.", "In this paper we explore prospects for observationally constraining the triple population: we generate simulated triple systems “observed\" at random epochs, inclinations and viewing angles, and then test whether the presence of the third star is detectable by any of various methods including direct, speckle or coronagraphic imaging; radial velocity drift; or astrometric non-linear motion in the future GAIA data; we see below that prospects are good, in that 80 to 95% of triple systems in the PS22 sample should be potentially detectable as such by at least one of the methods.", "The plan of the paper is as follows: in Section  we outline the parameters (semi-major axis distributions, eccentricities, masses etc) used in the simulated triple systems.", "In Section  we describe the various methods and thresholds adopted for defining a simulated detection of the third star.", "In Section  we show various results and plots, indicating which methods are successful at detecting a third object in various regions of observable parameter space, in particular as a function of projected velocity (relative to circular-orbit value) and projected separation.", "We summarise our conclusions in Sec.", "." 
], [ "Triple simulations", "In general the detectability of a third star in a candidate “wide binary\" depends on many parameters including mass ratio, orbit size, eccentricity, inclination and phase, and the system distance; this is in principle calculable analytically, but the calculations are complex so this is best handled by a Monte-Carlo type simulation as here; we set up a large number of simulated triple systems and take “snapshots\" of these at random times and viewing angles, then evaluate detectability of the third star for each simulated snapshot.", "In this section we describe the parameters and distribution functions used to set up simulated triple systems.", "For simplicity, we model each triple system as two independent Kepler orbits, “Outer\" and “Inner\"; we label the stars so that stars 2 and 3 are the inner binary, with $M_2 > M_3$ , while the “outer\" orbit consists of star 1 and the barycenter of stars 2 and 3; subscripts “inn\" and “out\" below denote these.", "Thus, the “wide binary\" observed in PS22 comprises star 1 plus either star 2, or the unresolved blend of stars 2+3.", "This wide system is always well resolved, since the PS22 subsample used in fitting has angular separations $\\ge 5 \\, {\\rm kAU}/ 300 \\, {\\rm pc}= 16.6\\, $ arcsec; the inner pair may be resolved or unresolved, as below.", "We choose semimajor axes so the distribution of $a_{out}$ is uniform in $\\log _{10}a_{out}$ from $0.1 \\, {\\rm kAU}\\le a_{out} \\le 100 \\, {\\rm kAU}$ ; this is chosen significantly wider than the range of interest $ 5 \\le r_p \\le 20 \\, {\\rm kAU}$ used in Pittordis2022 and below, so we downselect later in projected separation.", "The distribution of $a_{inn}$ is uniform in $\\log a$ between $1 \\, {\\rm AU}$ to $0.2 a_{out}$ , where the outer limit is an approximate criterion to ensure long-term stability of the system.", "The orbit eccentricities are chosen independently from the distribution of [25], which is $f(e) = 0.4 + 1.2e$ .", "Relative orbit inclinations are random, so a rotation matrix is generated to convert the inner-orbit separations and velocities into the plane of the outer orbit.", "Masses are drawn from a simplified distribution which is flat in $M$ from $0.01 \\,M_\\odot $ to $0.7 \\,M_\\odot $ , then declining with a $-2.35$ power law above $0.7 \\,M_\\odot $ .", "For each triple we draw three random masses independently from this distribution, then if $M_3 > M_2$ we swap labels so $M_2 \\ge M_3$ ; we then require $M_1, M_2 > 0.4 \\,M_\\odot $ for detectability, otherwise re-draw a new triplet of masses.", "After setting up these orbit parameters, we then pick 10 random epochs for each orbit (i.e.", "choose mean anomaly uniform in 0 to $2\\pi $ ); solve the Kepler equations to get the true anomaly (angle from pericenter) and generate a relative 3D velocity and separation from these; we then “observe\" each system at each random time from 10 random viewing directions, to produce the relevant observables of 2D sky-projected velocities and angular separations.", "We also generate a random distance for each system consistent with the distribution of distances $\\le 300 \\, {\\rm pc}$ in PS22, and compute angular separations.", "The inner-orbit velocity $_{inn} \\equiv _2 - _3$ is then suppressed by a factor $f_{pb}$ , the ratio of photocentre-barycenter distance to total separation in the inner orbit.", "There are then two cases on observations as simulated in PS22; if the inner-orbit angular separation is $\\theta _{inn} \\le 1 $ , we assume that 
GAIA measures “object 2\" as the photocenter of stars 2+3 ; while if $\\theta _{inn} \\ge 1 $ we assume GAIA measures the photocenter as star 2 only, and star 3 is assumed to be resolved but undetected.", "Therefore $f_{pb}$ is defined by $f_{pb} = {\\left\\lbrace \\begin{array}{ll}\\frac{M_3}{M_2+M_3} - \\frac{L_3}{L_2 + L_3} \\qquad (\\theta _{inn} < 1 \\text{ arcsec} ) \\\\\\frac{M_3}{M_2+M_3} \\qquad (\\theta _{inn} \\ge 1 \\text{ arcsec} )\\end{array}\\right.", "}$ where the $L_{2,3}$ are the model luminosities.", "The observable velocity difference in the wide system is then defined by $\\mathbf {v}_{3D,obs} = \\mathbf {v}_{out} - f_{pb} \\, \\mathbf {R}\\mathbf {v}_{inn}$ where $\\mathbf {v}_{out}$ is the outer orbit velocity (star 1 relative to the barycentre of 2+3), and $\\mathbf {v}_{inn}$ is the relative velocity between stars 2+3 in its own plane.", "Then $_{3D,obs}$ is projected to 2-dimensional projected velocity $\\Delta v_p$ according to the random viewing direction; the result of this is a set of 800,000 random triple-system snapshots which include the key observables $\\Delta v_{p}$ and $r_p$ along with other orbit parameters.", "We then take subsets of these in slices of these observables, and then test whether or not the presence of star 3 is detectable by a number of methods; these tests are outlined in the next section." ], [ "Detectability of the third star", "Here for each simulated triple-system snapshot, we test whether the third star is detectable by imaging, astrometry/radial velocity methods, or both or neither.", "We note that the outer orbits here have separation $\\ge 16$ arcsec, orbital period $> 10^5$ years and negligible acceleration, so we assume our outer “star 1\" has no effect on the detectability; only the parameters of the inner binary 2+3 are relevant.", "The methods are subdivided into cases of increasing difficulty/observing cost as follows:" ], [ "Imaging", "Imaging is optimal for relatively wide inner orbits $r_{p,inn} 30\\, {\\rm AU}$ and when star 3 is above the bottom of the main sequence, since it is simple and a positive detection may be obtained in a single epoch.", "For imaging, we test in increasing order of “observing cost\" as follows: first direct seeing-limited imaging, then speckle imaging, then coronography, with detection criteria defined as follows: For direct seeing-limited imaging we adopt parameters appropriate for the future Rubin first-year dataset; we adopt a model point-spread function with (pessimistic) 1.5 arcsec seeing and a Moffatt profile with $\\beta = -2.5$ .", "We then assume the third star is detectable if the PSF peak of star 3 exceeds the PSF of star 2 at the centroid of star 3 (e.g.", "assuming 20 percent PSF subtraction accuracy for a 5-sigma detection of star 3).", "We note here that false-positive “triples\" from random background stars are a concern in the case of direct imaging at separations $1\\,$ arcsec; however, resolved third stars brighter than $G 20$ were already rejected by PS22; for faint stars detectable by Rubin, most should be M-dwarfs or later, and more distant background stars should mostly have colours / magnitudes inconsistent with an M/L/T dwarf at the measured distance of star 2.", "If our star 3 fails to pass the detection threshold for seeing-limited deep imaging above, we next test for speckle imaging; although the contrast is not as good as for AO-assisted coronography (see below), speckle imaging has advantages of low observing overheads (helpful for large samples and short 
exposures as here), and can detect companions near the diffraction limit at $I-$ band.", "We assume the quoted performance of the twin instruments Alopeke and Zorro at Gemini N/S respectively [10]; we approximate the detectable delta-magnitude vs separation as two straight lines as follows: $\\Delta m_{max} = {\\left\\lbrace \\begin{array}{ll} 0 + 5 (\\theta - 0.05)/0.15 \\ \\text{ if } 0.05 \\le \\theta \\le 0.2 \\text{ arcsec} \\\\5 + 0.2 (\\theta - 0.2) \\ \\text{ if } 0.2 < \\theta < 1 \\text{ arcsec}\\end{array}\\right.", "}$ Here these two lines are slightly pessimistic compared to the actual red-band performance curves in [10].", "(Note here the outer angular limit of 1 arcsec is set by the small size of the detector window region, due to the need to read out the window at over 100 frames/sec.)", "If the angular separation is $0.05 \\le \\theta \\le 1$ arcsec, and the simulated contrast $\\Delta m$ of star 3 vs star 2 is below the $\\Delta m_{max}$ given by Eq.", "REF , we define star 3 as detectable by speckle imaging.", "Otherwise, we test for coronagraphy.", "For coronagraphy, we adopt detection limits similar to the projected performance of the near-future ERIS instrument [4] on ESO VLT.", "This is a near-infrared AO-assisted imager/spectrograph including a coronagraph option.", "Based on Figure 8 of [4], we adopt a contrast limit of $10^{-3}$ down to an inner working angle of $2.44 \\, \\lambda /D$ , corresponding to $0.14$ arcsec at K-band; this contrast limit is rather conservative, as the Figure indicates actual contrast performance closer to $10^{-4}$ .", "(We note here that dedicated planet-imagers such as SPHERE can achieve much better contrast; however these require a very bright primary star for the required AO performance, so most of the PS22 sample are too faint for SPHERE).", "Assuming an approximate K-band luminosity-mass relation $L_K \\propto M^{2.6}$ , the $10^{-3}$ contrast translates to a limiting mass ratio $M_3/M_2 \\ge 0.07$ .", "We also assume a lower mass limit $M_3 > 0.06 \\,M_\\odot $ , since old objects below this mass will have cooled to the late-T or Y spectral class and be too faint to reliably detect at our median 180 pc distance; most L-dwarfs should be detectable if above the contrast limit.", "In most cases, this lower mass limit is more stringent than the contrast limit.", "If none of the above tests results in a simulated detection, we define the third star as non-detected in imaging."
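The imaging criteria above can be summarised compactly; the following is a minimal illustrative sketch of the adopted thresholds (the speckle contrast limit of Eq. REF and the assumed ERIS-like coronagraphic limits), not the actual simulation code of this paper.

```python
# Minimal sketch of the imaging detection thresholds adopted above
# (theta in arcsec, delta_m = magnitude contrast of star 3 vs star 2,
# masses in solar masses). Illustrative only.
def speckle_dm_max(theta: float) -> float:
    """Approximate Alopeke/Zorro contrast limit from Eq. REF."""
    if 0.05 <= theta <= 0.2:
        return 5.0 * (theta - 0.05) / 0.15
    if 0.2 < theta < 1.0:
        return 5.0 + 0.2 * (theta - 0.2)
    return float("-inf")  # outside the 0.05-1.0 arcsec working range

def speckle_detectable(theta: float, delta_m: float) -> bool:
    return delta_m <= speckle_dm_max(theta)

def coronagraph_detectable(theta: float, m2: float, m3: float) -> bool:
    """Assumed ERIS-like limits: inner working angle 0.14 arcsec,
    mass ratio >= 0.07 (from the 1e-3 contrast with L_K ~ M^2.6),
    and M3 > 0.06 Msun so the companion has not cooled to late-T/Y."""
    return theta >= 0.14 and (m3 / m2) >= 0.07 and m3 > 0.06

print(speckle_detectable(0.3, 4.0))            # True: 4 mag contrast at 0.3"
print(coronagraph_detectable(0.2, 0.8, 0.07))  # True: L-dwarf companion
```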
], [ "Astrometry/RV", "A substantial fraction of our simulated triples have angular separations too small, or third stars too faint, to be detected in imaging.", "The GAIA astrometry can detect the influence of a third star via the non-linear motion of the photocentre of stars 2+3, as in e.g.", "[9] for GAIA DR3, i.e.", "deviations from a 5-parameter fit with position, parallax and constant proper motion.", "Here we assume the 10-year extended mission, since the detectability of uniform acceleration improves as $\\sim T^{2.5}$ for fixed scanning cadence; so the 10-year extended mission is much superior to the baseline 5-year mission for detecting a near-constant acceleration signal.", "For the 10-year mission, we assume that the acceleration signal is non-degenerate with the annual parallax signal (this may not be a good approximation if using the 2.75-year baseline of GAIA DR3, but should be good for the 10-year mission).", "For GAIA astrometric detection, we split according to whether the inner-orbit period is more or less than 10 years.", "If less than 10 years, a full orbit will be seen by GAIA, with a median of 140 visits.", "A relevant parameter here is the single-visit astrometric precision; this varies as a function of G magnitude, and a fitting formula is given by Eq.", "15 of [17] as follows $\\sigma _{AL} & = & \\frac{ 100 + 7.75 \\sqrt{ -1.631 + 680.766 \\, z+ 32.732 \\, z^2} }{ \\sqrt{9} } \\, \\, \\mu {\\rm as} \\nonumber \\\\z & \\equiv & 10^{0.4 \\,[max(G, 14) - 15] }$ where $\\sigma _{AL}$ is the 1D along-scan astrometric precision for a single focal plane transit, crossing 9 CCDs.", "This gives a value of $76 \\, \\mu {\\rm as} $ at $G < 14$ , rising to $102 \\, \\mu {\\rm as} $ at $G = 15$ near the median of Pittordis2022, and $226 \\, \\mu {\\rm as} $ at $G =17$ , the faint-end cutoff of the Pittordis2022 sample.", "For the present case we assume the orbit is reliably detectable if $2 \\, f_{pb} \\, a_{inn} / d \\ge 500 \\, \\mu {\\rm as} $ where the left-hand-side is the shift of the photocentre of stars 2+3.", "For inner orbits longer than 10 years, GAIA sees only a partial orbit, and for much longer periods this approximates a uniform acceleration.", "We compute the angular acceleration $\\dot{\\mathbf {\\mu }}$ of the observable photocentre; in this case the difference between the position at mid-mission and the mid-point between initial and final positions is given by $0.5 \\, \\dot{\\mathbf {\\mu }} \\, T^2$ for $T = 5\\, \\text{yr}$ ; if this exceeds $500 \\, \\mu {\\rm as} $ , corresponding to angular acceleration $\\dot{\\mathbf {\\mu }} > 40 \\, \\mu {\\rm as} \\, \\,{\\rm yr}^{-2}$ , we define the acceleration as significantly detectable.", "Alternatively, a third star may be detected by radial velocity drift of star 2.", "For these purposes, this is less costly than planet-detection since we can flag triple systems without requiring full characterisation, and false-positives are less wasteful, so a significant RV shift between just two well-separated epochs would be sufficient to flag a system as triple.", "This does still require a minimum of two spectroscopic observations, so is significantly more costly than imaging observations.", "For radial velocity drift, we compute the radial component of acceleration of star 2 due to star 3.", "We estimate an observable radial-velocity precision as $0.03 {\\rm \\, km \\, s^{-1} }$ per single epoch, which is realistic for stars at $G \\sim 16$ mag (note that planet-hunting precision can be much better $\\sim 0.001 
{\\rm \\, km \\, s^{-1} }$ , but is only achievable for considerably brighter stars $G \\lesssim 12$ ).", "We then assume that a shift in RV exceeding $0.25 {\\rm \\, km \\, s^{-1} }$ over an arbitrary 5-year baseline is detectable at high significance, i.e.", "radial acceleration of star 2 exceeding $0.05 {\\rm \\, km \\, s^{-1} }\\,{\\rm yr}^{-1}$ .", "Finally, short-period spectroscopic binaries may be detected via splitting of spectral lines in a single-epoch spectrum, or a secondary peak or asymmetry in autocorrelation of a spectrum.", "Based on estimates in [26], we choose a detection criterion as a radial-velocity difference between stars 2 and 3 above $20 {\\rm \\, km \\, s^{-1} }$ , and mass ratio $M_3/M_2 > 0.7$ ; this only occurs for small and fast inner orbits, and turns out to almost never occur in our relevant range of parameter space, so this test is unimportant later.", "We apply all the tests above to each simulated triple snapshot, with results in the next section." ], [ "Simulation results", "For each of the simulated triple snapshots above, we have various observables and detectability results defined as above.", "As in Pittordis2022, we take slices in outer orbit projected separation $r_p$ , with four slices $5.0 - 7.1 \\, {\\rm kAU}$ , $7.1-10.0 \\, {\\rm kAU}$ , $10.0 - 14.1 \\, {\\rm kAU}$ , $14.1-20 \\, {\\rm kAU}$ .", "In Pittordis2022 the main relative-velocity observable is the dimensionless parameter $\\tilde{v}\\equiv \\frac{ \\Delta v_p }{ v_c(r_p)} \\ \\ ,$ where $\\Delta v_p$ is the projected velocity difference of the two objects (assuming both stars are at the mean of the two distances from GAIA), and $v_c(r_p)$ is the Newtonian circular-orbit velocity at the observed projected separation, using masses estimated from a main sequence mass-luminosity relation (a short illustrative sketch of this computation is given at the end of this section).", "Here we define the simulated $\\Delta v_p$ as the 2D projection of $\\mathbf {v}_{3D,obs}$ from Eq.", "REF above, i.e.", "the velocity of star 1 relative to the “observable centre\" of stars 2 and 3.", "The observed histograms of $\\tilde{v}$ were used in the fitting procedure shown in Figures 13 – 19 of Pittordis2022.", "We recall that the range $1.0 \\le \\tilde{v}\\le 1.5$ is particularly important for gravity testing: the key signature of MOND is an excessive fraction of pure binaries in this range, and this excess cannot be mimicked by changing the eccentricity distribution in GR.", "Here, we take slices in $r_p$ and $\\tilde{v}$ as above, and then evaluate for each of the simulated systems whether the third object is detectable by each of the various methods in Sec.", "above.", "Detection percentages by method are shown in Table REF .", "Detection results are shown as scatter plots in Figures REF - REF , relating to the main parameters of the third star, i.e.", "the mass ratio $M_3/M_2$ and the projected separation of the inner orbit $r_{p,inn}$ .", "(Orbit parameters $a_{inn}$ and $e_{inn}$ would be more fundamental, but these will rarely be measurable in practice given the long periods, so we focus on direct observables here).", "Plots are colour-coded depending on detectability of the third object by various method(s) as follows: black points are undetectable by any of the methods in Section , while magenta points are detectable by both GAIA astrometric deviations and at least one of the imaging methods (nearly always speckle imaging or coronagraphy).", "Colours other than black or magenta are detectable by either some imaging method(s) or astrometry, but not both.", "Figure: As Figure REF , for observable velocity
ratio between $1.0 < \\tilde{v}< 2.0$ .", "Figure: As Figure REF , for observable velocity ratio between $2.0 < \\tilde{v}< 3.0$ .", "Table: For simulated triples with outer projected separation $5\\, {\\rm kAU}< r_p < 20\\, {\\rm kAU}$ , divided in selected $\\tilde{v}$ bins, the table shows the percentage of systems where the third star is detectable by various methods, or none.", "The subtotal row is the subtotal of rows 1-3, so rows 4-8 sum to almost 100 percent.", "The grey, blue and orange points to the right of these scatter plots show triples detectable by seeing-limited imaging, speckle imaging and ERIS coronagraphy respectively.", "As expected given the median distance 180 pc, seeing-limited imaging is effective at the largest separations $r_{p,inn} \\gtrsim 150 \\, {\\rm AU}$ , while speckle imaging and coronagraphy are effective down to $\\sim 25 \\, {\\rm AU}$ .", "To the left of each plot, the brown and green points show inner orbits detectable by GAIA astrometry but none of the imaging methods; these are coded as period $< 10 \\,{\\rm yr}$ (brown) or $> 10 \\,{\\rm yr}$ (green); here longer periods dominate over shorter, so most astrometric detections will observe a partial orbit.", "As above, the magenta points near the centre show triples detectable by both GAIA astrometry and at least one of the imaging methods (where speckle imaging is the largest contributor).", "These magenta points are concentrated near projected separations $r_{p,inn} \\sim 30 \\, {\\rm AU}$ ; this corresponds to angular separations $\\sim 0.16 \\,\\text{arcsec}$ at the median distance of Pittordis2022, and orbital periods $\\sim 150$ years; this implies that GAIA observes around 7 percent of a full orbit and the curvature is sufficient to be detectable, though there is little chance of constraining the overall orbit shape with such a short arc.", "The existence of this magenta overlap region in the centre is a major positive feature of our results: this implies that main-sequence third stars with $M_3 \\ge 0.08 \\,M_\\odot $ are detectable at any separation by one or more methods.", "Here we comment on and explain various features of the scatter plots: In general the inner-orbit projected separation in each slice has a substantial scatter, caused by the randomness in relative orbit phases, inclinations and viewing angles.", "The divisions between colour regions are relatively clear-cut: this occurs because 90 percent of the sample have $90 \\, {\\rm pc}< d < 300 \\, {\\rm pc}$ so the scatter in distances is moderate, and projected separations therefore map to angular separations with likewise moderate scatter.", "The “bow-shaped\" overall distribution of points is caused by the $f_{pb}$ factor above, which for unresolved inner orbits is maximal at intermediate mass ratios $M_3 /M_2 \\sim 0.7$ .", "The velocity perturbation from star 3 depends on the product $f_{pb} \\, v_{23}$ , so at a given value of $\\tilde{v}$ , larger $f_{pb}$ correlates with slower and wider inner orbits.", "At wider separations in Figure 3 there is a cloud of points at upper right making a y-shaped feature, caused by the larger $f_{pb}$ approaching 0.5 when the third star is assumed separately resolved by GAIA.", "At small separations $\\lesssim 10 \\, {\\rm AU}$ , nearly all third objects are detectable by GAIA astrometry down to our adopted mass lower limit $0.01 \\,M_\\odot $ ; even smaller Jupiter-like planets produce reflex motion of star 2 too small to be a concern here.", "(“Hot Jupiters\" can produce reflex velocities up to
$\\sim 0.1 {\\rm \\, km \\, s^{-1} }$ , but those have periods much less than 1 year, and the resulting perturbation on the mean proper motion of star 2 will average down to a much smaller value over a multi-year GAIA baseline.", "So Jupiter-mass planets at any separation are not a significant contaminant for the purpose of the gravity test as in Pittordis2022.)", "The non-detectable third stars (black points) are primarily cool brown dwarfs below $0.06 \\,M_\\odot $ at projected separations $\\gtrsim 30 \\, {\\rm AU}$ ; here the orbital periods are well over 100 years so astrometric accelerations are very small, and also such objects are often too faint for ERIS coronagraphy; L-type brown dwarfs are accessible, but late-T or Y dwarfs are extremely challenging at $\\sim 200\\, {\\rm pc}$ distance.", "Such objects in principle may well be detectable either with JWST $3 - 5 \\mu {\\rm m}$ imaging at separations $\\sim 0.5 \\,\\text{arcsec}$ , or ELT coronagraphy at smaller separations; however we have not considered this at the present time, due to the large challenge of getting enough observing time on these major facilities for sample sizes of many hundreds of candidate systems; this is deferred for a future study.", "It is notable from Table REF that the fraction of triples detectable with GAIA astrometric acceleration increases steeply with $\\tilde{v}$ , from 12 percent at $0.5 < \\tilde{v}< 1$ up to 65 percent at $2 < \\tilde{v}< 3$ ; this may be explained as follows: for fixed angles, orbit phase and masses, orbital velocity scales $\\propto r^{-1/2}$ while acceleration scales $\\propto r^{-2}$ ; so inner-orbit acceleration is steeply correlated with velocity.", "It is also notable that the fraction of triples detectable overall (by any method) is quite weakly correlated with outer orbit separation $r_p$ , but is quite strongly correlated with $\\tilde{v}$ ; as in Table REF , the overall detection percentage is 80 percent at $0.5 < \\tilde{v}< 1$ , rising to 95 percent at $2 < \\tilde{v}< 3$ .", "This occurs because pure binaries must have $\\tilde{v}< \\sqrt{2}$ in GR (modulo observational errors), or $\\tilde{v}< 1.65$ in the MOND model chosen in PS22, with over 90% expected to have values $\\tilde{v}< 1$ in GR.", "To achieve a value $2 \\le \\tilde{v}\\le 3$ , the inner-orbit perturbation then has to dominate the outer-orbit velocity; this means that the lower-right portion of Figure REF is almost empty, since a low-mass $M_3$ at large $r_{p,inn}$ cannot produce a fast enough perturbation on star 2.", "The slow slice $0.5 \\le \\tilde{v}\\le 1$ in Figure REF includes cases where the inner-orbit perturbation is similar to or less than the outer-orbit total velocity, since the simulated (and observed) $\\tilde{v}$ is a weighted resultant of both orbit velocities; so brown-dwarf $M_3$ at large separations commonly end up in this slice (along with a substantial fraction of the pure-binary systems, which were simulated in Pittordis2022 but not repeated here; we simulate pure triples because it is simple to scale results to a general mixture of pure binaries and triples, as below)."
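For reference, the velocity ratio $\\tilde{v}$ defined earlier in this section can be computed directly from the observables; the following is a minimal illustrative sketch (masses in solar masses, separations in kAU), not the PS22 pipeline.

```python
# Minimal sketch: the dimensionless velocity ratio v-tilde = dv_p / v_c(r_p).
import math

GM_SUN = 1.32712440018e11   # G * M_sun in km^3 s^-2
KAU_KM = 1.495978707e11     # 1 kAU in km

def v_circ_kms(m_tot_msun: float, r_p_kau: float) -> float:
    """Newtonian circular-orbit velocity at the projected separation."""
    return math.sqrt(GM_SUN * m_tot_msun / (r_p_kau * KAU_KM))

def v_tilde(dv_p_kms: float, m_tot_msun: float, r_p_kau: float) -> float:
    return dv_p_kms / v_circ_kms(m_tot_msun, r_p_kau)

# A 1.5 Msun pair at 10 kAU with a 0.5 km/s projected velocity
# difference has v-tilde ~ 1.37, i.e. in the MOND-sensitive band.
print(round(v_tilde(0.5, 1.5, 10.0), 2))
```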
], [ "Implications for follow-up observations", "We now turn to the prospects for a realistic observing strategy.", "One point is that given observed $r_p$ and $\\tilde{v}$ as in PS22, the potential third-star may scatter over a rather wide region in $M_3, r_{p,inn}$ plane, so there is not much guidance on which detection method to use; therefore it is best to try all detection methods in ascending order of observing cost; but once a system is proven as a triple by any method, further observations are not strictly necessary.", "To minimise observing cost, it is most economical to defer observations until after the GAIA 10-year data release and a year or two of Rubin survey data is available, since these can detect a fairly substantial fraction of third-stars with no added observing cost, only moderate software and manpower effort.", "Adding radial-velocity observations appears to add surprisingly little value, since almost all systems showing measurable RV drift are also detectable with GAIA astrometry; even if an RV observing program started now, GAIA will pass its 10-year lifespan sooner than our assumed 5-year RV baseline is achieved.", "Also, focusing effort on the faster $\\tilde{v}$ systems is potentially more economical in observing time, since they are less numerous in the PS22 sample, the large majority of observed systems in this range should be triples, and can be“discarded\" after a detection; while the pure binaries will return null results and require observing all the methods to get maximum purity of the survivors.", "Somewhat counter-intuitively, this means that regions of parameter-space containing more triples are easier for follow-up.", "Combining the four $r_p$ slices above, the PS22 sample (refined with additional RUWE < 1.4 cut) contains 428 systems with $5 < r_p < 20 \\, {\\rm kAU}$ and $2 \\le \\tilde{v}\\le 3$ ; the PS22 model fits predict that these comprise no pure binaries, but approximately 83 percent triples and 17 percent flyby pairs.", "According to the simulated triples above, about 73 percent of the triples in this slice (60 percent of the total) should reveal as such in GAIA or Rubin data.", "This leaves $\\sim 180$ systems, $ \\sim 100$ triples and 80 flybys, to follow-up initially with speckle imaging; this should reveal approximately 60 further triples, leaving only 120 systems or 240 stars to observe with coronographic imaging; these numbers look relatively achievable for complete followup.", "A campaign focused on $1 \\le \\tilde{v}\\le 2$ systems is more directly relevant to the gravity test, since it is this slice which contains the key difference between the GR and MOND pure-binary distributions in the fitting procedure; but this is also more costly in observing time.", "This slice contains 1520 systems in the PS22 sample (with RUWE cut as above) at $5 < r_p < 20 \\, {\\rm kAU}$ , and nearly 40 percent of these will be pure-binary systems; those should obviously return a null result for all the third-star tests, and will demand both speckle and coronographic imaging of these to get high completeness for triples, hence over 1100 stars for a complete sample; this looks possible but a fairly large-scale observing program.", "For the low-velocity slice $0.5 \\le \\tilde{v}\\le 1$ , this is still more challenging: this slice contains 2523 systems in the PS22 sample (updated as above), and fits predict that 66 percent of these are pure-binary systems; as above, we need to follow-up all the pure binaries along with the subset of triples which did not 
reveal a detection in astrometry/direct imaging.", "So this implies 4000 stars requiring speckle and coronagraphic imaging to get reasonably complete detection of the triples.", "This is very challenging for follow-up of the entire sample.", "However, note that it is not necessary to detect all the triples individually to obtain a useful gravity test: as long as we can get a reasonably secure statistical model of the triple population, we can extrapolate this model to undetectable third stars, and correct for random-sampling in the fraction of systems actually observed.", "Therefore it seems likely that a weighted campaign is most efficient, observing quasi-random subsamples of systems with a selection probability varying as a rising function of $\\tilde{v}$ up to some upper limit.", "This should help to deliver an accurate model for the overall population of triples in parameter space.", "Optimising the detailed design of such a program is left to future work.", "For the present, the main positive result from this study is that a high fraction of triple systems in the PS22 sample of wide-binary candidates can be securely detected as triple with a large but realistic observing program.", "This should allow building a robust model for the overall triple population; then a joint fit of binaries, triples and other systems to a wide-binary sample similar to the PS22 sample (updated with improved GAIA data) should allow a strong test for MOND-like modified gravity in the medium-term future." ], [ "Conclusions", "We have simulated prospects for observational detection of a third star in the sample of candidate wide-binary systems selected and analysed by Pittordis2022; this is motivated by the importance of understanding or removing triple systems to allow a test for modified gravity.", "We generated a large sample of simulated hierarchical triple systems with random masses, orbit sizes, eccentricities and inclinations, then took “snapshot\" observations of these at random epochs and viewing angles to produce observables $r_p$ , $\\tilde{v}$ as in Pittordis2022.", "For each system, we then evaluated the detectability of the third star using various observing methods including seeing-limited imaging, speckle imaging, coronagraphic imaging, GAIA astrometric acceleration, and radial velocity drift, and assessed total detectability across parameter space.", "The prospects are encouraging in that a high fraction of triples are detectable as such, increasing with velocity ratio $\\tilde{v}$ from $\\sim 80$ percent at $\\tilde{v}< 1$ to 95 percent at $2 < \\tilde{v}< 3$ .", "It is fortunate that the lower limit of speckle imaging somewhat overlaps with the upper limit for GAIA astrometric acceleration, so almost all main-sequence third stars are detectable at any separation; non-detections are predominantly cool brown dwarfs at separations $\\gtrsim 25 \\, {\\rm AU}$ which produce too small an astrometric acceleration.", "In principle, a realistic observing program starting soon after the GAIA extended mission 10-year data release can strongly constrain the population of triples, and then velocity differences in wide binaries should become a strong test for MOND-like theories of modified gravity at low accelerations." ] ]
2210.07781
[ [ "Convolutional Neural Networks: Basic Concepts and Applications in\n Manufacturing" ], [ "Abstract We discuss basic concepts of convolutional neural networks (CNNs) and outline uses in manufacturing.", "We begin by discussing how different types of data objects commonly encountered in manufacturing (e.g., time series, images, micrographs, videos, spectra, molecular structures) can be represented in a flexible manner using tensors and graphs.", "We then discuss how CNNs use convolution operations to extract informative features (e.g., geometric patterns and textures) from the such representations to predict emergent properties and phenomena and/or to identify anomalies.", "We also discuss how CNNs can exploit color as a key source of information, which enables the use of modern computer vision hardware (e.g., infrared, thermal, and hyperspectral cameras).", "We illustrate the concepts using diverse case studies arising in spectral analysis, molecule design, sensor design, image-based control, and multivariate process monitoring." ], [ "Introduction", "Manufacturing is seeing an increasing use of real-time sensing and instrumentation technologies that generate data in the form of images/video (e.g., infrared and thermal), vibration/audio, and other complex data forms such as chemical spectra and geometrical structures (e.g., 3D printed objects, synthesized molecules, crystals).", "Manufacturing is also seeing the increasing use of automation systems that aim to exploit such data to make decisions (e.g., optimize production and detect anomalies).", "Moreover, modern automation systems are being designed to take instructions/targets in the form of complex data objects (e.g., voice, text, chemical spectra, and molecular structures).", "Modern automation systems used in manufacturing embed highly sophisticated computing workflows that use tools from data science and machine learning (ML) to extract and interpret actionable information from complex data streams.", "Such workflows resemble those used in other advanced technologies such as autonomous vehicles (e.g., aerial, terrestrial, and aquatic) and robotics.", "Moreover, such workflows begin to resemble human systems in which visual, auditive, tactile, and olfactory signals (data) are routinely used to make decisions.", "For instance, the human olfactory system generates signals when exposed to specific chemical structures and such signals are processed and interpreted by the brain to detect anomalies.", "Similarly, the human visual system generates interpretable signals when exposed to objects with specific geometrical and color features and our auditory system generates interpretable signals when exposed with specific frequencies.", "As such, from a conceptual stand-point, we can see that sensing and data science technologies are enabling increasing convergence between industrial (artificial) and human (natural) perception and decision-making.", "This opens new and exciting opportunities to synergize human and artificial intelligence with the ultimate goal of making manufacturing more efficient, safe, sustainable, and reliable.", "In this chapter, we focus on ML technologies that enable information extraction from complex data sources commonly encountered in manufacturing.", "Specifically, we review basic concepts of convolutional neural networks (CNNs) and outline how these tools can be used to conduct diverse decision-making tasks of interest in manufacturing.", "At their core, CNNs use a powerful and flexible mathematical operation 
known as convolution to extract information from data objects that are represented in the form of regular grids (1D vectors, 2D matrices, and high-dimensional tensors) and irregular grids (2D graphs and high-dimensional hypergraphs).", "These data representations are flexible and can be used to encode a wide range of data objects such as audio signals, chemical spectra, molecules, images, and videos.", "Moreover, such representations can encode multi-channel data, which allows capturing color and multi-variate inputs (e.g., multi-variate time series and molecular graphs).", "CNNs use convolution operations to extract the features of the data that best explain an output; these features are obtained by learning optimal convolution filters (operators), which are the parameters of the CNN.", "Learning the operators requires sophisticated optimization algorithms and can be computationally expensive.", "CNNs are a class of models from an emergent ML field known as geometric deep learning, which leverages tools from geometry and topology to represent and process data.", "The earliest version of a CNN was proposed in 1980 by Kunihiko Fukushima [21] and was used for pattern recognition.", "In the late 1980s, the LeNet model proposed by LeCun et al.", "introduced the concept of backward propagation, which streamlined learning computations using optimization techniques [45].", "Although the LeNet model had a simple architecture, it was capable of recognizing hand-written digits with high accuracy.", "In 1998, Rowley et al.", "proposed a CNN model capable of performing face recognition tasks (this work revolutionized object classification and detection) [75].", "The complexity of CNN models (and their predictive power) has dramatically expanded with the advent of parallel computing architectures such as graphics processing units [65].", "Modern CNN models for image recognition include SuperVision [44], GoogLeNet [86], VGG [82], and ResNet [29].", "New models are currently being developed to perform diverse computer vision tasks such as object detection [69], semantic segmentation [49], action recognition [81], and 3D analysis [37].", "Nowadays, CNNs are routinely used in smartphones (unlock feature based on face recognition) [64].", "While CNNs were originally developed for computer vision, the grid data representation used by CNNs is flexible and can be used to process datasets arising in many different applications.", "For instance, in the field of chemistry, Hirohara and co-workers proposed a matrix representation of SMILES strings (which encodes molecular topology) by using a technique known as one-hot encoding [30].", "The authors used this representation to train a CNN that could predict the toxicity of chemicals; it was shown that the CNN outperformed traditional models based on fingerprints (an alternative molecular representation).", "In biology, Xie and co-workers applied CNNs to count and detect cells from micrographs [92].", "More recently, CNNs have been extended to process graph data representations, which has greatly expanded their application scope.", "These types of CNNs (known as graph neural networks) have been widely used in the context of molecular property predictions [17], [26].", "Manufacturing covers a broad space of important products and processes that is virtually impossible to enumerate; in this chapter, we focus our attention on applications of CNNs to examples of potential relevance to chemical and
biological manufacturing (which cover domains such as pharmaceuticals, agricultural products, food products, consumer products, petrochemicals, and materials).", "We also highlight that these manufacturing sectors are seeing an emergent use of autonomous platforms that enable flexible and high-throughput experimentation and/or on-demand production; as such, the concepts discussed can be applicable in that context.", "We provide specific case studies that we believe are representative examples of how CNNs can be used to facilitate decision-making in manufacturing.", "Specifically, we show how to use CNNs to i) decode multivariate time series data; ii) decode complex signals generated from microscopy and flow cytometry to detect contaminants in air and solution; iii) decode real-time ATR-FTIR spectra to characterize plastic waste streams; iv) predict surfactant properties directly from their molecular structures, and v) map image data into signals for feedback control." ], [ "Data Objects and Mathematical Representations", "A wide range of datasets encountered in manufacturing can be represented in the form of two fundamental mathematical objects: tensors and graphs.", "Such representations are so general that, in fact, it is difficult to imagine a dataset that cannot be represented in this way.", "The key distinction between a tensor and a graph is that a tensor is a regular object (e.g., a regular mesh) while a graph is not (e.g., an irregular mesh).", "Moreover, tensors implicitly encode positional context, while graphs might or might not encode positional context.", "Differences between tensor and graph representations play a key role in the way that CNN architectures are designed to extract information from data.", "Unfortunately, it is often not obvious what data representation is most suitable for a particular application and sometimes the representation might not naturally emerge from the data.", "In fact, one can think of CNNs as a tool for representation learning, in the sense that they aim to learn the best way to represent the data to make a prediction (a small sketch contrasting the two representations is given below)."
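To make the tensor/graph distinction concrete, the following toy sketch (an assumed example, not taken from the case studies) represents the same 3x3 grayscale image both as a 2D tensor and as a graph whose nodes carry pixel intensities and whose edges connect neighboring pixels.

```python
# Toy sketch: one 3x3 grayscale image as (i) a tensor and (ii) a graph.
import numpy as np

image = np.array([[0.1, 0.5, 0.9],
                  [0.2, 0.6, 0.8],
                  [0.0, 0.4, 0.7]])   # 2D tensor; position is implicit

n = image.size                        # 9 nodes, one per pixel
A = np.zeros((n, n))                  # adjacency matrix (graph topology)
x = image.flatten()                   # node features (pixel intensities)
for i in range(3):
    for j in range(3):
        k = 3 * i + j
        if j + 1 < 3:                 # horizontal 4-neighbor edge
            A[k, k + 1] = A[k + 1, k] = 1
        if i + 1 < 3:                 # vertical 4-neighbor edge
            A[k, k + 3] = A[k + 3, k] = 1

# The tensor encodes position; the graph encodes only connectivity, so
# permuting node labels (with A and x permuted consistently) leaves the
# represented object unchanged.
print(image.shape, int(A.sum()) // 2)  # (3, 3) and 12 edges
```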
], [ "Tensor Representations", "Data objects are often attached to a grid; the most common example of this is a grayscale image, which can be represented as a 2D grid object.", "Here, every spatial grid point is a pixel and the data entry in such pixel is the intensity of light.", "A grayscale video is simply a sequence of grayscale images and can be represented as a 3D grid object (a couple of spatial dimensions plus time); here, every space-time grid point is a voxel and the data entry is the intensity of light.", "A common misconception of grid data is that it can only be used to represent images and videos but its scope is much broader.", "For instance, 3D density fields of chemicals or velocities (as those generated using molecular and fluid dynamics simulations) can be represented as a 3D grid; moreover, the thermal field of a surface can be represented as a 2D grid.", "Tensors are mathematical objects used to represent grid data.", "A tensor is a generalization of vectors (1D tensors) and matrices (2D tensors) to high dimensions [27].", "A key property of a tensor is that it implicitly encodes positional context and order (it lives in a Euclidean space); specifically, every entry of a tensor has an associated set of coordinates/indexes that uniquely specify the location of an entry in the tensor.", "Due to its positional context, the nature of the tensor can altered by rotations; for instance, rotating an image (or transposing its associated matrix) distorts its properties.", "Figure: Representation of a grayscale image (left) as a 2D grid object (middle).", "The grid is a matrix in which each entry represents a pixel and the numerical value in the entry is the intensity of light.", "Representation of the grayscale image as a manifold (right); this reveals geometrical patterns of the image.Tensors are flexible objects that can also be used to represent multi-attribute/multi-channel grid data.", "For example, color images can be represented as a superposition of three grids (red, green, and blue channels).", "Here, each channel is a 2D tensor (a matrix) and the stacking of these three channels is a 3D tensor.", "Channels can also be used to represent multi-variate data in each entry of a grid.", "For instance, an audio or sensor signal is a time series that can be represented as a one-channel vector, while a multivariate time series (e.g., as such obtained from a collection of sensors in a manufacturing facility) can be represented as multi-channel vector.", "It is important to highlight that there is an inherent duality between images (reality as perceived by human vision or an optical device) and tensors (its mathematical representation).", "Specifically, images are optical fields that our visual sensing system capture and process to navigate the world and make decisions, while tensors are artificial mathematical representations used for computer processing.", "Making this distinction explicit is important, because humans typically excel at extracting information from visual data (without having any knowledge of mathematics) compared to numerical data (e.g., number sequences).", "As such, this begs the questions: Why do automation systems present data to human operators as numbers?", "What are the best visual representations that humans can use to interpret and analyze data more easily?", "These questions are at the core of human-computer interaction and highlight the relevance of data visualization and processing techniques.", "It is also important to highlight that the human 
vision system and the brain have inherent limitations in sensing and interpreting optical fields.", "For instance, the human vision and auditory system cannot capture all frequencies present in an optical field or an audio signal; as such, we need instrumentation (e.g., microscopes and nocturnal vision systems) that reveal/highlight information that cannot be captured with our limited senses.", "Moreover, the human brain often gets “confused” by distortions of optical fields and audio signals (e.g., rotations and deformations) and by noise (e.g., fog and white noise).", "These limitations can be overcome with the use of artificial intelligence tools such as CNNs.", "Here, sensing signals such as images and audio signals are represented mathematically as grid data to extract information.", "Unfortunately, grid data representations are limited in that they inherently represent regular objects and are susceptible to rotations and deformations." ], [ "Graph Representations", "Graphs provide another flexible and powerful mathematical data representation.", "A graph is a topological object that consists of a set of nodes and a set of edges; each node is a point in a graph object and each edge connects a pair of nodes.", "The connectivity (topology) of a graph is represented as an adjacency matrix.", "Figure: Representation of a grayscale image (left) as a graph (middle).", "Each node in the graph represents a pixel and the weight in the node encodes the intensity of light.", "The graph is a topologically invariant object that is not affected by deformations (right).", "A graph defines an irregular grid and one can attach multi-channel data (features) to nodes and edges in such a grid.", "For instance, when representing a molecule, one can attach multiple features to each atom (e.g., identity and charge).", "Here, one can think of each feature as a channel of the graph; graphs with multiple features in nodes and edges are also known as multi-attribute graphs.", "An example of how multi-attribute graphs are used to represent molecules is presented in Figure REF .", "Figure: Representation of a molecule as a multi-channel/multi-attribute graph.", "The molecule topology is encoded in the adjacency matrix, while features of the atoms are encoded as vectors that are stacked in a feature matrix.", "Reproduced from with permission from the American Chemical Society.", "A fundamental property of a graph is that it does not directly encode positional context (it does not live in a Euclidean space).", "As such, the graph can be stretched and rotated or the nodes can be shuffled around in space without affecting the underlying topology of the graph.", "In other words, the topology of the graph is fully dictated by the node-edge connectivity.", "This gives graph representations many useful properties (such as rotational invariance), which can be highly beneficial when representing certain systems.", "For instance, in molecular simulations, molecules might move around randomly in space, but their connectivity (e.g., hydrogen bonds) might not be affected [36].", "Another key property of graphs is that, because these are irregular grid objects, one can develop CNNs that learn features of graphs of different sizes (e.g., polymer and peptide sequences).", "It is worth highlighting that, while graphs are inherently 2D objects with no positional context, it is possible to encode 3D (or higher-dimensional) positions into each node of a graph as features [93].", "For instance, in a graph that represents the time evolution of a
supply chain network, node features can contain the specific time point and the geographical coordinates.", "Moreover, it is possible to generalize graph representations to hypergraphs, which allow representing objects in which an edge connects multiple nodes.", "These more advanced geometric representations give rise to an emerging field of machine learning known as geometric deep learning [7] and to topological data analysis [85], [83].", "These approaches enable the analysis of complex 3D (and higher-dimensional) objects." ], [ "Color Representations", "The human vision system uses color as a key source of information to make decisions.", "For computer processing, color is typically represented using a red-green-blue (RGB) color space (see Figure REF ).", "The RGB color space is an additive space in which the red, green, and blue channels are mixed together to produce a wide range of colors [31].", "Before the RGB space became the default in electronic computer systems, it was already supported by a solid theory of human color perception.", "The choice of these three primary colors is related to the physiology of the human eye; specifically, the three photoreceptor cells (cone cells) in the human eye have peak sensitivity at three wavelengths associated with red, green, and blue.", "The RGB color space was designed to maximize the difference in response of cone cells to different wavelengths of light [32].", "However, there are many other color spaces that can be useful in different situations.", "For example, the CIE L*a*b* (or LAB) color space is a powerful representation that can be used for quantitative color comparisons.", "Figure: Representation of a color image (left) as a superposition of red, green, and blue channels (middle).", "Each channel is a 2D grid object (a matrix) and stacking the channels generates a 3D tensor.", "The LAB space was defined by the International Commission on Illumination (CIE) to be a perceptually uniform color space, meaning that a group of colors separated by the same distance in LAB color space will look equally different.", "The three coordinates of the LAB color space represent the brightness of the color ($L^*=0$ for black, $L^*=100$ for white), its position between red and green ($a^*$, negative values indicate green, positive values indicate red), and its position between yellow and blue ($b^*$, negative values indicate blue, positive values indicate yellow).", "One can convert the RGB color space to a LAB color space using a nontrivial, nonlinear transformation.", "The LAB space can be used to highlight image features that might not be directly obvious to the human eye; this is a principle that can be exploited by CNNs to extract hidden information from images.", "Unlike RGB and LAB imaging, which capture only three wavelength bands in the visible spectrum, spectral imaging can use multiple bands across the electromagnetic spectrum [11].", "Spectral imaging can use infrared, visible, ultraviolet, or X-ray bands, or some combination of the above.", "Multispectral and hyperspectral cameras can capture hundreds of bands for each pixel in an image, which can be interpreted as a complete spectrum.", "Due to the wealth of information encoded in spectral images, this technique has been widely used in agriculture [50], healthcare [19], and non-destructive analysis of materials [53]."
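As a concrete illustration of these color spaces, the sketch below converts a toy RGB image to LAB and computes a perceptual color difference as a Euclidean distance in LAB coordinates; the use of scikit-image here is our own choice for illustration, not a tool prescribed by the text.

```python
import numpy as np
from skimage.color import rgb2lab  # implements the nonlinear RGB -> LAB map

# A tiny synthetic color image with float values in [0, 1].
rgb = np.zeros((2, 2, 3))
rgb[0, 0] = [1.0, 0.0, 0.0]  # red
rgb[0, 1] = [0.0, 1.0, 0.0]  # green
rgb[1, 0] = [0.0, 0.0, 1.0]  # blue
rgb[1, 1] = [1.0, 1.0, 1.0]  # white

lab = rgb2lab(rgb)  # channels: L* (brightness), a* (green-red), b* (blue-yellow)

# Perceptual uniformity: color difference ~ Euclidean distance in LAB space.
delta_e = np.linalg.norm(lab[0, 0] - lab[0, 1])
print(lab[1, 1])   # white: L* close to 100, a* and b* close to 0
print(delta_e)     # large value: red and green look very different
```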
], [ "CNN Architectures", "CNNs denote a broad class of ML models that utilize specialized convolutional blocks to extract feature information from data objects.", "CNNs were originally designed to extract data from grid objects but they recently have been adapted to extract information from graphs.", "The key mathematical operation behind CNNs is convolution; loosely speaking, this operation transforms the data in a domain of the data object by applying a weighted sum of the entries in such domain.", "The weights of such sum are defined by a convolution filter or operator.", "The goal of the CNN is to identify optimal filters or operators that maximize information extracted from a collection of input data objects; here, “maximum information\" means that such information can predict outcomes (labels) associated with the input data objects.", "For instance, one can train a CNN to predict the toxicity of a molecule directly from its molecular structure; this is done by training a CNN with a collection of molecules and associated toxicity levels.", "Figure: High-level view of a CNN architecture.", "The input data object (e.g., grid or graph) is propagated through a sequence of convolution, activation, and pooling operations to make an output prediction.", "The trainable parameters of the operations are learned by minimizing a loss function.", "The derivative/gradient of the loss function is computed using backpropagation.", "Reproduced from with permission from Wiley publishing.A CNN architecture is simply a forward propagation of the input data object through a sequence of convolution, activation, and pooling operations that contain parameters that are learned by minimizing a loss function.", "A backward propagation (backpropagation) scheme is used to compute the gradient of the loss function with respect to the parameters.", "In this section, we present a basic discussion on the mathematics of convolution and computational procedures behind the training of CNNs.", "With this, we aim to highlight the inner workings of CNNs and associated bottlenecks and challenges." 
], [ "Convolution Operations", "We will first consider CNN networks $f_{\\text{cnn}}$ that map an input tensor $V$ to output predictions $\\hat{y}$ .", "A particular convolutional layer takes an input object $V$ and convolves it by applying a convolutional operator $U$ which employs a set of convolutional filters or operators to produce a feature map $\\Psi $ .", "The convolution operation is denoted as $\\Psi = U * V$ and is simply a weighted summation between the entries of the convolution operator and the input object.", "Here, we also notice that the operator is typically much smaller than the input object and thus the operator is applied over a moving window (neighborhood) that covers the entire input object.", "This operation is not well-defined by the boundaries of $V$ , as some indexing will violate the input domain, but this is typically resolved by adding padding of zero-valued entries around the image (a technique called zero-padding).", "We can also express this operation in the compact form $\\Psi = f_c(V; U)$ .", "As shown in Figure REF , different operators highlight different features of the input object.", "Moreover, we note that one can propose a wide range of operators to extract information from the input data.", "In a CNN architecture, these operators are learned in a way that they extract information that best explains an output of interest.", "Figure: Convolution of a 3-channel grid (a color image).", "Each channel is processed using a different convolution operator.", "Each operator highlights/extracts different features (e.g., geometrical features) from the image.", "Reproduced from with permission from Wiley publishing.Following Figure REF , a common and intuitive interpretation of applying convolution operators is in terms of pattern recognition.", "The filters that comprise $U$ each embed a particular pattern that, when convolved with a particular neighborhood of $V$ (as depicted in Figure REF ), gives a score of how well the pattern is matched (where larger values denote greater matching).", "Hence, the feature map $\\Psi $ can be interpreted as a scoresheet of how certain patterns (as encoded by the filters in $U$ ) are manifested in $V$ .", "Figure: Convolution as a pattern matching technique.", "The convolution operator highlights regions in the grid in which the patterns of the image and of the operator matched.", "Reproduced from with permission from Wiley publishing.In a graph CNN architecture [95], the input graph is propagated based on its topology; here, convolution operations are performed analogously to grid data objects, by using weighted summations of the features of a node and of its neighboring nodes.", "A message passing framework [26] is typically used as a generalized approach to build GNN architectures that incorporate both node and edge features." 
], [ "Activation Functions", "The feature map output of a convolutional layer is typically mapped element-wise through an activation function $\\alpha $ to yield the activated object $A$ .", "This operation can be written as $A = h_c(\\Psi )$ and helps the CNN encode nonlinear behavior.", "Common choices for the activation function include: ${\\begin{array}{c}\\alpha _\\text{sig}(z) = \\frac{1}{1 + e^{-z}} \\\\\\alpha _\\text{tanh}(z) = \\text{tanh}(z) \\\\\\alpha _\\text{ReLU}(z) = \\text{max}(0, z).\\end{array}}$ Here, the Rectified Linear Unit (ReLU) function $\\alpha _\\text{ReLU}(\\cdot )$ is highly popular since it generally exhibits greater sensitivity to changes in the input [63].", "Moreover, this function can be easily represented using piecewise linear functions and mixed-integer programming formulations." ], [ "Pooling", "Another key component of CNNs are pooling layers; pooling operations are dimension reduction mappings $f_p$ that seek to summarize/reduce the activation $A$ by collapsing certain sub-regions of smaller dimension (referred to as pooling).", "Pooling helps to make the learned representation more invariant to small length-scale perturbations [62].", "Common choices include max-pooling and average-pooling where the maximum or average of a sub-region is used to scalarize, respectively." ], [ "Convolution Blocks", "Convolution blocks in CNNs denote the combination of convolution, activation, and pooling of a given input.", "In other words, a convolution block employs the mapping $f_{\\text{cb}}$ such that: $P = f_{\\text{cb}}(V; U) = f_p(h_c(f_c(V; U))).$ Blocks can employ more complex operation nesting (e.g., multiple convolutional layers), but we consider blocks that follow (REF ) for simplicity in presentation.", "Moreover, these can take pooled outputs $P$ as input and thus facilitate the use of multiple convolutional blocks in succession via recursively calling $f_{\\text{cb}}$ .", "These blocks essentially act as a feature extractors via convolutional filters they employ.", "Moreover, this feature extraction becomes more specialized for blocks that are located deeper in the CNN.", "In the context of computer vision, this means that the first blocks extract simple features (e.g., edges or colors) and the deeper blocks can extract more complex patterns (e.g., particular shapes).", "Thus, we can think of performing multiple convolution operations as a multi-stage distillation process where the successive feature spaces capture patterns of increased length-scale and complexity (i.e., distill the image data into an increasingly purified features) [14]." ], [ "Feedforward Neural Networks", "The output $P$ of the last convolution block is typically flattened (vectorized); this can be represented as the mapping $f_f$ and yields the feature vector $v$ : $v = f_f(P).$ The feature vector is then fed into a feedforward neural network model $f_d$ which predicts the desired state space vector $\\hat{y}$ .", "Figure REF illustrates a typical image CNN model that implements the components described above.", "It employs a couple of convolution blocks and can be described in the functional form: $\\hat{y} = f_{\\text{cnn}}(V),$ where the mapping $f_{\\text{cnn}}$ is a nested set of convolution, activation, pooling, and flattening operations.", "This emphasizes that convolution blocks are feature information extractors and the dense layers act as the predictors whose feature space is the flattened output $v$ of the final block." 
], [ "Data Augmentation", "Data augmentation denotes a class of techniques that aim to artificially expand the size of a training dataset by applying varied perturbations/transformations to the input data.", "In the context of computer vision, augmentation is often used to expand the size of the training image set in an effort to decrease the likelihood of the CNN encountering novel images and in an effort to avoid rotation invariance issues associated with grid data objects.", "Image augmentation generally denotes perturbing the training images such that the CNN sensor can be robust to those types of visual disturbance.", "Common perturbations include rotation, translation, cropping, blurring, brightness changing, splattering, and more.", "There are many software tools available that implement these transformations, which include TensorFlow and ImgAug [24], [42].", "Figure REF shows how a training image is augmented via a variety of perturbations.", "This methodology helps mitigate the risk of CNNs encountering novel images, but it is not typically possible to account for all the disturbance a process might encounter.", "Figure: Examples of image perturbation methods used in data augmentation techniques.", "Reproduced from with permission from Elsevier." ], [ "Training and Testing Procedures", "Training CNN procedures seek optimal model parameters (i.e., convolution operators and dense network weights) that minimize the error incurred by the output CNN predictions relative to the outputs of the training dataset.", "Here, we consider a training set $|\\mathcal {K}|$ that contains input-output pairs.", "The prediction error minimized in the training procedure is called the loss function $L$ .", "For example, regression models typically use a sum-of-squared-error (SSE) loss function: $L(\\hat{y}) = ||\\hat{y} - y||_2^2.$ Thus, by grouping all the CNN model parameters into $\\theta $ we can express model training as a standard optimization problem: $\\begin{aligned}&\\min _\\theta && \\sum _{k \\in \\mathcal {K}} L(\\hat{y}^{(k)}) \\\\&\\text{s.t.}", "&& \\hat{y}^{(k)} = f_{\\text{cnn}}(V^{(k)}; \\theta ), && k \\in \\mathcal {K}.\\end{aligned}$ This can readily be expressed as an unconstrained optimization problem by inserting the constraint equations directly into the objective.", "Stochastic Gradient Descent (SGD) is typically used to solve this problem due to the large amount of training data, the high number of model parameters, and the model complexity.", "Moreover, forward and backward propagation techniques are used to evaluate the objective and derivative values required by each iteration of the SGD algorithm (see Figure REF ).", "For classification models, one typically uses an entropy loss function." ], [ "CNN Architecture Optimization", "Neural architecture search (NAS) has been designed to automatically search for the best CNN architecture for a given dataset.", "Specifically, it uses a search method (e.g., reinforcement learning, evolutionary algorithms, or stochastic gradient descent) to explore the user-defined search space and chooses the best architecture based on the performance of the generated model (e.g., validation accuracy) on a given task.", "The search space contains all possible architectures.", "The architectures discovered by NAS have been shown to outperform manually designed architectures in various tasks such as image classification [68], image segmentation [47], natural language processing [18], and time series prediction [56]." 
], [ "Transfer Learning", "The training process of a CNN can be highly computationally intensive because this requires performing a large number of convolution operations (e.g., at each SGD iteration and for a large training set).", "However, it is possible to use the filters/operators of existing, trained CNNs to extract information for a new dataset and such information can be used for different tasks (e.g., develop a different ML model such as a support vector machine).", "This approach is known as transfer learning and is based on the observation that, while the filters used are not optimal for the new dataset at hand, they can still extract valuable information.", "For instance, in the context of human learning, one can often detect features for objects that we have not seen before." ], [ "Case Studies", "In this section we present a set of case studies that highlight potential uses of CNNs in a manufacturing context.", "Our goal with this is not to provide an exhaustive list of applications, but rather to highlight the capabilities of CNNs as well as to show how data representations can be used in creative ways to tackle diverse problems." ], [ "Detecting Contaminants in Solution", "This case study is based on the results presented in [39].", "Endotoxins are lipopolysaccharides (LPS) present in the outer membrane of bacteria [4].", "Recently, it has been observed that micrometer-sized liquid crystal (LC) droplets dispersed in an aqueous solution can be used as a sensing method to detect and measure the concentration of endotoxins of different bacterial organisms.", "After exposure to endotoxins, LC droplets undergo transitions that yield distinct optical signatures that can be quantified using flow cytometry (Figure REF ).", "Figure: Overview of the interaction between an endotoxin and LC emulsions.", "(a) Generation of FSC/SSC scatter fields.", "The endotoxin-LC emulsions are pumped into the flow cytometer in the direction of the sheath flow.", "Laser light is scattered from the LC droplets and collected at two angles (FSC and SSC).", "By combining the FSC and SSC data points for 10,000 LC droplets, we generate an FSC/SSC field.", "(b) Scatter field generated by LC droplets exposed to 100 pg/mL of endotoxin.", "Marginal probability densities of FSC (top) and SSC (right) light in log scale are generated with 50 bins.", "(c) 2D grid of the scatter fields by binning and counting the number of events in a bin.", "Reproduced from with permission from the Royal Society of Chemistry.Flow cytometry produces complex data objects in the form of scatter point clouds of forward and side scattering (FSC/SSC); here, each point represents a scattering event of a given droplet.", "The key observation is that a point could be converted into a 2D grid data object via binning; this is done by discretizing the FSC/SSC domain and by counting the number of points in a bin.", "The 2D grid object obtained is a matrix that we can visualize as a grayscale image; here, each pixel is a bin and the intensity is the number of events/droplets in the bin (bins with more droplets appear darker).", "This visualization is a 2D projection of a 3D histogram (the third dimension corresponds to the number of events, also known as the frequency).", "In other words, the 2D grid object captures the geometry (shape) of the joint probability density of the FSC/SSC.", "These geometrical features can be extracted automatically using a CNN.", "Figure: Effect of endotoxin concentration on the FSC/SSC scatter field (represented as 
2D grid objects).", "Scatter fields obtained using LC droplets exposed to different concentrations of endotoxin.", "As the endotoxin concentration increases, the LC droplet population shifts from a bipolar to a radial configuration.", "Reproduced from with permission from the Royal Society of Chemistry.", "The effect of endotoxin concentration on the FSC/SSC scatter fields (after binning) is shown in Figure REF ; clear differences in the patterns can be observed at concentrations that are far apart but differences are subtle at close concentrations.", "We trained a 2D CNN that can automatically detect these changes and predict concentrations from the scatter fields.", "We used the following procedure to obtain the 2D grid data objects (samples) that were fed to the CNN.", "For each sample, we generated bins for a given scatter field by partitioning the FSC and SSC dimensions into 50 segments (the grid has $50 \\times 50=2,500$ pixels).", "For each sample, we were also given reference scatter fields that represented limiting behavior: bipolar control (negative) and radial control (positive).", "Each sample is given by a 3-channel object $\\mathcal {V}$ where the first channel is the negative reference matrix $\\mathcal {V}_{(1)}$, the second channel is the target matrix $\\mathcal {V}_{(2)}$, and the third channel is a positive reference matrix $\\mathcal {V}_{(3)}$ (each channel contains a $50\\times 50$ matrix).", "This procedure is illustrated in Figure REF .", "This multi-channel data representation approach magnifies differences between the target matrix and the references (we will see that neglecting the negative/positive references does not give accurate predictions).", "Figure: EndoNet architecture.", "The input to EndoNet is a 3-channel object $\\mathcal {V} \\in \\mathbb {R}^{50 \\times 50 \\times 3}$; the channels correspond to the target, the negative reference, and the positive reference (each is a matrix of dimension $50 \\times 50$).", "EndoNet includes a convolutional block with 64 $3 \\times 3$ operators and a max-pooling block with $2 \\times 2$ operators.", "The feature map $\\mathcal {A}$ generated by the convolution and activation blocks is a tensor in $\\mathbb {R}^{48 \\times 48 \\times 64}$.", "The max-pooling block generates a feature map $\\mathcal {P} \\in \\mathbb {R}^{24 \\times 24 \\times 64}$ that is flattened into a long vector $v \\in \\mathbb {R}^{36864}$.", "This vector is passed to two dense layers (each having 32 hidden units).", "The predicted endotoxin concentration $\\hat{y}$ is the output from the dense layer activated by a linear function.", "Reproduced from with permission from the Royal Society of Chemistry.", "The 3-channel data object was fed to a CNN, which we call EndoNet.", "EndoNet has an architecture of Conv(64)-MaxPool-Flatten-Dense(32)-Dense(32)-Dense(1).", "The output block generates a scalar prediction $\\hat{y}$ (corresponding to the endotoxin concentration).", "In other words, the CNN seeks to predict the endotoxin concentration from the input flow cytometry fields.", "The architecture of EndoNet is shown in Figure REF .", "The regression results for EndoNet are presented in Figure REF .", "EndoNet extracts pattern information within and between each channel of the input image.", "Capturing differences between channels provides context for the CNN and has the effect of highlighting the domains in the scatter field that contain the most information.", "To validate this hypothesis, we conducted predictions
for EndoNet using the 1-channel representation as input (we ignored the positive and negative channels).", "For the 1-channel representation, we obtained an RMSE of 0.97; for the 3-channel representation we obtained an RMSE of 0.78.", "It is particularly remarkable that EndoNet can accurately predict concentrations that span eight orders of magnitude.", "Figure: Predicted and actual concentrations at different concentrations using 1-channel and 3-channel representations.", "Reproduced from with permission from the Royal Society of Chemistry." ], [ "Detecting Contaminants in Air", "This case study is based on work presented in [84].", "LCs also provide a versatile platform for sensing of air contaminants [80], [61] and for sensing of heat transfer and shear stress (mechanical sensing) [34].", "In the context of air chemical sensing, an LC sensor can be prepared by supporting a thin LC film on a chemically functionalized surface.", "The molecules within the LC film (the mesogen) bind to the surface and assume a homeotropic orientation that provides an initial optical signal.", "Subsequent exposure of the LC film to an analyte leads to diffusive transport of the analyte through the LC phase and displacement of the mesogen at the surface, triggering rich space-time optical responses (see Figure REF ).", "Figure: Working design principles of a liquid crystal chemical sensor.", "A functionalized LC film selectively responds to the presence of an air contaminant and triggers an optical response.", "Reproduced from with permission from the American Chemical Society.", "A challenge in the development of LC sensors is their sensitivity to interfering species.", "For instance, LC sensors designed for detection of dimethyl methylphosphonate (DMMP), CH3PO(OCH3)2, might exhibit similar optical responses when exposed to humid nitrogen [94].", "These issues are illustrated in the experimental responses shown in Figure REF .", "Our goal here is to determine whether or not one can unravel hidden patterns in the optical responses that can help discern between chemical species.", "Figure: Optical responses of liquid crystals under gaseous $\\textrm {N}_2$-water (30% relative humidity) and $\\textrm {N}_2$-DMMP (10 ppm) environments.", "LCs were deposited into microwells with a diameter of 3 mm to enable high-throughput data collection.", "Reproduced from with permission from the American Chemical Society.", "We designed a CNN to automatically detect contaminant presence from optical LC images (micrographs).", "In summary, the approach is based on transfer learning; specifically, we used VGG16, a pre-trained CNN, to extract features from the optical responses, and these features were fed into a simple support vector machine (SVM) that predicts if the contaminant is present.", "The results indicate that features extracted from the first and second convolutional layers of VGG16 allow for a perfect classification accuracy.", "The results also indicate that complex spatial color patterns are developed within seconds in the LC response, which leads us to hypothesize that fluctuations in LC tilt orientation (angle) play a key role in sensor selectivity.", "Moreover, the analysis reveals that color is a key source of information that the CNN is looking for.", "The dataset analyzed in this study was obtained from six videos that show the response of LCs to a gaseous stream of $\\textrm {N}_2$ containing 10 ppm DMMP and six videos that show the response of LCs to a gaseous stream of $\\textrm {N}_2$ containing
30% relative humidity (both at room temperature).", "Each video tracks the dynamic evolution of multiple independent microwells (the total number of microwells recorded was 391).", "We split each frame into several images, each containing a single microwell at a specific time.", "The total number of microwell snapshots generated was 75,081.", "Figure: Schematic of pre-trained VGG16 architecture used for feature extraction of LC micrographs.", "Reproduced from with permission from the American Chemical Society.", "Figure: Schematic of ML framework including feature extraction and classification based on SVM.", "Reproduced from with permission from the American Chemical Society.", "The trained VGG16 network is freely available through the Keras software and is what is used during feature extraction [25].", "This highlights the fact that highly sophisticated CNNs are openly available nowadays for conducting feature extraction.", "A simplified representation of the VGG16 architecture is shown in Figure REF .", "The VGG16 network has been pre-trained to classify highly complex images from the internet (not related to our particular application, such as cats and dogs) and the deepest layers have been carefully tuned to differentiate such images.", "The early layers of the network are the most general and are easier to interpret; accordingly, we use the outputs of the first and second convolutional blocks to inform features for linear SVM (LSVM) classification.", "A visual representation of this process for the first and second convolutional blocks is provided in Figure REF .", "The framework using VGG16 features and LSVM was able to classify water and DMMP micrographs with 100% accuracy.", "This result was achieved when using all of the 128 features of the second convolutional layer.", "An accuracy of 98% is obtained when we use the 64 features of the first convolutional layer.", "These results indicate that LC features that develop early in the sensor response are highly informative and sufficient to discriminate among chemical environments."
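A hedged sketch of this transfer-learning pipeline is given below: a pre-trained VGG16 supplies the features of an early convolutional block, and a linear SVM is trained on top. The data here are random placeholders for the micrograph snapshots, and pooling each feature map to a single scalar is our simplification of the feature-refinement step used in the study.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Model
from sklearn.svm import LinearSVC

# Pre-trained VGG16 without its dense classification head; the output of the
# early layer "block1_conv2" (64 filters) is used as the feature extractor.
base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
extractor = Model(inputs=base.input,
                  outputs=base.get_layer("block1_conv2").output)

# Placeholder data standing in for LC micrographs and their binary labels
# (water vs. DMMP); real inputs would also go through VGG16's preprocess_input.
X = np.random.rand(20, 224, 224, 3)
labels = np.array([0] * 10 + [1] * 10)

maps = extractor.predict(X)            # (20, 224, 224, 64) feature maps
features = maps.mean(axis=(1, 2))      # spatially pool each map -> (20, 64)

clf = LinearSVC().fit(features, labels)   # LSVM on the extracted features
print(clf.score(features, labels))
```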
], [ "Molecule Design", "ML techniques have been recently used for property predictions of molecules such as water solubility [33], [52], [3], toxicity [57], [1], [38], and lipophilicity [79], [87].", "A fundamental step in the use of ML algorithms is the pre-definition or pre-calculation of molecular descriptors [58], [73], [43], [59]; such descriptors are used as input data to develop quantitative structure-property relationship models [48].", "Among the various ML techniques, graph CNNs (GNNs) [17], [26] have gained special popularity because they can directly incorporate molecular graph representations, thus retaining key structural information on molecules while avoiding the need to pre-calculate/pre-define molecular descriptors for which density functional theory (DFT) or molecular dynamics (MD) simulations may be required.", "Overall, GNNs have shown strong predictive power in molecular property predictions, and have great potential to be applied to other fields for more accurate model development as well as enabling high-throughput screening of materials for manufacturing.", "We show how to use GNNs for predicting critical micelle concentrations (CMCs) of surfactants.", "This study is based on the work presented in [67].", "When dissolved in water, surfactant monomers will undergo a cooperative aggregation process, called self-assembly, to form spherical micelles or related aggregate structures [35].", "The formation of micelles in a solution can induce significant changes in key solution properties including the electrical conductivity, surface tension, light scattering, and reactivity [74], [35].", "Consequently, predicting conditions under which surfactants self-assemble is important for surfactant selection and design [13].", "A critical parameter that characterizes surfactant self-assembly behavior is the CMC, which is the minimum surfactant concentration at which self-assembly occurs.", "Conventionally, the CMCs are obtained experimentally by tensiometry, but it is laborious and expensive [23], [78], [20].", "Here, we show that GNNs can predict CMC values directly from the molecular graph of a surfactant monomer.", "We gathered experimental CMC data measured at room temperature in water for 202 surfactants [74], [23], [22], [60] covering all surfactant classes.", "The proposed GNN architecture consists of two graph convolutional layers, one average pooling layer, two fully connected hidden layers, and one final output layer.", "A graph convolution layer updates each atom by aggregating the features of itself and its neighbors and maps the updated features into a hidden layer with 256 hidden features.", "The GCN model contains a total of 216,833 trainable parameters.", "Figure: Overview of surfactant molecular structures and self-assembly process in micelles.", "(a–d) Sample structures of four classes of surfactants.", "Surfactants are categorized by the properties of their head groups as nonionic (a), cationic (b), anionic (c), or zwitterionic (d).", "(e) Surfactant monomers aggregate into spherical micelles in water with hydrophilic head groups facing toward the solvent and hydrophobic tail groups sequestered inside the micelle core.", "(f) Snapshots of a surfactant micelle from a representative MD simulation with water shown in blue.", "Reproduced from with permission from the American Chemical Society.Figure: GNN predictions for all classes of surfactants.", "Left: low-dimensional distribution of surfactant fingerprints using t-SNE.", "The test samples (red crosses) are 
widespread, and most of the designed surfactants (green points) fall outside of the clusters of the existing data set.", "Right: parity plot between the predicted and experimental log CMC values (training data in blue and test data in red).", "The best-fit slope of the test data is 0.91 ($R^2 = 0.92$), and the test RMSE is 0.30.", "Molecular structures are shown for the selected extreme points.", "Structure (a) is an anionic surfactant (minor outlier) with a high log CMC value.", "Structure (b) is a cationic surfactant (minor outlier) with a high log CMC value.", "Structure (c) is a zwitterionic surfactant (major outlier).", "Structure (d) is a nonionic surfactant with a low log CMC value.", "Reproduced from with permission from the American Chemical Society.", "As illustrated in Figure REF , the cross-validation (CV) RMSE on all classes of surfactants has a mean value of 0.39 with no significant outliers.", "We tested the model performance on a test data set, which contains samples from each surfactant class.", "We verified the distribution of the test samples using t-distributed stochastic neighbor embedding (t-SNE) [88], a nonlinear dimension reduction technique to visualize high-dimensional data, on the molecular fingerprints [73] of the surfactants.", "Figure REF illustrates that the test samples are widespread, indicating the inclusion of dissimilar surfactant structures and classes in the test data set, which cover a much more diverse spectrum of surfactants than the data sets used in previous QSPR models.", "Figure REF also shows a parity plot between the experimental and predicted log CMC values for the training and testing sets.", "Cationic surfactants have the lowest test RMSE (0.07) followed by nonionic (0.18) and anionic (0.32) surfactants, and the model performs worst for zwitterionic surfactants (0.76).", "The parity plot also suggests a slightly lower accuracy for surfactants with relatively large log CMC values (>4.5).", "The overall predictability of the GCN model outperforms that of a prior QSPR model reported in the literature [23].", "The differences in the molecular structures found in our dataset highlight the wide variety of surfactants that the GCN model can capture."
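The sketch below (referenced earlier in this subsection) shows one normalized graph convolution layer followed by average pooling over nodes. The layer sizes and the exact aggregation rule used in [67] may differ, so this should be read as an illustration of the mechanism rather than a reproduction of the model.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph convolution: each node aggregates the features of itself and
    its neighbors (via the self-loop-augmented, degree-normalized adjacency)
    and maps them through a trainable weight matrix W, followed by ReLU."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric degree normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy molecule: 3 atoms in a chain, 2 features per atom (e.g., identity, charge).
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)        # adjacency matrix (topology)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, -1.0]])                   # node feature matrix
W = np.random.default_rng(1).normal(size=(2, 4))   # 2 -> 4 hidden features

H1 = gcn_layer(A, H, W)                       # updated node features, shape (3, 4)
graph_embedding = H1.mean(axis=0)             # average pooling over nodes
print(graph_embedding.shape)                  # (4,) -> dense layers predict log CMC
```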
], [ "Decoding of Spectra", "We now show how to use CNNs to decode real-time spectra; specifically, we show how to characterize plastic components using real-time ATR-FTIR spectra.", "This case study also aims to illustrate how innovative (nonintuitive) data representations can be used to obtain more information from spectra.", "This study is based on the work reported in [40].", "Plastics are used in a wide range of applications such as food packaging, construction, transportation, health care, and electronics.", "Notably, only 20% of all plastics produced were recycled [71]; this recycling rate is notably low compared to that of other materials (e.g., aluminum has a recycling rate of nearly 100%).", "A key obstacle that arises here is the characterization of plastic components in mixed-platic-waste (MPW) streams.", "ATR-FTIR (attenuated total reflection-Fourier transform infrared spectroscopy) is an instrumentation technique that can be used for analyzing plastic components found in MPW in real-time; as such, one can envision the development of fast, online ML techniques that can analyze ATR-FTIR spectra to characterize MPW streams.", "Here, we show that CNNs can characterize plastic components of MPW by decoding ATR-FTIR spectra.", "Figure: Normalized infrared spectral intensities of various plastic materials.", "Each spectrum is a vector of length 4150.", "The resulting spectra contain significant noise and systematic errors.", "Reproduced from with permission from Elsevier.Experimental data was obtained by preparing small sheets of plastics of different shapes and used ATR-FTIR to scan sheets for 10 different types; this data collection approach mimics how rigid waste plastics are found in online processing of MPW streams.", "The spectra collected can be represented as 1D vectors and analyzed by using 1D CNNs [12].", "The 1D CNN extracts features of a spectrum by convolving it with different filters; however, the 1D representation might fail to capture correlations across frequencies.", "Gramian Angular fields (GAF) have been recently used to encode 1D series data into matrices that capture correlation structures and that are processed using 2D CNNs; this data transformation approach has been shown to improve classification accuracy for time series data [90].", "A GAF represents vectors in a polar coordinate system and converts these angles into symmetric matrices using various operations.", "There are a couple of GAF types: Gramian Angular Summation fields (GASF) and Gramian Angular Difference fields (GADF).", "The conversion of spectra to GASF and GADF matrices is illustrated in Figure REF ; here, the GAF matrices are represented as grayscale images.", "Figure: Conversion from 1D signal to GASF and GADF matrices.", "The 1D signal is first mapped to the polar coordinate system and finally converted to GASF and GADF matrices.", "Encoding the 1D signal into GAF matrices captures the relationship between the signal intensity at different wavenumbers.", "Reproduced from with permission from Elsevier.Figure: Architectures of (a) PlasticNet (1D) and (b) PlasticNet (2D).", "PlasticNet (1D) inputs a vector of 4150 and outputs the predicted plastic type.", "It contains 4 1D convolutional layers (each has 64 filters of dim 3), 1D max-pooling layers (each has a pooling window size of 2), a flatten layer, and 3 fully-connected layers (each has 64 units and a dropout ratio of 0.2).", "The activation functions between the layers are ReLUs.", "The final output activation function is softmax.", 
"PlasticNet (2D) inputs a GASF and a GADF matrix.", "The input size varies from 50×50×250 \\times 50 \\times 2 to 250×250×2250 \\times 250 \\times 2.", "It has four 2D convolutional layers (each has 64 filters of 3×33 \\times 3), two 2D max-pooling layers (each has a pooling window size of 2×22 \\times 2).", "The flatten, fully-connected layers and activation function setups are the same as the ones of PlasticNet (1D).", "Reproduced from with permission from Elsevier.The comparison of architectures of PlasticNet (1D) and (2D) is shown in Figure REF .", "Classification results for PlasticNet (1D) and (2D) are presented in Figure REF .", "PlasticNet (2D) has a higher accuracy when the input size is larger than 100×100, compared to PlasticNet (1D) on raw IR spectra (77.7%).", "Specifically, PlasticNet (2D) increases the accuracy of the PlasticNet (1D) by 12.4%; this confirms that correlation information in spectra is important for classification.", "To validate the effectiveness of the proposed CNN models, we compared them with four commonly used ML classifiers, including Radial Basis Function (RBF) based Support Vector Machine (RBF-SVM), Random Forest (RF), k-Nearest Neighbors (kNN), Gaussian Process Classifier (GPC).", "The accuracy of PlasticNet (2D) is slightly higher ( 1%) than that of RBF-SVM when the input size is larger than $200 \\times 200$ .", "This indicates that RBF-SVM is comparable to CNN-based methods; however, SVMs offer limited flexibility to capture different representations for IR data.", "The accuracy of all methods saturates at 87%, which suggests that the dataset itself contains significant errors that neither the CNN-based nor the SVM methods can explain.", "In subsequent work, we have shown that decoding fast MIR (mid-infrared spectroscopy) spectra using CNNs can achieve a nearly perfect plastic classification accuracy [96].", "Figure: Comparison of the accuracy of CNN-based methods and other ML algorithms.", "PlasticNet (2D) with an input size of 200×200×2200 \\times 200 \\times 2 has the highest accuracy of 87.29%.", "SVM with RBF kernels has a comparable accuracy of 86.14%.", "The accuracy of PlasticNet (2D) is higher than that of PlasticNet (1D), indicating that the conversion from the original 1D signal to 2D GAF matrices captures more information.", "The accuracy of PlasticNet (2D) increases as the input matrix increases, indicating that a larger input matrix contains more information.", "Reproduced from with permission from Elsevier.Figure: Confusion matrix for PlasticNet (2D) with an input size 200×200×2.", "The overall accuracy is 87.3%.", "Each column represents a true plastic species, and each row represents a model predicted plastic species.", "The entries along the diagonal are where the plastic species are correctly classified.", "Many diagonal entries are close to one, indicating that the PlasticNet (2D) has excellent classification accuracy.", "However, some plastic types cannot be classified with high accuracy (e.g., PC and AC).", "Reproduced from with permission from Elsevier." 
], [ "CNNs for Multivariate Process Monitoring", "Multivariate process monitoring is a common task performed in manufacturing to identify abnormal/faulty behavior.", "Here, the idea is to collect multivariate time series (for different process variables) under different modes of operation (each mode is induced by a specific fault).", "The goal is to identify features (signatures) in the time series to determine if the process is a given mode.", "The case study presented here uses benchmark data for the Tennessee-Eastman (TE) process [16].", "This study was based on the work presented in [91], [41].", "Figure: Representation of a multivariate time series as a 2D image.", "(a) 52 process variables collected 60 times over a 3-hour period.", "The variables are normalized with a zero mean and a unit variance.", "(b) An 52×60{52 \\times 60} matrix (visualized as an image).", "The red line in (a) and red row in (b) indicate the same data.", "Each row of the image represents one of the time series in (a).", "The fault number is 7 for (a) and (b), 9 for (c), and 15 for (d).", "We can see that (c) and (d) are visually similar but belong to different fault groups.", "Reproduced from with permission from Wiley publishing.The TE process units include a reactor, condenser, compressor, separator and stripper.", "The TE process produces two products (G and H) and a byproduct (F) from four reactants (A, C, D and E).", "Component B is an inert compound.", "In total, the TE process contains a total of 52 measured variables; 41 of them are process variables and 11 are manipulated variables.", "This process exhibits 20 different types of faults related to changes in feed temperatures, compositions, reaction kinetics, and so on.", "Figure: Confusion matrix from the CNN prediction.", "Each column represents a true fault type, and each row represents a CNN predicted fault type.", "The entries along the diagonal are where the fault types are correctly classified.", "Most of the diagonal entries are close to 1, indicating that the CNN has good classification accuracy.", "Reproduced from with permission from Wiley publishing.The TE process data is obtained from Harvard Dataverse [70].", "The 52 process variables are sampled every 3 minutes; the transformation of multivariate signal data to matrices is shown in Figure REF .", "We construct an input data sample by using 52 signal vectors (each vector contains 60 time points) that are combined into ${52 \\times 60}$ matrix ($V$ ).", "We have a total of 6947 input samples and the training:validation:testing data ratio used is 11:4:5.", "Figure REF is the confusion matrix obtained for the CNN; the overall classification accuracy was 0.7561.", "With the exception of faults 3, 4, 5, 9, and 15, most faults can be identified accurately.", "Fault 3, 9 and 15 are especially difficult to detect because the mean, variance, and higher-order variances do not vary significantly." 
], [ "CNNs for Image-Based Feedback Control", "In this study, we consider techniques for incorporating CNN sensors (CNNs that map image signals to controllable state signals) in feedback control systems.", "We place a particular focus on the need for real-time novelty/anomaly detection approaches that provide robustness in effectively mitigating the consequences of visual disturbances.", "The concepts discussed here are a summary of the work presented in [66].", "This work in turn draws upon existing emergent applications of image data in control, as those presented in [89], [54], [72], [51].", "Figure: Feedback control system that incorporates a CNN sensor to convert image data into a controllable measurement signal that can be used for feedback control.", "Reproduced from with permission from Elsevier.Figure: A summary of the SAFE-OCC novelty detection framework that operates on a feature map to produce a novelty signal.", "Reproduced from with permission from Elsevier.We consider leveraging a CNN sensor to autonomously map image data to a controllable state variable such that we achieve closed-loop control.", "With this, we remove the human operator from the control loop in the sense that they will no longer need to actively interpret and act upon visual process data.", "Such a control system is depicted in Figure REF .", "Here, the camera and the CNN work together to form a computer vision sensor that is able to measure states $y_\\text{vis}$ that otherwise would not be available using traditional process measurement devices.", "Hence, following this new paradigm we obtain a fully automatic control system that can exhibit improved setpoint tracking performance which promotes increased consistency and performance.", "The paradigm shift from an operator-centric control system to the CNN-aided system of Figure REF introduces a significant vulnerability: poor prediction accuracy of $y_\\text{vis}$ when the image $V$ is novel relative to the training data used to prepare the CNN (i.e., the CNN sensor makes a highly inaccurate prediction because it is extrapolating).", "Injecting erroneous measurement data into a feedback control architecture can have severe consequences.", "Thus, we require an appropriate novelty detection approach (depicted in Figure REF ) to automatically recognize in real-time when the visual data $V$ is novel relative to the CNN sensor being used.", "Novelty detection denotes a set of unsupervised learning methods that differentiate between novel and normal data [76].", "A couple of paradigms are reconstruction models and one-class classification.", "One-Class Classification (OCC) denotes an area of methods that learn a single class of normal instances from unlabeled training data (typically assumed to be comprised of normal instances).", "These then identify novel data instances by determining if they lie outside the learned class.", "Here, we discuss the Sensor Activated Feature Extraction Once-Class Classification (SAFE-OCC) novelty detection framework [66].", "The SAFE-OCC framework leverages the native feature space of a CNN sensor to achieve novelty detection that is complimentary.", "The SAFE-OCC novelty detector involves three steps: feature extraction via the feature maps of a CNN sensor, feature refinement, and novelty detection via OCC (see Figure REF ).", "Figure: Representative snapshots from simulations used in the cart-pole case study.", "Reproduced from with permission from Elsevier.Figure: The control response trajectories of Simulations 2 (foggy 
disturbance) and 4 (blockage disturbance).", "The vertical dotted line at time-step 150 indicates when the fog disturbance is introduced in Simulation 2 and the blockage disturbance is introduced in Simulation 4.", "Effective control is maintained in Simulation 2 because these are normal images for the CNN.", "Ineffective control, however, is obtained in Simulation 4 because the images are novel to the CNN.", "The SAFE-OCC framework detects these novel images to prevent this type of behavior.", "Reproduced from with permission from Elsevier.", "We illustrate the application of SAFE-OCC to control the CartPole-v1 environment from OpenAI-Gym [6], which corresponds to the classic cart-pole control problem introduced in [2].", "Here, we seek to balance a pendulum above a cart which we can move either right or left at a fixed rate.", "We consider the angle of the pendulum (measured in degrees relative to vertical alignment) as the state variable $y$ (ignoring the position of the cart), and we take the cart movement direction to be the control variable $z \\in \\lbrace 0, 1\\rbrace \\subset \\mathbb {Z}$ (where 0 is left and 1 is right).", "Thus, we have a single-input single-output (SISO) process to control.", "With this simplification, we implement a PID controller with a derivative filter.", "We conducted four simulations: a base case that uses unperturbed images and three others that invoke a particular simulated visual disturbance after 150 time-steps.", "The three disturbance types are produced via ImgAug using the Fog, Spatter, and Cutout methods which correspond to fog, splattering, and square blockages, respectively.", "Figure REF shows representative images of these simulations.", "Figure REF shows the responses exhibited in Simulations 4 (blockage) and 2 (fog).", "In Figure REF , we observe that effective control in tracking the set-point is achieved in the base case and in Simulation 2.", "This behavior can be attributed to the CNN sensor being trained with clear and fogged images, which means that its predictions $\\hat{y}$ incur a low error relative to $y$ as shown in Figure REF .", "Figure REF shows the response in the fourth simulation; here, the blockage disturbance is novel relative to the CNN sensor and thus significant prediction error is incurred once the sensor is subjected to the disturbance.", "SAFE-OCC accurately identifies the novel images once they are injected into the CNN sensor and can avoid catastrophic control failure."
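The sketch below outlines the structure of such a loop; `cnn_sensor`, `novelty_detector`, and `env` are hypothetical stand-ins (not the OpenAI-Gym API or the actual SAFE-OCC implementation), and the PID gains are illustrative.

```python
# Hedged sketch of the control loop: a CNN sensor maps the image to the
# pendulum angle and a novelty detector gates the feedback action.

def pid_step(error, state, kp=1.0, ki=0.01, kd=0.1):
    # Basic PID update (a derivative filter would smooth the derivative term).
    state["integral"] += error
    derivative = error - state["prev_error"]
    state["prev_error"] = error
    return kp * error + ki * state["integral"] + kd * derivative

def control_loop(env, cnn_sensor, novelty_detector, setpoint=0.0, horizon=300):
    # env, cnn_sensor, and novelty_detector are hypothetical stand-ins for the
    # simulated process, the trained CNN sensor, and the SAFE-OCC detector.
    state = {"integral": 0.0, "prev_error": 0.0}
    image = env.reset()
    for t in range(horizon):
        y_vis = cnn_sensor(image)          # CNN sensor: image -> angle estimate
        if novelty_detector(image):        # SAFE-OCC flags a novel image
            break                          # raise an alarm instead of acting
        u = pid_step(setpoint - y_vis, state)
        z = 1 if u > 0 else 0              # discrete cart action: right/left
        image = env.step(z)
    return t
```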
], [ "Conclusion", "This chapter has reviewed the use of convolutional neural networks (CNNs) for extracting information from complex data sources that are commonly encountered in manufacturing.", "We have used a set of selected case studies to demonstrate how these machine learning (ML) techniques can be used to conduct a variety of tasks such as classification, anomaly detection, and prediction of properties.", "The discussion outlined in this chapter is heavily biased by research conducted by our research group and is not meant to provide an exhaustive review.", "The field of ML learning is quickly evolving, with many different applications and techniques being developed.", "For instance, transformer models are nowadays being developed for computer vision and provide an alternative to address limitations of CNNs [9].", "Moreover, CNNs are actively being used to tackle a wide range of challenges arising in catalysis, materials, healthcare, and biology [28], [8], [10].", "An area that is still in it infancy is the use of computer vision techniques to enable feedback control; specifically, closing the loop between image data and feedback control is a technical challenge.", "This is because computer vision signals are inherently infinite-dimensional objects (e.g., fields) that cannot be controlled directly by control systems.", "As such, it is necessary to extract key descriptors that effectively summarize these objects and it is necessary to develop techniques for building dynamical models directly from computer vision signals.", "Techniques such as autoencoders, recurrent neural networks, and dynamic mode decomposition provide some alternatives [51], [55].", "In this context, it is also important to re-think how to design control architectures that can properly act on image data, given that most manufacturing systems have limited actuation.", "This is particularly relevant in 3D-printing and additive manufacturing applications, in which it is necessary to achieve high level of precision in shaping 3D objects.", "Moreover, it is important to think about how to leverage these image data sources in the development of physical models [46].", "The use of hyperspectral imaging in manufacturing is also an exciting direction [5]; this type of data can reveal properties of systems, materials, and products that provide rich information about quality and health.", "A fundamental challenge in dealing with hyperspectral imaging, however, is it inherit high dimensionality.", "Specifically, hyperspectral images contain many color channels and are challenging to process using CNNs.", "Another exciting area of current research is speech recognition [15]; one can envision a future in which instructions are provided to an automation system in the form of voice or in which the automation system summarizes the behavior of the system in narrative form or explains the reasoning behind a decision.", "Along these lines, the use of audio signals (e.g., as those obtained from vibration sensors) are currently being investigated to detect faults [77]." ], [ "Acknowledgments", "We acknowledge funding from the U.S. National Science Foundation under BIGDATA grant IIS-1837812 and support from the members of the Texas-Wisconsin-California Control Consortium (TWCCC)." ] ]
2210.07848
[ [ "Federated Best Arm Identification with Heterogeneous Clients" ], [ "Abstract We study best arm identification in a federated multi-armed bandit setting with a central server and multiple clients, when each client has access to a {\\em subset} of arms and each arm yields independent Gaussian observations.", "The {\\em reward} from an arm at any given time is defined as the average of the observations generated at this time across all the clients that have access to the arm.", "The end goal is to identify the best arm (the arm with the largest mean reward) of each client with the least expected stopping time, subject to an upper bound on the error probability (i.e., the {\\em fixed-confidence regime}).", "We provide a lower bound on the growth rate of the expected time to find the best arm of each client.", "Furthermore, we show that for any algorithm whose upper bound on the expected time to find the best arms matches with the lower bound up to a multiplicative constant, the ratio of any two consecutive communication time instants must be bounded, a result that is of independent interest.", "We then provide the first-known lower bound on the expected number of {\\em communication rounds} required to find the best arms.", "We propose a novel algorithm based on the well-known {\\em Track-and-Stop} strategy that communicates only at exponential time instants, and derive asymptotic upper bounds on its expected time to find the best arms and the expected number of communication rounds, where the asymptotics is one of vanishing error probabilities." ], [ "Introduction", "The problem of best arm identification [4], [12] deals with finding the best arm in a multi-armed bandit as quickly as possible, and falls under the class of optimal stopping problems in decision theory.", "This problem has been studied in the literature under two complementary regimes: (a) the fixed-confidence regime in which the goal is to minimise the expected time (number of samples) required to find the best arm subject to an upper bound on the error probability [4], [8], and (b) the fixed-budget regime in which the goal is to minimise the error probability subject to an upper bound on the number of samples [1], [2].", "Best arm identification in the fixed-confidence regime forms the main subject of this paper.", "Consider a federated learning setup [13], [9] with a central server and $M$ clients in which each client has access to a subset of arms in a $K$ -armed bandit (heterogeneous clients).", "We assume that each arm generates independent unit-variance Gaussian observations, and that the mean of the observations from an arm that can be accessed by one or more clients can be distinct across the clients.", "Defining the reward from an arm as the average of the observations of the arms across all the clients that have access to the arm, the goal is to minimise the expected time required to find the best arm (the arm with the largest mean reward) of each client, subject to a prescribed upper bound on the error probability.", "Clearly, communication between the clients and the server is necessary for each client to determine its best arm.", "The following trade-off is glaringly evident: the smaller the prescribed error probability, the larger the expected time required to find the best arms.", "Our objective is to obtain lower and upper bounds on the growth rate of the expected time to find the best arms in the asymptotic limit of vanishing error probabilities.", "Table: Availability of vaccines in selected countries 
where ✓ and ✗ mean authorised and not yet authorised, respectively." ], [ "Motivation: Accessibility to Vaccines ", "Our problem setup (in particular, our definition of best arm) is inspired by the following example concerning vaccines for COVID-19.", "The COVID-19 pandemic and the numerous ways it has adversely affected people's lives over the past two years have led countries to invest efforts into developing new vaccines to combat the pandemic.", "At present, the World Health Organisation is known to have granted full or emergency authorisations to 34 vaccines.", "Of these, 11 have a full or emergency authorisation in only one country, 12 in ten or fewer countries, and 11 in more than ten countries; this data and that in Table REF are obtained from https://covid19.trackvaccines.org/.", "See Table REF for a subset of this data.", "Because each country has access to only a subset of vaccines, suppose that the reward of administering a vaccine to a group of individuals in a country is measured by the collective recovery rate of the individuals in the group.", "It is then natural for each country to evaluate, based on the recovery responses of distinct groups of individuals in the country to distinct vaccines, which is the best vaccine to administer to the rest of the country's population.", "Typically, when each group of individuals participating in the experiment is small relative to the country's population, the sample mean of the rewards from each group is likely to be inaccurate.", "However, the average of the sample means computed across one or more groups of individuals (perhaps located in different countries) who are administered the same vaccine is likely to be more accurate and hence a more reliable measure of the efficacy of the vaccine.", "Our definition of best arm in this paper dovetails neatly with the latter, more reliable measure of efficacy, with arms corresponding to vaccines and clients corresponding to countries.", "The availability of only a subset of vaccines in each country thus translates to each client having access to a subset of arms."
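To make the averaging of per-country efficacy estimates concrete, the following is a minimal simulation sketch of the reward model described above (the access sets, means, and names are illustrative assumptions, not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy instance: K = 3 vaccines (arms), M = 3 countries (clients),
# each country having access to only a subset of the vaccines.
K, M = 3, 3
S = [{0, 1}, {1, 2}, {0, 1, 2}]                  # S_m: accessible arms
mu = {(i, m): 0.5 * i + 0.1 * m                  # per-client means mu_{i,m}
      for m in range(M) for i in S[m]}

def reward(i):
    """One realisation of X_i(t): the average, over the clients with access
    to arm i, of independent unit-variance Gaussian observations."""
    obs = [rng.normal(mu[(i, m)], 1.0) for m in range(M) if i in S[m]]
    return float(np.mean(obs))

# Averaging across M_i clients cuts the variance to 1/M_i -- the "more
# reliable measure of efficacy" motivating the definition of best arm.
samples = np.array([reward(1) for _ in range(20000)])
M_1 = sum(1 in S[m] for m in range(M))
print(f"empirical variance {samples.var():.3f} vs 1/M_i = {1 / M_1:.3f}")
```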
], [ "Contributions", "We now highlight the main contributions of this paper.", "We derive a problem instance-specific lower bound on the expected time required to find the best arm of each client.", "As in prior works on best arm identification [5], [15], we show that given an error probability threshold $\\delta \\in (0,1)$ , the lower bound scales as $\\Omega (\\log (1/\\delta ))$ (all logarithms are natural logarithms, unless the base of the logarithm is specified explicitly otherwise).", "We characterise the instance-dependent constant multiplying $\\log (1/\\delta )$ precisely.", "This constant, we show, is the solution to a max-inf optimisation problem in which the outer `max' is over all probability distributions on the arms and the inner `inf' is over the set of alternative problem instances, and is a measure of the “hardness” of the instance.", "The max-inf optimisation in the instance-dependent constant is seemingly hard to solve analytically; the key difficulty here is that the set of alternative problem instances in the inner inf does not admit a closed-form expression and the accessible arm set of each client is heterogeneous.", "This is in contrast to prior works (e.g., [5]) where the analogous max-inf problems can be solved analytically thanks to simple closed-form expressions for the set of alternative instances.", "Notwithstanding this, we recast the inf over the (uncountably infinite) set of alternative problem instances as a min over the (finite) set of arms, and demonstrate that the max-min optimisation resulting from the latter can be solved analytically and differs from the true max-inf by at most a factor of 2.", "For any algorithm whose upper bound on the expected time to find the best arms of the clients matches the lower bound up to a multiplicative constant (an almost-optimal algorithm), we show that the ratio of any two consecutive communication instants must be bounded, a result that is of independent interest.", "That is, in order to achieve order-wise optimality in the expected time to find the best arms, an algorithm may communicate at most exponentially sparsely, i.e., at communication time instants of the form $t=\\lceil (1+\\lambda )^r \\rceil $ , $r\\in \\mathbb {N}$ , for some $\\lambda >0$ .", "Using the preceding result, we show that given any error probability threshold $\\delta $ , there exists a sequence of problem instances of increasing levels of hardness on which the expected number of communication rounds required to find the best arms of the clients correctly with probability at least $1-\\delta $ grows as $\\Omega (\\log \\log (1/\\delta ))$ for any algorithm with a bounded ratio between consecutive communication time instants.", "To the best of our knowledge, this is the first-of-its kind result in the literature.", "We design a Track-and-Stop-based algorithm, called Heterogeneous Track-and-Stop and abbreviated as $\\textsc {Het}-\\textsc {TS}(\\lambda )$ , that communicates only at exponential time instants of the form $t=\\lceil (1+\\lambda )^r \\rceil $ , $r\\in \\mathbb {N}$ , for an input parameter $\\lambda >0$ .", "We show that given any $\\delta \\in (0,1)$ , the $\\textsc {Het}-\\textsc {TS}(\\lambda )$ algorithm (a) identifies the best arms correctly with probability at least $1-\\delta $ , (b) is asymptotically almost-optimal up to the multiplicative constant $2\\, (1+\\lambda )$ , and (c) takes $O(\\log \\log (1/\\delta ))$ many communication rounds on the average.", "The parameter $\\lambda $ thus serves as a tuning parameter 
to trade off between the expected number of communication rounds and the expected time to find the best arms; a minimal code sketch of this exponentially sparse communication schedule follows the related-work discussion below." ], [ "Related Works", "Best arm identification in the fixed-confidence setting for independent and identically distributed (i.i.d.)", "observations has been studied in [5], [11].", "The recent works [15], [10] extend the results of [5] to the setting of Markov observations from the arms.", "Works on federated learning with problem setups similar to ours are those of [17] and [18], in which each client has access to all the arms.", "The notion of global mean defined in these works coincides with our definition of reward (i.e., the average of the arm means across the clients), and the goal is to design an algorithm that minimises the cumulative regret over a finite time horizon of $T$ time units.", "[14] study best arm identification in a federated learning setting in which each client has access to a subset of arms that is disjoint from the subsets of the other clients, and the best arm is the arm with the highest mean.", "Here, the best arm is also the best arm of some client.", "In contrast, we allow for non-empty intersections between the arm subsets of the clients, in which case the best arm of one client may not necessarily be the best arm of another.", "[16] study an optimal stopping variant of the problem in [17] in which the uplink from each client to the server entails a fixed cost of $C\\ge 0$ units, each client has access to all the arms, and the goal is to determine the arm with the largest mean at each client and also the arm with the highest global mean (as defined in [17]) with minimal total cost, defined as the sum of the number of arm selections and the total communication cost.", "When each client has access to all the arms, our problem is akin to finding the arm with the highest global mean."
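As promised above, here is a minimal sketch of the exponentially sparse communication schedule $b_r=\\lceil (1+\\lambda )^r \\rceil $ used later by $\\textsc {Het}-\\textsc {TS}(\\lambda )$ (the function name and horizon are illustrative):

```python
import math

def comm_instants(lmbda: float, horizon: int) -> list:
    """Communication instants b_r = ceil((1 + lmbda)^r), r = 1, 2, ...,
    truncated at a finite horizon; lmbda > 0 tunes the sparsity."""
    instants, r = [], 1
    while (b := math.ceil((1 + lmbda) ** r)) <= horizon:
        instants.append(b)
        r += 1
    return instants

b = comm_instants(lmbda=0.5, horizon=100_000)
print(len(b), b[:8])   # only O(log T) rounds up to horizon T
# The ratio of consecutive instants stays bounded, in line with the
# bounded-ratio requirement for almost-optimal algorithms shown later.
print(max(b[i + 1] / b[i] for i in range(len(b) - 1)))
```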
], [ "Notations and Preliminaries", "For $n\\in \\mathbb {N}\\lbrace 1, 2, \\ldots \\rbrace $ , we let $[n]\\lbrace 1,\\dots ,n\\rbrace $ .", "We consider a federated multi-armed bandit with $K$ arms, a central server, and $M$ clients, in which each client has access to only a subset of arms.", "More specifically, we assume that client $m\\in [M]$ can pull the arms belonging to a subset $S_m\\subseteq [K]$ .", "Whereas in most of the works on multi-armed bandits, it is generally the case that $S_m=[K]$ for all $m\\in [M]$ , in our work, we allow for the case when $S_m$ is a strict subset of $[K]$ for some $m\\in [M]$ .", "Without loss of generality, we assume that $\\vert S_m \\vert \\ge 2$ .", "Pulling arm $i\\in S_m$ at time $t\\in \\mathbb {N}$ generates the observation $X_{i,m}(t) \\sim \\mathcal {N}(\\mu _{i,m},1)$ , where $\\mathcal {N}(\\mu _{i,m},1)$ denotes the Gaussian distribution with mean $\\mu _{i,m}\\in \\mathbb {R}$ and unit variance.", "A problem instance $v=(\\lbrace \\mu _{i,1}\\rbrace _{i\\in S_1}, \\lbrace \\mu _{i,2}\\rbrace _{i\\in S_2},\\dots , \\lbrace \\mu _{i,M}\\rbrace _{i\\in S_M})$ is defined by the collection of the means of the arms in each client's set of accessible arms, and we let $v_{i,m}\\mathcal {N}(\\mu _{i,m},1)$ .", "For any $i\\in [K]$ , we define the reward $X_i(t)$ of arm $i$ as the average of the observations obtained at time $t$ from all the clients $m$ for which $i\\in S_m$ that have access to arm $i$ , i.e., $X_i(t) \\frac{1}{M_i} \\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } X_{i,m}(t) $ , where $M_i \\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }$ is the number of clients that have access to arm $i$ .", "We let $\\mu _i \\frac{1}{M_i} \\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\mu _{i,m}$ denote the mean reward of arm $i$ .", "Arm $i$ is said to be the best arm of client $m$ if it has the highest mean reward among all the arms in $S_m$ .", "We assume that each client has a single best arm, and we let $a_m^* := \\operatornamewithlimits{argmax}_{i\\in S_m} \\mu _i$ denote the best arm of client $m$ .", "We let $a^*(a_m^*)_{m\\in [M]}\\in S_1\\times S_2\\times \\ldots \\times S_M$ denote the tuple of best arms of the clients.", "More explicitly, we write $a^*(v)$ to denote the tuple of best arms under the problem instance $v$ , and let $\\mathcal {P}$ be the set of the all problem instances with a single best arm at each client.", "We assume that the clients and the central server are time-synchronised and that the clients communicate with the server at certain pre-defined time instants.", "Given $(M,K,\\lbrace S_m\\rbrace _{m=1}^M)$ , a problem instance $v$ , and a confidence level $\\delta \\in (0,1)$ , we wish to design an algorithm for finding the best arm of each client with (a) the fewest number of time steps and communication rounds, and (b) error probability less than $\\delta $ .", "By an algorithm, we mean a tuple $\\Pi =(\\Pi _{\\rm comm},\\Pi _{\\rm cli},\\Pi _{\\rm svr})$ of (a) a strategy for communication between the clients and the server, (b) a strategy for selection of arms at each client, and (c) a combined stopping and recommendation rule at the server.", "The communication strategy $\\Pi _{\\rm comm}$ consists of the following components: (a) $\\lbrace b_r\\rbrace _{r \\in \\mathbb {N}}$ : the time instants of communication, with $b_r \\in \\mathbb {N}$ and $b_r \\le b_{r+1}$ for all $ r \\in \\mathbb {N}$ , (b) $\\Sigma $ : the set of values transmitted from the server to each 
client, (c) $\\Phi $ : the set of values transmitted from each client to the server; this is assumed to be identical for all the clients, (d) $\\hbar _r:{\\Phi }^{Mr} \\rightarrow \\Sigma $ : a function deployed at the server, which aggregates the information transmitted from all the clients in the communication rounds $1,\\ldots ,r$ , and generates an output value to be transmitted to each client, and (e) $\\rho _r^m: (\\mathbb {R}\\times S_m) ^{b_r} \\rightarrow \\Phi $ : a function deployed at client $m\\in [M]$ , which aggregates the observations seen by client $m$ in the time instants $1,\\ldots , b_r$ from the arms in $S_m$ , and generates an output value to be transmitted to the server.", "The arms selection strategy $\\Pi _{\\rm cli}$ consists of component arm selection rules $\\pi _t^m: (\\mathbb {R}\\times S_m) ^{t} \\times \\Sigma \\rightarrow S_m$ , $m\\in [M]$ .", "Here, $\\pi _t^m$ takes as input the observations seen from the arms in $S_m$ pulled by client $m$ up to time $t$ and the information received from the server to decide which arm in $S_m$ to pull at time $t+1$ .", "Lastly, the stopping and recommendation strategy $\\Pi _{\\rm svr}$ at the server consists of the following component rules: (a) the stopping rule $\\Upsilon _r: \\Phi ^{Mr} \\rightarrow \\lbrace 0,1\\rbrace $ that decides whether the algorithm stops in the $r$ -th communication round, $r\\in \\mathbb {N}$ , and (b) the recommendation rule $\\Psi _r: \\Phi ^{Mr} \\rightarrow S_1 \\times S_2 \\times \\cdots \\times S_M$ to output the empirical best arm of each client if the algorithm stops in the $r$ th communication round.", "We let $\\hat{a}_{\\delta ,m}$ denote the empirical best arm of client $m$ output by the algorithm under confidence level $\\delta $ , and define $\\hat{a}_\\delta (\\hat{a}_{\\delta ,m})_{m\\in [M]}$ .", "We assume that all the functions defined above are Borel-measurable.", "Note that if an algorithm stops in the $r$ th communication round, then its stopping time $\\tau =b_r$ .", "Given $\\delta \\in (0,1)$ , we say that an algorithm $\\Pi $ is $\\delta $ -PAC if $\\mathbb {P}_v^{\\Pi }\\left(\\tau _\\delta < +\\infty \\right)=1$ and $\\mathbb {P}_v^{\\Pi }\\left( \\hat{a}_\\delta \\ne a^*(v) \\right) \\le \\delta $ for any problem instance $v\\in \\mathcal {P}$ ; here, $\\mathbb {P}_v^\\Pi (\\cdot )$ the probability measure induced by the algorithm $\\Pi $ and the problem instance $v$ .", "Writing $\\tau _\\delta (\\Pi )$ and $\\mathfrak {r}_\\delta (\\Pi )$ to denote respectively the stopping time and the associated number of communication rounds corresponding to the confidence level $\\delta $ under the algorithm $\\Pi $ , our interest is in the following pair of optimisation problems: $\\inf _{\\Pi \\text{ is }\\delta \\text{-PAC}}\\ \\mathbb {E}_v^\\Pi [\\tau _\\delta (\\Pi )], \\quad \\inf _{\\Pi \\text{ is }\\delta \\text{-PAC}}\\ \\mathbb {E}_v^\\Pi [\\mathfrak {r}_\\delta (\\Pi )].$ In (REF ), $\\mathbb {E}_v^\\Pi $ denotes expectation with respect to the measure $\\mathbb {P}_v^\\Pi $ .", "Prior works [11], [15] show that the first term in (REF ) grows as $\\Theta (\\log (1/\\delta ))$ as $\\delta \\rightarrow 0$ .", "We anticipate that a similar growth rate holds for our problem setting.", "Our objective is to precisely characterise $\\liminf _{\\delta \\rightarrow 0}\\ \\inf _{\\Pi \\text{ is }\\delta \\text{-PAC}}\\ \\frac{\\mathbb {E}_v^\\Pi [\\tau _\\delta (\\Pi )]}{\\log (1/\\delta )}.$ In the following section, we present a lower bound for (REF ).", "Furthermore, 
we demonstrate that on any sequence of problem instances $\\lbrace v^{(l)}\\rbrace _{l=1}^{\\infty }$ with increasing levels of “hardness” (to be made precise soon), the second term in (REF ) grows as $\\Theta (\\log \\log (1/\\delta ))$ , and we obtain a precise characterisation of this growth rate, a first-of-its-kind result in the literature to the best of our knowledge." ], [ "Lower Bound: Converse", "Below, we first derive a problem instance-specific asymptotic lower bound on the expected stopping time.", "Then, we present a simplification of the constant appearing in the lower bound whose optimal solution is easy to evaluate, and provide the explicit structure of this optimal solution.", "Next, we show that for any algorithm to be almost-optimal (in the sense to be made precise later in this section), the ratio of any two consecutive communication time instants must be uniformly bounded, a result that may be of independent interest and a key takeaway from the paper.", "Using this result, we obtain an $\\Omega (\\log \\log (1/\\delta ))$ lower bound on the expected number of communication rounds for a sub-class of $\\delta $ -PAC algorithms." ], [ "Lower bound on the Expected Stopping Time", "Let ${\\rm Alt}(v):= \\lbrace v^{\\prime } \\in \\mathcal {P}: a^*(v)\\ne a^*(v^{\\prime }) \\rbrace $ denote the set of alternative problem instances corresponding to the problem instance $v$ .", "Let $\\Lambda $ denote the simplex of probability distributions on the $K$ arms, and let $\\Lambda _m:= \\lbrace \\alpha \\in \\Lambda : \\alpha _i=0 \\ \\forall \\, i\\notin S_m\\rbrace $ denote the subset of $\\Lambda $ corresponding to client $m\\in [M]$ .", "We write $\\Gamma := \\Lambda _1 \\times \\cdots \\times \\Lambda _M$ to denote the Cartesian product of $\\lbrace \\Lambda _m\\rbrace _{m=1}^{M}$ .", "The following proposition presents the first main result of this paper.", "Proposition 1 For any $v\\in \\mathcal {P}$ and $\\delta \\in (0,1)$ , $\\inf _{\\Pi \\text{ is }\\delta \\text{-PAC}}\\ \\mathbb {E}_v^\\Pi [\\tau _{\\delta }(\\Pi )] \\ge c^*(v) \\log \\left(\\frac{1}{4\\delta } \\right),$ where the constant $c^*(v)$ is given by $c^*(v)^{-1} = \\max _{\\omega \\in \\Gamma }\\ \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M} \\sum _{i\\in S_m} \\omega _{i,m}\\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2}.$", "Dividing both sides by $\\log (1/\\delta )$ and letting $\\delta \\rightarrow 0$ in (REF ), $\\liminf _{\\delta \\rightarrow 0}\\ \\inf _{\\Pi \\text{ is }\\delta \\text{-PAC}}\\ \\frac{\\mathbb {E}_v^\\Pi [\\tau _{\\delta }(\\Pi )]}{\\log (1/\\delta )} \\ge c^*(v).$", "The term $c^*(v)$ defined in (REF ) is a measure of the “hardness” of the instance $v$ and is the solution to a max-inf optimisation problem where the outer `max' is over all $M$ -ary probability distributions $\\omega \\in \\Gamma $ such that $\\sum _{i \\in S_m} \\omega _{i,m} =1$ for all $m \\in [M]$ (here, $\\omega _{i,m}$ is the probability that client $m$ pulls arm $i$ ), and the inner `inf' is over the set of alternative problem instances corresponding to the instance $v$ .", "The proof of Proposition REF is similar to the proof of [5] and is omitted for brevity.", "The key ideas to note in the proof are (a) the transportation lemma of [11] relating the error probability to the expected number of arm pulls and the Kullback–Leibler divergence between two problem instances $v$ and $v^{\\prime }\\in {\\rm Alt}(v)$ with distinct best arm locations, 
and (b) Wald's identity for i.i.d.", "observations." ], [ "A Simplification", "A close examination of the proof of the lower bound in [5] reveals that an important step in the proof therein is a further simplification of the max-inf optimisation in the instance-dependent constant; see [5].", "However, an analogous simplification of (REF ) is not possible, as the set ${\\rm Alt}(v)$ does not admit a closed-form expression and the probability simplex of each client is heterogeneous.", "Nevertheless, we propose the following simplification.", "For any $\\omega \\in \\Gamma $ and instance $v$ , let $g_v(\\omega ):= \\inf _{v^{\\prime }\\in {\\rm Alt}(v)} \\sum _{m=1}^{M} \\sum _{i \\in S_m} \\omega _{i,m}\\frac{(\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime }))^2}{2}$ denote the inner infimum in (REF ).", "Our simplification of (REF ) is given by $\\widetilde{g}_v(\\omega ):= \\min _{i \\in [K]} \\frac{\\Delta _i^2(v)/2}{\\frac{1}{M^2_i}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\frac{1}{\\omega _{i,m}} },$ where for each $i\\in [K]$ , $\\Delta _i(v):= \\min _{m\\in [M]:\\, i\\in S_m} \\Big \\vert \\mu _i(v) - \\max _{j \\in S_m \\setminus \\lbrace i\\rbrace }\\mu _{j}(v) \\Big \\vert .$", "In particular, if $\\omega _{i,m}=0$ for some $m\\in [M]$ and $i\\in S_m$ , we set $\\widetilde{g}_v(\\omega ):= 0.$", "Notice that the infimum in (REF ) is over the uncountably infinite set ${\\rm Alt}(v)$ , whereas the simplified minimum in (REF ) is over the finite set $[K]$ .", "Our next result shows that these two terms differ at most by a factor of 2.", "Lemma 1 For $v\\in \\mathcal {P}$ and $\\omega \\in \\Gamma $ , let $g_v(\\omega )$ and $\\widetilde{g}_v(\\omega )$ be as defined in (REF ) and (REF ) respectively.", "Then, $\\frac{1}{2}\\widetilde{g}_v(\\omega ) \\le g_v(\\omega ) \\le \\widetilde{g}_v(\\omega )$ .", "As a consequence of Lemma REF , it follows that $\\widetilde{c}(v):= \\max _{\\omega \\in \\Gamma } \\widetilde{g}_v(\\omega )$ and $c^*(v)=\\max _{\\omega \\in \\Gamma } g_v(\\omega )$ differ by at most a multiplicative factor of 2.", "It is not clear if the optimiser of $c^*(v)$ , if any, can be computed analytically.", "On the other hand, as we shall soon see, the optimiser of $\\widetilde{c}(v)$ can be computed in closed form and plays an important role in our design of an asymptotically optimal algorithm in Section .", "Definition 2 (Good condition 1) An $\\omega \\in \\Gamma $ satisfies “good condition 1” if $\\frac{\\omega _{i_1,m_1}}{\\omega _{i_2,m_1}} = \\frac{\\omega _{i_1,m_2}}{\\omega _{i_2,m_2}}$ for all $i_1, i_2 \\in S_{m_1} \\cap S_{m_2}$ and $m_1, m_2 \\in [M]$ .", "That is, $\\omega $ satisfies good condition 1 if the ratios of the arm selection probabilities are consistent across the clients.", "The next result shows that $\\max _{\\omega \\in \\Gamma } \\widetilde{g}_v(\\omega )$ admits a solution that satisfies good condition 1.", "Proposition 3 Given $v\\in \\mathcal {P}$ , there exists $\\omega \\in \\Gamma $ that attains the maximum in the expression for $\\widetilde{c}(v)$ and satisfies good condition 1.", "Proposition REF follows in a straightforward manner from a more general result, namely Theorem REF , which we state later in the paper.", "Corollary 4 Let $\\widetilde{\\omega }(v)\\in \\Gamma $ be any $M$ -ary probability distribution that attains the maximum in the expression for $\\widetilde{c}(v)$ and satisfies good condition 1.", "Then, there exists a $K$ -dimensional vector $G(v)=[G(v)_i]_{i\\in [K]}$ such that $\\widetilde{\\omega }(v)_{i,m} = \\frac{G(v)_i}{\\sum 
_{i^{\\prime } \\in S_m} G(v)_{i^{\\prime }}}, \\quad i \\in S_m, \\ m\\in [M].$ Corollary REF elucidates the rather simple form of the optimiser of $\\widetilde{c}(v)$ , one that is characterised by a $K$ -dimensional vector $G(v)$ which, in the sequel, shall be referred to as the global vector corresponding to the instance $v$ .", "We shall soon see that it plays an important role in the design of our algorithm for finding the best arms of the clients.", "In fact, we show that in order to inform each client of its arm selection probabilities, the server needs to broadcast only the global vector instead of sending a separate probability vector to each client, thereby leading to significantly less downlink network traffic." ], [ "Lower bound on the Expected Number of Communication Rounds", "In this section, we present a lower bound on the expected number of communication rounds required by any “good” algorithm to find the best arms of the clients.", "Our interest is in the class of all almost-optimal $\\delta $ -PAC algorithms, as defined next.", "Definition 5 (Almost-optimal algorithm) Given $\\delta \\in (0,1)$ , and $\\alpha \\ge 1$ , a $\\delta $ -PAC algorithm $\\Pi $ is said to be almost-optimal up to a constant $\\alpha $ if for any $v \\in \\mathcal {P}$ , $\\mathbb {E}_v^\\Pi [\\tau _{\\delta }(\\Pi )] \\le \\alpha \\, c^*(v) \\log \\left(\\frac{1}{4\\delta }\\right).$ In addition, $\\Pi $ is said to be almost-optimal if it is almost-optimal up to a constant $\\alpha $ for some $\\alpha \\ge 1$ .", "Definition REF implies that the expected stopping time of an almost-optimal algorithm matches the lower bound in (REF ) up to the multiplicative constant $\\alpha $ .", "Notice that the sparser (more infrequent) the communication between the clients and the server, the larger the time required to find the best arms of the clients.", "Because (REF ) implies that the expected stopping time of an almost-optimal algorithm cannot be infinitely large, it is natural to ask what is the sparsest level of communication achievable in the class of almost-optimal algorithms.", "The next result provides a concrete answer to the preceding question.", "Lemma 2 Fix $\\delta \\in (0,\\frac{1}{4})$ and a $\\delta $ -PAC algorithm $\\Pi $ with communication time instants $\\lbrace b_r\\rbrace _{r\\in \\mathbb {N}}$ .", "If $\\Pi $ is almost-optimal, then $\\sup _{r \\in \\mathbb {N} } \\frac{b_{r+1}}{b_{r}} \\le \\eta $ for some $\\eta < +\\infty $ .", "Lemma REF , one of the key results of this paper and is of independent interest, asserts that the ratio of any two consecutive communication time instants of an almost-optimal algorithm must be bounded.", "An important implication of Lemma REF is that an almost-optimal algorithm can communicate at most exponentially sparsely, i.e., at exponential time instants of the form $t=\\lceil (1+\\lambda )^r \\rceil $ , $r\\in \\mathbb {N}$ , for some $\\lambda >0$ .", "For instance, an algorithm that communicates at time instants that grow super-exponentially (i.e., $t=2^{\\kappa (r)}$ for any super-linear function $\\kappa (r)$ ), does not satisfy the requirement in Lemma REF , and hence cannot be almost-optimal.", "Lemma REF implies that when an almost-optimal algorithm $\\Pi $ stops at time step $\\tau _\\delta (\\Pi )$ , at least $\\Omega (\\log _{\\eta }(\\tau _\\delta (\\Pi )))$ communication rounds must have occurred, i.e., $\\mathfrak {r}_\\delta (\\Pi )=\\Omega (\\log _\\eta (\\tau _\\delta (\\Pi )))$ almost surely (a.s.).", "The next result relates $\\log 
_\\eta (\\tau _\\delta (\\Pi ))$ with $\\log _\\eta (\\mathbb {E}[\\tau _\\delta (\\Pi )])$ .", "Lemma 3 Let $\\lbrace v^{(l)}\\rbrace _{l=1}^\\infty \\subset \\mathcal {P}$ be any sequence of problem instances with $\\lim _{l\\rightarrow \\infty } c^*(v^{(l)}) = +\\infty $ .", "Given $\\delta \\in \\left(0,\\frac{1}{4}\\right)$ , for any almost-optimal algorithm $\\Pi $ and $\\beta \\in (0,1)$ , $\\liminf \\limits _{l \\rightarrow \\infty } \\mathbb {P}_{v^{(l)}}^{\\Pi }\\left(\\log \\left(\\tau _{\\delta }(\\Pi ) \\right) \\!>\\!", "\\beta \\log \\left(\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _{\\delta }(\\Pi )] \\right) \\right) \\!\\ge \\!", "\\frac{1}{4} -\\delta .$ Lemma REF shows that $\\log (\\tau _{\\delta }(\\Pi )) = \\Omega (\\log (\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _{\\delta }(\\Pi )])$ with a non-vanishing probability on a sequence of problem instances $v^{(l)}$ with diverging levels of hardness.", "Proposition REF implies that $\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _{\\delta }(\\Pi )]=\\Omega (\\log (1/\\delta ))$ , which in conjunction with Lemma REF and the relation $\\mathfrak {r}_\\delta (\\Pi )=\\Omega (\\log _\\eta (\\tau _\\delta (\\Pi )))$ a.s., yields $\\mathfrak {r}_\\delta (\\Pi )=\\Omega (\\log _\\eta \\log (1/\\delta ))$ a.s., and consequently $\\mathbb {E}[\\mathfrak {r}_\\delta (\\Pi )]=\\Omega (\\log _\\eta \\log (1/\\delta ))$ .", "The next result makes this heuristic precise.", "Theorem 6 Fix $\\lbrace v^{(l)}\\rbrace _{l=1}^\\infty \\subset \\mathcal {P}$ with $\\lim _{l\\rightarrow \\infty } c^*(v^{(l)}) = +\\infty $ .", "Fix $\\delta \\in (0,\\frac{1}{4})$ .", "For any almost-optimal algorithm $\\Pi $ with communication time instants $\\lbrace b_r\\rbrace _{r\\in \\mathbb {N}}$ satisfying $\\frac{b_{r+1}}{b_r}\\le \\eta $ for all $ r\\in \\mathbb {N}$ , $\\liminf _{l\\rightarrow \\infty }\\ \\frac{\\mathbb {E}_{v^{(l)}}^\\Pi [\\mathfrak {r}_\\delta (\\Pi )]}{\\log _\\eta \\left(\\, c^*(v^{(l)})\\log \\left(\\frac{1}{4\\delta }\\right)\\right)} \\ge \\frac{1}{4} -\\delta .$ Theorem REF is the analogue of Proposition REF for the number of communication rounds, and is the first-of-its-kind result to the best of our knowledge." 
], [ "The Heterogeneous Track-and-Stop ($\\textsc {Het}-\\textsc {TS}(\\lambda )$ ) Algorithm", "In this section, we propose an algorithm for finding the best arms of the clients based on the well-known Track-and-Stop strategy [5], [11] that communicates exponentially sparsely.", "Known as Heterogeneous Track-and-Stop and abbreviated as $\\textsc {Het}-\\textsc {TS}(\\lambda )$ for an input parameter $\\lambda >0$ , the individual components of our algorithm are described in detail below.", "Communication strategy: We set $b_r = \\lceil (1+\\lambda )^r \\rceil $ , $r\\in \\mathbb {N}$ .", "In the $r$ th communication round, each client transmits to the server on its uplink the empirical means of the observations seen from its arms up to time $b_r$ .", "Note that $\\hat{\\mu }_{i,m}(t) \\frac{1}{N_{i,m}(t)}\\sum _{s=1}^{t} \\mathbf {1}_{\\lbrace A_{m}(s) = i\\rbrace }X_{i,m}(s) $ is the empirical mean of arm $i\\in S_m$ after $t$ time instants, where $\\hat{\\mu }_{i,m}(t)=0$ if $N_{i,m}(t)=0$ .", "In (REF ), $A_m(t)$ is the arm observed in client $m$ at time $t$ , and $N_{i,m}(t):= \\sum _{s=1}^{t} \\mathbf {1}_{\\lbrace A_{m}(s) = i\\rbrace }$ is the number of times arm $i$ of client $m$ was pulled up to the time instant $t$ .", "On the downlink, for each $t\\in \\lbrace b_r\\rbrace _{ r\\in \\mathbb {N} }$ , the server first computes the global vector $G(\\hat{v}(t))$ according to the procedure outlined in Section  below and broadcasts this vector to each client.", "Here, $\\hat{v}(t)$ is the empirical problem instance at time $t$ , defined by the empirical arm means $\\lbrace \\hat{\\mu }_{i,m}(t): i\\in S_m, m\\in [M]\\rbrace $ received from the clients.", "Particularly, $G(\\hat{v}(t))=\\mathbf {1}^K$ if $\\hat{v}(t) \\notin \\mathcal {P}$ .", "Sampling strategy at each client: We use a variant of the so-called D-tracking rule of [5] for pulling the arms at each of the clients.", "Accordingly, at any time $t$ , client $m\\in [M]$ first computes $\\hat{\\omega }_{i,m}(t) \\frac{G(\\hat{v}(b_{r(t)}))_i}{\\sum _{i^{\\prime }\\in S_m} G(\\hat{v}(b_{r(t)}))_{i^{\\prime }}}, \\quad i\\in S_m,$ based on the global vector received from the server in the most recent communication round $r(t) \\min \\lbrace r\\in \\mathbb {N}: b_r \\ge t\\rbrace -1$ and let $b_0 0$ , then subsequently pulls arm ${\\footnotesize A_m(t) \\!\\in \\!", "{\\left\\lbrace \\begin{array}{ll}\\displaystyle \\operatornamewithlimits{argmin}_{i\\in S_m} N_{i,m}(t\\!-\\!1), &\\hspace{-52.75679pt} \\text{if }\\displaystyle \\min _{i\\in S_m} N_{i,m}(t\\!-\\!1)\\!", "<\\!", "\\sqrt{\\frac{t\\!-\\!", "1}{\\vert S_m\\vert }},\\\\\\displaystyle \\operatornamewithlimits{argmin}_{i\\in S_m} N_{i,m}(t-1)- t\\,\\hat{\\omega }_{i,m}(t) , &\\text{otherwise}.\\end{array}\\right.}}", "$ Ties, if any, are resolved uniformly at random.", "Notice that the rule in (REF ) ensures that each arm is pulled at least $O(\\sqrt{t})$ times in the long run.", "Stopping and recommendation rules at the server: We use a version of Chernoff's stopping rule at the server, as outlined below.", "Let $Z(t) \\inf _{v^{\\prime } \\in {\\rm Alt}(\\hat{v}(t))}\\ \\sum _{m=1}^M \\ \\sum _{i\\in S_m} N_{i,m}(t)\\ \\frac{(\\mu _{i,m}^\\prime - \\hat{\\mu }_{i,m}(t))^2}{2},$ where $\\hat{v}(t)$ is the empirical problem instance at time $t$ , defined by the empirical means $\\lbrace \\hat{\\mu }_{i,m}(t): i \\in S_m, \\, m\\in [M]\\rbrace $ received from the clients, and $v^\\prime $ is defined by the means $\\lbrace \\mu _{i,m}^\\prime : i \\in S_m, \\, m\\in 
[M]\\rbrace $ .", "Then, the (random) stopping time of the algorithm is defined as $\\tau _\\delta (\\Pi _{{\\rm Het-TS}}) = \\min \\lbrace t \\in \\lbrace b_r\\rbrace _{ r\\in \\mathbb {N} }: Z(t) \\!>\\!", "\\beta (t, \\delta ), t\\ge K\\rbrace ,$ where $\\beta (t, \\delta ) K^{\\prime }\\log (t^2+t) + f^{-1}(\\delta )$ , with $K^{\\prime } \\sum _{m=1}^M \\vert S_m \\vert $ and $f:(0,+\\infty ) \\rightarrow (0,1)$ defined as $f(x) \\sum _{i=1}^{K^{\\prime }} \\frac{x^{i-1}e^{-x}}{(i-1)!", "}, \\quad x \\in (0, +\\infty ).$ Finally, our algorithm outputs $\\hat{a}_{\\delta ,m} = \\mathop {\\arg \\max }_{i \\in S_m} \\hat{\\mu }_i (\\tau _\\delta ), \\quad m \\in [M],$ as the candidate best arms of the clients, where $\\hat{\\mu }_i (\\tau _\\delta )= \\frac{1}{M_i} \\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\ \\hat{\\mu }_{i,m}(\\tau _\\delta )$ .", "Remark 1 In contrast to existing works on best arm identification in the fixed-confidence regime, $\\Pi _{{\\rm Het-TS}}$ only stops at the communication time instants $\\lbrace b_r\\rbrace _{r\\in \\mathbb {N}}$ .", "This is clearly evident from (REF ).", "Remark 2 Our choice of $f$ in (REF ) ensures that the map $x \\mapsto f(x)$ is strictly monotone and continuous, and therefore admits an inverse, say $f^{-1}(\\cdot )$ .", "Furthermore, an important property of $f^{-1}(\\cdot )$ that results from the careful construction of $f$ as in (REF ) is that $\\lim _{\\delta \\rightarrow 0} \\frac{\\log (1/\\delta )}{f^{-1}(\\delta )}=1$ .", "See Lemma REF in the supplementary material for the proof.", "[t] $\\textsc {Het}-\\textsc {TS}$ : At Client $m\\in [M]$ [1] $\\delta \\in (0,1)$ : confidence level.", "$\\lbrace b_r: r\\in \\mathbb {N}\\rbrace $ : time instants of communication.", "$S_m$ : set of accessible arms.", "$\\hat{a}_{\\delta ,m}$ : the best arm in $S_m$ .", "Initialise $G \\leftarrow \\mathbf {1}^K$ $t\\in \\lbrace 1,2,\\ldots \\rbrace $ Compute the objective allocation $\\lbrace \\hat{\\omega }_{i,m}(t): i \\in S_m\\rbrace $ via (REF ).", "$\\min _{i\\in S_m} N_{i,m}(t-1) < \\sqrt{(t-1) / \\vert S_m\\vert }$ Pull arm $A_m(t) \\in \\operatornamewithlimits{argmin}_{i\\in S_m}N_{i,m}(t-1)$ ; resolve ties uniformly.", "Pull arm $A_m(t) \\in \\operatornamewithlimits{argmin}_{i\\in S_m}N_{i,m}(t-1)- t\\, \\hat{\\omega }_{i,m}(t)$ ; resolve ties uniformly.", "Update the empirical means $\\lbrace \\hat{\\mu }_{i,m}(t): i\\in S_m\\rbrace $ .", "$t \\in \\lbrace b_r: r \\in \\mathbb {N} \\rbrace $ Send the empirical means $\\lbrace \\hat{\\mu }_{i,m}(t): i\\in S_m\\rbrace $ from client $m$ to the server.", "$G$ $\\leftarrow $ latest global vector received from the server.", "Server signals to stop further arm pulls Receive best arm $\\hat{a}_{\\delta ,m}$ from the server.", "Break.", "Best arm $\\hat{a}_{\\delta , m}$ .", "[!ht] $\\textsc {Het}-\\textsc {TS}$ : At Central Server [1] $\\delta \\in (0,1)$ : confidence level.", "$\\lbrace b_r: r \\in \\mathbb {N}\\rbrace $ : time instants of communication.", "$S_1,\\ldots , S_M$ : sets of clients' accessible arms.", "Initialize $G \\leftarrow \\mathbf {1}^K$ $t \\in \\lbrace b_r: r \\in \\mathbb {N}\\rbrace $ $m\\in [M]$ Receive the empirical means $\\lbrace \\hat{\\mu }_{i,m}(t): i \\in S_m\\rbrace $ from client $m$ .", "$t\\ge K$ and $Z(t) > \\beta (t,\\delta )$ $m\\in [M]$ Compute the empirical best arm $\\hat{a}_{\\delta ,m}$ of client $m$ .", "Send $\\hat{a}_{\\delta ,m}$ and signal to stop further arm pulls to client $m$ .", "Break.", "Compute the global vector $G$ via the 
empirical means $\\lbrace \\hat{\\mu }_{i,m}(t): i\\in S_m,\\, m\\in [M]\\rbrace $ .", "Broadcast vector $G$ to all the clients." ], [ "Results on the Performance of $\\textsc {Het}-\\textsc {TS}(\\lambda )$", "In this section, we state the results on the performance of the $\\textsc {Het}-\\textsc {TS}(\\lambda )$ algorithm which we denote by $\\Pi _{\\textsc {Het}-\\textsc {TS}}$ (the input parameter $\\lambda $ being implicit).", "The first result below asserts that the algorithm is $\\delta $ -PAC for any $\\delta \\in (0,1)$ .", "Theorem 7 For any confidence level $\\delta \\in (0,1)$ , the $\\textsc {Het}-\\textsc {TS}(\\lambda )$ algorithm is $\\delta $ -PAC.", "Theorem 8 Fix $\\lambda >0$ , and let $b_r = \\lceil (1+\\lambda )^r \\rceil $ , $r\\in \\mathbb {N}$ .", "Given any $v\\in \\mathcal {P}$ and $\\delta \\in (0,1)$ , $\\tau _\\delta (\\Pi _{\\textsc {Het}-\\textsc {TS}})$ satisfies $\\mathbb {P}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS}}} \\left(\\limsup _{\\delta \\rightarrow 0} \\frac{\\tau _{\\delta }(\\Pi _{\\textsc {Het}-\\textsc {TS}})}{\\log \\left(\\frac{1}{\\delta }\\right)} \\le 2\\, (1+\\lambda )\\, c^*(v) \\right) = 1.$ Furthermore, $\\mathbb {E}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS}}}[\\tau _\\delta (\\Pi _{\\textsc {Het}-\\textsc {TS}})]$ satisfies $\\limsup _{\\delta \\rightarrow 0} \\frac{\\mathbb {E}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS}}}[\\tau _\\delta (\\Pi _{ \\textsc {Het}-\\textsc {TS}) }]}{\\log \\left(\\frac{1}{\\delta }\\right)} \\le 2\\, (1+\\lambda )\\, c^*(v).$ Thus, in the limit as $\\delta \\rightarrow 0$ , $\\Pi _{\\textsc {Het}-\\textsc {TS}}$ is asymptotically almost-optimal up to the constant $\\alpha =2\\,(1+\\lambda )$ .", "Theorem REF lucidly demonstrates the tradeoff between the frequency of communication, which is parametrized by $\\lambda $ , and the expected stopping time.", "Since $\\textsc {Het}-\\textsc {TS}(\\lambda )$ communicates at time instances $b_r = \\lceil (1+\\lambda )^r \\rceil $ , as $\\lambda $ increases, communication occurs with lower frequency.", "This, however, leads to an increase in the multiplicative gap to asymptotic optimality, $2\\, (1+\\lambda )$ .", "The factor $1+\\lambda $ arises due to the necessity of communicating at time instances whose ratios $\\frac{b_{r+1}}{b_r}$ are bounded; see Lemma REF .", "The other factor 2 (in $2\\, (1+\\lambda )$ ) arises from approximating $g_v(\\omega )$ by $\\widetilde{g}_v(\\omega ) $ in Lemma REF .", "This factor is required to ensure that the optimal solution to $\\widetilde{c}(v)$ and the arm selection probabilities at each time instant can be evaluated in a tractable fashion.", "Corollary 9 Fix $\\lambda >0$ , and let $b_r = \\lceil (1+\\lambda )^r \\rceil $ , $r\\in \\mathbb {N}$ .", "Given any $v\\in \\mathcal {P}$ and $\\delta \\in (0,1)$ , $\\mathfrak {r}_\\delta (\\Pi _{ \\textsc {Het}-\\textsc {TS} })$ satisfies $\\mathbb {P}_v^{\\Pi _{ \\textsc {Het}-\\textsc {TS} }}\\left(\\limsup _{\\delta \\rightarrow 0} \\frac{\\mathfrak {r}_\\delta (\\Pi _{\\textsc {Het}-\\textsc {TS}})}{\\log _{1+\\lambda } \\left(\\log \\left(\\frac{1}{\\delta }\\right) c^*(v) \\right)} \\le 1 \\right) = 1.$ Furthermore, $\\mathbb {E}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS} }}[\\mathfrak {r}_\\delta (\\Pi _{\\textsc {Het}-\\textsc {TS} })]$ satisfies $\\limsup _{\\delta \\rightarrow 0}\\ \\frac{\\mathbb {E}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS}}}[\\mathfrak {r}_\\delta (\\Pi _{ \\textsc {Het}-\\textsc {TS}})]}{\\log _{1+\\lambda } \\left(\\log \\left(\\frac{1}{\\delta } 
\\right)c^*(v)\\right)} \\le 1.$ In contrast to Theorem REF , Corollary REF is a statement concerning the number of communication rounds.", "It says that the expectation of this quantity scales as $O(\\log \\log ({1}/{\\delta }))$ .", "This is perhaps unsurprising given that ${\\textsc {Het}-\\textsc {TS}}(\\lambda )$ communicates at exponentially growing time instants $\\lbrace b_r\\rbrace _{r\\in \\mathbb {N}}$ ." ], [ "Solving the Optimal Allocation", "Recall from Section REF that given any problem instance $v\\in \\mathcal {P}$ , the optimal solution to $\\max _{\\omega \\in \\Gamma }\\ \\widetilde{g}_v(\\omega )$ may be characterised by a $K$ -dimensional global vector $G(v)$ (see Corollary REF for more details).", "In this section, we provide the details on how to efficiently compute the global vector $G(v)$ corresponding to any problem instance $v \\in \\mathcal {P}$ .", "Consider the relation $R \\lbrace (i_1,i_2):\\ \\exists \\, m\\in [M], i_1,i_2 \\in S_m \\rbrace $ on the arms.", "Let $R_{\\mathrm {e}}$ be the equivalence relation generated by $R$ , i.e., the smallest equivalence relation containing $R$ .", "Clearly, the above equivalence relation $R_{\\mathrm {e}}$ partitions $[K]$ into equivalence classes.", "Let $Q_1, \\ldots , Q_L$ be the equivalence classes.", "For any $j\\in [L]$ , let $\\widetilde{g}_{v}^{(j)}(\\omega ) \\min _{i\\in Q_j} \\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\omega _{i,m}} }$ .", "We define $\\widetilde{g}^{(j)}_v(\\omega ) = 0$ if there exists $m\\in [M]$ and $i \\in S_m \\cap Q_j$ such that $\\omega _{i,m}=0$ .", "In Eqn.", "(REF ) in Appendix of the supplementary material, we argue that the following optimisation problems admit a common solution: $\\max _{\\omega \\in \\Gamma } \\widetilde{g}_v(\\omega ),\\quad \\Big \\lbrace \\max _{\\omega \\in \\Gamma } \\widetilde{g}^{(l)}_v(\\omega ),\\quad l=1, \\ldots , L\\Big \\rbrace .$ Definition 10 (Good condition 2) Fix $v\\in \\mathcal {P}$ .", "An $\\omega \\in \\Gamma $ satisfies “good condition 2” if for all $ j \\in [L]$ and $i_1,i_2 \\in Q_{j}$ , $\\frac{\\Delta ^2_{i_1}(v)}{\\frac{1}{M^2_{i_1}}\\sum _{m=1}^M \\frac{\\mathbf {1}_{\\lbrace i_1 \\in S_m\\rbrace }}{\\omega _{i_1,m}} } = \\frac{\\Delta ^2_{i_2}(v)}{\\frac{1}{M^2_{i_2}}\\sum _{m=1}^M \\frac{\\mathbf {1}_{\\lbrace i_2 \\in S_m\\rbrace }}{\\omega _{i_2,m}} }$ .", "The next result states that the common solution to (REF ) is unique and satisfies good conditions 1 and 2.", "Theorem 11 For any problem instance $v\\in \\mathcal {P}$ , the common solution of the $L+1$ optimization problems in (REF ) satisfies good conditions 1 and 2 and is unique.", "Let $\\widetilde{w}(v)$ be the unique common solution to (REF ) corresponding to the instance $v$ .", "Let $G(v)$ be the unique global vector characterising $\\widetilde{w}(v)$ (see Corollary REF ) with $G(v)>0$ and $\\Vert G^{(j)}(v) \\Vert _2 =1 \\quad \\forall \\, j\\in [L],$ where $G^{(j)}(v) \\in \\mathbb {R}^{\\vert Q_j \\vert }$ denotes the sub-vector of $G(v)$ formed from the rows corresponding to the arms $i \\in Q_j$ .", "Let $H(v) \\in \\mathbb {R}^{K \\times K}$ be the matrix defined by $H(v)_{i_1,i_2} := \\frac{1}{\\Delta ^2_{i_1}(v) M^2_{i_1}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_1,i_2\\in S_m\\rbrace } \\quad \\forall \\, i_1,i_2 \\in [K].$ For $j\\in [L]$ , let $H^{(j)}(v) \\in \\mathbb {R}^{\\vert Q_j \\vert \\times \\vert Q_j \\vert }$ be the sub-matrix of $H(v)$ formed from the rows and columns 
corresponding to the arms in $Q_j$ .", "It is easy to verify that $\\big ( H^{(j)}(v) \\big )^\\top H^{(j)}(v) = H^{(j)}(v)\\big ( H^{(j)}(v) \\big )^\\top $ .", "That is, $H^{(j)}(v)$ is a normal matrix [7] and therefore has $\\vert Q_j \\vert $ linearly independent eigenvectors.", "The following result asserts that $G^{(j)}(v)$ is an eigenvector of the matrix $H^{(j)}(v)$ .", "Lemma 4 Given $v \\in \\mathcal {P}$ , $G^{(j)}(v)$ is an eigenvector of $H^{(j)}(v)$ with eigenvalue $\\frac{1}{\\widetilde{g}_v^{(j)}(\\widetilde{\\omega }(v))}$ for all $j \\in [L]$ , i.e., $H^{(j)}(v) \\, G^{(j)}(v) =\\frac{1}{\\widetilde{g}_v^{(j)}(\\widetilde{\\omega }(v))}\\, G^{(j)}(v) \\quad \\forall \\, j\\in [L].$", "Lemma 5 Given any problem instance $v \\in \\mathcal {P}$ , the dimension of the eigenspace of $H^{(j)}(v)$ associated with the eigenvalue $\\frac{1}{\\widetilde{g}_v^{(j)}(\\widetilde{\\omega }(v))}$ is equal to one for all $j \\in [L]$ .", "The main result of this section, a recipe for computing the global vector corresponding to an instance, is given below.", "Proposition 12 Fix $j\\in [L]$ and a problem instance $v \\in \\mathcal {P}$ .", "Among any set of $\\vert Q_j \\vert $ linearly independent eigenvectors of $H^{(j)}(v)$ , there exists only one vector $\\mathbf {u}$ whose elements are all negative ($\\mathbf {u}<\\mathbf {0}$ ) or all positive ($\\mathbf {u} > \\mathbf {0}$ ).", "Furthermore, $G^{(j)}(v)=\\left\\lbrace \\begin{array}{ll}-\\frac{\\mathbf {u}}{\\Vert \\mathbf {u} \\Vert _2}, &\\text{if } \\mathbf {u}<\\mathbf {0}, \\\\ \\frac{\\mathbf {u}}{\\Vert \\mathbf {u} \\Vert _2}, &\\text{if }\\mathbf {u}>\\mathbf {0}.\\end{array}\\right.$", "Proposition REF provides an efficient recipe to compute the global vector $G(v)$ from its $L$ sub-vectors $\\lbrace G^{(j)}(v) \\rbrace _{j \\in [L]}$ : among the eigenvectors of $H^{(j)}(v)$ , search for the unique vector with all-positive or all-negative entries, and normalise it to obtain $G^{(j)}(v)$ .", "We use this procedure in our implementation of $\\textsc {Het}-\\textsc {TS}(\\lambda )$ on synthetic and real-world datasets (e.g., the MovieLens dataset [3]).", "Experimental results are provided in Appendix ." 
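The recipe of Proposition 12 is straightforward to implement with an off-the-shelf eigendecomposition; below is a minimal numpy sketch (the function name is illustrative, and we assume the sub-matrix $H^{(j)}(v)$ has already been assembled from $\\Delta _i(v)$ , $M_i$ , and the access sets as defined above):

```python
import numpy as np

def global_subvector(H_j: np.ndarray) -> np.ndarray:
    """Return G^{(j)}(v): among the eigenvectors of H^{(j)}(v), locate the
    unique one whose entries are all positive or all negative, flip its sign
    if necessary, and normalise it to unit Euclidean norm."""
    _, eigvecs = np.linalg.eig(H_j)
    for k in range(eigvecs.shape[1]):
        u = eigvecs[:, k]
        if np.max(np.abs(u.imag)) > 1e-9:     # skip genuinely complex vectors
            continue
        u = u.real
        if np.all(u > 0) or np.all(u < 0):
            u = np.abs(u)                     # -u/||u||_2 if u < 0, else u/||u||_2
            return u / np.linalg.norm(u)
    raise ValueError("no one-signed eigenvector found; check the instance")

# Illustrative 2x2 sub-matrix (not taken from the paper's experiments):
print(global_subvector(np.array([[2.0, 1.0], [1.0, 2.0]])))
```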
], [ "Concluding Remarks and Future Work", "We proposed a novel framework for best arm identification in the fixed-confidence regime for federated learning with heterogeneous clients.", "The novelty lies in that each client can only access a subset of the arms; this was motivated from the unavailability of authorised vaccines in certain countries.", "We showed, among other results, that any almost optimal algorithm must necessarily communicate such that the ratio of consecutive time instants is bounded.", "We proposed a track-and-stop-based algorithm whose expected stopping time is asymptotically within an identifiable multiplicative factor of the lower bound.", "Future work includes carefully examining the effects of heterogeneity, possible corruptions, and the quantisation of various messages on the uplinks and downlinks.", "Supplementary Material A Useful Lemma Lemma 6 Let $T$ be any fixed time instant, and let $\\mathcal {F}_T = \\sigma (\\lbrace X_{A_m(t),m}(t),A_m(t): t\\in [T], m\\in [M]\\rbrace )$ be the history of all the arm pulls and rewards seen up to time $T$ at all the clients under an algorithm $\\Pi $ .", "Let $E$ be any event such that $\\mathbf {1}_{\\lbrace E\\rbrace }$ is $\\mathcal {F}_T$ -measurable.", "Then, for any pair of problem instances $v$ and $v^{\\prime }$ , $\\sum _{t=1}^{T}\\ \\sum _{m=1}^M\\ \\sum _{i \\in S_m} \\mathbb {E}_{v}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace }\\ D_{\\mathrm {KL}}(v_{i,m} \\Vert v^{\\prime }_{i,m} )\\right] \\ge d_{\\mathrm {KL}}\\left(\\mathbb {P}_{v}^\\Pi (E), \\mathbb {P}_{{v^{\\prime }}}^{\\Pi }(E)\\right),$ where $D_{\\mathrm {KL}}(p \\Vert q)$ denotes the Kullback–Leibler (KL) divergence between the distributions $p$ and $q$ , and $d_{\\mathrm {KL}}(x,y)$ denotes the KL divergence between two Bernoulli distributions with parameters $x$ and $y$ .", "The proof of Lemma REF follows along the lines of the proof of [11] and is omitted.", "Proof of Lemma REF Fix $v=\\lbrace \\mu _{i,m}: i\\in S_m, \\, m\\in [M]\\rbrace \\in \\mathcal {P}$ and $\\omega \\in \\Gamma $ .", "Let $\\mathcal {C}(v) \\bigcup _{m\\in [M]} \\Big \\lbrace (a^*_m(v),i): i \\in S_m \\setminus \\lbrace a^*_m(v)\\rbrace \\Big \\rbrace .$ First, we note that by definition, $g_v(\\omega )=0$ if $\\omega _{i,m}=0$ for some $m\\in [K]$ and $i\\in S_m$ .", "Therefore, it suffices to consider the case when $\\omega _{i,m}>0$ for all $m\\in [K]$ and $i\\in S_m$ .", "In what follows, we abbreviate $\\mu _{i,m}(v^{\\prime })$ and $\\mu _{i}(v^{\\prime })$ to $\\mu ^{\\prime }_{i,m}$ and $\\mu ^{\\prime }_{i}$ respectively.", "We have $g_v(\\omega )& = \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i \\in S_m} \\omega _{i,m}\\frac{(\\mu _{i,m}-\\mu ^{\\prime }_{i,m})^2}{2} \\nonumber \\\\&= \\min _{(i_1,i_2)\\in \\mathcal {C}(v)}\\ \\inf _{v^{\\prime }:\\mu ^{\\prime }_{i_1}<\\mu ^{\\prime }_{i_2}}\\ \\sum _{m=1}^{M}\\ \\sum _{i \\in S_m} \\omega _{i,m}\\frac{(\\mu _{i,m}-\\mu ^{\\prime }_{i,m})^2}{2} \\nonumber \\\\&= \\min _{(i_1,i_2)\\in \\mathcal {C}(v)}\\ \\inf _{v^{\\prime }:\\mu ^{\\prime }_{i_1} \\le \\mu ^{\\prime }_{i_2}} \\ \\sum _{m=1}^{M} \\mathbf {1}_{\\lbrace i_1\\in S_m\\rbrace } \\omega _{i_1,m}\\frac{(\\mu _{i_1,m}-\\mu ^{\\prime }_{i_1,m})^2}{2}+\\mathbf {1}_{\\lbrace i_2\\in S_m\\rbrace } \\omega _{i_2,m}\\frac{(\\mu _{i_2,m}-\\mu ^{\\prime }_{i_2,m})^2}{2} \\nonumber \\\\&= \\min _{(i_1,i_2)\\in \\mathcal {C}(v)} \\ \\frac{(\\mu _{i_1}-\\mu _{i_2})^2/2}{\\frac{1}{M^2_{i_1}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_1 
\\in S_m\\rbrace } \\frac{1}{\\omega _{i_1,m}} + \\frac{1}{M^2_{i_2}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_2 \\in S_m\\rbrace } \\frac{1}{\\omega _{i_2,m}}},$ where (REF ) follows from the penultimate line by using the method of Lagrange multipliers and noting that the inner infimum in the penultimate line is attained at $\\mu ^{\\prime }_{i_1,m} &= \\mu _{i_1,m} - \\frac{\\mu _{i_1}-\\mu _{i_2}}{M_{i_1}\\omega _{i_1,m}(\\sum _{i \\in \\lbrace i_1,i_2\\rbrace }\\sum _{m^{\\prime }:i \\in S_{m^{\\prime }}} \\frac{1}{ \\omega _{i,m^{\\prime }} M^2_{i}})} \\quad \\forall \\, m : i_1 \\in S_{m},\\nonumber \\\\\\mu ^{\\prime }_{i_2,m} &= \\mu _{i_2,m} + \\frac{\\mu _{i_1}-\\mu _{i_2}}{M_{i_2}\\omega _{i_2,m}(\\sum _{i \\in \\lbrace i_1,i_2\\rbrace }\\sum _{m^{\\prime }:i \\in S_{m^{\\prime }}} \\frac{1}{\\omega _{i,m^{\\prime }} M^2_{i} })} \\quad \\forall \\, m :i_2 \\in S_{m}$ From the definition of $\\widetilde{g}_v(\\omega )$ in (REF ), it is easy to verify that $\\widetilde{g}_v(\\omega ) = \\min _{(i_1,i_2)\\in \\mathcal {C}(v)} \\min \\bigg \\lbrace \\frac{(\\mu _{i_1}-\\mu _{i_2})^2/2}{\\frac{1}{M^2_{i_1}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_1 \\in S_m\\rbrace } \\frac{1}{\\omega _{i_1,m}} },\\frac{(\\mu _{i_1}-\\mu _{i_2})^2/2}{\\frac{1}{M^2_{i_2}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_2 \\in S_m\\rbrace } \\frac{1}{\\omega _{i_2,m}}} \\bigg \\rbrace ,$ whence it follows that $\\frac{\\widetilde{g}_v(\\omega )}{2} \\le g_v(\\omega ) \\le \\widetilde{g}_v(\\omega )$ .", "Proof of Lemma REF Define $v_{\\dagger }(\\rho )$ to be a special problem instance in which the arm means are given by $\\mu (v_{\\dagger }(\\rho ))_{i,m} = \\frac{i}{\\sqrt{\\rho }}, \\quad m\\in [M], \\ i \\in S_m.$ Then, it follows that $\\mu (v_{\\dagger }(\\rho ))_{i} = \\frac{i}{\\sqrt{\\rho }}$ for all $i\\in [K]$ .", "The following result will be used in the proof of Lemma REF .", "Lemma 7 Given any $\\rho >0$ , the problem instance $v_{\\dagger }(\\rho )$ , defined in (REF ), satisfies $\\frac{4\\rho }{MK^2} \\le c^*\\big (v_{\\dagger }(\\rho )\\big ) \\le 4K\\rho .$ [Proof of Lemma REF ] Recall the definition of $\\mathcal {C}(\\cdot )$ in (REF ).", "Let $\\Delta _{\\rm min}(\\rho )\\min _{(i,j) \\in \\mathcal {C}(v_{\\dagger }(\\rho ))} \\vert \\mu _i(v_{\\dagger }(\\rho )) - \\mu _j(v_{\\dagger }(\\rho )) \\vert ,$ and let $\\omega ^{\\rm trivial} \\in \\Gamma $ be defined as $\\omega ^{\\rm trivial}_{i,m} = \\frac{1}{\\vert S_m \\vert }, \\quad m\\in [M], \\ i \\in S_m.$ Notice that $c^*(v_{\\dagger }(\\rho ))^{-1} &= \\max _{\\omega \\in \\Gamma }\\ \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i\\in S_m} \\omega _{i,m}\\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2} \\nonumber \\\\&\\le \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i\\in S_m} 1\\cdot \\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2}\\nonumber \\\\&=g_{v_{\\dagger }(\\rho )}(\\mathbf {1}^{K\\times M}),$ where $\\mathbf {1}^{K \\times M}$ denotes the all-ones matrix of dimension $K \\times M$ .", "Also notice that $c^*(v_{\\dagger }(\\rho ))^{-1} &= \\max _{\\omega \\in \\Gamma }\\ \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i\\in S_m} \\omega _{i,m}\\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2} \\nonumber \\\\& \\ge \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i\\in S_m} \\omega _{i,m}^{\\rm trivial}\\, \\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2} \\nonumber 
\\\\&= g_{v_{\\dagger }(\\rho )}(\\omega ^{\\rm trivial}).$", "From (REF ) and (REF ), we have $& g_{v_{\\dagger }(\\rho )}(\\omega ^{\\rm trivial}) \\le c^*\\big (v_{\\dagger }(\\rho )\\big )^{-1} \\le g_{v_{\\dagger }(\\rho )}(\\mathbf {1}^{K\\times M}) \\nonumber \\\\& \\stackrel{(a)}{\\Rightarrow } \\min _{(i,j) \\in \\mathcal {C}(v_{\\dagger }(\\rho ))} \\frac{\\big (\\mu _i(v_{\\dagger }(\\rho ))-\\mu _j(v_{\\dagger }(\\rho ))\\big )^2/2}{\\frac{1}{M^2_i}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\frac{1}{\\omega ^{\\rm trivial}_{i,m}} +\\frac{1}{M^2_j}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace j \\in S_m\\rbrace }\\frac{1}{\\omega ^{\\rm trivial}_{j,m}} } \\le c^*\\big (v_{\\dagger }(\\rho )\\big )^{-1} \\le \\min _{(i,j) \\in \\mathcal {C}(v_{\\dagger }(\\rho ))} \\frac{\\big (\\mu _i(v_{\\dagger }(\\rho ))-\\mu _j(v_{\\dagger }(\\rho ))\\big )^2/2}{\\frac{1}{M^2_i}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\frac{1}{1} +\\frac{1}{M^2_j}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace j \\in S_m\\rbrace }\\frac{1}{1} } \\nonumber \\\\& \\stackrel{(b)}{\\Rightarrow } \\frac{\\Delta ^2_{\\rm min}(\\rho )}{4K} \\le c^*\\big (v_{\\dagger }(\\rho )\\big )^{-1} \\le \\frac{M\\Delta ^2_{\\rm min}(\\rho )}{4} \\nonumber \\\\& \\stackrel{(c)}{\\Rightarrow } \\frac{1}{4K\\rho } \\le c^*\\big (v_{\\dagger }(\\rho )\\big )^{-1} \\le \\frac{MK^2}{4\\rho } \\nonumber \\\\& \\Rightarrow \\frac{4\\rho }{MK^2} \\le c^*\\big (v_{\\dagger }(\\rho )\\big ) \\le 4K\\rho ,$ where $(a)$ above follows from (REF ) of Lemma REF ; in writing $(b)$ , we make use of the observation that for all $m\\in [M]$ and $i\\in S_m$ , $\\frac{1}{\\omega ^{\\rm trivial}_{i,m}} = |S_m| \\le K,$ and $(c)$ makes use of the fact that $\\Delta ^2_{\\rm min}(\\rho ) \\in [\\frac{1}{\\rho }, \\frac{K^2}{\\rho }]$ .", "This completes the desired proof.", "[Proof of Lemma REF ] Fix a confidence level $\\delta \\in (0,\\frac{1}{4})$ arbitrarily, and let $\\Pi $ be $\\delta $ -PAC and almost-optimal up to $\\alpha \\ge 1$ .", "Suppose, on the contrary, that $\\limsup _{r\\rightarrow \\infty } \\frac{b_{r+1}}{b_{r}} = +\\infty .$", "Then, there exists an increasing sequence $\\lbrace z_l\\rbrace _{l=1}^\\infty $ such that $\\lim _{l\\rightarrow \\infty } \\frac{b_{z_l}}{b_{z_l+1}} = 0$ and $b_{z_l} <b_{{z_l}+1} $ for all $l\\in \\mathbb {N}$ .", "Let $T^*_{\\delta }(v):= \\log \\left(\\frac{1}{4\\delta }\\right) \\, c^*(v) $ .", "Let $v^{(l)}:= v_{\\dagger }\\left(\\frac{\\sqrt{b_{z_l+1}b_{z_l}}}{4\\log (\\frac{1}{4\\delta })}\\right), \\quad \\mbox{for all} \\; l \\in \\mathbb {N}.$", "By Lemma REF , we then have $\\frac{\\sqrt{b_{z_l+1}b_{z_l}}}{MK^2\\log (\\frac{1}{4\\delta })} \\le c^*(v^{(l)}) \\le \\frac{K \\sqrt{b_{z_l+1}b_{z_l}}}{\\log (\\frac{1}{4\\delta })}.$", "Also, we have $\\frac{\\sqrt{b_{z_l+1}b_{z_l}}}{MK^2} \\le T^*_{\\delta }(v^{(l)}) \\le K \\sqrt{b_{z_l+1}b_{z_l}} .$", "Let $E_l:= \\lbrace \\mbox{empirical best arms } \\hat{a}_\\delta =a^*(v^{(l)}) \\mbox{ and stopping time } \\tau _\\delta (\\Pi ) \\le b_{z_l} \\rbrace , \\quad l \\in \\mathbb {N},$ be the event that (a) $\\hat{a}_\\delta =(\\hat{a}_{\\delta ,m})_{m\\in [M]}$ , the vector of the empirical best arms of the clients at confidence level $\\delta $ , equals the vector $a^*(v^{(l)})$ , and (b) the stopping time $\\tau _\\delta (\\Pi ) \\le b_{z_l}$ .", "From Lemma REF , for any $l\\in \\mathbb {N}$ , we have $\\sum _{t=1}^{b_{z_l}}\\ \\sum _{m=1}^M \\ \\sum _{i \\in S_m} \\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathbf 
{1}_{\\lbrace A_t(m)=i\\rbrace }\\ D_{\\mathrm {KL}}(v_{i,m}^{(l)} \\Vert v^{\\prime }_{i,m})\\right] \\ge d_{\\mathrm {KL}}\\left(\\mathbb {P}_{v^{(l)}}^{\\Pi }(E_l), \\mathbb {P}_{{v^{\\prime }}}^{\\Pi }(E_l)\\right)$ for all $v^{\\prime } \\in {\\rm Alt}(v^{(l)})$ .", "Note that $\\mathbb {P}_{v^{(l)}}^\\Pi (E_l) & = 1 - \\mathbb {P}_{v^{(l)}}^\\Pi (E_l^c)\\nonumber \\\\&\\stackrel{(a)}{\\ge } 1 - \\mathbb {P}_{v^{(l)}}^\\Pi (\\hat{a}_\\delta \\ne a^*(v^{(l)})) - \\mathbb {P}_{v^{(l)}}^\\Pi (\\tau _\\delta (\\Pi ) > {b}_{z_l}) \\nonumber \\\\& \\stackrel{(b)}{=} 1 - \\mathbb {P}_{v^{(l)}}^\\Pi (\\hat{a}_\\delta \\ne a^*(v^{(l)})) - \\mathbb {P}_{v^{(l)}}^\\Pi (\\tau _\\delta (\\Pi ) \\ge b_{z_l+1}) \\nonumber \\\\& \\stackrel{(c)}{\\ge } 1-\\delta - \\frac{\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _\\delta (\\Pi )]}{b_{z_l+1}} \\nonumber \\\\& \\stackrel{(d)}{\\ge } 1-\\delta - \\frac{\\alpha \\ T^*(v^{(l)})}{b_{z_l+1}} \\nonumber \\\\& \\stackrel{(e)}{\\ge } 1-\\delta - \\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}},$ where $(a)$ above follows from the union bound, $(b)$ follows by noting that $\\mathbb {P}_{v^{(l)}}^\\Pi (\\tau _\\delta (\\Pi ) \\ge b_{z_l+1}) = \\mathbb {P}_{v^{(l)}}^\\Pi (\\tau _\\delta (\\Pi ) > b_{z_l})$ as $\\tau _\\delta (\\Pi ) \\in \\lbrace b_r\\rbrace _{r\\in \\mathbb {N}}$ and $b_{z_l} < b_{{z_l}+1}$ , $(c)$ follows from Markov's inequality and the fact that $\\mathbb {P}_{v^{(l)}}^\\Pi (\\hat{a}_\\delta \\ne a^*(v)) \\le \\delta $ as $\\Pi $ is $\\delta $ -PAC, $(d)$ follows from the fact that $\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _\\delta (\\Pi )] \\le \\alpha \\, T^*(v^{(l)})$ as $\\Pi $ is almost-optimal up to the constant $\\alpha $ , and (e) follows from (REF ).", "Because the algorithm $\\Pi $ is $\\delta $ -PAC, it can be shown that $\\mathbb {P}_{v^{\\prime }}^\\Pi (E_l) \\le \\delta $ for all $v^{\\prime } \\in {\\rm Alt}(v^{(l)})$ .", "Continuing with (REF ) and using the fact that $d_{\\mathrm {KL}}(x,y) \\ge \\log \\left(\\frac{1}{4\\,\\delta ^{\\prime }}\\right)$ whenever $x\\ge 1-\\delta ^{\\prime }$ and $y \\le \\delta ^{\\prime }$ (see, for instance, [11]), setting $\\delta ^{\\prime }=\\delta + \\alpha K \\sqrt{b_{z_l}/b_{z_l+1}}$ , we have $&\\inf _{v^{\\prime }\\in {\\rm Alt}(v^{(l)})}\\ \\sum _{t=1}^{b_{z_l}}\\ \\sum _{m=1}^M \\ \\sum _{i \\in S_m} \\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace } \\ D_{\\mathrm {KL}}(v_{i,m}^{(l)} \\Vert v^{\\prime }_{i,m} ) \\right] \\ge \\log \\left(\\frac{1}{4\\delta +4\\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}}}\\right)\\nonumber \\\\&\\Rightarrow b_{z_l} \\inf _{v^{\\prime }\\in {\\rm Alt}(v^{(l)})}\\ \\sum _{t=1}^{b_{z_l}}\\ \\sum _{m=1}^M \\ \\sum _{i \\in S_m} \\frac{\\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace } \\right]}{b_{z_l}} \\cdot \\frac{(\\mu _{i,m}^{(l)}-\\mu ^{\\prime }_{i,m})^2}{2} \\ge \\log \\left(\\frac{1}{4\\delta +4\\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}}}\\right)\\nonumber \\\\& \\stackrel{(a)}{\\Rightarrow }\\frac{b_{z_l}}{c^*(v^{(l)})} \\ge \\log \\left(\\frac{1}{4\\delta +4\\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}}}\\right)\\nonumber \\\\& \\stackrel{(b)}{\\Rightarrow }MK^2\\log \\left(\\frac{1}{4\\delta }\\right)\\ \\sqrt{b_{z_l+1}/b_{z_l}}\\ge \\log \\left(\\frac{1}{4\\delta +4\\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}}}\\right)\\nonumber \\\\& \\Rightarrow 4 \\delta + 4 \\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}} \\ge (4\\delta )^{M^{-1}K^{-2}\\sqrt{b_{z_l}/b_{z_l+1}}}.$ In the above set of inequalities, 
$(a)$ follows from the definition of $c^*(v^{(l)})$, and $(b)$ follows from (REF). Letting $l \rightarrow \infty $ and using (REF), we observe that the left-hand side of (REF) converges to $4\delta $, whereas the right-hand side converges to 1, thereby resulting in $4\delta \ge 1$, a contradiction. This proves that $\limsup _{r\rightarrow \infty } \frac{b_{r+1}}{b_{r}} < +\infty $, which implies $\sup _{r \in \mathbb {N}} \frac{b_{r+1}}{b_{r}} < \eta $ for some $\eta < +\infty $. Proof of Lemma REF Fix $\delta \in (0,\frac{1}{4})$ and $\beta \in (0,1)$ arbitrarily. Let $\Pi $ be almost-optimal up to a constant, say $\alpha \ge 1$. Let $\lbrace v^{(l)}\rbrace _{l=1}^{\infty }$ be any sequence of problem instances such that $\lim _{l\rightarrow \infty } c^*(v^{(l)})=+\infty $, where this sequence must exist because of Lemma REF. Let $T_l \triangleq \big \lceil \left(\mathbb {E}_{v^{(l)}}^{\Pi }[\tau _{\delta }(\Pi )]\right)^\beta \big \rceil $. Because $\lim _{l\rightarrow \infty } c^*(v^{(l)})=+\infty $, we have $\lim _{l \rightarrow \infty }\ \frac{T_l}{\mathbb {E}_{v^{(l)}}^\Pi [\tau _{\delta }(\Pi )]} = 0.$ For $l\in \mathbb {N}$, let $F_l \triangleq \lbrace \mbox{empirical best arms } \hat{a}_\delta =a^*(v^{(l)}) \mbox{ and stopping time } \tau _\delta (\Pi ) \le T_l \rbrace $ be the event that (a) the vector of empirical best arms matches the vector of best arms under $v^{(l)}$, and (b) the stopping time $\tau _\delta (\Pi ) \le T_l$. Also, let $p_l \triangleq \mathbb {P}_{v^{(l)}}^\Pi \left(\tau _\delta > T_l \right)$. From Lemma REF, we know that $\sum _{t=1}^{T_l}\ \sum _{m=1}^M \ \sum _{i \in S_m} \mathbb {E}_{v^{(l)}}^{\Pi } \left[\mathbf {1}_{\lbrace A_t(m)=i\rbrace }\ D_{\mathrm {KL}}(v_{i,m}^{(l)} \Vert v^{\prime }_{i,m}) \right] \ge d_{\mathrm {KL}}\left(\mathbb {P}_{v^{(l)}}^\Pi (F_l), \mathbb {P}_{{v^{\prime }}}^\Pi (F_l)\right)$ for all problem instances $v^{\prime }$. In particular, for $v^{\prime } \in {\rm Alt}(v^{(l)})$, we note that for any $l\in \mathbb {N}$, $\mathbb {P}_{v^{(l)}}^\Pi (F_l) & \ge 1 - \mathbb {P}_{v^{(l)}}(\hat{a}_\delta \ne a^*(v^{(l)})) - \mathbb {P}_{v^{(l)}}(\tau _\delta > T_l) \nonumber \\& \ge 1 - \delta - p_l.$ Along similar lines, it can be shown that $\mathbb {P}_{v^{\prime }}^\Pi (F_l) \le \delta +p_l$ for any $v^{\prime } \in {\rm Alt}(v^{(l)})$. Then, using the fact that $d_{\mathrm {KL}}(x,y)\ge \log \left(\frac{1}{4\delta ^{\prime }}\right)$ whenever $x \ge 1-\delta ^{\prime }$ and $y \le \delta ^{\prime }$, setting $\delta ^{\prime }=\delta + p_l$, we have $& \inf _{v^{\prime }\in {\rm Alt}(v^{(l)})}\ \sum _{t=1}^{T_l}\ \sum _{m=1}^M\ \sum _{i \in S_m} \mathbb {E}_{v^{(l)}}^\Pi \left[\mathbf {1}_{\lbrace A_t(m)=i\rbrace }\ D_{\mathrm {KL}}(v_{i,m}^{(l)} \Vert v^{\prime }_{i,m}) \right] \ge \log \left(\frac{1}{4\delta + 4p_l}\right)\nonumber \\& \Rightarrow T_l\inf _{v^{\prime }\in {\rm Alt}(v^{(l)})}\sum _{t=1}^{T_l}\ \sum _{m=1}^M\ \sum _{i \in S_m}\ \frac{\mathbb {E}_{v^{(l)}}^\Pi \left[\mathbf {1}_{\lbrace A_t(m)=i\rbrace } \right]}{T_l} \cdot \frac{(\mu _{i,m}^{(l)}-\mu ^{\prime }_{i,m})^2}{2} \ge \log \left(\frac{1}{4\delta + 4p_l}\right)\nonumber \\& \Rightarrow \frac{T_l}{c^*(v^{(l)})} \ge \log \left(\frac{1}{4\delta + 4p_l}\right)$ for all $l\in \mathbb {N}$, where the last line above follows 
from the definition of $c^*(v^{(l)})$. Because $\Pi $ is almost-optimal up to constant $\alpha \ge 1$, we have $c^*(v^{(l)}) \log \left(\frac{1}{4\delta }\right) \le \mathbb {E}_{v^{(l)}}^\Pi (\tau _{\delta }(\Pi )) \le \alpha \, c^*(v^{(l)}) \, \log \left(\frac{1}{4\delta }\right) \quad \text{for all }l\in \mathbb {N}.$ Combining (REF) and (REF), we get $\frac{T_l}{\mathbb {E}_{v^{(l)}}^\Pi [\tau _{\delta }(\Pi )]} \ge \frac{\log \left(\frac{1}{4\delta + 4p_l}\right)}{\alpha \,\log \left(\frac{1}{4\delta }\right)} \quad \text{for all }l\in \mathbb {N}.$ Suppose now that there exists $\epsilon \in \left(0, \frac{1}{4}-\delta \right)$ such that $\liminf _{l \rightarrow \infty }\ \mathbb {P}_{v^{(l)}}^\Pi \left( \log \left(\tau _{\delta }(\Pi ) \right) > \beta \log \left(\mathbb {E}_{v^{(l)}}^\Pi \left[\tau _{\delta }(\Pi )\right] \right) \right) \le \frac{1}{4} - \delta - \epsilon .$ This implies from the definitions of $T_l$ and $p_l$ that there exists an increasing sequence $\lbrace l_n:n\ge 1\rbrace $ such that $p_{l_n} \le \frac{1}{4}-\delta -\epsilon $ for all $n\ge 1$. Using this in (REF), we get that $\limsup _{l \rightarrow \infty }\ \frac{T_l}{\mathbb {E}_{v^{(l)}}^\Pi \left[\tau _{\delta }(\Pi )\right]} \ge \limsup _{n \rightarrow \infty }\ \frac{T_{l_n}}{\mathbb {E}_{v^{(l_n)}}^\Pi \left[\tau _{\delta }(\Pi )\right]} \ge \frac{\log \left(\frac{1}{1-4\epsilon }\right)}{\alpha \log \left(\frac{1}{4\delta }\right)} > 0,$ which clearly contradicts (REF). This proves that there is no $\epsilon \in \left(0, \frac{1}{4}-\delta \right)$ such that (REF) holds, thereby establishing the desired result. Proof of Theorem REF Fix a sequence of problem instances $\lbrace v^{(l)}\rbrace _{l=1}^{\infty }$ with $\lim _{l\rightarrow \infty } c^*(v^{(l)})=+\infty $, a confidence level $\delta \in (0,\frac{1}{4})$, and an algorithm $\Pi $ that is almost optimal up to a constant, say $\alpha \ge 1$. From Lemma REF, we know that there exists $\eta >0$ such that $\mathfrak {r}_\delta (\Pi ) \ge \log _\eta (\tau _\delta (\Pi )) \quad \text{almost surely}.$ Also, from Lemma REF, we know that for any $\beta \in (0,1)$ and any sequence of problem instances $\lbrace v^{(l)}\rbrace _{l=1}^\infty $ with $\lim _{l \rightarrow \infty } c^*(v^{(l)})=+\infty $, $\liminf _{l\rightarrow \infty }\ \mathbb {P}_{v^{(l)}}^{\Pi }\left(\log \left(\tau _{\delta }(\Pi ) \right) > \beta \log \left(\mathbb {E}_{v^{(l)}}^\Pi [\tau _{\delta }(\Pi )] \right) \right) \ge \frac{1}{4} -\delta .$ Using (REF) in (REF), we have $&\liminf _{l\rightarrow \infty }\ \mathbb {P}_{v^{(l)}}^\Pi \left(\mathfrak {r}_\delta (\Pi ) >\beta \, \log _\eta \left(\mathbb {E}_{v^{(l)}}^\Pi [\tau _{\delta }(\Pi )] \right) \right) \ge \frac{1}{4} -\delta \quad \forall \ \beta \in (0,1)\nonumber \\& \stackrel{(a)}{\Rightarrow } \liminf _{l\rightarrow \infty }\ \mathbb {P}_{v^{(l)}}^\Pi \left(\mathfrak {r}_\delta (\Pi ) >\beta \, \log _\eta \Big ( \log \Big (\frac{1}{4\delta }\Big ) c^*(v^{(l)}) \Big ) \right) \ge \frac{1}{4} -\delta \quad \forall \ \beta \in (0,1)\nonumber \\& \stackrel{(b)}{\Rightarrow } \liminf _{l\rightarrow \infty }\ \frac{\mathbb {E}_{v^{(l)}}^\Pi \left[\mathfrak {r}_\delta (\Pi ) \right]}{\log _\eta \left( \,\log 
\left(\frac{1}{4\delta }\right) c^*(v^{(l)}) \right)} \ge \beta \,\left( \frac{1}{4} -\delta \right) \quad \forall \ \beta \in (0,1)\nonumber \\& \stackrel{(c)}{\Rightarrow } \liminf _{l\rightarrow \infty }\ \frac{\mathbb {E}_{v^{(l)}}^\Pi \left[\mathfrak {r}_\delta (\Pi ) \right]}{\log _\eta \left( \,\log \left(\frac{1}{4\delta }\right) c^*(v^{(l)}) \right)} \ge \frac{1}{4} -\delta .$ In the above set of inequalities, $(a)$ follows from Proposition REF and the hypothesis that $\Pi $ is $\delta $-PAC, $(b)$ follows from Markov's inequality, and $(c)$ follows from $(b)$ by letting $\beta \rightarrow 1$. The desired result is thus established. Proof of Theorem REF Below, we record some important results that will be useful for proving Theorem REF. Lemma 8 ([12]) Let $Y_1,Y_2,\ldots $ be independent Gaussian random variables with mean $\mu $ and unit variance. Let $\hat{\mu }_n \triangleq \frac{1}{n} \sum _{i=1}^n Y_i$. Then, $\mathbb {P} \left( \exists \, n \in \mathbb {N}: \frac{n}{2}(\hat{\mu }_n-\mu )^2 \ge \log (1/\delta ) + \log (n(n+1)) \right) \le \delta .$ Lemma 9 Fix $n \in \mathbb {N}$. Let $Y_1,Y_2,\ldots ,Y_n$ be independent random variables with $\mathbb {P}(Y_i \le y) \le y$ for all $y\in [0,1]$ and $i\in [n]$. Then, for any $\epsilon > 0$, $\mathbb {P} \bigg ( \sum _{i=1}^n \log (1/Y_i) \ge \epsilon \bigg ) \le f_n(\epsilon )$ where $f_n:(0,+\infty ) \rightarrow (0,1)$ is defined by $f_n(x) = \sum _{i=1}^{n} \frac{x^{i-1}e^{-x}}{(i-1)!}, \quad x \in (0, +\infty ).$ [Proof of Lemma REF] First, for $i\in [n]$ we define the random variable $Z_i \triangleq F_{i}(Y_i)$, where $F_{i}$ is the cumulative distribution function (CDF) of $Y_i$. Clearly, $Z_i$ is a uniform random variable. Notice that $\mathbb {P}(Y_i \le y) \le y = \mathbb {P}(Z_i \le y)$ for all $y\in (0,1)$, from which it follows that $\mathbb {P} \bigg ( \sum _{i=1}^n \log (1/Y_i) \ge \epsilon \bigg ) \le \mathbb {P} \bigg ( \sum _{i=1}^n \log (1/Z_i) \ge \epsilon \bigg ).$ Therefore, it suffices to prove Lemma REF for the case when $Y_1, \ldots , Y_n$ are independent and uniformly distributed on $[0,1]$. Suppose that this is indeed the case. Then, we note that $\mathbb {P} \big (\sum _{i=1}^n \log (1/Y_i) \ge \epsilon \big )=\mathbb {P} \big ( \prod _{i=1}^n Y_i \le \exp (-\epsilon ) \big )$. Let $h_s(x) \triangleq \mathbb {P} \big ( \prod _{i=1}^s Y_i \le x \big )$ for $s\in [n]$ and $x\in (0, 1)$. We then have $h_1(x) &= x, \nonumber \\ \forall \ s>1, \quad h_s(x) &= \int _{0}^1 h_{s-1}\big (\min \lbrace x/y, 1\rbrace \big )\ \mathrm {d}y = x+ \int _{x}^1 h_{s-1}(x/y) \ \mathrm {d}y.\nonumber $ Using mathematical induction, we demonstrate below that $h_s(x) = \sum _{i=1}^{s} \frac{(\log \frac{1}{x})^{i-1}x}{(i-1)!}$ for all $s \in [n]$ and $x\in (0,1)$. Base case: It is easy to verify that (REF) holds for $s=1$. For $s=2$, we have $h_2(x) =x+ \int _{x}^1 h_1\left(\frac{x}{y}\right)\ \mathrm {d}y = x+ \int _{x}^1 \frac{x}{y}\ \mathrm {d}y = x + \log (1/x)\,x,$ thus verifying that (REF) holds for $s=2$. Induction step: Suppose now that (REF) holds for $s=k$ for some $k \ge 2$. Then, $h_{k+1}(x) & = x+ \int _{x}^1 h_{k}(x/y)\ \mathrm {d}y \nonumber \\& \stackrel{(a)}{=} x+ \int _{x}^1 \sum _{i=1}^{k} \frac{(\log \frac{y}{x})^{i-1} (\frac{x}{y}) }{(i-1)!}\ \mathrm {d}y 
\nonumber \\& = x+ \sum _{i=1}^{k} \int _{x}^1 \frac{(\log \frac{y}{x})^{i-1} (\frac{x}{y}) }{(i-1)!}\ \mathrm {d}y \nonumber \\& =x+ \sum _{i=1}^{k} \frac{x}{(i-1)!}\ \int _{x}^1 \frac{(\log \frac{y}{x})^{i-1}}{y}\ \mathrm {d}y \nonumber \\& \stackrel{(b)}{=} x+ \sum _{i=1}^{k} \frac{x}{(i-1)!}\int _{1}^{1/x} \frac{(\log y^{\prime })^{i-1}}{y^{\prime }} \ \mathrm {d}y^{\prime } \nonumber \\& \stackrel{(c)}{=}x+ \sum _{i=1}^{k} \frac{x}{(i-1)!}\frac{(\log \frac{1}{x})^i}{i} \nonumber \\& = \sum _{i=1}^{k+1} \frac{(\log \frac{1}{x})^{i-1}x}{(i-1)!},$ where $(a)$ follows from the induction hypothesis, in writing $(b)$ above, we set $y^{\prime } = y/x$, and $(c)$ follows by noting that $\int \frac{(\log y)^j}{y} \, \mathrm {d}y = \frac{1}{j+1} (\log y)^{j+1}.$ This demonstrates that (REF) holds for $s=k+1$. Finally, we note that $\mathbb {P} \bigg ( \sum _{i=1}^n \log (1/Y_i) \ge \epsilon \bigg ) & = h_n\big (\exp (-\epsilon )\big ) \nonumber \\& = \sum _{i=1}^{n} \frac{\epsilon ^{i-1}e^{-\epsilon }}{(i-1)!} \nonumber \\& = f_n(\epsilon ),$ thus establishing the desired result.
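As a quick sanity check on Lemma REF: for uniform $Y_i$, the quantity $\sum _{i=1}^n \log (1/Y_i)$ is a sum of $n$ i.i.d. unit-rate exponentials, so the bound holds with equality. The following minimal Python sketch (the helper names are ours, not part of the paper) compares $f_n(\epsilon )$ against a Monte Carlo estimate of the tail probability:

```python
import math
import random

def f(x, n):
    # f_n(x) = sum_{i=1}^{n} x^(i-1) e^(-x) / (i-1)!   (Lemma 9)
    return sum(x ** (i - 1) * math.exp(-x) / math.factorial(i - 1)
               for i in range(1, n + 1))

def mc_tail(n, eps, trials=200_000, seed=0):
    # Monte Carlo estimate of P( sum_i log(1/Y_i) >= eps ) for uniform Y_i.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(-math.log(1.0 - rng.random()) for _ in range(n))
        hits += s >= eps
    return hits / trials

print(f(6.0, 4), mc_tail(4, 6.0))  # the two numbers agree up to Monte Carlo error
```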
With the above ingredients in place, we are now ready to prove Theorem REF. [Proof of Theorem REF] Fix a confidence level $\delta \in (0,1)$ and a problem instance $v \in \mathcal {P}$ arbitrarily. We claim that $\tau _\delta (\Pi _{\mathrm {Het-TS}}) < + \infty $ almost surely; a proof of this is deferred to the proof of Lemma REF. Assuming that the preceding fact is true, for $m\in [M]$ and $i\in S_m$, let $\xi _{i,m} \triangleq \sup _{t \ge K} \frac{N_{i,m}(t)}{2}\big (\hat{\mu }_{i,m}(t)-\mu _{i,m}(v)\big )^2 - \log \big (N_{i,m}(t)(N_{i,m}(t)+1)\big ).$ From Lemma REF, we know that for any confidence level $\delta ^{\prime } \in (0,1)$, $\mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}} (\xi _{i,m} \ge \log (1/\delta ^{\prime })) \le \delta ^{\prime }.$ Let $\xi ^{\prime }_{i,m} \triangleq \exp (-\xi _{i,m})$. Recall that $K^{\prime } = \sum _{m=1}^M \vert S_m \vert $. From Lemma REF, we know that for any $\epsilon >0$, $& \mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}} \left(\sum _{m\in [M]}\ \sum _{i\in S_m}\ \log (1/\xi ^{\prime }_{i,m}) \ge \epsilon \right) \le f_{K^{\prime }}(\epsilon ) \nonumber \\& \stackrel{(a)}{\Rightarrow } \mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}} \left(\sum _{m\in [M]}\ \sum _{i\in S_m}\ \xi _{i,m} \ge \epsilon \right) \le f_{K^{\prime }}(\epsilon ) \nonumber \\& \stackrel{(b)}{\Rightarrow } \mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}} \left(\sum _{m\in [M]}\ \sum _{i\in S_m}\ \xi _{i,m} \ge \epsilon \right) \le f(\epsilon ) \nonumber \\& \stackrel{(c)}{\Rightarrow } \mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}} \left(\sum _{m\in [M]}\ \sum _{i\in S_m}\ \xi _{i,m} \ge f^{-1}(\delta ) \right) \le \delta ,$ where $(a)$ above follows from the definition of $\xi ^{\prime }_{i,m}$, $(b)$ follows from the definition of $f$ in (REF), and in writing $(c)$, we (i) make use of the fact that $f$ is continuous and strictly decreasing and therefore admits an inverse, and (ii) set $\epsilon = f^{-1}(\delta )$. Eq. (REF) then implies $& \mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}} \bigg (\forall \, t\ge K \ \sum _{m\in [M]} \sum _{i\in S_m}\frac{N_{i,m}(t)}{2}\big (\hat{\mu }_{i,m}(t)-\mu _{i,m}(v)\big )^2 \le K^{\prime }\log \big (t(t+1)\big ) + f^{-1}(\delta ) \bigg ) \ge 1-\delta \nonumber \\& \Rightarrow \mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}} \bigg (\forall \, t\ge K \ \sum _{m\in [M]} \sum _{i\in S_m}\frac{N_{i,m}(t)}{2}\big (\hat{\mu }_{i,m}(t)-\mu _{i,m}(v)\big )^2 \le \beta (t, \delta ) \bigg ) \ge 1-\delta .$ Note that at the stopping time $\tau _\delta (\Pi _{\mathrm {Het-TS}})$, we must have $\inf _{v^{\prime } \in {\rm Alt}(\hat{v}(\tau _\delta ))} \sum _{m \in [M]} \sum _{i\in S_m} N_{i,m}(\tau _\delta ) \frac{(\mu _{i,m}(v^{\prime }) - \hat{\mu }_{i,m}(\tau _\delta ))^2}{2} > \beta (\tau _\delta , \delta ).$ Thus, we may write (REF) equivalently as $\mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}} \left( v \notin {\rm Alt} \left(\hat{v}(\tau _\delta (\Pi _{\mathrm {Het-TS}}))\right) \right) \ge 1 - \delta $, which is identical to $\mathbb {P}_v^{\Pi _{\mathrm {Het-TS}}}\left( a^*(v) = a^*(\hat{v}(\tau _\delta (\Pi _{\mathrm {Het-TS}}))) \right) \ge 1 - \delta $. This completes the proof. Proof of Theorem REF Lemma 10 ([19] as cited in [20]) Let $f: S \times \Theta \rightarrow \mathbb {R}$ be a continuous function, and $\mathcal {D}:\Theta \rightrightarrows S$ be a compact-valued continuous correspondence. Let $f^*: \Theta \rightarrow \mathbb {R}$ and $\mathcal {D}^*: \Theta \rightrightarrows S$ be defined by $f^{*}(\theta )=\max \lbrace f(x, \theta ) : x \in \mathcal {D}(\theta )\rbrace $ and $\mathcal {D}^{*}(\theta )=\operatornamewithlimits{argmax}\lbrace f(x, \theta ) : x \in \mathcal {D}(\theta )\rbrace =\lbrace x \in \mathcal {D}(\theta ):f(x, \theta )=f^{*}(\theta )\rbrace .$ Then $f^{*}$ is a continuous function on $\Theta $, and $\mathcal {D}^{*}$ is a compact-valued, upper hemicontinuous correspondence on $\Theta $. Lemma 11 ([6]) A singleton-valued correspondence is upper hemicontinuous if and only if it is lower hemicontinuous, in which case it is continuous as a function. Lemma 12 Let $f$ be as defined in (REF). Then, $f^{-1}(\delta )=(1+o(1)) \log (1/\delta )$ as $\delta \rightarrow 0$, i.e., $\lim _{\delta \rightarrow 0} \frac{\log (1/\delta )}{f^{-1}(\delta )} =1.$ Let $x = f^{-1}(\delta )$. Then, $\lim _{\delta \rightarrow 0} \frac{\log (1/\delta )}{f^{-1}(\delta )}& \stackrel{(a)}{=} \lim _{x \rightarrow +\infty } \frac{\log (\frac{1}{f(x)})}{x} \nonumber \\& \stackrel{(b)}{=} \lim _{x \rightarrow +\infty } \frac{-f^{\prime }(x)/f(x)}{1} \nonumber \\& \stackrel{(c)}{=} \lim _{x \rightarrow +\infty } \frac{ \frac{x^{K^{\prime }-1}e^{-x}}{(K^{\prime }-1)!}}{\sum _{i=1}^{K^{\prime }} \frac{x^{i-1}e^{-x}}{(i-1)!}} \nonumber \\& = \lim _{x \rightarrow +\infty } \frac{ \frac{x^{K^{\prime }-1}}{(K^{\prime }-1)!}}{\sum _{i=1}^{K^{\prime }} \frac{x^{i-1}}{(i-1)!}} \nonumber \\& = 1,$ where $(a)$ above follows from the fact that $x \rightarrow \infty $ as $\delta \rightarrow 0$, $(b)$ follows from L'Hospital's rule, and $(c)$ makes use of the fact that $f^{\prime }(x)=\frac{-x^{K^{\prime }-1}e^{-x}}{(K^{\prime }-1)!}$. This completes the proof.
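Lemma REF can also be checked numerically. Since $f$ decreases continuously from $f(0)=1$ to 0, its inverse can be computed by bisection; the sketch below (again with our own helper names, reusing the function `f` from the previous sketch) shows the ratio $\log (1/\delta )/f^{-1}(\delta )$ creeping toward 1 as $\delta \rightarrow 0$ — slowly, because the correction terms are only logarithmic in $1/\delta $:

```python
import math

def f_inverse(delta, n, iters=200):
    # Bisection for the unique x > 0 with f(x, n) = delta; f is from the sketch above.
    lo, hi = 0.0, 1.0
    while f(hi, n) > delta:  # grow the bracket until f(hi) <= delta
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid, n) > delta else (lo, mid)
    return 0.5 * (lo + hi)

for delta in (1e-2, 1e-4, 1e-8, 1e-16):
    print(delta, math.log(1.0 / delta) / f_inverse(delta, 10))  # ratio -> 1 (Lemma 12)
```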
Before proceeding further, we introduce some additional notation. For any $j \in [L]$ and $m \in [M]$, let $\Lambda _m^{(j)} \triangleq {\left\lbrace \begin{array}{ll}{\Lambda }_m, & \text{if }S_m \subseteq Q_j, \\\lbrace \mathbf {0}^K\rbrace , & \text{otherwise},\end{array}\right.}$ where $\mathbf {0}^K$ denotes the all-zeros vector of dimension $K$. For each $j\in [L]$, noting that $\prod _{i=1}^M \Lambda _i^{(j)}:=\Lambda _1^{(j)}\times \ldots \times \Lambda _M^{(j)}$ is compact and that the mapping $\omega \mapsto \widetilde{g}^{(j)}_v(\omega )$ is continuous, there exists a solution to $\max _{\omega \in \prod _{i=1}^M \Lambda _i^{(j)}} \widetilde{g}^{(j)}_v(\omega )$. Let $\widetilde{\omega }^{(j)}(v) \in \operatornamewithlimits{argmax}_{\omega \in \prod _{i=1}^M \Lambda _i^{(j)} } \widetilde{g}^{(j)}_v(\omega ).$ Further, let $\widetilde{\omega }(v) \triangleq \sum _{j=1}^L \widetilde{\omega }^{(j)}(v)$. Then, it is easy to verify that $\widetilde{\omega }(v) \in \Gamma $ is a common solution to $\max _{\omega \in \Gamma } \widetilde{g}_v(\omega ), \max _{\omega \in \Gamma } \widetilde{g}^{(1)}_v(\omega ), \ldots , \max _{\omega \in \Gamma } \widetilde{g}^{(L)}_v(\omega ).$ Note that such a common solution is unique (we defer the proof of this fact to Theorem REF), which then implies that the solution to $\operatornamewithlimits{argmax}_{\omega \in \prod _{i=1}^M \Lambda _i^{(j)} } \widetilde{g}^{(j)}_v(\omega )$ is unique. Hence, $\widetilde{\omega }^{(j)}(v)$ and $\widetilde{\omega }(v)$ are well-defined. Lemma 13 Given any problem instance $v\in \mathcal {P}$, under $\Pi _{\mathrm {Het-TS}}$, $\lim _{t \rightarrow \infty } \Vert \widetilde{\omega }(\hat{v}(t)) - \widetilde{\omega }(v) \Vert _\infty =0 \quad \text{almost surely}.$ Consequently, for any $m\in [M]$ and $i\in S_m$, $\lim _{t \rightarrow \infty } \left|\frac{N_{i,m}(t)}{t} - \widetilde{\omega }(v)_{i,m} \right|=0 \quad \text{almost surely}.$ Fix $j\in [L]$ and $v\in \mathcal {P}$ arbitrarily. By the strong law of large numbers, it follows that for any $m\in [M]$ and $i\in S_m$, $& \lim _{t \rightarrow \infty } \hat{\mu }_{i,m}(t) = \mu _{i,m}(v) \quad \text{almost surely} \nonumber \\& \Rightarrow \lim _{t \rightarrow \infty } \hat{\mu }_{i}(t) = \mu _{i}(v) \quad \text{almost surely} \\& \Rightarrow \lim _{t \rightarrow \infty } \Delta _i(\hat{v}(t)) = \Delta _i(v) \quad \text{almost surely}. $ For any $v^{\prime }\in \mathcal {P}$, note that $\widetilde{g}^{(j)}_{v^{\prime }}(\omega )$ is a function of $(\Delta (v^{\prime }),\omega )$ for $\Delta (v^{\prime }) \in (\mathbb {R^+})^K$ and $\omega \in \prod _{i=1}^M \Lambda _i^{(j)}$. It follows from Lemma REF and Lemma REF that for any $\epsilon _1 >0$, there exists $\epsilon _2 >0$ such that for all $v^{\prime } \in \mathcal {P}$ with $\Vert \Delta (v)-\Delta (v^{\prime }) \Vert _\infty \le \epsilon _2$, $\Vert \widetilde{\omega }^{(j)}(v) - \widetilde{\omega }^{(j)}(v^{\prime }) \Vert _\infty \le \epsilon _1.$ Combining () and (REF), it follows that $\lim _{t \rightarrow \infty } \Vert \widetilde{\omega }^{(j)}(v) - \widetilde{\omega }^{(j)}(\hat{v}(t)) \Vert _\infty =0 \quad \text{almost surely},$ which in turn implies that $\lim _{t \rightarrow \infty } \Vert \widetilde{\omega }(v) - \widetilde{\omega }(\hat{v}(t)) \Vert _\infty =0$ almost surely. Recall the definition of $\hat{\omega }_{i,m}(t)$ in (REF), which means $\hat{\omega }_{i,m}(t) =\widetilde{\omega }_{i,m}\big (\hat{v}(b_{r(t)})\big )$. Then, by (REF) for any $m\in [M]$ and $i\in S_m$ we have $\lim _{t \rightarrow \infty } \Vert \widetilde{\omega }(v) - 
\hat{\omega }(t) \Vert _\infty =0$ almost surely. Consequently, by [5], for any $m\in [M]$ and $i\in S_m$, $\lim _{t \rightarrow \infty } \left|\frac{N_{i,m}(t)}{t} - \widetilde{\omega }(v)_{i,m} \right|=0 \quad \text{almost surely}.$ Lemma 14 Given any problem instance $v\in \mathcal {P}$, under $\Pi _{\mathrm {Het-TS}}$, $\lim _{t \rightarrow \infty } \frac{Z(t)}{t} = g_v(\widetilde{\omega }(v))\quad \text{almost surely}.$ Fix a problem instance $v\in \mathcal {P}$ arbitrarily. Define $\hat{N}(t) \in \Gamma $ as $\hat{N}_{i,m}(t) \triangleq \frac{N_{i,m}(t)}{t}, \quad i \in S_m, \ m \in [M].$ Then, $\frac{Z(t)}{t} & = \inf _{v^{\prime } \in {\rm Alt}(\hat{v}(t))}\ \sum _{m=1}^M \ \sum _{i\in S_m} \frac{N_{i,m}(t)}{t}\ \frac{(\mu _{i,m}(v^{\prime }) - \hat{\mu }_{i,m}(t))^2}{2} \nonumber \\& = \inf _{v^{\prime } \in {\rm Alt}(\hat{v}(t))}\ \sum _{m=1}^M \ \sum _{i\in S_m} {\hat{N}_{i,m}(t)}\ \frac{(\mu _{i,m}(v^{\prime }) - \hat{\mu }_{i,m}(t))^2}{2} \nonumber \\& = g_{\hat{v}(t)}(\hat{N}(t)) \nonumber \\& \stackrel{(a)}{=} \min _{(i,j) \in \mathcal {C}(\hat{v}(t))} \frac{\big (\hat{\mu }_i(t)-\hat{\mu }_j(t)\big )^2/2}{\frac{1}{M^2_i}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace }\frac{1}{\hat{N}_{i,m}(t)} +\frac{1}{M^2_j}\sum _{m=1}^M \mathbf {1}_{\lbrace j \in S_m\rbrace }\frac{1}{\hat{N}_{j,m}(t)} },$ where $(a)$ follows from (REF) of Lemma REF. Because $v \in \mathcal {P}$ and $\lim _{t \rightarrow \infty } \hat{\mu }_{i}(t) = \mu _{i}\ \text{almost surely}$ for all $i\in [K]$ from (REF), we get that $\lim _{t \rightarrow \infty } \mathcal {C}(\hat{v}(t)) = \mathcal {C}(v) \quad \text{almost surely},$ where $\mathcal {C}(\cdot )$ is as defined in (REF). Combining (REF), (REF), (REF), (REF), and Lemma REF, we get that almost surely, $\lim _{t \rightarrow \infty }\ \frac{Z(t)}{t} & = \lim _{t\rightarrow \infty }\min _{(i,j) \in \mathcal {C}(\hat{v}(t))} \frac{\big (\hat{\mu }_i(t)-\hat{\mu }_j(t)\big )^2/2}{\frac{1}{M^2_i}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace }\frac{1}{\hat{N}_{i,m}(t)} +\frac{1}{M^2_j}\sum _{m=1}^M \mathbf {1}_{\lbrace j \in S_m\rbrace }\frac{1}{\hat{N}_{j,m}(t)} } \nonumber \\& =\min _{(i,j) \in \mathcal {C}(v)} \frac{\big ({\mu }_i(v)-{\mu }_j(v)\big )^2/2}{\frac{1}{M^2_i}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace }\frac{1}{\widetilde{\omega }_{i,m}(v)} +\frac{1}{M^2_j}\sum _{m=1}^M \mathbf {1}_{\lbrace j \in S_m\rbrace }\frac{1}{\widetilde{\omega }_{j,m}(v)} } \nonumber \\& = g_{v}(\widetilde{\omega }(v)).$ This completes the desired proof. Lemma 15 Given any confidence level $\delta \in (0,1)$, $\tau _\delta (\Pi _{\mathrm {Het-TS}}) < +\infty \quad \text{almost surely}.$ As a consequence of Lemma REF, we have $\lim _{t \rightarrow \infty } \frac{\beta (t, \delta )}{Z(t)}& = \lim _{t \rightarrow \infty } \frac{K^{\prime }\log (t^2+t)+ f^{-1}(\delta )}{t \frac{Z(t)}{t}} \nonumber \\& = \lim _{t \rightarrow \infty } \frac{K^{\prime }\log (t^2+t)+ f^{-1}(\delta )}{t g_v(\widetilde{\omega }(v))} \nonumber \\& = 0 \quad \text{almost surely}.$ Therefore, there almost surely exists $0<T<+\infty $ such that $Z(t) > \beta (t, \delta )$ for all $t \ge T$, thus proving that $\tau _\delta (\Pi _{\textsc {Het}-\textsc {TS}})$ is finite almost surely.
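The proof of Lemma REF is effectively the stopping rule in code form: the threshold $\beta (t,\delta ) = K^{\prime }\log (t^2+t) + f^{-1}(\delta )$ grows only logarithmically in $t$, while $Z(t)$ grows linearly, so the first crossing happens at a finite time. A minimal sketch of the check performed at each communication round (helper names are ours; `f_inverse` is from the earlier sketch):

```python
import math

def beta_threshold(t, delta, K_prime):
    # beta(t, delta) = K' * log(t^2 + t) + f^{-1}(delta)
    return K_prime * math.log(t * t + t) + f_inverse(delta, K_prime)

def should_stop(Z_t, t, delta, K_prime):
    # Stop once the GLR statistic Z(t) clears the threshold beta(t, delta).
    return Z_t > beta_threshold(t, delta, K_prime)
```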
Lemma 16 Given any problem instance $v \in \mathcal {P}$ and $\epsilon \in \left(0, g_v(\widetilde{\omega }(v)) \right)$, there exists $\delta _{\rm upper}(v,\epsilon ) >0$ such that for any $\delta \in (0, \delta _{\rm upper}(v,\epsilon ))$, $t\,g_v(\widetilde{\omega }(v)) > \beta (t, \delta ) + t\,\epsilon $ for all $t \ge T_{\rm last}(v,\delta , \epsilon )$, where $T_{\rm last}(v,\delta , \epsilon ) \triangleq \frac{f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } + K^{\prime }\log \left(\left(\frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon }\right)^2 + \frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } \right) \frac{1}{g_v(\widetilde{\omega }(v)) - \epsilon } +1.$ Fix $v \in \mathcal {P}$ and $\epsilon \in \left(0, g_v(\widetilde{\omega }(v)) \right)$ arbitrarily. Recall that $\beta (t, \delta ) = K^{\prime }\log (t^2+t) + f^{-1}(\delta )$. To prove Lemma REF, it suffices to verify the following two items: 1. the derivative of the left-hand side of (REF) with respect to $t$ is greater than that of the right-hand side of (REF) for all $t \ge T_{\rm last}(v, \delta , \epsilon )$; and 2. Eq. (REF) holds for $t=T_{\rm last}(v, \delta , \epsilon )$. In order to verify that the condition in item 1 above holds, we note from Lemma REF that $\lim _{\delta \rightarrow 0} f^{-1}(\delta )=+\infty $, as a consequence of which we get that there exists $\delta _{\rm upper}(v, \epsilon )>0$ such that for all $\delta \in (0, \delta _{\rm upper}(v, \epsilon ))$, $T_{\rm last}(v, \delta , \epsilon ) < \frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon }$ and $\frac{f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } \ge \frac{3K^{\prime }}{g_v(\widetilde{\omega }(v)) - \epsilon }.$ Notice that the derivative of the left-hand side of (REF) with respect to $t$ is equal to $g_v(\widetilde{\omega }(v))$, whereas that of the right-hand side of (REF) is equal to $\frac{K^{\prime }(2+\frac{1}{t})}{t + 1}+\epsilon $. Hence, to verify the condition in item 1, we need to demonstrate that $g_v(\widetilde{\omega }(v))-\epsilon > \frac{K^{\prime }(2+\frac{1}{t})}{t + 1} \quad \text{for all } t \ge T_{\rm last}(v,\delta , \epsilon ).$ We note that for all $t \ge T_{\rm last}(v,\delta , \epsilon )$, $t+1 & > t\nonumber \\& \ge T_{\rm last}(v,\delta , \epsilon )\nonumber \\& \ge \frac{f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon }\nonumber \\& \ge \frac{3K^{\prime }}{g_v(\widetilde{\omega }(v)) - \epsilon } \nonumber \\& \ge \frac{(2+\frac{1}{t})K^{\prime }}{g_v(\widetilde{\omega }(v)) - \epsilon },$ where in writing the last line above, we use the fact that $3 \ge 2+\frac{1}{t}$ whenever $t \ge 1$. We then obtain (REF) upon rearranging (REF) and using the fact that $\epsilon >0$. This verifies the condition in item 1. To verify the condition in item 2 above, we note that for all $T_{\rm last}(v, \delta , \epsilon ) \le t \le \frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon }$, we have $t & \ge T_{\rm last}(v, \delta , \epsilon )\nonumber \\& > \frac{f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } + K^{\prime }\log \left(\left(\frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon }\right)^2 + \frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } \right) \frac{1}{g_v(\widetilde{\omega }(v)) - \epsilon } \nonumber \\& \ge 
\frac{f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } + K^{\prime }\log \left(t^2 +t \right) \frac{1}{g_v(\widetilde{\omega }(v)) - \epsilon }.$ Equivalently, upon rearranging the terms in (REF), we get $t\,g_v(\widetilde{\omega }(v)) > \beta (t, \delta ) + t\,\epsilon $ for all $T_{\rm last}(v, \delta , \epsilon ) \le t \le \frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon }$. In particular, noting that (REF) holds for $t=T_{\rm last}(v, \delta , \epsilon )$ verifies the condition in item 2, and thereby completes the proof. With the above ingredients in place, we are now ready to prove Theorem REF. [Proof of Theorem REF] Fix a problem instance $v\in \mathcal {P}$ arbitrarily. Given any $\epsilon >0$, let $T_{\rm cvg}(v,\epsilon )$ denote the smallest positive integer such that $\left|\frac{Z(t)}{t} -g_v(\widetilde{\omega }(v)) \right|\le \epsilon \quad \forall \ t \ge T_{\rm cvg}(v,\epsilon ).$ From Lemma REF, we know that $T_{\rm cvg}(v,\epsilon ) < +\infty $ almost surely. Therefore, for any $\epsilon \in \left(0, g_v(\widetilde{\omega }(v)) \right)$ and $\delta \in (0, \delta _{\rm upper}(v,\epsilon ))$, it follows from Lemma REF that $Z(t) > \beta (t, \delta ) \quad \forall \, t \ge \max \big \lbrace T_{\rm cvg}(v,\epsilon ),T_{\rm last}(v,\delta , \epsilon ), K\big \rbrace \quad \text{almost surely},$ where $\delta _{\rm upper}(v,\epsilon )$ and $T_{\rm last}(v,\delta , \epsilon )$ are as defined in Lemma REF. Recall that $b_r = \lceil (1+\lambda )^r \rceil $ in the Het-TS algorithm. From (REF), it follows that $\tau _{\delta }(\Pi _{\mathrm {Het-TS}}) \le (1+\lambda ) \max \big \lbrace T_{\rm cvg}(v,\epsilon ),T_{\rm last}(v,\delta , \epsilon ), K \big \rbrace +1 \quad \text{almost surely}$ for any $\epsilon \in \left(0, g_v(\widetilde{\omega }(v)) \right)$ and $\delta \in (0, \delta _{\rm upper}(v,\epsilon ))$, which implies that $\tau _{\delta }(\Pi _{\mathrm {Het-TS}}) \le (1+\lambda ) \,T_{\rm cvg}(v,\epsilon ) + (1+\lambda )\, T_{\rm last}(v,\delta , \epsilon ) + (1+\lambda )K +1 \quad \text{almost surely}.$ Then, for any $\epsilon \in \left(0, g_v(\widetilde{\omega }(v)) \right)$, the following set of relations hold almost surely: $& \limsup _{\delta \rightarrow 0} \frac{\tau _{\delta }(\Pi _{\mathrm {Het-TS}})}{\log \left(\frac{1}{\delta }\right)} \nonumber \\&\le \limsup _{\delta \rightarrow 0} \frac{(1+\lambda )\, T_{\rm cvg}(v,\epsilon ) + (1+\lambda ) \, T_{\rm last}(v,\delta , \epsilon ) + (1+\lambda )K +1 }{\log \left(\frac{1}{\delta }\right)} \nonumber \\&\stackrel{(a)}{=} \limsup _{\delta \rightarrow 0} \frac{ (1+\lambda )\, T_{\rm last}(v,\delta , \epsilon ) }{\log \left(\frac{1}{\delta }\right)} \nonumber \\&\stackrel{(b)}{=} \limsup _{\delta \rightarrow 0} \frac{ (1+\lambda )\, \left(\frac{f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } + K^{\prime }\log \left(\left(\frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon }\right)^2 + \frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } \right) \frac{1}{g_v(\widetilde{\omega }(v)) - \epsilon } +1 \right) }{\log \left(\frac{1}{\delta }\right)} \nonumber \\& \stackrel{(c)}{=} \frac{1+\lambda }{g_v(\widetilde{\omega }(v)) - \epsilon } \nonumber \\& \stackrel{(d)}{\le } 
\frac{1+\lambda }{\frac{1}{2}c^*(v)^{-1} - \epsilon },$ where $(a)$ follows from the fact that $T_{\rm cvg}(v,\epsilon )$ is not a function of $\delta $ and that $T_{\rm cvg}(v,\epsilon ) < +\infty $ almost surely, $(b)$ follows from the definition of $T_{\rm last}(v,\delta , \epsilon )$, $(c)$ follows from Lemma REF, and $(d)$ makes use of Lemma REF. Letting $\epsilon \rightarrow 0$ in (REF), we get $\limsup _{\delta \rightarrow 0} \frac{\tau _{\delta }(\Pi _{\mathrm {Het-TS}})}{\log \left(\frac{1}{\delta }\right)} \le 2\, (1+\lambda )\, c^*(v) \quad \text{almost surely}.$ Taking expectations on both sides of (REF), we get $\mathbb {E}_v^{\Pi _{\mathrm {Het-TS}}} [\tau _{\delta }(\Pi _{\mathrm {Het-TS}})] \le (1+\lambda ) \, \mathbb {E}_v^{\Pi _{\mathrm {Het-TS}}} [T_{\rm cvg}(v,\epsilon )] + (1+\lambda )\, T_{\rm last}(v,\delta , \epsilon ) + (1+\lambda )K +1$ for all $\epsilon \in \left(0, g_v(\widetilde{\omega }(v)) \right)$ and $\delta \in (0, \delta _{\rm upper}(v,\epsilon ))$, from which it follows that $& \limsup _{\delta \rightarrow 0} \frac{\mathbb {E}_v^{\Pi _{\mathrm {Het-TS}}} [\tau _{\delta }(\Pi _{\mathrm {Het-TS}})]}{\log \left(\frac{1}{\delta }\right)} \nonumber \\&\le \limsup _{\delta \rightarrow 0} \frac{(1+\lambda )\, \mathbb {E}_v^{\Pi _{\mathrm {Het-TS}}} [T_{\rm cvg}(v,\epsilon )] + (1+\lambda ) \, T_{\rm last}(v,\delta , \epsilon ) + (1+\lambda )K +1 }{\log \left(\frac{1}{\delta }\right)} \nonumber \\&\stackrel{(a)}{=} \limsup _{\delta \rightarrow 0} \frac{ (1+\lambda ) \, T_{\rm last}(v,\delta , \epsilon ) }{\log \left(\frac{1}{\delta }\right)} \nonumber \\&\stackrel{(b)}{=} \limsup _{\delta \rightarrow 0} \frac{ (1+\lambda )\left(\frac{f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } + K^{\prime }\log \left(\left(\frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon }\right)^2 + \frac{2f^{-1}(\delta )}{g_v(\widetilde{\omega }(v)) - \epsilon } \right) \frac{1}{g_v(\widetilde{\omega }(v)) - \epsilon } +1 \right) }{\log \left(\frac{1}{\delta }\right)} \nonumber \\& \stackrel{(c)}{=} \frac{1+\lambda }{g_v(\widetilde{\omega }(v)) - \epsilon } \nonumber \\& \stackrel{(d)}{\le } \frac{1+\lambda }{\frac{1}{2}c^*(v)^{-1} - \epsilon },$ where $(a)$ follows from the fact that $\mathbb {E}_v^{\Pi _{\mathrm {Het-TS}}} [T_{\rm cvg}(v,\epsilon )]$ does not depend on $\delta $ and that $\mathbb {E}_v^{\Pi _{\mathrm {Het-TS}}}[T_{\rm cvg}(v,\epsilon )] < +\infty $, $(b)$ follows from the definition of $T_{\rm last}(v,\delta , \epsilon )$, $(c)$ follows from Lemma REF, and $(d)$ makes use of Lemma REF. Letting $\epsilon \rightarrow 0$ in (REF), we get $\limsup _{\delta \rightarrow 0} \frac{\mathbb {E}_v^{\Pi _{\mathrm {Het-TS}}} (\tau _{\delta }(\Pi _{\mathrm {Het-TS}}))}{\log \left(\frac{1}{\delta }\right)} \le 2\, (1+\lambda ) \, c^*(v).$ This completes the desired proof.
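The $(1+\lambda )$ factor above enters purely through the communication grid: rounding a target time $T$ up to the next grid point $b_r = \lceil (1+\lambda )^r \rceil $ costs at most a factor $(1+\lambda )$ plus one. A small sketch illustrating this (our own helper names):

```python
import math

def comm_rounds(lam, horizon):
    # Grid b_r = ceil((1 + lam)^r), r = 1, 2, ..., up to the horizon.
    out, r = [], 1
    while True:
        b = math.ceil((1.0 + lam) ** r)
        if b > horizon:
            return out
        if not out or b > out[-1]:  # the ceiling can repeat values for small r
            out.append(b)
        r += 1

def first_round_at_or_after(T, lam):
    # Smallest grid point >= T; it is never larger than (1 + lam) * T + 1.
    r = 1
    while math.ceil((1.0 + lam) ** r) < T:
        r += 1
    return math.ceil((1.0 + lam) ** r)

print(comm_rounds(0.5, 100))             # [2, 3, 4, 6, 8, 12, 18, 26, 39, 58, 87]
print(first_round_at_or_after(40, 0.5))  # 58 <= 1.5 * 40 + 1
```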
Proof of Theorem REF Fix a problem instance $v \in \mathcal {P}$ arbitrarily. Recall that there exists a common solution to (REF) (see the discussion after (REF)). The following results show that this solution satisfies good conditions 1 and 2 and that it is unique. Lemma 17 The common solution to (REF) satisfies good condition 2. Let $\widetilde{\omega }(v)$ be a common solution to (REF). Let $Q^{\rm min}_j \triangleq \operatornamewithlimits{argmin}_{i \in Q_j} \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}}, \quad j \in [L].$ Suppose $\widetilde{\omega }(v)$ does not meet good condition 2. Then, there exists $l \in [L]$ such that $Q_l \ne Q^{\rm min}_l$. We now recursively construct an $\omega \in \Gamma $ such that $\widetilde{g}^{(l)}_v(\omega ) > \widetilde{g}^{(l)}_v(\widetilde{\omega }(v))$, thereby leading to a contradiction. Step 1: Initialization. Set $\omega ^{(0)} \triangleq \widetilde{\omega }(v)$ and $Q^{(0)} \triangleq Q^{\rm min}_l$. Step 2: Iterations. For each $s\in \lbrace 0,1,2,\ldots , |Q_l^{\rm min}|-1\rbrace $, note that there exists $i_1 \in Q^{(s)}$, $i_2 \in Q_l \setminus Q^{(s)}$, and $m^{\prime } \in [M]$ such that $i_1,i_2 \in S_{m^{\prime }}$. Let $\epsilon > 0$ be sufficiently small so that $\frac{\Delta ^2_{i_2}(v)}{\frac{1}{M^2_{i_2}}\sum _{m=1}^M \mathbf {1}_{\lbrace i_2 \in S_m\rbrace } \frac{1}{\omega ^{(s)}_{i_2,m}-\epsilon }} > \min _{i \in Q_{l}} \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)} } =\widetilde{g}^{(l)}_v(\widetilde{\omega }(v)).$ Then, we construct $\omega ^{(s+1)}$ as $\forall \, m\in [M], i\in S_m, \ \ \omega ^{(s+1)}_{i,m} \triangleq {\left\lbrace \begin{array}{ll}\omega ^{(s)}_{i,m}-\epsilon ,\ &\text{if } i=i_2, m=m^{\prime }, \\\omega ^{(s)}_{i,m}+\epsilon ,\ &\text{if } i=i_1, m=m^{\prime }, \\\omega ^{(s)}_{i,m},\ &\text{otherwise},\end{array}\right.}$ and set $Q^{(s+1)} \triangleq Q^{(s)} \setminus \lbrace i_1\rbrace $. We then have $\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\omega ^{(s+1)}_{i,m}}} > \widetilde{g}^{(l)}_v(\widetilde{\omega }(v)) \quad \forall \, i \in Q_l \setminus Q^{(s+1)},$ and $\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\omega ^{(s+1)}_{i,m}}} = \widetilde{g}^{(l)}_v(\widetilde{\omega }(v)) \quad \forall \, i \in Q^{(s+1)}.$ By following the above procedure for $s\in \lbrace 0, 1, 2, \ldots , |Q_l^{\rm min}|-1\rbrace $, we arrive at $\omega ^{(\vert Q^{\rm min}_l \vert )}$ such that $\widetilde{g}^{(l)}_v(\omega ^{(\vert Q^{\rm min}_l \vert )}) > \widetilde{g}^{(l)}_v(\widetilde{\omega }(v))$, which is clearly a contradiction. Lemma 18 The common solution to (REF) satisfies good condition 1. Let $\widetilde{\omega }(v)$ be a common solution to (REF). From Lemma REF, we know that $\widetilde{\omega }(v)$ satisfies good condition 2. Suppose now that $\widetilde{\omega }(v)$ does not satisfy good condition 1. Then, there exists $l \in [L]$, $m_1,m_2 \in [M]$, and $i_1, i_2 \in S_{m_1} \cap S_{m_2} \subseteq Q_{l}$ such that $\frac{\widetilde{\omega }_{i_1,m_1}(v)}{\widetilde{\omega }_{i_2,m_1}(v)} > \frac{\widetilde{\omega }_{i_1,m_2}(v)}{\widetilde{\omega }_{i_2,m_2}(v)}.$ Because $\widetilde{\omega }(v)$ satisfies good condition 2, we must have $\frac{\Delta ^2_{i_2}(v)}{\frac{1}{M^2_{i_2}}\sum _{m=1}^M \mathbf {1}_{\lbrace i_2 \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i_2,m}(v)}} = \frac{\Delta ^2_{i_1}(v)}{\frac{1}{M^2_{i_1}}\sum _{m=1}^M \mathbf {1}_{\lbrace i_1 \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i_1,m}(v)}} = 
\widetilde{g}^{(l)}_v(\widetilde{\omega }(v)).$ Note that (REF) implies that $\frac{\widetilde{\omega }_{i_2,m_1}(v)}{\widetilde{\omega }_{i_2,m_2}(v)} < \frac{\widetilde{\omega }_{i_1,m_1}(v)}{\widetilde{\omega }_{i_1,m_2}(v)}$. Let $\rho $ be any value such that $\left(\frac{\widetilde{\omega }_{i_2,m_1}(v)}{\widetilde{\omega }_{i_2,m_2}(v)}\right)^2 < \rho < \left(\frac{\widetilde{\omega }_{i_1,m_1}(v)}{\widetilde{\omega }_{i_1,m_2}(v)}\right)^2.$ Using the fact that the derivative of $x \mapsto \frac{1}{x}$ is $-\frac{1}{x^2}$, we have $\frac{1}{\widetilde{\omega }_{i_1,m_1}(v) - \rho \epsilon } - \frac{1}{\widetilde{\omega }_{i_1,m_1}(v)} = \frac{\rho \epsilon }{\widetilde{\omega }^2_{i_1,m_1}(v)} + o(\epsilon )\quad \mbox{as }\epsilon \rightarrow 0,$ and $\frac{1}{\widetilde{\omega }_{i_1,m_2}(v)} - \frac{1}{\widetilde{\omega }_{i_1,m_2}(v) + \epsilon } = \frac{\epsilon }{\widetilde{\omega }^2_{i_1,m_2}(v)} + o(\epsilon )\quad \mbox{as }\epsilon \rightarrow 0.$ Here $o(\epsilon )$ is a function in $\epsilon $ that satisfies $\lim _{\epsilon \rightarrow 0}\frac{o(\epsilon )}{\epsilon }=0$. By combining these equations, $&\frac{1}{\widetilde{\omega }_{i_1,m_1}(v) - \rho \epsilon } - \frac{1}{\widetilde{\omega }_{i_1,m_1}(v)}-\bigg ( \frac{1}{\widetilde{\omega }_{i_1,m_2}(v)} - \frac{1}{\widetilde{\omega }_{i_1,m_2}(v) + \epsilon } \bigg ) \nonumber \\& = \frac{\rho \epsilon }{\widetilde{\omega }^2_{i_1,m_1}(v)} + o(\epsilon ) - \bigg (\frac{\epsilon }{\widetilde{\omega }^2_{i_1,m_2}(v)} + o(\epsilon ) \bigg )\\&=\epsilon \bigg [ \frac{\rho }{\widetilde{\omega }^2_{i_1,m_1}(v)} \big (1+o_\epsilon (1) \big ) - \frac{1}{\widetilde{\omega }^2_{i_1,m_2}(v)} \big (1+o_\epsilon (1) \big ) \bigg ]$ where $o_\epsilon (1)$ is a term that vanishes as $\epsilon \downarrow 0$. From (REF), we have $\rho < \frac{\widetilde{\omega }_{i_1, m_1}^2(v)}{\widetilde{\omega }_{i_1, m_2}^2(v)} \Longleftrightarrow \frac{\rho }{\widetilde{\omega }_{i_1, m_1}^2(v)} < \frac{1}{\widetilde{\omega }_{i_1, m_2}^2(v)}.$ By (REF) and (REF), there exists $\epsilon _1>0$ such that for all $\epsilon \in (0,\epsilon _1]$, $\frac{1}{\widetilde{\omega }_{i_1,m_1}(v) - \rho \epsilon } - \frac{1}{\widetilde{\omega }_{i_1,m_1}(v)}-\bigg ( \frac{1}{\widetilde{\omega }_{i_1,m_2}(v)} - \frac{1}{\widetilde{\omega }_{i_1,m_2}(v) + \epsilon } \bigg )<0.$ In other words, for all $\epsilon \in (0,\epsilon _1]$, $\frac{1}{\widetilde{\omega }_{i_1,m_1}(v)} + \frac{1}{\widetilde{\omega }_{i_1,m_2}(v)} > \frac{1}{\widetilde{\omega }_{i_1,m_1}(v) - \rho \epsilon } + \frac{1}{\widetilde{\omega }_{i_1,m_2}(v) + \epsilon }.$ Similarly, there exists $\epsilon _2 >0$ such that for all $\epsilon \in (0, \epsilon _2]$, $\frac{1}{\widetilde{\omega }_{i_2,m_1} (v)} + \frac{1}{\widetilde{\omega }_{i_2,m_2}(v)} >\frac{1}{\widetilde{\omega }_{i_2,m_1}(v) +\rho \epsilon } + \frac{1}{\widetilde{\omega }_{i_2,m_2}(v) - \epsilon }.$ Set $\epsilon =\min \lbrace \epsilon _1, \epsilon _2\rbrace $. Let $\omega ^{\prime } \in \Gamma $ be defined as $\forall \, m\in [M], i\in S_m, \ \ \omega ^{\prime }_{i,m} \triangleq {\left\lbrace \begin{array}{ll}\widetilde{\omega }_{i,m}(v)-\rho \epsilon ,\ &\text{if } i=i_1, m=m_1, \\\widetilde{\omega }_{i,m}(v)+\epsilon ,\ &\text{if } i=i_1, m=m_2, \\\widetilde{\omega 
}_{i,m}(v) + \rho \epsilon ,\ &\text{if } i=i_2, m=m_1, \\\widetilde{\omega }_{i,m}(v) - \epsilon ,\ &\text{if } i=i_2, m=m_2, \\\widetilde{\omega }_{i,m}(v),\ &\text{otherwise}.\end{array}\right.}$ Then, from (REF) and (REF), we have $\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{{\omega }^{\prime }_{i,m}}} >\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}} \quad \forall \, i \in \lbrace i_1,i_2\rbrace ,$ and $\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{{\omega }^{\prime }_{i,m}}} =\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}} \quad \forall \, i \in Q_l \setminus \lbrace i_1,i_2\rbrace .$ We then consider the following two cases. Case 1: $Q_{l} = \lbrace i_1,i_2\rbrace $. In this case, it follows from (REF) that $\widetilde{g}^{(l)}_v(\omega ^{\prime }) >\widetilde{g}^{(l)}_v(\widetilde{\omega }(v))$, which contradicts the fact that $\widetilde{\omega }(v)$ is an optimal solution to $\max _{\omega \in \Gamma } \widetilde{g}_v(\omega )$. Case 2: $\lbrace i_1,i_2\rbrace \subsetneq Q_{l}$. In this case, it follows from (REF) that $\widetilde{g}^{(j)}_v(\omega ^{\prime }) = \widetilde{g}^{(j)}_v(\widetilde{\omega }(v))$ for all $j \in [L]$, which implies that $\omega ^{\prime }$ is a common solution to (REF) just as $\widetilde{\omega }(v)$ is. However, note that the right-hand sides of (REF) and (REF) are equal because $\widetilde{\omega }(v)$ satisfies good condition 2. As a result, it follows that $\frac{\Delta ^2_{i_1}(v)}{\frac{1}{M^2_{i_1}}\sum _{m=1}^M \mathbf {1}_{\lbrace i_1 \in S_m\rbrace } \frac{1}{\omega ^{\prime }_{i_1,m}}} >\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{{\omega }^{\prime }_{i,m}}} \quad \forall \, i \in Q_l \setminus \lbrace i_1, i_2\rbrace .$ This shows that $\omega ^{\prime }$ does not meet good condition 2, thereby contradicting Lemma REF. Lemma 19 The common solution to (REF) is unique. Suppose that $\widetilde{\omega }(v)$ and $\widetilde{\omega }^{\prime }(v)$ are two common solutions to (REF). Suppose further that $\widetilde{\omega }(v) \ne \widetilde{\omega }^{\prime }(v)$. In the following, we arrive at a contradiction. Let $\widetilde{\omega }^{\rm avg}(v) \triangleq (\widetilde{\omega }(v)+\widetilde{\omega }^{\prime }(v))/2$. From Lemma REF, we know that $\widetilde{\omega }(v)$ and $\widetilde{\omega }^{\prime }(v)$ meet good condition 2. This implies that $\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}} = \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\prime }_{i,m}(v)}} \quad \forall \, i \in [K],$ which in turn implies that ${\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}} = {\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\prime }_{i,m}(v)}}.$ Using the relation $\frac{2}{(a+b)/2} < 
\frac{1}{a}+\frac{1}{b}$ whenever $a,b>0$ and $a \ne b$, we get that $\frac{2}{\widetilde{\omega }^{\rm avg}_{i,m}(v)} < \frac{1}{\widetilde{\omega }_{i,m}(v)} + \frac{1}{\widetilde{\omega }^{\prime }_{i,m}(v)} \quad \forall \, m\in [M], \, i\in S_m \; \text{ such that }\widetilde{\omega }_{i,m}(v) \ne \widetilde{\omega }^{\prime }_{i,m}(v).$ Let $Q_{\rm diff} \triangleq \lbrace \iota \in [K]: \exists \, m, \widetilde{\omega }_{\iota ,m}(v) \ne \widetilde{\omega }^{\prime }_{\iota ,m}(v)\rbrace $. As a consequence of (REF), for all $i\in Q_{\rm diff}$, we have $& {\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{2}{\widetilde{\omega }^{\rm avg}_{i,m}(v)}} <{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}} + {\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\prime }_{i,m}(v)}} \nonumber \\& \stackrel{(a)}{\Rightarrow } {\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\rm avg}_{i,m}(v)}} < {\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}} \nonumber \\& \Rightarrow \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\rm avg}_{i,m}(v)}} > \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}},$ where $(a)$ follows from (REF). In addition, it is easy to see that for all $i \in [K] \setminus Q_{\rm diff}$, $\frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\rm avg}_{i,m}(v)}} = \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }_{i,m}(v)}}.$ By (REF) and (REF) we know that $\widetilde{g}^{(j)}_v(\widetilde{\omega }^{\rm avg}(v)) \ge \widetilde{g}^{(j)}_v(\widetilde{\omega }(v))$ for each $j\in [L]$, which implies that $\widetilde{\omega }^{\rm avg}(v)$ is a common solution to (REF). Now, there must exist $l \in [L]$ such that $Q_{\rm diff} \cap Q_l \ne \emptyset $, and we consider two cases. Case 1: $Q_{\rm diff} \cap Q_l = Q_l$. Eq. (REF) implies that $\widetilde{g}^{(l)}_v(\widetilde{\omega }^{\rm avg}(v)) > \widetilde{g}^{(l)}_v(\widetilde{\omega }(v))$, which contradicts the fact that $\widetilde{\omega }(v)$ and $\widetilde{\omega }^{\rm avg}(v)$ are both common solutions to (REF). Case 2: $Q_{\rm diff} \cap Q_l \subsetneq Q_l$. Note that the right-hand sides of (REF) and (REF) are equal because $\widetilde{\omega }(v)$ meets good condition 2. Hence, there exists $i_1,i_2 \in Q_l$ such that $\frac{\Delta ^2_{i_1}(v)}{\frac{1}{M^2_{i_1}}\sum _{m=1}^M \mathbf {1}_{\lbrace i_1 \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\rm avg}_{i_1,m}(v)}} >\frac{\Delta ^2_{i_2}(v)}{\frac{1}{M^2_{i_2}}\sum _{m=1}^M \mathbf {1}_{\lbrace i_2 \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\rm avg}_{i_2,m}(v)}},$ which means that $\widetilde{\omega }^{\rm avg}(v)$ violates good condition 2, contradicting Lemma REF. Finally, Theorem REF follows from Lemma REF, Lemma REF, and Lemma REF.
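Good condition 2 is straightforward to test numerically for a candidate allocation: within each group $Q_j$, the per-arm quantities $\Delta _i^2(v) \big / \big (\frac{1}{M_i^2}\sum _m \mathbf {1}_{\lbrace i\in S_m\rbrace }/\omega _{i,m}\big )$ must coincide. A sketch of such a check (the data layout and names are our own assumptions):

```python
def arm_score(i, omega, S, M_i, gap_i):
    # Delta_i^2 / ( (1/M_i^2) * sum over clients m holding arm i of 1/omega[(i, m)] )
    denom = sum(1.0 / omega[(i, m)] for m, S_m in enumerate(S) if i in S_m) / M_i ** 2
    return gap_i ** 2 / denom

def satisfies_good_condition_2(Q_j, omega, S, M_counts, gaps, tol=1e-9):
    # True iff all arms in the group Q_j attain the (common) minimum score.
    scores = [arm_score(i, omega, S, M_counts[i], gaps[i]) for i in Q_j]
    return max(scores) - min(scores) <= tol * max(scores)
```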
Proof of Proposition REF Before proving Proposition REF, we first supply the proofs of Lemma REF and Lemma REF. Proof of Lemma REF Fix $j \in [L]$ and a problem instance $v$ arbitrarily. Recall that $\widetilde{\omega }(v)$ is the unique common solution to (REF) and $G(v)$ is the global vector characterising $\widetilde{\omega }(v)$ uniquely (via (REF)). From Lemma REF, we know that $\widetilde{\omega }(v)$ satisfies good condition 2, which implies that for any $i \in Q_j$, $& \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }(v)_{i,m}} } =\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v)) \nonumber \\& \Rightarrow \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }(v)_{i,m}} }{M^2_{i} \Delta ^2_{i}(v)} =\frac{1}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \stackrel{(a)}{\Rightarrow } \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{\sum _{\iota \in S_m}G(v)_\iota }{G(v)_i}}{M^2_{i} \Delta ^2_{i}(v)} =\frac{1}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \Rightarrow \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } {\sum _{\iota \in S_m}G(v)_\iota }}{M^2_{i} \Delta ^2_{i}(v)} =\frac{{G(v)_i}}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \Rightarrow \sum _{\iota \in Q_j} G(v)_\iota \left( \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i,\iota \in S_m\rbrace } }{M^2_{i} \Delta ^2_{i}(v)} \right) =\frac{{G(v)_i}}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))}.$ In the above set of implications, $(a)$ follows from (REF). Noting that (REF) is akin to (REF) completes the desired proof. Proof of Lemma REF Fix $j \in [L]$ and a problem instance $v \in \mathcal {P}$ arbitrarily. It is easy to verify that $G^{(j)}(v)$ has strictly positive entries (else, $\widetilde{g}_v({\widetilde{\omega }(v)})=0$ according to (REF)). Suppose that $\mathbf {u}\in \mathbb {R}^{\vert Q_j \vert }$ is another eigenvector of $H^{(j)}(v)$ corresponding to the eigenvalue $\frac{1}{g_v^{(j)}(\widetilde{\omega }(v))}$ and $\lbrace \mathbf {u},\, G^{(j)}(v)\rbrace $ is linearly independent. Let $\mathbf {u}^{\prime } \triangleq G^{(j)}(v) + \epsilon \mathbf {u},$ where $\epsilon > 0$ is any number such that each entry of $\mathbf {u}^{\prime }$ is strictly positive. Let $\omega ^{\prime } \in \Gamma $ be defined as $\forall \, m\in [M], i\in S_m, \ \ \omega ^{\prime }_{i,m} ={\left\lbrace \begin{array}{ll}\frac{\mathbf {u}^{\prime }_{{\rm Idx}(i)}}{\sum _{\iota \in S_m } \mathbf {u}^{\prime }_{{\rm Idx}(\iota )}},\ &\text{if}\ i\in Q_j, \\\widetilde{\omega }(v)_{i,m},\ &\text{otherwise},\end{array}\right.}$ where for any $i\in Q_j$, ${\rm Idx}(i) \in [\vert Q_j \vert ]$ represents the index of arm $i$ within the arms set $Q_j$. Then, it follows from the definition of $\omega ^{\prime }$ that $\widetilde{g}_{v}^{(l)}(\widetilde{\omega }(v)) = \widetilde{g}_{v}^{(l)}(\omega ^{\prime })$ for all $l \ne j$ and $\omega ^{\prime } \ne \widetilde{\omega }(v)$. Note that $\mathbf {u}^{\prime }$ is also an eigenvector of $H^{(j)}(v)$ corresponding to the eigenvalue $\frac{1}{g_v^{(j)}(\widetilde{\omega }(v))}$. This means that for all $i\in Q_j$, $& \sum _{\iota \in Q_j} \mathbf {u}^{\prime }_{{\rm Idx}(\iota )} \left( \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i,\iota \in S_m\rbrace } }{M^2_{i} \Delta ^2_{i}(v)} \right) =\frac{ \mathbf {u}^{\prime }_{{\rm Idx}(i)}}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \Rightarrow \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } {\sum _{\iota \in S_m}\mathbf {u}^{\prime }_{{\rm Idx}(\iota )}}}{M^2_{i} \Delta ^2_{i}(v)} =\frac{{\mathbf {u}^{\prime }_{{\rm Idx}(i)}}}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \Rightarrow \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{\sum _{\iota \in S_m}\mathbf {u}^{\prime }_{{\rm Idx}(\iota )}}{\mathbf {u}^{\prime }_{{\rm Idx}(i)}}}{M^2_{i} \Delta ^2_{i}(v)} =\frac{1}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \Rightarrow \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\omega ^{\prime }_{i,m}} } =\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v)).$ From (REF), it is clear that $\widetilde{g}_{v}^{(l)}(\widetilde{\omega }(v)) = \widetilde{g}_{v}^{(l)}(\omega ^{\prime })$ for all $l \in [L]$, which contradicts Lemma REF. Thus, there is no eigenvector $\mathbf {u}\in \mathbb {R}^{\vert Q_j \vert }$ of $H^{(j)}(v)$ corresponding to the eigenvalue $\frac{1}{g_v^{(j)}(\widetilde{\omega }(v))}$ such that $\mathbf {u}$ and $G^{(j)}(v)$ are linearly independent. This completes the desired proof.
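Computationally, the two lemmas say that $G^{(j)}(v)$ can be recovered as the unique sign-definite eigenvector of $H^{(j)}(v)$ (normalized to unit norm), which is what the proposition below formalizes. A NumPy sketch under that reading (matrix layout per the display above; the function names are ours, not the paper's):

```python
import numpy as np

def build_H(Q_j, S, M_counts, gaps):
    # H_{Idx(i), Idx(iota)} = (# clients holding both i and iota) / (M_i^2 * Delta_i^2)
    Q = list(Q_j)
    H = np.zeros((len(Q), len(Q)))
    for a, i in enumerate(Q):
        for b, iota in enumerate(Q):
            overlap = sum(1 for S_m in S if i in S_m and iota in S_m)
            H[a, b] = overlap / (M_counts[i] ** 2 * gaps[i] ** 2)
    return H

def sign_definite_eigvec(H, tol=1e-10):
    # Return the eigenvector whose entries all share one sign, flipped to be
    # positive and scaled to unit norm; by the proposition it is unique.
    _, vecs = np.linalg.eig(H)
    for k in range(vecs.shape[1]):
        u = np.real(vecs[:, k])
        if np.all(u > tol) or np.all(u < -tol):
            u = np.abs(u)
            return u / np.linalg.norm(u)
    raise ValueError("no sign-definite eigenvector found")
```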
Proof of Proposition REF Let $\mathbf {v}$ be any eigenvector of $H^{(j)}(v)$ whose eigenvalue is not equal to $\frac{1}{g_v^{(j)}(\widetilde{\omega }(v))}$. Because $H^{(j)}(v)$ is a normal matrix, its eigenvectors corresponding to distinct eigenvalues are orthogonal [7]. This implies that $\langle \mathbf {v}, G^{(j)}(v) \rangle =0$, where $\langle \cdot ,\cdot \rangle $ denotes the vector inner product operator. Note that $G^{(j)}(v)$ has strictly positive entries. Therefore, the entries of $\mathbf {v}$ cannot be all positive or all negative. From Lemma REF, we know that any eigenvector $\mathbf {v^{\prime }}$ associated with the eigenvalue $\frac{1}{g_v^{(j)}(\widetilde{\omega }(v))}$ should satisfy $\mathbf {v^{\prime }} = \alpha G^{(j)}(v),\quad \text{for some } \alpha \in \mathbb {R}\setminus \lbrace 0\rbrace ,$ which implies that the entries of $\mathbf {v}^{\prime }$ are either all positive or all negative. Also, Lemma REF implies that among any complete set of eigenvectors of $H^{(j)}(v)$, there is only one eigenvector $\mathbf {u}$ with eigenvalue $\frac{1}{g_v^{(j)}(\widetilde{\omega }(v))}$. From the exposition above, it then follows that the entries of $\mathbf {u}$ must be all positive or all negative. Noting that $G^{(j)}(v)$ has unit norm (see (REF)), we arrive at the form in (REF). This completes the proof. Experimental Results In this section, we corroborate our theoretical results by implementing $\textsc {Het}-\textsc {TS}(\lambda )$ and performing a variety of experiments on a synthetic dataset and the MovieLens dataset. Synthetic Dataset The instance we used contains $M=5$ clients and $K=5$ arms. Their means are captured in the following matrix $\mu =\begin{bmatrix}7.55725965 & 7.66129480 & 7.32803730 & 7.32543803 & 7.64140236\\6.16828802 & 6.45512172 & 6.93972864 & 6.10118710 & 6.66694736\\5.29985957 & 5.07225703 & 5.84690706 & 5.01784103 & 5.96481965\\4.15091046 & 4.27998901 & 4.62915314 & 4.02730890 & 4.13292786\\3.44192387 & 3.92546024 & 3.53417944 & 3.23238613 & 3.77754542\end{bmatrix}.$
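In code (NumPy), with rows indexing arms and columns indexing clients as explained next, this matrix reads:

```python
import numpy as np

# Arm means of the synthetic instance: rows = arms, columns = clients.
mu = np.array([
    [7.55725965, 7.66129480, 7.32803730, 7.32543803, 7.64140236],
    [6.16828802, 6.45512172, 6.93972864, 6.10118710, 6.66694736],
    [5.29985957, 5.07225703, 5.84690706, 5.01784103, 5.96481965],
    [4.15091046, 4.27998901, 4.62915314, 4.02730890, 4.13292786],
    [3.44192387, 3.92546024, 3.53417944, 3.23238613, 3.77754542],
])
# Indexing in the text is one-based, so mu_{2,3} in the text is mu[1, 2] here.
```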
The rows of $\mu $ index the arms while the columns index the clients (so, for example, $\mu _{2,3} = 6.93972864$ is the mean of arm 2 at client 3). Figure: Expected stopping times for various overlap patterns as described in (). Figure: Expected stopping times for various $\lambda $'s and for overlap pattern $\mathcal {O}^{(2)}$. To empirically evaluate the effect of various sets $\lbrace S_m\rbrace _{m\in [M]}$ on the expected stopping time, we consider different overlap patterns (each of which is a multiset) $\mathcal {O}^{(p)} = \lbrace S_{1}^{(p)}, S_{2}^{(p)}, \ldots , S_{5}^{(p)} \rbrace $ where $\mathcal {O}^{(1)} &= \big \lbrace \lbrace 1,2\rbrace , \lbrace 2,3\rbrace , \lbrace 3,4\rbrace ,\lbrace 4,5\rbrace , \lbrace 5,1\rbrace \big \rbrace , \nonumber \\\mathcal {O}^{(2)} &= \big \lbrace \lbrace 1,2,3\rbrace , \lbrace 2,3,4\rbrace , \lbrace 3,4,5\rbrace ,\lbrace 4,5,1\rbrace , \lbrace 5,1,2\rbrace \big \rbrace , \nonumber \\\mathcal {O}^{(3)} &= \big \lbrace \lbrace 1,2,3,4\rbrace , \lbrace 2,3,4,5\rbrace , \lbrace 3,4,5,1\rbrace , \lbrace 4,5,1,2\rbrace , \lbrace 5,1,2,3\rbrace \big \rbrace ,\quad \mbox{and} \nonumber \\\mathcal {O}^{(4)} &= \big \lbrace \lbrace 1,2,3,4,5\rbrace , \lbrace 1,2,3,4,5\rbrace , \lbrace 1,2,3,4,5\rbrace , \lbrace 1,2,3,4,5\rbrace , \lbrace 1,2,3,4,5\rbrace \big \rbrace .$ Thus, the larger the index of the overlap pattern $p$, the larger the overlap among the sets $\lbrace S_m^{(p)}\rbrace _{m\in [M]}$ and the more clients have access to a fixed arm $i \in [K]$. The matrix $\mu $, together with an overlap pattern $\mathcal {O}^{(p)}$, uniquely defines a problem instance $v=(\lbrace \mu _{i,1}\rbrace _{i\in S_1}, \lbrace \mu _{i,2}\rbrace _{i\in S_2},\dots , \lbrace \mu _{i,M}\rbrace _{i\in S_M})$; a code sketch of this construction is given at the end of this subsection. Effect of Amount of Overlap The empirical expected stopping times of $\textsc {Het}-\textsc {TS}(\lambda )$ for $\lambda =0.01$ are displayed in Fig. REF. It can be seen that as $\delta $ decreases, the empirical stopping time increases, as expected. More interestingly, note that for a fixed $\delta $, the stopping time is not monotone in the amount of overlap. This is due to two factors that work in opposite directions as one increases the amount of overlap of the $S_m$'s among various clients. On the one hand, each client has access to more arms, yielding more information about the bandit instance for the client. On the other hand, with more arms, the set of arms that can potentially be the best arm for that particular client also increases. This observation is interesting and, at first glance, counter-intuitive. Figure: Comparisons of the upper and lower bounds.
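The cyclic patterns $\mathcal {O}^{(1)}, \ldots , \mathcal {O}^{(4)}$ and the instance $v=(\mu , \mathcal {O}^{(p)})$ referenced above can be generated by sliding a window of $p+1$ consecutive arms around the $K$ arms. A sketch of that construction (the helper names are ours; `mu` is the array from the previous sketch):

```python
def overlap_pattern(p, K=5, M=5):
    # O^{(p)}: client m holds the p + 1 cyclically consecutive arms starting at m
    # (arms labelled 1..K), matching the patterns displayed above.
    return [{(m + k) % K + 1 for k in range(p + 1)} for m in range(M)]

def make_instance(mu, pattern):
    # v = ({mu_{i,m}}_{i in S_m})_{m in [M]}, with one-based arm labels into mu.
    return [{i: mu[i - 1, m] for i in S_m} for m, S_m in enumerate(pattern)]

assert overlap_pattern(2)[0] == {1, 2, 3}        # first set of O^{(2)}
assert overlap_pattern(4)[0] == {1, 2, 3, 4, 5}  # O^{(4)}: every client sees all arms
```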
We note another interesting phenomenon, most evident from the curve indicated by $\\lambda = 0.5$ .", "The growth pattern of the empirical stopping time has a piecewise linear shape.", "This is because $\\textsc {Het}-\\textsc {TS}(\\lambda )$ does not stop at any arbitrary integer time; it only does so at the times that correspond to communication rounds $b_r = \\lceil (1+ \\lambda )^r \\rceil $ for $r\\in \\mathbb {N}$ .", "Hence, for $\\delta $ and $\\delta ^{\\prime }$ sufficiently close, the empirical stopping times will be exactly the same with high probability.", "This explains the piecewise linear stopping pattern as $\\log (1/\\delta )$ grows.", "Comparison to Theoretical Bounds In the final experiment for synthetic data, we set $\\lambda =0.01$ and overlap pattern $\\mathcal {O}^{(2)}$ as our instance $v$ .", "In Fig. REF , we compare the empirical stopping time to the lower bound in Proposition REF and the upper bound in Theorem REF .", "Recall that the asymptotic ratio of the expected stopping time ${\\mathbb {E}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS}}}[\\tau _\\delta (\\Pi _{ \\textsc {Het}-\\textsc {TS} })]}$ to $\\log (1/\\delta )$ is $c^*(v)$ and $2(1+\\lambda )c^*(v)$ in the lower and upper bounds, respectively.", "We observe that as $\\delta $ becomes sufficiently small, the slope of the empirical curve lies between the upper and lower bounds, as expected.", "Furthermore, we see that ${\\mathbb {E}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS}}}[\\tau _\\delta (\\Pi _{ \\textsc {Het}-\\textsc {TS} })]}/{\\log \\left(1/{\\delta }\\right)}$ is close to the lower bound, which strongly suggests that our learned allocation $\\widetilde{\\omega }(\\hat{v}(t))$ is very close to the optimal allocation $\\arg \\max _{\\omega \\in \\Gamma } g_{\\hat{v}(t)}(\\omega )$ .", "We observe from Fig. REF that the empirical performance or, more precisely, the slope of the expected stopping time as a function of $\\log (1/\\delta )$ is close to $(1+\\lambda )\\, c^*(v)$ .", "This suggests that the factor $1+\\lambda $ (in $2\\, (1+\\lambda )$ ) in Theorem REF is unavoidable if we communicate at time instants that grow as $\\Theta ( (1+\\lambda )^r)$ .", "The presence of the factor 2 (in $2\\, (1+\\lambda )$ ) is to enable the optimal allocation $\\hat{\\omega }_{i,m}(t)$ to be computed in a tractable fashion.", "For more details concerning this point, see the discussion following Theorem REF .", "MovieLens Dataset In the MovieLens dataset [3], there are about 2.2 million rating samples and 10,197 movies.", "Following the experimental settings in [16], we view each country and each genre as a client and an arm, respectively.", "In addition, we normalize the rating scores to the range 0 to 100.", "We note that in the raw dataset there are very few or even no samples for some combinations of country and genre.", "Thus, in our experiment we discard any country and genre pair with fewer than ten samples.", "As a result, we end up with 10,044 movies and $M=48$ clients across $K=19$ arms.", "It is natural that different clients have different arm sets in the dataset; this dovetails neatly with our problem setting, in which the $S_m$ 's need not be the same as one another and they need not be the full set $[K]$ .", "Table: Comparison of the empirical stopping times between $\\textsc {Het}-\\textsc {TS}$ and Uniform for $\\lambda =0.01$ .", "As in [20], we compare our algorithm to a baseline method which we call Uniform.", "Uniform has the same stopping rule as $\\textsc {Het}-\\textsc {TS}$ , but it uses a uniform sampling rule (i.e., each client samples each of its arms uniformly).", "Note that Uniform is a $\\delta $
-PAC algorithm for all $\\delta \\in (0,1)$ .", "Our numerical results, which are obtained by averaging over four independent experiments and by setting $\\lambda =0.01$ , are presented in Table REF .", "We observed from our experiments that the statistical variations of the results are minimal (and virtually non-existent) as the algorithm necessarily stops at one of the time instants of the form $b_r = \\lceil (1+\\lambda )^r\\rceil $ for $r\\in \\mathbb {N}$ .", "Hence, “error bars” are not indicated.", "From Table REF , we observe that the ratio of empirical stopping time between Uniform and $\\textsc {Het}-\\textsc {TS}$ is approximately eight, showing that the sampling rule of $\\textsc {Het}-\\textsc {TS}$ is highly effective in rapidly identifying the best arms in this real-world dataset." ], [ "A Useful Lemma", "Lemma 6 Let $T$ be any fixed time instant, and let $\\mathcal {F}_T = \\sigma (\\lbrace X_{A_m(t),m}(t),A_m(t): t\\in [T], m\\in [M]\\rbrace )$ be the history of all the arm pulls and rewards seen up to time $T$ at all the clients under an algorithm $\\Pi $ .", "Let $E$ be any event such that $\\mathbf {1}_{\\lbrace E\\rbrace }$ is $\\mathcal {F}_T$ -measurable.", "Then, for any pair of problem instances $v$ and $v^{\\prime }$ , $\\sum _{t=1}^{T}\\ \\sum _{m=1}^M\\ \\sum _{i \\in S_m} \\mathbb {E}_{v}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace }\\ D_{\\mathrm {KL}}(v_{i,m} \\Vert v^{\\prime }_{i,m} )\\right] \\ge d_{\\mathrm {KL}}\\left(\\mathbb {P}_{v}^\\Pi (E), \\mathbb {P}_{{v^{\\prime }}}^{\\Pi }(E)\\right),$ where $D_{\\mathrm {KL}}(p \\Vert q)$ denotes the Kullback–Leibler (KL) divergence between the distributions $p$ and $q$ , and $d_{\\mathrm {KL}}(x,y)$ denotes the KL divergence between two Bernoulli distributions with parameters $x$ and $y$ .", "The proof of Lemma REF follows along the lines of the proof of [11] and is omitted." 
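The change-of-measure bound in Lemma 6 is stated in terms of the binary KL divergence $d_{\\mathrm {KL}}(x,y)$ , and the proofs below repeatedly invoke the elementary fact that $d_{\\mathrm {KL}}(x,y) \\ge \\log (\\frac{1}{4\\delta ^{\\prime }})$ whenever $x \\ge 1-\\delta ^{\\prime }$ and $y \\le \\delta ^{\\prime }$ . The following minimal Python sketch (our own, purely illustrative) makes this inequality concrete:
\\begin{verbatim}
import math

def bernoulli_kl(x, y):
    # d_KL(x, y): KL divergence between Bernoulli(x) and Bernoulli(y)
    eps = 1e-12  # guard against log(0) at the boundary
    x, y = min(max(x, eps), 1 - eps), min(max(y, eps), 1 - eps)
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

# check d_KL(1 - d, d) >= log(1/(4d)) for several confidence levels d
for d in (0.1, 0.05, 0.01, 0.001):
    lhs, rhs = bernoulli_kl(1 - d, d), math.log(1 / (4 * d))
    print(d, round(lhs, 4), ">=", round(rhs, 4), lhs >= rhs)
\\end{verbatim}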
], ["Proof of Lemma ", "Fix $v=\\lbrace \\mu _{i,m}: i\\in S_m, \\, m\\in [M]\\rbrace \\in \\mathcal {P}$ and $\\omega \\in \\Gamma $ .", "Let $\\mathcal {C}(v) \\triangleq \\bigcup _{m\\in [M]} \\Big \\lbrace (a^*_m(v),i): i \\in S_m \\setminus \\lbrace a^*_m(v)\\rbrace \\Big \\rbrace .$ First, we note that by definition, $g_v(\\omega )=0$ if $\\omega _{i,m}=0$ for some $m\\in [M]$ and $i\\in S_m$ .", "Therefore, it suffices to consider the case when $\\omega _{i,m}>0$ for all $m\\in [M]$ and $i\\in S_m$ .", "In what follows, we abbreviate $\\mu _{i,m}(v^{\\prime })$ and $\\mu _{i}(v^{\\prime })$ to $\\mu ^{\\prime }_{i,m}$ and $\\mu ^{\\prime }_{i}$ respectively.", "We have $g_v(\\omega )& = \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i \\in S_m} \\omega _{i,m}\\frac{(\\mu _{i,m}-\\mu ^{\\prime }_{i,m})^2}{2} \\nonumber \\\\&= \\min _{(i_1,i_2)\\in \\mathcal {C}(v)}\\ \\inf _{v^{\\prime }:\\mu ^{\\prime }_{i_1}<\\mu ^{\\prime }_{i_2}}\\ \\sum _{m=1}^{M}\\ \\sum _{i \\in S_m} \\omega _{i,m}\\frac{(\\mu _{i,m}-\\mu ^{\\prime }_{i,m})^2}{2} \\nonumber \\\\&= \\min _{(i_1,i_2)\\in \\mathcal {C}(v)}\\ \\inf _{v^{\\prime }:\\mu ^{\\prime }_{i_1} \\le \\mu ^{\\prime }_{i_2}} \\ \\sum _{m=1}^{M} \\mathbf {1}_{\\lbrace i_1\\in S_m\\rbrace } \\omega _{i_1,m}\\frac{(\\mu _{i_1,m}-\\mu ^{\\prime }_{i_1,m})^2}{2}+\\mathbf {1}_{\\lbrace i_2\\in S_m\\rbrace } \\omega _{i_2,m}\\frac{(\\mu _{i_2,m}-\\mu ^{\\prime }_{i_2,m})^2}{2} \\nonumber \\\\&= \\min _{(i_1,i_2)\\in \\mathcal {C}(v)} \\ \\frac{(\\mu _{i_1}-\\mu _{i_2})^2/2}{\\frac{1}{M^2_{i_1}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_1 \\in S_m\\rbrace } \\frac{1}{\\omega _{i_1,m}} + \\frac{1}{M^2_{i_2}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_2 \\in S_m\\rbrace } \\frac{1}{\\omega _{i_2,m}}},$ where the third equality holds because the means of the arms other than $i_1$ and $i_2$ can be left unchanged in $v^{\\prime }$ (so that those terms contribute nothing to the infimum, and relaxing the strict inequality to a non-strict one does not change its value), and (REF ) follows from the penultimate line by using the method of Lagrange multipliers and noting that the inner infimum in the penultimate line is attained at $\\mu ^{\\prime }_{i_1,m} &= \\mu _{i_1,m} - \\frac{\\mu _{i_1}-\\mu _{i_2}}{M_{i_1}\\omega _{i_1,m}(\\sum _{i \\in \\lbrace i_1,i_2\\rbrace }\\sum _{m^{\\prime }:i \\in S_{m^{\\prime }}} \\frac{1}{ \\omega _{i,m^{\\prime }} M^2_{i}})} \\quad \\forall \\, m : i_1 \\in S_{m},\\nonumber \\\\\\mu ^{\\prime }_{i_2,m} &= \\mu _{i_2,m} + \\frac{\\mu _{i_1}-\\mu _{i_2}}{M_{i_2}\\omega _{i_2,m}(\\sum _{i \\in \\lbrace i_1,i_2\\rbrace }\\sum _{m^{\\prime }:i \\in S_{m^{\\prime }}} \\frac{1}{\\omega _{i,m^{\\prime }} M^2_{i} })} \\quad \\forall \\, m :i_2 \\in S_{m}.$ From the definition of $\\widetilde{g}_v(\\omega )$ in (REF ), it is easy to verify that $\\widetilde{g}_v(\\omega ) = \\min _{(i_1,i_2)\\in \\mathcal {C}(v)} \\min \\bigg \\lbrace \\frac{(\\mu _{i_1}-\\mu _{i_2})^2/2}{\\frac{1}{M^2_{i_1}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_1 \\in S_m\\rbrace } \\frac{1}{\\omega _{i_1,m}} },\\frac{(\\mu _{i_1}-\\mu _{i_2})^2/2}{\\frac{1}{M^2_{i_2}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_2 \\in S_m\\rbrace } \\frac{1}{\\omega _{i_2,m}}} \\bigg \\rbrace ,$ whence it follows that $\\frac{\\widetilde{g}_v(\\omega )}{2} \\le g_v(\\omega ) \\le \\widetilde{g}_v(\\omega )$ ."
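The closed form in (REF ) and the sandwich $\\frac{\\widetilde{g}_v(\\omega )}{2} \\le g_v(\\omega ) \\le \\widetilde{g}_v(\\omega )$ are easy to check numerically. The sketch below is a minimal illustration of our own; it assumes, consistent with the notation above, that $M_i$ denotes the number of clients with access to arm $i$ and that $\\mu _i$ is the average of the $\\mu _{i,m}$ over those clients:
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, K = 5, 5
S = [{m, (m + 1) % K, (m + 2) % K} for m in range(M)]  # 0-indexed O^(2)
mu = rng.normal(5.0, 1.0, size=(K, M))  # mu[i, m]: mean of arm i at client m
M_i = [sum(i in S[m] for m in range(M)) for i in range(K)]
mu_bar = [np.mean([mu[i, m] for m in range(M) if i in S[m]]) for i in range(K)]

def h(i, omega):
    # (1/M_i^2) * sum over clients m with i in S_m of 1/omega[i, m]
    return sum(1.0 / omega[i, m] for m in range(M) if i in S[m]) / M_i[i] ** 2

best = [max(S[m], key=lambda a: mu_bar[a]) for m in range(M)]
pairs = {(best[m], a) for m in range(M) for a in S[m] if a != best[m]}

def g(omega):        # closed form (REF): min over confusing pairs
    return min((mu_bar[i1] - mu_bar[i2]) ** 2 / 2 / (h(i1, omega) + h(i2, omega))
               for i1, i2 in pairs)

def g_tilde(omega):  # decoupled surrogate
    return min(min((mu_bar[i1] - mu_bar[i2]) ** 2 / 2 / h(i1, omega),
                   (mu_bar[i1] - mu_bar[i2]) ** 2 / 2 / h(i2, omega))
               for i1, i2 in pairs)

# a random allocation in Gamma: each client spreads its budget on its own arms
omega = np.zeros((K, M))
for m in range(M):
    arms = sorted(S[m])
    omega[arms, m] = rng.dirichlet(np.ones(len(arms)))

gv, gt = g(omega), g_tilde(omega)
assert gt / 2 - 1e-12 <= gv <= gt + 1e-12  # the sandwich proved above
print(gv, gt)
\\end{verbatim}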
], [ "Proof of Lemma ", "Define $v_{\\dagger }(\\rho )$ to be a special problem instance in which the arm means are given by $\\mu (v_{\\dagger }(\\rho ))_{i,m} = \\frac{i}{\\sqrt{\\rho }}, \\quad m\\in [M], \\ i \\in S_m.$ Then, it follows that $\\mu (v_{\\dagger }(\\rho ))_{i} = \\frac{i}{\\sqrt{\\rho }}$ for all $i\\in [K]$ .", "The following result will be used in the proof of Lemma REF .", "Lemma 7 Given any $\\rho >0$ , the problem instance $v_{\\dagger }(\\rho )$ , defined in (REF ), satisfies $\\frac{4\\rho }{MK^2} \\le c^*\\big (v_{\\dagger }(\\rho )\\big ) \\le 4K\\rho .$ [Proof of Lemma REF ] Recall the definition of $\\mathcal {C}(\\cdot )$ in (REF ).", "Let $\\Delta _{\\rm min}(\\rho )\\min _{(i,j) \\in \\mathcal {C}(v_{\\dagger }(\\rho ))} \\vert \\mu _i(v_{\\dagger }(\\rho )) - \\mu _j(v_{\\dagger }(\\rho )) \\vert ,$ and let $\\omega ^{\\rm trivial} \\in \\Gamma $ be defined as $\\omega ^{\\rm trivial}_{i,m} = \\frac{1}{\\vert S_m \\vert }, \\quad m\\in [M], \\ i \\in S_m.$ Notice that $c^*(v_{\\dagger }(\\rho ))^{-1} &= \\max _{\\omega \\in \\Gamma }\\ \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i\\in S_m} \\omega _{i,m}\\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2} \\nonumber \\\\&\\le \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i\\in S_m} 1\\cdot \\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2}\\nonumber \\\\&=g_{v_{\\dagger }(\\rho )}(\\mathbf {1}^{K\\times M}),$ where $\\mathbf {1}^{K \\times M}$ denotes the all-ones matrix of dimension $K \\times M$ .", "Also notice that $c^*(v_{\\dagger }(\\rho ))^{-1} &= \\max _{\\omega \\in \\Gamma }\\ \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i\\in S_m} \\omega _{i,m}\\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2} \\nonumber \\\\& \\ge \\inf _{v^{\\prime }\\in {\\rm Alt}(v)}\\ \\sum _{m=1}^{M}\\ \\sum _{i\\in S_m} \\omega _{i,m}^{\\rm trivial}\\, \\frac{\\big (\\mu _{i,m}(v)-\\mu _{i,m}(v^{\\prime })\\big )^2}{2} \\nonumber \\\\&= g_{v_{\\dagger }(\\rho )}(\\omega ^{\\rm trivial}).$ From (REF ) and (REF ), we have $& g_{v_{\\dagger }(\\rho )}(\\omega ^{\\rm trivial}) \\le c^*\\big (v_{\\dagger }(\\rho )\\big )^{-1} \\le g_{v_{\\dagger }(\\rho )}(\\mathbf {1}^{K\\times M}) \\nonumber \\\\& \\stackrel{(a)}{\\Rightarrow } \\min _{(i,j) \\in \\mathcal {C}(v_{\\dagger }(\\rho ))} \\frac{\\big (\\mu _i(v_{\\dagger }(\\rho ))-\\mu _j(v_{\\dagger }(\\rho ))\\big )^2/2}{\\frac{1}{M^2_i}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\frac{1}{\\omega ^{\\rm trivial}_{i,m}} +\\frac{1}{M^2_j}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace j \\in S_m\\rbrace }\\frac{1}{\\omega ^{\\rm trivial}_{j,m}} } \\nonumber \\\\&\\hspace{56.9055pt} \\le c^*\\big (v_{\\dagger }(\\rho )\\big )^{-1} \\le \\min _{(i,j) \\in \\mathcal {C}(v_{\\dagger }(\\rho ))} \\frac{\\big (\\mu _i(v_{\\dagger }(\\rho ))-\\mu _j(v_{\\dagger }(\\rho ))\\big )^2/2}{\\frac{1}{M^2_i}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\frac{1}{1} +\\frac{1}{M^2_j}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace j \\in S_m\\rbrace }\\frac{1}{1} } \\nonumber \\\\& \\stackrel{(b)}{\\Rightarrow } \\frac{\\Delta ^2_{\\rm min}(\\rho )}{4K} \\le c^*\\big (v_{\\dagger }(\\rho )\\big )^{-1} \\le \\frac{M\\Delta ^2_{\\rm min}(\\rho )}{4} \\nonumber \\\\& \\stackrel{(c)}{\\Rightarrow } \\frac{1}{4K\\rho } \\le c^*\\big (v_{\\dagger }(\\rho )\\big )^{-1} \\le \\frac{MK^2}{4\\rho } \\nonumber \\\\& \\Rightarrow \\frac{4\\rho }{MK^2} \\le c^*\\big (v_{\\dagger 
}(\\rho )\\big ) \\le 4K\\rho ,$ where $(a)$ above follows from (REF ) of Lemma REF , in writing $(b)$ , we make use of the observation that for all $m\\in [M]$ and $i\\in S_m$ , $\\frac{1}{\\omega ^{\\rm trivial}_{i,m}} = |S_m| \\le K,$ and $(c)$ makes use of the fact that $\\Delta ^2_{\\rm min}(\\rho ) \\in (\\frac{1}{\\rho }, \\frac{K^2}{\\rho })$ .", "This completes the desired proof.", "[Proof of Lemma REF ] Fix a confidence level $\\delta \\in (0,\\frac{1}{4})$ arbitrarily, and let $\\Pi $ be $\\delta $ -PAC and almost-optimal up to $\\alpha \\ge 1$ .", "Suppose, on the contrary, that $\\limsup _{r\\rightarrow \\infty } \\frac{b_{r+1}}{b_{r}} = +\\infty .$ Then, there exists an increasing sequence $\\lbrace z_l\\rbrace _{l=1}^\\infty $ such that $\\lim _{l\\rightarrow \\infty } \\frac{b_{z_l}}{b_{z_l+1}} = 0$ and $b_{z_l} <b_{{z_l}+1} $ for all $l\\in \\mathbb {N}$ .", "Let $T^*_{\\delta }(v) \\log \\left(\\frac{1}{4\\delta }\\right) \\, c^*(v) $ .", "Let $v^{(l)} v_{\\dagger }\\left(\\frac{\\sqrt{b_{z_l+1}b_{z_l}}}{4\\log (\\frac{1}{4\\delta })}\\right), \\quad \\mbox{for all} \\; l \\in \\mathbb {N}.$ By Lemma REF , we then have $\\frac{\\sqrt{b_{z_l+1}b_{z_l}}}{MK^2\\log (\\frac{1}{4\\delta })} \\le c^*(v^{(l)}) \\le \\frac{K \\sqrt{b_{z_l+1}b_{z_l}}}{\\log (\\frac{1}{4\\delta })}.$ Also, we have $\\frac{\\sqrt{b_{z_l+1}b_{z_l}}}{MK^2} \\le T^*_{\\delta }(v^{(l)}) \\le K \\sqrt{b_{z_l+1}b_{z_l}} .$ Let $E_l \\lbrace \\mbox{empirical best arms } \\hat{a}_\\delta =a^*(v^{(l)}) \\mbox{ and stopping time } \\tau _\\delta (\\Pi ) \\le b_{z_l} \\rbrace , \\quad l \\in \\mathbb {N},$ be the event that (a) $\\hat{a}_\\delta =(\\hat{a}_{\\delta ,m})_{m\\in [M]}$ , the vector of the empirical best arms of the clients at confidence level $\\delta $ , equals the vector $a^*(v^{(l)})$ , and (b) the stopping time $\\tau _\\delta (\\Pi ) \\le b_{z_l}$ .", "From Lemma REF , for any $l\\in \\mathbb {N}$ , we have $\\sum _{t=1}^{b_{z_l}}\\ \\sum _{m=1}^M \\ \\sum _{i \\in S_m} \\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace }\\ D_{\\mathrm {KL}}(v_{i,m}^{(l)} \\Vert v^{\\prime }_{i,m})\\right] \\ge d_{\\mathrm {KL}}\\left(\\mathbb {P}_{v^{(l)}}^{\\Pi }(E_l), \\mathbb {P}_{{v^{\\prime }}}^{\\Pi }(E_l)\\right)$ for all $v^{\\prime } \\in {\\rm Alt}(v^{(l)})$ .", "Note that $\\mathbb {P}_{v^{(l)}}^\\Pi (E_l) & = 1 - \\mathbb {P}_{v^{(l)}}^\\Pi (E_l^c)\\nonumber \\\\&\\stackrel{(a)}{\\ge } 1 - \\mathbb {P}_{v^{(l)}}^\\Pi (\\hat{a}_\\delta \\ne a^*(v^{(l)})) - \\mathbb {P}_{v^{(l)}}^\\Pi (\\tau _\\delta (\\Pi ) > {b}_{z_l}) \\nonumber \\\\& \\stackrel{(b)}{=} 1 - \\mathbb {P}_{v^{(l)}}^\\Pi (\\hat{a}_\\delta \\ne a^*(v^{(l)})) - \\mathbb {P}_{v^{(l)}}^\\Pi (\\tau _\\delta (\\Pi ) \\ge b_{z_l+1}) \\nonumber \\\\& \\stackrel{(c)}{\\ge } 1-\\delta - \\frac{\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _\\delta (\\Pi )]}{b_{z_l+1}} \\nonumber \\\\& \\stackrel{(d)}{\\ge } 1-\\delta - \\frac{\\alpha \\ T^*(v^{(l)})}{b_{z_l+1}} \\nonumber \\\\& \\stackrel{(e)}{\\ge } 1-\\delta - \\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}},$ where $(a)$ above follows from the union bound, $(b)$ follows by noting that $\\mathbb {P}_{v^{(l)}}^\\Pi (\\tau _\\delta (\\Pi ) \\ge b_{z_l+1}) = \\mathbb {P}_{v^{(l)}}^\\Pi (\\tau _\\delta (\\Pi ) > b_{z_l})$ as $\\tau _\\delta (\\Pi ) \\in \\lbrace b_r\\rbrace _{r\\in \\mathbb {N}}$ and $b_{z_l} < b_{{z_l}+1}$ , $(c)$ follows from Markov's inequality and the fact that $\\mathbb {P}_{v^{(l)}}^\\Pi (\\hat{a}_\\delta \\ne a^*(v)) \\le \\delta $ as $\\Pi $ is $\\delta $ 
-PAC, $(d)$ follows from the fact that $\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _\\delta (\\Pi )] \\le \\alpha \\, T^*(v^{(l)})$ as $\\Pi $ is almost-optimal up to the constant $\\alpha $ , and (e) follows from (REF ).", "Because the algorithm $\\Pi $ is $\\delta $ -PAC, it can be shown that $\\mathbb {P}_{v^{\\prime }}^\\Pi (E_l) \\le \\delta $ for all $v^{\\prime } \\in {\\rm Alt}(v^{(l)})$ .", "Continuing with (REF ) and using the fact that $d_{\\mathrm {KL}}(x,y) \\ge \\log \\left(\\frac{1}{4\\,\\delta ^{\\prime }}\\right)$ whenever $x\\ge 1-\\delta ^{\\prime }$ and $y \\le \\delta ^{\\prime }$ (see, for instance, [11]), setting $\\delta ^{\\prime }=\\delta + \\alpha K \\sqrt{b_{z_l}/b_{z_l+1}}$ , we have $&\\inf _{v^{\\prime }\\in {\\rm Alt}(v^{(l)})}\\ \\sum _{t=1}^{b_{z_l}}\\ \\sum _{m=1}^M \\ \\sum _{i \\in S_m} \\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace } \\ D_{\\mathrm {KL}}(v_{i,m}^{(l)} \\Vert v^{\\prime }_{i,m} ) \\right] \\ge \\log \\left(\\frac{1}{4\\delta +4\\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}}}\\right)\\nonumber \\\\&\\Rightarrow b_{z_l} \\inf _{v^{\\prime }\\in {\\rm Alt}(v^{(l)})}\\ \\sum _{t=1}^{b_{z_l}}\\ \\sum _{m=1}^M \\ \\sum _{i \\in S_m} \\frac{\\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace } \\right]}{b_{z_l}} \\cdot \\frac{(\\mu _{i,m}^{(l)}-\\mu ^{\\prime }_{i,m})^2}{2} \\ge \\log \\left(\\frac{1}{4\\delta +4\\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}}}\\right)\\nonumber \\\\& \\stackrel{(a)}{\\Rightarrow }\\frac{b_{z_l}}{c^*(v^{(l)})} \\ge \\log \\left(\\frac{1}{4\\delta +4\\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}}}\\right)\\nonumber \\\\& \\stackrel{(b)}{\\Rightarrow }MK^2\\log \\left(\\frac{1}{4\\delta }\\right)\\ \\sqrt{b_{z_l+1}/b_{z_l}}\\ge \\log \\left(\\frac{1}{4\\delta +4\\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}}}\\right)\\nonumber \\\\& \\Rightarrow 4 \\delta + 4 \\alpha \\,K\\sqrt{\\frac{b_{z_l}}{b_{z_l+1}}} \\ge (4\\delta )^{M^{-1}K^{-2}\\sqrt{b_{z_l}/b_{z_l+1}}}.$ In the above set of inequalities, $(a)$ follows from the definition of $c^*(v^{(l)})$ , and $(b)$ follows from (REF ).", "Letting $l \\rightarrow \\infty $ and using (REF ), we observe that the left-hand side of (REF ) converges to $4\\delta $ , whereas the right-hand side converges to 1, thereby resulting in $4\\delta \\ge 1$ , a contradiction.", "This proves that $\\limsup _{r\\rightarrow \\infty } \\frac{b_{r+1}}{b_{r}} < +\\infty $ , which implies $\\sup _{r \\in \\mathbb {N}} \\frac{b_{r+1}}{b_{r}} < \\eta $ for some $\\eta < +\\infty $ ." 
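To get a feel for the scalings in Lemma 7, the instance $v_{\\dagger }(\\rho )$ can be instantiated directly and $\\Delta ^2_{\\rm min}(\\rho )$ compared with the quantities appearing in steps $(b)$ and $(c)$ of its proof. This is a minimal sketch of our own; the full-overlap pattern is chosen purely for simplicity, and for this instance the minimum squared gap sits exactly at the endpoint $1/\\rho $ :
\\begin{verbatim}
import math

M, K, rho = 5, 5, 10.0
S = [set(range(K)) for _ in range(M)]  # every client sees every arm
mu_bar = [(i + 1) / math.sqrt(rho) for i in range(K)]  # means of v_dagger(rho)

# confusing pairs are (best arm of client m, any other arm in S_m); with
# full overlap the best arm is arm K for every client (index K-1 here)
gaps_sq = [(mu_bar[K - 1] - mu_bar[i]) ** 2 for i in range(K - 1)]
delta_min_sq = min(gaps_sq)

print("Delta_min^2        =", delta_min_sq)  # equals 1/rho here
print("bounds in step (c) :", 1 / rho, K ** 2 / rho)
print("Lemma 7 sandwich   :", 4 * rho / (M * K ** 2), "<= c* <=", 4 * K * rho)
\\end{verbatim}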
], [ "Proof of Lemma ", "Fix $\\delta \\in (0,\\frac{1}{4})$ and $\\beta \\in (0,1)$ arbitrarily.", "Let $\\Pi $ be almost-optimal up to a constant, say $\\alpha \\ge 1$ .", "Let $\\lbrace v^{(l)}\\rbrace _{l=1}^{\\infty }$ be any sequence of problem instances such that $\\lim _{l\\rightarrow \\infty } c^*(v^{(l)})=+\\infty $ , where this sequence must exist because of Lemma REF .", "Let $T_l \\big \\lceil \\left(\\mathbb {E}_{v^{(l)}}^{\\Pi }[\\tau _{\\delta }(\\Pi )]\\right)^\\beta \\big \\rceil $ .", "Because $\\lim _{l\\rightarrow \\infty } c^*(v^{(l)})=+\\infty $ , we have $\\lim _{l \\rightarrow \\infty }\\ \\frac{T_l}{\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _{\\delta }(\\Pi )]} = 0.$ For $l\\in \\mathbb {N}$ , let $F_l \\lbrace \\mbox{empirical best arms } \\hat{a}_\\delta =a^*(v^{(l)}) \\mbox{ and stopping time } \\tau _\\delta (\\Pi ) \\le T_l \\rbrace $ be the event that (a) the vector of empirical best arms matches with the vector of best arms under $v^{(l)}$ , and (b) the stopping time $\\tau _\\delta (\\Pi ) \\le T_l$ .", "Also, let $p_l \\mathbb {P}_{v^{(l)}}^\\Pi \\left(\\tau _\\delta > T_l \\right)$ .", "From Lemma REF , we know that $\\sum _{t=1}^{T_l}\\ \\sum _{m=1}^M \\ \\sum _{i \\in S_m} \\mathbb {E}_{v^{(l)}}^{\\Pi } \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace }\\ D_{\\mathrm {KL}}(v_{i,m}^{(l)} \\Vert v^{\\prime }_{i,m}) \\right] \\ge d_{\\mathrm {KL}}\\left(\\mathbb {P}_{v^{(l)}}^\\Pi (F_l), \\mathbb {P}_{{v^{\\prime }}}^\\Pi (F_l)\\right)$ for all problem instances $v^{\\prime }$ .", "In particular, for $v^{\\prime } \\in {\\rm Alt}(v^{(l)})$ , we note that for any $l\\in \\mathbb {N}$ , $\\mathbb {P}_{v^{(l)}}^\\Pi (F_l) & \\ge 1 - \\mathbb {P}_{v^{(l)}}(\\hat{a}_\\delta \\ne a^*(v^{(l)})) - \\mathbb {P}_{v^{(l)}}(\\tau _\\delta > T_l) \\nonumber \\\\& \\ge 1 - \\delta - p_l.$ Along similar lines, it can be shown that $\\mathbb {P}_{v^{\\prime }}^\\Pi (F_l) \\le \\delta +p_l$ for any $v^{\\prime } \\in {\\rm Alt}(v^{(l)})$ .", "Then, using the fact that $d(x,y)\\ge \\log \\left(\\frac{1}{4\\delta ^{\\prime }}\\right)$ whenever $x \\ge 1-\\delta ^{\\prime }$ and $y \\le \\delta ^{\\prime }$ , setting $\\delta ^{\\prime }=\\delta + p_l$ , we have $& \\inf _{v^{\\prime }\\in {\\rm Alt}(v^{(l)})}\\ \\sum _{t=1}^{T_l}\\ \\sum _{m=1}^M\\ \\sum _{i \\in S_m} \\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace }\\ D_{\\mathrm {KL}}(v_{i,m}^{(l)} \\Vert v^{\\prime }_{i,m}) \\right] \\ge \\log \\left(\\frac{1}{4\\delta + 4p_l}\\right)\\nonumber \\\\& \\Rightarrow T_l\\inf _{v^{\\prime }\\in {\\rm Alt}(v^{(l)})}\\sum _{t=1}^{T_l}\\ \\sum _{m=1}^M\\ \\sum _{i \\in S_m}\\ \\frac{\\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathbf {1}_{\\lbrace A_t(m)=i\\rbrace } \\right]}{T_l} \\cdot \\frac{(\\mu _{i,m}^{(l)}-\\mu ^{\\prime }_{i,m})^2}{2} \\ge \\log \\left(\\frac{1}{4\\delta + 4p_l}\\right)\\nonumber \\\\& \\Rightarrow \\frac{T_l}{c^*(v^{(l)})} \\ge \\log \\left(\\frac{1}{4\\delta + 4p_l}\\right)$ for all $l\\in \\mathbb {N}$ , where the last line above follows from the definition of $c^*(v^{(l)})$ .", "Because $\\Pi $ is almost-optimal up to constant $\\alpha \\ge 1$ , we have $c^*(v^{(l)}) \\log \\left(\\frac{1}{4\\delta }\\right) \\le \\mathbb {E}_{v^{(l)}}^\\Pi (\\tau _{\\delta }(\\Pi )) \\le \\alpha \\, c^*(v^{(l)}) \\, \\log \\left(\\frac{1}{4\\delta }\\right) \\quad \\text{for all }l\\in \\mathbb {N}.$ Combining (REF ) and (REF ), we get $\\frac{T_l}{\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _{\\delta }(\\Pi )]} \\ge \\frac{\\log \\left(\\frac{1}{4\\delta + 
4p_l}\\right)}{\\alpha \\,\\log \\left(\\frac{1}{4\\delta }\\right)} \\quad \\text{for all }l\\in \\mathbb {N}.$ Suppose now that there exists $\\epsilon \\in \\left(0, \\frac{1}{4}-\\delta \\right)$ such that $\\liminf _{l \\rightarrow \\infty }\\ \\mathbb {P}_{v^{(l)}}^\\Pi \\left( \\log \\left(\\tau _{\\delta }(\\Pi ) \\right) > \\beta \\log \\left(\\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\tau _{\\delta }(\\Pi )\\right] \\right) \\right) \\le \\frac{1}{4} - \\delta - \\epsilon .$ This implies from the definitions of $T_l$ and $p_l$ that there exists an increasing sequence $\\lbrace l_n:n\\ge 1\\rbrace $ such that $p_{l_n} \\le \\frac{1}{4}-\\delta -\\epsilon $ for all $n\\ge 1$ .", "Using this in (REF ), we get that $\\limsup _{l \\rightarrow \\infty }\\ \\frac{T_l}{\\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\tau _{\\delta }(\\Pi )\\right]} \\ge \\limsup _{n \\rightarrow \\infty }\\ \\frac{T_{l_n}}{\\mathbb {E}_{v^{(l_n)}}^\\Pi \\left[\\tau _{\\delta }(\\Pi )\\right]} \\ge \\frac{\\log \\left(\\frac{1}{1-4\\epsilon }\\right)}{\\alpha \\log \\left(\\frac{1}{4\\delta }\\right)} > 0,$ which clearly contradicts (REF ).", "This proves that there is no $\\epsilon \\in \\left(0, \\frac{1}{4}-\\delta \\right)$ such that (REF ) holds, thereby establishing the desired result." ], [ "Proof of Theorem ", "Fix a sequence of problem instances $\\lbrace v^{(l)}\\rbrace _{l=1}^{\\infty }$ with $\\lim _{l\\rightarrow \\infty } c^*(v^{(l)})=+\\infty $ , a confidence level $\\delta \\in (0,\\frac{1}{4})$ , and an algorithm $\\Pi $ that is almost optimal up to a constant, say $\\alpha \\ge 1$ .", "From Lemma REF , we know that there exists $\\eta >0$ such that $\\mathfrak {r}_\\delta (\\Pi ) \\ge \\log _\\eta (\\tau _\\delta (\\Pi )) \\quad \\text{almost surely}.$ Also, from Lemma REF , we know that for any $\\beta \\in (0,1)$ and any sequence of problem instances $\\lbrace v^{(l)}\\rbrace _{l=1}^\\infty $ with $\\lim _{l \\rightarrow \\infty } c^*(v^{(l)})=+\\infty $ , $\\liminf _{l\\rightarrow \\infty }\\ \\mathbb {P}_{v^{(l)}}^{\\Pi }\\left(\\log \\left(\\tau _{\\delta }(\\Pi ) \\right) > \\beta \\log \\left(\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _{\\delta }(\\Pi )] \\right) \\right) \\ge \\frac{1}{4} -\\delta .$ Using (REF ) in (REF ), we have $&\\liminf _{l\\rightarrow \\infty }\\ \\mathbb {P}_{v^{(l)}}^\\Pi \\left(\\mathfrak {r}_\\delta (\\Pi ) >\\beta \\, \\log _\\eta \\left(\\mathbb {E}_{v^{(l)}}^\\Pi [\\tau _{\\delta }(\\Pi )] \\right) \\right) \\ge \\frac{1}{4} -\\delta \\quad \\forall \\ \\beta \\in (0,1)\\nonumber \\\\& \\stackrel{(a)}{\\Rightarrow } \\liminf _{l\\rightarrow \\infty }\\ \\mathbb {P}_{v^{(l)}}^\\Pi \\left(\\mathfrak {r}_\\delta (\\Pi ) >\\beta \\, \\log _\\eta \\Big ( \\log \\Big (\\frac{1}{4\\delta }\\Big ) c^*(v^{(l)}) \\Big ) \\right) \\ge \\frac{1}{4} -\\delta \\quad \\forall \\ \\beta \\in (0,1)\\nonumber \\\\& \\stackrel{(b)}{\\Rightarrow } \\liminf _{l\\rightarrow \\infty }\\ \\frac{\\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathfrak {r}_\\delta (\\Pi ) \\right]}{\\log _\\eta \\left( \\,\\log \\left(\\frac{1}{4\\delta }\\right) c^*(v^{(l)}) \\right)} \\ge \\beta \\,\\left( \\frac{1}{4} -\\delta \\right) \\quad \\forall \\ \\beta \\in (0,1)\\nonumber \\\\& \\stackrel{(c)}{\\Rightarrow } \\liminf _{l\\rightarrow \\infty }\\ \\frac{\\mathbb {E}_{v^{(l)}}^\\Pi \\left[\\mathfrak {r}_\\delta (\\Pi ) \\right]}{\\log _\\eta \\left( \\,\\log \\left(\\frac{1}{4\\delta }\\right) c^*(v^{(l)}) \\right)} \\ge \\frac{1}{4} -\\delta .$ In the above set of inequalities, $(a)$ follows from Proposition REF and 
the hypothesis that $\\Pi $ is $\\delta $ -PAC, $(b)$ follows from Markov's inequality, and $(c)$ follows from $(b)$ by letting $\\beta \\rightarrow 1$ .", "The desired result is thus established." ], [ "Proof of Theorem ", "Below, we record some important results that will be useful for proving Theorem REF .", "Lemma 8 ([12]) Let $Y_1,Y_2,\\ldots $ be independent Gaussian random variables with mean $\\mu $ and unit variance.", "Let $\\hat{\\mu }_n \\frac{1}{n} \\sum _{i=1}^n Y_i$ .", "Then, $\\mathbb {P} \\left( \\exists \\, n \\in \\mathbb {N}: \\frac{n}{2}(\\hat{\\mu }_n-\\mu )^2 \\ge \\log (1/\\delta ) + \\log (n(n+1)) \\right) \\le \\delta .$ Lemma 9 Fix $n \\in \\mathbb {N}$ .", "Let $Y_1,Y_2,\\ldots ,Y_n$ be independent random variables with $\\mathbb {P}(Y_i \\le y) \\le y$ for all $y\\in [0,1]$ and $i\\in [n]$ .", "Then, for any $\\epsilon > 0$ , $\\mathbb {P} \\bigg ( \\sum _{i=1}^n \\log (1/Y_i) \\ge \\epsilon \\bigg ) \\le f_n(\\epsilon )$ where $f_n:(0,+\\infty ) \\rightarrow (0,1) $ is defined by $f_n(x) = \\sum _{i=1}^{n} \\frac{x^{i-1}e^{-x}}{(i-1)!", "}, \\quad x \\in (0, +\\infty ).$ [Proof of Lemma REF ] First, for $i\\in [n]$ we define the random variable $Z_i F_{i}(Y_i)$ , where $F_{i}$ is the cumulative distribution function (CDF) of $Y_i$ .", "Clearly, $Z_i$ is a uniform random variable.", "Notice that $\\mathbb {P}(Y_i \\le y) \\le y = \\mathbb {P}(Z_i \\le y)$ for all $y\\in (0,1)$ , from which it follows that $\\mathbb {P} \\bigg ( \\sum _{i=1}^n \\log (1/Y_i) \\ge \\epsilon \\bigg ) \\le \\mathbb {P} \\bigg ( \\sum _{i=1}^n \\log (1/Z_i) \\ge \\epsilon \\bigg ).$ Therefore, it suffices to prove Lemma REF for the case when $Y_1, \\ldots , Y_n$ are independent and uniformly distributed on $[0,1]$ .", "Suppose that this is indeed the case.", "Then, we note that $\\mathbb {P} \\big (\\sum _{i=1}^n \\log (1/Y_i) \\ge \\epsilon \\big )=\\mathbb {P} \\big ( \\prod _{i=1}^n Y_i \\le \\exp (-\\epsilon ) \\big ) $ .", "Let $h_s(x) \\mathbb {P} \\big ( \\prod _{i=1}^s Y_i \\le x \\big )$ for $s\\in [n]$ and $x\\in (0, 1)$ .", "We then have $h_1(x) &= x, \\nonumber \\\\\\forall \\ s>1, \\quad h_s(x) &= \\int _{0}^1 h_{s-1}\\big (\\min \\lbrace x/y, 1\\rbrace \\big )\\ \\mathrm {d}y = x+ \\int _{x}^1 h_{s-1}(x/y) \\ \\mathrm {d}y.", "\\nonumber $ Using mathematical induction, we demonstrate below that $h_s(x) = \\sum _{i=1}^{s} \\frac{(\\log \\frac{1}{x})^{i-1}x}{(i-1)!", "}$ for all $s \\in [n]$ and $x\\in (0,1)$ .", "Base case: It is easy to verify that (REF ) holds for $s=1$ .", "For $s=2$ , we have $h_2(x) =x+ \\int _{x}^1 h_1\\left(\\frac{x}{y}\\right)\\ \\mathrm {d}y = x+ \\int _{x}^1 \\frac{x}{y}\\ \\mathrm {d}y = x + \\log (1/x)\\,x,$ thus verifying that (REF ) holds for $s=2$ .", "Induction step: Suppose now that (REF ) holds for $s=k$ for some $k > 2$ .", "Then, $h_{k+1}(x) & = x+ \\int _{x}^1 h_{k}(x/y)\\ \\mathrm {d}y \\nonumber \\\\& \\stackrel{(a)}{=} x+ \\int _{x}^1 \\sum _{i=1}^{k} \\frac{(\\log \\frac{y}{x})^{i-1} (\\frac{x}{y}) }{(i-1)!}", "\\ \\mathrm {d}y \\nonumber \\\\& = x+ \\sum _{i=1}^{k} \\int _{x}^1 \\frac{(\\log \\frac{y}{x})^{i-1} (\\frac{x}{y}) }{(i-1)!}", "\\ \\mathrm {d}y \\nonumber \\\\& =x+ \\sum _{i=1}^{k} \\frac{x}{(i-1)!", "}\\ \\int _{x}^1 \\frac{(\\log \\frac{y}{x})^{i-1}}{y}\\ \\mathrm {d}y \\nonumber \\\\& \\stackrel{(b)}{=} x+ \\sum _{i=1}^{k} \\frac{x}{(i-1)!}", "\\int _{1}^{1/x} \\frac{(\\log y^{\\prime })^{i-1}}{y^{\\prime }} \\ \\mathrm {d}y^{\\prime } \\nonumber \\\\& \\stackrel{(c)}{=}x+ \\sum _{i=1}^{k} \\frac{x}{(i-1)!}", 
"\\frac{(\\log \\frac{1}{x})^i}{i} \\nonumber \\\\& = \\sum _{i=1}^{k+1} \\frac{(\\log \\frac{1}{x})^{i-1}x}{(i-1)!", "}, $ where $(a)$ follows from the induction hypothesis, in writing $(b)$ above, we set $y^{\\prime } = y/x$ , and $(c)$ follows by noting that $\\int \\frac{(\\log y)^j}{y} \\, \\mathrm {d}y = \\frac{1}{j+1} (\\log y)^{j+1}.$ This demonstrates that (REF ) holds for $s=k+1$ .", "Finally, we note that $\\mathbb {P} \\bigg ( \\sum _{i=1}^n \\log (1/Y_i) \\ge \\epsilon \\bigg ) & = h_n\\big (\\exp (-\\epsilon )\\big ) \\nonumber \\\\& = \\sum _{i=1}^{n} \\nonumber \\frac{\\epsilon ^{i-1}e^{-\\epsilon }}{(i-1)!}", "\\nonumber \\\\& = f_n(\\epsilon ),$ thus establishing the desired result.", "With the above ingredients in place, we are now ready to prove Theorem REF .", "[Proof of Theorem REF ] Fix a confidence level $\\delta \\in (0,1)$ and a problem instance $v \\in \\mathcal {P}$ arbitrarily.", "We claim that $\\tau _\\delta (\\Pi _{\\mathrm {Het-TS}}) < + \\infty $ almost surely; a proof of this is deferred until the proof of Lemma REF .", "Assuming that the preceding fact is true, for $m\\in [M]$ and $i\\in S_m$ , let $ \\xi _{i,m} \\sup _{t \\ge K} \\frac{N_{i,m}(t)}{2}\\big (\\hat{\\mu }_{i,m}(t)-\\mu _{i,m}(v)\\big )^2 - \\log \\big (N_{i,m}(t)(N_{i,m}(t)+1)\\big ).$ From Lemma REF , we know that for any confidence level $\\delta ^{\\prime } \\in (0,1)$ , $\\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}} (\\xi _{i,m} \\ge \\log (1/\\delta ^{\\prime })) \\le \\delta ^{\\prime }.$ Let $\\xi ^{\\prime }_{i,m} \\exp (-\\xi _{i,m})$ .", "Recall that $K^{\\prime } = \\sum _{m=1}^M \\vert S_m \\vert $ .", "From Lemma REF , we know that for any $\\epsilon >0$ , $& \\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}} \\left(\\sum _{m\\in [M]}\\ \\sum _{i\\in S_m}\\ \\log (1/\\xi ^{\\prime }_{i,m}) \\ge \\epsilon \\right) \\le f_{K^{\\prime }}(\\epsilon ) \\nonumber \\\\& \\stackrel{(a)}{\\Rightarrow } \\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}} \\left(\\sum _{m\\in [M]}\\ \\sum _{i\\in S_m}\\ \\xi _{i,m} \\ge \\epsilon \\right) \\le f_{K^{\\prime }}(\\epsilon ) \\nonumber \\\\& \\stackrel{(b)}{\\Rightarrow } \\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}} \\left(\\sum _{m\\in [M]}\\ \\sum _{i\\in S_m}\\ \\xi _{i,m} \\ge \\epsilon \\right) \\le f(\\epsilon ) \\nonumber \\\\& \\stackrel{(c)}{\\Rightarrow } \\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}} \\left(\\sum _{m\\in [M]}\\ \\sum _{i\\in S_m}\\ \\xi _{i,m} \\ge f^{-1}(\\delta ) \\right) \\le \\delta , $ where $(a)$ above follows from the definition of $\\xi ^{\\prime }_{i,m}$ , $(b)$ follows from the definition of $f$ in (REF ), and in writing $(c)$ , we (i) make use of the fact that $f$ is continuous and strictly decreasing and therefore admits an inverse, and (ii) set $\\epsilon = f^{-1}(\\delta )$ .", "Eq.", "(REF ) then implies $& \\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}} \\bigg (\\forall \\, t\\ge K \\ \\sum _{m\\in [M]} \\sum _{i\\in S_m}\\frac{N_{i,m}(t)}{2}\\big (\\hat{\\mu }_{i,m}(t)-\\mu _{i,m}(v)\\big )^2 \\le K^{\\prime }\\log \\big (t(t+1)\\big ) + f^{-1}(\\delta ) \\bigg ) \\ge 1-\\delta \\nonumber \\\\& \\Rightarrow \\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}} \\bigg (\\forall \\, t\\ge K \\ \\sum _{m\\in [M]} \\sum _{i\\in S_m}\\frac{N_{i,m}(t)}{2}\\big (\\hat{\\mu }_{i,m}(t)-\\mu _{i,m}(v)\\big )^2 \\le \\beta (t, \\delta ) \\bigg ) \\ge 1-\\delta .$ Note that at the stopping time $\\tau _\\delta (\\Pi _{\\mathrm {Het-TS}})$ , we must have $\\inf _{v^{\\prime } \\in {\\rm Alt}(\\hat{v}(\\tau _\\delta ))} \\sum _{m \\in [M]} \\sum 
_{i\\in S_m} N_{i,m}(\\tau _\\delta ) \\frac{(\\mu _{i,m}(v^{\\prime }) - \\hat{\\mu }_{i,m}(\\tau _\\delta ))^2}{2} > \\beta (\\tau _\\delta , \\delta ).$ Thus, we may write (REF ) equivalently as $\\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}} \\left( v \\notin {\\rm Alt} \\left(\\hat{v}(\\tau _\\delta (\\Pi _{\\mathrm {Het-TS}}))\\right) \\right) \\ge 1 - \\delta $ , which is identical to $\\mathbb {P}_v^{\\Pi _{\\mathrm {Het-TS}}}\\left( a^*(v) = a^*(\\hat{v}(\\tau _\\delta (\\Pi _{\\mathrm {Het-TS}}))) \\right) \\ge 1 - \\delta $ .", "This completes the proof." ], [ "Proof of Theorem ", "Lemma 10 ([19] as cited in [20]) Let $f: S \\times \\Theta \\rightarrow \\mathbb {R} $ be a continuous function, and $\\mathcal {D}:\\Theta \\rightrightarrows S$ be a compact-valued continuous correspondence.", "Let $f^*: \\Theta \\rightarrow \\mathbb {R} $ and $D^*: \\Theta \\rightrightarrows S $ be defined by $f^{*}(\\theta )=\\max \\lbrace f(x, \\theta ) : x \\in \\mathcal {D}(\\theta )\\rbrace $ and $\\mathcal {D}^{*}(\\theta )=\\operatornamewithlimits{argmax}\\lbrace f(x, \\theta ) : x \\in \\mathcal {D}(\\theta )\\rbrace =\\lbrace x \\in \\mathcal {D}(\\theta ):f(x, \\theta )=f^{*}(\\theta )\\rbrace .$ Then $f^{*}$ is a continuous function on $\\Theta $ , and $\\mathcal {D}^{*}$ is a compact-valued, upper hemicontinuous correspondence on $\\Theta $ .", "Lemma 11 ([6]) A singleton-valued correspondence is upper hemicontinuous if and only if it is lower hemicontinuous, in which case it is continuous as a function.", "Lemma 12 Let $f$ be as defined in (REF ).", "Then, $f^{-1}(\\delta )=(1+o(1)) \\log (1/\\delta )$ as $\\delta \\rightarrow 0$ , i.e., $\\lim _{\\delta \\rightarrow 0} \\frac{\\log (1/\\delta )}{f^{-1}(\\delta )} =1.$ Let $x = f^{-1}(\\delta )$ .", "Then, $\\lim _{\\delta \\rightarrow 0} \\frac{\\log (1/\\delta )}{f^{-1}(\\delta )}& \\stackrel{(a)}{=} \\lim _{x \\rightarrow +\\infty } \\frac{\\log (\\frac{1}{f(x)})}{x} \\nonumber \\\\& \\stackrel{(b)}{=} \\lim _{x \\rightarrow +\\infty } \\frac{-f^{\\prime }(x)/f(x)}{1} \\nonumber \\\\& \\stackrel{(c)}{=} \\lim _{x \\rightarrow +\\infty } \\frac{ \\frac{x^{K^{\\prime }-1}e^{-x}}{(K^{\\prime }-1)!", "}}{\\sum _{i=1}^{K^{\\prime }} \\frac{x^{i-1}e^{-x}}{(i-1)!}}", "\\nonumber \\\\& = \\lim _{x \\rightarrow +\\infty } \\frac{ \\frac{x^{K^{\\prime }-1}}{(K^{\\prime }-1)!", "}}{\\sum _{i=1}^{K^{\\prime }} \\frac{x^{i-1}}{(i-1)!}}", "\\nonumber \\\\& = 1,$ where $(a)$ above follows the fact that $x \\rightarrow \\infty $ as $\\delta \\rightarrow 0$ , $(b)$ follows from the L'Hospital's rule, and $(c)$ makes use of the fact that $f^{\\prime }(x)=\\frac{-x^{K^{\\prime }-1}e^{-x}}{(K^{\\prime }-1)!", "}$ .", "This completes the proof.", "Before proceeding further, we introduce some additional notations.", "For any $j \\in [L]$ and $m \\in [M]$ , let $\\Lambda _m^{(j)} {\\left\\lbrace \\begin{array}{ll}{\\Lambda }_m, & \\text{if }S_m \\subseteq Q_j, \\\\\\lbrace \\mathbf {0}^K\\rbrace , & \\text{otherwise},\\end{array}\\right.", "}$ where $\\mathbf {0}^K$ denotes the all-zeros vector of dimension $K$ .", "For each $j\\in [L]$ , noting that $ \\prod _{i=1}^M \\Lambda _i^{(j)}:=\\Lambda _1^{(j)}\\times \\ldots \\Lambda _M^{(j)}$ is compact and that the mapping $\\omega \\mapsto \\widetilde{g}^{(j)}_v(\\omega )$ is continuous, there exists a solution to $\\max _{\\omega \\in \\prod _{i=1}^M \\Lambda _i^{(j)}} \\widetilde{g}^{(j)}_v(\\omega )$ .", "Let $\\widetilde{\\omega }^{(j)}(v) \\in \\operatornamewithlimits{argmax}_{\\omega \\in \\prod _{i=1}^M 
\\Lambda _i^{(j)} } \\widetilde{g}^{(j)}_v(\\omega ).$ Further, let $\\widetilde{\\omega }(v) \\triangleq \\sum _{j=1}^L \\widetilde{\\omega }^{(j)}(v)$ .", "Then, it is easy to verify that $\\widetilde{\\omega } (v) \\in \\Gamma $ is a common solution to $\\max _{\\omega \\in \\Gamma } \\widetilde{g}_v(\\omega ), \\max _{\\omega \\in \\Gamma } \\widetilde{g}^{(1)}_v(\\omega ), \\ldots , \\max _{\\omega \\in \\Gamma } \\widetilde{g}^{(L)}_v(\\omega ).$ Note that such a common solution is unique (we defer the proof of this fact to Theorem REF ), which then implies that the solution to $\\operatornamewithlimits{argmax}_{\\omega \\in \\prod _{i=1}^M \\Lambda _i^{(j)} } \\widetilde{g}^{(j)}_v(\\omega )$ is unique.", "Hence, $\\widetilde{\\omega }^{(j)}(v)$ and $\\widetilde{\\omega }(v)$ are well-defined.", "Lemma 13 Given any problem instance $v\\in \\mathcal {P}$ , under $\\Pi _{\\mathrm {Het-TS}}$ , $\\lim _{t \\rightarrow \\infty } \\Vert \\widetilde{\\omega }(\\hat{v}(t)) - \\widetilde{\\omega }(v) \\Vert _\\infty =0 \\quad \\text{almost surely}.$ Consequently, for any $m\\in [M]$ and $i\\in S_m$ , $\\lim _{t \\rightarrow \\infty } \\left|\\frac{N_{i,m}(t)}{t} - \\widetilde{\\omega }(v)_{i,m} \\right|=0 \\quad \\text{almost surely}.$ Fix $j\\in [L]$ and $v\\in \\mathcal {P}$ arbitrarily.", "By the strong law of large numbers, it follows that for any $m\\in [M]$ and $i\\in S_m$ , $& \\lim _{t \\rightarrow \\infty } \\hat{\\mu }_{i,m}(t) = \\mu _{i,m}(v) \\quad \\text{almost surely} \\nonumber \\\\& \\Rightarrow \\lim _{t \\rightarrow \\infty } \\hat{\\mu }_{i}(t) = \\mu _{i}(v) \\quad \\text{almost surely} \\\\& \\Rightarrow \\lim _{t \\rightarrow \\infty } \\Delta _i(\\hat{v}(t)) = \\Delta _i(v) \\quad \\text{almost surely}.$ For any $v^{\\prime }\\in \\mathcal {P}$ , note that $\\widetilde{g}^{(j)}_{v^{\\prime }}(\\omega )$ is a function of $(\\Delta (v^{\\prime }),\\omega )$ for $\\Delta (v^{\\prime }) \\in (\\mathbb {R}^{+})^K$ and $\\omega \\in \\prod _{i=1}^M \\Lambda _i^{(j)}$ .", "It follows from Lemma REF and Lemma REF that for any $\\epsilon _1 >0$ , there exists $\\epsilon _2 >0$ such that for all $v^{\\prime } \\in \\mathcal {P}$ with $\\Vert \\Delta (v)-\\Delta (v^{\\prime }) \\Vert _\\infty \\le \\epsilon _2$ , $\\Vert \\widetilde{\\omega }^{(j)}(v) - \\widetilde{\\omega }^{(j)}(v^{\\prime }) \\Vert _\\infty \\le \\epsilon _1.$ Combining () and (REF ), it follows that $\\lim _{t \\rightarrow \\infty } \\Vert \\widetilde{\\omega }^{(j)}(v) - \\widetilde{\\omega }^{(j)}(\\hat{v}(t)) \\Vert _\\infty =0 \\quad \\text{almost surely},$ which in turn implies that $\\lim _{t \\rightarrow \\infty } \\Vert \\widetilde{\\omega }(v) - \\widetilde{\\omega }(\\hat{v}(t)) \\Vert _\\infty =0$ almost surely.", "Recall the definition of $\\hat{\\omega }_{i,m}(t)$ in (REF ), which means that $\\hat{\\omega }_{i,m}(t) =\\widetilde{\\omega }_{i,m}\\big (\\hat{v}(b_{r(t)})\\big ) $ .", "Then, by (REF ), for any $m\\in [M]$ and $i\\in S_m$ we have $\\lim _{t \\rightarrow \\infty } \\Vert \\widetilde{\\omega }(v) - \\hat{\\omega }(t) \\Vert _\\infty =0$ almost surely.", "Consequently, by [5], for any $m\\in [M]$ and $i\\in S_m$ , $\\lim _{t \\rightarrow \\infty } \\left|\\frac{N_{i,m}(t)}{t} - \\widetilde{\\omega }(v)_{i,m} \\right|=0 \\quad \\text{almost surely}.$ Lemma 14 Given any problem instance $v\\in \\mathcal {P}$ , under $\\Pi _{\\mathrm {Het-TS}}$ , $\\lim _{t \\rightarrow \\infty } \\frac{Z(t)}{t} = g_v(\\widetilde{\\omega }(v))\\quad \\text{almost surely}.$ Fix a problem instance $v\\in \\mathcal {P}$
arbitrarily.", "Define $\\hat{N}(t) \\in \\Gamma $ as $\\hat{N}_{i,m}(t) \\frac{N_{i,m}(t)}{ t}, \\quad i \\in S_m, \\ m \\in [M].$ Then, $\\frac{Z(t)}{t} & = \\inf _{v^{\\prime } \\in {\\rm Alt}(\\hat{v}(t))}\\ \\sum _{m=1}^M \\ \\sum _{i\\in S_m} \\frac{N_{i,m}(t)}{t}\\ \\frac{(\\mu _{i,m}(v^{\\prime }) - \\hat{\\mu }_{i,m}(t))^2}{2} \\nonumber \\\\& = \\inf _{v^{\\prime } \\in {\\rm Alt}(\\hat{v}(t))}\\ \\sum _{m=1}^M \\ \\sum _{i\\in S_m} {\\hat{N}_{i,m}(t)}\\ \\frac{(\\mu _{i,m}(v^{\\prime }) - \\hat{\\mu }_{i,m}(t))^2}{2} \\nonumber \\\\& = g_{\\hat{v}(t)}(\\hat{N}(t)) \\nonumber \\\\& \\stackrel{(a)}{=} \\min _{(i,j) \\in \\mathcal {C}(\\hat{v}(t))} \\frac{\\big (\\hat{\\mu }_i(t)-\\hat{\\mu }_j(t)\\big )^2/2}{\\frac{1}{M^2_i}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\frac{1}{\\hat{N}_{i,m}(t)} +\\frac{1}{M^2_j}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace j \\in S_m\\rbrace }\\frac{1}{\\hat{N}_{j,m}(t)} }, $ where $(a)$ follows from (REF ) of Lemma REF .", "Because $v \\in \\mathcal {P}$ and $\\lim _{t \\rightarrow \\infty } \\hat{\\mu }_{i}(t) = \\mu _{i}\\ \\text{almost surely}$ for all $i\\in [K]$ from (REF ), we get that $\\lim _{t \\rightarrow \\infty } \\mathcal {C}(\\hat{v}(t)) = \\mathcal {C}(v) \\quad \\text{almost surely},$ where $\\mathcal {C}(\\cdot )$ is as defined in (REF ).", "Combining (REF ), (REF ), (REF ), (REF ), and Lemma REF , we get that almost surely, $\\lim _{t \\rightarrow \\infty }\\ \\frac{Z(t)}{t} & = \\lim _{t\\rightarrow \\infty }\\min _{(i,j) \\in \\mathcal {C}(\\hat{v}(t))} \\frac{\\big (\\hat{\\mu }_i(t)-\\hat{\\mu }_j(t)\\big )^2/2}{\\frac{1}{M^2_i}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\frac{1}{\\hat{N}_{i,m}(t)} +\\frac{1}{M^2_j}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace j \\in S_m\\rbrace }\\frac{1}{\\hat{N}_{j,m}(t)} } \\nonumber \\\\& =\\min _{(i,j) \\in \\mathcal {C}(v)} \\frac{\\big ({\\mu }_i(v)-{\\mu }_j(v)\\big )^2/2}{\\frac{1}{M^2_i}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace }\\frac{1}{\\widetilde{\\omega }_{i,m}(v)} +\\frac{1}{M^2_j}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace j \\in S_m\\rbrace }\\frac{1}{\\widetilde{\\omega }_{j,m}(v)} } \\nonumber \\\\& = g_{v}(\\widetilde{\\omega }(v)).$ This completes the desired proof.", "Lemma 15 Given any confidence level $\\delta \\in (0,1)$ , $\\tau _\\delta (\\Pi _{\\mathrm {Het-TS}}) < +\\infty \\quad \\text{almost surely}.$ As a consequence of Lemma REF , we have $\\lim _{t \\rightarrow \\infty } \\frac{\\beta (t, \\delta )}{Z(t)}& = \\lim _{t \\rightarrow \\infty } \\frac{K^{\\prime }\\log (t^2+t)+ f^{-1}(\\delta )}{t \\frac{Z(t)}{t}} \\nonumber \\\\& = \\lim _{t \\rightarrow \\infty } \\frac{K^{\\prime }\\log (t^2+t)+ f^{-1}(\\delta )}{t g_v(\\widetilde{\\omega }(v))} \\nonumber \\\\& = 0 \\quad \\text{almost surely}.$ Therefore, there almost surely exists $0<T<+\\infty $ such that $Z(t) > \\beta (t, \\delta )$ for all $t \\ge T$ , thus proving that $\\tau _\\delta (\\Pi _{\\textsc {Het}-\\textsc {TS}})$ is finite almost surely.", "Lemma 16 Given any problem instance $v \\in \\mathcal {P}$ and $\\epsilon \\in \\left(0, g_v(\\widetilde{\\omega }(v)) \\right)$ , there exists $\\delta _{\\rm upper}(v,\\epsilon ) >0$ such that for any $\\delta \\in (0, \\delta _{\\rm upper}(v,\\epsilon ))$ , $t\\,g_v(\\widetilde{\\omega }(v)) > \\beta (t, \\delta ) + t\\,\\epsilon $ for all $t \\ge T_{\\rm last}(v,\\delta , \\epsilon )$ , where $T_{\\rm last}(v,\\delta , \\epsilon ) \\frac{f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } + K^{\\prime }\\log 
\\left(\\left(\\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }\\right)^2 + \\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\right) \\frac{1}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } +1.$ Fix $v \\in \\mathcal {P}$ and $\\epsilon \\in \\left(0, g_v(\\widetilde{\\omega }(v)) \\right)$ arbitrarily.", "Recall that $\\beta (t, \\delta ) = K^{\\prime }\\log (t^2+t) + f^{-1}(\\delta )$ .", "To prove Lemma REF , it suffices to verify that: 1) the derivative of the left-hand side of (REF ) with respect to $t$ is greater than that of the right-hand side of (REF ) for all $t \\ge T_{\\rm last}(v, \\delta , \\epsilon )$ ; and 2) Eq. (REF ) holds for $t=T_{\\rm last}(v, \\delta , \\epsilon )$ .", "In order to verify that the condition in item 1 above holds, we note from Lemma REF that $\\lim _{\\delta \\rightarrow 0} f^{-1}(\\delta )=+\\infty $ , as a consequence of which we get that there exists $\\delta _{\\rm upper}(v, \\epsilon )>0$ such that for all $\\delta \\in (0, \\delta _{\\rm upper}(v, \\epsilon ))$ , $T_{\\rm last}(v, \\delta , \\epsilon ) < \\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }$ and $\\frac{f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\ge \\frac{3K^{\\prime }}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }.$ Notice that the derivative of the left-hand side of (REF ) with respect to $t$ is equal to $g_v(\\widetilde{\\omega }(v))$ , whereas that of the right-hand side of (REF ) is equal to $\\frac{K^{\\prime }(2+\\frac{1}{t})}{t + 1}+\\epsilon $ .", "Hence, to verify the condition in item 1, we need to demonstrate that $g_v(\\widetilde{\\omega }(v))-\\epsilon > \\frac{K^{\\prime }(2+\\frac{1}{t})}{t + 1} \\quad \\text{for all } t \\ge T_{\\rm last}(v,\\delta , \\epsilon ).$ We note that for all $t \\ge T_{\\rm last}(v,\\delta , \\epsilon )$ , $t+1 & > t\\nonumber \\\\& \\ge T_{\\rm last}(v,\\delta , \\epsilon )\\nonumber \\\\& \\ge \\frac{f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }\\nonumber \\\\& \\ge \\frac{3K^{\\prime }}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\nonumber \\\\& \\ge \\frac{(2+\\frac{1}{t})K^{\\prime }}{g_v(\\widetilde{\\omega }(v)) - \\epsilon },$ where in writing the last line above, we use the fact that $3 \\ge 2+\\frac{1}{t}$ whenever $t \\ge 1$ .", "We then obtain (REF ) upon rearranging (REF ) and using the fact that $\\epsilon >0$ .", "This verifies the condition in item 1.", "To verify the condition in item 2 above, we note that for all $T_{\\rm last}(v, \\delta , \\epsilon ) \\le t \\le \\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }$ , we have $t & \\ge T_{\\rm last}(v, \\delta , \\epsilon )\\nonumber \\\\& > \\frac{f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } + K^{\\prime }\\log \\left(\\left(\\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }\\right)^2 + \\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\right) \\frac{1}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\nonumber \\\\& \\ge \\frac{f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } + K^{\\prime }\\log \\left(t^2 +t \\right) \\frac{1}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }.$ Equivalently, upon rearranging the terms in (REF ), we get $t\\,g_v(\\widetilde{\\omega }(v)) > \\beta (t, \\delta ) + t\\,\\epsilon $ for all $T_{\\rm last}(v, \\delta , \\epsilon ) \\le t \\le \\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }$ .", "In particular, noting that (REF ) holds for $t=T_{\\rm last}(v,
\\delta , \\epsilon )$ verifies the condition in item 2, and thereby completes the proof.", "With the above ingredients in place, we are now ready to prove Theorem REF .", "[Proof of Theorem REF ] Fix a problem instance $v\\in \\mathcal {P}$ arbitrarily.", "Given any $\\epsilon >0$ , let $T_{\\rm cvg}(v,\\epsilon )$ denote the smallest positive integer such that $\\left|\\frac{Z(t)}{t} -g_v(\\widetilde{\\omega }(v)) \\right|\\le \\epsilon \\quad \\forall \\ t \\ge T_{\\rm cvg}(v,\\epsilon ).$ From Lemma REF , we know that $T_{\\rm cvg}(v,\\epsilon ) < +\\infty $ almost surely.", "Therefore, for any $\\epsilon \\in \\left(0, g_v(\\widetilde{\\omega }(v)) \\right)$ and $\\delta \\in (0, \\delta _{\\rm upper}(v,\\epsilon ))$ , it follows from Lemma REF that $Z(t) > \\beta (t, \\delta ) \\quad \\forall \\, t \\ge \\max \\big \\lbrace T_{\\rm cvg}(v,\\epsilon ),T_{\\rm last}(v,\\delta , \\epsilon ), K\\big \\rbrace \\quad \\text{almost surely},$ where $\\delta _{\\rm upper}(v,\\epsilon )$ and $T_{\\rm last}(v,\\delta , \\epsilon )$ are as defined in Lemma REF .", "Recall that $b_r = \\lceil (1+\\lambda )^r \\rceil $ in the Het-TS algorithm.", "From (REF ), it follows that $\\tau _{\\delta }(\\Pi _{\\mathrm {Het-TS}}) \\le (1+\\lambda ) \\max \\big \\lbrace T_{\\rm cvg}(v,\\epsilon ),T_{\\rm last}(v,\\delta , \\epsilon ), K \\big \\rbrace +1 \\quad \\text{almost surely}$ for any $\\epsilon \\in \\left(0, g_v(\\widetilde{\\omega }(v)) \\right) $ and $\\delta \\in (0, \\delta _{\\rm upper}(v,\\epsilon ))$ , which implies that $\\tau _{\\delta }(\\Pi _{\\mathrm {Het-TS}}) \\le (1+\\lambda ) \\,T_{\\rm cvg}(v,\\epsilon ) + (1+\\lambda )\\, T_{\\rm last}(v,\\delta , \\epsilon ) + (1+\\lambda )K +1 \\quad \\text{almost surely}.$ Then, for any $\\epsilon \\in \\left(0, g_v(\\widetilde{\\omega }(v)) \\right) $ , the following set of relations hold almost surely: $& \\limsup _{\\delta \\rightarrow 0} \\frac{\\tau _{\\delta }(\\Pi _{\\mathrm {Het-TS}})}{\\log \\left(\\frac{1}{\\delta }\\right)} \\nonumber \\\\&\\le \\limsup _{\\delta \\rightarrow 0} \\frac{(1+\\lambda )\\, T_{\\rm cvg}(v,\\epsilon ) + (1+\\lambda ) \\, T_{\\rm last}(v,\\delta , \\epsilon ) + (1+\\lambda )K +1 }{\\log \\left(\\frac{1}{\\delta }\\right)} \\nonumber \\\\&\\stackrel{(a)}{=} \\limsup _{\\delta \\rightarrow 0} \\frac{ (1+\\lambda )\\, T_{\\rm last}(v,\\delta , \\epsilon ) }{\\log \\left(\\frac{1}{\\delta }\\right)} \\nonumber \\\\&\\stackrel{(b)}{=} \\limsup _{\\delta \\rightarrow 0} \\frac{ (1+\\lambda )\\, \\left(\\frac{f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } + K^{\\prime }\\log \\left(\\left(\\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }\\right)^2 + \\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\right) \\frac{1}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } +1 \\right) }{\\log \\left(\\frac{1}{\\delta }\\right)} \\nonumber \\\\& \\stackrel{(c)}{=} \\frac{1+\\lambda }{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\nonumber \\\\& \\stackrel{(d)}{\\le } \\frac{1+\\lambda }{\\frac{1}{2}c^*(v)^{-1} - \\epsilon },$ where $(a)$ follows from the fact that $T_{\\rm cvg}(v,\\epsilon )$ is not a function of $\\delta $ and that $T_{\\rm cvg}(v,\\epsilon ) < +\\infty $ almost surely, $(b)$ follows from the definition of $T_{\\rm last}(v,\\delta , \\epsilon )$ , $(c)$ follows from Lemma REF , and $(d)$ makes use of Lemma REF .", "Letting $\\epsilon \\rightarrow 0$ in (REF ), we get $\\limsup _{\\delta \\rightarrow 0} \\frac{\\tau _{\\delta }(\\Pi _{\\mathrm 
{Het-TS}})}{\\log \\left(\\frac{1}{\\delta }\\right)} \\le 2\\, (1+\\lambda )\\, c^*(v) \\quad \\text{almost surely}.$ Taking expectation on both sides of (REF ), we get $\\mathbb {E}_v^{\\Pi _{\\mathrm {Het-TS}}} [\\tau _{\\delta }(\\Pi _{\\mathrm {Het-TS}})] \\le (1+\\lambda ) \\, \\mathbb {E}_v^{\\Pi _{\\mathrm {Het-TS}}} [T_{\\rm cvg}(v,\\epsilon )] + (1+\\lambda )\\, T_{\\rm last}(v,\\delta , \\epsilon ) +1$ for all $\\epsilon \\in \\left(0, g_v(\\widetilde{\\omega }(v)) \\right) $ and $\\delta \\in (0, \\delta _{\\rm upper}(v,\\epsilon ))$ , from which it follows that $& \\limsup _{\\delta \\rightarrow 0} \\frac{\\mathbb {E}_v^{\\Pi _{\\mathrm {Het-TS}}} [\\tau _{\\delta }(\\Pi _{\\mathrm {Het-TS}})]}{\\log \\left(\\frac{1}{\\delta }\\right)} \\nonumber \\\\&\\le \\limsup _{\\delta \\rightarrow 0} \\frac{(1+\\lambda )\\, \\mathbb {E}_v^{\\Pi _{\\mathrm {Het-TS}}} [T_{\\rm cvg}(v,\\epsilon )] + (1+\\lambda ) \\, T_{\\rm last}(v,\\delta , \\epsilon ) + (1+\\lambda )K +1 }{\\log \\left(\\frac{1}{\\delta }\\right)} \\nonumber \\\\&\\stackrel{(a)}{=} \\limsup _{\\delta \\rightarrow 0} \\frac{ (1+\\lambda ) \\, T_{\\rm last}(v,\\delta , \\epsilon ) }{\\log \\left(\\frac{1}{\\delta }\\right)} \\nonumber \\\\&\\stackrel{(b)}{=} \\limsup _{\\delta \\rightarrow 0} \\frac{ (1+\\lambda )\\left(\\frac{f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } + K^{\\prime }\\log \\left(\\left(\\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon }\\right)^2 + \\frac{2f^{-1}(\\delta )}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\right) \\frac{1}{g_v(\\widetilde{\\omega }(v)) - \\epsilon } +1 \\right) }{\\log \\left(\\frac{1}{\\delta }\\right)} \\nonumber \\\\& \\stackrel{(c)}{=} \\frac{1+\\lambda }{g_v(\\widetilde{\\omega }(v)) - \\epsilon } \\nonumber \\\\& \\stackrel{(d)}{\\le } \\frac{1+\\lambda }{\\frac{1}{2}c^*(v)^{-1} - \\epsilon },$ where $(a)$ follows from the fact that $\\mathbb {E}_v^{\\Pi _{\\mathrm {Het-TS}}} [T_{\\rm cvg}(v,\\epsilon )]$ does not depend on $\\delta $ and that $\\mathbb {E}_v^{\\Pi _{\\mathrm {Het-TS}}}[T_{\\rm cvg}(v,\\epsilon )] < +\\infty $ , $(b)$ follows from the definition of $T_{\\rm last}(v,\\delta , \\epsilon )$ , $(c)$ follows from Lemma REF , and $(d)$ makes use of Lemma REF .", "Letting $\\epsilon \\rightarrow 0$ in (REF ), we get $\\limsup _{\\delta \\rightarrow 0} \\frac{\\mathbb {E}_v^{\\Pi _{\\mathrm {Het-TS}}} (\\tau _{\\delta }(\\Pi _{\\mathrm {Het-TS}}))}{\\log \\left(\\frac{1}{\\delta }\\right)} \\le 2\\, (1+\\lambda ) \\, c^*(v).$ This completes the desired proof." 
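The asymptotics in the two chains above rest on Lemma 12, i.e., on $f^{-1}(\\delta )=(1+o(1)) \\log (1/\\delta )$ . Since $f$ is continuous and strictly decreasing, $f^{-1}$ can be evaluated by bisection; the sketch below (our own illustration, with an arbitrary choice of $K^{\\prime }$ ) shows the ratio $f^{-1}(\\delta )/\\log (1/\\delta )$ approaching 1, albeit slowly, as $\\delta \\rightarrow 0$ :
\\begin{verbatim}
import math

def f(x, K_prime):
    # f(x) = sum_{i=1}^{K'} x^(i-1) e^(-x) / (i-1)!  (strictly decreasing)
    return sum(x ** (i - 1) * math.exp(-x) / math.factorial(i - 1)
               for i in range(1, K_prime + 1))

def f_inverse(delta, K_prime, lo=1e-9, hi=1e6):
    # bisection; valid since f decreases continuously from 1 to 0
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid, K_prime) > delta:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

K_prime = 15  # e.g. sum_m |S_m| for the pattern O^(2) with M = 5 clients
for delta in (1e-2, 1e-4, 1e-8, 1e-16, 1e-32):
    print(delta, f_inverse(delta, K_prime) / math.log(1 / delta))
\\end{verbatim}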
], ["Proof of Theorem ", "Fix a problem instance $v \\in \\mathcal {P}$ arbitrarily.", "Recall that there exists a common solution to (REF ) (see the discussion after (REF )).", "The following results show that this solution satisfies good conditions 1 and 2 and that it is unique.", "Lemma 17 The common solution to (REF ) satisfies good condition 2.", "Let $\\widetilde{\\omega }(v)$ be a common solution to (REF ).", "Let $Q^{\\rm min}_j \\triangleq \\operatornamewithlimits{argmin}_{i \\in Q_j} \\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}}, \\quad j \\in [L].$ Suppose $\\widetilde{\\omega }(v)$ does not meet good condition 2.", "Then, there exists $l \\in [L]$ such that $Q_l \\ne Q^{\\rm min}_l$ .", "We now recursively construct an $\\omega \\in \\Gamma $ such that $ \\widetilde{g}^{(l)}_v(\\omega ) > \\widetilde{g}^{(l)}_v(\\widetilde{\\omega }(v)) $ , thereby leading to a contradiction.", "Step 1: Initialization.", "Set $\\omega ^{(0)} \\triangleq \\widetilde{\\omega }(v)$ and $Q^{(0)} \\triangleq Q^{\\rm min}_l$ .", "Step 2: Iterations.", "For each $s\\in \\lbrace 0,1,2,\\ldots , |Q_l^{\\rm min}|-1\\rbrace $ , note that there exists $i_1 \\in Q^{(s)}$ , $i_2 \\in Q_l \\setminus Q^{(s)}$ , and $m^{\\prime } \\in [M]$ such that $i_1,i_2 \\in S_{m^{\\prime }}$ .", "Let $\\epsilon > 0$ be sufficiently small so that $\\frac{\\Delta ^2_{i_2}(v)}{\\frac{1}{M^2_{i_2}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_2 \\in S_m\\rbrace } \\frac{1}{\\omega ^{(s)}_{i_2,m}-\\epsilon }} > \\min _{i \\in Q_{l}} \\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)} } =\\widetilde{g}^{(l)}_v(\\widetilde{\\omega }(v)) .$ Then, we construct $\\omega ^{(s+1)}$ as $\\forall \\, m\\in [M], i\\in S_m, \\ \\ \\omega ^{(s+1)}_{i,m} \\triangleq {\\left\\lbrace \\begin{array}{ll}\\omega ^{(s)}_{i,m}-\\epsilon ,\\ &\\text{if } i=i_2, m=m^{\\prime } \\\\\\omega ^{(s)}_{i,m}+\\epsilon ,\\ &\\text{if } i=i_1, m=m^{\\prime }, \\\\\\omega ^{(s)}_{i,m},\\ &\\text{otherwise},\\end{array}\\right.}$ and set $Q^{(s+1)} \\triangleq Q^{(s)} \\setminus \\lbrace i_1\\rbrace $ .", "We then have $\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\omega ^{(s+1)}_{i,m}}} > \\widetilde{g}^{(l)}_v(\\widetilde{\\omega }(v)) \\quad \\forall \\, i \\in Q_l \\setminus Q^{(s+1)},$ and $\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\omega ^{(s+1)}_{i,m}}} = \\widetilde{g}^{(l)}_v(\\widetilde{\\omega }(v)) \\quad \\forall \\, i \\in Q^{(s+1)}.$ By following the above procedure for $s\\in \\lbrace 0, 1, 2, \\ldots , |Q_l^{\\rm min}|-1\\rbrace $ , we arrive at $\\omega ^{(\\vert Q^{\\rm min}_l \\vert )}$ such that $ \\widetilde{g}^{(l)}_v(\\omega ^{(\\vert Q^{\\rm min}_l \\vert )}) > \\widetilde{g}^{(l)}_v(\\widetilde{\\omega }(v)) $ , which is clearly a contradiction.", "Lemma 18 The common solution to (REF ) satisfies good condition 1.", "Let $\\widetilde{\\omega }(v)$ be a common solution to (REF ).", "From Lemma REF , we know that $\\widetilde{\\omega }(v)$ satisfies good condition 2.", "Suppose now that $\\widetilde{\\omega }(v)$ does not satisfy good condition 1.", "Then, there exists $l \\in [L]$ , $m_1,m_2 \\in [M]$ , and $i_1, i_2 \\in S_{m_1} \\cap S_{m_2} \\subseteq Q_{l}$ such that $\\frac{\\widetilde{\\omega }_{i_1,m_1}(v)}{\\widetilde{\\omega }_{i_2,m_1}(v)} >
\\frac{\\widetilde{\\omega }_{i_1,m_2}(v)}{\\widetilde{\\omega }_{i_2,m_2}(v)}.$ Because $\\widetilde{\\omega }(v)$ satisfies good condition 2, we must have $\\frac{\\Delta ^2_{i_2}(v)}{\\frac{1}{M^2_{i_2}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_2 \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i_2,m}(v)}} = \\frac{\\Delta ^2_{i_1}(v)}{\\frac{1}{M^2_{i_1}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_1 \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i_1,m}(v)}} = \\widetilde{g}^{(l)}_v(\\widetilde{\\omega }(v)).$ Note that (REF ) implies that $\\frac{\\widetilde{\\omega }_{i_2,m_1}(v)}{\\widetilde{\\omega }_{i_2,m_2}(v)} < \\frac{\\widetilde{\\omega }_{i_1,m_1}(v)}{\\widetilde{\\omega }_{i_1,m_2}(v)} $ .", "Let $\\rho $ be any value such that $\\left(\\frac{\\widetilde{\\omega }_{i_2,m_1}(v)}{\\widetilde{\\omega }_{i_2,m_2}(v)}\\right)^2 < \\rho < \\left(\\frac{\\widetilde{\\omega }_{i_1,m_1}(v)}{\\widetilde{\\omega }_{i_1,m_2}(v)}\\right)^2.$ Using the fact that the derivative of $x \\mapsto \\frac{1}{x}$ is $ -\\frac{ 1}{x^2}$ , we have $\\frac{1}{\\widetilde{\\omega }_{i_1,m_1}(v) - \\rho \\epsilon } - \\frac{1}{\\widetilde{\\omega }_{i_1,m_1}(v)} = \\frac{\\rho \\epsilon }{\\widetilde{\\omega }^2_{i_1,m_1}(v)} + o(\\epsilon )\\quad \\mbox{as }\\epsilon \\rightarrow 0,$ and $\\frac{1}{\\widetilde{\\omega }_{i_1,m_2}(v)} - \\frac{1}{\\widetilde{\\omega }_{i_1,m_2}(v) + \\epsilon } = \\frac{\\epsilon }{\\widetilde{\\omega }^2_{i_1,m_2}(v)} + o(\\epsilon )\\quad \\mbox{as }\\epsilon \\rightarrow 0.$ Here $o(\\epsilon )$ is a function in $\\epsilon $ that satisfies $\\lim _{\\epsilon \\rightarrow 0}\\frac{o(\\epsilon )}{\\epsilon }=0$ .", "By combining these equations, $&\\frac{1}{\\widetilde{\\omega }_{i_1,m_1}(v) - \\rho \\epsilon } - \\frac{1}{\\widetilde{\\omega }_{i_1,m_1}(v)}-\\bigg ( \\frac{1}{\\widetilde{\\omega }_{i_1,m_2}(v)} - \\frac{1}{\\widetilde{\\omega }_{i_1,m_2}(v) + \\epsilon } \\bigg ) \\nonumber \\\\& = \\frac{\\rho \\epsilon }{\\widetilde{\\omega }^2_{i_1,m_1}(v)} + o(\\epsilon ) - \\bigg (\\frac{\\epsilon }{\\widetilde{\\omega }^2_{i_1,m_2}(v)} + o(\\epsilon ) \\bigg )\\\\&=\\epsilon \\bigg [ \\frac{\\rho }{\\widetilde{\\omega }^2_{i_1,m_1}(v)} \\big (1+o_\\epsilon (1) \\big ) - \\frac{1}{\\widetilde{\\omega }^2_{i_1,m_2}(v)} \\big (1+o_\\epsilon (1) \\big ) \\bigg ] $ where $o_\\epsilon (1)$ is a term that vanishes as $\\epsilon \\downarrow 0$ .", "From (REF ), we have, $\\rho < \\frac{\\widetilde{\\omega }_{i_1, m_1}^2(v)}{\\widetilde{\\omega }_{i_1, m_2}^2(v)} & \\Longleftrightarrow \\frac{\\rho }{\\widetilde{\\omega }_{i_1, m_1}^2(v)} < \\frac{1}{\\widetilde{\\omega }_{i_1, m_2}^2(v)}$ By (REF ) and (REF ), there exists $\\epsilon _1>0$ such that for all $\\epsilon \\in (0,\\epsilon _1]$ , $\\frac{1}{\\widetilde{\\omega }_{i_1,m_1}(v) - \\rho \\epsilon } - \\frac{1}{\\widetilde{\\omega }_{i_1,m_1}(v)}-\\bigg ( \\frac{1}{\\widetilde{\\omega }_{i_1,m_2}(v)} - \\frac{1}{\\widetilde{\\omega }_{i_1,m_2}(v) + \\epsilon } \\bigg )<0.$ In other words, for all $\\epsilon \\in (0,\\epsilon _1]$ , $\\frac{1}{\\widetilde{\\omega }_{i_1,m_1}(v)} + \\frac{1}{\\widetilde{\\omega }_{i_1,m_2}(v)} > \\frac{1}{\\widetilde{\\omega }_{i_1,m_1}(v) - \\rho \\epsilon } + \\frac{1}{\\widetilde{\\omega }_{i_1,m_2}(v) + \\epsilon }.$ Similarly, there exists $\\epsilon _2 >0 $ for any $\\epsilon \\in (0, \\epsilon _2]$ such that $\\frac{1}{\\widetilde{\\omega }_{i_2,m_1} (v)} + \\frac{1}{\\widetilde{\\omega }_{i_2,m_2}(v)} >\\frac{1}{\\widetilde{\\omega }_{i_2,m_1}(v) +\\rho \\epsilon } + 
\\frac{1}{\\widetilde{\\omega }_{i_2,m_2}(v) - \\epsilon } .$ Set $\\epsilon =\\min \\lbrace \\epsilon _1, \\epsilon _2\\rbrace $ .", "Let $\\omega ^{\\prime } \\in \\Gamma $ be defined as $\\forall \\, m\\in [M], i\\in S_m, \\ \\ \\omega ^{\\prime }_{i,m} {\\left\\lbrace \\begin{array}{ll}\\widetilde{\\omega }_{i,m}(v)-\\rho \\epsilon ,\\ &\\text{if } i=i_1, m=m_1 \\\\\\widetilde{\\omega }_{i,m}(v)+\\epsilon ,\\ &\\text{if } i=i_1, m=m_2 \\\\\\widetilde{\\omega }_{i,m}(v) + \\rho \\epsilon ,\\ &\\text{if } i=i_2, m=m_1 \\\\\\widetilde{\\omega }_{i,m}(v) - \\epsilon ,\\ &\\text{if } i=i_2, m=m_2 \\\\\\widetilde{\\omega }_{i,m}(v),\\ &\\text{otherwise}.\\end{array}\\right.", "}$ Then, from (REF ) and (REF ), we have $\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{{\\omega }^{\\prime }_{i,m}}} >\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}} \\quad \\forall \\, i \\in \\lbrace i_1,i_2\\rbrace ,$ and $\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{{\\omega }^{\\prime }_{i,m}}} =\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}} \\quad \\forall \\, i \\in Q_l \\setminus \\lbrace i_1,i_2\\rbrace .$ We then consider the following two cases.", "Case 1: $Q_{l} = \\lbrace i_1,i_2\\rbrace $ .", "In this case, it follows from (REF ) that $\\widetilde{g}^{(l)}_v(\\omega ^{\\prime }) >\\widetilde{g}^{(l)}_v(\\widetilde{\\omega }(v))$ , which contradicts with the fact that $\\widetilde{\\omega }(v)$ is an optimum solution to $\\max _{\\omega \\in \\Gamma } \\widetilde{g}_v(\\omega )$ .", "Case 2: $ \\lbrace i_1,i_2\\rbrace \\subsetneq Q_{l}$ .", "In this case, it follows from (REF ) that $\\widetilde{g}^{(j)}_v(\\omega ^{\\prime }) = \\widetilde{g}^{(j)}_v(\\widetilde{\\omega }(v))$ for all $ j \\in [L]$ , which implies that $\\omega ^{\\prime }$ is a common solution to (REF ) just as $\\widetilde{\\omega }(v)$ is.", "However, note that the right-hand sides of (REF ) and (REF ) are equal because $\\widetilde{\\omega }(v)$ satisfies good condition 2.", "As a result, it follows that $\\frac{\\Delta ^2_{i_1}(v)}{\\frac{1}{M^2_{i_1}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_1 \\in S_m\\rbrace } \\frac{1}{\\omega ^{\\prime }_{i_1,m}}} >\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{{\\omega }^{\\prime }_{i,m}}} \\quad \\forall \\, i \\in Q_l \\setminus \\lbrace i_1, i_2\\rbrace .$ This shows that $\\omega ^{\\prime }$ does not meet good condition 2, thereby contradicting Lemma REF .", "Lemma 19 The common solution to (REF ) is unique.", "Suppose that $\\widetilde{\\omega }(v)$ and $\\widetilde{\\omega }^{\\prime }(v)$ are two common solutions to (REF ).", "Suppose further that $\\widetilde{\\omega }(v) \\ne \\widetilde{\\omega }^{\\prime }(v)$ .", "In the following, we arrive at a contradiction.", "Let $\\widetilde{\\omega }^{\\rm avg}(v) (\\widetilde{\\omega }(v)+\\widetilde{\\omega }^{\\prime }(v))/2$ .", "From Lemma REF , we know that $\\widetilde{\\omega }(v)$ and $\\widetilde{\\omega }^{\\prime }(v)$ meet good condition 2.", "This implies that $\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}} = \\frac{\\Delta 
^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }^{\\prime }_{i,m}(v)}} \\quad \\forall \\, i \\in [K],$ which in turn implies that ${\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}} = {\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }^{\\prime }_{i,m}(v)}}.$ Using the relation $\\frac{2}{(a+b)/2} < \\frac{1}{a}+\\frac{1}{b}$ whenever $a,b>0$ and $a \\ne b$ , we get that $\\frac{2}{\\widetilde{\\omega }^{\\rm avg}_{i,m}(v)} < \\frac{1}{\\widetilde{\\omega }_{i,m}(v)} + \\frac{1}{\\widetilde{\\omega }^{\\prime }_{i,m}(v)} \\quad \\forall \\, m\\in [M], \\, i\\in S_m \\; \\text{ such that }\\widetilde{\\omega }_{i,m}(v) \\ne \\widetilde{\\omega }^{\\prime }_{i,m}(v).$ Let $Q_{\\rm diff} \\lbrace \\iota \\in [K]: \\exists \\, m, \\widetilde{\\omega }_{\\iota ,m}(v) \\ne \\widetilde{\\omega }^{\\prime }_{\\iota ,m}(v)\\rbrace $ .", "As a consequence of (REF ), for all $i\\in Q_{\\rm diff} $ , we have $& {\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{2}{\\widetilde{\\omega }^{\\rm avg}_{i,m}(v)}} <{\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}} + {\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }^{\\prime }_{i,m}(v)}} \\nonumber \\\\& \\stackrel{(a)}{\\Rightarrow } {\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }^{\\rm avg}_{i,m}(v)}} < {\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}} \\nonumber \\\\& \\Rightarrow \\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }^{\\rm avg}_{i,m}(v)}} > \\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}}, $ where $(a)$ follows from (REF ).", "In addition, it is clear to observe that for all $i \\in [K] \\setminus Q_{\\rm diff}$ , $\\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }^{\\rm avg}_{i,m}(v)}} = \\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }_{i,m}(v)}}.$ By (REF ) and (REF ) we know that $\\widetilde{g}^{(j)}_v(\\widetilde{\\omega }^{\\rm avg}(v)) \\ge \\widetilde{g}^{(j)}_v(\\widetilde{\\omega }(v))$ for each $j\\in [L]$ , which implies that $\\widetilde{\\omega }^{\\rm avg}(v)$ is a common solution to (REF ).", "Now, there must exist $l \\in [L]$ such that $Q_{\\rm diff} \\cap Q_l \\ne \\emptyset $ , and then we consider two cases.", "Case 1: $Q_{\\rm diff} \\cap Q_l = Q_l$ .", "Eq.", "(REF ) implies that $\\widetilde{g}^{(l)}_v(\\widetilde{\\omega }^{\\rm avg}(v)) > \\widetilde{g}^{(l)}_v(\\widetilde{\\omega }(v))$ , which contradicts the fact that $\\widetilde{\\omega }(v)$ and $\\widetilde{\\omega }^{\\rm avg}(v)$ are the common solution to (REF ).", "Case 2: $Q_{\\rm diff} \\cap Q_l \\subsetneq Q_l$ .", "Note that the right-hand side of (REF ) and (REF ) are equal because $\\widetilde{\\omega }(v)$ meets good condition 2.", "Hence, there exists $i_1,i_2 \\in Q_l$ such that $\\frac{\\Delta ^2_{i_1}(v)}{\\frac{1}{M^2_{i_1}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i_1 \\in S_m\\rbrace } \\frac{1}{\\widetilde{\\omega }^{\\rm 
avg}_{i_1,m}(v)}} >\frac{\Delta ^2_{i_2}(v)}{\frac{1}{M^2_{i_2}}\sum _{m=1}^M \mathbf {1}_{\lbrace i_2 \in S_m\rbrace } \frac{1}{\widetilde{\omega }^{\rm avg}_{i_2,m}(v)}}$ which means that $\widetilde{\omega }^{\rm avg}(v)$ violates good condition 2, a contradiction to Lemma REF .", "Finally, Theorem REF follows from Lemma REF , Lemma REF , and Lemma REF ." ], [ "Proof of Proposition ", "Before proving Proposition REF , we first supply the proofs of Lemma REF and Lemma REF ." ], [ "Proof of Lemma ", "Fix $j \in [L]$ and a problem instance $v$ arbitrarily.", "Recall that $\widetilde{\omega }(v)$ is the unique common solution to (REF ) and $G(v)$ is the global vector characterising $\widetilde{\omega }(v)$ uniquely (via (REF )).", "From Lemma REF , we know that $\widetilde{\omega }(v)$ satisfies good condition 2, which implies that for any $i \in Q_j$ , $& \frac{\Delta ^2_{i}(v)}{\frac{1}{M^2_{i}}\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }(v)_{i,m}} } =\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v)) \nonumber \\& \Rightarrow \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{1}{\widetilde{\omega }(v)_{i,m}} }{M^2_{i} \Delta ^2_{i}(v)} =\frac{1}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \stackrel{(a)}{\Rightarrow } \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } \frac{\sum _{\iota \in S_m}G(v)_\iota }{G(v)_i}}{M^2_{i} \Delta ^2_{i}(v)} =\frac{1}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \Rightarrow \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i \in S_m\rbrace } {\sum _{\iota \in S_m}G(v)_\iota }}{M^2_{i} \Delta ^2_{i}(v)} =\frac{{G(v)_i}}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))} \nonumber \\& \Rightarrow \sum _{\iota \in Q_j} G(v)_\iota \left( \frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i,\iota \in S_m\rbrace } }{M^2_{i} \Delta ^2_{i}(v)} \right) =\frac{{G(v)_i}}{\widetilde{g}_{v}^{(j)}(\widetilde{\omega }(v))}.", "$ In the above set of implications, $(a)$ follows from (REF ).", "Noting that (REF ) is akin to (REF ) completes the desired proof."
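, "As an editorial illustration that is not part of the original argument: the last display says that $G(v)$ restricted to $Q_j$ can be read off numerically as the positive, unit-norm eigenvector of the matrix $H^{(j)}(v)$ whose $({\rm Idx}(i),{\rm Idx}(\iota ))$ entry is $\frac{\sum _{m=1}^M \mathbf {1}_{\lbrace i,\iota \in S_m\rbrace }}{M^2_{i}\Delta ^2_{i}(v)}$ .", "The following minimal numpy sketch does this on an invented toy instance (the arm sets, the gaps, and the reading of $M_i$ as the number of clients holding arm $i$ are illustrative assumptions, not data from the paper):", "```python
import numpy as np

# Invented toy instance: arms Q_j = {0,1,2,3}, clients' arm sets S_m, gaps
# Delta_i(v); M_i is read here as the number of clients holding arm i.
S = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
Delta = np.array([1.0, 0.8, 1.2, 0.9])
M_i = np.array([sum(i in Sm for Sm in S) for i in range(4)], dtype=float)

# H^(j)(v)[i, l] = (# clients holding both i and l) / (M_i^2 * Delta_i(v)^2)
H = np.array([[sum((i in Sm) and (l in Sm) for Sm in S) / (M_i[i]**2 * Delta[i]**2)
               for l in range(4)] for i in range(4)])

vals, vecs = np.linalg.eig(H)
k = int(np.argmax(np.abs(vals)))       # Perron root, i.e. 1 / g_v^(j)(omega(v))
G = vecs[:, k].real
G = np.abs(G) / np.linalg.norm(G)      # unit-norm eigenvector, positive entries
print("1/g =", vals[k].real, " G^(j)(v) =", G)
```"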
], [ "Proof of Lemma ", "Fix $j \\in [L]$ and a problem instance $v \\in \\mathcal {P}$ arbitrarily.", "It is easy to verify that $G^{(j)}(v)$ has strictly positive entries (else, $\\widetilde{g}_v({\\widetilde{\\omega }(v)})=0$ according to (REF )).", "Suppose that $\\mathbf {u}\\in \\mathbb {R}^{\\vert Q_j \\vert }$ is another eigenvector of $H^{(j)}(v)$ corresponding to the eigenvalue $\\frac{1}{g_v^{(j)}(\\widetilde{\\omega }(v))}$ and $\\lbrace \\mathbf {u},\\, G^{(j)}(v)\\rbrace $ is linearly independent.", "Let $\\mathbf {u}^{\\prime } G^{(j)}(v) + \\epsilon \\mathbf {u},$ where $\\epsilon > 0$ is any number such that each entry of $\\mathbf {u}^{\\prime }$ is strictly positive.", "Let $\\omega ^{\\prime } \\in \\Gamma $ be defined as $\\forall \\, m\\in [M], i\\in S_m, \\ \\ \\omega ^{\\prime }_{i,m} ={\\left\\lbrace \\begin{array}{ll}\\end{array}\\frac{\\mathbf {u}^{\\prime }_{{\\rm Idx}(i)}}{\\sum _{\\iota \\in S_m } \\mathbf {u}^{\\prime }_{{\\rm Idx}(\\iota )}}\\ &\\text{if}\\ i\\in Q_j \\\\\\widetilde{\\omega }(v)_{i,m}\\ &\\text{otherwise},\\right.", "}$ where for any $i\\in Q_j$ , ${\\rm Idx}(i) \\in [\\vert Q_j \\vert ]$ represents the index of arm $i$ within the arms set $Q_j$ .", "Then, it follows from the definition of $\\omega ^{\\prime }$ that $\\widetilde{g}_{v}^{(l)}(\\widetilde{\\omega }(v)) = \\widetilde{g}_{v}^{(l)}(\\omega ^{\\prime })$ for all $l \\ne j$ and $\\omega ^{\\prime } \\ne \\widetilde{\\omega }(v)$ .", "Note that $\\mathbf {u}^{\\prime }$ is also an eigenvector of $H^{(j)}(v)$ corresponding to the eigenvalue $\\frac{1}{g_v^{(j)}(\\widetilde{\\omega }(v))}$ .", "This means that for all $i\\in Q_j$ , $& \\sum _{\\iota \\in Q_j} \\mathbf {u}^{\\prime }_{{\\rm Idx}(\\iota )} \\left( \\frac{\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i,\\iota \\in S_m\\rbrace } }{M^2_{i} \\Delta ^2_{i}(v)} \\right) =\\frac{ \\mathbf {u}^{\\prime }_{{\\rm Idx}(i)}}{\\widetilde{g}_{v}^{(j)}(\\widetilde{\\omega }(v))} \\nonumber \\\\& \\Rightarrow \\frac{\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } {\\sum _{\\iota \\in S_m}\\mathbf {u}^{\\prime }_{{\\rm Idx}(\\iota )}}}{M^2_{i} \\Delta ^2_{i}(v)} =\\frac{{\\mathbf {u}^{\\prime }_{{\\rm Idx}(i)}}}{\\widetilde{g}_{v}^{(j)}(\\widetilde{\\omega }(v))} \\nonumber \\\\& \\Rightarrow \\frac{\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{\\sum _{\\iota \\in S_m}\\mathbf {u}^{\\prime }_{{\\rm Idx}(\\iota )}}{\\mathbf {u}^{\\prime }_{{\\rm Idx}(i)}}}{M^2_{i} \\Delta ^2_{i}(v)} =\\frac{1}{\\widetilde{g}_{v}^{(j)}(\\widetilde{\\omega }(v))} \\nonumber \\\\& \\Rightarrow \\frac{\\Delta ^2_{i}(v)}{\\frac{1}{M^2_{i}}\\sum _{m=1}^M \\mathbf {1}_{\\lbrace i \\in S_m\\rbrace } \\frac{1}{\\omega ^{\\prime }_{i,m}} } =\\widetilde{g}_{v}^{(j)}(\\widetilde{\\omega }(v)).$ From (REF ), it is clear that $\\widetilde{g}_{v}^{(l)}(\\widetilde{\\omega }(v)) = \\widetilde{g}_{v}^{(l)}(\\omega ^{\\prime })$ for all $l \\in [L]$ , which contradicts Lemma REF .", "Thus, there is no eigenvector $\\mathbf {u}\\in \\mathbb {R}^{\\vert Q_j \\vert }$ of $H^{(j)}(v)$ corresponding to the eigenvalue $\\frac{1}{g_v^{(j)}(\\widetilde{\\omega }(v))}$ such that $\\mathbf {u}$ and $G^{(j)}(v)>0$ are linearly independent.", "This completes the desired proof." 
], [ "Proof of Proposition ", "Let $\\mathbf {v}$ be any eigenvector of $H^{(j)}(v)$ whose eigenvalue is not equal to $\\frac{1}{g_v^{(j)}(\\widetilde{\\omega }(v))}$ .", "Because $H^{(j)}(v)$ is a normal matrix, its eigenvectors corresponding to distinct eigenvalues are orthogonal [7].", "This implies that $\\langle \\mathbf {v}, G^{(j)}(v) \\rangle =0$ , where $\\langle \\cdot ,\\cdot \\rangle $ denotes the vector inner product operator.", "Note that $G^{(j)}(v)$ has strictly positive entries.", "Therefore, the entries of $\\mathbf {v}$ cannot be all positive or all negative.", "From Lemma REF , we know that any eigenvector $\\mathbf {v^{\\prime }}$ associated with the eigenvalue $\\frac{1}{g_v^{(j)}(\\widetilde{\\omega }(v))}$ should satisfy $\\mathbf {v^{\\prime }} = \\alpha G^{(j)}(v),\\quad \\text{for some } \\alpha \\in \\mathbb {R}\\setminus \\lbrace 0\\rbrace ,$ which implies that the entries of $\\mathbf {v}^{\\prime }$ are either all positive or all negative.", "Also, Lemma REF implies that among any complete set of eigenvectors of $H^{(j)}(v)$ , there is only one eigenvector $\\mathbf {u}$ with eigenvalue $\\frac{1}{g_v^{(j)}(\\widetilde{\\omega }(v))}$ .", "From the exposition above, it then follows that the entries of $\\mathbf {u}$ must be all positive or all negative.", "Noting that $G^{(j)}(v)$ has unit norm (see (REF )), we arrive at the form in (REF ).", "This completes the proof." ], [ "Experimental Results", "In this section, we corroborate our theoretical results by implementing $\\textsc {Het}-\\textsc {TS}(\\lambda )$ and performing a variety of experiments on a synthetic dataset and the MovieLens dataset." ], [ "Synthetic Dataset", "The instance we used contains $M=5$ clients and $K=5$ arms.", "Their means are captured in the following matrix $\\mu =\\begin{bmatrix}7.55725965 & 7.66129480 & 7.32803730 & 7.32543803 & 7.64140236\\\\6.16828802 & 6.45512172 & 6.93972864 & 6.10118710 & 6.66694736\\\\5.29985957 & 5.07225703 & 5.84690706 & 5.01784103 & 5.96481965\\\\4.15091046 & 4.27998901 & 4.62915314 & 4.02730890 & 4.13292786\\\\3.44192387 & 3.92546024 & 3.53417944 & 3.23238613 & 3.77754542\\end{bmatrix}.$ The rows of $\\mu $ index the arms while the columns index the clients (so, for example, $\\mu _{2,3} = 6.93972864$ is the mean of the arm 2 of client 3).", "Figure: Expected stopping times for various overlap patterns as described in ().Figure: Expected stopping times for various λ\\lambda 's and for overlap pattern 𝒪 (2) \\mathcal {O}^{(2)}.To empirically evaluate the effect of various sets $\\lbrace S_m\\rbrace _{m\\in [M]}$ on the expected stopping time, we consider different overlap patterns (which is a multiset) $\\mathcal {O}^{(p)} = \\lbrace S_{1}^{(p)} ,S_{2}^{(p)} ,\\ldots , S_{5}^{(p)} \\rbrace $ where $\\mathcal {O}^{(1)} &= \\big \\lbrace \\lbrace 1,2\\rbrace , \\lbrace 2,3\\rbrace , \\lbrace 3,4\\rbrace ,\\lbrace 4,5\\rbrace , \\lbrace 5,1\\rbrace \\big \\rbrace , \\nonumber \\\\\\mathcal {O}^{(2)} &= \\big \\lbrace \\lbrace 1,2,3\\rbrace , \\lbrace 2,3,4\\rbrace , \\lbrace 3,4,5\\rbrace ,\\lbrace 4, 5,1\\rbrace , \\lbrace 5,1,2\\rbrace \\big \\rbrace \\nonumber \\\\\\mathcal {O}^{(3)} &= \\big \\lbrace \\lbrace 1,2,3,4\\rbrace , \\lbrace 2,3,4,5\\rbrace , \\lbrace 3,4,5,1\\rbrace , \\lbrace 4,5,1,2\\rbrace , \\lbrace 5,1,2,3\\rbrace \\big \\rbrace ,\\quad \\mbox{and} \\nonumber \\\\\\mathcal {O}^{(4)} &= \\big \\lbrace \\lbrace 1,2,3,4,5\\rbrace , \\lbrace 1,2,3,4,5\\rbrace , \\lbrace 1,2,3,4,5\\rbrace , \\lbrace 1,2,3,4,5\\rbrace , \\lbrace 
1,2,3,4,5\rbrace \big \rbrace .", "$ Thus, the larger the index of the overlap pattern $p$ , the larger the overlap among the sets $\lbrace S_m^{(p)}\rbrace _{m\in [M]}$ and the more clients have access to a fixed arm $i \in [K]$ .", "The matrix $\mu $ , together with an overlap pattern $\mathcal {O}^{(p)}$ , uniquely defines a problem instance $v=(\lbrace \mu _{i,1}\rbrace _{i\in S_1}, \lbrace \mu _{i,2}\rbrace _{i\in S_2},\dots , \lbrace \mu _{i,M}\rbrace _{i\in S_M})$ .", "The empirical expected stopping times of $\textsc {Het}-\textsc {TS}(\lambda )$ for $\lambda =0.01$ are displayed in Fig.", "REF .", "It can be seen that as $\delta $ decreases, the empirical stopping time increases, as expected.", "More interestingly, note that for a fixed $\delta $ , the stopping time is not monotone in the amount of overlap.", "This is due to two factors that work in opposite directions as one increases the amount of overlap of $S_m$ 's among various clients.", "On the one hand, each client has access to more arms, yielding more information about the bandit instance for the client.", "On the other hand, with more arms, the set of arms that can potentially be the best arm for that particular client also increases.", "This observation is interesting and, at first glance, counter-intuitive.", "Figure: Comparisons of the upper and lower bounds." ], [ "Effect of Communication Frequency", "Recall that $\textsc {Het}-\textsc {TS}(\lambda )$ communicates and stops at those time instants $t$ of the form $b_r=\lceil (1+ \lambda )^r \rceil $ for $r\in \mathbb {N}$ .", "As $\lambda $ increases, the communication frequency decreases.", "In other words, $\textsc {Het}-\textsc {TS}(\lambda )$ is communicating at sparser time instants.", "Thus, as $\lambda $ grows, we should expect that the stopping times increase commensurately as the server receives less data per unit time.", "This is reflected in Fig.", "REF where we use the instance $v=(\mu , \mathcal {O}^{(2)})$ .", "We note another interesting phenomenon, most evident from the curve indicated by $\lambda = 0.5$ .", "The growth pattern of the empirical stopping time has a piecewise linear shape.", "This is because $\textsc {Het}-\textsc {TS}(\lambda )$ does not stop at any arbitrary integer time; it only does so at the times that correspond to communication rounds $b_r = \lceil (1+ \lambda )^r \rceil $ for $r\in \mathbb {N}$ .", "Hence, for $\delta $ and $\delta ^{\prime }$ sufficiently close, the empirical stopping times will be exactly the same with high probability.", "This explains the piecewise linear stopping pattern as $\log (1/\delta )$ grows."
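, "For concreteness, the grid of admissible communication/stopping instants can be tabulated directly; the short Python sketch below (an illustration only; the value $\lambda =0.1$ is an extra example beyond those in the figures) prints the first few distinct values of $b_r$ :", "```python
import math

# Admissible communication/stopping instants b_r = ceil((1 + lam)^r); the
# ceiling creates duplicates for small lam, which we drop.
for lam in (0.01, 0.1, 0.5):
    instants, r = [], 0
    while len(instants) < 10:
        b = math.ceil((1 + lam) ** r)
        if not instants or b > instants[-1]:
            instants.append(b)
        r += 1
    print(f"lambda = {lam}: {instants}")
# The algorithm can only stop at one of these instants, so nearby values of
# delta yield identical stopping times -- the piecewise pattern noted above.
```"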
], [ "Comparison to Theoretical Bounds", "In the final experiment for synthetic data, we set $\\lambda =0.01$ and overlap pattern $\\mathcal {O}^{(2)}$ as our instance $v$ .", "In Fig.", "REF , we compare the empirical stopping time to the lower bound in Proposition REF and the upper bound in Theorem REF .", "Recall that the asymptotic ratio of the expected stopping time ${\\mathbb {E}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS}}}[\\tau _\\delta (\\Pi _{ \\textsc {Het}-\\textsc {TS} })]}$ to $\\log (1/\\delta )$ is $c^*(v)$ and $2(1+\\lambda )c^*(v)$ in the lower and upper bounds respectively.", "We observe that as $\\delta $ becomes sufficiently small, the slope of the empirical curve lies between the upper and lower bounds, as expected.", "Furthermore, we see that ${\\mathbb {E}_v^{\\Pi _{\\textsc {Het}-\\textsc {TS}}}[\\tau _\\delta (\\Pi _{ \\textsc {Het}-\\textsc {TS} })]}/{\\log \\left(1/{\\delta }\\right)}$ is close to the lower bound, which strongly suggests our learned allocation $\\widetilde{\\omega }(\\hat{v}(t))$ is very close to optimal allocation $\\arg \\max _{\\omega \\in \\Gamma } g_{\\hat{v}(t)}(\\omega )$ .", "We observe from Fig.", "REF that the empirical performance or, more precisely, the slope of the expected stopping time as a function of $\\log (1/\\delta )$ is close to $(1+\\lambda )\\, c^*(v)$ .", "This suggests that the factor $1+\\lambda $ (in $2\\, (1+\\lambda )$ ) in Theorem REF is unavoidable if we communicate at time instances that grow as $\\Theta ( (1+\\lambda )^r)$ .", "The presence of the factor 2 (in $2\\, (1+\\lambda )$ ) is to enable the optimal allocation $\\hat{\\omega }_{i,m}(t)$ to be solved in a tractable fashion.", "For more details concerning this point, see the discussion following Theorem REF ." ], [ "MovieLens Dataset", "In the MovieLens dataset [3], there are about 2.2 million rating samples and 10,197 movies.", "Following the experimental settings in [16], we view each country and genre as a client and an arm, respectively.", "Besides, we normalize the rating score in the range of 0 to 100.", "We note that in the raw dataset that there are very few or even no samples for some combinations of country and genre.", "Thus, in our experiment we discard any country and genre pair with fewer than ten samples.", "As a result, we end up with 10,044 movies and $M=48$ clients across $K=19$ arms.", "It is natural that different clients have different arm sets in the dataset; this dovetails neatly with our problem setting in which $S_m$ 's need not be the same as one another and they need not be the full set $[K]$ .", "Table: Comparison of the empirical stopping times between Het-TS\\textsc {Het}-\\textsc {TS} and Uniform for λ=0.01\\lambda =0.01.As in [20], we compare our algorithm to a baseline method which we call Uniform.", "Uniform has the same stopping rule as $\\textsc {Het}-\\textsc {TS}$ , but it uses a uniform sampling rule (i.e., each client uniformly samples each arm).", "Note that Uniform is a $\\delta $ -PAC algorithm for all $\\delta \\in (0,1)$ .", "Our numerical results, which are obtained by averaging over four independent experiments and by setting $\\lambda =0.01$ , are presented in Table REF .", "We observed from our experiments that the statistical variations of the results are minimal (and virtually non-existent) as the algorithm necessarily stops at one of the time instants of the form $b_r = \\lceil (1+\\lambda )^r\\rceil $ for $r\\in \\mathbb {N}$ .", "Hence, “error bars” are not indicated.", "From Table REF , we observe that the 
ratio of the empirical stopping times of Uniform and $\textsc {Het}-\textsc {TS}$ is approximately eight, showing that the sampling rule of $\textsc {Het}-\textsc {TS}$ is highly effective in rapidly identifying the best arms in this real-world dataset." ] ]
2210.07780
[ [ "Asymptotic of the Kantorovich potential for the optimal transport with\n Coulomb cost" ], [ "Abstract We prove a conjecture regarding the asymptotic behavior at infinity of the Kantorovich potential for the Multimarginal Optimal Transport with Coulomb and Riesz costs." ], [ "Introduction", "Given a function $\\rho \\in L^1({\\mathbb {R} }^d, {\\mathbb {R} }_+)$ with $\\int _{{\\mathbb {R} }^d} \\rho = N$ , we study the following Multimarginal Optimal Transport (MOT) problem $problem]{SCE}\\min _{{\\mathbb {P} }\\,\\mapsto \\, \\rho } \\left\\lbrace \\int _{({\\mathbb {R} }^{d})^{N}} \\sum _{1 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|^s} {\\mathbb {P} }({\\rm d}\\mathbf {r}_1, \\dots , {\\rm d}\\mathbf {r}_N) \\right\\rbrace ,$ where $s>0$ .", "Here, the notation “${\\mathbb {P} }\\mapsto \\rho $ ” means that the minimum runs over all (symmetric) probability measures ${\\mathbb {P} }$ on $({\\mathbb {R} }^d)^N$ such that for every $i = 1,\\dots , N$ the marginal of ${\\mathbb {P} }$ over the $i$ -th copy of ${\\mathbb {R} }^d$ in $({\\mathbb {R} }^d)^N$ is given by $\\rho /N$ .", "In quantum chemistry, when $s = 1$ and $d = 3$ , (SCE) is coined as the Strictly-Correlated Electrons functional [19], [17].", "This functional is of paramount interest in Density-functional Theory (DFT), an important computational modeling method in quantum physics and chemistry.", "We refer to [6], [22] for recent surveys on (SCE) and its applications in DFT.", "As any MOT problem, (SCE) admits the following dual formulation $problem]{SCE_d}\\sup _{v \\,\\, \\text{s.t.}", "\\int _{{\\mathbb {R} }^d} |v|\\rho \\,<\\, \\infty } \\left\\lbrace ~E_N(v) - \\int _{{\\mathbb {R} }^d} v \\rho \\right\\rbrace .$ Here, the supremum runs over all continuous functions $v : {\\mathbb {R} }^d \\rightarrow {\\mathbb {R} }$ which are integrable with respect to $\\rho $ , and the quantity $E_N(v)$ is defined as $E_N(v) = \\inf _{\\mathbf {r}_1, \\dots , \\mathbf {r}_N} \\left\\lbrace \\sum _{1 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|^s} + \\sum _{i = 1}^N v(\\mathbf {r}_i)\\right\\rbrace .$ Maximizers of (SCEd) are known to exist, as shown in [1], [3], and are called Kantorovich potentials.", "Furthermore, they can be chosen (locally) Lipschitz.", "Note that $v$ should be regarded as an external potential applied to the $N$ classical particles and $E_N(v)$ as the classical ground-state energy.", "It has been argued [18], [23], [6] on physical ground that, when $\\rho $ is supported over the entire space ${\\mathbb {R} }^d$ and in the case of the Coulomb potential in three space-dimension ($s = 1$ and $d = 3$ ), any (regular enough) Kantorovich potential $v$ should verify the asymptotic $\\boxed{v(\\mathbf {r}) \\sim - \\frac{N-1}{|\\mathbf {r}|} \\quad \\text{as } |\\mathbf {r}| \\rightarrow \\infty .", "}$ This conjecture is rigorously known to hold either when $N = 2$ and $\\rho $ is spherically symmetric using [15], or for an arbitrary number of particles $N$ in one space-dimension appealing to [2], using that in both cases an explicit formula is available for the SCE system.", "It is also known to hold when $\\rho $ is compactly supported for one well-chosen Kantorovich potential, even though in this specific case, the problem is not entirely well-posed, see [9] and also remdualc below.", "In the paper, we prove this conjecture in full generality: Theorem 1 (Asymptotic of the Kantorovich potential) Let $\\Omega \\subset {\\mathbb {R} }^d$ be an unbounded 
and connected open set, and let $\rho \in L^1(\Omega , {\mathbb {R} }_+)$ with $\int _{{\mathbb {R} }^d} \rho = N$ be such that $\rho > 0$ almost everywhere on $\Omega $ .", "Let $v$ be a Kantorovich potential for (SCEd) which is locally Lipschitz on $\Omega $ .", "Then, there exists some constant $C$ such that $v(\mathbf {r}) + C \sim -\frac{N-1}{|\mathbf {r}|^s} \quad \text{in the limit } |\mathbf {r}| \rightarrow \infty \text{ and }\mathbf {r}\in \Omega .$ It is known [16] that the gradients (in the Fréchet sense) of two Kantorovich potentials coincide $\rho $ -almost everywhere.", "When those potentials are furthermore locally Lipschitz, these gradients identify with their weak counterparts (in the Sobolev $W^{1, \infty }_{\rm loc}$ sense; see, e.g.", "[4]).", "Since $\Omega $ is connected, they must then be equal up to an additive constant on $\Omega $ .", "Therefore, it is enough to prove asymk for one such Kantorovich potential.", "It is moreover known that such a Kantorovich potential exists, see lemv below.", "We also note that the asymptotic (REF ) is of theoretical importance in computational quantum chemistry.", "Indeed, in the context of strongly-correlated materials, the functional defined by (SCE) has been proved useful to fashion exchange-correlation functionals in DFT [11], [12], [13], [14].", "From this perspective, the Kantorovich potential corresponds to the associated exchange-correlation potential (plus the Hartree potential).", "Now, it is well known that those potentials should satisfy certain requirements, in particular having the asymptotic (REF ) at infinity [21], [20].", "Remark 1 We will see from the proof of asymk that the specific form of the pair interaction potential $w : [0, \infty ] \rightarrow (0, \infty ]$ between the particles does not really matter, as long as it behaves like $r^{-s}$ at infinity.", "Here, we have taken $w(r) = r^{-s}$ exactly, i.e.", "the so-called Riesz potential.", "In [9], we proved that, in the Coulomb case $s = d-2$ in dimension $d > 2$ , there always exists a Kantorovich potential $v$ for (SCE) which is an attractive Coulomb potential induced by some positive charge distribution, coined as the dual charge.", "That is, $v$ can be rewritten as $v(\mathbf {r}) = -|\cdot |^{2-d} \ast \rho _{\rm ext}(\mathbf {r})$ where $\rho _{\rm ext}$ is a positive measure.", "Using asymk, we were able to prove the following statement.", "Corollary 2 (Total mass of the dual charge) We suppose that $s = d-2$ with $d > 2$ .", "Let $\Omega \subset {\mathbb {R} }^d$ be an unbounded and connected open set, which we assume to not “shrink in special directions at infinity”, in the sense that $|A_\Omega | > 0$ , where $|\cdot |$ is the Lebesgue measure on the sphere $\mathbb {S}^{d-1}$ and $A_\Omega := \left\lbrace \mathbf {\xi } \in \mathbb {S}^{d-1} : \text{for all } \, r_0 \text{ there exists } r \geqslant r_0 \text{ s.t. 
}", "r\\mathbf {\\xi } \\in \\Omega \\right\\rbrace .$ Let $\\rho \\in L^1(\\Omega , {\\mathbb {R} }_+)$ with $\\int _{{\\mathbb {R} }^d} \\rho = N$ be such that $\\rho > 0$ almost everywhere $\\Omega $ .", "Let $\\rho _{\\rm ext}$ be a positive measure such that $-|\\,\\cdot \\,|^{2-d}\\ast \\rho _{\\rm ext}$ is a Kantorovich potential for (SCEd) which is locally Lipschitz on $\\Omega $ .", "Then, it must be that $\\int _{{\\mathbb {R} }^d}~\\rho _{\\rm ext} = N-1.$ The intuition for dualchargemass is that, an electron being fixed, the dual charge needs to exert a force which counterbalances the repulsion due to the other $N-1$ electrons, thus leading to the conjectured equality (REF ).", "The hypothesis regarding the shape of $\\Omega $ at infinity is mainly technical and will become transparent in the proof of the corollary.", "There always exists a measure $\\rho _{\\rm ext}$ as in the statement of the corollary [9].", "In fact, such a measure is uniquely determined on $\\Omega $ as $\\rho _{\\rm ext} := -c_d \\Delta v$ , where $v$ is a specific Lipschitz Kantorovich potential of (SCE), i.e.", "one that verifies eqv below.", "Here $c_d :=\\frac{d(d-2)\\pi ^{d/2}}{\\Gamma \\left(\\frac{d}{2} + 1\\right)}$ and $-\\Delta $ is to be understood as (minus) the distributional Laplacian on ${\\mathbb {R} }^d$ .", "We proved in [9] that the dist ribution $\\rho _{\\rm ext}$ defined as such is actually a positive measure.", "Remark 2 In the case where $\\rho $ is compactly supported, we can always assume that (REF ) holds [9].", "Indeed, when the support $\\Omega $ of $\\rho $ is compact, we can always add to $\\rho _{\\rm ext}$ some charge uniformly distributed over a sphere of radius $R$ , where $\\Omega \\subset B_R$ , since the potential induced inside of $\\Omega $ by this additional charge is constant.", "The proofs of asymk and dualchargemass are consigned in proofsas.", "Acknowledgments.", "I would like to thank Mathieu Lewin (CNRS & Ceremade, Université Paris Dauphine – PSL) for having advised me during this work, as well as Paola Gori-Giorgi (Microsoft Research & Vrije Universiteit Amsterdam) for pointing out to me some useful references.", "This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement MDFT No 725528)." 
], [ "Dissociation at infinity", "In order to prove asymk, we will appeal to the following theorem.", "Theorem 3 (Dissociation at infinity) Let $\\rho \\in L^1({\\mathbb {R} }^d, {\\mathbb {R} }_+)$ with $\\int _{{\\mathbb {R} }^d} \\rho = N$ .", "Let ${\\mathbb {P} }$ be a symmetric minimizer of (SCE).", "Then, there exists a large enough ball $B_R \\subset {\\mathbb {R} }^d$ of radius $R$ such that ${\\mathbb {P} }\\left( ({\\mathbb {R} }^d \\setminus B_R) \\times ({\\mathbb {R} }^d \\setminus B_R) \\times {\\mathbb {R} }^{d(N-2)} \\right) = 0.$ The above theorem (which is evidently true when $\\rho $ is compactly supported) was mentioned as a conjecture in [6].", "It says that, at optimality, as one particle is sent to infinity, all the other remaining particles shall remain in a bounded domain ${\\mathbb {P} }$ almost-surely.", "Otherwise stated, “dissociation at infinity” only occurs for one particle at a time.", "We will provide later in HVZsec a stronger version of this theorem.", "The proof of mainthm only relies on the notion of cyclical monotonicity, which we now recall.", "One can prove [16] that, given any minimizer ${\\mathbb {P} }$ of (SCE), its support $\\Gamma $ is concentrated on a set $\\Gamma _0$ which is $c$ -cyclically monotone, in the sense that for all $k \\in \\mathbb {N}$ , all families $(\\mathbf {r}_1^i, \\dots , \\mathbf {r}_N^i)$ with $i = 1, \\dots , k$ of points in $\\Gamma _0$ , and all set of permutations $\\sigma _1, \\dots , \\sigma _N \\in \\mathfrak {S}_k$ we have $\\sum _{i = 1}^k c(\\mathbf {r}_1^i, \\dots , \\mathbf {r}_N^{i}) \\leqslant \\sum _{i = 1}^k c(\\mathbf {r}_1^{\\sigma _1(i)}, \\dots , \\mathbf {r}_N^{\\sigma _N(i)}),$ where $c(\\mathbf {r}_1, \\dots , \\mathbf {r}_N) = \\sum _{1 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|^s}.$ Here, the meaning of (REF ) is that exchanging the positions of any particles at optimality necessarily leads to an increase of the energy.", "In [1], [3], it is proved that, at optimality, the particles cannot get too close to one another, in the sense that there exists some distance $\\eta > 0$ such that, for any minimizer ${\\mathbb {P} }$ of (SCE), we have ${\\mathbb {P} }\\left( \\min _{1 \\leqslant i < j \\leqslant N} |\\mathbf {r}_i - \\mathbf {r}_j| < \\eta \\right) = 0.$ By continuity of $c$ away from the diagonals, this entails that (REF ) actually holds on the closure of $\\Gamma _0$ , and hence on the entire support $\\Gamma $ of ${\\mathbb {P} }$ .", "Furthermore, in the formulation of (SCE), one can substitute to the Riesz potential $|\\mathbf {r}|^{-s}$ its truncated version, i.e.", "$\\min \\lbrace |\\mathbf {r}|^{-s}, \\eta ^{-s}\\rbrace $ .", "In what follows, we will write $|\\mathbf {r}|_\\eta := \\max \\lbrace |\\mathbf {r}|, \\eta \\rbrace ,$ so that the truncated Riesz potential reads $|\\mathbf {r}|_\\eta ^{-s}$ .", "We start with the two-marginal case $N=2$ as an illustration of the general argument.", "Let ${\\mathbb {P} }$ be a minimizer of (SCE) with support $\\Gamma $ .", "We proceed reductio ad absurdum by assuming that, for all radius $R > 0$ , we have ${\\mathbb {P} }\\left(({\\mathbb {R} }^d \\setminus B_R) \\times ({\\mathbb {R} }^d \\setminus B_R)\\right) > 0.$ By definition of the support $\\Gamma $ of ${\\mathbb {P} }$ , we have ${\\mathbb {P} }\\left(\\left(({\\mathbb {R} }^d \\setminus B_R) \\times ({\\mathbb {R} }^d \\setminus B_R)\\right) \\cap \\Gamma \\right) > 0.$ Therefore, there must exist a sequence $(\\mathbf {r}_1^{\\scriptscriptstyle 
(k)}, \\mathbf {r}_2^{\\scriptscriptstyle (k)}) \\in \\Gamma $ such that $\\mathbf {r}_i^{\\scriptscriptstyle (k)}\\rightarrow \\infty $ as $k \\rightarrow \\infty $ ($i = 1,2$ ).", "Given any $(\\mathbf {r}_1, \\mathbf {r}_2) \\in \\Gamma $ , and appealing to the $c$ -cyclical monotonicity, we have $\\frac{1}{|\\mathbf {r}_1 - \\mathbf {r}_2|^s} \\leqslant \\frac{1}{|\\mathbf {r}_1 - \\mathbf {r}_2|^s} + \\frac{1}{|\\mathbf {r}_1^{\\scriptscriptstyle (k)}- \\mathbf {r}_2^{\\scriptscriptstyle (k)}|^s}\\leqslant \\frac{1}{|\\mathbf {r}_1 - \\mathbf {r}_1^{\\scriptscriptstyle (k)}|^s} +\\frac{1}{|\\mathbf {r}_2 - \\mathbf {r}_2^{\\scriptscriptstyle (k)}|^s}.$ We now let $k \\rightarrow \\infty $ to obtain the contradiction that for all $(\\mathbf {r}_1, \\mathbf {r}_2) \\in \\Gamma $ $0 < \\frac{1}{|\\mathbf {r}_1 - \\mathbf {r}_2|^s} \\leqslant 0.$ Let us now consider the general case $N \\geqslant 2$ .", "Let ${\\mathbb {P} }$ be a minimizer of (SCE) with support $\\Gamma $ .", "Note that we can (and will) suppose that ${\\mathbb {P} }$ is symmetric.", "Indeed, if ${\\mathbb {P} }$ is not symmetric, it suffices to consider $\\widetilde{{\\mathbb {P} }}(\\mathbf {r}_1, \\dots , \\mathbf {r}_N) = \\frac{1}{N!}", "\\sum _{\\sigma \\in \\mathfrak {S}_N} {\\mathbb {P} }(\\mathbf {r}_{\\sigma (1)}, \\dots , \\mathbf {r}_{\\sigma (N)}).$ Let us assume that there exists a sequence of configurations in $\\Gamma $ such that exactly ${J} \\in \\lbrace 2, \\dots , N\\rbrace $ particles escape to infinity.", "That is, there exists a sequence $(\\mathbf {r}_1^{\\scriptscriptstyle (k)}, \\dots , \\mathbf {r}_N^{\\scriptscriptstyle (k)}) \\in \\Gamma $ such that $\\mathbf {r}_i^{\\scriptscriptstyle (k)}\\rightarrow \\infty $ as $k \\rightarrow \\infty $ for $i = 1, \\dots , {J}$ , and such that the remaining particles remain in some bounded region.", "Up to a subsequence and by compactness, we may assume that there exists $\\overline{\\mathbf {r}}_i \\in {\\mathbb {R} }^d$ such that $\\mathbf {r}_i^{\\scriptscriptstyle (k)}\\rightarrow \\overline{\\mathbf {r}}_i$ as $k \\rightarrow \\infty $ for all $i = {J} + 1, \\dots , N$ .", "Once again appealing to the $c$ -cyclical monotonicity, for all $(\\mathbf {r}_1, \\dots , \\mathbf {r}_N) \\in \\Gamma $ we have $\\sum _{1 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|_\\eta ^s} + \\sum _{1 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i^{\\scriptscriptstyle (k)}- \\mathbf {r}_j^{\\scriptscriptstyle (k)}|_\\eta ^s} \\\\ \\leqslant \\sum _{i = 2}^N \\frac{1}{|\\mathbf {r}_1^{\\scriptscriptstyle (k)}- \\mathbf {r}_i|_\\eta ^s} + \\sum _{2 \\leqslant i <j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|_\\eta ^s} \\\\ + \\sum _{i = 2}^N \\frac{1}{|\\mathbf {r}_1 - \\mathbf {r}_i^{\\scriptscriptstyle (k)}|_\\eta ^s} + \\sum _{2 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i^{\\scriptscriptstyle (k)}- \\mathbf {r}_j^{\\scriptscriptstyle (k)}|_\\eta ^s}.$ In the definition of c-cyclical monotonicity as given in (REF ), the above corresponds to the choice $k = 2$ with $\\sigma _1 = (12)$ and $\\sigma _i = \\textrm {id}$ for all $i = 2, \\dots , N$ .", "Note that, contrary to (REF ), we used the truncated cost above.", "This is mainly for convenience, to emphasize that all quantities are finite.", "In particular, after subtracting the interactions between the $\\mathbf {r}_2, \\dots , \\mathbf {r}_N$ (resp.", "$\\mathbf {r}_2^{\\scriptscriptstyle (k)}, \\dots , \\mathbf {r}_N^{\\scriptscriptstyle (k)}$ ) on both 
sides of (REF ), we can legitimately write $\sum _{i = 2}^N \frac{1}{|\mathbf {r}_1 - \mathbf {r}_i|_\eta ^s} + \sum _{i = 2}^N \frac{1}{|\mathbf {r}_1^{\scriptscriptstyle (k)}- \mathbf {r}_i^{\scriptscriptstyle (k)}|_\eta ^s} \leqslant \sum _{i = 2}^N \frac{1}{|\mathbf {r}_1^{\scriptscriptstyle (k)}- \mathbf {r}_i|_\eta ^s} + \sum _{i = 2}^N \frac{1}{|\mathbf {r}_1 -\mathbf {r}_i^{\scriptscriptstyle (k)}|_\eta ^s}.$ We now let $k \rightarrow \infty $ to obtain that for all $(\mathbf {r}_1, \dots , \mathbf {r}_N) \in \Gamma $ $\sum _{i = 2}^N \frac{1}{|\mathbf {r}_1 - \mathbf {r}_i|_\eta ^s} \leqslant \sum _{i = {J}+1}^N \frac{1}{|\mathbf {r}_1 - \overline{\mathbf {r}}_i|_\eta ^s}.$ Now, we let $\mathbf {r}_i = \mathbf {r}_i^{\scriptscriptstyle (k)}$ for $i = 1, \dots , N$ above.", "Letting $k \rightarrow \infty $ , and by the assumption that ${J} > 1$ , we see from the above inequality that the escaping particles cannot contribute at first order in (REF ), meaning that $\frac{|\mathbf {r}_1^{\scriptscriptstyle (k)}|^s}{|\mathbf {r}_1^{\scriptscriptstyle (k)}- \mathbf {r}_i^{\scriptscriptstyle (k)}|^s} \rightarrow 0 \quad (i = 2, \dots , {J}).$ Since $\frac{|\mathbf {r}_1^{\scriptscriptstyle (k)}|}{|\mathbf {r}_1^{\scriptscriptstyle (k)}- \mathbf {r}_i^{\scriptscriptstyle (k)}|} \geqslant \frac{|\mathbf {r}_1^{\scriptscriptstyle (k)}|}{|\mathbf {r}_1^{\scriptscriptstyle (k)}| + |\mathbf {r}_i^{\scriptscriptstyle (k)}|} = \frac{1}{1 + \frac{|\mathbf {r}_i^{\scriptscriptstyle (k)}|}{|\mathbf {r}_1^{\scriptscriptstyle (k)}|}},$ it must be that $|\mathbf {r}_1^{\scriptscriptstyle (k)}| = o(|\mathbf {r}_i^{\scriptscriptstyle (k)}|)$ .", "But, by symmetry of ${\mathbb {P} }$ , meaning that for any $(\mathbf {r}_1, \dots , \mathbf {r}_N) \in \Gamma _0$ , we have $(\mathbf {r}_{\sigma (1)}, \dots , \mathbf {r}_{\sigma (N)}) \in \Gamma _0$ for all $\sigma \in \mathfrak {S}_N$ , we can switch the indices 1 and $i$ in (REF ).", "Reproducing the above argument, we are led to $|\mathbf {r}_i^{\scriptscriptstyle (k)}| = o(|\mathbf {r}_1^{\scriptscriptstyle (k)}|)$ , and therefore to a contradiction.", "Hence, by reductio ad absurdum, the thesis of mainthm is proved." ], [ "Proof of the main results", "In this section, we prove that any Kantorovich potential $v$ which is locally Lipschitz has the conjectured asymptotic behavior at infinity.", "We start by briefly recalling some important facts regarding the duality theory for (SCE)."
], [ "On duality theory", "Given a minimizer ${\\mathbb {P} }$ of (SCE) with support $\\Gamma $ and a Kantorovich potential $v$ , there exists $\\Gamma _1\\subset \\Gamma $ with ${\\mathbb {P} }(\\Gamma \\setminus \\Gamma _1) = 0$ such that $\\sum _{1 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|_\\eta ^s} + \\sum _{i = 1}^N v(\\mathbf {r}_i) = E_N(v) \\quad \\text{on }\\, \\Gamma _1.$ When $v$ is continuous, the above equality is valid on the closure $\\overline{\\Gamma _1}$ , and therefore on the entire support $\\Gamma $ .", "Among all the Kantorovich potentials of (SCEd), one can specify a very special potential, namely one that verifies the following equation $v(\\mathbf {r}_1) = \\sup _{\\mathbf {r}_2, \\dots , \\mathbf {r}_N}~\\left\\lbrace ~-\\sum _{i = 2}^N v(\\mathbf {r}_i) - \\sum _{1 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|_\\eta ^s}\\right\\rbrace .$ For such a $v$ , we always have $E_N(v) = 0$ .", "We have the following lemma.", "Lemma 4 ([9] and [1]) Let $\\rho \\in L^1({\\mathbb {R} }^d, {\\mathbb {R} }_+)$ with $\\int _{{\\mathbb {R} }^d} \\rho = N$ , and let $v$ be a Kantorovich potential for (SCEd) verifying eqv.", "Then, $v$ is Lipschitz and it satisfies the following limit $\\lim _{|\\mathbf {r}| \\rightarrow \\infty } v(\\mathbf {r}) = - E_{N-1}(v) > 0$ where, according to (REF ), we have $E_{N-1}(v) = \\inf _{\\mathbf {r}_2, \\dots , \\mathbf {r}_N} \\left\\lbrace ~\\sum _{2 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|_\\eta ^s} + \\sum _{i = 2}^N v(\\mathbf {r}_i) \\right\\rbrace .$ Recall that two Lipschitz Kantorovich potentials are equal (up to an additive constant) on (the connected components) of the support of $\\rho $ [16].", "Therefore, it follows from the above lemma that the Kantorovich potential $v$ which verifies eqv is unique on $\\Omega $ , where $\\Omega $ verifies the assumptions of asymk.", "Furthermore, it also follows that any Kantorovich potential which is locally Lipschitz on $\\Omega $ is actually Lipschitz and equal (up to an additive constant) to the $v$ which verifies eqv." 
], [ "Proof of asymk", ".", "Let $v$ be the Kantorovich potential which verifies eqv, and let ${\\mathbb {P} }$ be a symmetric minimizer of (SCE) with support $\\Gamma $ .", "Let us first prove the lower bound asymptotic, that is $w(\\mathbf {r}) := v(\\mathbf {r}) + E_{N-1}(v) \\geqslant -\\frac{N-1}{|\\mathbf {r}|^s} + o\\left(\\frac{1}{|\\mathbf {r}|^{s}}\\right).$ We argue as in (the proof of) [9].", "Let $(\\mathbf {r}_1^{\\scriptscriptstyle (k)}, \\dots , \\mathbf {r}_N^{\\scriptscriptstyle (k)}) \\in \\Gamma $ be such that $\\mathbf {r}_1^{\\scriptscriptstyle (k)}\\rightarrow \\infty $ as $k \\rightarrow \\infty $ .", "Such a sequence exists as the support $\\Gamma $ of $\\rho $ is unbounded by hypothesis.", "According to mainthm regarding the dissociation at infinity, all the other particles must stay in a bounded region ${\\mathbb {P} }$ almost-surely as $\\mathbf {r}_1^{\\scriptscriptstyle (k)}$ goes to infinity.", "Up to a subsequence and by compactness, we may therefore assume that $\\mathbf {r}_i^{\\scriptscriptstyle (k)}\\rightarrow \\overline{\\mathbf {r}}_i$ for some $\\overline{\\mathbf {r}}_i\\in {\\mathbb {R} }^d$ ($i = 2, \\dots , N$ ).", "We have $\\sum _{1 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i ^{\\scriptscriptstyle (k)}- \\mathbf {r}_j^{\\scriptscriptstyle (k)}|_\\eta ^s} + \\sum _{i =1}^N w(\\mathbf {r}_i^{\\scriptscriptstyle (k)}) = E_N(w)$ According to lemv, $w$ is continuous and $w(\\mathbf {r}) \\rightarrow 0$ as $\\mathbf {r}\\rightarrow \\infty $ .", "Hence, we obtain by letting $k \\rightarrow \\infty $ above that $\\sum _{2 \\leqslant i < j \\leqslant N} \\frac{1}{|\\overline{\\mathbf {r}}_i - \\overline{\\mathbf {r}}_j|_\\eta ^s} + \\sum _{i = 2}^N w(\\overline{\\mathbf {r}}_i) = E_N(w).$ We now select $\\mathbf {r}_i = \\overline{\\mathbf {r}}_i$ ($i = 2, \\dots , N$ ) in eqv as verified by $v$ .", "We obtain using the above equality that for all $\\mathbf {r}\\in {\\mathbb {R} }^d$ $w(\\mathbf {r}) \\geqslant - \\sum _{i = 2}^N \\frac{1}{|\\mathbf {r}- \\overline{\\mathbf {r}}_i|_\\eta ^s} = -\\frac{N-1}{|\\mathbf {r}|^s} + o\\left(\\frac{1}{|\\mathbf {r}|^{s}}\\right).$ Therefore, the lower bound asymptotic (REF ) is proved.", "Now, the upper bound asymptotic is obtained as follows.", "Since $w$ vanishes at infinity, we have $E_N(w) \\leqslant E_{N-1}(w)$ .", "Furthermore, by definition of $E_{N-1}(w)$ , for all $ \\mathbf {r}_1, \\dots , \\mathbf {r}_N \\in {\\mathbb {R} }^d$ we have $E_{N-1}(w) \\leqslant \\sum _{2 \\leqslant i < j \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|_\\eta ^s} + \\sum _{i = 2}^N w(\\mathbf {r}_i).$ According to (REF ), we have $\\sum _{1 \\leqslant i < k \\leqslant N} \\frac{1}{|\\mathbf {r}_i - \\mathbf {r}_j|^s_\\eta } + \\sum _{i = 1}^N w(\\mathbf {r}_i) = E_N(w) \\quad \\text{on }\\, \\Gamma .$ From (REF ) and the above equality, we finally obtain $w(\\mathbf {r}_1) \\leqslant \\sum _{i = 2}^N \\, \\frac{1}{|\\mathbf {r}_1 - \\mathbf {r}_i|_\\eta ^s} \\quad \\text{for all }(\\mathbf {r}_1, \\dots , \\mathbf {r}_N) \\in \\Gamma .$ By the assumption that $\\rho > 0$ almost everywhere on $\\mathbf {r}\\in \\Omega $ , and since the above inequality holds over the support $\\Gamma $ of ${\\mathbb {P} }$ , we then obtain the upper bound asymptotic on $w$ as sought-after.", "$\\Box $" ], [ "Dissociation and binding inequalities", "From our main theorem regarding the asymptotic behavior at infinity of the Kantorovich potential, we deduce the following interesting result.", "Corollary 5 (Dissociation 
and binding inequalities) Let $v$ be a Kantorovich potential for (SCEd) which is Lipschitz, vanishes at infinity, i.e.", "$\lim _{\mathbf {r}\rightarrow \infty } v(\mathbf {r}) = 0$ , and satisfies the asymptotic $v(\mathbf {r}) \sim -\frac{N-1}{|\mathbf {r}|^s} \quad \text{in the limit }\, |\mathbf {r}|\rightarrow \infty \text{ and } \mathbf {r}\in \Omega ,$ where $\Omega $ is an unbounded and connected open set.", "Such a $v$ exists according to asymk and lemv.", "Let $E_K(v)$ be the infimum defined over $({\mathbb {R} }^d)^K$ as in (REF ), with the convention that $E_1(v)$ is the infimum of $v$ over ${\mathbb {R} }^d$ .", "Then, the following statements hold: We have the following set of inequalities: $E_N(v) = E_{N-1}(v) < E_{N-2}(v) < \dots < E_1(v);$ The ground-state energy $E_K(v)$ is attained for every number of particles $K = 1, \dots , N$ ; There exists a large enough radius $R > 0$ such that $\Sigma \cap \left( ({\mathbb {R} }^d \setminus B_R) \times ({\mathbb {R} }^d \setminus B_R) \times {\mathbb {R} }^{d(N-2)} \right) = \emptyset .$ Here the set $\Sigma \subset ({\mathbb {R} }^d)^N$ is the set of minimizers of $E_N(v)$ defined as $\Sigma := \operatornamewithlimits{arg\,min}_{\mathbf {r}_1, \dots , \mathbf {r}_N}\left\lbrace \sum _{1 \leqslant i < j \leqslant N} \frac{1}{|\mathbf {r}_i - \mathbf {r}_j|^s} + \sum _{i = 1}^N v(\mathbf {r}_i)\right\rbrace .$ A potential $v$ which behaves like $-(N-1)|\mathbf {r}|^{-s}$ at infinity can bind $N-1$ or fewer particles.", "The equality $E_N(v) = E_{N-1}(v)$ in (REF ) is rather remarkable, as it implies that the Kantorovich potential $v$ can bind one additional electron, leading to a total of $N$ (or fewer) particles.", "This shows that the minimizer(s) and the Kantorovich potential(s) for (SCE) are rather subtle objects.", "The item REF in the above corollary is a refinement of our first dissociation theorem, namely mainthm.", "Recall that if $v$ is a Kantorovich potential, then it verifies the optimality condition (REF ).", "It follows that the support of ${\mathbb {P} }$ is contained in $\Sigma $ as defined in (REF ).", "Nevertheless, we emphasize that it need not be true that the support is equal to the whole of $\Sigma $ .", "For instance, in one space-dimension with the Coulomb potential $-|r|$ , the support can be a proper subset of $\Sigma $ , see [9].", "Finally, the strict inequalities in (REF ) say that placing one particle at infinity always increases the energy of the system with $N-1$ particles.", "The physical intuition is that, if $K > 1$ particles are placed at infinity where the potential $v$ behaves like $-(N-1)|\mathbf {r}|^{-s}$ , upon those particles is exerted an attractive potential which eventually pulls them back to a bounded region, since the remaining electrons induce a potential $-(N-K)|\mathbf {r}|^{-s}$ at infinity, and that $(N-1)-(N-K) > 0$ .", "This is the classical argument of Zhislin–Sigalov in [24], where the authors proved a stability result for neutral or positively-charged atoms and molecules using the so-called Hunziker–van Winter–Zhislin (HVZ) theorem, which is an important result in quantum physics [10], [5] but is an easy fact for classical systems like ours.", "We now reproduce this argument to prove the corollary.", "We note that $E_N(v)$ is attained by definition, since $v$ is a Kantorovich potential, and that the equality $E_{N}(v) = E_{N-1}(v)$ follows from the fact that the support $\Omega $ of $\rho $ is unbounded.", 
"We note that $E_{K}(v) < E_{K-1}(v)$ implies that $E_K(v)$ is attained.", "Otherwise, given any minimizing sequence $\\mathbf {r}_1^{\\scriptscriptstyle (k)}, \\dots , \\mathbf {r}_K^{\\scriptscriptstyle (k)}$ for $E_K(v)$ , up to a permutation on the indices we could extract a subsequence such that the first $K_0 \\in \\lbrace 1, \\dots , K-1\\rbrace $ particles $\\mathbf {r}_1^{\\scriptscriptstyle (k)}, \\dots , \\mathbf {r}_{K_0}^{\\scriptscriptstyle (k)}$ converge to a minimizer of $E_{K_0}(v)$ by compactness.", "This would imply the contradiction $E_{K_0}(v) = E_K(v) < E_{K-1}(v) \\leqslant E_{K_0}(v)$ since $v$ vanishes at infinity.", "Since $v$ is a continuous function which vanishes at infinity and which is negative somewhere according to its asymptotic behavior at infinity, the infimum of $v$ must be attained on some compact set, i.e.", "$E_1(v)$ is attained.", "Therefore, if we can prove the set of strict inequalities in (REF ), the item REF will immediately follow.", "We proceed by induction on the number of particles $K$ .", "We start by proving that $E_2(v) < E_1(v)$ .", "Let $\\mathbf {r}^*$ be a minimizer of $E_1(v)$ , that is $v(\\mathbf {r}^*) = \\min v$ .", "By definition of $E_2(v)$ , for all $\\mathbf {r}\\in {\\mathbb {R} }^d$ we have $E_1(v) + v(\\mathbf {r}) + \\frac{1}{|\\mathbf {r}- \\mathbf {r}^*|^s} \\geqslant E_2(v).$ By the asymptotic behavior of $v$ , we have $v(\\mathbf {r}_q) + \\frac{1}{|\\mathbf {r}- \\mathbf {r}^*|^s} = - \\frac{N - 2}{|\\mathbf {r}|^s} + o\\left(\\frac{1}{|\\mathbf {r}|^s}\\right) \\quad \\text{in }\\, \\Omega .$ Therefore, for large enough $|\\mathbf {r}| \\gg 1$ in $\\Omega $ , we obtain $E_2(v) < E_1(v)$ .", "As mentioned above, this implies that $E_2(v)$ is attained.", "We then proceed by induction, i.e.", "$E_{K-1}(v)$ is attained for some $\\overline{\\mathbf {r}}_1, \\dots ,\\overline{\\mathbf {r}}_{K-1}$ , and we have $E_{K-1}(v) + v(\\mathbf {r}) + \\sum _{i = 1}^{K-1} \\frac{1}{|\\mathbf {r}- \\overline{\\mathbf {r}}_i|^s} \\geqslant E_K(v).$ Once again, by the asymptotic behavior of $v$ , we have $v(\\mathbf {r}) + \\sum _{i = 1}^{K-1} \\frac{1}{|\\mathbf {r}- \\overline{\\mathbf {r}}_i|^s} = - \\frac{N - K}{|\\mathbf {r}|^s} + o\\left(\\frac{1}{|\\mathbf {r}|^s}\\right)\\quad \\text{in }\\, \\Omega .$ Therefore, for large enough $|\\mathbf {r}| \\gg 1$ in $\\Omega $ , we obtain $E_{K}(v) < E_{K-1}(v)$ , and therefore that $E_K(v)$ is attained.", "Hence, the item REF is proved by induction, and consequently the item REF .", "The item REF follows immediately from the item REF ." 
], [ "Proof of dualchargemass", "Finally, we conclude this paper with the proof that if $\\rho _{\\rm ext}$ is a measure such that $U^{\\rho _{\\rm ext}} := -|\\cdot |^{d-2} \\ast \\rho _{\\rm ext}$ is a Kantorovich potential for (SCEd) which is locally Lipschitz on the support $\\Omega $ of $\\rho $ , where $\\Omega $ verifies the assumptions mentioned in the corollary, then $\\rho _{\\rm ext}({\\mathbb {R} }^d) = N-1$ .", "The upper bound $\\rho _{\\rm ext}({\\mathbb {R} }^d) \\leqslant N-1$ was already proved in [9], where it follows from the fact that, given any compactly supported (finite) measure $\\mu $ , we have $U^\\mu (\\mathbf {r}) \\sim - \\frac{\\mu ({\\mathbb {R} }^d)}{|\\mathbf {r}|^{d-2}} \\quad \\text{in the limit } \\mathbf {r}\\rightarrow \\infty .$ Nonetheless, this asymptotic need not be true for a non-compactly supported measure $\\mu $For instance, consider $\\mu = \\sum _{i} a_i\\delta _{\\mathbf {r}_i}$ where $(\\mathbf {r}_i)_i$ is a sequence of points such that $\\mathbf {r}_i \\rightarrow \\infty $ as $i \\rightarrow \\infty $ , and $(a_i)_i$ is a sequence of positive reals such that $\\sum _{i} a_i < \\infty $ ..", "Nevertheless, one can prove [7] that, under the assumption $\\int _{{\\mathbb {R} }^d} \\frac{\\mu ({\\rm d}\\mathbf {r})}{|\\mathbf {r}|^{d-2}} < \\infty ,$ there exists a “small” Borel set $E \\subset \\mathbb {S}^{d-1}$ (in the sense that $C_d(E) = 0$ , where $C_d(E)$ is the capacity of $E$ , see [8]) such that $\\lim _{r \\rightarrow \\infty } r^{2-d} U^\\mu (r \\mathbf {\\xi }) = -\\mu ({\\mathbb {R} }^d).$ for all $\\mathbf {\\xi } \\in \\mathbb {S}^{d-1} \\setminus E$ .", "Notice that the hypothesis (REF ) is verified for $\\rho _{\\rm ext}$ as $U^{\\rho _{\\rm ext}}(0) < \\infty $ , since $U^{\\rho _{\\rm ext}}$ is continuous.", "Now, by the assumption on $\\Omega $ stated in dualchargemass, we have $|A_\\Omega | > 0$ , where $|\\cdot |$ denotes the Lebesgue measure on the sphere $\\mathbb {S}^{d-1}$ and where $A_\\Omega := \\left\\lbrace \\mathbf {\\xi } \\in \\mathbb {S}^{d-1} : \\text{for all } \\, r_0 \\text{ there exists } r \\geqslant r_0 \\text{ s.t. }", "r\\mathbf {\\xi } \\in \\Omega \\right\\rbrace .$ In particular, since any measurable set $B \\subset \\mathbb {S}^{d-1}$ of null capacity verifies $|B| = 0$ [8], we have $|A_\\Omega \\setminus E| > 0$ .", "Therefore, there exists some direction $\\mathbf {\\xi }_0 \\in A_\\Omega \\setminus E$ and a sequence $(r_i)_{i \\geqslant 0}$ of positive reals such that $\\lim _{i} r_i = \\infty $ and $\\lim _{i \\rightarrow \\infty }~r_i^{2-d} U^{\\rho _{\\rm ext}}(r_i \\mathbf {\\xi }_0) = -\\rho _{\\rm ext}({\\mathbb {R} }^d),$ and such that $r_i \\mathbf {\\xi }_0 \\in \\Omega $ for all $i \\geqslant 0$ .", "Using asymk, we have $\\lim _{i \\rightarrow \\infty }~r_i^{2-d} U^{\\rho _{\\rm ext}}(r_i \\mathbf {\\xi }_0) = -(N-1).$ The thesis of dualchargemass is therefore proved.", "$\\Box $" ] ]
2210.07830
[ [ "Magnetic interactions in intercalated transition metal dichalcogenides:\n a study based on ab initio model construction" ], [ "Abstract Transition metal dichalcogenides (TMDs) are known to have a wide variety of magnetic structures by hosting other transition metal atoms in the van der Waals gaps.", "To understand the chemical trend of the magnetic properties of the intercalated TMDs, we perform a systematic first-principles study for 48 compounds with different hosts, guests, and composition ratios.", "Starting with calculations based on spin density functional theory, we derive classical spin models by applying the Liechtenstein method to the ab initio Wannier-based tight-binding model.", "We show that the calculated exchange couplings are overall consistent with the experiments.", "In particular, when the composition rate is 1/3, the chemical trend can be understood in terms of the occupation of the 3d-orbital in the intercalated transition metal.", "The present results give us a useful guiding principle to predict the magnetic structure of compounds that are yet to be synthesized." ], [ "Introduction", "Transition metal dichalcogenides (TMDs) are two-dimensional layered materials of the type $TX_2$ , where $T$ is a transition metal atom, and $X$ is a chalcogen atom.", "They offer a fascinating playground to study various physical phenomena such as unconventional superconductivity, exotic charge density waves, emerging spin, valley, and exciton physics [1], [2], [3], [4].", "One of their characteristic features in bulk and thin films with atomic-scale thickness is that they can serve as an intercalation host.", "Namely, various guest elements can be accommodated in the van der Waals (vdW) gaps between each layer of $TX_2$ , changing the physical properties of the system dramatically.", "In particular, when 3$d$ transition metal atoms ($M$ ) are intercalated, a variety of magnetic states such as helical spin states [5], [6], [7], [8], [9], [10], half-metallic states [11], noncollinear antiferromagnetic states [12], [13], anisotropic in-plane ferromagnetic states [14], [15] emerges, for which intriguing transport phenomena such as the anomalous Hall effect  [16], [17], [18] and crystalline Hall effect  [19] have been investigated intensively.", "It is an interesting question whether such various magnetic states and properties realized in the intercalated TMDs can be reproduced from first principles and described/understood in terms of a simple model.", "It is also a non-trivial challenge to predict unknown magnetic properties for compounds that are yet to be synthesized.", "For these problems, recently, several ab initio studies have been performed.", "For example, a calculation based on density functional theory (DFT) has successfully shown that the most stable state in $\\mathrm {\\textit {M}_{1/3}NbS_{2}}$ where $M$ =(Fe, Co) has a noncoplanar magnetic structure for which the topological Hall effect is expected to be observed [20].", "For $M$ =(Cr, Mn, Fe), effective spin models were derived from first principles, and the origin of the characteristic helical magnetic structure has been discussed [21].", "However, the general chemical trend of the host- and guest-dependence of the magnetic property of the intercalated TMDs is yet to be fully understood, and a systematic study for various host $TX_2$ and guest $M$ with different composition ratios is highly desired.", "To determine the most stable magnetic structure for a given material, there are several established approaches.", 
"One is of course a calculation based on spin DFT (SDFT), which usually works successfully for transition metal compounds [22].", "However, this approach is numerically expensive and not so efficient when the magnetic unit cell is large.", "Another promising approach is deriving a classical spin model from SDFT calculation for a magnetic state (typically the ferromagnetic state) for which the numerical cost is not so expensive.", "Once a classical spin model is derived, we can determine the stable magnetic structures even when the magnetic unit cell is large.", "The local force method, equivalently called the Liechtenstein formula [23], is often used to construct such effective spin models.", "With this method, we can evaluate the exchange interactions in the spin model by estimating the energy change against spin rotations.", "This formula has been successfully applied to the calculations for the magnetic transition temperatures of transition metals  [24], noncollinear magnets, and magnetic alloys  [25].", "While it was originally formulated for the multiple scattering theory with the Green's functions and implemented in SDFT calculations with the Korringa-Kohn-Rostoker (KKR) theory, it is applicable to the tight-binding model based on ab initio Wannier functions [26], [27], [28].", "In this study, we first performed a systematic SDFT calculation for $M_{x}TX_{2}$ where $M$ = (V, Cr, Mn, Fe, Co, Ni), $T$ = (Nb, Ta), and $X$ = (S, Se) with $x$ = $1/3$ and $1/4$ (48 compounds in total).", "Starting with the calculations for the representative ferromagnetic state of Cr$_{x}TX_{2}$ , we construct classical spin models by applying the local force method to the Wannier-based tight-binding model.", "We then determine the most stable magnetic structure for each material by examining the sign of the exchange interactions.", "In this approach, we discuss the possibility of the intra-layer AF states which are numerically expensive to investigate by SDFT calculation.", "We show that the theoretical results agree well with the magnetic orders experimentally reported.", "Moreover, we find for $x=1/3$ compounds that a simple model can give a unified explanation for the material dependence of the stable spin configuration in terms of the filling of the 3$d$ orbitals of the intercalated transition metals.", "This observation gives us a useful guiding principle to predict magnetic properties of intercalated TMDs which are yet to be synthesized." 
], [ "Method", "Starting from the DFT calculation, we first construct a tight-binding Hamiltonian based on the Wannier function.", "We then applied the Liechtenstein formula [23] and derived an effective spin model.", "We neglect the spin-orbit coupling effect so that spin canting due to the Dzyaloshinsky-Moriya interaction [29], [30] is not considered in the present study.", "Then, the sum of effective interactions ($J_{0}$ ) and each interaction ($J_{ij}$ ) are perturbatively evaluated by rotating a spin from the ferromagnetic state and examining the changes of the total energy.", "Let us first consider the classical Heisenberg Hamiltonian: $H_{\\textrm {s}} = -2\\sum _{\\langle i,j\\rangle }J_{ij}\\mathbf {s}_{i}\\cdot \\mathbf {s}_{j}$ We then introduce $\\delta E_{i}$ as the energy change when we rotate the spin at site $i$ by $\\theta _{i}$ from the ferromagnetic state, and $\\delta E_{ij}$ as the energy change when the spin at site $j$ is also rotated by $\\theta _{j}$ on the same rotation axis.", "It should be noted that $\\delta E_{i}$ and $\\delta E_{ij}$ are directly related with $2\\sum _{j\\ne i}J_{ij}$ and $-2J_{ij}$ , respectively: $\\begin{split}[2]{\\delta E_{i}}{\\theta _{i}} &= 2\\sum _{j\\ne i}J_{ij} \\\\[2]{\\delta E_{ij}}{\\theta _{i}}{\\theta _{j}} &= -2J_{ij}\\end{split}$ Next, we consider the tight-binding Hamiltonian defined as follows, $H_{\\textrm {TB}} = \\sum _{\\langle i,j\\rangle }A_{ij}c^{\\dagger }_{i}c_{j}$ where the indices $i,j$ run over all degrees of freedom that specify the Wannier functions, namely, lattice vectors, sublattices, atomic or molecular orbitals, and spins.", "Using the Green's function for the tight-binding Hamiltonian, we can calculate the energy change due to the spin rotation.", "In the Green's functions formalism, the free energy $F$ of the system (REF ) is expressed as, $F = -T \\sum _{\\omega _{n}}e^{i\\omega _{n}0^{+}}\\textrm {Tr ln}[G^{-1}(i\\omega _{n})].$ where $\\omega _{n} = (2n+1)\\pi /\\beta $ denotes the electronic Matsubara frequency and the Green's function $G$ is given by $G(i\\omega _{n})^{-1}=(i\\omega _{n}\\delta _{ij}-A_{ij})$ .", "If we rotate the spins as in the case of the Heisenberg model, the changes in the free energy, i.e., $\\delta F_{i}$ and $\\delta F_{ij}$ , are given by the following equations: $[2]{\\delta F_{i}}{\\theta _{i}} &=& -2T\\sum _{\\omega _{n}}\\textrm {Tr}_{jl\\sigma }[B_{i}G^{\\uparrow }_{ij}B_{j}G^{\\downarrow }_{ji}] \\nonumber \\\\&&+2T\\sum _{\\omega _{n}}\\textrm {Tr}_{l\\sigma }[B_{i}G^{\\uparrow }_{ii}B_{i}G^{\\downarrow }_{ii}] \\\\[2]{\\delta F_{ij}}{\\theta _{i}}{\\theta _{j}} &=& T\\sum _{\\omega _{n}}\\textrm {Tr}_{l\\sigma }\\left[B_{i}G^{\\uparrow }_{ij}B_{j}G^{\\downarrow }_{ji}\\right]$ where $B$ stands for the effective magnetic field, namely, the spin splitting in the calculation based on the local spin density approximation (LSDA).", "By comparing these expressions with equation (REF ), we can evaluate $J_{ij}$ for the itinerant tight-binding Hamiltonian (eq.", "(REF ))." 
], [ "DFT calculation", "We used the Vienna Ab initio Simulation Package code [31] for SDFT calculations of intercalated TMDs.", "The Perdew–Burke-Ernzerhof exchange-correlation functional  [32] and the projector augmented wave method  [33], [34] were used.", "We show the crystal structures of intercalated TMDs in Fig.", "REF .", "There are two intercalated transition metals per unit cell, which are located in different vdW gaps.", "We see that the intercalated transition metals are surrounded by a distorted octahedron formed by chalcogen atoms.", "We can also see that intercalated transition metals form a hexagonal close-packed lattice when $x=1/3$ and a triangular lattice stacked along the $c$ -axis when $x=1/4$ .", "We performed structural optimization for all target compounds.", "In this optimization, we assumed that the spin configuration is ferromagnetic (FM), and the lattice parameters and internal coordinates were optimized, keeping the original space group symmetries $P6_322$ for $x=1/3$ and $P6_3/mmc$ for $x=1/4$ .", "For materials having no magnetization in the SDFT calculations, we performed SDFT+$U$ calculations.", "The value of $U$ was set as $U=3$ eV for Fe$_{1/4}$ XSe$_2$ ($X=$ Nb and Ta), the Co-, and Ni-intercalated compounds.", "As mentioned before, there are two intercalated transition metals per unit cell (see, Fig.", "REF ).", "In the SDFT calculations, we focus on the magnetic structures that do not expand the unit cell.", "Thus, we consider only the antiferromagnetic (AFM) state having the interlayer antiferromagnetic and intralayer ferromagnetic structure.", "The energies of FM and AFM states were calculated for optimized structures and compared with each other.", "The energy cut-off for the plane-wave basis set was set to 500 eV, and a 12$\\times $ 12$\\times $ 8 k-point grid for the primitive cell of the intercalated TMDs was used in the structural optimization and the calculations of the ground state energies." ], [ "Construction of Wannier-based tight-binding model", "Wannier functions were constructed by using the Wannier90 code [35].", "In Fig.", "REF , we show the band structures of $\\mathrm {Cr_{1/3}NbS_{2}}$ as a representative example.", "The inner window to fix the low energy band dispersion was set from -8 to 2 eV.", "The energy cut-off for the plane-wave basis was set to 500 eV.", "A 12$\\times $ 12$\\times $ 8 $k$ -point grid was used in calculating FM reference states, and a 6$\\times $ 6$\\times $ 4 sampling $k$ -point grid was used for constructing Wannier functions.", "Figure: Band structures of (upper) majority spin and (lower) minority spin of Cr 1/3 NbS 2 \\mathrm {Cr_{1/3}NbS_{2}} in the ferromagnetic state.", "The energy are measured from the Fermi level.", "Blue lines are calculated from DFT calculations, and red lines are from the Wannier functions." 
], [ "Evaluation of exchange interactions", "We applied the Liechtenstein formula to the Cr-intercalated compounds, and the exchange interactions for the other transition metal-intercalated compounds were evaluated by shifting the Fermi level, which corresponds to the rigid band approximation.", "According to the results of the DFT calculation, only intercalated transition metals have a sizeable magnetic moment, and those of the other atoms are negligibly small in the FM order.", "Thus we ignored interactions other than those between intercalated transition metals, and extracted the spin model, whose interactions are finite only between intercalated transition metals.", "A 8$\\times $ 8$\\times $ 8 k-point grid was used in the evaluation of Eqs.", "(5) and (6).", "Inverse temperature $\\beta $ was set to 500 $\\textrm {eV}^{-1}$ .", "In order to reduce the computational cost, we use the intermediate representation of the Green's function [36], [37] in the Liechtenstein formula." ], [ "Stable magnetic order according to DFT calculation ", "We first summarize the experimentally observed magnetic structures in Table REF (a).", "Table REF (b) shows the results of the DFT calculations, where we compare the energies of the FM and AFM states.", "We can see from Table REF (a) and REF (b) that the experimental magnetic structures in 16 out of the 23 compounds are successfully reproduced in the DFT calculations.", "The remaining 6 compounds except for $\\mathrm {Fe_{1/4}TaS_{2}}$ (namely, $\\mathrm {Co_{1/3}NbS_{2}}$ , $\\mathrm {Co_{1/3}TaS_{2}}$ , $\\mathrm {Cr_{1/4}NbS_{2}}$ , $\\mathrm {Cr_{1/4}NbSe_{2}}$ , $\\mathrm {Mn_{1/4}NbSe_{2}}$ , and $\\mathrm {Fe_{1/4}NbSe_{2}}$ ) are known to be AFM in the experiments but predicted to be FM in the DFT calculation.", "It should be noted that we did not consider intralayer AFM states in the DFT calculation because the magnetic unit cell becomes too large.", "Thus the intralayer magnetic structure is always FM, and only the interlayer magnetic structure can be AFM.", "We will see later in Table REF (c) that we obtain the correct AFM ground states in $\\mathrm {Cr_{1/4}NbS_{2}}$ and $\\mathrm {Cr_{1/4}NbSe_{2}}$ based on the spin model calculations derived by the Liechtenstein formula.", "On the other hand, in the case of $\\mathrm {Fe_{1/4}TaS_{2}}$ , the AFM state is more stable than the FM state in the DFT calculation, while it is FM in the experiment.", "We will discuss this discrepancy in Sec.", "REF .", "Table: Stable magnetic structure in the (a) experiments, (b) DFT calculations, and (c) classical spin model derived by the Liechtenstein formula.", "Letters with an asterisk(*) denote that the theoretical results are not consistent with the experimental results in (a).", "F, HM, and AF stand for ferromagnetic, helimagnetic, and antiferromagnetic structures, respectively." 
], [ "Exchange constant", "We show the filling (the number of 3$d$ electrons in the unit cell) dependence of the interlayer (Figs.", "REF (a), (b)) and intralayer (Figs.", "REF (c), (d)) exchange constants evaluated by the Liechtenstein formula.", "As we described in Sec.", "REF , we start with the most representative case, i.e., the ferromagnetic state for $M$ =Cr.", "We shift the position of the Fermi level and look at the energy change due to a spin rotation.", "While we neglect the detail of the host ($TX_2$ ) dependence on the electronic structure, as we see below, the rigid-band approximation successfully reproduces the overall chemical trend of the experimental results.", "Figures REF (a) and REF (c) are the results for $x=1/3$ , and 3(b) and 3(d) are those for $x=1/4$ .", "Let us first look at the former.", "We see that both the intralayer and interlayer interactions have a similar filling dependence.", "When the number of the 3$d$ electrons is small or large, the exchange constants tend to take a positive small value (FM).", "On the other hand, when the filling is close to half-filling (as in the cases of Mn, Fe, Co), the interactions tend to be negative (AFM).", "This result is consistent with the previous study for the Fe- and Co-intercalated $x=1/3$ system in which noncoplanar AFM structures were shown to be favored (Ref.", "PhysRevMaterials.6.024201) since the hexagonal close-packed lattice is magnetically frustrated when all the nearest neighbor interactions are negative.", "Next, let us move on to the case of $x=1/4$ .", "We see that the interlayer exchange constant shown in Fig.", "REF (b) does not show a significant host ($TX_2$ ) dependence for $M$ =(V, Cr, Mn, Fe).", "We see a similar behavior for the intralayer exchange constant (Fig.", "REF (d)).", "Another distinct feature is that the energy scale of the intralayer exchange constant is much smaller than that of the interlayer exchange constant.", "Namely, the system has a strong coupling along the $c$ axis rather than in the $ab$ plane.", "In Table.", "REF (c), we summarize the stable magnetic structures determined by the sign of the exchange constants.", "Among 23 compounds for which the magnetic structure is determined experimentally, we can say that the theoretical magnetic structures of 18 compounds are consistent with the experiment.", "Here, let us note that both the interlayer and intralayer exchange constants change their sign around $M$ =Mn.", "Thus we do not determine which magnetic order is stable for six compounds with $M$ =Mn.", "For $x=1/3$ , we have a similar problem for $M$ =Ni.", "Namely, for $\\mathrm {Ni_{1/3}NbS_{2}}$ and $\\mathrm {Ni_{1/3}TaS_{2}}$ , at least one of the intralayer or interlayer interactions is close to zero, indicating that these materials are located near the boundary of the FM and AFM states.", "On the other hand, for the case of Fe$_x$ TaS$_2$ , our approach does not reproduce the experimental results.", "For $x=1/3$ , we should note that while SDFT apparently reproduces the ferromagnetic ground state in the experiment [17], the intralayer AFM state is not considered in the calculation.", "Regarding the reason for the disagreement between theory and experiment, we leave it for future study.", "For $x=1/4$ , neither the SDFT calculation nor the Lichtenstein approach reproduces the experimental ferromagnetic ground states.", "One possible reason is the contribution of the orbital magnetization of the intercalated Fe atoms.", "While intercalated Fe atoms are shown to have a 
finite orbital moment of about 33$\\%$ of the spin moment [Ref.", "PhysRevLett.107.247201], the orbital moment is not taken into account in the present calculation." ], [ "Simple interpretation of the material dependence of the exchange constant for $x=1/3$", "As we have seen in Figs.", "REF , the energy scales of the interlayer and intralayer exchange constants are similar to each other for $x=1/3$ but very different for $x=1/4$ .", "This result indicates that while the intercalated TMDs are crystallographically two-dimensional, they are magnetically isotropic (three-dimensional) for $x=1/3$ but anisotropic (quasi-one-dimensional) for $x=1/4$ .", "In this subsection, let us discuss whether the material dependence of the exchange constant for $x=1/3$ can be understood in terms of a simple single-orbital Hubbard model on the Bethe lattice.", "When the Coulomb repulsion (the Hubbard $U$ ) is absent, the system has a semicircular DOS (see the inset of Fig.", "REF ).", "We set the bandwidth $W=2D$ and $U=W$ .", "In Fig.", "REF , we show the filling dependence of $J_{0}(=\\sum _{j}J_{ij})$  [59].", "When the filling is close to 1 (half-filling), the super-exchange mechanism is dominant, and thus $J_0$ takes a negative value.", "On the other hand, when the filling is very low or high, the double-exchange mechanism makes $J_0$ positive.", "Namely, the system is FM for low and high filling but AFM around half-filling.", "Interestingly, this behavior can be seen for both the interlayer and intralayer exchange constants for $x=1/3$ (see Figs.", "REF (a) and (c)).", "Figure: Filling dependence of $J_{0}$ for the single-orbital Hubbard model on the Bethe lattice.", "Figure: (a) DOS and PDOS of the 3$d$ orbitals for $\\textrm {Cr}_{1/3}\\textrm {NbS}_{2}$ .", "The black dotted line is the DOS, and the blue (red) line is the PDOS of the 3$d$ orbitals with the majority (minority) spin.", "(b) Enlarged plot of the PDOS of the 3$d$ orbitals.", "Six vertical black dotted lines denote the Fermi levels of the V-, Cr-, Mn-, Fe-, Co-, and Ni-intercalated TMDs determined by the rigid-band approximation.", "Figure: Spin polarization (difference between the filling of the majority and minority spins) of the 3$d$ orbitals in $M_{1/3}TX_2$ .", "In Fig.", "REF , we also plot the spin polarization (i.e., the difference between the filling of the majority and minority spins) as a function of the filling.", "We see that the AFM interaction is strongest when the spin polarization is largest.", "It is interesting to see whether this behavior can also be seen for the exchange constants of $M_{1/3}TX_2$ .", "As a typical case, let us look into the case of $\\textrm {Cr}_{1/3}\\textrm {NbS}_{2}$ .", "In Fig.", "REF , we show the total DOS and the partial DOS (PDOS) of the 3$d$ orbitals.", "The vertical black dotted lines in Fig.", "REF (b) denote the Fermi level ($E_F$ ) for $M$ =V, Cr, Mn, Fe, Co, and Ni from the left, respectively.", "We see that when $E_F=0$ (i.e., the case of $M$ =Cr), the minority spin is almost empty.", "When $E_F$ is higher than that of Mn ($\\sim 0.4$ eV), the minority spin starts to be occupied.", "Thus, the spin polarization takes its maximum between $M$ =Mn and Fe.", "To make this situation clearer, in Fig.", "REF , we plot the spin polarization for $M_{1/3}$ NbS$_2$ together with the results for the other $M_{1/3}TX_2$ .", "From these plots, we expect that the AFM interaction becomes strongest for $M$ =Mn or Fe, and interestingly, this is indeed the case, as seen in Figs.", "REF (a) and (c)." 
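The filling argument of this subsection can be reproduced with a short numerical sketch: for two rigidly exchange-split semicircular bands (half bandwidth $D$), the spin polarization is maximal once the majority band is nearly full while the minority band is still nearly empty. The splitting $\Delta$ below is an assumed illustrative value, not a fitted one.

```python
import numpy as np

D = 1.0        # half bandwidth, W = 2D as in the text
Delta = 0.8    # assumed rigid exchange splitting between the spin bands (eV)

def filling(Ef, center):
    """Integrated semicircular DOS (per spin) up to Ef for a band at `center`."""
    x = np.clip((Ef - center) / D, -1.0, 1.0)
    return 0.5 + (x * np.sqrt(1.0 - x**2) + np.arcsin(x)) / np.pi

Ef = np.linspace(-2.0, 2.0, 801)
n_maj = filling(Ef, -Delta / 2)          # majority band shifted down
n_min = filling(Ef, +Delta / 2)          # minority band shifted up
pol = n_maj - n_min                      # spin polarization vs. Fermi level
print(f"polarization is maximal at E_F = {Ef[np.argmax(pol)]:+.2f} eV")
```

In this toy picture the polarization maximum coincides with the filling window where the super-exchange-like AFM coupling is strongest, mirroring the trend found between $M$=Mn and Fe.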
], [ "Conclusion", "By means of first-principles calculations based on SDFT and ab initio derivation of the classical spin model based on the Liechtenstein method, we systematically investigated the material dependence of the magnetic interactions in 48 intercalated TMDs $M_xTX_2$ , in which a variety of magnetic structures is realized.", "For both $x$ =1/3 and $x$ =1/4, our calculations overall succeeded in reproducing the experimental results, and especially for $x$ =1/3, we found that the intercalated guest-atom dependence can be simply understood in terms of the filling of the 3$d$ orbitals.", "The present result will provide a useful guideline to predict magnetic structures in compounds which has not been synthesized." ], [ "Acknowledgements", "We would like to thank Masaki Nakano for illuminating discussions.", "We acknowledge the financial support by Grant-in-Aids for Scientific Research (JSPS KAKENHI) Grant No.", "JP21H04437, JP21H04990 and JP19H05825.", "T. N. was supported by JST, PRESTO Grant Number JPMJPR20L7, Japan." ] ]
2210.07740
[ [ "Multisymplectic Constraint Analysis of Scalar Field Theories,\n Chern-Simons Gravity, and Bosonic String Theory" ], [ "Abstract The (pre)multisymplectic geometry of the De Donder--Weyl formalism for field theories is further developed for a variety of field theories including a scalar field theory from the canonical Klein-Gordon action, the electric and magnetic Carrollian scalar field theories, bosonic string theory from the Nambu-Goto action, and $2+1$ gravity as a Chern-Simons theory.", "The Lagrangians for the scalar field theories and for $2+1$ Chern-Simons gravity are found to be singular in the De Donder--Weyl sense while the Nambu-Goto Lagrangian is found to be regular.", "Furthermore, the constraint structure of the premultisymplectic phase spaces of singular field theories is explained and applied to these theories.", "Finally, it is studied how symmetries are developed on the multisymplectic phase spaces in the presence of constraints." ], [ " colorlinks=true, linkcolor=black, urlcolor=blue, citecolor=black same The (pre)multisymplectic geometry of the De Donder–Weyl formalism for field theories is further developed for a variety of field theories including a scalar field theory from the canonical Klein-Gordon action, the electric and magnetic Carrollian scalar field theories, bosonic string theory from the Nambu-Goto action, and $2+1$ gravity as a Chern-Simons theory.", "The Lagrangians for the scalar field theories and for $2+1$ Chern-Simons gravity are found to be singular in the De Donder–Weyl sense while the Nambu-Goto Lagrangian is found to be regular.", "Furthermore, the constraint structure of the premultisymplectic phase spaces of singular field theories is explained and applied to these theories.", "Finally, it is studied how symmetries are developed on the (pre)multisymplectic phase spaces in the presence of constraints.", "Key words: Classical field theories, multisymplectic formulation, Lagrangian and Hamiltonian formalisms, Symmetries, Scalar field theory, Carrollian theories, Bosonic String theory, $p$ -branes, Chern–Simons gravity." ] ]
2210.07698
[ [ "High-harmonic generation in liquids with few-cycle pulses: effect of\n laser-pulse duration on the cut-off energy" ], [ "Abstract High-harmonic generation (HHG) in liquids is opening new opportunities for attosecond light sources and attosecond time-resolved studies of dynamics in the liquid phase.", "In gas-phase HHG, few-cycle pulses are routinely used to create isolated attosecond pulses and to extend the cut-off energy.", "Here, we study the properties of HHG in liquids, including water and several alcohols, by continuously tuning the pulse duration of a mid-infrared driver from the multi- to the sub-two-cycle regime.", "Similar to the gas phase, we observe the transition from discrete odd-order harmonics to continuous extreme-ultraviolet emission.", "However, the cut-off energy is shown to be entirely independent of the pulse duration.", "This observation is confirmed by ab-initio simulations of HHG in large clusters.", "Our results support the notion that the cut-off energy is a fundamental property of the liquid, independent of the driving-pulse properties.", "Combined with the recently reported wavelength-independence of the cutoff, these results confirm the direct sensitivity of HHG to the mean-free paths of slow electrons in liquids.", "Our results additionally imply that few-cycle mid-infrared laser pulses are suitable drivers for generating isolated attosecond pulses from liquids." ], [ "Introduction", "The process of high-harmonic generation (HHG) has been successfully used as a probe method for understanding strong-field dynamics in gases and solids with inherent attosecond time resolution.", "This has led to the possibility of imaging and reconstruction of molecular orbitals [1], [2], [3], time-dependent chirality [4], [5], inter-band and intra-band electron dynamics [6], [7], [8], [9], [10] and probing of charge migration [11], [12], [13], to name a few.", "The basis of such applications of HHG to high-harmonic spectroscopy (HHS) is a clear understanding of the HHG mechanism.", "This has been achieved in gases through systematic studies of fundamental observables like the cut-off energy as a function of laser parameters, such as intensity, wavelength, pulse duration, etc.", "[14], [15], [16], [17], [18].", "However, most (bio)chemically-relevant reactions occur in the liquid phase.", "As a result, to apply HHS to liquids, it is not only important to understand the HHG mechanism in liquids but also to systematically investigate the harmonic spectrum dependence on various experimental parameters for identifying features that are laser driven and those that originate from structural properties of the liquid environment.", "Previous works on HHS in bulk liquids have been demonstrated to be a bright source of extreme-ultraviolet radiation [19], [20], [21].", "Further, in the multi-cycle regime, the cut-off energy $E_{\\rm c}$ of the liquid harmonic spectra as defined by Lewenstein et al.", "[22], i.e.", "the end of the plateau region, has been shown to be independent of wavelength and the laser intensity [23].", "These results have been successfully explained within a scattering-limited trajectory model that shows the harmonic cut-off energy to be limited by a sample-characteristic mean-free path [23].", "In addition, the maximum harmonic energy ($E_{\\rm max}$ ) in the multi-cycle regime has been shown to be linearly dependent on the electric field amplitude [19], [23].", "In remarkable contrast, the maximum energy of harmonic spectra (designated as cut-off energy) in [24] 
attributed to liquid-phase isopropanol, obtained in the few-cycle regime, has been reported to be linearly dependent on the laser intensity, with cut-off energies as high as $\\sim $ 50 eV.", "Both properties are reminiscent of gas-phase HHG on the one hand, and contrast with previous work on liquid-phase HHG [19], [25], [21], [23] on the other.", "These observations have led the authors of Ref.", "[24] to the conclusion that the electron-scattering cross sections of liquid isopropanol were significantly reduced compared to the isolated molecule.", "However, in an earlier work by some authors of the present work utilizing few-cycle pulses of an 800 nm driver, an extension of the cut-off energy was not observed [25].", "These controversies further highlight the need for additional systematic studies of HHG in the liquid phase under varying laser conditions.", "In this work, we demonstrate the influence of the pulse duration on liquid-phase HHG by presenting back-to-back measurements of liquid- and gas-phase high harmonics in the sub-two-cycle regime at a laser wavelength of 1.8 $\\mu m$ .", "Our results clearly show that the cut-off energy ($E_{\\rm c}$ ) of the liquid-phase high-harmonic spectrum remains pulse-width independent, with a very weak dependence on intensity (if any), throughout the transition from the multi-cycle ($\\sim 50$ fs) to the sub-two-cycle ($\\sim 11.5$ fs) regime.", "In comparison, the gas-phase harmonics show the expected linear dependence on the laser intensity for both $E_{\\rm c}$ and $E_{\\rm max}$ .", "We also demonstrate the pulse-width independence of $E_{\\rm c}$ to be a general property of the harmonic spectra generated from ethanol, isopropanol and heavy water (D$_2$ O).", "Our work provides a novel and completely independent confirmation of the fact that $E_{\\rm c}$ in liquid-phase high-harmonic spectra is a fundamental property of the liquids, limited by the electron mean-free paths, and is only weakly influenced by laser parameters, such as wavelength, pulse duration or intensity.", "These observations are confirmed by ab-initio calculations of HHG in liquids [26].", "Consistent with the scattering model introduced in Ref.", "[23], the experimental and numerical results demonstrate that even in the few-cycle regime, the experimental results can be fully explained in terms of electron scattering as the limiting factor determining $E_{\\rm c}$ for harmonics generated in the liquid phase.", "In addition, as inferred from the continuity of the harmonic spectra, the measurement suggests liquid HHG as a source of ultrashort extreme-ultraviolet pulses at lower threshold intensities in comparison to the gas phase.", "Figure: (A) Schematic of the experimental setup.", "Laser pulses with a central wavelength of 1800 nm, a pulse energy of 800 $\\mu $ J and a pulse width of $\\sim $ 43 fs are focused into a HCF of 703 $\\mu $ m inner diameter filled with 2.4 bar Ar.", "The exiting broadened pulse passes through a 2-mm fused-silica window before being focused on the flat-jet target for generating high harmonics.", "The harmonics pass through a slit into the XUV spectrometer that diffracts the different orders onto an MCP backed with a phosphor screen.", "The gas-phase sample is delivered into the laser beam through a heatable bubbler coupled to a nozzle.", "This setup is mounted on the same 3D manipulator as the liquid jet, such that back-to-back measurements of liquid- and gas-phase samples can be realized by a lateral translation of 2.5 cm.", "(B) A comparison of the unbroadened IR 
spectrum obtained from an evacuated fiber (blue line) and the broadened IR spectrum in the presence of 2.4 bar Ar (orange line).", "(C) The transient-grating FROG trace of the compressed laser pulses.", "(D) Comparison of the FWHM pulse widths measured using FROG for the initial TOPAS output (blue line), at the output of the fiber (orange line) and after the 2 mm FS window (yellow line).", "FWHM widths obtained from Gaussian fits are indicated in the plot." ], [ "Experimental Setup", "Figure REF (A) shows a schematic of the experimental setup.", "A commercial 0.8 $\mu $ m, kHz Ti:sapphire laser coupled with a HE-TOPAS optical parametric amplifier is used to generate 800 $\mu $ J laser pulses centered at 1.8 $\mu $ m with a duration of 43 fs (blue trace in Figure REF (D)).", "These pulses are then coupled into a hollow-core fiber (HCF) of 703 $\mu $ m inner diameter with a lens of 1.5 m focal length.", "The HCF is filled with 2.4 bar of Ar to generate a broadened optical spectrum, as shown by the orange trace in Figure REF (B).", "A pulse width of $\sim 25$ fs is obtained at the output of the HCF (orange trace in Figure REF (D)).", "Further compression is provided by the 2 mm fused-silica window of the experimental chamber.", "This results in a compressed pulse of $\sim $ 11.5 fs full-width at half maximum (FWHM), as shown by the yellow trace in Figure REF (D), using a Transient-Grating Frequency-Resolved Optical Gating (TG-FROG) [27] measurement (Figure REF (C)).", "Figure REF (B) shows that the IR spectrum for the case of an evacuated fiber is unbroadened, corresponding to a multi-cycle pulse with a measured duration of $\sim 43$ fs, i.e.", "a Fourier-limited pulse.", "The harmonic spectrum generated from such a multi-cycle pulse is expected to consist of discrete harmonics as a consequence of the periodicity of the HHG process.", "As one approaches the single-cycle regime, the discrete harmonic spectrum is expected to evolve into a continuous one [18].", "These pulses are then focused on a liquid flat-jet of $\sim 1 \mu $ m thickness with a concave mirror of 40 cm focal length.", "The generated harmonics are diffracted by an XUV grating onto a multi-channel plate (MCP) with a phosphor screen and an optical CCD camera for detection.", "The flat-jet, formed in the colliding-jet geometry [19], is mounted on an XYZ manipulator for finer adjustment with respect to the laser beam.", "For a back-to-back measurement of the gas-phase harmonics, a heatable bubbler is mounted on the same XYZ stage at a distance of 2.5 cm from the flat-jet, which is five times the lateral extension of the flat jet.", "We can therefore measure the gas-phase harmonics by a simple translation along the lateral direction.", "The MCP is maintained at a voltage of -1.6 kV and the phosphor at 3.3 kV for the detection of the liquid-phase harmonic spectra with an acquisition time of 200 ms. 
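A transform-limit estimate of the kind underlying the FWHM comparison above can be obtained directly from a measured spectrum by assuming a flat spectral phase. The sketch below shows this; spectrum.txt is a hypothetical two-column file (wavelength in nm, spectral intensity) standing in for the broadened HCF output, and a single, well-isolated pulse is assumed when reading off the FWHM.

```python
import numpy as np

wl, S = np.loadtxt("spectrum.txt", unpack=True)   # wavelength (nm), intensity
c = 299.792458                                    # speed of light in nm/fs
f = c / wl                                        # frequency axis (1/fs)
order = np.argsort(f)
f_grid = np.linspace(f[order][0], f[order][-1], 1 << 14)
S_f = np.interp(f_grid, f[order], (S * wl**2 / c)[order])  # Jacobian |dlambda/df|
E_f = np.sqrt(np.clip(S_f, 0.0, None))            # flat phase -> Fourier limit

N = 1 << 16                                       # zero-pad for time resolution
E_t = np.fft.fftshift(np.fft.ifft(E_f, n=N))
I_t = np.abs(E_t)**2
t = np.fft.fftshift(np.fft.fftfreq(N, d=f_grid[1] - f_grid[0]))  # time (fs)
above = t[I_t >= 0.5 * I_t.max()]
print(f"transform-limited FWHM ~ {above[-1] - above[0]:.1f} fs")
```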
Each data set is then averaged over 30 such spectra.", "As the signal yield of the gas-phase harmonics is lower, the MCP voltage is increased to -1.7 kV with an acquisition time of 600 ms, and 30 spectra are averaged for the gas-phase data.", "Figure: (A) A comparison of the high-harmonic spectrum from the uncompressed laser pulse (IR beam passing through the evacuated fiber, blue line) and the compressed sub-two-cycle laser pulse (IR beam passing through the fiber filled with 2.4 bar Ar, orange line).", "(B) A comparison of the harmonic spectrum generated from liquid-phase isopropanol (blue line) and gas-phase isopropanol (green line) using sub-two-cycle 1.8 $\mu $ m, $\sim $ 12 fs laser pulses.", "(C) Harmonic spectra from liquid-phase isopropanol for a range of intensities driven by sub-two-cycle pulses.", "(D) Harmonic spectra from gas-phase isopropanol for a range of intensities driven by sub-two-cycle pulses.", "Each harmonic spectrum (both for the liquid and the gas phase) is normalized to its maximal signal intensity.", "The maximal intensity is limited by the onset of plasma generation in the flat-jet target." ], [ "Observation of high-harmonic generation from bulk liquids using sub-two-cycle pulses", "Figure REF (A) shows a comparison of the harmonic spectrum obtained from the multi-cycle pulse (blue line) and from a sub-two-cycle pulse (orange line), where a clear transformation of the discrete harmonic spectrum into a continuous one is observed.", "For both of these spectra the laser intensity is kept below the onset of plasma generation.", "To measure a systematic intensity dependence of the harmonic spectra in the liquid phase, it is essential to eliminate possible gas-phase contributions from the liquid harmonic spectrum.", "As the cut-off energies for gases scale linearly with intensity, such contributions may introduce a spurious linear dependence of the harmonic energy on the laser intensity.", "Generally, a reference gas spectrum can be obtained by simply translating the flat-jet laterally [23], [19], [25].", "As evaporation from the flat-jet creates a gas-phase background, this method is sufficient for acquiring gas-phase spectra up to 1500 nm wavelength [19].", "However, we observed that at 1800 nm (both in the multi-cycle and in the single-cycle regime) no detectable gas-phase harmonic signal was present when the laser was focused 0.5 mm (about the lateral size of the flat jet) from the center of the jet, where the gas signal was measured for shorter wavelengths [23].", "This is because, as we shift to longer wavelengths, the photon energy decreases and the incident intensity is not sufficient to generate a detectable signal from the density of the evaporating gas.", "This can simply be observed from the acquisition times of the signal.", "For harmonic spectra obtained with an 800-nm driver, a gas-phase harmonic spectrum (acquired by a lateral 0.5 mm shift from the jet center) requires an acquisition time of 200 ms, whereas a similar signal intensity for the liquid-phase harmonic spectra takes only 20 ms [23].", "For the current 1800-nm measurements, a collection time of 200 ms was required to obtain the liquid spectrum with a decent signal-to-noise ratio.", "In accordance with the previous measurements, this would require an acquisition time of 2000 ms for the gas-phase spectrum, which is beyond the maximum acquisition time of the system.", "However, this observation, also reported independently for few-cycle measurements on liquid isopropanol [24], cannot completely rule out gas-phase contributions to the liquid harmonic spectra.", 
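For comparison with the gas-phase behavior discussed here, the expected three-step-model scaling can be written down in a few lines. The ponderomotive-energy formula is standard; the ionization energy of isopropanol used below is our assumption and enters only as an additive constant.

```python
# Expected gas-phase cut-off (three-step model): E_c = Ip + 3.17 Up,
# with Up[eV] = 9.33e-14 * I[W/cm^2] * (lambda[um])^2.
lam_um = 1.8          # driving wavelength in micrometres
Ip = 10.1             # eV, approximate ionization energy of isopropanol (assumed)

def gas_cutoff(intensity_Wcm2):
    Up = 9.33e-14 * intensity_Wcm2 * lam_um**2
    return Ip + 3.17 * Up

for I in (1e13, 2e13, 4.4e13):
    print(f"I = {I:.1e} W/cm^2 -> gas-phase cut-off ~ {gas_cutoff(I):.0f} eV")
```

This linear intensity scaling is exactly what is observed for the gas-phase spectra below, and it contrasts with the intensity-independent liquid-phase cut-off of ~11.3 eV.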
"At 1800 nm and the chosen focusing conditions, the Rayleigh range amounts to $\\sim 1$ cm.", "Therefore, at sufficiently high laser intensities, the laser beam passing through the jet has enough intensity to generate harmonics from the gas layer adjacent to the back surface of the liquid jet.", "The gas density indeed monotonically decreases with the distance from the liquid-gas interface.", "In addition, the driving laser itself causes heating of the liquid, which increases evaporation resulting in an increased gas density for the next arriving laser pulse.", "This effect is absent when we laterally shift the flat-jet out of the interaction region.", "Therefore, it is essential to do a back-to-back measurement of both gas and liquid phases separately to exclude gas-phase contributions to the liquid-phase harmonics.", "Isopropanol was chosen as the target for the liquid and gas-phase comparative study due to its higher vapor pressure as compared to water, which makes it easier to form a denser gas jet.", "Figure REF (B) shows a comparison of the harmonic spectrum generated from isopropanol in gas- (green line) and liquid-phase (blue line) with a sub-two-cycle 1800 nm laser pulse with a peak intensity of 4.4$\\times $ 10$^{13}$ W/cm$^2$ .", "Since the high-harmonic signal from gases is dependent on the position of the target with respect to the laser focus [28], [29], the comparative study was performed at a position where the maximum signal intensity of gas-phase harmonics was observed.", "As expected, we observe that the liquid $E_{\\rm c}$ of $\\sim 11.3$ eV(determined using the technique elaborated in section 3.3 consistent with the Lewenstein definition[22]).", "Further, it is observed that the maximum photon energy for the gas-phase spectrum extends up to 35 eV, whereby the detection limit is determined by the background signal.", "Figure: (A) Quadratic fit of experimentally measured FROG data to calibrate pulse width (FWHM) as a function of glass thickness.", "The data points at -2 mm glass thickness correspond to the FROG trace measurement at the output of the HCF before transmission through the 2 mm FS experimental chamber window.", "(B) Harmonic spectra from liquid-phase D 2 _{2}O for different glass thicknesses.", "The normalized harmonic spectra at different pulsewidths have been scaled down by factors of 1/2 (14 fs), 1/4 (16 fs), 1/8 (19 fs), 1/16 (26 fs), 1/32 (31 fs), 1/64 (37 fs) and 1/128 (43 fs) respectively.The data sets were taken at a fluence of ∼\\sim 0.51 J/cm 2 ^2.The incident laser intensity is varied with the help of an automated iris.", "At each iris position the focal spot size is imaged and the transmitted beam power is measured.", "As the harmonic spectra are acquired at a distance of 11 mm from the focus position, the 1/e$^2$ radius is calculated using $w(z) = w_o\\sqrt{1+(\\frac{z}{z_R})^2}$ where $w_o$ is the 1/e$^2$ radius of the beam at focus, $z$ is the distance from the focus, $z_R$ is the Rayleigh length for specific $w_o$ and $w(z)$ is the 1/e$^2$ radius of the beam at distance $z$ from the focus.", "For a gaussian beam 99$\\%$ of the beam power is contained in an area of radius w'(z)=1.52$w(z)$ .", "The respective intensity of each aperture diameter is calculated as $I = \\frac{2P}{\\pi w^{\\prime }(z)^2 t}$ where $P$ is the total power transmitted through the iris, $t$ is the FWHM pulse duration of $\\sim $ 12 fs.", "Figure REF (C) shows the normalized harmonic spectra from liquid isopropanol over a range of laser intensities.", "The actual 
intensity on the jet should agree with the intensities calculated from the above measurement to within 20$\%$ .", "It is observed that in the liquid phase, harmonic generation occurs for intensities as low as 1$\times $ 10$^{13}$ W/cm$^2$ and that $E_{\rm c}$ is constant at 11.3 eV, independent of the incident intensity.", "Another interesting feature is that with increasing intensity, the liquid harmonic spectra transition from discrete to continuous.", "In comparison, Figure REF (D) shows the normalized harmonic spectrum of gas-phase isopropanol for similar intensities.", "It is observed that gas-phase harmonics appear only above a threshold intensity of 4$\times $ 10$^{13}$ W/cm$^2$ and that, as expected, $E_{\rm c}$ shows a linear dependence on the incident laser intensity.", "To observe the effect of the pulse duration on the cut-off energy of liquids, a systematic variation of the pulse duration was performed using a pair of fused-silica wedges, where one wedge was fixed and the other was translated using an automated stage, which varied the amount of additional glass in the beam path from $\sim $ 0.51 mm to $\sim $ 1.72 mm.", "Beyond this, an additional 2 mm fused-silica window was added in the beam path to vary the glass thickness from 2.5 mm to 3.72 mm over the same range of wedge displacements.", "Figure REF (A) shows the pulse duration at each glass thickness, measured using the TG-FROG setup.", "The black dashed line indicates the quadratic fit of the pulse duration as a function of glass thickness, which is used to calibrate the pulse durations in Figure REF (B).", "Figure REF (B) shows the harmonic spectra obtained from liquid D$_{2}$ O for different pulse durations.", "For clarity, the normalized harmonic spectra at 14 fs, 16 fs, 19 fs, 26 fs, 31 fs, 37 fs and 43 fs have been scaled down by factors of 1/2, 1/4, 1/8, 1/16, 1/32, 1/64 and 1/128, respectively.", "A clear transition from a slightly modulated continuous spectrum to a sharply peaked odd-only harmonic spectrum at longer pulse durations is observed.", "An interesting phenomenon is observed at intermediate pulse durations, e.g.", "in the green and blue curves of Fig.", "REF (B), corresponding to pulse durations of 26 fs and 31 fs, respectively.", "In these cases, the harmonic peaks display a substructure that is best visible around 12 eV.", "This substructure is attributed to the interference of the direct, forward-propagating emission from the bulk liquid with a replica that has been internally reflected twice before exiting the thin liquid sheet.", "This assignment is supported by the photon-energy intervals of the observed structure and our previous work on the subject [20].", "Importantly, we find here that these substructures disappear for shorter pulse durations, resulting in the generation of truly continuous XUV spectra.", "This, in turn, indicates that isolated attosecond pulses with a good temporal contrast could be generated from liquids." 
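The intensity calibration described above translates into a short calculation; all input values below (focal radius, pulse energy) are illustrative assumptions chosen only to be consistent with the quoted ~1 cm Rayleigh range and ~12 fs pulses, not measured values from this work.

```python
import numpy as np

lam = 1.8e-6        # wavelength (m)
w0 = 75e-6          # 1/e^2 focal radius (m), assumed (gives z_R ~ 1 cm)
z = 11e-3           # distance of the jet from the focus (m)
E_pulse = 50e-6     # energy per pulse transmitted through the iris (J), assumed
t_fwhm = 12e-15     # FWHM pulse duration (s)

z_R = np.pi * w0**2 / lam                      # Rayleigh length
w = w0 * np.sqrt(1.0 + (z / z_R)**2)           # 1/e^2 radius at the jet
w99 = 1.52 * w                                 # radius containing 99% of the power
I = 2.0 * E_pulse / (np.pi * (1e2 * w99)**2 * t_fwhm)   # peak intensity, W/cm^2
print(f"z_R = {z_R*1e3:.1f} mm, w(z) = {w*1e6:.0f} um, I ~ {I:.1e} W/cm^2")
```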
], [ "Ab-initio calculations", "From the theoretical perspective, we employed the recently developed methodology for calculating the HHG response of liquids through the use of finite-sized clusters [26].", "We follow the prescription in ref.", "[26] and use 54-molecule water clusters for calculating the nonlinear optical response, which is further averaged over 12 orientations.", "The cluster approach attempts to passivate the surface contribution to the harmonic response by an additional absorbing layer placed outside of the cluster, and by freezing the surface state dynamics.", "Figure: (A) Comparison of high harmonic spectrum for liquid and gas phase H 2 _2O from ab-initio calculations for 1800 nm wavelength, 10.5 fs pulse duration at an intensity of 6.3×\\times 10 13 ^{13} W/cm 2 ^2.", "(B) Harmonic spectra for different 6 fs, 10.5 fs and 24 fs pulse durations for liquid phase H 2 _2O obtained from ab-initio calculations.", "The calculations were performed at a fluence of 0.66 J/cm 2 ^2.The resulting HHG response approximately corresponds to that of the bulk liquid.", "We model the laser-matter interaction in the length gauge with the following electric field: $E(t) = E_0 f(t) \\cos (\\omega t +\\phi _{CEP})$ where $E_0$ is the field amplitude, $\\omega $ is taken to correspond to an 1800 nm driving wavelength, $f(t)$ is an envelope function taken as a super-sine form[30], which roughly corresponds to a super-Gaussian but is numerically more convenient, and $\\phi _{CEP}$ is the carrier-envelope phase (CEP).", "We modeled several different pulse durations corresponding to different HHG regimes.", "For the long pulse limit we used a FWHM of 24 fs and a peak pulse intensity of 2.625$\\times $ 10$^{13}$ W/cm$^2$ , and the CEP was set to zero.", "The shorter pulse durations employed a peak pulse intensity that was increased linearly with respect to the pulse duration, roughly corresponding to the experimental settings.", "The response was further averaged over three CEPs of values of 0, $\\pi /4$ , and $\\pi /2$ , to correspond to the experimental set-up which is not CEP-stabilized.", "Figure REF (B) shows the numerically obtained spectra for several pulse durations, which correspond very well with the experimental results in Figure REF (B).", "Indeed, the cut-off in the ab-initio calculations is independent of the pulse duration, and is very weakly dependent on the pulse peak power (increasing by only  1 eV when the peak intensity is increased by a factor of 4).", "Moreover, the calculations show that the harmonic peak contrast indeed decreases when employing shorter pulses, as observed experimentally.", "Overall, these results agree well with our previously developed semi-classical trajectory model [23] that predicts a similar weak dependence on pulse peak power due to a suppression of longer electron trajectories through mean-free-path-limited scattering channels in the liquid.", "Such a strong suppression is not observed in the gas-phase calculations, which follow the expected simple three-step-model results as seen in the gas-phase simulations(bottom-panel) of Figure REF (A).", "Figure: (A) Measured temporal profiles of the sub-two-cycle pulse after it propagates through 0.51 mm FS (blue line) and 3.72 mm (orange line).", "FWHM widths obtained from Gaussian fits are indicated in the plot.", "(B-D) Harmonic spectra from liquid-phase ethanol (B), isopropanol (C) and D 2 _2O (D) for different glass thicknesses Inset in Fig.", "5(D) shows the same spectra as Fig.", "5(D) but now 
normalized.", "The 13.3 fs spectrum is seen to envelop the 43 fs spectrum, indicating a negligible shift in the cut-off energy.", "The black dashed lines in all panels of Fig. 5 are guides to the eye, while the green dashed lines indicate the cut-off energy determined for the multi-cycle spectrum using the fitting technique described in the manuscript." ], [ "Pulse width dependence of cut-off energy for different liquids", "Finally, we also investigated whether the liquid-phase $E_{\rm c}$ changes as a function of pulse duration for different liquids.", "For this purpose we compare the near-continuous harmonic spectrum obtained for a glass thickness of 0.51 mm with the harmonic spectrum obtained for the maximum glass thickness of 3.72 mm.", "Figure REF (A) shows the temporal profiles of the laser pulses measured using FROG for these two glass thicknesses at a fluence of $\sim $ 0.51 J/cm$^2$ .", "Figures REF (B), REF (C) and REF (D) show that $E_{\rm c}$ is independent of the pulse duration for liquid-phase ethanol, isopropanol and heavy water, with cut-off energies amounting to 11.1 eV, 11.3 eV and 12.9 eV, respectively, with an accuracy of one visible harmonic order.", "To determine the cut-off energy (the end of the plateau region) accurately in the discrete multi-cycle liquid harmonic spectrum and to reduce human errors, we use the fitting approach elaborated in Supplementary Figure S2 of [23].", "In brief, the methodology is as follows.", "A typical harmonic spectrum (beyond the perturbative regime) for a multi-cycle laser pulse includes a plateau region with harmonic peaks of comparable signal strength, followed by a cut-off region where the harmonic yield falls exponentially as a function of harmonic order.", "This is observed as a linear decline on a log-linear scale, as shown in Figure REF (B-D).", "We perform a linear fit of the logarithm of the signal at each harmonic peak in the cut-off region as a function of the harmonic energy (represented by the slanted black dashed lines in the orange curves of Figure REF (B-D)).", "The black dashed lines parallel to the energy axes in the orange curves of Figure REF (B-D) represent the average intensity value of the plateau harmonics.", "The intersection of these two lines (the average intensity of the plateau harmonics and the linear fit of the log-signal of the cut-off harmonics) is denoted the cut-off energy (depicted by the green dashed lines in Figure REF (B-D)).", "However, in the few-cycle regime, where the spectrum is continuous, we do not observe discrete peaks but rather a slight modulation on a continuous spectrum.", "Overlaying the normalized spectrum of the few-cycle case with that of the multi-cycle case (inset of Figure REF (D)), we see that the continuous spectrum (blue line, 13.3 fs) perfectly envelops the discrete spectrum (orange line, 43 fs), indicating a negligible change in the cut-off energy.", "We have previously argued that, in the multi-cycle regime, HHG from liquids is well explained by a scattering-limited trajectory model.", "Based on this trajectory picture, $E_{\rm c}$ is determined by a characteristic electron mean-free path (MFP), and the MFP values extracted in this way are comparable to the electron MFPs in liquid water, methanol, ethanol and isopropanol [23].", "Determining MFPs for few-eV electrons in the liquid phase is an unsolved challenge, both computationally and experimentally.", "As a result, reliable low-energy electron MFPs have so far only been obtained in liquid H$_{2}$ O through a combination of 
experimental and theoretical methods [31], [32].", "For the alcohols, the liquid-phase MFP has been estimated as $1/(n\sigma )$ , where $\sigma $ is the gas-phase elastic scattering cross-section and $n$ is the number density of scattering molecules in the liquid phase.", "The agreement between the MFP values derived from the scattering-limited trajectory model for the multi-cycle liquid-jet experiments [23] and these estimated MFP values proved that scattering plays the decisive role in determining $E_{\rm c}$ for liquids.", "As a result, unlike in gases, $E_{\rm c}$ is observed to be wavelength-independent and approximately intensity-independent in the multi-cycle regime.", "The cut-off energies obtained for the different liquids in the current experiments using sub-two-cycle pulses are also in agreement with the multi-cycle measurements performed at laser wavelengths of 800 nm, 1500 nm and 1800 nm on liquid flat-jets [23] and are demonstrated to be intensity-independent above a certain threshold intensity.", "Furthermore, we demonstrate that $E_{\rm c}$ is also pulse-width independent.", "The observed behavior is consistent with the scattering model of HHG in liquids.", "Since $E_{\rm c}$ is solely dependent on the liquid MFP and independent of the intensity, the increase in pulse duration merely causes the transition of the harmonic spectrum from continuous to discrete.", "These results further demonstrate that even in the sub-two-cycle regime, scattering plays the dominant role in the HHG mechanism in liquids, and the simple theoretical description developed in Ref.", "[23] remains valid." ], [ "Conclusions", "In this work we have demonstrated that the cut-off energy of harmonic spectra generated in the liquid phase is pulse-duration independent.", "The values of the cut-off energy are moreover in agreement with our recent work in the multi-cycle regime [23].", "Through a systematic variation of the pulse duration, we have observed that harmonic spectra from liquids evolve from continuous to discrete with increasing pulse duration.", "Furthermore, with a back-to-back measurement of gas-phase and liquid-phase isopropanol with sub-two-cycle pulses, we showed that for liquids $E_{\rm c}$ remains intensity-independent, as opposed to the expected linear intensity dependence of the gas-phase spectra.", "We also showed that the onset of HHG in bulk liquid isopropanol occurs at much lower peak intensities (1$\times $ 10$^{13}$ W/cm$^2$ ) compared to the gas phase (4$\times $ 10$^{13}$ W/cm$^2$ ).", "The present results thus show that sub-two-cycle mid-infrared driving pulses do generate fully continuous XUV spectra from bulk liquids.", "However, unlike in gas-phase HHG, the cut-off energy is not extended, but identical to that obtained with multi-cycle drivers.", "Combined with our previous results demonstrating the wavelength-independence of the cut-off energy $E_{\rm c}$ in the multi-cycle regime, the present results further confirm that scattering plays a dominant role in HHG in liquids, making $E_{\rm c}$ a fundamental property of the liquid, independent of laser parameters such as wavelength, intensity and pulse duration.", "Owing to the continuous nature of the emitted XUV spectra, which enables a more accurate determination of $E_{\rm c}$ , few-cycle HHG in liquids may become an accurate method for the first all-optical determination of the mean-free paths of slow electrons in liquids, which play an important role in the understanding of radiation damage in aqueous environments.", "We thank Mario Seiler, Michael 
Urban and Andreas Schneider for their excellent technical support.", "We acknowledge financial support from ETH Zürich and the Swiss National Science Foundation through grant 200021-172946.", "This work is supported by the Deutsche Forschungsgemeinschaft (DFG) through the priority program QUTIF (SOLSTICE-281310551) and the Cluster of Excellence `CUI: Advanced Imaging of Matter' - EXC 2056 - project ID 390715994, Grupos Consolidados (IT1249-19), and the Max Planck - New York City Center for Non-Equilibrium Quantum Phenomena.", "The Flatiron Institute is a division of the Simons Foundation.", "AM acknowledges the support of the InterMUST-AoW PostDoc Fellowship.", "ZY acknowledges financial support from an ETH Career Seed Grant No.", "SEED-12 19-1/1-004952-00.", "O.N.", "gratefully acknowledges the generous support of a Schmidt Science Fellowship." ], [ "Competing interests", "The authors declare no conflicts of interest." ] ]
2210.07736
[ [ "The Formation and Structure of Olympic Gels" ], [ "Abstract Different methods for creating Olympic gels are analyzed using computer simulations.", "First ideal reference samples are obtained from freely interpenetrating semi-dilute solutions and melts of cyclic polymers.", "The distribution of pairwise concatenations per cyclic molecule is given by a Poisson-distribution and can be used to describe the elastic structure of the gels.", "Several batches of linear chains decorated with different selectively binding groups at their ends are mixed in the \"DNA Origami\" technique and network formation is realized.", "While the formation of cyclic molecules follows mean field predictions below overlap of the precursor molecules, an enhanced ring formation above overlap is found that is not explained by mean field arguments.", "The \"progressive construction\" method allows to create Olympic gels with a single reaction step from a concentrated mixture of large compressed rings with a low weight fraction short chains that are below overlap concentration.", "This method, however, is limited by the difficulty to obtain a sufficiently high degree of polymerization of the large rings." ], [ "Introduction", "“Olympic gels” (OGs) [1] are networks made of cyclic polymers (also called polymer rings) that are not linked by chemical cross-links.", "Instead, the permanent entanglement between concatenated rings establishes a three dimensional network structure similar to the structure of the Olympic rings.", "Due to the lack of cross-links, these samples were considered to be ideal model networks that might allow for an unperturbed analysis of the effect of entanglements in a polymer network [2].", "Recently [3], it was shown by computer simulations that OGs show unusual swelling properties, since networks of longer strands swell less than networks made of short cyclic molecules at otherwise identical preparation conditions.", "This was explained [3] by the desinterspersion of overlapping non-concatenated rings, that causes a large non-affine contributions to the swelling of these gels at the available low average number of concatenations per ring.", "Even though these OGs are, therefore, a very interesting model system to understand the physics of entanglements, these network have not yet been synthesized.", "The reason for this can be understood from considering the geometry of cyclization: Let us consider a mono-disperse solution of linear $N$ -mers that are long enough for cyclization.", "In order to form an OG, the cyclic molecules have to be at sufficient overlap such that a sufficiently large average number of concatenations per ring, $f_{n}$ , can be established.", "Let $P\\approx \\phi R^{3}/(N\\mbox{v}_{0})$ denote the number of overlapping linear chains of the same degree of polymerization $N$ , monomeric unit volume $\\mbox{v}_{0}$ and size $R$ at a polymer volume fraction of $\\phi $ .", "Gel formation, therefore, requires $P\\gg 1$ and a weight average of at least two connections per molecule.", "However, the probability that the $N$ -mer forms a ring is only of order $\\approx 1/(2P)$ , since there is only one opposite end of the same chain to react with in the pervaded volume.", "Therefore, even if all $P/(2P)$ overlapping rings were mutually concatenated, these rings would not form a gel, since there is less than one connection per ring.", "Interestingly, this discussion depends only on $P$ (which even cancels out) and not explicitly on $\\phi $ and $N$ .", "Therefore, one cannot tune 
$N$ or $\\phi $ such that an OG might be obtained.", "Instead, one has to modify the synthesis to allow for the formation of OG's.", "In this respect, Raphael et al.", "[4] (based upon an idea of De Gennes [1]) and Pickett [5] proposed two different methods for preparing OGs as sketched in Figure REF .", "The first method by De Gennes [1] is called “progressive construction”, since the network structure is built up in several steps.", "First, large ring polymers are made at very dilute concentrations in order to suppress the competing growth of linear chains.", "Next, the solution is concentrated above the overlap concentration of the rings.", "Then, end-functionalized linear chains are allowed to diffuse into the solution, whereby the concentration of linear chains is chosen below the overlap concentration of the linear chains in order to promote cycle formation.", "When the mixing equilibrium is reached, the reaction is started and a fraction of the surrounding cyclic polymer is entrapped by the closure of the linear polymers, see left of Fig REF , which may lead to an OG, if a sufficiently large number of connections between the long rings are established.", "Figure: Progressive construction (left) and DNA-Origami(right).", "Lines represent polymers, dots of same color (color onlineall Figures) are reactive end-groups of same type and gray dots withblack boundary are reacted polymer end-groups.The “DNA-Origami” approach of Pickett [5] requires the fabrication of a large set of different selectively binding end-groups.", "Pickett proposed to isolate a large set of different DNA fragments and to attach the ends of one chain to the two strands of one of these fragments.", "At solvent conditions under which DNA denaturates, the small DNA fragments will open up and the polymers decorated with the single strands of these fragments on both ends will form a linear solution of polymers.", "If the solvent conditions are modified such that the opposite DNA fragments stick together, the polymers can form rings upon selectively binding with the corresponding DNA fragment.", "If the number of different fragments is sufficiently large, OG's are obtained because chains of same type of fragment will be below overlap concentration and thus, ring formation will dominate the growth of linear chains.", "In addition to these methods, one could also use topoisomerase to create OGs of nearly perfect structure from semi-dilute solutions of cyclic DNA.", "We created OGs in similar manner by allowing the polymer strands to interpenetrate freely in our computer simulations.", "The data of these simulations is used below as ideal reference systems due to the missing polydispersity of the cyclic molecules and the absence of linear chains.", "In the present paper, we test these different approaches concerning their applicability to produce OGs and concerning the quality of the network structure that can be obtained in this way both analytically and by computer simulations." 
], [ "Computer Simulations and Analysis", "We use the bond-fluctuation model (BFM) [6], [7] to simulate solutions of linear chains and rings.", "This method was chosen, since it is is known to reproduce conformational properties and dynamics of melts [8], [9] and semi-dilute solutions [10], [11] and polymer networks [12], [13].", "In this method, each monomer is represented by a cube occupying eight lattice sites on a cubic lattice.", "The bonds between monomers are restricted to a set of 108 bond vectors which ensure cut-avoidance of polymer strands by checking for excluded volume.", "Monomer motion is modeled by random jumps to one of the six nearest lattice positions.", "A move is accepted, if the bonds connecting to the new position are still among the set of 108 bond vectors and if no monomers overlap.", "All samples of the present study were created in simulation boxes with periodic boundary conditions.", "A-thermal solvent is treated implicitly by empty lattice sites.", "In our work, we discuss two different series of simulations in order to analyze ideal OG and the competition between growth of linear chains and ring formation (DNA-Origami).", "The results of these simulations serve as input to analyze the “progressive construction” method.", "The details of sample preparation of these simulation series are summarized at the beginning of the corresponding sections.", "For analysis of network connectivity, rings were first simplified at conserved topology in one additional simulation run by removing monomers $i$ from the chain contour, whenever monomers $i$ , $i+1$ and $i-1$ formed a tight triangle through which no bond of any other molecule could pass [14].", "Next, we determined a regular projection of any pair of overlapping rings with minimum number of intersections as also described in [14].", "Finally, we computed the Gauss code of the projection as input for the Skein-Template algorithm of Gouesbet et al.", "[15].", "The types of the knots and pairwise links formed were analyzed by using the resulting HOMFLY polynomials [16].", "Data of previous work [17] show that the effect of Brunnian links or similar structures that are not (entirely) detected by a pairwise linking analysis should be ignorable for the degrees of polymerization of our study.", "Therefore, we restricted our connectivity analysis to pairwise links only.", "The resulting connectivity matrix between pairs of rings is used to analyze the network structure, weight fraction of gels, the gel point position, and the amount of the elastically active rings." 
], [ "Ideal Olympic gels", "Interpenetrating solutions of mono-disperse cyclic polymers are considered to serve as an ideal reference system to understand the formation of OGs, since in these samples, concatenation is in equilibrium with the polymer conformations in solution at the particular polymer concentration.", "Our simulations of these ideal OGs cover an array of different degrees of polymerization $N$ = 16, 32, 64, 128, 192, 256, 384, 512, 768, and 1024 and polymer volume fractions of $\\phi $ = 0.5, 0.375, 0.25, 0.1875, 0.125, 0.0625, and 0.03125 as described previously [11].", "The numbers of rings per sample varied from 512 to 4096, resulting in a simulation box size between $128^{3}$ and $512^{3}$ lattice sites.", "Periodic boundary conditions were applied in all space directions.", "During equilibration, additional “diagonal” moves were allowed that do preserve connectivity of a ring but allow for a change in the topology of overlapping rings, since all entanglements are switched off.", "After equilibration, all “x-traps” (pairs of bonds that mutually block each other's motion - these arise here from switching off diagonal moves) [18] were removed while returning to the original set of moves and network connectivity is analyzed as described in the previous section.", "In our preceding work [11], we analyzed only the average number of concatenated rings per ring polymer, the number average “functionality” of the rings, $f_{n}$ , in a solution of mono-disperse interpenetrating rings in order to develop a model for the conformations of rings in melt or solution.", "It was found that $f_{n}\\approx \\gamma \\phi ^{\\nu /(3\\nu -1)}N\\left(1-P_{\\mbox{OO}}\\right),$ with a numerical constant $\\gamma \\approx 0.034\\pm 0.001$ and a cut-off $\\left(1-P_{\\mbox{OO}}\\right)$ for very small $\\phi ^{\\nu /(3\\nu -1)}N$ based upon the probability for non-concatenation $P_{\\mbox{OO}}\\approx \\exp \\left(-\\left(\\phi ^{\\nu /(3\\nu -1)}N-a\\right)/N_{\\mbox{OO}}\\right)$ .", "Here, $N_{\\mbox{OO}}$ is the cross-over degree of polymerization (as defined for extrapolating to $\\phi =1$ ) that distinguishes between the non-concatenated regime $N<N_{\\mbox{OO}}$ and the concatenated regime $N>N_{\\mbox{OO}}.$ Similar to knotting, a second parameter $a$ is used that can be understood as effective minimum degree of polymerization to make concatenation possible (with respect to a particular environment).", "However, for poly-disperse systems, gelation requires [19] a weight average functionality $f_{w}$ of two $f_{w}=2$ instead of $f_{n}=2$ .", "Therefore, we have to determine $f_{w}$ and the distribution of concatenations in order to estimate the position of the gel point.", "Figure: Distribution of the number of concatenationsff per ring (data points) in mono-disperse solutions of interpenetratingrings at φ=0.5\\phi =0.5.", "The lines are computed using equation ()and the average number of concatentations f n f_{n} as the only adjustableparameter.In a previous work [20], it was argued that the number of concatenations per ring is related to the area of the minimal surface bounded by a ring.", "A rather narrow distribution of the area of the minimal surfaces of rings of same degree of polymerization at the same preparation conditions can be assumed because of the dominance of the boundary region of the area (see ref.", "[20]).", "Let us assume that concatenation is a random process that occurs with equal probability per polymer strand that passes through this rather constant area of the 
minimum surface.", "Under these conditions, the distribution of concatenated states is expected [21] to be described by a Poisson distribution $P(f,f_{n})=\frac{f_{n}^{f}}{f!}\,\mathrm{e}^{-f_{n}}$ around the number average number of concatenations, $f_{n}$, per ring.", "Figure REF shows an excellent agreement between equation (REF ) and the data of the computer simulations independent of the degree of polymerization $N$ and the average number of concatenations $f_{n}$.", "The weight average functionality of the above Poisson distribution [22] is $f_{w}=f_{n}+1$ and fits well to the simulation data as shown in Figure REF .", "Note that the cut-off for non-concatenation, $P_{OO}$, is only a weak correction at the gel point (of order 10%) and can be safely ignored for well developed gels.", "Figure: Number average $f_{n}$ (hollow symbols) and weight average $f_{w}$ (filled symbols) number of concatenations of cyclic polymers in mono-disperse solutions of cyclic polymers.", "The continuous line is a linear increase, the dotted line is equation (REF ), and the dashed green line is equation (REF ).", "The above estimate $f_{w}=2$ for the position of the gel point is tested by computing the size of the largest cluster of concatenated rings in the samples.", "The result is shown in Figure REF .", "According to the simulation data of networks close to the gel point (see also Figure REF ), the gel point is located at a weight average functionality $f_{w,c}=2.12\pm 0.03$.", "Figure: The weight fractions of the largest connected cluster of concatenated rings (hollow symbols) and the elastically active material (full symbols) as function of the weight average functionality of the rings.", "The continuous line is the weight fraction of gel, equation (REF ), and the dashed line is the weight fraction of active material, equation (REF ).", "To estimate the weight fractions of active material and gel, we use the approach of Miller and Macosko [23], [24] and apply it to a Poisson distributed set of functionalities with given average $f_{n}$.", "Let $P_{out}$ denote the probability of finding a finite chain when “looking out” along one of the $f$ connections (concatenations) of a ring.", "The functionality of the connected ring is selected randomly according to the weight fraction of connections to rings with $f$ connections, $a_{f}=\frac{P(f,f_{n})f}{\sum _{f}P(f,f_{n})f}=\frac{f_{n}^{f-1}}{(f-1)!}\,\mbox{e}^{-f_{n}}=P(f-1,f_{n}).$", "Above we made use of $\sum _{f}P(f,f_{n})f=f_{n}$.", "Note that the situation we analyze here is equivalent to full conversion $p=1$ in Refs. [23], [24] and thus, looking “out” of or “into” a molecule from a given connection is equivalent: $P_{out}=P_{in}=\sum _{f}a_{f}P_{out}^{f-1}.$", "Thus, we need to solve numerically $\sum _{f}a_{f}P_{out}^{f-2}-1=0$ for $P_{out}$ as function of $f_{n}$, which is the only variable here.", "We seek a solution of this equation within the interval $[0,1[$, if it exists; otherwise $P_{out}=1$.", "The weight fractions of sol, $w_{sol}$, and gel, $w_{gel}$, are computed by inserting this particular solution for $P_{out}$ into the following equations: $w_{sol}(f_{n})=\sum _{f}P(f,f_{n})P_{out}^{f}$ $w_{gel}(f_{n})=1-w_{sol}$ The weight fraction of the active material, $w_{act}$, is the weight fraction of all rings in the gel with at least two independent connections to the gel: $w_{act}(f_{n})=w_{gel}(f_{n})-\sum _{f}P(f,f_{n})fP_{out}^{f-1}(1-P_{out}).$", "The above predictions are compared with simulation data in Figure REF .", "We observe good agreement with our mean
field estimate for both the weight fraction of gel and the weight fraction of active material.", "Note that in contrast to conventional gelation the functionality of the molecules is not determined by chemistry; instead it results from the interpenetration and concatenation of overlapping rings.", "The external parameters $N$ and $\phi$ enter the sol-gel transition directly via $f_{n}$ and not in corrections for intra-molecular reactions, as is typical for chemically linked gels.", "Corrections to gelation (e.g. a shift of the gel point) are also expected to be universal here, since such corrections can only arise from multiple links between overlapping rings, which depend on the overlap between the molecules in the same manner as concatenation.", "Note that the above analysis allows one to determine $f_{w}$ and thus $\gamma$ from experimental data in the region just above the gel point by measuring the weight fraction of solubles.", "Finally, we would like to point out that OGs with $f_{n}\gtrsim 8$ can be considered essentially as defect free model systems, since $w_{gel}>0.999$ and $w_{act}>0.99$.", "This ideal case of interpenetrating solutions is used in the following two sections as reference to analyze DNA-Origami and progressive construction." ], [ "DNA-Origami", "To analyze the competition between the growth of linear chains and ring closure, we equilibrated melts of linear chains with rather small degrees of polymerization $N=16$, 32, and 64 in order to achieve significant amounts of ring molecules upon randomly linking all chain ends.", "The total number of monomers in each sample was $2^{20}$.", "The linear chains were randomly assigned to be part of one of the $B=1$, 2, 4, 8, 16, 32, or 64 batches of chains.", "The polymer volume fraction was kept constant at $\phi =0.5$ and all samples were simulated on a lattice of $256^{3}$ lattice sites.", "A permanent bond is introduced whenever two previously unreacted chain ends of the same batch hit each other during the course of their motion at the smallest possible separation on the lattice.", "The extent of reaction at the end of the simulations was typically above 99% of the maximum possible extent of reaction.", "However, all samples were analyzed at a conversion of $p=0.9568$, which is the smallest maximum conversion that was achieved among all samples, in order to eliminate systematic effects caused by a variation of $p$.", "The number and weight fractions of linear chains and cyclic polymers of different size were analyzed at the end of the simulations.", "Let us assume that at the beginning of the reaction, all chains are mono-disperse with a degree of polymerization $N\gg 1$ such that ring formation is not restricted within individual chains.", "The overlap number $P\approx \phi R_{g}^{3}/N$ describes the number of chains in the pervaded volume of a polymer at the beginning of the reactions.", "Splitting the chains into $B$ batches of chains that react selectively within the same batch reduces the overlap number to $\approx \phi R_{g}^{3}/(BN)=P/B$.", "Pickett [5] expects that the weight fraction of rings among all polymer, $w_{O}$, can be described by a function of the form $w_{O}\approx \frac{1}{1+zP/B}\text{,}$ where $z$ is a numerical constant close to two, reflecting the fact that only one opposite chain end is in the vicinity for ring formation, while each overlapping linear chain contributes two reactive groups.", "Our simulation data in the “dilute” regime $B>P$ is well described by this prediction with $z\approx 1.60\pm 0.05$ as
adjustable parameter, see Figure REF .", "However, the available data at $P/B>1$ seem to follow a function of $P/B$ such that $w_{O}\approx y\left(P/B\right)^{\alpha }$ with $\alpha \approx -0.12\pm 0.03$ and $y\approx 0.36\pm 0.02$.", "We searched the literature for explanations of the regime $P/B>1$, but neither a numerical test [25] of the compact exploration of space of the reactive chain ends [26] (resulting in a much larger $\alpha$) nor an explicit computation of the self-dilution effect [27], [28] by using the mean field rate equations of the Appendix leads to convincing results (see Figure REF ).", "Figure: Weight fraction of cyclic polymers as function of the number of batches $B$ rescaled to overlap concentration.", "The “self-dilution” estimate was computed for $p=0.9568$ and fit to the data with $w_{O}>1/2$, while the effect of $p<1$ was ignored for equation (REF ).", "The dotted line is a power law approximation for the tail at large $P/B$ as discussed in the text.", "Figure: Typical number fraction distribution $n_{i}$ of linear chains and cyclic polymers made of $i$ precursor chains at $P/B<1$.", "Data taken at $p=0.9568$ of a sample with $N=64$ and $B=32$.", "Lines are computed using the differential equations described in the Appendix.", "In addition to the above trends for the total weight fraction of cyclic polymers, $w_{O}$, we observe a qualitative change in the number fraction distributions $n_{i}$ of linear chains and rings made of $i$ precursor chains for the two regimes $P/B>1$ and $P/B<1$, as shown in Figures REF and REF .", "The data below overlap, $P/B<1$, can be approximated by the set of differential equations given in the Appendix.", "The effect of self-dilution is visible here by the depletion of the shortest linear chains as compared to a most probable distribution (linear decay on a semi-log plot).", "The decay of the number fraction of rings at small $i$ can alternatively be approximated by a power law with an exponent near $-3/2$ with an exponential cut-off similar to the distribution of the linear chains.", "In contrast to this, in the regime $P/B>1$ both distributions apparently become two most probable distributions with different effective conversions $p$ in a first order approximation, see Figure REF .", "Here, the effective $p$ describing the distribution of the rings is always smaller than the one of the linear chains, which is close to equation (REF ) of the Appendix.", "Note that in both cases an efficient formation of concatenated rings is only possible, if the degree of polymerization of the precursor linear chains is already above the cut-off for non-concatenation.", "Figure: Typical number fraction distribution $n_{i}$ of linear chains and cyclic polymers at $P/B>1$.", "Data taken at $p=0.9568$ of a sample with $N=16$ and $B=1$.", "Lines are fits to functions $ap^{i-1}(1-p)$ with variables $a$ and $p$.", "Let us now discuss the gel point condition and network structure for DNA-Origami in both limits $P/B<1$ and $P/B>1$ based upon the available data.", "For simplicity, we assume that the precursor chains have a degree of polymerization above the cut-off degree of polymerization for concatenation, $N>N_{OO}$.", "This allows us to use the weight fraction of rings $w_{O}$ as correction for the average number of concatenations and to drop the non-concatenation correction in equation (REF ).", "In the limit $P/B<1$, the number fraction distribution of rings is dominated by the strong decay at small $i$ such that we can
keep in first approximation $f_{w}\approx f_{n}+1$, which simplifies the gel point condition to $\gamma \phi ^{\nu /(3\nu -1)}Nw_{O}\approx f_{n}\approx 1.$", "Under these conditions, all results of the previous section can be kept as a first order approximation, after the average number of concatenations is corrected by a factor $w_{O}$ as in the above equation.", "The number of batches $B$ necessary to pass through the gel point is here $B\approx \frac{zN^{1/2}}{\gamma \phi ^{\nu /(3\nu -1)}N-1}$ because of $P\approx N^{1/2}$.", "Since $P/B<1$, we have $B\gtrsim N^{1/2}$, which yields that the above criterion holds for $N\phi ^{\nu /(3\nu -1)}\lesssim (z+1)/\gamma$.", "All samples with larger $N\phi ^{\nu /(3\nu -1)}$ are gelled, if $B>P$.", "Note that the above two equations can be used to map the DNA-Origami in the regime $P/B<1$ in first approximation back onto the ideal OG case.", "For instance, considering our results for the ideal OGs we conclude that almost defect free model networks (after removal of linear chains) with a weight fraction of $w_{net}\approx w_{O}$ are obtained for $P/B<1$, if $f_{n}\gtrsim 8$.", "In the limit $P/B>1$, the number fraction distribution of rings is roughly given by a most probable weight distribution, see Figure REF .", "Because of the narrowly distributed functionalities for a given $N$, equation (REF ), the relation between number and weight average functionality is dominated by the broader distribution of the molecular weights.", "Thus, $f_{w}\approx 2f_{n}$, and the gel point condition is, by coincidence, the same as in equation (REF ).", "The weak dependence of $w_{O}$ on $P/B$ for $P/B>1$ shows that a variation of the number of batches for reaction is not very effective unless one reaches the opposite regime $P/B<1$.", "On the other hand, since the average functionality of the rings grows $\propto \gamma \phi ^{\nu /(3\nu -1)}N^{1+\alpha /2}$ for constant $B$, probably the most efficient strategy in this limit $P/B>1$ is simply to choose a precursor degree of polymerization $N\gg N_{OO}$ such that $f_{n}>1$ despite the low $w_{O}$.", "However, this yields low network weight fractions $w_{net}<w_{O}\ll 1$ and a high fraction of long linear chains, $1-w_{O}$, needs to be extracted.", "As a result, one might obtain a largely de-swollen Olympic network.", "We have to comment here that the observed power law for $w_{O}$ as function of $P/B$ need not extend to ratios of $P/B$ much larger than the ones obtained in our study - as it would be necessary to achieve gelation.", "The worst case scenario is here $w_{O}\propto (P/B)^{-1}$, which still leads at a constant $B$ and $\phi$ to a growing functionality of the rings, $f_{n}\propto N^{1/2}$, if $N$ remains below an $N^{*}$ above which any pair of overlapping rings is concatenated (see Ref. [11] for details).", "For $N>N^{*}$ on the other hand, $f_{n}$ becomes constant in this worst case scenario and increasing $N$ beyond $N^{*}$ will not cause the formation of an OG.", "However, for the worst case scenario $w_{O}\propto (P/B)^{-1}$, a variation of $B$ is very effective, since $f_{n}\propto B$ for both regimes $N<N^{*}$ and $N>N^{*}$."
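The two regimes discussed in this section can be condensed into a few lines of code. The sketch below (ours, using the fitted constants $z\approx 1.60$, $y\approx 0.36$, $\alpha \approx -0.12$ quoted above and, as a simplifying assumption, the melt exponent $\nu =1/2$) evaluates the weight fraction of rings and the estimate for the number of batches needed to reach the gel point.

# Sketch: ring weight fraction in DNA-Origami and batches needed for gelation.
def w_rings(P_over_B, z=1.60, y=0.36, alpha=-0.12):
    """Pickett-type prediction below overlap, empirical power law above."""
    if P_over_B <= 1.0:
        return 1.0 / (1.0 + z * P_over_B)
    return y * P_over_B**alpha

def batches_to_gel(N, phi, gamma=0.034, z=1.60, nu=0.5):
    """B ~ z*N**0.5/(gamma*phi**(nu/(3*nu - 1))*N - 1) in the regime P/B < 1;
    returns None if the denominator is not positive (no gel point reachable)."""
    denom = gamma * phi**(nu / (3.0 * nu - 1.0)) * N - 1.0
    return z * N**0.5 / denom if denom > 0 else None

print(w_rings(0.5), w_rings(10.0), batches_to_gel(256, 0.5))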
], [ "Progressive construction", "The results of the two previous sections can be applied to discuss the progressive construction methods for OGs.", "The original approach of Ref [4] is devoted to a mixture of long and short polymers that are both below the entanglement length in order to avoid complications by the compression of the long cyclic polymers.", "This requires that the concatenation degree of polymerization $N_{OO}$ is clearly below the entanglement degree of polymerization $N_{e}$ .", "In our previous work [11], [13], we rather find the opposite: $N_{OO}\\gtrsim N_{e}$ .", "Even ideal mono-disperse samples (see section ) do not form OGs for $N<N_{e}\\lesssim N_{OO}$ .", "Therefore, we develop an alternative approach for progressive construction at $N>N_{OO}$ using the same strategy as proposed in Ref.", "[4]: we use short linear strands to link long non-concatenated rings that were concentrated above overlap concentration.", "Let us use $L$ and $S$ to denote the degrees of polymerization of the long and short molecules respectively.", "Since the Kuhn molecular weight of the simulation model is close to one monomer, we drop all coefficients related to converting monomers to Kuhn monomers.", "Next, $w_{L}$ and $w_{S}$ denote the weight fractions of long and short rings among all polymer, while $w_{lin}$ is the weight fraction of linear chains that is obtained as by-product of the linking reactions.", "Thus, $w_{L}+w_{S}+w_{lin}=1$ .", "Finally, $\\phi $ is the weight fraction of polymers, $b$ the root mean square length of a segment and the monomeric volume $\\mbox{v}_{0}$ is $\\approx b^{3}$ .", "First, let us consider the case $w_{S}\\ll w_{L}\\approx 1$ so that conformational changes upon the addition of short polymers can be ignored in first approximation.", "Furthermore, we add short chains well below their overlap volume fraction such that $w_{S}>w_{lin}$ and the cyclic chains produced by the $S$ -mers consist predominantly of only one $S$ -mer.", "Since the $S$ -mers are dilute, we assume that $S$ -mers entrap in first approximation only $L$ -mers [29].", "Thus, $f_{n,S}\\approx \\gamma \\phi ^{\\nu /(3\\text{$\\nu $-1})}S(w_{L}+w_{S})\\approx \\gamma \\phi ^{\\nu /(3\\text{$\\nu $-1})}S.$ These concatenations are distributed among all $L$ -mers, for which we obtain a concatenation density that is reduced by a factor of $w_{S}/w_{L}$ .", "This lower concatenation density determines the average number of concatenations of the long chains $f_{n,L}\\approx \\frac{w_{S}}{w_{L}}\\gamma \\phi ^{\\nu /(3\\text{$\\nu $-1})}L.$ In both cases, we expect that the number of concatenations are Poisson distributed [30].", "For concatenations only between long and short rings we can map the problem to co-polymerizations of molecules with distributed functionalities that are discussed in Ref.", "[24].", "Then, the gel point condition can be written as $(f_{w,S}-1)(f_{w,L}-1)=f_{n,S}f_{n,L}\\approx \\frac{w_{S}}{w_{L}}\\frac{L}{S}f_{n,S}^{2}=1.$ Thus, $w_{L}\\gg w_{S}$ requires that $Lf_{n,S}^{2}/S\\gg 1$ , which is best obtained for $f_{n,S}\\gtrsim 1$ in order to avoid gigantically long polymers $L$ , since for very small $f_{n,S}$ the exponential cut-off for concatenation [11] would lead to a square exponential growth of $L$ .", "In contrast to this result, $f<1$ has been suggested in Ref.", "[4] for the small rings to create OGs.", "Another interesting point of equation (REF ) is that it allows to determine $\\gamma $ experimentally, if the polydispersity of the rings made by $S$ 
-mers is low, since then all parameters of this equation except $f_{n,S}^{2}$, and thus $\gamma$, are known.", "Similar to section , the final goal is to achieve OGs with a well developed network structure such that essentially all $L$-mers are active.", "Since the linking of the long chains should happen at a volume fraction of short chains below their overlap volume fraction (note that $S$-mers are larger than the blob size at $f_{n,S}>1$), we require $w_{S}\phi <\phi _{S}^{*}\approx \frac{b^{3}S}{\left(bS^{1/2}\phi ^{-(\nu -1/2)/(3\nu -1)}\right)^{3}},$ which gives $w_{S}\lesssim S^{-1/2}\phi ^{-(6\nu -5/2)/(3\nu -1)}$ as upper limit for $w_{S}$.", "Let us use this upper limit as bound for the maximum weight fraction to be added to obtain a “well developed” network.", "Using the results for the ideal OGs of section as reference, let us adopt the criteria $f_{n,L}f_{n,S}\ge 8$ and $f_{n,S}\ge 1$ to ensure that essentially all large rings are incorporated into the gel.", "Thus, we require $\frac{w_{S}}{w_{L}}\frac{L}{S}f_{n,S}^{2}\ge 8$ at the end of the reactions.", "A weight fraction $w_{S}$ that fulfills equations (REF ) and (REF ) can be found, if $L\ge 8w_{L}S^{-1/2}\gamma ^{-2}\phi ^{-(5/2-4\nu )/(3\nu -1)},$ which requires an enormous $L\gtrsim 900$ for our simulations to start with, because here $\gamma ^{-1}\approx 30$ and $S\ge 50$ for $f_{n,S}\ge 1$ (at $\phi =0.5$).", "This result shows that the construction of well developed OGs is possible within a single concatenation step, given that sufficiently long chains $L$ are available to be linked.", "More details about the network structure can be obtained numerically using the approach of Miller and Macosko [23], [24] applied to co-polymerization, similar to our discussion in section .", "One interesting point here is that the weight fraction of sol could be used for an alternative determination of $f_{n,S}$ and thus $\gamma$, if $f_{n,S}$ is sufficiently large such that the exponential cut-off for $f_{n,S}$ can be ignored.", "This is because $f_{n,S}$ is Poisson distributed and a fraction of $e^{-f_{n,S}}$ of the short chains will not entrap any polymer.", "Thus, well beyond the gel point, $f_{n,S}f_{n,L}\gg 1$, the weight fraction of sol will be dominated by non-concatenated short chains and linear strands, $w_{sol}\approx w_{lin}+w_{S}e^{-f_{n,S}}$, which may be used to estimate $f_{n,S}$, if $w_{lin}$ is sufficiently small or linear chains can be separated from cyclic polymers.", "A combination of progressive construction and DNA-Origami would allow one to increase the upper bound for the weight fraction of the short polymers, $w_{S}$, in equation (REF ) by a factor equal to the number of batches, $B$, such that the required chain length $L$ is reduced to $L/B$.", "However, too large a $B$ will break the assumption [31] $w_{S}\ll w_{L}$, which for $B=1$ is essentially always satisfied because of the low value of $\gamma$.", "If a large $B$ is available, it could instead be used to minimize $w_{lin}$ in order to remove effects of poly-disperse rings made of $S$-mers and to reduce the weight fraction of sol.", "On the other hand, the combination of DNA-Origami and progressive construction can be used to reduce the weight fraction of sol for a given possible maximum number of batches $B$ as compared to pure DNA-Origami.", "As a trade-off one obtains a sample that consists of partially compressed long chains concatenated with short chains that are at equilibrium at cross-linking
conditions, which is the general complication of creating OGs by progressive construction.", "Indeed, for progressively constructed gels one has to expect an even more unusual swelling behavior than found recently for ideal OGs [3], since beyond the dis-interpenetration of rings, the long rings gain extra conformations by reducing their compression." ], [ "Summary", "In the present paper we have discussed analytically and numerically the formation and structure of Olympic gels (OGs) and tested our predictions by simulation data.", "We focused on three different model cases: ideal OGs, DNA-Origami, and progressive construction.", "For ideal OGs we demonstrate that the distribution of the number of concatenations is well approximated by a Poisson distribution.", "This Poisson distribution can be used to predict numerically structural features of the gels, such as the gel point, the weight fraction of gel, or the weight fraction of elastically active rings.", "These predictions are well supported by simulation data.", "While the construction of ideal OGs was not considered explicitly in previous work, we found for the other two cases clear differences to previous works [4], [5].", "Below the overlap concentration of the linear chains used for DNA-Origami, the model of Pickett [5] is well suited to describe the weight fraction of rings.", "Above overlap, we find a much weaker decay of the weight fraction of rings as function of the overlap number $P$ and the number of batches $B$ with selectively binding ends that is $\propto \left(P/B\right)^{\alpha }$ with an $\alpha \approx -0.12\pm 0.03$.", "A more detailed mean field analysis taking into account the competition between ring formation and growth of linear chains does not lead to an improved description of the simulation data.", "However, following the results of the mean field model one can conclude that by far the most efficient way to achieve a large weight fraction of rings is to choose a low overlap $P/B$ of the linear chains at the onset of the reaction.", "Since the weight fraction of rings is dominated by the smallest rings, possible steric cut-offs for ring formation and concatenation seriously affect the formation of rings that can concatenate.", "Progressive construction can indeed be used to obtain OGs.", "However, the conditions necessary for gelation are rather opposite to the ones proposed originally in Ref. [4]: instead of using very short strands with a number average number of concatenations $f_{n}<1$, we suggest using $f_{n}>1$, since otherwise the required minimum degree of polymerization of the long rings, $L$, grows square-exponentially upon further reducing $f_{n}$.", "The Poisson distribution for the number of concatenations again allows one to derive a rather simple condition for gelation.", "However, the low numerical constant $\gamma$ for concatenation still leads to a very large $L$ to obtain well developed OGs, if the weight fraction of the short polymers is kept low in order to not disturb the conformations of the long chains.", "In general, DNA-Origami can be combined with progressive construction.", "This allows one either to use a smaller number of batches as compared to pure DNA-Origami, or a smaller $L$ in progressive construction, which may simplify the construction of OGs."
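To make the numerical prediction mentioned in the summary explicit: for Poisson-distributed functionalities, the Miller-Macosko recursion $P_{out}=\sum _{f}a_{f}P_{out}^{f-1}$ collapses to the fixed point $P_{out}=\exp (-f_{n}(1-P_{out}))$, and the sol fraction then equals $P_{out}$ itself. The following short sketch (our illustration, not code from the study) computes $w_{gel}$ and $w_{act}$ as a function of $f_{n}$.

import math

# Sketch: gel and active weight fractions for Poisson-distributed
# concatenations (Miller-Macosko approach at full conversion p = 1).
def gel_structure(fn, iters=200):
    x = 0.0                              # iterate towards the smallest root in [0,1]
    for _ in range(iters):
        x = math.exp(-fn * (1.0 - x))    # P_out = exp(-fn*(1 - P_out))
    p_out = x                            # convergence is slow right at fn = 1
    w_sol = p_out                        # sum_f P(f,fn)*p_out**f = p_out here
    w_gel = 1.0 - w_sol
    w_act = w_gel - fn * p_out * (1.0 - p_out)
    return w_gel, w_act

# A gel appears for f_n > 1 (i.e., f_w = f_n + 1 > 2); at f_n ~ 8 the
# network is essentially defect free, as quoted in the text:
for fn in (0.5, 1.0, 1.5, 8.0):
    print(fn, gel_structure(fn))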
], [ "Acknowledgement", "The authors acknowledge a generous grant of computing time at the ZIH Dresden for the project BiBPoDiA.", "Financial support was given by the DFG grants LA 2735/2-1 and SO 277/7-1.", "Appendix: Mean field approximation of ring-chain competition The competition between the growth of linear chains and ring formation can be analyzed numerically in a mean-field framework by an adequate set of differential equations.", "We consider first the linear condensation of short $N$ -mer chains (monomers “A”) without ring formation.", "Let $n_{i}(p)$ denote the number fraction of $iN$ -mers at conversion $p$ .", "At the absence of ring formation, one obtains for irreversible linear condensation reactions of monomers A $(\\mbox{A})_{i}+(\\mbox{A})_{j}\\rightarrow (\\mbox{A})_{i+j}$ a most probable weight distribution with the well known polymer number fraction distribution $n_{i}(p)=p^{i-1}(1-p)$ and an average degree of polymerization that is here $N_{\\mbox{n}}=N\\frac{1}{1-p}.$ A numerical solution of this problem can be computed using the following set of differential equations: The number fraction $di^{-}(p)$ of $i$ -mers that react within a integration interval $\\mbox{d}p$ is $\\mbox{d}i^{-}(p)=n_{i}(p)\\mbox{d}p,$ while the fraction $di^{+}(p)$ of $i$ -mers formed during the integration interval is given by $\\mbox{d}i^{+}(p)=\\sum _{k=1}^{i-1}n_{k}(p)n_{i-k}(p)\\mbox{d}p.$ Since the distribution $n_{i}(p)$ is normalized to one, the change in conversion equals $\\mbox{d}p$ .", "With initial conditions $n_{1}(0)=1$ and $n_{i}=0$ for $i>1$ we obtain numerically equation (REF ), by computing $n_{i}(p+\\mbox{d}p)=n_{i}(p)+\\mbox{d}i^{+}(p)-\\mbox{d}i^{-}(p)$ in infinitesimal integration intervals $\\mbox{d}p$ .", "The analytical solution, equation (REF ), can be used to check the accuracy of the numerical solution.", "Ring formation is implemented using the same assumptions (Gaussian statistics for all $iN$ -mers, mean field approximation) as in section .", "The average concentration of the reactive groups $c(p)$ is proportional to the initial concentration $c_{0}$ of reactive groups and decays with conversion $c(p)=c_{0}(1-p).$ Gaussian statistics for all precursor chains and combined linear chains of $i$ sections with $N$ monomers leads to concentrations $c_{i}\\approx c_{1}i^{-3/2}$ of the first end of an $iN$ -mer near its second end.", "Here $c_{1}\\approx 1/(2P)$ is the corresponding concentration for one chain of $N$ monomers.", "Assuming equal reactivity, all reaction rates are proportional to the concentrations of the reactive species only.", "Thus, we can equate for the rate to form a ring polymer of $iN$ monomers, $dC_{i}^{+}$ , that $\\frac{\\mbox{d}r_{i}^{+}(p)}{\\mbox{d}i^{-}(p)}=\\frac{c_{i}}{c_{0}(1-p)}.$ These additional reactions that convert linear chains of $iN$ monomers into cycles disturb the most probable distribution of the linear species.", "Nevertheless, the total number fraction of all species is still normalized, $\\sum _{i}n_{i}(p)+\\sum _{i}r_{i}(p)=1,$ whereby only the number fraction of linear chains, $n_{lin}(p)=\\sum _{i}n_{i}(p),$ is available for further reactions.", "Note that in the above equations $d_{i}^{+}(p)$ is $\\propto n_{lin}^{2}(p),$ while $d_{i}^{-}(p)$ and $dr_{i}^{+}(p)$ are proportional to $n_{lin}(p)$ .", "Therefore, we have to modify equation (REF ) to $\\mbox{d}i^{+}(p)=\\sum _{k=1}^{i-1}n_{k}(p)n_{i-k}(p)\\mbox{d}p/n_{lin}(p),$ if there is ring formation in order to maintain normalization of the distributions.", "The 
], [ "Mean field approximation of ring-chain competition", "The competition between the growth of linear chains and ring formation can be analyzed numerically in a mean-field framework by an adequate set of differential equations.", "We consider first the linear condensation of short $N$-mer chains (monomers “A”) without ring formation.", "Let $n_{i}(p)$ denote the number fraction of $iN$-mers at conversion $p$.", "In the absence of ring formation, one obtains for irreversible linear condensation reactions of monomers A, $(\mbox{A})_{i}+(\mbox{A})_{j}\rightarrow (\mbox{A})_{i+j}$, a most probable weight distribution with the well known polymer number fraction distribution $n_{i}(p)=p^{i-1}(1-p)$ and an average degree of polymerization that is here $N_{\mbox{n}}=N\frac{1}{1-p}.$", "A numerical solution of this problem can be computed using the following set of differential equations: The number fraction $\mbox{d}i^{-}(p)$ of $i$-mers that react within an integration interval $\mbox{d}p$ is $\mbox{d}i^{-}(p)=n_{i}(p)\mbox{d}p,$ while the fraction $\mbox{d}i^{+}(p)$ of $i$-mers formed during the integration interval is given by $\mbox{d}i^{+}(p)=\sum _{k=1}^{i-1}n_{k}(p)n_{i-k}(p)\mbox{d}p.$", "Since the distribution $n_{i}(p)$ is normalized to one, the change in conversion equals $\mbox{d}p$.", "With initial conditions $n_{1}(0)=1$ and $n_{i}=0$ for $i>1$ we obtain equation (REF ) numerically by computing $n_{i}(p+\mbox{d}p)=n_{i}(p)+\mbox{d}i^{+}(p)-\mbox{d}i^{-}(p)$ in infinitesimal integration intervals $\mbox{d}p$.", "The analytical solution, equation (REF ), can be used to check the accuracy of the numerical solution.", "Ring formation is implemented using the same assumptions (Gaussian statistics for all $iN$-mers, mean field approximation) as in section .", "The average concentration of the reactive groups $c(p)$ is proportional to the initial concentration $c_{0}$ of reactive groups and decays with conversion, $c(p)=c_{0}(1-p).$", "Gaussian statistics for all precursor chains and combined linear chains of $i$ sections with $N$ monomers leads to concentrations $c_{i}\approx c_{1}i^{-3/2}$ of the first end of an $iN$-mer near its second end.", "Here $c_{1}\approx 1/(2P)$ is the corresponding concentration for one chain of $N$ monomers.", "Assuming equal reactivity, all reaction rates are proportional to the concentrations of the reactive species only.", "Thus, we can equate for the rate $\mbox{d}r_{i}^{+}(p)$ to form a ring polymer of $iN$ monomers that $\frac{\mbox{d}r_{i}^{+}(p)}{\mbox{d}i^{-}(p)}=\frac{c_{i}}{c_{0}(1-p)}.$", "These additional reactions that
convert linear chains of $iN$ monomers into cycles disturb the most probable distribution of the linear species.", "Nevertheless, the total number fraction of all species is still normalized, $\sum _{i}n_{i}(p)+\sum _{i}r_{i}(p)=1,$ whereby only the number fraction of linear chains, $n_{lin}(p)=\sum _{i}n_{i}(p),$ is available for further reactions.", "Note that in the above equations $\mbox{d}i^{+}(p)$ is $\propto n_{lin}^{2}(p)$, while $\mbox{d}i^{-}(p)$ and $\mbox{d}r_{i}^{+}(p)$ are proportional to $n_{lin}(p)$.", "Therefore, we have to modify equation (REF ) to $\mbox{d}i^{+}(p)=\sum _{k=1}^{i-1}n_{k}(p)n_{i-k}(p)\mbox{d}p/n_{lin}(p),$ if there is ring formation, in order to maintain the normalization of the distributions.", "The number fractions of rings and linear chains are obtained by computing $r_{i}(p+\mbox{d}p^{\prime })=r_{i}(p)+\mbox{d}r_{i}^{+}(p)$ and $n_{i}(p+\mbox{d}p^{\prime })=n_{i}(p)+\mbox{d}i^{+}(p)-\mbox{d}i^{-}(p)-\mbox{d}r_{i}^{+}(p).$", "Note that the effective infinitesimal conversion $\mbox{d}p^{\prime }$ is affected by the ring forming reactions, leading to $\mbox{d}p^{\prime }=\sum _{i}\left(\mbox{d}i^{-}(p)+\mbox{d}r_{i}^{+}(p)\right).$", "Ring formation also modifies the conversion of the linear species, since $p\equiv 1$ for all rings.", "Let $w_{O}(p)$ denote the weight fraction of rings at conversion $p$.", "The conversion among the linear chains, $p_{lin}$, is thus given by $p_{lin}=\frac{p-w_{O}(p)}{1-w_{O}(p)}.$", "This relation can be used to compute $p$ directly from the average degree of polymerization of the linear chains, $N_{\mbox{n}}$, by inserting $p_{lin}$ instead of $p$ into equation (REF ) for the linear species." ] ]
2105.11819
[ [ "Quasi-strongly regular graphs of grade three with diameter two" ], [ "Abstract A quasi-strongly regular graph of grade $p$ with parameters $(n, k, a; c_1, \\ldots, c_p)$ is a $k$-regular graph of order $n$ such that any two adjacent vertices share $a$ common neighbours and any two non-adjacent vertices share $c_{i}$ common neighbours for some $1 \\leq i \\leq p$.", "This is a generalization of a strongly regular graph.", "In this paper, we focus on strictly quasi-strongly regular graphs of grade $3$ with $c_i = k - i$ for $i = 1, 2, 3$.", "The main result is to show the sharp bounds of order $n$ for a given $k \\geq 4$.", "Furthermore, by this result, we characterize all of these graphs whose $n$ satisfies upper or lower bounds." ], [ "Introduction", "Let $G = (V(G), E(G))$ be a graph where $V(G)$ is the set of all vertices and $E(G)$ is the set of all edges.", "The order of $G$ is $|V(G)|$ .", "For a vertex $u \\in V(G)$ , a neighbour of $u$ in $G$ is a vertex $v \\in V(G) \\setminus \\lbrace u\\rbrace $ that is adjacent to $u$ .", "The neighbour set of $u$ in $G$ is the set of all neighbours of $u$ in $G$ and is denoted by $N_{G}(u)$ .", "The closed neighbour set of $u$ in $G$ is $N_{G}[u] = N_{G}(u) \\cup \\lbrace u\\rbrace $ .", "The degree of $u$ in $G$ , denoted $deg_{G}(u)$ , is the number of neighbours of $u$ in $G$ , i.e., $deg_{G}(u) = |N_{G}(u)|$ .", "In particular, for a subset $X$ of $V(G)$ , a neighbour of $u$ in $X$ is a vertex $v$ in $X \\setminus \\lbrace u\\rbrace $ that is adjacent to $u$ .", "Hence, the neighbour set of $u$ in $X$ is the set of vertices in $X$ that are adjacent to $u$ and is denoted by $N_{X}(u)$ , and the degree of $u$ in $X$ is $deg_{X}(u) = |N_{X}(u)|$ .", "Clearly, $N_{X}(u) = N_{G}(u) \\cap X$ .", "Similarly, for a subgraph $H$ of $G$ , we denote $N_{V(H)}(u)$ and $deg_{V(H)}(u)$ by $N_{H}(u)$ and $deg_{H}(u)$ , respectively.", "Further, for subsets $X$ and $Y$ of $V(G)$ , we let $N_{Y}(X) = \\lbrace v \\in Y : vu \\in E(G)$ for some $u \\in X\\rbrace $ .", "In particular, if $x \\in X$ , the private neighbour set of $x$ in $Y$ with respect to $X$ is denoted by $PN_{Y}(x, X)$ which is the set $\\lbrace v \\in Y : N_{X}(v) = \\lbrace x\\rbrace \\rbrace $ .", "The subgraph of $G$ induced by $X$ is denoted by $G[X]$ .", "A subset $X$ of $V(G)$ is independent if $G[X]$ has no edge.", "The independence number of $G$ is the maximum cardinality of an independent set of $G$ and is denoted by $\\alpha (G)$ .", "When no ambiguity can occur, we denote $N_{G}(u), N_{G}[u]$ and $deg_{G}(u)$ by $N(u), N[u]$ and $deg(u)$ , respectively.", "For vertices $u, v \\in V(G)$ , the distance between $u$ and $v$ is the length of a shortest path in $G$ starting from $u$ to $v$ .", "The diameter of $G$ is the maximum distance of any pair of vertices of $G$ .", "A graph $G$ is called $k$-regular if every vertex of $G$ has degree $k$ .", "A $k$ -regular graph $G$ of order $n$ is said to be $(n, k, \\mu , \\lambda )$-strongly regular if any two adjacent vertices share $\\mu $ common neighbours and any two non-adjacent vertices share $\\lambda $ common neighbours, while an $(n, k, a, b)$-Deza graph is a $k$ -regular graph of order $n$ which any two vertices share either $a$ or $b$ common neighbours where the number of common neighbours between any two vertices does not depend on their adjacency.", "Furthermore, an $(n, k, a; c_{1}, ..., c_{p})$-quasi-strongly regular graph, or a $QSR(n, k, a; c_{1}, ..., c_{p})$ graph, is a $k$ -regular graph of order $n$ such that any two 
adjacent vertices share $a$ common neighbours and any two non-adjacent vertices share $c_{i}$ common neighbours for some $1 \le i \le p$.", "The grade of this graph is the number of indices $1 \le i \le p$ for which there exist two non-adjacent vertices with $c_i$ common neighbours.", "Moreover, if the graph is of grade $p$, then it is said to be proper.", "A proper $QSR(n, k, a; c_{1}, ..., c_{p})$ is said to be strictly quasi-strongly regular, and denoted by $SQSR(n, k, a; c_{1}, ..., c_{p})$, if all $a, c_{1}, ..., c_{p}$ are distinct.", "The concept of strongly regular graphs was introduced by Bose [1] in his classical paper in order to show the connection between these graphs and the concept of partial geometry, which plays an important role in coding theory, in particular for low density parity check (LDPC) codes.", "For example, see [6], [7], [14], [17], [18], [21].", "Also, in [1], Bose pointed out that the concept of strongly regular graphs is, up to isomorphism, the same as the condition list called association schemes of partially balanced incomplete block designs (PBIB) in [2], which have been extensively studied in the area of combinatorial design.", "Research on strongly regular graphs has not only been enriched by a number of applications such as those mentioned above, but it has also been connected to many other theoretical concepts in mathematics.", "One very interesting connection was pointed out by Rotman [20]: a central simple Lie algebra over $\mathbb {Z}/ 2\mathbb {Z}$ can be constructed from a strongly regular graph.", "An example of a generalization of strongly regular graphs is the concept of Deza graphs, which was initiated by Erickson et al. [8].", "In fact, the authors were inspired by Deza and Deza [5], who found a connection to a Deza graph in their work concerning the metric polytope.", "For more examples of studies of Deza graphs see [12], [15], [16].", "Another generalization of strongly regular graphs is the concept of quasi-strongly regular graphs, which was introduced by Golightly et al. [10], [11] and was popularized by Goldberg [9].", "Interestingly, in [9], the author refined an observation in [19] to show a connection between quasi-strongly regular graphs and distance regular graphs.", "For more details about the latter graphs see Brouwer et al. [3] and Cameron [4].", "In the study of strictly quasi-strongly regular graphs of grade 2, Goldberg [9] established the following two equations: $t_{1} + t_{2} = n - k - 1$ and $c_{1}t_{1} + c_{2}t_{2} = k(k - a - 1)$, where $t_{i} = t_i(u)$ is the number of vertices in $V(G) \setminus N[u]$ that share $c_{i}$ common neighbours with a fixed vertex $u \in V(G)$.", "It is worth noting that these equations are obtained by choosing an arbitrary vertex $u$, counting the number of vertices in $V(G) \setminus N[u]$, and counting the number of edges between $N(u)$ and $V(G) \setminus N[u]$.", "As there are only two variables $t_{1}$ and $t_{2}$ and two equations with distinct coefficients $c_{1} \ne c_{2}$, the values $t_{1}$ and $t_{2}$ do not depend on the choice of $u$.", "However, this is not always true for graphs of grade $p \ge 3$.", "In the same paper, it was pointed out that there exists an example of this type of graph of grade 3 with parameters $(12, 4, 0; 3, 2, 1)$ for which the values of $t_{1}(u), t_{2}(u), t_{3}(u)$ vary depending on the vertex $u$.", "So, new machinery may be required to study the graphs of higher grade.",
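For grade 2, the rigidity noted above amounts to inverting a 2x2 linear system with nonzero determinant; a small sketch (ours, for illustration) makes this explicit before we turn to grade 3.

# Sketch: grade 2 counts follow uniquely from the parameters, since the
# system t1 + t2 = n - k - 1, c1*t1 + c2*t2 = k*(k - a - 1) has
# determinant c2 - c1 != 0 for a strictly quasi-strongly regular graph.
def grade2_counts(n, k, a, c1, c2):
    rhs1 = n - k - 1
    rhs2 = k * (k - a - 1)
    t2 = (rhs2 - c1 * rhs1) / (c2 - c1)
    return rhs1 - t2, t2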
"These motivate us to explore strictly quasi-strongly regular graphs of grade 3.", "In this work, we focus on the graphs $SQSR(n, k, 0; k - 1, k - 2, k - 3)$ for $k \\ge 4$ .", "Of course, these graphs are a generalization of the above example and they have diameter 2.", "The main result here is to find the sharp bounds of $n$ for a given $k \\ge 4$ .", "Moreover, we can characterize these graphs whose $n$ is equal to the upper or lower bound.", "Some proper quasi-strongly regular graphs with particular parameters have been characterized in [13], [22].", "According to our knowledge, these graphs of grade higher than 2 with any parameters have not been characterized yet.", "It can be shown in this paper that there is only one graph of $SQSR(11, 4, 0; 3, 2, 1)$ and only one graph of $SQSR(12, 4, 0; 3, 2, 1)$ , up to isomorphism." ], [ "Main results", "In this section, we illustrate the main theorem where the proof is provided in Section .", "The result is to establish upper and lower bounds of $n$ for $SQSR(n, k, 0; k - 1, k - 2, k - 3)$ graphs when $k \\ge 4$ .", "Further, we characterize all such graphs achieving the upper and lower bound for $n$ .", "Let us introduce quasi-strongly regular graphs $G_1$ of order 11 and $G_2$ of order 12 in the figures below.", "Figure: The graph G 1 G_{1}.Figure: The graph G 2 G_{2}.Theorem 1 (Main Theorem) Let $G$ be an $SQSR(n, k, 0; k - 1, k - 2, k - 3)$ graph when $k \\ge 4$ .", "Then $2k + 3 \\le n \\le k^{2} - 4.$ Furthermore, $G$ achieves the upper bound if and only if $G$ is isomorphic to $G_{2}$ and $G$ achieves the lower bound if and only if $G$ is isomorphic to $G_{1}$ .", "Note that the graphs $G_{1}$ and $G_{2}$ are $SQSR(11, 4, 0; 3, 2, 1)$ and $SQSR(12, 4, 0; 3, 2, 1)$ , respectively.", "Thus we completely characterize all $SQSR(n, k, 0; k - 1, k - 2, k - 3)$ graphs when $k = 4$ as detailed in the following corollary.", "Corollary 1 Let $G$ be an $SQSR(n, k, 0; k - 1, k - 2, k - 3)$ graph.", "If $k = 4$ , then $G$ is either $G_{1}$ or $G_{2}$ .", "We further have the following corollary.", "Corollary 2 Let $G$ be an $SQSR(n, k, 0; k - 1, k - 2, k - 3)$ graph.", "If $k \\ge 5$ , then $2k + 4 \\le n \\le k^{2} - 5$ ." 
], [ "Proof of the main theorem", "Let $u \\in V(G)$ , $G^{\\prime } = G - N[u], N_{G}(u) = U = \\lbrace u_{1}, ..., u_{k}\\rbrace $ and $n^{\\prime } = |V(G^{\\prime })|$ .", "Because $c_{3} = k - 3 > 0$ , $V(G^{\\prime })$ can be partitioned into $T_{1}, T_{2}, T_{3}$ where $T_{i}$ is the set of vertices sharing $c_{i}$ neighbours with $u$ for all $i \\in \\lbrace 1, 2, 3\\rbrace $ .", "We also let $t_{i} = |T_{i}|$ .", "Throughout the proof, all $t_{1}, t_{2}, t_{3}$ depend on the vertex $u$ only.", "We first state the following equations in the first proposition.", "These were also presented in the proofs of the case of strictly strongly regular graphs of grade 2 [9].", "For the completeness, we also give the detail of the proof here.", "Proposition 1 Under the above setting, we have $t_{1} + t_{2} + t_{3} = n - k - 1$ and $c_{1}t_{1} + c_{2}t_{2} + c_{3}t_{3} = k(k - 1).$ Clearly, $n - k - 1 = n^{\\prime } = |V(G^{\\prime })| = |T_{1}| + |T_{2}| + |T_{3}| = t_{1} + t_{2} + t_{3}$ because $V(G^{\\prime })$ is partitioned by $T_{1}, T_{2}, T_{3}$ .", "This proves Equation (REF ).", "To prove Equation (REF ), we count the number of edges between $U$ and $V(G^{\\prime })$ .", "Since $a = 0$ , the set $U$ is independent.", "Hence, every vertex in $U$ is adjacent to $k - 1$ vertices in $V(G^{\\prime })$ .", "This implies that there are $k(k - 1)$ edges from $U$ to $V(G^{\\prime })$ .", "On the other hand.", "Each vertex in $T_{i}$ share $c_{i}$ neighbours with $u$ .", "That is, $deg_{U}(v) = c_{i}$ for all $v \\in T_{i}$ and $i \\in \\lbrace 1, 2, 3\\rbrace $ .", "Thus, there are $c_{1}t_{1} + c_{2}t_{2} + c_{3}t_{3}$ edges from $V(G^{\\prime })$ to $U$ .", "By double counting, we have $c_{1}t_{1} + c_{2}t_{2} + c_{3}t_{3} = k(k - 1)$ and this proves Equation (REF ).", "In what follows, we separate the proof into two parts.", "The arguments of upper bound and the characterization of graphs satisfying the bound are given in Subsection REF .", "While those for lower bound part are given in Subsection REF ." 
], [ "The Upper Bound", "Recall that $G^{\\prime } = G - N[u], N_{G}(u) = U = \\lbrace u_{1}, ..., u_{k}\\rbrace $ and $n^{\\prime } = |V(G^{\\prime })|$ .", "We, further, let $U_{i} = N_{G^{\\prime }}(u_{i})$ .", "Because $a = 0$ , $U$ is an independent set of $G$ .", "Hence, every vertex in $U$ is adjacent to $k - 1$ vertices in $G^{\\prime }$ .", "Therefore $n^{\\prime } \\le k(k - 1)$ because $|U| = k$ and $c_{3} > 0$ .", "That is $n = 1 + k + n^{\\prime } \\le k^{2} + 1$ .", "We establish the following Lemmas.", "Lemma 1 If $k^{2} - 4 \\le n \\le k^{2} + 1$ , then $k = 4$ .", "Assume that $k^{2} - 4 \\le n \\le k^{2} + 1$ .", "Thus, $k^{2} - k - 5 \\le n^{\\prime } \\le k^{2} - k$ .", "We suppose to the contrary that $k \\ge 5$ .", "We may let $S \\subseteq V(G^{\\prime })$ be the set such that every vertex in $S$ is adjacent to at least two vertices in $U$ and $T \\subseteq V(G^{\\prime })$ be the set such that every vertex in $T$ is adjacent to exactly one vertex in $U$ , further, we let $U^{\\prime }_{i}$ be the set of private neighbours of $u_{i}$ in $V(G^{\\prime })$ with respect to $U$ for $i \\in \\lbrace 1, ..., k\\rbrace $ .", "Clearly, $U^{\\prime }_{i} \\subset U_{i}$ .", "Also, it is worth noting that every vertex in $T$ is in exactly one set $U^{\\prime }_{i}$ .", "Because $c_{3} > 0$ , it follows that $U^{\\prime }_{1}, ..., U^{\\prime }_{k}, S$ partition $V(G^{\\prime })$ and $\\cup ^{k}_{i = 1}U^{\\prime }_{i} = T$ .", "Thus, $|T| + |S| = n^{\\prime }$ and $|T| = \\sum ^{k}_{i = 1}|U^{\\prime }_{i}|$ which imply that $|T| = \\sum ^{k}_{i = 1}|U^{\\prime }_{i}| = n^{\\prime } - |S|.$ For any $l \\in \\lbrace 0, 1, ..., 5\\rbrace $ , we let $n^{\\prime } = k^{2} - k - l$ .", "When $l = 5$ , we have that $n = 1 + k + k^{2} - k - l = k^{2} - 4$ which is odd if $k = 5$ .", "Thus, $k \\ge 6$ and this implies that $k - 1 \\ge l$ .", "When $l \\le 4$ , by the assumption that $k \\ge 5$ , we have $k - 1 \\ge 4 \\ge l$ .", "In both cases, $l \\le k - 1.$ We next show that the upper bound of $|S|$ is $l$ .", "It can be observed that, for each $u_{i} \\in U$ , the vertex $u_{i}$ is adjacent to $k - 1 - |U^{\\prime }_{i}|$ vertices in $S$ .", "Thus, $deg_{S}(u_{i}) = k - 1 - |U^{\\prime }_{i}|$ .", "By Equation (REF ), $\\sum ^{k}_{i = 1}deg_{S}(u_{i}) &= k(k - 1) - \\sum ^{k}_{i = 1}|U^{\\prime }_{i}|\\\\&= k(k - 1) - (n^{\\prime } - |S|) \\\\&= k(k - 1) - (k^{2} - k - l - |S|) = l + |S|.$ Let $S = \\lbrace s_{1}, ..., s_{|S|}\\rbrace $ .", "By the definition of $S$ , $s_{i}$ is adjacent to at least two vertices in $U$ .", "Thus $deg_{U}(s_{i}) \\ge 2$ and this implies that $\\sum ^{|S|}_{i = 1}deg_{U}(s_{i}) \\ge 2|S|.$ Hence, by Equation (REF ) and double counting, we have $2|S| \\le \\sum ^{|S|}_{i = 1}deg_{U}(s_{i}) = \\sum ^{k}_{i = 1}deg_{S}(u_{i}) = l + |S|$ which implies that $|S| \\le l.$ Since every vertex $s$ in $S$ is adjacent to at least two vertices in $U$ , $s$ is adjacent to at most $k - 2$ vertices in $T$ .", "By Equation (REF ), $\\sum _{s \\in S}deg_{T}(s) \\le |S|(k - 2) \\le l(k - 2).$ By Equation (REF ), we have $kl \\le k(k - 1)$ which implies that $l(k - 2) + 2l \\le k(k - 1)$ .", "Thus, by Equation (REF ), we have $l(k - 2) \\le k(k - 1) - 2l \\le k(k - 1) - l - |S| = n^{\\prime } - |S| = |T|.$ Therefore, by Equations (REF ) and (REF ), we have $\\sum _{s \\in S}deg_{T}(s) \\le |T|.$ If $\\sum _{s \\in S}deg_{T}(s) < |T|$ , then there exists a vertex $v \\in U^{\\prime }_{i}$ for some $i \\in \\lbrace 1, ..., k\\rbrace $ which is not adjacent to any vertex 
in $S$ .", "Since $c_{3} > 0$ and $deg_{G^{\\prime }}(v) = k - 1$ , $v$ is adjacent to exactly one vertex in each $U^{\\prime }_{j}$ for $j \\in \\lbrace 1, ..., k\\rbrace \\setminus \\lbrace i\\rbrace $ .", "Thus, $|N(v) \\cap N(u_{j})| = 1$ .", "But $c_{3} = k - 3$ is minimum among $c_{1}, c_{2}, c_{3}$ , it follows that $|N(v) \\cap N(u_{j})| = 1 = k - 3 = c_{3}$ which implies that $k = 4$ contradicting the assumption.", "Thus, we may assume the the equality in (REF ) holds.", "This implies that Equations (REF ), (REF ) and (REF ) holds.", "The equalities of (REF ) and (REF ) imply that $|S| = l = k - 1$ .", "Also, the equality of (REF ) implies that every vertex in $S$ is adjacent to exactly two vertices in $U$ and has $k - 2$ private neighbours in $T$ with respect to $S$ .", "Thus, the equality of (REF ) implies that every vertex in $T$ is adjacent to exactly one vertex in $S$ .", "Therefore, we can let $x$ be a vertex in $U^{\\prime }_{i}$ such that $x$ is adjacent to exactly one vertex $s \\in S$ which $s$ is adjacent to exactly two vertices $u_{j}$ and $u_{j^{\\prime }}$ in $U$ .", "Because $a = 0$ , $i \\notin \\lbrace j, j^{\\prime }\\rbrace $ .", "As $c_{3} > 0$ , we have that $x$ shares at least one common neighbour with $u_{i^{\\prime }}$ for all $i^{\\prime } \\in I = \\lbrace 1, ..., k\\rbrace \\setminus \\lbrace i, j, j^{\\prime }\\rbrace $ .", "Since $deg_{G^{\\prime }}(x) = k - 1$ and $xs \\in E(G)$ , it follows that $x$ is adjacent to at most $k - 2$ vertices in $\\cup _{i^{\\prime } \\in I}U_{i^{\\prime }}$ .", "Because $k \\ge 5$ , it follows that there exists $i^{\\prime \\prime } \\in I$ such that $|N(x) \\cap N(u_{i^{\\prime \\prime }})| = 1$ .", "Hence, $|N(x) \\cap N(u_{i^{\\prime \\prime }})| = 1 = k - 3 = c_{3}$ which implies that $k = 4$ contradicting the assumption.", "Therefore, $k = 4$ .", "Lemma 2 If $k = 4$ , then $n \\le k^{2} - 4$ .", "Suppose first that $n \\ge k^{2} - 2$ .", "Thus, $n \\ge 14$ .", "Since $c_{1} = 3$ , we may let $x$ and $y$ be a pair of non-adjacent vertices of $G$ such that $N(x) \\cap N(y) = \\lbrace z_{1}, z_{2}, z_{3}\\rbrace = Z$ .", "Further, we let $N(x) \\setminus Z = \\lbrace x_{1}\\rbrace $ and $N(y) \\setminus Z = \\lbrace y_{1}\\rbrace $ .", "Clearly, $N(x) = \\lbrace x_{1}\\rbrace \\cup Z$ and $N(y) = \\lbrace y_{1}\\rbrace \\cup Z$ are independent sets because $a = 0$ .", "Thus, to share a common neighbour between $x_{1}$ and $y$ , $x_{1}y_{1} \\in E(G)$ .", "Let $H$ be the subgraph of $G$ induced by $V(G) \\setminus (\\lbrace x, y, x_{1}, y_{1}\\rbrace \\cup Z)$ .", "By the assumption, $|V(H)| \\ge 7$ .", "Since all $z_{1}, z_{2}, z_{3}$ are adjacent to both $x$ and $y$ and $k = 4$ , it follows that $|N_{H}(Z)| \\le 6$ .", "So, there exists a vertex $p$ of $H$ which is not adjacent to any vertex in $Z$ .", "To share a common neighbour with $x$ and $y$ , we have $px_{1}, py_{1} \\in E(G)$ .", "Hence, $x_{1}x_{2}, px_{1}, py_{1} \\in E(G)$ contradicting $a = 0$ .", "So, we may assume that $n = k^{2} - 3$ .", "Thus, $|V(H)| = 6$ .", "Similarly, if $|N_{H}(Z)| \\le 5$ , then there exists a vertex $p \\in V(H)$ which is adjacent to both $x_{1}$ and $y_{1}$ in order to share a common neighbour with $x$ and $y$ , respectively.", "This yields a contradiction.", "Thus, $|N_{H}(Z)| = 6$ which implies that each $z_{i}$ has exactly two private neighbours in $H$ with respect to $Z$ .", "We let $PN_{H}(z_{i}, Z) = \\lbrace z^{\\prime }_{i}, z^{\\prime \\prime }_{i}\\rbrace $ for all $1 \\le i \\le 3$ .", "The graph $G$ now is 
illustrated by Figure REF.", "Figure: A subgraph of $G$.", "By $a = 0$, $\lbrace z^{\prime }_{i}, z^{\prime \prime }_{i}\rbrace $ is independent.", "For $z^{\prime }_{1}$ to share a common neighbour with both $z_{2}$ and $z_{3}$, we have that $z^{\prime }_{1}$ is adjacent to a vertex in $\lbrace z^{\prime }_{2}, z^{\prime \prime }_{2}\rbrace $ and a vertex in $\lbrace z^{\prime }_{3}, z^{\prime \prime }_{3}\rbrace $.", "Renaming vertices if necessary, we assume that $z^{\prime }_{1}z^{\prime }_{2}, z^{\prime }_{1}z^{\prime }_{3} \in E(G)$.", "Because $a = 0$, $z^{\prime }_{2}z^{\prime }_{3} \notin E(G)$.", "Thus, to share a common neighbour between $z^{\prime }_{2}$ and $z_{3}$ and between $z^{\prime }_{3}$ and $z_{2}$, we have that $z^{\prime }_{2}z^{\prime \prime }_{3}, z^{\prime }_{3}z^{\prime \prime }_{2} \in E(G)$.", "Because $a = 0$, $z^{\prime }_{1}z^{\prime \prime }_{2}, z^{\prime }_{1}z^{\prime \prime }_{3} \notin E(G)$.", "Therefore, to share a common neighbour with $z_{1}$, we have that $z^{\prime \prime }_{2}z^{\prime \prime }_{1}, z^{\prime \prime }_{3}z^{\prime \prime }_{1} \in E(G)$.", "Thus $z^{\prime \prime }_{2}z^{\prime \prime }_{3} \notin E(G)$.", "The induced subgraph $H$ of $G$ is illustrated by Figure REF.", "Figure: The subgraph of $G$ induced by $V(H)$.", "Since $x_{1}y_{1}, x_{1}x, y_{1}y \in E(G)$ and $k = 4$, it follows that each of $x_{1}$ and $y_{1}$ is adjacent to exactly two vertices in $H$.", "Thus, at least one vertex in $H$ has degree at most 3 in $G$, contradicting $k = 4$.", "Therefore, $n \le k^{2} - 4$.", "By Lemmas REF and REF, we can conclude that if $G$ is an $SQSR(n, k, 0; k - 1, k - 2, k - 3)$ graph, then $n \le k^{2} - 4$.", "This establishes the upper bound.", "Now, we characterize all graphs whose $n$ satisfies the upper bound.", "Assume that $n = k^{2} - 4$.", "By Lemma REF, we have that $k = 4$ and this implies that $n = 12$.", "Thus, to characterize all graphs which are $SQSR(k^{2} - 4, k, 0; k - 1, k - 2, k - 3)$, it suffices to characterize the graphs $SQSR(12, 4, 0; 3, 2, 1)$.", "Let $G$ be a graph $SQSR(12, 4, 0; 3, 2, 1)$.", "Recall that $x$ and $y$ are non-adjacent vertices of $G$ such that $N(x) \cap N(y) = \lbrace z_{1}, z_{2}, z_{3}\rbrace = Z$ and $N(x) \setminus Z = \lbrace x_{1}\rbrace , N(y) \setminus Z = \lbrace y_{1}\rbrace $.", "To share a common neighbour between $x_{1}$ and $y$, we have that $x_{1}y_{1} \in E(G)$.", "We further recall that $H$ is the subgraph of $G$ induced by $V(G) \setminus (\lbrace x, y, x_{1}, y_{1}\rbrace \cup Z)$.", "Thus, $|V(H)| = 5$ and we may let $V(H) = \lbrace a_{1}, ..., a_{5}\rbrace $.", "Because $x_{1}y_{1} \in E(G)$ and $a = 0$, it follows that $N_{H}(x_{1}) \cap N_{H}(y_{1}) = \emptyset $.", "Renaming vertices if necessary, we let $N_{H}(x_{1}) = \lbrace a_{1}, a_{2}\rbrace $ and $N_{H}(y_{1}) = \lbrace a_{4}, a_{5}\rbrace $.", "Thus, $a_{1}a_{2}, a_{4}a_{5} \notin E(G)$ since $a = 0$.", "We have the following Lemma.", "Lemma 3 Under the above setting, if there exists $i \in \lbrace 1, 2, 3\rbrace $ such that $|N_{H}(z_{i}) \cap N_{H}(x_{1})| = 1$ and $|N_{H}(z_{i}) \cap N_{H}(y_{1})| = 1$, then $G$ is isomorphic to $G_{2}$.", "Without loss of generality, we may assume that $i = 2$, with $N_{H}(z_{2}) \cap N_{H}(x_{1}) = \lbrace a_{2}\rbrace $ and $N_{H}(z_{2}) \cap N_{H}(y_{1}) = \lbrace a_{4}\rbrace $.", "Since $a = 0$, $a_{2}a_{4} \notin E(G)$.", "To share 
a common neighbour between $a_{5}$ and $z_{2}$, we have $a_{5}a_{2} \in E(G)$.", "Similarly, to share a common neighbour between $a_{1}$ and $z_{2}$, we have $a_{1}a_{4} \in E(G)$.", "We have that $a_{3}$ is adjacent to $z_{1}$ or $z_{3}$ in order to share a common neighbour with $x$ and $y$.", "Without loss of generality, we let $a_{3}z_{1} \in E(G)$.", "If $a_{3}z_{3} \notin E(G)$, then $a_{3}$ is adjacent to three vertices of the set $\lbrace a_{1}, a_{2}, a_{4}, a_{5}\rbrace $ since $deg(a_{3}) = k = 4$.", "This implies that $G[N(a_{3})]$ contains an edge $a_{1}a_{4}$ or $a_{2}a_{5}$, contradicting $a = 0$.", "Hence, $a_{3}z_{3} \in E(G)$.", "To share a common neighbour between $a_{1}$ and $y$ and between $a_{5}$ and $x$, we have that either $a_{1}z_{1}, a_{5}z_{3} \in E(G)$ or $a_{1}z_{3}, a_{5}z_{1} \in E(G)$.", "It can be seen that these two cases are symmetric.", "Thus, without loss of generality, we let $a_{1}z_{1}, a_{5}z_{3} \in E(G)$.", "To share a common neighbour between $a_{3}$ and $z_{2}$, we have that $a_{3}a_{2} \in E(G)$ or $a_{3}a_{4} \in E(G)$.", "Further, because $deg(a_{2}) = deg(a_{4}) = k = 4$, it follows that $a_{3}a_{2}, a_{3}a_{4} \in E(G)$.", "Finally, since $deg(a_{1}) = deg(a_{5}) = k = 4$, we obtain $a_{1}a_{5} \in E(G)$.", "Therefore, $G$ is isomorphic to $G_{2}$.", "We also establish the following Lemma.", "Lemma 4 For all $i \in \lbrace 1, ..., 5\rbrace $, each $a_{i}$ is adjacent to at least one vertex in $Z$.", "To share a common neighbour with both $x$ and $y$, each $a_{i}$ must be adjacent to at least one vertex in $Z$.", "Clearly, $\sum ^{3}_{i = 1}deg_{H}(z_{i}) = 6$.", "Since $|V(H)| = 5$, by Lemma REF, it follows that there exists $i \in \lbrace 1, ..., 5\rbrace $ such that $a_{i}$ is adjacent to exactly two vertices in $Z$ and $a_{j}$ is adjacent to exactly one vertex in $Z$ for all $j \in \lbrace 1, ..., 5\rbrace \setminus \lbrace i\rbrace $.", "We may distinguish two cases.", "Case 1: $i \in \lbrace 1, 2, 4, 5\rbrace $.", "Without loss of generality, we let $i = 1$ and $a_{1}$ is adjacent to $z_{1}$ and $z_{2}$.", "If $z_{1}$ is adjacent to $a_{4}$ or $a_{5}$, then $|N_{H}(x_{1}) \cap N_{H}(z_{1})| = 1$ and $|N_{H}(y_{1}) \cap N_{H}(z_{1})| = 1$.", "By Lemma REF, the graph $G$ is isomorphic to $G_{2}$.", "Note that similar arguments work for $z_2$.", "Next, we assume otherwise that $z_{1}a_{4}, z_{1}a_{5} \notin E(G)$ and $z_{2}a_{4}, z_{2}a_{5} \notin E(G)$.", "Thus, $a_{4}z_{3}, a_{5}z_{3} \in E(G)$ by Lemma REF.", "Further, $a_{2}z_{i}, a_{3}z_{j} \in E(G)$ where $\lbrace i, j\rbrace = \lbrace 1, 2\rbrace $ by Lemma REF.", "Thus, $a_{1}a_{3} \notin E(G)$ since $a = 0$.", "So, $a_{3}a_{2}, a_{3}a_{4}, a_{3}a_{5} \in E(G)$ because $deg(a_{3}) = k = 4$.", "Since $a = 0$, $a_{2}a_{4}, a_{2}a_{5} \notin E(G)$.", "This implies that $deg(a_{2}) \le 3$, which is a contradiction.", "Case 2: $i = 3$.", "Without loss of generality, we let $a_{3}z_{1}, a_{3}z_{3} \in E(G)$.", "If the conditions in Lemma REF hold, we are done.", "Thus, it remains to consider when $N_{H}(z_{2}) = \lbrace a_{4}, a_{5}\rbrace $ or $N_{H}(z_{2}) = \lbrace a_{1}, a_{2}\rbrace $.", "Due to the symmetry, we may assume that $N_{H}(z_{2}) = \lbrace a_{4}, a_{5}\rbrace $.", "By Lemma REF, we have $a_{1}z_{1}, a_{2}z_{3} \in E(G)$ or $a_{1}z_{3}, a_{2}z_{1} \in E(G)$.", "Renaming vertices if necessary, we let $a_{1}z_{1}, a_{2}z_{3} \in E(G)$.", "Thus, $a_{1}a_{2}, a_{2}a_{3}, 
a_{1}a_{3} \\notin E(G)$ because $a = 0$ .", "Hence, $a_{3}a_{4}, a_{3}a_{5} \\in E(G)$ as $deg(a_{3}) = 4$ .", "So, there are at most two edges from $a_{4}, a_{5}$ to $a_{1}, a_{2}$ .", "This yields $deg(a_{i}) \\le 3$ for some $i \\in \\lbrace 1, 2\\rbrace $ contradicting $k = 4$ .", "Therefore, from Case 1 and Case 2, we can conclude that if $G$ satisfies the upper bound, then $G$ is $G_{2}$ .", "These complete the subsection." ], [ "The Lower Bound", "Recall again that $G^{\\prime } = G - N[u], N(u) = U = \\lbrace u_{1}, ..., u_{k}\\rbrace , n^{\\prime } = |V(G^{\\prime })|$ and $U_{i} = N_{G^{\\prime }}(u_{i})$ .", "The following lemma is obvious under the conditions that $a = 0$ and $G$ is $k$ -regular.", "So, we may state without the proof.", "Lemma 5 There are $k(k - 1)$ edges between $G[U]$ and $G^{\\prime }$ .", "Since $a = 0$ , every vertex in $U$ is adjacent to $k - 1$ vertices in $G^{\\prime }$ .", "Thus, $n^{\\prime } \\ge k - 1$ .", "We prove the lower bound by constructing the following three lemmas.", "They will immediately imply $n^{\\prime } \\ge k + 2$ .", "Lemma 6 $n^{\\prime } \\ge k$ .", "We assume to the contrary that $n^{\\prime } = k - 1$ .", "Since we have by Lemma REF that there are $k(k - 1)$ edges from $G^{\\prime }$ to $G[U]$ , by the pigeonhole principle, there is a vertex $v$ in $G^{\\prime }$ such that $|N(u) \\cap N(v)| = |U \\cap N(v)| = k > c_{1}$ , a contradiction.", "Hence, $n^{\\prime } \\ge k$ .", "Lemma 7 $n^{\\prime } \\ge k + 1$ .", "We assume to the contrary that $n^{\\prime } \\le k$ .", "By Lemma REF , we have $n^{\\prime } = k$ .", "Let $V(G^{\\prime }) = \\lbrace x_{1}, ..., x_{k}\\rbrace $ .", "Lemma REF yields that $\\sum ^{k}_{i = 1}deg_{U}(x_{i}) = k(k - 1)$ .", "Clearly, $deg_{U}(x_{i}) = |N_{U}(x_{i})| \\le c_{1} = k - 1$ for all $i \\in \\lbrace 1, ..., k\\rbrace $ .", "If there exists $x_{i}$ such that $deg_{U}(x_{i}) < k - 1$ , then $k(k - 1) > \\sum ^{k}_{i = 1}deg_{U}(x_{i}) = k(k - 1)$ , a contradiction.", "Thus, $deg_{U}(x_{i}) = k - 1$ for all $i \\in \\lbrace 1, ..., n\\rbrace $ .", "Because every vertex in $U$ is adjacent to $k - 1$ vertices in $G^{\\prime }$ , the subgraph of $G$ containing edges between $U$ and $G^{\\prime }$ is complete bipartite minus a perfect matching.", "For $1 \\le i \\le k$ , we let $x_{i}$ be the only vertex in $G^{\\prime }$ that $u_{i}$ is not adjacent.", "Because $deg(x_{i}) = k$ , $deg_{G^{\\prime }}(x_{i}) = 1$ implies that $G^{\\prime }$ is a union of edges.", "Without loss of generality, we let $G^{\\prime }$ be the union of edges $x_{1}x_{2}, x_{3}x_{4}, ..., x_{k - 1}x_{k}$ .", "Because $k \\ge 4$ , $u_{3}$ exists.", "We see that $G[N(u_{3})]$ has an edge $x_{1}x_{2}$ contradicting $a = 0$ .", "Therefore, $n^{\\prime } \\ge k + 1$ .", "Lemma 8 $n^{\\prime } \\ge k + 2$ .", "We assume to the contrary $n^{\\prime } \\le k + 1$ .", "By Lemma REF , we have that $n^{\\prime } = k + 1$ .", "By Equations (REF ) and (REF ), we have $t_{1} + t_{2} + t_{3} = k + 1$ and $(k - 1)t_{1} + (k - 2)t_{2} + (k - 3)t_{3} = k(k - 1).$ By Equations (REF ) and (REF ), we have that $t_{1} + 2t_{2} + 3t_{3} = 2k$ Recall that $u_{1} \\in U$ and $U_{1} = N_{G^{\\prime }}(u_{1})$ .", "Because $a = 0$ , $U_{1}$ is an independent set of size $k - 1$ .", "Thus, there are two vertices in $V(G^{\\prime }) \\setminus U_{1}$ since $n^{\\prime } = k + 1$ .", "Let $x$ be one of these vertices.", "Clearly, $|N_{G^{\\prime }}(x) \\cap U_{1}| = |N(x) \\cap N(u_{1})| \\ge c_{3} = k - 3$ .", "Moreover, $x$ shares at least $c_{3}$ 
common neighbours with $u$.", "That is, $|N_{G^{\prime }}(x)| \le k - c_{3} = 3$.", "Therefore, $3 \ge |N_{G^{\prime }}(x)| \ge |N_{G^{\prime }}(x) \cap U_{1}| \ge c_{3} = k - 3$, implying that $k \le 6.$ We then consider $G$ when $k = 4, 5$ and 6.", "We first consider the case when $k = 6$.", "Because $U_{1}$ is an independent set of size 5, $\alpha (G^{\prime }) \ge 5$.", "By Equations (REF) and (REF), we have $t_{1} + t_{2} + t_{3} = 7$ and $t_{1} + 2t_{2} + 3t_{3} = 12$, which can be solved to give (i) $t_{1} = 2, t_{2} = 5, t_{3} = 0$, (ii) $t_{1} = 3, t_{2} = 3, t_{3} = 1$ and (iii) $t_{1} = 4, t_{2} = 1, t_{3} = 2$.", "For the solution (i) $t_{1} = 2, t_{2} = 5, t_{3} = 0$, there are $t_{1} = 2$ vertices of $G^{\prime }$, each of which is adjacent to $c_{1} = k - 1$ vertices in $U$.", "Thus, all the 2 vertices in $T_{1}$ have degree 1 in $G^{\prime }$.", "Similarly, there are $t_{2} = 5$ vertices of $G^{\prime }$, each of which is adjacent to $c_{2} = k - 2$ vertices in $U$.", "So, all the 5 vertices in $T_{2}$ have degree 2 in $G^{\prime }$.", "Also, there is no vertex in $G^{\prime }$ which is adjacent to $c_{3} = k - 3$ vertices in $U$, as $t_{3} = 0$.", "Thus, there is no vertex of degree 3 in $G^{\prime }$.", "Therefore, the graph $G^{\prime }$ has degree sequence $2, 2, 2, 2, 2, 1, 1$.", "Thus $G^{\prime }$ is either $P_{7}$, the union of $C_{5}$ and $P_{2}$, the union of $C_{4}$ and $P_{3}$, or the union of $C_{3}$ and $P_{4}$.", "In each case, it can be checked that $\alpha (G^{\prime }) \le 4$, a contradiction.", "For the solution (ii) $t_{1} = 3, t_{2} = 3, t_{3} = 1$, the graph $G^{\prime }$ has degree sequence $3, 2, 2, 2, 1, 1, 1$.", "Thus $G^{\prime }$ is the union of $K_{1, 3}$ and $C_{3}$, or $G^{\prime }$ is one of the graphs in Figure REF.", "Figure: The graphs with degree sequence $3, 2, 2, 2, 1, 1, 1$.", "It is easy to check that $\alpha (G^{\prime }) \le 4$ in all the cases, a contradiction.", "For the solution (iii) $t_{1} = 4, t_{2} = 1, t_{3} = 2$, the graph $G^{\prime }$ has degree sequence $3, 3, 2, 1, 1, 1, 1$.", "Thus, the graph $G^{\prime }$ is one of the graphs in Figure REF.", "Figure: The graphs with degree sequence $3, 3, 2, 1, 1, 1, 1$.", "Clearly, the only possible graph whose independence number is 5 is $G^{\prime } = H_{8}$.", "However, $H_{8}$ has the unique maximum independent set $I_{H_{8}} = \lbrace x, y, z, w, s\rbrace $.", "Because $a = 0$ and each $U_{i}$ is an independent set containing 5 vertices, it follows that $I_{H_{8}} = U_{1} = \cdots = U_{6}$.", "That is, all vertices in $U$ are adjacent to every vertex in $I_{H_{8}}$.", "Hence, $|N(u_{i}) \cap N(u_{j})| = k > c_{1}$ for any $u_{i}, u_{j} \in U$, a contradiction.", "We then consider the case when $k = 5$.", "Clearly $U_{1}$ is an independent set of size 4 and this implies $\alpha (G^{\prime }) \ge 4$.", "By Equations (REF) and (REF), we have $t_{1} + t_{2} + t_{3} = 6$ and $t_{1} + 2t_{2} + 3t_{3} = 10$, which can be solved to give (i) $t_{1} = 2, t_{2} = 4, t_{3} = 0$, (ii) $t_{1} = 3, t_{2} = 2, t_{3} = 1$ and (iii) $t_{1} = 4, t_{2} = 0, t_{3} = 2$.", "For the solution (i) $t_{1} = 2, t_{2} = 4, t_{3} = 0$, the graph $G^{\prime }$ has degree sequence $2, 2, 2, 2, 1, 1$.", "Thus $G^{\prime }$ is either $P_{6}$, the union of $C_{3}$ and $P_{3}$, or the union of $C_{4}$ and $P_{2}$.", "In each case, it can be checked that $\alpha (G^{\prime }) \le 3$, a contradiction.", "For the solution (ii) $t_{1} = 3, t_{2} = 2, t_{3} = 1$, the 
graph $G^{\\prime }$ has degree sequence $3, 2, 2, 1, 1, 1$ .", "Thus, $G^{\\prime }$ is one of the graphs in Figure REF .", "Figure: The graphs with degree sequence 3,2,2,1,1,13, 2, 2, 1, 1, 1.It can be checked that the only graph whose independence number is 4 is $H_{12}$ .", "However, $H_{12}$ has the unique maximum independent set $I_{H_{12}} = \\lbrace x, y, z, w\\rbrace $ .", "Because $a = 0$ and every $U_{i}$ is an independent set of 4 vertices, it follows that all vertices in $U$ are adjacent to every vertex in $I_{H_{12}}$ .", "Hence, $|N(u_{i}) \\cap N(u_{j})| = k > c_{1}$ for any $u_{i}, u_{j} \\in U$ , a contradiction.", "For the solution (iii) $t_{1} = 4, t_{2} = 0, t_{3} = 2$ , the graph $G^{\\prime }$ has degree sequence $3, 3, 1, 1, 1, 1$ which can be only the graph $H_{13}$ as illustrated in Figure REF .", "Figure: The graph with degree sequence 3,3,1,1,1,13, 3, 1, 1, 1, 1.Similarly, $H_{13}$ has the unique maximum independent set $I_{H_{13}} = \\lbrace x, y, z, w\\rbrace $ .", "Because $a = 0$ , all vertices in $U$ are adjacent to every vertex in $I_{H_{13}}$ .", "Hence, $|N(u_{i}) \\cap N(u_{j})| = k > c_{1}$ for any $u_{i}, u_{j} \\in U$ , a contradiction.", "We finally consider the case when $k = 4$ .", "Clearly $U_{1}$ is an independent set of size 3 and this implies $\\alpha (G^{\\prime }) \\ge 3$ .", "By Equations (REF ) and (REF ), we have $t_{1} + t_{2} + t_{3} = 5$ and $t_{1} + 2t_{2} + 3t_{3} = 8$ which can be solved that (i) $t_{1} = 2, t_{2} = 3, t_{3} = 0$ and (ii) $t_{1} = 3, t_{2} = 1, t_{3} = 1$ .", "For the solution (i) $t_{1} = 2, t_{2} = 3, t_{3} = 0$ , the graph $G^{\\prime }$ has degree sequence $2, 2, 2, 1, 1$ .", "Thus, $G^{\\prime }$ is either $P_{5}$ , or the union of $C_{3}$ and $P_{2}$ .", "Clearly, $\\alpha (G^{\\prime }) = 2$ when $G^{\\prime }$ is the union of $C_{3}$ and $P_{2}$ .", "Hence, $G$ is $P_{5}$ .", "However, it can be checked that $P_{5}$ has the unique maximum independent set, $I_{P_{5}}$ say.", "Because $a = 0$ , all vertices in $U$ are adjacent to every vertex in $I_{P_{5}}$ .", "Thus, $|N(u_{i}) \\cap N(u_{j})| = k > c_{1}$ for any $u_{i}, u_{j} \\in U$ , a contradiction.", "For the solution (ii) $t_{1} = 3, t_{2} = 1, t_{3} = 1$ , the graph $G^{\\prime }$ has degree sequence $3, 2, 1, 1, 1$ .", "Thus, the graph $G^{\\prime }$ is $H_{14}$ as illustrated in Figure REF .", "Figure: The graph with degree sequence 3,2,1,1,13, 2, 1, 1, 1.Similarly, $H_{14}$ has exactly two independent sets of size 3 which are $\\lbrace x, w, z\\rbrace $ and $\\lbrace y, w, z\\rbrace $ .", "Because $|U| = 4$ , by the pigeonhole principle, there are two vertices $u_{i}, u_{j} \\in U$ such that $N(u_{i}) \\cap N(u_{j}) = \\lbrace x, w, z, u\\rbrace $ or $N(u_{i}) \\cap N(u_{j}) = \\lbrace y, w, z, u\\rbrace $ .", "In any case, we have $|N(u_{i}) \\cap N(u_{j})| = k = 4 > c_{1}$ , a contradiction.", "Hence, $n^{\\prime } \\ge k + 2$ implying that $n \\ge 2k + 3$ .", "This proves the Lemma and establishes the lower bound.", "Finally, we will show that if the graph $G$ is $SQSR(n, k, 0;, k - 1, k - 2, k - 3)$ when $n = 2k + 3$ , then $G$ is isomorphic to $G_{1}$ .", "Let $n = 2k + 3$ .", "Because $|U_{1}| = k - 1$ and $n^{\\prime } = k + 2$ , it follows that there are 3 vertices in $V(G^{\\prime }) \\setminus U_{1}$ .", "As $a = 0$ , we have $U_{1}$ is an independent set.", "We let $x \\in V(G^{\\prime }) \\setminus U_{1}$ .", "Further, we let $X_{1} = N(x) \\cap U_{1}, X_{2} = N(x) \\cap U$ and $X_{3} = N(x) \\cap (V(G^{\\prime }) \\setminus U_{1})$ .", 
"Thus, $X_{3} = N(x) \\setminus (X_{1} \\cup X_{2})$ and $k = |N(x)| = |X_{1}| + |X_{2}| + |X_{3}|.$ Clearly, $x$ has at least $c_{3}$ common neighbours with $u$ and $u_{1}$ .", "Therefore, $|X_{1}| \\ge c_{3} = k - 3$ and $|X_{2}| \\ge c_{3} = k - 3.$ Hence, Equations (REF ), (REF ) and (REF ) imply the equation $k \\ge k - 3 + k - 3 + |X_{3}|$ which yields that $6 \\ge k + |X_{3}| \\ge k$ because $|X_{3}| \\ge 0$ .", "Lemma 9 If $n = 2k + 3$ , then $k = 4$ .", "By Equation (REF ), it suffice to show that $5 \\le k \\le 6$ is not possible.", "We first assume that $k = 6$ .", "Thus, it must be equality throughout Equations (REF ), (REF ) and (REF ).", "Therefore, $|X_{3}| = 0$ , further, $x$ is adjacent to exactly three vertices in $U_{1}$ and is adjacent to exactly three vertices in $U$ .", "Because $x$ is arbitrary vertex in $V(G^{\\prime }) \\setminus U_{1}$ , this must be true for the other two vertices too.", "That is, for $y, z \\in V(G^{\\prime }) \\setminus (U_{1} \\cup \\lbrace x\\rbrace )$ , $y$ and $z$ are adjacent to exactly three vertices in $U_{1}$ and are adjacent to exactly three vertices in $U$ .", "Hence, $V(G^{\\prime })$ is partitioned into two independent sets $U_{1}$ and $\\lbrace x, y, z\\rbrace $ and each vertex in $\\lbrace x, y, z\\rbrace $ is adjacent to exactly three vertices in $U_{1}$ .", "By Equation (REF ), we have $t_{1} + t_{2} + t_{3} = n - k - 1 = k + 2 = 8$ By Equations (REF ) and (REF ) we have that $t_{1} + 2t_{2} + 3t_{3} = 3k = 18$ Further, Equations (REF ) and (REF ) yield $t_{2} + 2t_{3} = 2k - 2 = 10.$ It can be checked that Equations (REF ) and (REF ) give (i) $t_{1} = 0, t_{2} = 6, t_{3} = 2$ , (ii) $t_{1} = 1, t_{2} = 4, t_{3} = 3$ and (iii) $t_{1} = 2, t_{2} = 2, t_{3} = 4$ .", "The solution (i) $t_{1} = 0, t_{2} = 6, t_{3} = 2$ gives that $G^{\\prime }$ has the degree sequences $3, 3, 2, 2, 2, 2, 2, 2$ .", "This is not possible because all $x, y, z$ have degree 3.", "Hence, $G^{\\prime }$ is the graph satisfying either (ii) $t_{1} = 1, t_{2} = 4, t_{3} = 3$ and (iii) $t_{1} = 2, t_{2} = 2, t_{3} = 4$ .", "The solutions (ii) and (iii) give degree sequence $3, 3, 3, 2, 2, 2, 2, 1$ and $3, 3, 3, 3, 2, 2, 1, 1$ (see Figure REF ), respectively.", "From these two graphs, $U_{1}$ is the unique maximum independent set of $G^{\\prime }$ that contains 5 vertices.", "Thus, $U_{1} = \\cdots = U_{6}$ .", "This yields a contradiction that $|N(u_{i}) \\cap N(u_{j})| = |\\lbrace u\\rbrace \\cup U_{1}| = k > k - 1 = c_{1}$ .", "Hence, $k \\ne 6$ .", "Clearly, the case when $k = 5$ does not occur because $n = 2k + 3 = 2(5) + 3 = 13$ is odd number.", "Thus, $k = 4$ .", "Figure: The graph with degree sequences 3,3,3,2,2,2,2,13, 3, 3, 2, 2, 2, 2, 1 (left) and 3,3,3,3,2,2,1,13, 3, 3, 3, 2, 2, 1, 1 (right).The above lemma implies that all graphs in the class of $SQSR(2k + 3, k, 0; k - 1, k - 2, k - 3)$ graphs are $SQSR(11, 4, 0; 3, 2, 1)$ .", "Hence, it remains to prove that every $SQSR(11, 4, 0; 3, 2, 1)$ graph is $G_{1}$ .", "Our approach here requires the following lemma.", "Lemma 10 If there exists a vertex $u \\in V(G)$ which $N(u)$ contains $x$ and $y$ such that $N(x) \\cap N(y) = \\lbrace u\\rbrace $ , then $G$ is $G_{1}$ .", "We may let $N_{G^{\\prime }}(x) = N(x) \\setminus \\lbrace u\\rbrace = \\lbrace x_{1}, x_{2}, x_{3}\\rbrace , N_{G^{\\prime }}(y) = N(y) \\setminus \\lbrace u\\rbrace = \\lbrace y_{1}, y_{2}, y_{3}\\rbrace $ and $N(u) \\setminus \\lbrace x, y\\rbrace = \\lbrace z, w\\rbrace $ .", "Clearly, $V(G) \\setminus N[u] = \\lbrace x_{1}, 
x_{2}, x_{3}, y_{1}, y_{2}, y_{3}\rbrace $ because $n = 11$.", "Since $a = 0$, the sets $\lbrace x_{1}, x_{2}, x_{3}\rbrace $ and $\lbrace y_{1}, y_{2}, y_{3}\rbrace $ are independent.", "Because $c_{i} \le 3$ for all $i \in \lbrace 1, 2, 3\rbrace $, it follows that $z$ is adjacent to exactly two vertices in one of the sets $N_{G^{\prime }}(x)$ or $N_{G^{\prime }}(y)$ and is adjacent to exactly one vertex in the other set.", "Renaming vertices if necessary, we may suppose that $z$ is adjacent to $x_{1}$ and $x_{2}$ in $N_{G^{\prime }}(x)$ and is adjacent to $y_{1}$ in $N_{G^{\prime }}(y)$.", "Since $a = 0$, $x_{1}y_{1}, x_{2}y_{1} \notin E(G)$.", "If neither $x_{1}$ nor $x_{2}$ is adjacent to $w$, then both $x_{1}$ and $x_{2}$ are adjacent to $y_{2}$ and $y_{3}$.", "This implies the contradiction $|N(x_{1}) \cap N(x_{2})| = 4 > 3 = c_{1}$.", "Thus, at least one of $x_{1}$ or $x_{2}$ is adjacent to $w$.", "Renaming vertices if necessary, we may suppose that $x_{1}w \in E(G)$.", "We first consider the case when $x_{2}w \in E(G)$.", "So, $\lbrace x, z, w\rbrace \subseteq N(x_{1}) \cap N(x_{2})$.", "Because $|N(x_{1}) \cap N(x_{2})| \le 3 = c_{1}$, $x_{1}$ and $x_{2}$ are adjacent to distinct vertices $y_{2}$ and $y_{3}$.", "Renaming vertices if necessary, $x_{1}y_{2}, x_{2}y_{3} \in E(G)$.", "Thus, $wy_{2}, wy_{3} \notin E(G)$ because $a = 0$.", "Moreover, $x_{2}y_{2} \notin E(G)$, as otherwise $|N(x_{1}) \cap N(x_{2})| > c_{1}$.", "Therefore, $y_{2}$ can be adjacent only to $y$, $x_{1}$ and $x_{3}$, implying the contradiction $deg(y_{2}) < 4$.", "We now consider the case when $x_{2}w \notin E(G)$.", "Thus, $x_{2}y_{2}, x_{2}y_{3} \in E(G)$ as $deg(x_{2}) = 4$.", "Similarly, $x_{1}$ is adjacent to either $y_{2}$ or $y_{3}$.", "Renaming vertices if necessary, we let $x_{1}y_{2} \in E(G)$.", "This implies by $a = 0$ that $y_{2}w \notin E(G)$.", "Further, $y_{3}w \in E(G)$, as otherwise $|N(x_{1}) \cap N(y_{3})| = 0 < c_{3}$.", "This yields $y_{1}w, y_{1}x_{3} \in E(G)$ because $deg(y_{1}) = 4$.", "Thus $x_{3}$ is adjacent to both $y_{2}$ and $y_{3}$.", "Clearly, the graph $G$ is $G_{1}$ and this proves the lemma.", "To complete the section, we suppose first that there exists $v \in V(G) \setminus N[u]$ such that $|N(v) \cap N(u)| \ge 3$.", "Because $c_{1} = 3$, $|N(v) \cap N(u)| = 3$.", "We may let $N(v) \cap N(u) = \lbrace x, y, z\rbrace $ and $\lbrace w\rbrace = N(u) \setminus N(v)$.", "Because $k = 4$, $v$ is adjacent to one vertex in $V(G) \setminus N[u]$, $s$ say.", "We see that $sx, sy, sz \notin E(G)$ since $a = 0$.", "So $sw \in E(G)$, as otherwise $|N(s) \cap N(u)| = 0 < c_{3}$.", "Clearly, $wx, wy, wz \notin E(G)$, and this implies that $s$ is a vertex in $V(G)$ such that $N(s)$ has two vertices $v, w$ with $N(v) \cap N(w) = \lbrace s\rbrace $.", "By Lemma REF, $G$ is $G_{1}$.", "Hence, we may assume that every vertex in $V(G) \setminus N[u]$ has at most 2 neighbours in $N(u)$.", "Because $N(u)$ is independent, there are 12 edges from $N(u)$ to $V(G) \setminus N[u]$.", "Since there are 6 vertices in $V(G) \setminus N[u]$, each with at most 2 neighbours in $N(u)$, this implies that $|N(v) \cap N(u)| = 2$ for each $v \in V(G) \setminus N[u]$.", "Moreover, by Lemma REF, we can assume that every vertex $p \in \lbrace x, y, z, w\rbrace $ shares at least one neighbour with every vertex $q \in \lbrace x, y, z, w\rbrace \setminus \lbrace p\rbrace $ in $G^{\prime }$.", "Since there are exactly six such pairs and six vertices in $V(G) \setminus N[u]$, each covering one pair, we may relabel the 6 vertices in $V(G) \setminus N[u]$ as $v_{xy}, v_{xz}, v_{xw}, v_{yz}, v_{yw}, v_{zw}$, where the vertex $v_{ij}$ is adjacent to $i$ and $j$ for each pair $\lbrace i, j\rbrace \subseteq \lbrace x, y, z, w\rbrace $.", "Because $a = 0$, $v_{xy}$ is not adjacent to $v_{xz}, v_{xw}, v_{yz}, v_{yw}$.", "Therefore, $deg(v_{xy}) < 4$, a contradiction.", "This completes the proof of the Main Theorem."
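As a supplement (ours, not from the paper), the quasi-strongly-regular parameters themselves can be checked mechanically on a candidate graph such as $G_{1}$ or $G_{2}$. The sketch below assumes the usual definition: $G$ is $k$-regular, adjacent vertices have exactly $a$ common neighbours, non-adjacent vertices have $c_{1}$, $c_{2}$ or $c_{3}$ common neighbours, and for grade three all three values are attained; the function name and the adjacency-dict input format are assumptions of ours.

```python
from itertools import combinations

# Checker sketch for SQSR(n, k, a; c1, c2, c3) parameters, assuming the
# standard definition of a quasi-strongly regular graph of grade three.
# `adj` maps each vertex to the set of its neighbours.

def is_sqsr(adj, n, k, a, cs):
    if len(adj) != n or any(len(nbrs) != k for nbrs in adj.values()):
        return False                  # wrong order, or not k-regular
    attained = set()
    for u, v in combinations(adj, 2):
        common = len(adj[u] & adj[v])
        if v in adj[u]:               # adjacent pair
            if common != a:
                return False
        else:                         # non-adjacent pair
            if common not in cs:
                return False
            attained.add(common)
    return attained == set(cs)        # grade 3: every c_i is attained
```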
], [ "Concluding Remark", "Quasi-strongly regular graphs of grade two have been widely studied, exploiting the fact that the numbers $t_i(u)$ do not depend on the vertex $u$.", "Many results on these graphs have been published, including characterizations for some classes of parameters.", "For graphs of higher grade, we believe that the lack of this independence, as exhibited in Example 2 of Goldberg [9], makes the problem more complex.", "To the best of our knowledge, no characterization of quasi-strongly regular graphs of grade three has been obtained yet.", "Motivated by the graph presented in Example 2 of [9], we considered a generic class of quasi-strongly regular graphs to which this graph belongs.", "We found bounds on the number of vertices $n$ for a given degree $k$ of these graphs, and with these bounds we characterized the class of $SQSR(11, 4, 0; 3, 2, 1)$ graphs and the class of $SQSR(12, 4, 0; 3, 2, 1)$ graphs.", "For future work, one may consider other classes of quasi-strongly regular graphs of grade higher than 2, analyse some general conditions, and look for new techniques to study these types of graphs.", "Additionally, the construction of such graphs may also be considered." ], [ "Acknowledgements", "The first author is supported by Thailand Science Research and Innovation (TSRI) Basic Research Fund: Fiscal year 2021 under project number 64A306000005." ] ]
2105.11787
[ [ "Minimal graph in which the intersection of two longest paths is not a\n separator" ], [ "Abstract We prove that for a connected simple graph $G$ with $n\\le 10$ vertices, and two longest paths $C$ and $D$ in $G$, the intersection of vertex sets $V(C)\\cap V(D)$ is a separator.", "This shows that the graph found previously with $n=11$, in which the complement of the intersection of vertex sets $V(C)\\cap V(D)$ of two longest paths is connected, is minimal." ], [ "[display] thechapter20pt 11em" ], [ "1.11em" ], [ "[runin] 1.1.11em" ], [ "[runin] 1.1.1.11em" ], [ "[runin] 1.1.1.1.11em *" ], [ "0pt50pt40pt *" ], [ "0pt3.5ex plus 1ex minus .2ex2.3ex plus .2ex *" ], [ "0pt3.25ex plus 1ex minus .2ex1.5ex plus .2ex *" ], [ "0pt3.25ex plus 1ex minus .2ex1.5ex plus .2ex *" ], [ "0pt3.25ex plus 1ex minus .2ex1em *" ], [ "3.25ex plus 1ex minus .2ex1em 2.55em section[1.5em]2.8em1pc subsection[3.7em]3.4em1pc subsubsection[7.6em]3.2em1pc paragraph[10.3em]3.2em1pc Intersection of two longest paths not a separator Juan GutierrezChristian Valqui $^1$ Departamento de Ciencia de la Computación Universidad de Ingeniería y Tecnología (UTEC)[email protected] $^2$ Pontificia Universidad Católica del Perú, Sección Matemáticas, PUCP, Av.", "Universitaria 1801, San Miguel, Lima 32, Perú.$^3$ Instituto de Matemática y Ciencias Afines (IMCA) Calle Los Biólogos 245.", "Urb San César.", "La Molina, Lima 12, Perú[email protected] primary 05C38; secondary 05C45 Hippchen's conjecture, three longest paths, traceable graph, intersection of longest paths" ], [ "Introduction", "In  the following result was established.", "Let $k\\in \\lbrace 3,4,5\\rbrace $ and let $G$ be a 2-connected graph.", "Suppose that $C$ and $D$ are two longest cycles of $G$ with $V(C)\\ne V(D)$ , meeting in a set $W$ of exactly $k$ vertices.", "Then $W$ is a separator of $G$ (called an articulation set in ), which means that the complement is not connected.", "In  the same result is proved for $k\\in \\lbrace 6,7 \\rbrace $ .", "The result cannot be true for $k=8$ , since the Petersen graph has two longest cycles $C$ and $D$ of length 9 with $V(C)\\ne V(D)$ , such that the intersection has 8 vertices, and $G\\setminus (V(C)\\cap V(D))$ is connected.", "Using methods developed in order to approach the Hippchen conjecture, a path version of some of these results was proved recently in *Theorem 5.7 and Corollary 5.8.", "Assume that $P$ and $Q$ are two longest paths in a simple graph $G$ .", "If $V(Q)\\ne V(P)$ and $V(P)\\cap V(Q)$ has cardinality $\\ell \\le 5$ , then it is a separator.", "Moreover, if $V(Q)\\ne V(P)$ and $n=|V(G)|\\le 7$ then $V(Q)\\cap V(P)$ is a separator.", "The following question was raised in : Which are the maximal $\\ell $ and $n$ such that the above results remain true?", "Consider the following graph with 11 vertices, that has two longest path $P$ and $Q$ of length 9, which satisfy $V(Q)\\ne V(P)$ and moreover, the complement of $V(Q)\\cap V(P)$ is connected.", "Since $\\# (V(P)\\cap V(Q))=9$ we have $n=11$ and $\\ell =9$ .", "[scale=1] (3,0.2) node Simple graph, $n=11$ vertices; (6,2.3) node ; [black] (0,3) circle (2pt) [black] (2,0.8) circle (2pt) [black] (1,3) circle (2pt) [black] (2,2) circle (2pt) [black] (2,3) circle (2pt) [black] (3,2) circle (2pt) [black] (4,2) circle (2pt) [black] (4,3) circle (2pt) [black] (4,0.8) circle (2pt) [black] (5,3) circle (2pt) [black] (6,3) circle (2pt); [-] (0,3)–(1,3); [-] (1,3)–(2,3); [-] (2,0.8)–(1,3); [-] (2,0.8)–(4,2); [-] (2,2)–(2,3); [-] (2,2)–(4,2); [-,white,line width=2pt] 
(2.5,1.7)–(3.5,1.1); [-] (2,2)–(4,0.8); [-] (4,2)–(4,3); [-] (4,3)–(5,3); [-] (5,3)–(6,3); [-] (4,0.8)–(5,3); [-] (2,3)–(4,3); [-] (2,0.8)–(4,0.8); (2,3.5) node ; [scale=1] (3,0.2) node Two longest paths, $\\ell =\\#(V(P)\\cap V(Q))=9$ ; (6,2.3) node ; [-] (0,3)–(1,3); [-] (1,3)–(2,3); [-] (2,0.8)–(1,3); [-] (2,0.8)–(4,2); [-] (2,2)–(2,3); [-] (2,2)–(4,2); [-] (4,2)–(4,3); [-] (4,3)–(5,3); [-] (5,3)–(6,3); [-] (4,0.8)–(5,3); [-] (2,3)–(4,3); [-] (2,0.8)–(4,0.8); [-,red] (0,2.95)–(1,2.95); [-,red] (2.03,0.83)–(1.03,3.03); [-,red] (2,2.05)–(4,2.05); [-,green] (2,0.85)–(4,2.05); [-,white,line width=3pt] (2.5,1.68)–(3.5,1.08); [-] (2,2)–(4,0.8); [-,red] (2,1.95)–(4,0.75); [-,red] (3.95,2)–(3.95,3); [-,red] (4,3.05)–(5,3.05); [-,red] (5,3.05)–(6,3.05); [-,red] (2,0.85)–(4,0.85); [-,green] (0,3.05)–(1,3.05); [-,green] (1,3.05)–(2,3.05); [-,green] (2,0.75)–(4,0.75); [-,green] (1.95,2)–(1.95,3); [-,green] (2,1.95)–(4,1.95); [-,green] (5,2.95)–(6,2.95); [-,green] (4.05,0.8)–(5.05,3); [red] (4,3) circle (2pt); [green] (0,3) circle (2pt) [green] (2,0.8) circle (2pt) [green] (1,3) circle (2pt) [green] (2,2) circle (2pt) [green] (2,3) circle (2pt) [green] (3,2) circle (2pt) [green] (4,2) circle (2pt) [green] (4,0.8) circle (2pt) [green] (5,3) circle (2pt) [green] (6,3) circle (2pt); [red,fill=red] (0.05,3.05) arc (45:225:.07cm); [red,fill=red] (1.05,3.05) arc (45:225:.07cm); [red,fill=red] (2.05,0.85) arc (45:225:.07cm); [red,fill=red] (2.05,2.05) arc (45:225:.07cm); [red,fill=red] (3.05,2.05) arc (45:225:.07cm); [red,fill=red] (4.05,0.85) arc (45:225:.07cm); [red,fill=red] (4.05,2.05) arc (45:225:.07cm); [red,fill=red] (5.05,3.05) arc (45:225:.07cm); [red,fill=red] (6.05,3.05) arc (45:225:.07cm); (2,3.5) node ; This shows that $n_{max}\\le 10$ and $\\ell _{max}\\le 8$ , and combining this with the results of , we obtain that $7\\le n_{max} \\le 10$ and $5\\le \\ell _{max} \\le 8$ .", "In this article we will prove that $n_{max}=10$ , as we have announced in .", "In order to prove that $n_{max}=10$ we assume that there exists a graph $G$ with $n=V(G)\\in \\lbrace 8,9,10\\rbrace $ , such that $V(Q)\\ne V(P)$ and such that $G\\setminus (V(P)\\cap V(Q))$ is connected, and we will arrive at a contradiction.", "If $n=\\ell +2$ , then $\\# (V(P)\\setminus V(Q))=1$ , $\\# (V(Q)\\setminus V(P))=1$ and $V(G)=V(P)\\cup V(Q)$ .", "This yields three cases that we will discard in section : $\\ell =6$ , $n=8$ , $\\ell =7$ , $n=9$ , $\\ell =8$ , $n=10$ .", "If $\\# (V(P)\\setminus V(Q))=1$ and $\\# (V(Q)\\setminus V(P))=1$ but $n> \\ell +2$ , then we write $\\lbrace p_0\\rbrace =(V(P)\\setminus V(Q))$ , $\\lbrace q_0\\rbrace =(V(Q)\\setminus V(P))$ and set $V_0=V(G)\\setminus (V(P)\\cup V(Q))$ .", "We build a new graph $G_1$ , deleting $V_0$ and all edges incident with vertices in $V_0$ , and then adding a new edge connecting $p_0$ with $q_0$ .", "In this new graph $P$ and $Q$ are still longest paths, $V(Q)\\ne V(P)$ and $G_1\\setminus (V(P)\\cap V(Q))$ is connected, and we have the same $\\ell =|V(P)\\cap V(Q)|$ , but now we are in the case $n=\\ell +2$ .", "Since $\\# (V(P)\\setminus V(Q))=\\# (V(Q)\\setminus V(P))$ , the only remaining case is $\\ell =6$ and $n=10$ , $\\# (V(P)\\setminus V(Q))=2$ and $\\# (V(Q)\\setminus V(P))=2$ , which we will discard in section ." 
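Since the graphs involved are small ($n\le 11$), the configuration described above can also be searched for by brute force. The following Python sketch is our own illustration, not part of the paper: it enumerates all longest paths of a given graph and looks for a pair $P,Q$ with $V(P)\ne V(Q)$ whose intersection fails to be a separator; all names and the adjacency-dict encoding are assumptions of ours.

```python
from itertools import combinations

# Brute-force sketch: exponential in general, but fine for n <= 11.
# `adj` maps each vertex to the set of its neighbours.

def longest_paths(adj):
    best, found = 0, []
    def extend(path, seen):
        nonlocal best, found
        if len(path) > best:
            best, found = len(path), []
        if len(path) == best:
            found.append(tuple(path))
        for w in adj[path[-1]]:
            if w not in seen:
                path.append(w); seen.add(w)
                extend(path, seen)
                path.pop(); seen.discard(w)
    for v in adj:
        extend([v], {v})
    return found

def is_connected_on(adj, verts):
    verts = set(verts)
    if not verts:
        return False
    stack, seen = [next(iter(verts))], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] & verts)
    return seen == verts

def find_counterexample(adj):
    # a pair P, Q with V(P) != V(Q) whose intersection is NOT a separator
    for P, Q in combinations(longest_paths(adj), 2):
        if set(P) != set(Q):
            rest = set(adj) - (set(P) & set(Q))
            if is_connected_on(adj, rest):
                return P, Q
    return None
```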
], [ "The case $n=\\ell +2$", "In this section we will we assume that there exists a graph $G$ with $n=V(G)\\in \\lbrace 8,9,10\\rbrace $ and two longest paths $P$ and $Q$ with $\\ell =\\#(V(P)\\cap V(Q))=n-2$ , such that $V(Q)\\ne V(P)$ and $G\\setminus (V(P)\\cap V(Q))$ is connected.", "We set $P^{\\prime }=V(P)\\setminus V(Q)$ and $Q^{\\prime }=V(Q)\\setminus V(P)$ .", "In this case we know already that $\\# P^{\\prime }=1$ , $\\# Q^{\\prime }=1$ and $V(G)=V(P)\\cup V(Q)$ .", "We write $V(P)=\\lbrace p_1,\\dots ,p_{n-1}\\rbrace $ , and assume that in the path $P$ the vertices $p_i$ and $p_{i+1}$ are consecutive.", "Clearly $P^{\\prime }\\ne \\lbrace p_1\\rbrace $ and $P^{\\prime }\\ne \\lbrace p_{n-1}\\rbrace $ .", "Otherwise, since there is an edge from $P^{\\prime }$ to $Q^{\\prime }$ , we could expand the path $P$ to a Hamiltonian path.", "Remark 3.1 Write $V(Q)\\setminus V(P)=\\lbrace q\\rbrace $ .", "By the same (symmetric) argument as above $q$ cannot be and endpoint of $Q$ .", "Note also that $q$ cannot connect directly with $p_1$ or $p_{n-1}$ .", "If $P^{\\prime }=\\lbrace p_i\\rbrace $ , then by assumption there is an edge connecting $q$ with $p_i$ .", "Since $q$ is not an endpoint of $Q$ , there are two vertices $p_j,p_k$ in $V(Q)\\cap V(P)$ such that $q$ connects directly to them, and we can and will assume that $j<k$ .", "Note that $|i-j|,|k-i|,|j-k|>1.$ In fact, if $|j-i|=1$ , then we can replace the edge connecting $p_i$ with $p_j$ by the path $p_i q p_j$ in the path $P$ and obtain a longer path.", "Similarly, if $|k-i|=1$ then we can replace the edge connecting $p_i$ with $p_k$ in $P$ by the path $p_i q p_k$ , and obtain a longer path, and if $|k-j|=1$ then we can replace the edge connecting $p_k$ with $p_j$ in $P$ by the path $p_k q p_j$ , and obtain a longer path.", "Proposition 3.2 We can assume that $P^{\\prime }\\ne \\lbrace p_2\\rbrace $ (and by symmetry $P^{\\prime }\\ne \\lbrace p_{n-2}\\rbrace $ ).", "Assume that $V(P)\\setminus V(Q)=P^{\\prime }=\\lbrace p_2\\rbrace $ .", "We will use the lollipop method in order to replace the path $P$ with another longest path $\\widetilde{P}$ with $V(\\widetilde{P})\\ne V(Q)$ such that $G\\setminus (V(P)\\cap V(Q))$ is connected, and such that $V(\\widetilde{P})\\setminus V(Q)\\ne \\lbrace \\tilde{p}_2\\rbrace ,\\lbrace \\tilde{p}_{n-2}\\rbrace $ .", "By Remark REF the vertex $q$ connects directly with vertices $p_j$ and $p_k$ .", "We can assume $j<k$ and then, again by Remark REF , we have $5<k<n-1$ .", "Set $\\widetilde{P}:=p_{k-1}p_{k-2}\\dots p_{3} p_2 q p_k p_{k+1}\\dots p_{n-2}p_{n-1}.$ [scale=0.68] (4.5,-0.2) node $q$ connects with $p_j$ and $p_k$ ; (6,2.3) node ; [-] (0,1)–(1,3); [-] (1,3)–(2,1); [-] (2,1)–(3,1); [-] (3,1)–(4,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-] (8,1)–(9,1); [-,red] (3,1)–(4.5,3); [-,red] (4.5,3)–(6,1); [-,dotted] (1,3)..controls(2.75,3.5)..(4.5,3); [red] (4.5,3) circle (2pt); [black] (0,1) circle (2pt) [black] (1,3) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (8,1) circle (2pt) [black] (9,1) circle (2pt); (0,0.6) node $p_1$ ; (1,3.4) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_j$ ; (5,0.6) node $p_{k-1}$ ; (6,0.6) node $p_k$ ; (9,0.6) node $p_{n-1}$ ; (4.5,3.4) node $q$ ; (4.5,1) node $\\dots $ ; (7.5,1) node $\\dots $ ; (10.5,1) node ; [scale=0.68] (4.5,-0.3) node The path $\\widetilde{P}$ in blue; (6,2.3) node ; [-] (0,1)–(1,3); [-,line 
width=2pt,cyan] (1,3)–(2,1); [-,line width=2pt,cyan] (2,1)–(3,1); [-,line width=2pt,cyan] (3,1)–(4,1); [-,line width=2pt,cyan] (4,1)–(5,1); [-] (5,1)–(6,1); [-,line width=2pt,cyan] (6,1)–(7,1); [-,line width=2pt,cyan] (7,1)–(8,1); [-,line width=2pt,cyan] (8,1)–(9,1); [-,red] (3,1)–(4.5,3); [-,line width=2pt,cyan] (4.5,3)–(6,1); [-,line width=2pt,cyan] (1,3)..controls(2.75,3.5)..(4.5,3); [red] (4.5,3) circle (2pt); [black] (0,1) circle (2pt) [black] (1,3) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (8,1) circle (2pt); [cyan] (9,1) circle (2.5pt) [cyan] (5,1) circle (2.5pt); (0,0.6) node $p_1$ ; (1,3.4) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_j$ ; (5,0.6) node $p_{k-1}$ ; (6,0.6) node $p_k$ ; (9,0.6) node $p_{n-1}$ ; (4.5,3.4) node $q$ ; (4.5,1) node $\\dots $ ; (7.5,1) node $\\dots $ ; Then $\\widetilde{P}$ is also a longest path with $n-1$ vertices, $V(\\widetilde{P})\\setminus V(Q)=\\lbrace p_2\\rbrace $ is connected with $\\lbrace p_1\\rbrace =V(Q)\\setminus V(\\widetilde{P})$ , and so $G\\setminus (V(\\widetilde{P})\\cap V(Q)$ is connected.", "Moreover, since $\\tilde{p}_1=p_{k-1}$ , $\\tilde{p}_2=p_{k-2}$ and $k>5$ we have $k-2>3>2$ , and so $V(\\widetilde{P})\\setminus V(Q)=\\lbrace p_2\\rbrace \\ne \\lbrace p_{k-2}\\rbrace = \\lbrace \\tilde{p}_2\\rbrace .$ Since $n>4$ we have $2<n-2$ , and so $V(\\widetilde{P})\\setminus V(Q)=\\lbrace p_2\\rbrace \\ne \\lbrace p_{n-2}\\rbrace = \\lbrace \\tilde{p}_{n-2}\\rbrace ,$ which concludes the proof.", "There are some pairs of vertices of $P$ that cannot be connected by an edge of $Q$ without generating a Hamiltonian path.", "The following proposition collects some of the forbidden pairs in $\\lbrace p_1,p_{i-1},p_{i+1},p_{j-1},p_{j+1},p_{k-1},p_{k+1},p_{n-1}\\rbrace $ Proposition 3.3 The following pairs cannot be connected by an edge in $Q$ .", "$(p_{i-1},p_{i+1})$ , $(p_1,p_{n-1})$ , $(p_1,p_{i+1})$ , $(p_{i-1},p_{n-1})$ .", "For $x\\in \\lbrace j,k\\rbrace $ , the pairs $(p_{1},p_{x+1})$ , $(p_{x-1},p_{n-1})$ , $(p_{i-1},p_{x-1})$ , $(p_{i+1},p_{x+1})$ .", "For $x\\in \\lbrace j,k\\rbrace $ with $x>i$ , the pair $(p_1,p_{x-1})$ is forbidden, and for $x\\in \\lbrace j,k\\rbrace $ with $x<i$ , the pair $(p_{x+1},p_{n-1})$ is forbidden.", "If $j<i$ , the pair $(p_1,p_{i-1})$ is forbidden, and if $k>i$ , the pair $(p_{i+1},p_{n-1})$ is forbidden.", "(1).", "There cannot be an edge of $Q$ connecting $p_{i-1}$ with $p_{i+1}$ , since then we could replace that edge with the path $p_{i-1}p_i p_{i+1}$ in $Q$ and obtain a longer path.", "The Hamiltonian paths in each of following three diagrams show that none of the pairs $(p_1,p_{i+1})$ , $(p_1,p_{n-1})$ , $(p_{i-1},p_{n-1})$ are allowed.", "[scale=0.45] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_1$ with $p_{i+1}$ ; [-,line width=2pt,cyan] (0,1)–(1,1); [-,line width=2pt,cyan] (1,1)–(2,1); [-,line width=2pt,cyan] (2,1)–(3,2); [-] (3,2)–(4,1); [-,line width=2pt,cyan] (4,1)–(5,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-,line width=2pt,cyan] (6,1)–(7,1); [-,line width=2pt,cyan] (7,1)–(8,1); [-,line width=2pt,cyan] (3,2)–(6,2); [-,line width=2pt,cyan] (0,1)..controls(-0.2,-0.2)and(4,0)..(4,1); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (8,1) circle (2pt); (-0.3,0.7) node 
$p_1$ ; (2.2,0.6) node $p_{i-1}$ ; (2.6,2.2) node $p_i$ ; (4.6,0.6) node $p_{i+1}$ ; (8,0.6) node $p_{n-1}$ ; (6.2,2.2) node $q$ ; (8.5,3) node ; [scale=0.45] (6,2.3) node ; (4.2,-0.5) node $Q$ can't connect $p_1$ with $p_{n-1}$ ; [-,line width=2pt,cyan] (0,1)–(1,1); [-,line width=2pt,cyan] (1,1)–(2,1); [-,line width=2pt,cyan] (2,1)–(3,2); [-] (3,2)–(4,1); [-,line width=2pt,cyan] (4,1)–(5,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-,line width=2pt,cyan] (6,1)–(7,1); [-,line width=2pt,cyan] (7,1)–(8,1); [-,line width=2pt,cyan] (3,2)–(6,2); [-,line width=2pt,cyan] (0,1)..controls(0,-0.2)and(8.3,-0.3)..(8,1); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (8,1) circle (2pt); (-0.3,0.7) node $p_1$ ; (2.2,0.6) node $p_{i-1}$ ; (2.6,2.2) node $p_i$ ; (4.5,0.6) node $p_{i+1}$ ; (8.4,1.4) node $p_{n-1}$ ; (6.2,2.2) node $q$ ; (8.5,3) node ; [scale=0.45] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{i-1}$ with $p_{n-1}$ ; [-,line width=2pt,cyan] (0,1)–(1,1); [-,line width=2pt,cyan] (1,1)–(2,1); [-] (2,1)–(3,2); [-,line width=2pt,cyan] (3,2)–(4,1); [-,line width=2pt,cyan] (4,1)–(5,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-,line width=2pt,cyan] (6,1)–(7,1); [-,line width=2pt,cyan] (7,1)–(8,1); [-,line width=2pt,cyan] (3,2)–(6,2); [-,line width=2pt,cyan] (2,1)..controls(2,-0.2)and(8.3,-0.3)..(8,1); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (8,1) circle (2pt); (0,0.6) node $p_1$ ; (2.9,1) node $p_{i-1}$ ; (2.6,2.2) node $p_i$ ; (4.6,0.6) node $p_{i+1}$ ; (8.4,1.4) node $p_{n-1}$ ; (6.2,2.2) node $q$ ; (8.5,3) node ; (2).", "The path $q p_x p_{x-1} p_{x-2}\\dots p_2 p_1 p_{x+1} p_{x+2}\\dots p_{n-1}$ shows that $(p_{1},p_{x+1})$ is forbidden, and the path $p_1 p_2 \\dots p_{x-2}p_{x-1}p_{n-1} p_{n-2}\\dots p_{x+2} p_{x+1} p_x q$ discards the pair $(p_{x-1},p_{n-1})$ .", "If $x<i$ , then the Hamiltonian paths in the following two diagrams show that $(p_{x-1},p_{i-1})$ , $(p_{x+1},p_{i+1})$ are not allowed in that case, [scale=0.5] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{x-1}$ with $p_{i-1}$ ; [-,line width=2pt,cyan] (0,1)–(2,1); [-,line width=2pt,cyan] (3,1)–(5,1); [-,line width=2pt,cyan] (3,1)–(3,2); [-] (2,1)–(3,1); [-] (5,1)–(6,2); [-,line width=2pt,cyan] (6,2)–(7,1); [-,line width=2pt,cyan] (7,1)–(9,1); [-,line width=2pt,cyan] (7,1)–(8,1); [-,line width=2pt,cyan] (3,2)–(6,2); [-,line width=2pt,cyan] (2,1)..controls(1.8,-0.2)and(5,0)..(5,1); [red] (3,2) circle (2pt); [black] (0,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,2) circle (2pt) [black] (7,1) circle (2pt) [black] (9,1) circle (2pt); (0,0.6) node $p_1$ ; (2,1.4) node $p_{x-1}$ ; (3.2,0.6) node $p_{x}$ ; (5.6,0.6) node $p_{i-1}$ ; (6.3,2.3) node $p_i$ ; (7.2,0.6) node $p_{i+1}$ ; (9,0.6) node $p_{n-1}$ ; (2.7,2.2) node $q$ ; (8.5,3) node ; [scale=0.5] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{x+1}$ with $p_{i+1}$ ; [-,line width=2pt,cyan] (0,1)–(2,1); [-,line width=2pt,cyan] (3,1)–(5,1); [-,line width=2pt,cyan] (2,1)–(2,2); [-] (2,1)–(3,1); [-] (6,2)–(7,1); [-,line width=2pt,cyan] (5,1)–(6,2); [-,line width=2pt,cyan] (7,1)–(9,1); [-,line width=2pt,cyan] 
(7,1)–(8,1); [-,line width=2pt,cyan] (2,2)–(6,2); [-,line width=2pt,cyan] (3,1)..controls(2.8,-0.2)and(7,0)..(7,1); [red] (2,2) circle (2pt); [black] (0,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,2) circle (2pt) [black] (7,1) circle (2pt) [black] (9,1) circle (2pt); (0,0.6) node $p_1$ ; (2,0.6) node $p_{x}$ ; (3.2,1.4) node $p_{x+1}$ ; (5.6,0.6) node $p_{i-1}$ ; (6.3,2.3) node $p_i$ ; (7.6,0.6) node $p_{i+1}$ ; (9.3,1.4) node $p_{n-1}$ ; (1.7,2.2) node $q$ ; (8.5,3) node ; and the Hamiltonian paths in the following two diagrams show that $(p_{i-1},p_{x-1})$ , $(p_{i+1},p_{x+1})$ are impossible if $x>i$ , which concludes item (2).", "[scale=0.5] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{i-1}$ with $p_{x-1}$ ; [-,line width=2pt,cyan] (0,1)–(1,1); [-,line width=2pt,cyan] (1,1)–(2,1); [-] (2,1)–(3,2); [-,line width=2pt,cyan] (3,2)–(4,1); [-,line width=2pt,cyan] (4,1)–(5,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-] (6,1)–(7,1); [-,line width=2pt,cyan] (7,1)–(9,1); [-,line width=2pt,cyan] (7,2)–(7,1); [-,line width=2pt,cyan] (3,2)–(7,2); [-,line width=2pt,cyan] (2,1)..controls(2,-0.2)and(6.3,-0.3)..(6,1); [red] (7,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (9,1) circle (2pt); (0,0.6) node $p_1$ ; (2.9,1) node $p_{i-1}$ ; (2.6,2.2) node $p_i$ ; (4.4,0.6) node $p_{i+1}$ ; (6.1,1.4) node $p_{x-1}$ ; (7.2,0.6) node $p_x$ ; (9,0.6) node $p_{n-1}$ ; (7.2,2.2) node $q$ ; (8.5,3) node ; [scale=0.5] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{i+1}$ with $p_{x+1}$ ; [-,line width=2pt,cyan] (0,1)–(1,1); [-,line width=2pt,cyan] (1,1)–(2,1); [-] (4,1)–(3,2); [-,line width=2pt,cyan] (3,2)–(2,1); [-,line width=2pt,cyan] (4,1)–(5,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-] (6,1)–(7,1); [-,line width=2pt,cyan] (7,1)–(9,1); [-,line width=2pt,cyan] (6,2)–(6,1); [-,line width=2pt,cyan] (3,2)–(6,2); [-,line width=2pt,cyan] (4,1)..controls(4,-0.2)and(7.3,-0.3)..(7,1); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (9,1) circle (2pt); (0,0.6) node $p_1$ ; (2.3,0.6) node $p_{i-1}$ ; (2.6,2.2) node $p_i$ ; (4.6,1.4) node $p_{i+1}$ ; (6.1,0.6) node $p_{x}$ ; (7.2,1.4) node $p_{x+1}$ ; (9,0.6) node $p_{n-1}$ ; (6.2,2.2) node $q$ ; (8.5,3) node ; (3) The Hamiltonian paths in the following two diagrams show that for $x\\in \\lbrace j,k\\rbrace $ with $x>i$ , the pair $(p_1,p_{x-1})$ is forbidden, and for $x\\in \\lbrace j,k\\rbrace $ with $x<i$ , the pair $(p_{x+1},p_{n-1})$ is forbidden.", "[scale=0.5] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{1}$ with $p_{x-1}$ ; [-,line width=2pt,cyan] (0,1)–(1,1); [-,line width=2pt,cyan] (1,1)–(2,1); [-] (2,1)–(3,2); [-,line width=2pt,cyan] (3,2)–(4,1); [-,line width=2pt,cyan] (4,1)–(5,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-] (6,1)–(7,1); [-,line width=2pt,cyan] (7,1)–(9,1); [-,line width=2pt,cyan] (7,2)–(7,1); [-,line width=2pt,cyan] (3,2)–(7,2); [-,line width=2pt,cyan] (0,1)..controls(0,-0.2)and(6.3,-0.3)..(6,1); [red] (7,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (9,1) circle (2pt); 
(-0.3,0.7) node $p_1$ ; (2,0.6) node $p_{i-1}$ ; (2.6,2.2) node $p_i$ ; (4.4,0.6) node $p_{i+1}$ ; (6.1,1.4) node $p_{x-1}$ ; (7.2,0.6) node $p_x$ ; (9,0.6) node $p_{n-1}$ ; (7.2,2.2) node $q$ ; (8.5,3) node ; [scale=0.5] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{x+1}$ with $p_{n-1}$ ; [-,line width=2pt,cyan] (0,1)–(2,1); [-,line width=2pt,cyan] (3,1)–(5,1); [-,line width=2pt,cyan] (2,1)–(2,2); [-] (2,1)–(3,1); [-] (6,2)–(7,1); [-,line width=2pt,cyan] (5,1)–(6,2); [-,line width=2pt,cyan] (7,1)–(9,1); [-,line width=2pt,cyan] (7,1)–(8,1); [-,line width=2pt,cyan] (2,2)–(6,2); [-,line width=2pt,cyan] (3,1)..controls(2.8,-0.2)and(9.2,-0.2)..(9,1); [red] (2,2) circle (2pt); [black] (0,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,2) circle (2pt) [black] (7,1) circle (2pt) [black] (9,1) circle (2pt); (0,0.6) node $p_1$ ; (2,0.6) node $p_{x}$ ; (3.2,1.4) node $p_{x+1}$ ; (5.6,0.6) node $p_{i-1}$ ; (6.3,2.3) node $p_i$ ; (7.5,0.6) node $p_{i+1}$ ; (9.4,1.4) node $p_{n-1}$ ; (1.7,2.2) node $q$ ; (8.5,3) node ; (4) The Hamiltonian paths in the following two diagrams show that when $j<i$ , the pair $(p_1,p_{i-1})$ is forbidden, and if $k>i$ , the pair $(p_{i+1},p_{n-1})$ is forbidden, which concludes the proof.", "[scale=0.5] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{1}$ with $p_{i-1}$ ; [-,line width=2pt,cyan] (0,1)–(2,1); [-,line width=2pt,cyan] (3,1)–(5,1); [-,line width=2pt,cyan] (3,1)–(3,2); [-] (2,1)–(3,1); [-] (5,1)–(6,2); [-,line width=2pt,cyan] (6,2)–(7,1); [-,line width=2pt,cyan] (7,1)–(9,1); [-,line width=2pt,cyan] (7,1)–(8,1); [-,line width=2pt,cyan] (3,2)–(6,2); [-,line width=2pt,cyan] (0,1)..controls(0,-0.2)and(5,0)..(5,1); [red] (3,2) circle (2pt); [black] (0,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,2) circle (2pt) [black] (7,1) circle (2pt) [black] (9,1) circle (2pt); (-0.3,0.7) node $p_1$ ; (2,1.4) node $p_{j-1}$ ; (3.2,0.6) node $p_{j}$ ; (5.6,0.6) node $p_{i-1}$ ; (6.3,2.3) node $p_i$ ; (7.2,0.6) node $p_{i+1}$ ; (9,0.6) node $p_{n-1}$ ; (2.7,2.2) node $q$ ; (8.5,3) node ; [scale=0.5] (6,2.3) node ; (4,-0.5) node $Q$ can't connect $p_{i+1}$ with $p_{n-1}$ ; [-,line width=2pt,cyan] (0,1)–(1,1); [-,line width=2pt,cyan] (1,1)–(2,1); [-] (4,1)–(3,2); [-,line width=2pt,cyan] (3,2)–(2,1); [-,line width=2pt,cyan] (4,1)–(5,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-] (6,1)–(7,1); [-,line width=2pt,cyan] (7,1)–(9,1); [-,line width=2pt,cyan] (6,2)–(6,1); [-,line width=2pt,cyan] (3,2)–(6,2); [-,line width=2pt,cyan] (4,1)..controls(4,-0.2)and(9.3,-0.3)..(9,1); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (9,1) circle (2pt); (0,0.6) node $p_1$ ; (2.3,0.6) node $p_{i-1}$ ; (2.6,2.2) node $p_i$ ; (4.6,1.4) node $p_{i+1}$ ; (6.1,0.6) node $p_{k}$ ; (7.2,1.4) node $p_{k+1}$ ; (9.4,1.4) node $p_{n-1}$ ; (6.2,2.2) node $q$ ; (8.5,3) node ;" ], [ "The case $\\ell =6$ and {{formula:5087815f-6519-4fb5-9465-bd5729c3603d}}", "Remember that $\\lbrace p_i\\rbrace =V(P)\\setminus V(Q)$ and $\\lbrace q\\rbrace =V(Q)\\setminus V(P)$ .", "Since $i\\notin \\lbrace 1,n-1=7\\rbrace $ and by Proposition REF $i\\notin \\lbrace 2,n-2=6\\rbrace $ , we know that $i\\in \\lbrace 3,4,5\\rbrace $ .", "By Remark REF there exist $j,k$ such that $1<j<k<n-1$ , and $|i-j|,|k-i|,|j-k|>1$ .", "The case $i=3$ is 
impossible, since then $3=i<j-1$ and $j+1<k$ , and so $5=i+2<j+1<k$ , thus $7\\le k$ , which contradicts $k<n-1=7$ .", "By symmetry, the case $i=5$ is also impossible.", "Finally, we discard the case $i=4$ .", "If $i<j<k$ , then $4<j-1<k-3$ leads to $k\\ge 8$ , which is impossible, and if $j<k<i$ , then $j<k-1<i-3=1$ leads to the contradiction $j<0$ .", "Thus $j<i<k$ , and so $j=2$ , $i=4$ and $k=6$ is the only case left.", "So assume that $i=4$ , $j=2$ and $k=6$ .", "We know that $p_3$ cannot be an endpoint of $Q$ , since then we could continue $Q$ with the edge $p_3p_4$ .", "Thus two edges of the path $Q$ connect $p_3$ directly with two vertices in the set $\\lbrace p_1,p_2,p_5,p_6,p_7\\rbrace $ .", "By Proposition REF (1) an edge of $Q$ cannot connect $p_3=p_{i-1}$ with $p_{i+1}=p_5$ nor $p_3$ with $p_{n-1}=p_7$ , and by item (4) of the same proposition an edge of $Q$ cannot connect $p_3=p_{i-1}$ with $p_1$ , since $j=2<i$ .", "Thus two edges of the path $Q$ connect $p_3$ with $p_2$ and $p_3$ with $p_6$ , which implies that $Q$ contains the 4-cycle $p_2 p_3 p_6 q p_3$ , a contradiction that discards the case $i=4$ and finishes the proof of the case $\\ell =6$ and $n=8$ .", "[scale=0.7] (3,-0.2) node $Q$ contains a 4-cycle; (-6,2.3) node ; [-] (0,1)–(1,1); [-] (1,1)–(2,1); [-] (2,1)–(3,2); [-] (3,2)–(4,1); [-] (4,1)–(5,1); [-] (5,1)–(6,1); [-,red,line width=2pt] (1,1)–(3,3); [-,red,line width=2pt] (3,3)–(5,1); [-,red,line width=2pt] (1,1)..controls(1,0)and(2,0)..(2,1); [-,red,line width=2pt] (2,1)..controls(2,0)and(5,0)..(5,1); [-,dotted] (3,2)..controls(3,2.5)..(3,3); [red] (3,3) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,0.6) node $p_2$ ; (2.5,0.9) node $p_3$ ; (3.4,2.2) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5.3,0.6) node $p_6$ ; (6.3,0.6) node $p_7$ ; (3,3.4) node $q$ ; (10.5,1) node ;" ], [ "The case $\\ell =7$ and {{formula:500461f5-ee8a-4cac-afc6-48beae63bdfb}}", "Remember that $\\lbrace p_i\\rbrace =V(P)\\setminus V(Q)$ and $\\lbrace q\\rbrace =V(Q)\\setminus V(P)$ .", "Since $i\\notin \\lbrace 1,n-1=8\\rbrace $ and by Proposition REF $i\\notin \\lbrace 2,n-2=7\\rbrace $ , we know that $i\\in \\lbrace 3,4,5,6\\rbrace $ .", "By symmetry it suffices to discard the cases $i=3$ and $i=4$ .", "There are two vertices $p_j,p_k$ in $V(Q)\\cap V(P)$ that are connected with $q$ via edges of $Q$ , with $j<k$ .", "Assume that $i=3$ .", "By Remark REF we know that $1<j,k<n-1$ and $|i-j|,|k-i|,|j-k|>1$ .", "The case $j<i=3$ is impossible, since then $j<i-1=2$ leads to $j=1$ , which is impossible.", "Hence $i<j<k$ , and the only possibility is $i=3,j=5,k=7.$ We know that $p_4$ cannot be an endpoint of $Q$ , since then we could continue $Q$ with the edge $p_4p_3$ .", "Thus two edges of the path $Q$ must connect $p_4$ directly with two vertices in the set $\\lbrace p_1,p_2,p_5,p_6,p_7,p_8\\rbrace $ .", "By Proposition REF an edge of $Q$ cannot connect $p_4=p_{i+1}$ with $p_1$ , $p_2=p_{i-1}$ , $p_6=p_{j+1}$ nor $p_8=p_{n-1}$ .", "Thus two edges of the path $Q$ connect $p_4$ with $p_5$ and $p_4$ with $p_7$ , which implies that $Q$ contains the 4-cycle $p_4 p_5 q p_7 p_4$ , a contradiction that discards the case $i=3$ .", "[scale=0.8] (3.5,-0.2) node $Q$ contains a 4-cycle; (-6,2.3) node ; [-] (0,1)–(1,1); [-] (1,1)–(2,2); [-] (2,2)–(3,1); [-] (3,1)–(4,1); [-] (4,1)–(5,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); 
[-,red,line width=2pt] (4,1)–(5,2); [-,red,line width=2pt] (5,2)–(6,1); [-,red,line width=2pt] (3,1)..controls(3,0)and(6,0)..(6,1); [-,red,line width=2pt] (3,1)..controls(3,0.5)and(4,0.5)..(4,1); [-,dotted] (2,2)..controls(3.5,2.5)..(5,2); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,2) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (1.8,2.3) node $p_3$ ; (2.6,0.9) node $p_4$ ; (4.2,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6.3,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (5.2,2.2) node $q$ ; (8.5,3) node ; Assume that $i=4$ .", "By Remark REF we know that $1<j,k<n-1=8$ and $|i-j|,|k-i|,|j-k|>1$ .", "The case $i=4<j$ can be discarded, since then $6\\le j$ and $8\\le k$ , which is impossible.", "Hence $j<i<k$ , and the only two possibilities are $j=2,i=4,k=6\\quad \\text{or}\\quad j=2,i=4,k=7.$ $\\bullet \\bullet $ Assume first that $(i,j,k)=(4,2,6)$ .", "We know that $p_5$ cannot be an endpoint of $Q$ , since then we could continue $Q$ with the edge $p_5p_4$ .", "Thus two edges of the path $Q$ connect $p_5$ directly with two vertices in the set $(V(P)\\cap V(Q))\\setminus \\lbrace p_5\\rbrace =\\lbrace p_1,p_2,p_3,p_6,p_7,p_8\\rbrace .$ By Proposition REF an edge of $Q$ cannot connect $p_5=p_{i+1}$ with $p_1$ , $p_3=p_{i-1}$ , $p_7=p_{j+1}$ nor $p_8=p_{n-1}$ .", "Thus two edges of the path $Q$ connect $p_5$ with $p_2$ and $p_5$ with $p_6$ , which implies that $Q$ contains the 4-cycle $p_2 p_5 p_6 q p_2$ , a contradiction that discards the case $i=4$ , $j=2$ and $k=6$ .", "[scale=0.8] (3.5,-0.2) node $Q$ contains a 4-cycle; (-6,2.3) node ; [-] (0,1)–(1,1); [-] (1,1)–(2,1); [-] (2,1)–(3,2); [-] (3,2)–(4,1); [-] (4,1)–(5,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-,red,line width=2pt] (1,1)–(3,3); [-,red,line width=2pt] (3,3)–(5,1); [-,red,line width=2pt] (4,1)..controls(4,0)and(1,0)..(1,1); [-,red,line width=2pt] (4,1)..controls(4,0.5)and(5,0.5)..(5,1); [-,dotted] (3,2)–(3,3); [red] (3,3) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,0.6) node $p_2$ ; (2,0.6) node $p_3$ ; (2.6,2.1) node $p_4$ ; (3.6,1) node $p_{5}$ ; (5.3,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (3.2,3.2) node $q$ ; (8.5,3) node ; $\\bullet \\bullet $ Now assume that $(i,j,k)=(4,2,7)$ .", "We know that $p_3$ cannot be an endpoint of $Q$ , since then we could continue $Q$ with the edge $p_3p_4$ .", "Thus two edges of the path $Q$ connect $p_3$ directly with two vertices in the set $(V(P)\\cap V(Q))\\setminus \\lbrace p_3\\rbrace =\\lbrace p_1,p_2,p_5,p_6,p_7,p_8\\rbrace .$ By Proposition REF an edge of $Q$ cannot connect $p_3=p_{i-1}$ with $p_1$ , $p_5=p_{i+1}$ , $p_6=p_{k-1}$ nor $p_8=p_{n-1}$ .", "Thus two edges of the path $Q$ connect $p_3$ with $p_2$ and $p_3$ with $p_7$ , which implies that $Q$ contains the 4-cycle $p_2 p_3 p_7 q p_2$ , a contradiction that discards the case $i=4$ , $j=2$ and $k=7$ , finishing the case $i=4$ and thus we have proved that $\\ell =7$ and $n=9$ is impossible.", "[scale=0.8] (3.5,-0.2) node $Q$ contains a 4-cycle; (-6,2.3) node ; [-] (0,1)–(1,1); [-] (1,1)–(2,1); [-] (2,1)–(3,2); [-] (3,2)–(4,1); [-] (4,1)–(5,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-,red,line width=2pt] (1,1)–(3,3); 
The case $\ell=8$ and $n=10$

Remember that $\{p_i\}=V(P)\setminus V(Q)$ and $\{q\}=V(Q)\setminus V(P)$. Since $i\notin\{1,n-1=9\}$ and by Proposition REF $i\notin\{2,n-2=8\}$, we know that $i\in\{3,4,5,6,7\}$. By symmetry it suffices to discard the cases $i=3$, $i=4$ and $i=5$. There are two vertices $p_j,p_k$ in $V(Q)\cap V(P)$ that are connected with $q$ via edges of $Q$, and we can and will assume that $j<k$.

Assume that $i=3$. By Remark REF we know that $1<j,k<n-1=9$ and $|i-j|,|k-i|,|j-k|>1$. The case $j<i=3$ is impossible, since then $j<i-1=2$ leads to $j=1$, which we already discarded. Hence $i<j<k$, and the only three possibilities are $(i,j,k)=(3,5,7)$, $(i,j,k)=(3,5,8)$ and $(i,j,k)=(3,6,8)$.

$\bullet\bullet$ Assume that $(i,j,k)=(3,5,7)$. We know that $p_4$ cannot be an endpoint of $Q$, since then we could continue $Q$ with the edge $p_4p_3$. Thus two edges of the path $Q$ connect $p_4$ directly with two vertices in the set $(V(P)\cap V(Q))\setminus\{p_4\}=\{p_1,p_2,p_5,p_6,p_7,p_8,p_9\}$. By Proposition REF an edge of $Q$ cannot connect $p_4=p_{i+1}$ with $p_1$, $p_2=p_{i-1}$, $p_6=p_{j+1}$, $p_8=p_{k+1}$ nor $p_9=p_{n-1}$. Thus two edges of the path $Q$ connect $p_4$ with $p_5$ and $p_4$ with $p_7$, which implies that $Q$ contains the 4-cycle $p_4 p_5 q p_7 p_4$, a contradiction that discards the case $i=3$, $j=5$ and $k=7$.

[Figure: $Q$ contains a 4-cycle.]

$\bullet\bullet$ Assume that $(i,j,k)=(3,5,8)$. We know that $p_4$ cannot be an endpoint of $Q$, since then we could continue $Q$ with the edge $p_4p_3$. Thus two edges of the path $Q$ connect $p_4$ directly with two vertices in the set $(V(P)\cap V(Q))\setminus\{p_4\}=\{p_1,p_2,p_5,p_6,p_7,p_8,p_9\}$.
By Proposition REF an edge of $Q$ cannot connect $p_4=p_{i+1}$ with $p_1$, $p_2=p_{i-1}$, $p_6=p_{j+1}$ nor $p_9=p_{n-1}$. The first diagram shows that if an edge of $Q$ connects $p_4$ with $p_7$, then there is a (Hamiltonian) path of length 9, which contradicts that $P$ and $Q$, of length $n-2=8$, are longest paths.

[Figure: $Q$ cannot connect $p_4$ with $p_7$.]
[Figure: $Q$ contains a 4-cycle.]

Thus two edges of the path $Q$ connect $p_4$ with $p_5$ and $p_4$ with $p_8$, which implies that $Q$ contains the 4-cycle $p_4 p_5 q p_8 p_4$, a contradiction that discards the case $i=3$, $j=5$ and $k=8$.

$\bullet\bullet$ Assume that $(i,j,k)=(3,6,8)$. In this case it doesn't suffice to analyze the connections of a single vertex as in all the previous cases, and we have to consider the connections of $p_7$ and $p_9$. Proposition REF and the Hamiltonian path in the following diagram show that an edge of $Q$ cannot connect $p_7$ with one of $\{p_1,p_2,p_4,p_5,p_9\}$, and that an edge of $Q$ cannot connect $p_9$ with one of $\{p_1,p_2,p_4,p_5,p_7\}$.

[Figure: $Q$ can't connect $p_5$ with $p_7$.]
Thus $p_7$ can be connected only with $p_6$ and $p_8$ by edges of $Q$, and similarly $p_9$ can be connected only with $p_6$ and $p_8$ by edges of $Q$. Hence, if $p_7$ is not an endpoint of $Q$, then $Q$ contains the 4-cycle $p_6 p_7 p_8 q p_6$, and similarly, if $p_9$ is not an endpoint of $Q$, then $Q$ contains the 4-cycle $p_6 p_9 p_8 q p_6$.

[Figure: A 4-cycle, if $p_7$ is not an endpoint of $Q$.]
[Figure: A 4-cycle, if $p_9$ is not an endpoint of $Q$.]

So $p_7$ and $p_9$ are the endpoints of $Q$, and each can be connected by $Q$ only with $p_6$ or $p_8$. Then either $p_7$ connects with $p_6$ and $p_9$ with $p_8$, which leads to $Q=p_7p_6qp_8p_9$, and $Q$ has length 4, or $p_7$ connects with $p_8$ and $p_9$ with $p_6$, which leads to $Q=p_7p_8qp_6p_9$, and $Q$ has length 4, too.
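The contradiction can be made explicit by a length count (a restatement of the argument above, nothing new): in either configuration $Q$ visits exactly the five vertices $p_6,p_7,p_8,p_9,q$, so
$L(Q)=5-1=4<8=n-2,$
contradicting that $Q$ has length $n-2$.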
[Figures: $p_7$ and $p_9$ are endpoints of $Q$, $L(Q)=4$ (both configurations).]

This contradiction discards the case $i=3$, $j=6$, $k=8$, and finishes the case $i=3$.

Assume that $i=4$. By Remark REF and assumption we know that $1<j<k<n-1=9$ and $|i-j|,|k-i|,|j-k|>1$. The case $j<i=4$ yields $j=2$, since then $j<i-1=3$. Here we have the three cases $k=6$, $k=7$ and $k=8$.

[Figure: $(i,j,k)=(4,2,6)$.]
[Figure: $(i,j,k)=(4,2,7)$.]
[Figure: $(i,j,k)=(4,2,8)$.]
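For reference, these three subcases, together with the case $j>i$ treated next, are exactly the solutions of the constraints of Remark REF for $i=4$ and $n=10$ (again just a restatement): if $j<i$, then $j<i-1=3$ forces $j=2$ and $i+1=5<k<n-1=9$ gives $k\in\{6,7,8\}$; if $j>i$, then $j\ge i+2=6$ and $k\ge j+2\ge 8$ together with $k\le 8$ force $(j,k)=(6,8)$. Hence
$(i,j,k)\in\{(4,2,6),(4,2,7),(4,2,8),(4,6,8)\}.$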
If $j>i=4$, then the only possibility is $(i,j,k)=(4,6,8)$.

[Figure: $(i,j,k)=(4,6,8)$.]

$\bullet\bullet$ Assume that $(i,j,k)=(4,2,6)$. In this case it doesn't suffice to analyze the connections of a single vertex, and we have to consider the connections of $p_3$ and $p_5$. Proposition REF shows that an edge of $Q$ cannot connect $p_3$ with one of $\{p_1,p_5,p_7,p_9\}$, nor can it connect $p_5$ with one of $\{p_1,p_3,p_7,p_9\}$. Moreover, neither $p_3$ nor $p_5$ can be an endpoint of $Q$, since then we could extend $Q$ by connecting $p_4$. Hence each of $p_3$ and $p_5$ connects with two of $\{p_2,p_6,p_8\}$, and since only one of them can connect with $p_2$, and only one of them can connect with $p_6$, there are two possibilities: either $p_3$ connects with $p_2$ and $p_8$ while $p_5$ connects with $p_6$ and $p_8$, or $p_3$ connects with $p_6$ and $p_8$ while $p_5$ connects with $p_2$ and $p_8$. In both cases we obtain a 6-cycle contained in $Q$, which is impossible and discards the case $(i,j,k)=(4,2,6)$.
[Figure: $p_2\sim p_3\sim p_8$ and $p_6\sim p_5\sim p_8$.]
[Figure: $p_6\sim p_3\sim p_8$ and $p_2\sim p_5\sim p_8$.]

$\bullet\bullet$ Assume that $(i,j,k)=(4,2,7)$. We know that $p_3$ cannot be an endpoint of $Q$, since then we could continue $Q$ with the edge $p_3p_4$. Thus two edges of the path $Q$ connect $p_3$ directly with two vertices in the set $(V(P)\cap V(Q))\setminus\{p_3\}=\{p_1,p_2,p_5,p_6,p_7,p_8,p_9\}$. By Proposition REF an edge of $Q$ cannot connect $p_3=p_{i-1}$ with $p_1$, $p_5=p_{i+1}$, $p_6=p_{k-1}$, $p_8=p_{k+1}$ nor $p_9=p_{n-1}$. Thus two edges of the path $Q$ connect $p_3$ with $p_2$ and $p_3$ with $p_7$, which implies that $Q$ contains the 4-cycle $p_2 p_3 p_7 q p_2$, a contradiction that discards the case $i=4$, $j=2$ and $k=7$.

[Figure: $Q$ contains a 4-cycle.]

$\bullet\bullet$ Assume that $(i,j,k)=(4,2,8)$. In this case it doesn't suffice to analyze the connections of a single vertex or of two vertices, and we have to consider the connections of $p_1$, $p_3$ and $p_9$. By Proposition REF there cannot be an edge of $Q$ connecting $p_1$ with one of $\{p_3,p_5,p_7,p_9\}$, nor an edge of $Q$ connecting $p_3$ with one of $\{p_1,p_5,p_7,p_9\}$, nor an edge of $Q$ connecting $p_9$ with one of $\{p_1,p_3,p_5,p_7\}$. We know that $p_3$ cannot be an endpoint of $Q$, since then we could continue $Q$ with the edge $p_3p_4$.
$p_3p_4$ .", "Thus $p_3$ must connect with two of $\\lbrace p_2,p_6,p_8\\rbrace $ .", "But it cannot connect with $p_2$ and $p_8$ , since then we would have a 4-cycle contained in $Q$ .", "So it connects with $p_6$ and one of $p_2$ or $p_8$ .", "If $p_3$ connects with $p_2$ , then one of $\\lbrace p_1,p_9\\rbrace $ connects with $p_6$ and the other with $p_8$ .", "Hence, both $p_1$ and $p_9$ must be endpoints, and in each case, $p_1\\sim p_6$ , $p_9\\sim p_8$ or $p_1\\sim p_8$ , $p_9\\sim p_6$ , we obtain that $Q$ has length 6, which is a contradiction.", "[scale=0.78] (4,-0.5) node $p_1$ , $p_9$ endpoints, $p_2\\sim p_3\\Rightarrow Q$ has length 6; (6,2.3) node ; [-] (0,1)–(1,1); [-] (1,1)–(2,1); [-] (2,1)–(3,2); [-] (3,2)–(4,1); [-] (4,1)–(5,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-] (7,1)–(8,1); [-,red,line width=2pt] (1,1)–(3,3); [-,red,line width=2pt] (3,3)–(7,1); [-,red,line width=2pt] (7,1)..controls(7,0.5)and(8,0.5)..(8,1); [-,red,line width=2pt] (1,1)..controls(1,0.5)and(2,0.5)..(2,1); [-,red,line width=2pt] (0,1)..controls(0,-0.2)and(5.3,-0.5)..(5,1); [-,red,line width=2pt] (2,1)..controls(2,0)and(5,0)..(5,1); [-,line width=1.5pt,dotted] (3,2)–(3,3); [red] (3,3) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (8,1) circle (2pt); (-0.2,0.6) node $p_1$ ; (0.8,0.6) node $p_2$ ; (2.4,0.9) node $p_3$ ; (2.6,2.1) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5.3,0.6) node $p_6$ ; (6.1,0.6) node $p_7$ ; (6.9,0.6) node $p_8$ ; (8.2,0.6) node $p_9$ ; (3.2,3.2) node $q$ ; (8.5,3) node ; [scale=0.78] (4,-0.5) node $p_1$ , $p_9$ endpoints, $p_2\\sim p_3\\Rightarrow Q$ has length 6; (6,2.3) node ; [-] (0,1)–(1,1); [-] (1,1)–(2,1); [-] (2,1)–(3,2); [-] (3,2)–(4,1); [-] (4,1)–(5,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-] (7,1)–(8,1); [-,red,line width=2pt] (1,1)–(3,3); [-,red,line width=2pt] (3,3)–(7,1); [-,red,line width=2pt] (1,1)..controls(1,0.5)and(2,0.5)..(2,1); [-,red,line width=2pt] (0,1)..controls(0,-0.4)and(7,-0.4)..(7,1); [-,red,line width=2pt] (2,1)..controls(2,0.1)and(5,0.1)..(5,1); [-,white,line width=4pt] (5,1)..controls(5,0)and(8,0)..(8,1); [-,red,line width=2pt] (5,1)..controls(5,0)and(8,0)..(8,1); [-,line width=1.5pt,dotted] (3,2)–(3,3); [red] (3,3) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt) [black] (8,1) circle (2pt); (-0.2,0.6) node $p_1$ ; (0.8,0.6) node $p_2$ ; (2.4,0.9) node $p_3$ ; (2.6,2.1) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,1.3) node $p_6$ ; (6.1,0.6) node $p_7$ ; (7.1,1.3) node $p_8$ ; (8.2,0.6) node $p_9$ ; (3.2,3.2) node $q$ ; (8.5,3) node ; If $p_3$ connects with $p_8$ , then one of $\\lbrace p_1,p_9\\rbrace $ connects with $p_2$ and the other with $p_6$ .", "Hence, both $p_1$ and $p_9$ must be endpoints, and in each case, $p_1\\sim p_6$ , $p_9\\sim p_2$ or $p_1\\sim p_2$ , $p_9\\sim p_6$ , we obtain that $Q$ has length 6, which is a contradiction.", "[scale=0.78] (4,-0.5) node $p_1$ , $p_9$ endpoints, $p_3\\sim p_8\\Rightarrow Q$ has length 6; (6,2.3) node ; [-] (0,1)–(1,1); [-] (1,1)–(2,1); [-] (2,1)–(3,2); [-] (3,2)–(4,1); [-] (4,1)–(5,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-] (7,1)–(8,1); [-,red,line width=2pt] (1,1)–(3,3); [-,red,line width=2pt] (3,3)–(7,1); [-,red,line width=2pt] 
[Figures: $p_1$, $p_9$ endpoints, $p_3\sim p_8\Rightarrow Q$ has length 6 (both configurations).]

This discards the case $(i,j,k)=(4,2,8)$.

$\bullet\bullet$ Assume that $(i,j,k)=(4,6,8)$. In this case it doesn't suffice to analyze the connections of a single vertex or of two vertices, and we have to consider the connections of $p_5$, $p_7$ and $p_9$. By Proposition REF there cannot be an edge of $Q$ connecting $p_5$ with one of $\{p_1,p_3,p_7,p_9\}$, nor an edge of $Q$ connecting $p_7$ with one of $\{p_1,p_3,p_5,p_9\}$, nor an edge of $Q$ connecting $p_9$ with one of $\{p_1,p_3,p_5,p_7\}$. We know that $p_5$ cannot be an endpoint of $Q$, since then we could continue $Q$ with the edge $p_5p_4$. Thus $p_5$ must be connected by an edge of $Q$ with two of $\{p_2,p_6,p_8\}$. But it cannot be connected with $p_6$ and $p_8$ at the same time, since then we would have a 4-cycle contained in $Q$. So it is connected with $p_2$ and one of $p_6$ or $p_8$. If $p_5$ is connected with $p_6$, then one of $\{p_7,p_9\}$ is connected with $p_2$ and the other with $p_8$. Hence, both $p_7$ and $p_9$ must be endpoints, and in each case, $p_7\sim p_2$, $p_9\sim p_8$ or $p_7\sim p_8$, $p_9\sim p_2$, we obtain that $Q$ has length 6, which is a contradiction.
[Figures: $p_7$, $p_9$ endpoints, $p_5\sim p_6\Rightarrow Q$ has length 6 (both configurations).]

If $p_5$ is connected with $p_8$, then one of $\{p_7,p_9\}$ is connected with $p_2$ and the other with $p_6$. Hence, both $p_7$ and $p_9$ must be endpoints, and in each case, $p_7\sim p_2$, $p_9\sim p_6$ or $p_7\sim p_6$, $p_9\sim p_2$, we obtain that $Q$ has length 6, which is a contradiction.
[Figures: $p_7$, $p_9$ endpoints, $p_5\sim p_8\Rightarrow Q$ has length 6 (both configurations).]

This discards the case $(i,j,k)=(4,6,8)$ and finishes the case $i=4$.

Assume that $i=5$. By Remark REF and assumption we know that $1<j<k<n-1=9$ and $|i-j|,|k-i|,|j-k|>1$. The cases $i<j$ and $k<i$ are impossible, hence $j<i<k$, and we have four cases $(i,j,k)\in\{(5,2,7),(5,2,8),(5,3,7),(5,3,8)\}$. But the cases $(i,j,k)=(5,2,7)$ and $(i,j,k)=(5,3,8)$ are symmetric, so we have to discard only the first three cases.

[Figure: $(i,j,k)=(5,2,7)$.]
[Figure: $(i,j,k)=(5,2,8)$.]
[Figure: $(i,j,k)=(5,3,7)$.]
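The symmetry here can be spelled out (a sketch, assuming the usual relabeling given by reversing $P$, which is how symmetry is used throughout this proof): reversal replaces each index $m$ by $n-m=10-m$, so it fixes $i=5$ and maps
$(i,j,k)=(5,2,7)\longmapsto(5,10-7,10-2)=(5,3,8).$
Hence discarding $(5,2,7)$ also discards $(5,3,8)$, and only the first three triples need to be treated.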
$\bullet\bullet$ Assume that $(i,j,k)=(5,2,7)$. We know that $p_6$ cannot be an endpoint of $Q$, since then we could continue $Q$ with the edge $p_6p_5$. Thus two edges of the path $Q$ connect $p_6$ directly with two vertices in the set $(V(P)\cap V(Q))\setminus\{p_6\}=\{p_1,p_2,p_3,p_4,p_7,p_8,p_9\}$. By Proposition REF an edge of $Q$ cannot connect $p_6=p_{i+1}$ with $p_1$, $p_3=p_{j+1}$, $p_4=p_{i-1}$, $p_8=p_{k+1}$ nor $p_9=p_{n-1}$. Thus two edges of the path $Q$ connect $p_6$ with $p_2$ and $p_6$ with $p_7$, which implies that $Q$ contains the 4-cycle $p_2 p_6 p_7 q p_2$, a contradiction that discards the case $i=5$, $j=2$ and $k=7$.

[Figure: $Q$ contains a 4-cycle.]

$\bullet\bullet$ Assume that $(i,j,k)=(5,3,7)$. We know that $p_6$ cannot be an endpoint of $Q$, since then we could continue $Q$ with the edge $p_6p_5$. Thus two edges of the path $Q$ connect $p_6$ directly with two vertices in the set $(V(P)\cap V(Q))\setminus\{p_6\}=\{p_1,p_2,p_3,p_4,p_7,p_8,p_9\}$. Proposition REF shows that an edge of $Q$ cannot connect $p_6=p_{i+1}$ with $p_1$, $p_4=p_{i-1}$, $p_8=p_{k+1}$ nor $p_9=p_{n-1}$; and the Hamiltonian path in the first diagram shows that an edge of $Q$ cannot connect $p_6$ with $p_2$.

[Figure: $Q$ can't connect $p_2$ with $p_6$.]
[Figure: $Q$ contains a 4-cycle.]

Thus two edges of the path $Q$ connect $p_6$ with $p_3$ and $p_6$ with $p_7$, which implies that $Q$ contains the 4-cycle $p_3 p_6 p_7 q p_3$, a contradiction that discards the case $i=5$, $j=3$ and $k=7$.

$\bullet\bullet$ Assume that $(i,j,k)=(5,2,8)$. In this case we consider the connections of $p_1$ and $p_9$. By Proposition REF an edge of $Q$ cannot connect $p_1$ with one of $\{p_3,p_4,p_6,p_7,p_9\}$, nor can it connect $p_9$ with one of $\{p_1,p_3,p_4,p_6,p_7\}$. Thus $p_1$ can be connected only to $p_2$ or $p_8$, and similarly $p_9$ can be connected only to $p_2$ or $p_8$. Hence, one of $\{p_1,p_9\}$ is connected with $p_2$ and the other with $p_8$, and both $p_1$ and $p_9$ must be endpoints. In each case, $p_1\sim p_2$, $p_9\sim p_8$ or $p_1\sim p_8$, $p_9\sim p_2$, we obtain that $Q$ has length 4, which is a contradiction.

[Figures: $p_1$, $p_9$ endpoints $\Rightarrow Q$ has length 4 (both configurations).]
This contradiction discards the case $(i,j,k)=(5,2,8)$, finishing the case $i=5$, and thus we have proved that $\ell=8$ and $n=10$ is impossible.

The case $\ell=6$ and $n=10$

In this section we will discard the case $\ell=6$ and $n=10$. We will assume that there exists a minimal graph $G$ with two longest paths $P$ and $Q$, such that $|V(P)\cap V(Q)|=6$, $n(G)=10$ and so $|V(P)|=|V(Q)|=8$, and we will show that this is impossible. For any set of vertices $A$ in $G$, we denote by $N(A)$ the set of neighbors of vertices in $A$, that is, $N(A)=\{u\in V(G):\text{there exists a neighbor of }u\text{ in }A\}$. Let
$P'=V(P)\setminus V(Q),\qquad Q'=V(Q)\setminus V(P),\qquad S=V(P)\cap V(Q),$
$P''=S\cap N(P'),\qquad Q''=S\cap N(Q').$
Then $|S|=6$, and $|P'|=|Q'|=2$.

Proposition 4.1 We may assume that both $P$ and $Q$ have no endpoints in $P'$ and $Q'$, respectively.

Suppose for a moment that $P$ has one extreme in $P'$, and let $p$ be that extreme. As $P'\cup Q'$ is connected, $p$ has at least one neighbor in $P'\cup Q'$. If such a neighbor is in $Q'$, then we can add this vertex to $P$, obtaining a path longer than $P$, a contradiction. Hence, $p$ has a neighbor, say $p_1$, in $P'$. Thus the graph induced by $P'$ contains an edge $pp_1$. Suppose for a moment that $pp_1\notin E(P)$. Let $r$ be the neighbor of $p_1$ in $P$ closest to $p$ in $P$. Then $P+pp_1-p_1r$ is a path with the same vertex set as $P$ with its two extremes in $S$, and we are in another case.

[Figure: In $P+pp_1-p_1r$ the endpoints are not in $P'$.]

Hence, we may assume that $pp_1\in E(P)$. Since $P'\cup Q'$ is connected, $p_1$ has a neighbor $q_1$ in $Q'$. Let $q$ be the other vertex of $Q'$. If $q$ is adjacent to $p$ or $q_1$ then we can augment the path $P$. Hence, $qp_1$ is an edge. Thus, the graph induced by $P'\cup Q'$ is formed by the edges $pp_1$, $p_1q_1$ and $p_1q$ (see figure).
[Figure: the graph induced by $P'\cup Q'$, with edges $pp_1$, $p_1q_1$ and $p_1q$; $\widehat{P}$ in blue.]

Consider the path $\hat{P}=P-pp_1+p_1q_1$ and the graph $\hat{G}=G-p$. It is clear that both $\hat{P}$ and $Q$ are longest paths in $\hat{G}$. Note also that $\hat{P}\setminus Q=\{p_1\}$ and $Q\setminus\hat{P}=\{q\}$, and that $p_1q$ is an edge in $\hat{G}$. As $n(\hat{G})<n(G)$ and $|V(\hat{P})\cap V(Q)|>|V(P)\cap V(Q)|$, this is a contradiction to the minimality of $G$. The proof is similar for $Q$ instead of $P$.

Theorem 4.2 The case $\ell=6$ and $n=10$ is impossible.

As $P'\cup Q'$ is connected, we have, without loss of generality, three cases.

Case 1: $P'=ab$ is an edge of $P$, $Q'=cd$ is an edge and $ac$ is an edge. In that situation, $P\cap S$ consists of two paths $P_1$ and $P_2$. Let $p_1$ and $p_2$ be the extremes of $P_1$ and $P_2$ that are not in $P''$. Let $c'$ and $d'$ be the vertices adjacent to $c$ and $d$ that are not in $Q'$. Then $d_P(c',p_1),d_P(c',p_2),d_P(d',p_1),d_P(d',p_2)\ge 2$. Suppose for a moment that both $c'$ and $d'$ are in the same subpath of $P$, say $P_1$. Without loss of generality suppose that $d'$ is closer to $p_1$ than $c'$. Note that $d_P(c',d')\ge 3$ (otherwise we can augment $P$). Then we obtain the contradiction $|P_1|\ge d_P(c',p_1)=d_P(c',d')+d_P(d',p_1)\ge 5$.

[Figure: $d_P(p_1,d')\ge 2$ and $d_P(d',c')\ge 3$ along $P_1$.]

Hence, without loss of generality, $c'$ is in $P_1$ and $d'$ is in $P_2$, which implies that $|P_1|,|P_2|\ge 2$ and $|P_1|=|P_2|=2$.
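The last equality is a simple vertex count (only arithmetic, using $|S|=6$): the subpaths $P_1$ and $P_2$ together contain all six vertices of $S$, so, measuring $|P_1|$ and $|P_2|$ in edges,
$(|P_1|+1)+(|P_2|+1)=6,\qquad\text{hence}\qquad |P_1|+|P_2|=4,$
which together with $|P_1|,|P_2|\ge 2$ forces $|P_1|=|P_2|=2$.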
.", "Without loss of generality, assume that $P_1=u_0u_1u_2$ , $P_2=v_0v_1v_2$ , $u_0$ is adjacent to $a$ , and $v_0$ is adjacent to $b$ (see figure).", "[scale=0.8] (3.5,3.5) node ; [-] (0,1)–(2,1); [-] (2,1)–(3,2); [-] (3,2)–(4,2); [-] (4,2)–(5,1); [-] (5,1)–(7,1); [-,red] (2,1)–(2,3); [-,red] (5,3)–(5,1); [-,line width=1.5pt,dotted] (2,3)–(5,3); [-,line width=1.5pt,dotted] (3,2)..controls(4,2.8)..(5,3); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); [red] (2,3) circle (2pt) [red] (5,3) circle (2pt); (0,0.6) node $u_2$ ; (1,0.6) node $u_1$ ; (2,0.6) node $u_0$ ; (5,0.6) node $v_0$ ; (6,0.6) node $v_1$ ; (7,0.6) node $v_2$ ; (3,2.3) node $a$ ; (4,2.3) node $b$ ; (2,3.3) node $d$ ; (5,3.3) node $c$ ; (8.5,3) node ; [scale=0.8] (3.5,3.5) node ; [-,line width=2pt,cyan] (0,1)–(2,1); [-] (2,1)–(3,2); [-,line width=2pt,cyan] (3,2)–(4,2); [-,line width=2pt,cyan] (4,2)–(5,1); [-,line width=2pt,cyan] (5,1)–(7,1); [-,line width=2pt,cyan] (2,1)–(2,3); [-,red] (5,3)–(5,1); [-,line width=2pt,cyan] (2,3)–(5,3); [-,line width=2pt,cyan] (3,2)..controls(4,2.8)..(5,3); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); [red] (2,3) circle (2pt) [red] (5,3) circle (2pt); (0,0.6) node $u_2$ ; (1,0.6) node $u_1$ ; (2,0.6) node $u_0$ ; (5,0.6) node $v_0$ ; (6,0.6) node $v_1$ ; (7,0.6) node $v_2$ ; (3,2.3) node $a$ ; (4,2.3) node $b$ ; (2,3.3) node $d$ ; (5,3.3) node $c$ ; (3.5,-0.1) node $P - au_0 + ac + cd + du_0$ is longer than $P$ ; Note that $cu_0$ is not an edge, otherwise $P-au_0+ac+cu_0$ is a path longer than $P$ .", "As $d_P(c^{\\prime },u_2),d_P(c^{\\prime },v_2),d_P(d^{\\prime },u_2),d_P(d^{\\prime },v_2)\\ge 2$ , we must have that $du_0$ and $cv_0$ are edges.", "But then $P-au_0+ac+cd+du_0$ is a path longer than $P$ , a contradiction.", "Case 2.", "$P^{\\prime }=p_ip_{i+1}$ is an edge of $P$ , $Q^{\\prime }=\\lbrace c,d\\rbrace $ and $cd$ is not an edge in $G$ .", "Let $P=p_1p_2\\dots p_8$ .", "Then $i\\in \\lbrace 2,3,4,5,6\\rbrace $ , and by symmetry it suffices to discard the cases $i=2$ , $i=3$ and $i=4$ .", "Let $Q^{\\prime \\prime }=\\lbrace c_1,c_2,d_1,d_2\\rbrace $ be such that $cc_1,cc_2,dd_1,dd_2$ are edges of $Q$ .", "$\\bullet $ If $i=2$ , then $Q^{\\prime \\prime }\\subset \\lbrace p_4,p_5,p_6,p_7\\rbrace $ .", "Moreover, $d_P(c_1,c_2)\\ge 2$ , $d_P(d_1,d_2)\\ge 2$ and $\\lbrace c_1,c_2\\rbrace \\ne \\lbrace d_1,d_2\\rbrace $ , and so there are only three possibilities: $\\bullet \\bullet $ $\\lbrace \\lbrace c_1,c_2\\rbrace ,\\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_4,p_6\\rbrace ,\\lbrace p_4,p_7\\rbrace \\rbrace ,{\\hbox{\\begin{tikzpicture}[scale=0.5](6,2.3) node {};[-] (0,1)--(1,2);[-] (1,2)--(2,2);[-] (2,2)--(3,1);[-] (3,1)--(7,1);[-,red] (3,1)--(4,2);[-,red] (5,1)--(4,2);[-,white,line width=2pt] (3,1)--(5,2);[-,red] (3,1)--(5,2);[-,red] (6,1)--(5,2);[red] (4,2) circle (2pt);[red] (5,2) circle (2pt);[black] (0,1) circle (2pt)[black] (1,2) circle (2pt)[black] (2,2) circle (2pt)[black] (3,1) circle (2pt)[black] (4,1) circle (2pt)[black] (5,1) circle (2pt)[black] (6,1) circle (2pt)[black] (7,1) circle (2pt);(0,0.6) node {p_1};(0.7,2.3) node {p_2};(2.3,2.3) node {p_3};(3,0.6) node {p_4};(4,0.6) node {p_{5}};(5,0.6) node {p_6};(6,0.6) node 
$\bullet\bullet$ $\{\{c_1,c_2\},\{d_1,d_2\}\}=\{\{p_4,p_6\},\{p_5,p_7\}\}$,

$\bullet\bullet$ $\{\{c_1,c_2\},\{d_1,d_2\}\}=\{\{p_4,p_7\},\{p_5,p_7\}\}$.

In the first case $c$ must be connected by an edge with $p_2$, since by assumption $P'\cup Q'$ is connected, $cd$ is not an edge, and if $c$ is connected with $p_3$, then we can replace $p_3p_4$ with $p_3cp_4$, obtaining a longer path. The following diagram shows that with the edge $cp_2$ we obtain a path $\widehat{P}$ that is longer than $P$, discarding this case.

[Figure: $\widehat{P}$ in blue, with $L(\widehat{P})=8>L(P)=7$.]

The two other cases can be discarded by the paths $\widehat{P}$ in the following diagrams, which are longer than $P$; this concludes the case $i=2$.

[Figure: $\widehat{P}$ in blue, with $L(\widehat{P})=9>L(P)=7$.]
[Figure: $\widehat{P}$ in blue, with $L(\widehat{P})=8>L(P)=7$.]
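As in the earlier enumerations, the three possibilities listed above can be checked directly (a restatement of the stated constraints, nothing new): within $\{p_4,p_5,p_6,p_7\}$ the pairs at $P$-distance at least 2 are
$\{p_4,p_6\},\qquad\{p_4,p_7\},\qquad\{p_5,p_7\},$
and choosing two distinct such pairs for $\{c_1,c_2\}$ and $\{d_1,d_2\}$ yields exactly the three combinations above.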
$\bullet$ If $i=3$, then $Q''\subset\{p_2,p_5,p_6,p_7\}$. Moreover, $d_P(c_1,c_2)\ge 2$, $d_P(d_1,d_2)\ge 2$, $\{c_1,c_2\}\ne\{d_1,d_2\}$, and neither $\{c_1,c_2\}$ nor $\{d_1,d_2\}$ is equal to $\{p_2,p_5\}$, since then either $c$ or $d$ could not be connected with $p_3$ nor $p_4$. Hence there are only three possibilities:

$\bullet\bullet$ $\{\{c_1,c_2\},\{d_1,d_2\}\}=\{\{p_2,p_6\},\{p_2,p_7\}\}$,

[Figure: diagram of this possibility.]

$\bullet\bullet$ $\{\{c_1,c_2\},\{d_1,d_2\}\}=\{\{p_2,p_6\},\{p_5,p_7\}\}$,

$\bullet\bullet$ $\{\{c_1,c_2\},\{d_1,d_2\}\}=\{\{p_2,p_7\},\{p_5,p_7\}\}$.

The first two cases can be discarded by the paths $\widehat{P}$ in the following diagrams, which are longer than $P$.

[Figures: $\widehat{P}$ in blue, with $L(\widehat{P})=8>L(P)=7$ (one diagram per case).]
[black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (1.7,2.3) node $p_3$ ; (3.2,2.4) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (4.3,2.2) node $c$ ; (6.3,2.2) node $d$ ; (8.5,2.6) node ; (3.5,0) node $\\widehat{P}$ in blue, with $L(\\widehat{P})=8>L(P)=7$ ; In the third case $c$ must be connected by an edge with $p_4$ , since by assumption $P^{\\prime }\\cup Q^{\\prime }$ is connected, $cd$ is not an edge, and if $c$ is connected with $p_3$ , then we can replace $p_2p_3$ with $p_2cp_3$ , obtaining a longer path.", "The following diagrams shows that with the edge $cp_4$ we obtain a path $\\widehat{P}$ , that is longer than $P$ , discarding the third case and concluding the case $i=3$ .", "[scale=0.8] (6,2.3) node ; [-,cyan,line width=2pt] (0,1)–(1,1); [-,cyan,line width=2pt] (1,1)–(2,2); [-,cyan,line width=2pt] (2,2)–(3,2); [-,cyan,line width=2pt] (4,1)–(5,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-,red] (1,1)–(4,2); [-,cyan,line width=2pt] (4,1)–(6,2); [-,cyan,line width=2pt] (6,1)–(6,2); [-,cyan,line width=2pt] (3,2)..controls (3.5,2.3)..(4,2); [-,white,line width=4pt] (6,1)–(4,2); [-,cyan,line width=2pt] (6,1)–(4,2); [-,white,line width=2pt] (3,2)–(4,1); [-] (3,2)–(4,1); [red] (4,2) circle (2pt); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,2) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (1.7,2.3) node $p_3$ ; (3.2,2.4) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (4.3,2.2) node $c$ ; (6.3,2.2) node $d$ ; (8.5,2.6) node ; (3.5,0) node $\\widehat{P}$ in blue, with $L(\\widehat{P})=8>L(P)=7$ ; $\\bullet $ If $i=4$ , then $Q^{\\prime \\prime }\\subset \\lbrace p_2,p_3,p_6,p_7\\rbrace $ .", "Moreover, $d_P(c_1,c_2)\\ge 2$ , $d_P(d_1,d_2)\\ge 2$ , $\\lbrace c_1,c_2\\rbrace \\ne \\lbrace d_1,d_2\\rbrace $ , and neither $\\lbrace c_1,c_2\\rbrace $ nor $\\lbrace d_1,d_2\\rbrace $ are equal to $\\lbrace p_3,p_6\\rbrace $ , since then either $c$ or $d$ could not be connected with $p_4$ nor $p_5$ .", "Hence there are only three possibilities: $\\bullet \\bullet $ $\\lbrace \\lbrace c_1,c_2\\rbrace ,\\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_2,p_6\\rbrace ,\\lbrace p_2,p_7\\rbrace \\rbrace ,{\\hbox{\\begin{tikzpicture}[scale=0.5](6,2.3) node {};[-] (0,1)--(2,1);[-] (3,2)--(4,2);[-] (5,1)--(7,1);[-,red] (6,1)--(5,2);[-,red] (1,1)--(2,2);[-,red] (2,2)--(5,1);[-,white,line width=2pt] (1,1)--(5,2);[-,red] (1,1)--(5,2);[-,white,line width=2pt] (4,2)--(5,1);[-] (4,2)--(5,1);[-,white,line width=2pt] (2,1)--(3,2);[-] (2,1)--(3,2);[red] (2,2) circle (2pt);[red] (5,2) circle (2pt);[black] (0,1) circle (2pt)[black] (1,1) circle (2pt)[black] (2,1) circle (2pt)[black] (3,2) circle (2pt)[black] (4,2) circle (2pt)[black] (5,1) circle (2pt)[black] (6,1) circle (2pt)[black] (7,1) circle (2pt);(0,0.6) node {p_1};(1,0.6) node {p_2};(2,0.6) node {p_3};(3.2,2.4) node {p_4};(4.3,2.4) node {p_{5}};(5,0.6) node {p_6};(6,0.6) node {p_7};(7,0.6) node {p_8};(1.7,2.3) node {c};(5.4,2.2) node {d};(8.5,2.6) node {};\\end{tikzpicture}}}$ $\\item [$$] $ {{c1,c2},{d1,d2}}={{p2,p6},{p3,p7}}, $\\item [$$] $ {{c1,c2},{d1,d2}}={{p2,p7},{p3,p7}}, $$ The three cases can be discarded by the paths $\\widehat{P}$ in the following diagrams that are longer than $P$ .", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(2,1); 
[-,line width=2pt,cyan] (3,2)–(4,2); [-] (5,1)–(7,1); [-,line width=2pt,cyan] (6,1)–(7,1); [-,line width=2pt,cyan] (6,1)–(5,2); [-,line width=2pt,cyan] (1,1)–(2,2); [-,line width=2pt,cyan] (2,2)–(5,1); [-,white,line width=4pt] (1,1)–(5,2); [-,line width=2pt,cyan] (1,1)–(5,2); [-,white,line width=4pt] (4,2)–(5,1); [-,line width=2pt,cyan] (4,2)–(5,1); [-,white,line width=4pt] (2,1)–(3,2); [-,line width=2pt,cyan] (2,1)–(3,2); [red] (2,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (2,0.6) node $p_3$ ; (3.2,2.4) node $p_4$ ; (4.3,2.4) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (1.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; (3.5,-0.3) node $\\widehat{P}$ in blue, $L(\\widehat{P})>L(P)$ ; [scale=0.5] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,1); [-] (1,1)–(2,1); [-,line width=2pt,cyan] (3,2)–(4,2); [-] (5,1)–(7,1); [-,line width=2pt,cyan] (6,1)–(7,1); [-,line width=2pt,cyan] (6,1)–(5,2); [-,line width=2pt,cyan] (1,1)–(2,2); [-,line width=2pt,cyan] (2,2)–(5,1); [-,white,line width=4pt] (2,1)–(5,2); [-,line width=2pt,cyan] (2,1)–(5,2); [-,white,line width=4pt] (4,2)–(5,1); [-,line width=2pt,cyan] (4,2)–(5,1); [-,white,line width=4pt] (2,1)–(3,2); [-,line width=2pt,cyan] (2,1)–(3,2); [red] (2,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (2,0.6) node $p_3$ ; (3.2,2.4) node $p_4$ ; (4.3,2.4) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (1.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; (3.5,-0.3) node $\\widehat{P}$ in blue, $L(\\widehat{P})>L(P)$ ; [scale=0.5] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,1); [-] (1,1)–(2,1); [-,line width=2pt,cyan] (3,2)–(4,2); [-] (5,1)–(7,1); [-,line width=2pt,cyan] (6,1)–(5,2); [-,line width=2pt,cyan] (1,1)–(2,2); [-,line width=2pt,cyan] (2,2)–(6,1); [-,white,line width=4pt] (2,1)–(5,2); [-,line width=2pt,cyan] (2,1)–(5,2); [-,white,line width=4pt] (4,2)–(5,1); [-,line width=2pt,cyan] (4,2)–(5,1); [-,white,line width=4pt] (2,1)–(3,2); [-,line width=2pt,cyan] (2,1)–(3,2); [red] (2,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (2,0.6) node $p_3$ ; (3.2,2.4) node $p_4$ ; (4.3,2.4) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (1.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; (3.5,-0.3) node $\\widehat{P}$ in blue, $L(\\widehat{P})>L(P)$ ; This discards the possibility that $i=4$ and concludes the Case 2.", "Case 3.", "$P^{\\prime }=\\lbrace a,b\\rbrace $ , such that $ab$ is not an edge of $P$ , $Q^{\\prime }=\\lbrace c,d\\rbrace $ and $cd$ is not an edge of $Q$ .", "Let $P=p_1p_2\\dots p_8$ be the sequential order of the points in $P$ .", "Assume that $P^{\\prime }=\\lbrace i,j\\rbrace $ , and that $i<j$ .", "Then we know that $i,j\\notin \\lbrace 1,8\\rbrace $ and that $j-i>1$ 
.", "Thus $\\lbrace i,j\\rbrace \\in \\lbrace \\lbrace 2,4\\rbrace ,\\lbrace 2,5\\rbrace ,\\lbrace 2,6\\rbrace ,\\lbrace 2,7\\rbrace ,\\lbrace 3,5\\rbrace ,\\lbrace 3,6\\rbrace ,\\lbrace 3,7\\rbrace ,\\lbrace 4,6\\rbrace ,\\lbrace 4,7\\rbrace ,\\lbrace 5,7\\rbrace \\rbrace .$ By symmetry it suffices to discard the following cases $\\lbrace i,j\\rbrace \\in \\lbrace \\lbrace 2,4\\rbrace ,\\lbrace 2,5\\rbrace ,\\lbrace 2,6\\rbrace ,\\lbrace 2,7\\rbrace ,\\lbrace 3,5\\rbrace ,\\lbrace 3,6\\rbrace \\rbrace .$ Let $Q^{\\prime \\prime }=\\lbrace c_1,c_2,d_1,d_2\\rbrace $ be such that $cc_1,cc_2,dd_1,dd_2$ are edges of $Q$ .", "Then $d_P(c_1,c_2)\\ge 2$ , $d_P(d_1,d_2)\\ge 2$ and $\\lbrace c_1,c_2\\rbrace \\ne \\lbrace d_1,d_2\\rbrace $ .", "$\\bullet $ If $\\lbrace i,j\\rbrace =\\lbrace 2,4\\rbrace $ , then $Q^{\\prime \\prime }\\subset \\lbrace p_3,p_5,p_6,p_7\\rbrace $ .", "If $p_3\\in \\lbrace c_1,c_2\\rbrace $ , then $c$ cannot be connected with $p_2$ nor with $p_4$ by an edge, since then, we could extend $P$ .", "For example, if $c_1=p_3$ and $c$ is connected with $p_2$ , then we replace $p_2p_3$ by $p_2cp_3$ in $P$ and obtain a longer path.", "Similarly, if $p_3\\in \\lbrace d_1,d_2\\rbrace $ , then $d$ cannot be connected with $p_2$ nor with $p_4$ by an edge.", "Hence, since $P^{\\prime }\\cup Q^{\\prime }=\\lbrace p_2,p_4,c,d\\rbrace $ is connected, $p_3$ cannot be in both $\\lbrace c_1,c_2\\rbrace $ and $\\lbrace d_1,d_2\\rbrace $ .", "But it has to be in one of them, since otherwise $Q^{\\prime \\prime }\\subset \\lbrace p_5,p_6,p_7\\rbrace $ , which leads to $\\lbrace c_1,c_2\\rbrace = \\lbrace d_1,d_2\\rbrace =\\lbrace p_5,p_7\\rbrace $ , contradicting $\\lbrace c_1,c_2\\rbrace \\ne \\lbrace d_1,d_2\\rbrace $ .", "Assume for example that $p_3\\in \\lbrace c_1,c_2\\rbrace $ but $p_3\\notin \\lbrace d_1,d_2\\rbrace $ .", "Then $c$ cannot be connected with $p_2$ nor with $p_4$ by an edge.", "Since $P^{\\prime }\\cup Q^{\\prime }=\\lbrace p_2,p_4,c,d\\rbrace $ is connected, $c$ must be connected with $d$ by an edge.", "Moreover, $\\lbrace d_1,d_2\\rbrace \\subset \\lbrace p_5,p_6,p_7\\rbrace $ , and so $\\lbrace d_1,d_2\\rbrace =\\lbrace p_5,p_7\\rbrace $ , which yields the path $\\widehat{P}= p_1p_2p_3p_4p_5p_6p_7dc$ , which is longer than $P$ .", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(1,2); [-] (1,2)–(2,1); [-] (2,1)–(3,2); [-] (4,1)–(5,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-,red] (2,1)–(4,2); [-,red] (4,1)–(5,2); [-,red] (5,2)–(6,1); [-,line width=1.5pt,dotted] (4,2)–(5,2); [-,white,line width=2pt] (3,2)–(4,1); [-] (3,2)–(4,1); [red] (4,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,2.4) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (4,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; [scale=0.5] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,2); [-,line width=2pt,cyan] (1,2)–(2,1); [-,line width=2pt,cyan] (2,1)–(3,2); [-,line width=2pt,cyan] (4,1)–(5,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-] (6,1)–(7,1); [-,red] (2,1)–(4,2); [-,red] (4,1)–(5,2); [-,line width=2pt,cyan] (5,2)–(6,1); [-,line width=2pt,cyan] (4,2)–(5,2); [-,white,line width=4pt] (3,2)–(4,1); [-,line width=2pt,cyan] (3,2)–(4,1); [red] (4,2) circle (2pt); [red] (5,2) circle (2pt); 
[black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,2) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,2.4) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (4,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; (3.5,-0.3) node $\\widehat{P}$ in blue, $L(\\widehat{P})>L(P)$ ; If $p_3\\in \\lbrace d_1,d_2\\rbrace $ but $p_3\\notin \\lbrace c_1,c_2\\rbrace $ , then the same argument interchanging $c$ and $d$ yields a contradiction, concluding the case $\\lbrace i,j\\rbrace =\\lbrace 2,4\\rbrace $ .", "$\\bullet $ If $\\lbrace i,j\\rbrace =\\lbrace 2,5\\rbrace $ , then $Q^{\\prime \\prime }\\subset \\lbrace p_3,p_4,p_6,p_7\\rbrace $ .", "If $\\lbrace c_1,c_2\\rbrace $ (respectively $\\lbrace d_1,d_2\\rbrace $ ) is equal to $\\lbrace p_3,p_6\\rbrace $ , then $c$ (respectively $d$ ) cannot be connected by an edge with $p_2$ nor with $p_5$ , since that would extend $P$ .", "Since $P^{\\prime }\\cup Q^{\\prime }=\\lbrace p_2,p_5,c,d\\rbrace $ is connected, there is an edge from $c$ to $d$ .", "But then none of $c_1,d_1,d_1,d_2$ can be equal to $p_7$ , since then we could replace $p_8$ by $cd$ in $P$ and obtain a longer path.", "So $\\lbrace \\lbrace c_1,c_2\\rbrace ,\\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_3,p_6\\rbrace ,\\lbrace p_4,p_6\\rbrace \\rbrace .$ The path $\\widehat{P}$ in the following diagram, which is longer than $P$ , shows that this is impossible.", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(1,2); [-] (1,2)–(2,1); [-] (2,1)–(3,1); [-] (5,1)–(7,1); [-,red] (2,1)–(3,2); [-,red] (3,1)–(5,2); [-,red] (5,2)–(5,1); [-,white,line width=2pt] (3,2)–(5,1); [-,red] (3,2)–(5,1); [-,line width=1.5pt,dotted] (3,2)..controls (3,3.2)and(5,3.2)..(5,2); [-,white,line width=2pt] (3,1)–(4,2); [-] (3,1)–(4,2); [-,white,line width=2pt] (4,2)–(5,1); [-] (4,2)–(5,1); [red] (3,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,2.3) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; [scale=0.5] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,2); [-,line width=2pt,cyan] (1,2)–(2,1); [-] (2,1)–(3,1); [-,line width=2pt,cyan] (5,1)–(7,1); [-,line width=2pt,cyan] (2,1)–(3,2); [-,line width=2pt,cyan] (3,1)–(5,2); [-,red] (5,2)–(5,1); [-,white,line width=2pt] (3,2)–(5,1); [-,red] (3,2)–(5,1); [-,line width=2pt,cyan] (3,2)..controls (3,3.2)and(5,3.2)..(5,2); [-,white,line width=4pt] (3,1)–(4,2); [-,line width=2pt,cyan] (3,1)–(4,2); [-,white,line width=4pt] (4,2)–(5,1); [-,line width=2pt,cyan] (4,2)–(5,1); [red] (3,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,2.3) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; 
(3.5,-0.3) node $\\widehat{P}$ in blue, $L(\\widehat{P})>L(P)$ ; Thus the only possibilities for $\\lbrace c_1,c_2\\rbrace $ and $\\lbrace d_1,d_2\\rbrace $ are $\\lbrace p_3,p_7\\rbrace $ , $\\lbrace p_4,p_6\\rbrace $ and $\\lbrace p_4,p_7\\rbrace $ and we have the following three scenarios.", "$\\bullet \\bullet $ $\\lbrace \\lbrace c_1,c_2\\rbrace ,\\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_3,p_7\\rbrace ,\\lbrace p_4,p_6\\rbrace \\rbrace ,{\\hbox{\\begin{tikzpicture}[scale=0.5](6,2.3) node {};[-] (0,1)--(1,2);[-] (1,2)--(2,1);[-] (2,1)--(3,1);[-] (5,1)--(7,1);\\end{tikzpicture}[-,red] (2,1)--(3,2);[-,red] (3,1)--(5,2);[-,red] (5,2)--(5,1);[-,white,line width=2pt] (3,2)--(6,1);[-,red] (3,2)--(6,1);[-,white,line width=2pt] (3,1)--(4,2);[-] (3,1)--(4,2);[-,white,line width=2pt] (4,2)--(5,1);[-] (4,2)--(5,1);[red] (3,2) circle (2pt);[red] (5,2) circle (2pt);[black] (0,1) circle (2pt)[black] (1,2) circle (2pt)[black] (2,1) circle (2pt)[black] (3,1) circle (2pt)[black] (4,2) circle (2pt)[black] (5,1) circle (2pt)[black] (6,1) circle (2pt)[black] (7,1) circle (2pt);(0,0.6) node {$p_1$};(0.7,2.3) node {$p_2$};(2,0.6) node {$p_3$};(3,0.6) node {$p_4$};(4,2.3) node {$p_{5}$};(5,0.6) node {$p_6$};(6,0.6) node {$p_7$};(7,0.6) node {$p_8$};(2.7,2.3) node {$c$};(5.4,2.2) node {$d$};(8.5,2.6) node {};}}$ $\\item [$$] $ {{c1,c2},{d1,d2}}={{p3,p7},{p4,p7}}, $\\item [$$] $ {{c1,c2},{d1,d2}}={{p4,p6},{p4,p7}}, $$ The path $\\widehat{P}$ in each of the following diagrams shows that none of the first two cases is possible.", "[scale=0.8] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,2); [-,line width=2pt,cyan] (1,2)–(2,1); [-] (2,1)–(3,1); [-,line width=2pt,cyan] (5,1)–(6,1); [-] (6,1)–(7,1); [-,line width=2pt,cyan] (2,1)–(3,2); [-,line width=2pt,cyan] (3,1)–(5,2); [-,line width=2pt,cyan] (5,2)–(5,1); [-,white,line width=4pt] (3,2)–(6,1); [-,line width=2pt,cyan] (3,2)–(6,1); [-,white,line width=4pt] (3,1)–(4,2); [-,line width=2pt,cyan] (3,1)–(4,2); [-,white,line width=2pt] (4,2)–(5,1); [-] (4,2)–(5,1); [red] (3,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,2.3) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; [scale=0.8] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,2); [-,line width=2pt,cyan] (1,2)–(2,1); [-] (2,1)–(3,1); [-] (5,1)–(6,1); [-] (6,1)–(7,1); [-,line width=2pt,cyan] (2,1)–(3,2); [-,line width=2pt,cyan] (3,1)–(5,2); [-,line width=2pt,cyan] (5,2)–(6,1); [-,white,line width=4pt] (3,2)–(6,1); [-,line width=2pt,cyan] (3,2)–(6,1); [-,white,line width=4pt] (3,1)–(4,2); [-,line width=2pt,cyan] (3,1)–(4,2); [-,white,line width=4pt] (4,2)–(5,1); [-,line width=2pt,cyan] (4,2)–(5,1); [red] (3,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,2.3) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; In the third case $p_5$ cannot 
connect with $c$ nor $d$ , thus it has to be connected with $p_2$ , since $P^{\\prime }\\cup Q^{\\prime }$ is connected.", "But then the path $\\widehat{P}$ in second diagram shows that the third case is not possible, thus discarding the case $\\lbrace i,j\\rbrace =\\lbrace 2,5\\rbrace $ .", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(1,2); [-] (1,2)–(2,1); [-] (2,1)–(3,1); [-] (5,1)–(7,1); [-,red] (3,1)–(3,2); [-,red] (3,1)–(5,2); [-,red] (5,2)–(5,1); [-,white,line width=2pt] (3,2)–(6,1); [-,red] (3,2)–(6,1); [-,white,line width=2pt] (3,1)–(4,2); [-] (3,1)–(4,2); [-,white,line width=2pt] (4,2)–(5,1); [-] (4,2)–(5,1); [-,line width=1.5pt,dotted] (1,2)..controls(1,3)and(4,3)..(4,2); [red] (3,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4.3,2.3) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; [scale=0.5] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,2); [-] (1,2)–(2,1); [-] (2,1)–(3,1); [-] (5,1)–(6,1); [-,line width=2pt,cyan] (6,1)–(7,1); [-,line width=2pt,cyan] (3,1)–(3,2); [-,line width=2pt,cyan] (3,1)–(5,2); [-,line width=2pt,cyan] (5,2)–(5,1); [-,white,line width=4pt] (3,2)–(6,1); [-,line width=2pt,cyan] (3,2)–(6,1); [-,white,line width=2pt] (3,1)–(4,2); [-] (3,1)–(4,2); [-,white,line width=4pt] (4,2)–(5,1); [-,line width=2pt,cyan] (4,2)–(5,1); [-,line width=2pt,cyan] (1,2)..controls(1,3)and(4,3)..(4,2); [red] (3,2) circle (2pt); [red] (5,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4.3,2.3) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (5.4,2.2) node $d$ ; (8.5,2.6) node ; $\\bullet $ If $\\lbrace i,j\\rbrace =\\lbrace 2,6\\rbrace $ , then $Q^{\\prime \\prime }\\subset \\lbrace p_3,p_4,p_5,p_7\\rbrace $ .", "If one of $c_1,c_2$ is equal to $p_3$ , then $c$ cannot be connected by an edge with $p_2$ nor with $p_5$ , since then one could extend $P$ .", "So it has to be connected with $d$ , but then one of $c$ or $d$ has to be connected with $p_7$ , and we can replace $p_8$ by the edge $cd$ in $P$ , obtaining a longer path.", "Thus $p_3\\notin \\lbrace c_1,c_2\\rbrace $ and by the same argument, interchanging $c$ with $d$ , we obtain $p_3\\notin \\lbrace d_1,d_2\\rbrace $ .", "Hence $Q^{\\prime \\prime }=\\lbrace c_1,c_2,d_1,d_2\\rbrace \\subset \\lbrace p_4,p_5,p_7\\rbrace $ and there is only one possibility left: $\\lbrace \\lbrace c_1,c_2\\rbrace ,\\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_4,p_7\\rbrace ,\\lbrace p_5,p_7\\rbrace \\rbrace .$ The blue path $\\widehat{P}$ in the second diagram, which is longer than $P$ , shows that this is not possible, and so we have discarded the case $\\lbrace i,j\\rbrace =\\lbrace 2,6\\rbrace $ .", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(1,2); [-] (1,2)–(2,1); [-] (2,1)–(4,1); [-] (6,1)–(7,1); [-,red] (3,1)–(4,2); [-,red] (4,1)–(6,2); [-,red] (6,2)–(6,1); [-,white,line width=2pt] (6,1)–(4,2); [-,red] (6,1)–(4,2); [-,white,line width=2pt] 
(4,1)–(5,2); [-] (4,1)–(5,2); [-,white,line width=2pt] (5,2)–(6,1); [-] (5,2)–(6,1); [red] (4,2) circle (2pt); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,2) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,2.4) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (3.7,2.3) node $c$ ; (6.4,2.2) node $d$ ; (8.5,2.6) node ; [scale=0.5] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,2); [-,line width=2pt,cyan] (1,2)–(2,1); [-,line width=2pt,cyan] (2,1)–(3,1); [-] (3,1)–(4,1); [-] (6,1)–(7,1); [-,line width=2pt,cyan] (3,1)–(4,2); [-,line width=2pt,cyan] (4,1)–(6,2); [-,line width=2pt,cyan] (6,2)–(6,1); [-,white,line width=4pt] (6,1)–(4,2); [-,line width=2pt,cyan] (6,1)–(4,2); [-,white,line width=4pt] (4,1)–(5,2); [-,line width=2pt,cyan] (4,1)–(5,2); [-,white,line width=2pt] (5,2)–(6,1); [-] (5,2)–(6,1); [red] (4,2) circle (2pt); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,2) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,2.4) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (3.7,2.3) node $c$ ; (6.4,2.2) node $d$ ; (8.5,2.6) node ; $\\bullet $ If $\\lbrace i,j\\rbrace =\\lbrace 2,7\\rbrace $ , then $Q^{\\prime \\prime }\\subset \\lbrace p_3,p_4,p_5,p_6\\rbrace $ .", "Thus the only possibilities for $\\lbrace c_1,c_2\\rbrace $ and $\\lbrace d_1,d_2\\rbrace $ are $\\lbrace p_3,p_5\\rbrace $ , $\\lbrace p_3,p_6\\rbrace $ and $\\lbrace p_4,p_6\\rbrace $ and we have the following three scenarios.", "$\\bullet \\bullet $ $\\lbrace \\lbrace c_1,c_2\\rbrace ,\\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_3,p_5\\rbrace ,\\lbrace p_3,p_6\\rbrace \\rbrace ,{\\hbox{\\begin{tikzpicture}[scale=0.5](6,2.3) node {};[-] (0,1)--(1,2);[-] (1,2)--(2,1);[-] (2,1)--(5,1);[-,red] (2,1)--(3,2);[-,red] (2,1)--(4,2);[-,line width=2pt,white] (3,2)--(4,1);[-,red] (3,2)--(4,1);[-,white,line width=2pt] (5,1)--(4,2);[-,red] (5,1)--(4,2);[-] (5,1)--(6,2);[-] (6,2)--(7,1);[red] (3,2) circle (2pt);[red] (4,2) circle (2pt);[black] (0,1) circle (2pt)[black] (1,2) circle (2pt)[black] (2,1) circle (2pt)[black] (3,1) circle (2pt)[black] (4,1) circle (2pt)[black] (5,1) circle (2pt)[black] (6,2) circle (2pt)[black] (7,1) circle (2pt);(0,0.6) node {p_1};(0.7,2.3) node {p_2};(2,0.6) node {p_3};(3,0.6) node {p_4};(4,0.6) node {p_{5}};(5,0.6) node {p_6};(6,2.4) node {p_7};(7,0.6) node {p_8};(2.7,2.3) node {c};(4.4,2.2) node {d};(8.5,2.6) node {};\\end{tikzpicture}}}$ $\\item [$$] $ {{c1,c2},{d1,d2}}={{p3,p5},{p4,p6}}, $\\item [$$] $ {{c1,c2},{d1,d2}}={{p3,p6},{p4,p6}}.", "$$ In the first case $d$ must be connected with $c$ by an edge.", "In fact, it cannot be connected with $p_7$ , since otherwise we could replace $p_6p_7$ with $p_6dp_7$ in $P$ and obtain a longer path than $P$ , and similarly, if it is connected with $p_2$ , then we replace $p_2p_3$ by $p_2dp_3$ in $P$ , obtaining a longer path.", "Since $P^{\\prime }\\cup Q^{\\prime }=\\lbrace p_2,p_7,c,d\\rbrace $ is connected, it has to be connected with $c$ .", "But then the blue path $\\widehat{P}$ in the second diagram is longer than $P$ , which discards 
the case $\\lbrace \\lbrace c_1,c_2\\rbrace ,\\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_3,p_5\\rbrace ,\\lbrace p_3,p_6\\rbrace \\rbrace $ .", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(1,2); [-] (1,2)–(2,1); [-] (2,1)–(5,1); [-,red] (2,1)–(3,2); [-,red] (2,1)–(4,2); [-,line width=2pt,white] (3,2)–(4,1); [-,red] (3,2)–(4,1); [-,line width=1.5pt,dotted] (3,2)–(4,2); [-,white,line width=2pt] (5,1)–(4,2); [-,red] (5,1)–(4,2); [-] (5,1)–(6,2); [-] (6,2)–(7,1); [red] (3,2) circle (2pt); [red] (4,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,2) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,2.4) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (4.4,2.2) node $d$ ; (8.5,2.6) node ; [scale=0.5] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,2); [-,line width=2pt,cyan] (1,2)–(2,1); [-,line width=2pt,cyan] (2,1)–(4,1); [-] (4,1)–(5,1); [-,red] (2,1)–(3,2); [-,red] (2,1)–(4,2); [-,line width=4pt,white] (3,2)–(4,1); [-,line width=2pt,cyan] (3,2)–(4,1); [-,line width=2pt,cyan] (3,2)–(4,2); [-,cyan,line width=2pt] (5,1)–(4,2); [-,line width=2pt,cyan] (5,1)–(6,2); [-,line width=2pt,cyan] (6,2)–(7,1); [red] (3,2) circle (2pt); [red] (4,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,2) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,2.4) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (4.4,2.2) node $d$ ; (8.5,2.6) node ; The second case is discarded by the path $\\widehat{P}$ in the second diagram, since it is longer than $P$ .", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(1,2); [-] (1,2)–(2,1); [-] (2,1)–(5,1); [-,red] (2,1)–(3,2); [-,red] (3,1)–(4,2); [-,line width=2pt,white] (3,2)–(4,1); [-,red] (3,2)–(4,1); [-,white,line width=2pt] (5,1)–(4,2); [-,red] (5,1)–(4,2); [-] (5,1)–(6,2); [-] (6,2)–(7,1); [red] (3,2) circle (2pt); [red] (4,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,2) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ ; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,2.4) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (4.4,2.2) node $d$ ; (8.5,2.6) node ; [scale=0.5] (6,2.3) node ; [-,line width=2pt,cyan] (0,1)–(1,2); [-,line width=2pt,cyan] (1,2)–(2,1); [-,line width=2pt,cyan] (3,1)–(4,1); [-] (2,1)–(3,1); [-] (4,1)–(5,1); [-,line width=2pt,cyan] (2,1)–(3,2); [-,line width=2pt,cyan] (3,1)–(4,2); [-,line width=4pt,white] (3,2)–(4,1); [-,line width=2pt,cyan] (3,2)–(4,1); [-,white,line width=2pt] (5,1)–(4,2); [-,line width=2pt,cyan] (5,1)–(4,2); [-,line width=2pt,cyan] (5,1)–(6,2); [-,line width=2pt,cyan] (6,2)–(7,1); [red] (3,2) circle (2pt); [red] (4,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,2) circle (2pt) [black] (2,1) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,1) circle (2pt) [black] (6,2) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (0.7,2.3) node $p_2$ 
; (2,0.6) node $p_3$ ; (3,0.6) node $p_4$ ; (4,0.6) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,2.4) node $p_7$ ; (7,0.6) node $p_8$ ; (2.7,2.3) node $c$ ; (4.4,2.2) node $d$ ; (8.5,2.6) node ; The third case is symmetric to the first case, and so we have discarded the case $\\lbrace i,j\\rbrace =\\lbrace 2,7\\rbrace $ .", "$\\bullet $ If $\\lbrace i,j\\rbrace =\\lbrace 3,5\\rbrace $ , then $Q^{\\prime \\prime }\\subset \\lbrace p_2,p_4,p_6,p_7\\rbrace $ .", "If one of $c_1,c_2$ is equal to $p_4$ , then $c$ cannot be connected by an edge with $p_3$ nor with $p_5$ , since then one could extend $P$ .", "So it has to be connected with $d$ , but then none of $c$ or $d$ can be connected with $p_7$ nor with $p_2$ , since we could replace $p_8$ (respectively $p_1$ ) by the edge $cd$ in $P$ , obtaining a longer path.", "This implies that $Q^{\\prime \\prime }\\subset \\lbrace p_4,p_6\\rbrace $ which is impossible, since $\\lbrace c_1,c_2\\rbrace \\ne \\lbrace d_1,d_2\\rbrace $ .", "Thus $p_4\\notin \\lbrace c_1,c_2\\rbrace $ and by the same argument, interchanging $c$ with $d$ , we obtain $p_4\\notin \\lbrace d_1,d_2\\rbrace $ .", "Hence $Q^{\\prime \\prime }=\\lbrace c_1,c_2,d_1,d_2\\rbrace \\subset \\lbrace p_2,p_6,p_7\\rbrace $ and there is only one possibility left: $\\lbrace \\lbrace c_1,c_2\\rbrace ,\\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_2,p_6\\rbrace ,\\lbrace p_2,p_7\\rbrace \\rbrace .$ The blue path $\\widehat{P}$ in the second diagram, which is longer than $P$ , shows that this is not possible, and so we have discarded the case $\\lbrace i,j\\rbrace =\\lbrace 3,5\\rbrace $ .", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(1,1); [-] (5,1)–(7,1); [-,red] (1,1)–(1,2); [-,red] (5,1)–(1,2); [-,line width=2pt,white] (1,1)–(6,2); [-,red] (1,1)–(6,2); [-,white,line width=2pt] (6,1)–(6,2); [-,red] (6,1)–(6,2); [-,white,line width=2pt] (1,1)–(2,2); [-,white,line width=2pt] (2,2)–(3,1); [-,white,line width=2pt] (3,1)–(4,2); [-,white,line width=2pt] (4,2)–(5,1); [-] (1,1)–(2,2); [-] (2,2)–(3,1); [-] (3,1)–(4,2); [-] (4,2)–(5,1); [red] (1,2) circle (2pt); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,2) circle (2pt) [black] (3,1) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (2.4,2.3) node $p_3$ ; (3,0.6) node $p_4$ ; (4.4,2.4) node $p_{5}$ ; (5,0.6) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (0.7,2.3) node $c$ ; (6.4,2.2) node $d$ ; (8.5,2.6) node ; [scale=0.5] (6,2.3) node ; [-] (0,1)–(1,1); [-] (5,1)–(6,1); [-,line width=2pt,cyan] (6,1)–(7,1); [-,line width=2pt,cyan] (1,1)–(1,2); [-,line width=2pt,cyan] (5,1)–(1,2); [-,line width=4pt,white] (1,1)–(6,2); [-,line width=2pt,cyan] (1,1)–(6,2); [-,white,line width=4pt] (6,1)–(6,2); [-,line width=2pt,cyan] (6,1)–(6,2); [-,white,line width=2pt] (1,1)–(2,2); [-,white,line width=4pt] (2,2)–(3,1); [-,white,line width=4pt] (3,1)–(4,2); [-,white,line width=4pt] (4,2)–(5,1); [-] (1,1)–(2,2); [-,line width=2pt,cyan] (2,2)–(3,1); [-,line width=2pt,cyan] (3,1)–(4,2); [-,line width=2pt,cyan] (4,2)–(5,1); [red] (1,2) circle (2pt); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,2) circle (2pt) [black] (3,1) circle (2pt) [black] (4,2) circle (2pt) [black] (5,1) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (2.4,2.3) node $p_3$ ; (3,0.6) node $p_4$ ; (4.4,2.4) node $p_{5}$ ; (5,0.6) 
node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (0.7,2.3) node $c$ ; (6.4,2.2) node $d$ ; (8.5,2.6) node ; $\\bullet $ If $\\lbrace i,j\\rbrace =\\lbrace 3,6\\rbrace $ , then $Q^{\\prime \\prime }\\subset \\lbrace p_2,p_4,p_5,p_7\\rbrace $ .", "We claim that $c$ and $d$ cannot be connected by an edge.", "In fact, if they are connected, then none of the two can be connected with $p_2$ , since we could replace $p_1$ by $cd$ in $P$ and obtain a longer path, and similarly none of both can be connected with $p_7$ .", "Hence $Q^{\\prime \\prime }\\subset \\lbrace p_4,p_5\\rbrace $ , which is impossible and proves the claim.", "If $c$ is connected with $p_3$ , then $\\lbrace c_1,c_2\\rbrace =\\lbrace p_5,p_7\\rbrace $ , since a connection of $c$ with $p_2$ leads to a path longer than $P$ , replacing $p_2p_3$ by $p_2cp_3$ in $P$ , and if $c$ is connected with $p_4$ , then replacing $p_3p_4$ by $p_3cp_4$ in $P$ yields a path longer than $P$ .", "Similarly, - If $d$ is connected with $p_3$ , then $\\lbrace d_1,d_2\\rbrace =\\lbrace p_5,p_7\\rbrace $ , - If $c$ is connected with $p_5$ , then $\\lbrace c_1,c_2\\rbrace =\\lbrace p_2,p_4\\rbrace $ , - If $d$ is connected with $p_5$ , then $\\lbrace d_1,d_2\\rbrace =\\lbrace p_2,p_4\\rbrace $ .", "Since $\\lbrace c_1,c_2\\rbrace \\ne \\lbrace d_1,d_2\\rbrace $ , one of $c,d$ is connected with $p_3$ and the other with $p_5$ , and we have $\\lbrace \\lbrace c_1,c_2\\rbrace , \\lbrace d_1,d_2\\rbrace \\rbrace =\\lbrace \\lbrace p_2,p_4\\rbrace , \\lbrace p_5,p_7\\rbrace \\rbrace .$ The blue path $\\widehat{P}$ in the second diagram, which is longer than $P$ , shows that this is not possible.", "[scale=0.5] (6,2.3) node ; [-] (0,1)–(1,1); [-] (3,1)–(4,1); [-] (6,1)–(7,1); [-,red] (1,1)–(3,2); [-,red] (3,1)–(3,2); [-,line width=2pt,white] (4,1)–(6,2); [-,red] (4,1)–(6,2); [-,white,line width=2pt] (6,1)–(6,2); [-,red] (6,1)–(6,2); [-,white,line width=2pt] (1,1)–(2,2); [-,white,line width=2pt] (2,2)–(3,1); [-,white,line width=2pt] (4,1)–(5,2); [-,white,line width=2pt] (5,2)–(6,1); [-] (1,1)–(2,2); [-] (2,2)–(3,1); [-] (4,1)–(5,2); [-] (5,2)–(6,1); [-,dotted,line width=2pt] (3,2)–(5,2); [-,dotted,line width=2pt] (2,2)..controls (2,3.2)and(6,3.2)..(6,2); [red] (3,2) circle (2pt); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,2) circle (2pt) [black] (3,1) circle (2pt) [black] (4,1) circle (2pt) [black] (5,2) circle (2pt) [black] (6,1) circle (2pt) [black] (7,1) circle (2pt); (0,0.6) node $p_1$ ; (1,0.6) node $p_2$ ; (1.6,2.3) node $p_3$ ; (3,0.6) node $p_4$ ; (4,0.6) node $p_{5}$ ; (4.8,2.4) node $p_6$ ; (6,0.6) node $p_7$ ; (7,0.6) node $p_8$ ; (2.9,2.3) node $c$ ; (6.3,2.4) node $d$ ; (8.5,2.6) node ; [scale=0.5] (6,2.3) node ; [-,cyan,line width=2pt] (0,1)–(1,1); [-,cyan,line width=2pt] (3,1)–(4,1); [-] (6,1)–(7,1); [-,red] (1,1)–(3,2); [-,cyan,line width=2pt] (3,1)–(3,2); [-,line width=2pt,white] (4,1)–(6,2); [-,red] (4,1)–(6,2); [-,white,line width=4pt] (6,1)–(6,2); [-,cyan,line width=2pt] (6,1)–(6,2); [-,white,line width=4pt] (1,1)–(2,2); [-,white,line width=2pt] (2,2)–(3,1); [-,white,line width=4pt] (4,1)–(5,2); [-,white,line width=4pt] (5,2)–(6,1); [-,cyan,line width=2pt] (1,1)–(2,2); [-] (2,2)–(3,1); [-,cyan,line width=2pt] (4,1)–(5,2); [-,cyan,line width=2pt] (5,2)–(6,1); [-,dotted,line width=2pt] (3,2)–(5,2); [-,cyan,line width=2pt] (2,2)..controls (2,3.2)and(6,3.2)..(6,2); [red] (3,2) circle (2pt); [red] (6,2) circle (2pt); [black] (0,1) circle (2pt) [black] (1,1) circle (2pt) [black] (2,2) circle 
Thus we have discarded the possibility $\lbrace i,j\rbrace =\lbrace 3,6\rbrace $ , which finishes Case 3 and concludes the proof of the theorem.

Corollary 4.3 Assume that $P$ and $Q$ are two longest paths in a simple graph $G$ . If $V(Q)\ne V(P)$ and $n=|V(G)|\le 10$ , then $V(Q)\cap V(P)$ is a separator. Moreover, there is a graph $G$ with $n=|V(G)|= 11$ and two longest paths $P$ and $Q$ with $V(Q)\ne V(P)$ , such that $V(Q)\cap V(P)$ is not a separator. Consequently, $n=11$ is the minimal $n$ for which such a graph exists.

By the discussion in the introduction it suffices to discard the cases $\ell =6$ , $n=8$ ; $\ell =7$ , $n=9$ ; $\ell =8$ , $n=10$ ; and $\ell =6$ , $n=10$ . The first three cases are discarded in Section REF , whereas the last case is discarded in Theorem REF . The statements about the example given in the introduction can be verified directly.
[ [ "A linear parallel algorithm to compute bisimulation and relational\n coarsest partitions" ], [ "Abstract The most efficient way to calculate strong bisimilarity is by calculation the relational coarsest partition on a transition system.", "We provide the first linear time algorithm to calculate strong bisimulation using parallel random access machines (PRAMs).", "More precisely, with $n$ states, $m$ transitions and $|\\mathit{Act}|\\leq m$ action labels, we provide an algorithm on $max(n,m)$ processors that calculates strong bisimulation in time $O(n+|\\mathit{Act}|)$ and space $O(n+m)$.", "The best-known PRAM algorithm has time complexity $O(n\\log n)$ on a smaller number of processors making it less suitable for massive parallel devices such as GPUs.", "An implementation on a GPU shows that the linear time-bound is achievable on contemporary hardware." ], [ "Introduction", "The notion of bisimilarity for Kripke structures and Labeled Transition Systems (LTSs) is commonly used to define behavioural equivalence.", "Deciding this behavioural equivalence is important in the field of modelling and verifying concurrent systems [4], [15].", "Kanellakis and Smolka proposed a partition refinement algorithm for obtaining the bisimilarity relation for Kripke structures [11].", "The proposed algorithm has a run time complexity of $\\mathcal {O}(nm)$ where $n$ is the number of states and $m$ is the number of transitions of the input graph.", "Later, a more sophisticated refinement algorithm running in $\\mathcal {O}(m~log~n)$ steps was proposed by Paige and Tarjan [16].", "In recent years the increase in the speed of sequential chip design has stagnated due to a multitude of factors such as energy consumption and heat generation.", "In contrast, parallel devices such as graphics processing units (GPUs) keep increasing rapidly in computational power.", "In order to profit from the acceleration of these devices, we require algorithms with massive parallelism.", "The article “There's plenty of room at the Top: What will drive computer performance after Moore's law” by Leierson et al.", "[14] indicates that the advance in computational performance will come from software and algorithms that can employ hardware structures with a massive number of simple, parallel processors, such as GPUs.", "In this paper, we propose such an algorithm to decide bisimilarity.", "Deciding bisimilarity is $P$ -complete [1], which suggests that bisimilarity is an inherently sequential problem.", "This fact has not withheld the community from searching efficient parallel algorithms for deciding bisimilarity of Kripke structures.", "In particular, Lee and Rajasekaran [13] proposed a parallel algorithm based on the Paige Tarjan algorithm that works in $\\mathcal {O}(n\\ log\\ n)$ time complexity using $\\frac{m}{\\log n}\\log \\log n$ Concurrently Read and Concurrently Write (CRCW) processors.", "In this work, we improve on the best known theoretical bound for PRAM algorithms using a higher degree of parallelism.", "The proposed algorithm improves the run time complexity to $\\mathcal {O}(n)$ on $max(m,n)$ processors and is based on the sequential algorithm of Kanellakis and Smolka [11].", "The larger number of processors used in this algorithm favours the increasingly parallel design of contemporary and future hardware.", "In addition, the algorithm is optimal w.r.t.", "the sequential Kanellakis-Smolka algorithm, meaning that overall, it does not perform more work than its sequential counterpart.", "We first present our 
algorithm on Kripke structures where transitions are unlabelled.", "However, as labelled transition systems (LTSs) are commonly used, and labels are not straightforward to incorporate in an efficient way (cf.", "for instance [21]), we discuss how our algorithm can be extended to take action labels into account.", "This leads to an algorithm with a run time complexity of $\\mathcal {O}(n + |\\mathit {Act}|)$ , with $\\mathit {Act}$ the set of action labels.", "Our algorithm has been designed for and can be analyzed with the CRCW PRAM model, following notations from [20].", "This model is an extension of the normal RAM model, allowing multiple processors to work with shared memory.", "In the CRCW PRAM model, parallel algorithms can be described in a straightforward and elegant way.", "In reality, no device exists that completely adheres to this PRAM model, but with recent advancements, hardware gets better and better at approximating the model since the number of parallel threads keeps growing.", "We demonstrate this by translating the PRAM algorithm to GPU code.", "We straightforwardly implemented our algorithm in CUDA and experimented with an NVIDIA Titan RTX, showing that our algorithm performs mostly in line with what our PRAM algorithm predicts.", "The paper is structured as follows: In Section , we recall the necessary preliminaries on the CRCW PRAM model and state the partition refinement problems this paper focuses on.", "In Section , we propose a parallel algorithm to compute bisimulation for Kripke structures, which is also called the Relational Coarsest Partition Problem (RCPP).", "In this section, we also prove the correctness of the algorithm and provide a complexity analysis.", "In Section , we discuss the details for an implementation with multiple action labels, thereby supporting LTSs, which forms the Bisimulation Coarsest Refinement Problem (BCRP).", "In Section  we discuss the results of the implementation and in Section  we address the usage of weaker PRAM models.", "Finally, in Section , we discuss related work." 
], [ "The PRAM Model", "The Parallel Random Access Machine (PRAM) is a natural extension of the normal Random Access Machine (RAM), where an arbitrary number of parallel programs can access the memory.", "Following the definitions of [20] we use a version of PRAM that is able to Concurrently Read and Concurrently Write (CRCW PRAM).", "It differs from the model introduced in [8] in which the PRAM model was only allowed to concurrently read from the same memory address, but concurrent writes (to the same address) could not happen.", "We call the model from [8] an Concurrent Read, Exclusive Write (CREW) PRAM model.", "A CRCW PRAM consists of a sequence of numbered processors $P_0, P_1, \\dots $ .", "These processors have all the natural instructions of a normal RAM such as addition, subtraction, and conditional branching based on the equality and less-than operators.", "There is an infinite amount of common memory the processors have access to.", "The processors have instructions to read from and write to the common memory.", "In addition, a processor $P_i$ has an instruction to obtain its unique index $i$ .", "A PRAM also has a function $P:{\\mathbb {N}}\\rightarrow {\\mathbb {N}}$ which defines a bound on the number of processors given the size of the input.", "All the processors have the same program and run synchronized in a single instruction, multiple data (SIMD) fashion.", "In other words, all processors execute the program in lock-step.", "Parallelism is achieved by distributing the data elements over the processors and having the processors apply the program instructions on `their' data elements.", "Initially, given input consisting of $n$ data elements, the CRCW PRAM assumes that the input is stored in the first $n$ registers of the common memory, and starts the first $P(n)$ processors $P_0, P_1, \\dots ,P_{P(n)-1}$ .", "We need to define what the behaviour of the machine will be whenever a concurrent write happens.", "The way to handle this memory contention in concurrent writes is usually by assuming one of the following: (Common) All processors try to write the same value and succeed, otherwise, the writes are not legal and fail; (Arbitrary) Only one arbitrary attempt to write succeeds; (Priority) Only the processor with the lowest index succeeds in writing.", "The algorithm proposed in this paper works if we make either the arbitrary or the priority assumption.", "In Section we explain how we can adapt it to work under the common assumption.", "A parallel program for a PRAM is called optimal w.r.t.", "a sequential algorithm if the total work done by the program does not exceed the work done by the sequential algorithm.", "More precisely, if $T$ is the parallel run time and $P$ the number of processors used, then the algorithm is optimal w.r.t.", "a sequential algorithm running in $S$ steps if $P\\cdot T \\in \\mathcal {O}(S)$ .", "The computational complexity of these models is well studied and there is a close relation between circuit complexity and the complexity of PRAM algorithms [20]." 
], [ "Strong Bisimulation", "To formalise concurrent system behaviour, we use LTSs.", "Definition 1 (Labeled Transition System) A Labeled Transition System (LTS) is a three-tuple $A=(S, Act, \\rightarrow )$ where $S$ is a finite set of states, $Act$ a finite set of action labels, and $\\rightarrow \\subseteq S\\times Act \\times S$ the transition relation.", "Let $A = (S, Act, {\\rightarrow })$ be an LTS.", "Then, for any two states $s,t\\in S$ and $a \\in Act$ , we write $s\\mathrel {\\text{$\\xrightarrow{}$}} t$ iff $(s,a,t)\\in {\\rightarrow }$ .", "Kripke structures differ from LTSs in the fact that the states are labelled as opposed to the transitions.", "In the current paper, for convenience, instead of using Kripke structures where appropriate, we reason about LTSs with a single action label, i.e., $|\\mathit {Act}| = 1$ .", "Computing the coarsest partition of such an LTS can be done in the same way as for Kripke structures, apart from the fact that in the latter case, a different initial partition is computed that is based on the state labels (see, for instance, [9]).", "Definition 2 (Strong bisimulation) On an LTS $A = (S,Act, {\\rightarrow })$ a relation $R\\subseteq S\\times S$ is called a strong bisimulation relation if and only if it is symmetric and for all $s,t\\in S$ with $s R t$ and for all $a\\in Act$ with $s\\mathrel {\\text{$\\xrightarrow{}$}} s^{\\prime }$ , we have: $\\exists t^{\\prime } \\in S. t\\mathrel {\\text{$\\xrightarrow{}$}} t^{\\prime } \\wedge s^{\\prime } R t^{\\prime }$ Whenever we refer to bisimulation we mean strong bisimulation.", "Two states $s, t \\in S$ in an LTS $A$ are called bisimilar, denoted by $s t$ , iff there is some bisimulation relation $R$ for $A$ that relates $s$ and $t$ .", "A partition $\\pi $ of a finite set of states $S$ is a set of subsets that are pairwise disjoint and whose union is equal to $S$ , i.e., $\\bigcup _{B\\in \\pi } B = S$ .", "Every element $B\\in \\pi $ of this partition $\\pi $ is called a block.", "We call partition $\\pi ^{\\prime }$ a refinement of $\\pi $ iff for every block $B^{\\prime }\\in \\pi ^{\\prime }$ there is a block $B \\in \\pi $ such that $B^{\\prime } \\subseteq B$ .", "We say a partition $\\pi $ of a finite set $S$ induces the relation $R = \\lbrace (s,t) \\mid \\exists B \\in \\pi .", "s \\in B \\wedge t \\in B \\rbrace $ .", "This is an equivalence relation of which the blocks of $\\pi $ are the equivalence classes.", "Given an LTS $A = (S, Act, {\\rightarrow })$ and two states $s,t\\in S$ we say that $s$ reaches $t$ with action $a\\in Act$ iff $s\\mathrel {\\text{$\\xrightarrow{}$}}t$ .", "A state $s$ reaches a set $U\\subseteq S$ with an action $a$ iff there is a state $t\\in U$ such that $s$ reaches $t$ with action $a$ .", "A set of states $V\\subseteq S$ is called stable under a set of states $U\\subseteq S$ iff for all actions $a$ either all states in $V$ reach $U$ with $a$ , or no state in $V$ reaches $U$ with $a$ .", "A partition $\\pi $ is stable under a set of states $U$ iff each block $B\\in \\pi $ is stable under $U$ .", "The partition $\\pi $ is called stable iff it is stable under all its own blocks $B\\in \\pi $ .", "Fact 1 [16] Stability is inherited under refinement, i.e.", "given a partition $\\pi $ of $S$ and a refinement $\\pi ^{\\prime }$ of $\\pi $ , then if $\\pi $ is stable under $U\\subseteq S$ , then $\\pi ^{\\prime }$ is also stable under $U$ .", "The main problem we focus on in this work is called the bisimulation refinement problem (BCRP).", "It is defined as 
follows: Input: An LTS $M = (S,Act, {\\rightarrow })$ .", "Output: The partition $\\pi $ of $S$ which is the coarsest partition, i.e., has the smallest number of blocks, that forms a bisimulation relation.", "In a Kripke structure, the transition relation forms a single binary relation, since the transitions are unlabelled.", "This is also the case when an LTS has a single action label.", "In that case, the problem is called the Relational Coarsest Partition Problem (RCPP) [11], [13], [16].", "This problem is defined as follows: Input: A set $S$ , a binary relation $\\rightarrow : S\\times S$ and an initial partition $\\pi _0$ Output: The partition $\\pi $ which is the coarsest refinement of $\\pi _0$ and which is a bisimulation relation.", "It is known that BCRP is not significantly harder than RCPP as there are intuitive translations from LTSs to Kripke structures [7].", "However, some non-trivial modifications can speed-up the algorithm for some cases, hence we discuss both problems separately.", "In Section , we discuss the basic parallel algorithm for RCPP, and in Section , we discuss the modifications required to efficiently solve the BCRP problem for LTSs with multiple action labels." ], [ "A Sequential Algorithm", "In this section, we discuss a sequential algorithm based on one of Kanellakis and Smolka [11] for RCPP.", "This is the basic algorithm which we adapt to the parallel PRAM algorithm.", "The algorithm starts with an input partition $\\pi _0$ and refines all blocks until a stable partition is reached.", "This stable partition will be the coarsest refinement that defines a bisimulation relation.", "The sequential algorithm, Algorithm REF , works as follows.", "Given are a set $S$ , a relation $\\rightarrow \\subseteq S\\times S$ , and an initial partition $\\pi _0$ of $S$ .", "Initially, we mark the partition as not necessarily stable under all blocks by putting these blocks in a set $\\mathit {Unstable}$ .", "In any iteration of the algorithm, if a block $B$ of the current partition is not in $\\mathit {Unstable}$ , then the current partition is stable under $B$ .", "If $\\mathit {Unstable}$ is empty, the partition is stable under all its blocks, and the partition represents the required bisimulation.", "As long as some blocks are in $\\mathit {Unstable}$ (line 3), a single block $B\\in \\pi $ is taken from this set (line 4) and we split the current partition such that it becomes stable under $B$ .", "Therefore, we refer to this block as the splitter.", "The set $S^{\\prime } = \\lbrace s \\in S \\mid \\exists t \\in B. 
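To make the refinement loop concrete, the following is a compact, hypothetical host-side C++ rendering of Algorithm 1 (the function and variable names, such as `rev` and `kanellakis_smolka`, are ours, not from the paper). It favours clarity over the data structures needed for the best sequential bounds: states are `0..n-1`, `rev[t]` lists the sources of transitions into `t`, and `block[s]` is the block label of `s`.

```cuda
#include <map>
#include <set>
#include <utility>
#include <vector>

// Sketch of Algorithm 1: refine `block` until it is stable under all blocks.
std::vector<int> kanellakis_smolka(int n,
                                   const std::vector<std::vector<int>> &rev,
                                   std::vector<int> block) {
  std::set<int> unstable;                       // block labels, cf. Unstable
  for (int s = 0; s < n; ++s) unstable.insert(block[s]);
  if (unstable.empty()) return block;           // no states, nothing to do
  int next_label = *unstable.rbegin() + 1;      // fresh labels for new blocks
  while (!unstable.empty()) {
    int B = *unstable.begin();                  // pick a splitter B (lines 4-5)
    unstable.erase(unstable.begin());
    std::set<int> pre;                          // S' = reverse image of B
    for (int t = 0; t < n; ++t)
      if (block[t] == B)
        for (int s : rev[t]) pre.insert(s);
    // Find every block B' with 0 < |B' ∩ S'| < |B'|  (line 7).
    std::map<int, std::pair<int, int>> count;   // label -> (|B' ∩ S'|, |B'|)
    for (int s = 0; s < n; ++s) {
      auto &c = count[block[s]];
      if (pre.count(s)) c.first += 1;
      c.second += 1;
    }
    std::map<int, int> fresh;                   // label for the B' \ S' part
    for (const auto &kv : count)
      if (kv.second.first > 0 && kv.second.first < kv.second.second)
        fresh[kv.first] = next_label++;
    // Perform the splits (lines 8-10): B' ∩ S' keeps the old label.
    for (int s = 0; s < n; ++s) {
      auto it = fresh.find(block[s]);
      if (it != fresh.end()) {
        unstable.insert(block[s]);              // both parts become unstable
        unstable.insert(it->second);
        if (!pre.count(s)) block[s] = it->second;
      }
    }
  }
  return block;
}
```

With `rev` computed once from the transition list and `block` initialised to the labels encoding $\pi _0$ , the returned labelling encodes the coarsest stable refinement.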
s\\rightarrow t \\rbrace $ is the reverse image of $B$ (line 6).", "This set consists of all states that can reach $B$ , and we use $S^{\\prime }$ to define our new blocks.", "All blocks $B^{\\prime }$ that have a non-empty intersection with $S^{\\prime }$ , i.e., $B^{\\prime } \\cap S^{\\prime } \\ne \\emptyset $ , and are not a subset of $S^{\\prime }$ , i.e., $B^{\\prime }\\cap S^{\\prime } \\ne B^{\\prime }$ (line 7), are split in the subset of states that reach $S^{\\prime }$ and the subset of states that do not reach $S^{\\prime }$ (lines 8-9).", "These two new blocks are added to the set of $\\mathit {Unstable}$ blocks (line 10).", "The number of states is finite, and blocks can be split only a finite number of times.", "Hence, blocks are only finitely often put in $\\mathit {Unstable}$ , and so the algorithm terminates.", "[t] $\\pi := \\pi _0$ $\\mathit {Unstable} := \\pi $ $\\mathit {Unstable} \\ne \\emptyset $ $B\\in \\mathit {Unstable}$ $\\mathit {Unstable} := \\mathit {Unstable} \\setminus \\lbrace B\\rbrace $ $S^{\\prime } := \\lbrace s \\in S \\mid \\exists t \\in B. s\\mathrel {\\text{$\\xrightarrow{}$}} t \\rbrace $ $B^{\\prime }\\in \\pi \\textit { with } \\emptyset \\subset B^{\\prime } \\cap S^{\\prime } \\subset B^{\\prime }$ Split $B^{\\prime }$ into $B^{\\prime } \\cap S^{\\prime }$ and $B^{\\prime } \\setminus S^{\\prime }$ $\\pi := \\pi \\setminus \\lbrace B\\rbrace $ $\\pi := \\pi \\cup \\lbrace B^{\\prime }\\cap S^{\\prime }, B^{\\prime } \\setminus S^{\\prime }\\rbrace $ $\\mathit {Unstable} := \\mathit {Unstable} \\cup \\lbrace B^{\\prime }\\cap S^{\\prime }, B^{\\prime } \\setminus S^{\\prime }\\rbrace $ Sequential algorithm based on Kanellakis-Smolka" ], [ "The PRAM Algorithm", "Next, we describe a PRAM algorithm to solve RCPP that is based on the sequential algorithm given in Algorithm REF ." ], [ "Block representation", "Given an LTS $A = (S, Act, \\rightarrow )$ with $|A| = 1$ and $|S| = n$ states, we assume that the states are labeled with unique indices $0, \\dots , n-1$ .", "A partition $\\pi $ in the PRAM algorithm is represented by assigning a block label from a set of block labels $L_B$ to every state.", "The number of blocks can never be larger than the number of states, hence, we use the indices of the states as block labels: $L_B=S$ .", "We exploit this in the PRAM algorithm to efficiently select a new block label whenever a new block is created.", "We select the block label of a new block by electing one of its states to be the leader of that block and using the index of that state as the block label.", "By doing so, we maintain an invariant that the leader of a block is also a member of the block.", "In a partition $\\pi $ , whenever a block $B\\in \\pi $ is split into two blocks $B^{\\prime }$ and $B^{\\prime \\prime }$ , the leader $s$ of $B$ which is part of $B^{\\prime }$ becomes the leader of $B^{\\prime }$ , and for $B^{\\prime \\prime }$ , a new state $t \\in B^{\\prime \\prime }$ is elected to be the leader of this new block.", "Since the new leader is not part of any other block, the label of $t$ is fresh with respect to the block labels that are used for the other blocks.", "This method of using state leaders to represent subsets was first proposed in [24], [23]." 
], [ "Data structures", "The common memory contains the following information: $n:{\\mathbb {N}}$ , the number of states of the input.", "$m:{\\mathbb {N}}$ , the number of transitions of the input relation.", "The input, a fixed numbered list of transitions.", "For every index $0\\le i<m$ of a transition, a source $\\textit {source}_i\\in S$ and target $\\textit {target}_i\\in S$ are given, representing the transition $\\textit {source}_i\\rightarrow \\textit {target}_i$ .", "$C: L_B\\cup \\lbrace \\bot \\rbrace $ , the label of the current block that is used as a splitter; $\\bot $ indicates that no splitter has been selected.", "The following is stored in lists of size $n$ , for each state with index $i$ : $\\mathit {mark}_i: {\\mathbb {B}}$ , a mark indicating whether state $i$ is able to reach the splitter.", "$\\mathit {block}_i:L_B$ , the block of which state $i$ is a member.", "The following is stored in lists of size $n$ , for each potential block with block label $i$ : $\\mathit {new\\_leader}_i : L_B$ the leader of the new block when a split is performed.", "$\\mathit {unstable}_i : {\\mathbb {B}}$ indicating whether $\\pi $ is possibly unstable w.r.t.", "the block.", "As input, we assume that each state with index $i$ has an input variable $I_i\\in L_B$ that is the initial block label.", "In other words, the values of the $I_i$ variables together encode $\\pi _0$ .", "Using this input, the initial values of the block label $\\textit {block}_i$ variables are calculated to conform to our block representation with leaders.", "Furthermore in the initialization, $\\textit {unstable}_i = \\mathrm {false}$ for all $i$ that are not used as block label, and $\\mathrm {true}$ otherwise." ], [ "The algorithm", "We provide our first PRAM algorithm in Algorithm REF .", "The PRAM is started with $max(m,n)$ processors.", "These processors are dually used for transitions and states.", "The algorithm performs initialisation (lines 1-6), after which each block has selected a new leader (lines 3-4), ensuring that the leader is one of its own states, and the initial blocks are set to unstable.", "Subsequently, the algorithm enters a single loop that can be explained in three separate parts.", "Splitter selection (lines 8-14), executed by $n$ processors.", "Every variable $mark_i$ is set to $\\mathrm {false}$ .", "After this, every processor with index $i$ will check $\\mathit {unstable}_i$ .", "If block $i$ is marked unstable the processor tries to write $i$ in the variable $C$ .", "If multiple write accesses to $C$ happen concurrently in this iteration, then according to both the arbitrary and the priority PRAM model (see Section ), only one process $j$ will succeed in writing, setting $C:=j$ as splitter in this iteration.", "Mark states (lines 15-17), executed by $m$ processors.", "Every processor $i$ is responsible for the transition $s_i\\mathrel {\\text{$\\xrightarrow{}$}} t_i$ and checks if $t_i$ ($\\mathit {target}_i$ ) is in the current block $C$ (line REF ).", "If this is the case the processor writes $\\mathrm {true}$ to $mark_{\\mathit {source}_i}$ where $\\mathit {source}_i$ is $s_i$ .", "This mark now indicates that $s_i$ reaches block $C$ .", "Performing splits (lines 18-26), executed by $n$ processors.", "Every processor $i$ compares the mark of state $i$ , i.e., $\\mathit {mark}_i$ , with the mark of the leader of the block in which state $i$ resides, i.e., $\\mathit {mark}_{\\mathit {block}_i}$ (line REF ).", "If the marking is different, state $i$ has to be split from $\\mathit 
{block}_i$ into a new block.", "At line 21, a new leader is elected among the states that form the newly created block.", "The index of this leader is stored in $\\mathit {new\\_leader}_{\\mathit {block}_i}$ .", "The instability of block $\\mathit {block}_i$ is set to $\\mathrm {true}$ (line 22).", "After that, all involved processors update the block index for their state (line 23) and update the stability of the new block (line 24).", "Figure: One iteration of Algorithm 2.", "The steps of the program are illustrated in Figure REF .", "The notation $B_{s_i}$ refers to a block containing all states that have state $s_i$ as their block leader.", "In the figure on the left, we have two blocks $B_{s_1}$ and $B_{s_4}$ , of which at least $B_{s_4}$ is marked unstable.", "Block $B_{s_4}$ is selected to be the splitter, i.e., $C = s_4$ at line 12 of Algorithm 2.", "In the figure in the middle, $\\mathit {mark}_i$ is set to $\\mathrm {true}$ for each state $i$ that can reach $B_{s_4}$ (line 16).", "Finally, block $B_{s_4}$ is set to stable (line 19), all states compare their mark with the leader's mark, and the processor working on state $s_3$ discovers that the mark of $s_3$ differs from the mark of $s_1$ , so $s_3$ is elected as leader of the new block $B_{s_3}$ at line 21 of Algorithm 2.", "Both $B_{s_1}$ and $B_{s_3}$ are set to unstable (lines 22 and 24).", "The algorithm repeats execution of the while-loop until all blocks are marked stable.", "Algorithm 2 (the algorithm for each processor $P_{i}$ in the PRAM, with $i\\in \\lbrace 0, \\dots , max(n,m)-1\\rbrace $ ):
 1: if $i < n$ then  (initialize all variables)
 2:   $\\mathit {unstable}_i := \\mathrm {false}$
 3:   $\\mathit {new\\_leader}_{I_i} := i$
 4:   $\\mathit {block}_i := \\mathit {new\\_leader}_{I_i}$
 5:   $\\mathit {unstable}_{\\mathit {block}_i} := \\mathrm {true}$
 6: end if
 7: do
 8:   $C := \\bot $
 9:   if $i < n$ then
10:     $\\mathit {mark}_i := \\mathrm {false}$
11:     if $\\mathit {unstable}_i$ then
12:       $C := i$
13:     end if
14:   end if
15:   if $i < m$ and $\\mathit {block}_{\\mathit {target_i}} = C$ then
16:     $\\mathit {mark}_{\\mathit {source_i}} := \\mathrm {true}$
17:   end if
18:   if $i < n$ and $C \\ne \\bot $ then
19:     $\\mathit {unstable}_{C} := \\mathrm {false}$
20:     if $\\mathit {mark}_i \\ne \\mathit {mark}_{\\mathit {block}_i}$ then
21:       $\\mathit {new\\_leader}_{\\mathit {block}_i} := i$
22:       $\\mathit {unstable}_{\\mathit {block}_i} := \\mathrm {true}$
23:       $\\mathit {block}_i := \\mathit {new\\_leader}_{\\mathit {block}_i}$
24:       $\\mathit {unstable}_{\\mathit {block}_i} := \\mathrm {true}$
25:     end if
26:   end if
27: while $C \\ne \\bot $" ], [ "Correctness", "The $\\mathit {block}_i$ list in the common memory at the start of iteration $k$ defines a partition $\\pi _k$ in which states $s\\in S$ with equal block labels $\\mathit {block}_s$ form the blocks: $\\pi _k = \\lbrace \\lbrace s \\in S \\mid \\mathit {block}_s = s^{\\prime }\\rbrace \\mid s^{\\prime }\\in S\\rbrace \\setminus \\lbrace \\emptyset \\rbrace $", "A run of the program produces a sequence $\\pi _0, \\pi _1, \\dots $ of partitions.", "Observe that partition $\\pi _k$ is a refinement of every partition $\\pi _0,\\pi _1,\\dots , \\pi _{k-1}$ , since blocks are only split and never merged.", "A partition $\\pi $ induces a relation of which the blocks are the equivalence classes.", "For an input partition $\\pi _0$ , we write $\\sim _{\\pi _0}$ for the relation induced by the coarsest refinement of $\\pi _0$ that is a bisimulation relation.", "We now prove that Algorithm 2 indeed solves RCPP.", "We first introduce Lemma 1, which states an invariant of the execution: states that are related by $\\sim _{\\pi _0}$ are never split into different blocks.", "This lemma implies that if a refinement forms a bisimulation relation, it is the coarsest.", "Lemma 1 Let $S$ be the input set of states, $\\rightarrow :S\\times S$ the input relation and $\\pi _0$ the input partition.", "Let $ \\pi _1,\\pi _2, \\dots $ be the sequence of partitions produced by Algorithm 2; then for all initial blocks $B\\in \\pi _0$ , states $s,t\\in B$ and iterations $k\\in {\\mathbb {N}}$ : $ s\\sim _{\\pi _0} t \\Rightarrow \\exists B^{\\prime }\\in \\pi _k .\\ s,t\\in B^{\\prime }$", "This is proven by induction on $k$ .", "In the base case, $\\pi _0$ , the property holds trivially.", "Now assume for a particular $k\\in {\\mathbb {N}}$ that the property holds.", "We know that the partition $\\pi _{k+1}$ is obtained by splitting with respect to a block $C\\in \\pi _k$ .", "For two states $s,t\\in S$ with $s\\sim _{\\pi _0} t$ , we know that $s$ and $t$ are in the same block in $\\pi _k$ .", "In the case that neither $s$ nor $t$ can reach $C$ , we have $\\mathit {mark}_s = \\mathit {mark}_t =\\mathrm {false}$ .", "Since they were in the same block, they will be in the same block in $\\pi _{k+1}$ .", "Now consider the case that at least one of the states is able to reach $C$ .", "Without loss of generality, say that $s$ is able to reach $C$ .", "Then there is a transition $s\\rightarrow s^{\\prime }$ with $s^{\\prime }\\in C$ .", "By Definition REF , there exists a $t^{\\prime }\\in S$ such that $t\\rightarrow t^{\\prime }$ and $s^{\\prime }\\sim _{\\pi _0} t^{\\prime }$ .", "By the induction hypothesis we know that, since $s^{\\prime } \\sim _{\\pi _0} t^{\\prime }$ , the states $s^{\\prime }$ and $t^{\\prime }$ must be in the same block in $\\pi _k$ , i.e., $t^{\\prime }$ is in $C$ .", "This witnesses that $t$ is also able to reach $C$ , so we must have $\\mathit {mark}_s = \\mathit {mark}_t = \\mathrm {true}$ .", "Since the states $s$ and $t$ are both marked and are in the same block in $\\pi _k$ , they will remain in the same block in $\\pi _{k+1}$ .", "Lemma 2 Let $S$ be the input set of states with $\\rightarrow :S \\times S$ , let $L_B= S$ be the block labels, and let $\\pi _n$ be the partition stored in the memory after termination of Algorithm 2.", "Then the relation induced by $\\pi _n$ is a bisimulation relation.", "Since the program finished, we know that for all block indices $i\\in L_B$ we have $\\mathit {unstable}_i = \\mathrm {false}$ .", "For a block index $i \\in L_B$ , $\\mathit {unstable}_i$ is set to $\\mathrm {false}$ if the partition $\\pi _k$ , after iteration $k$ , is stable under the block with index $i$ , and set to $\\mathrm {true}$ if that block is split.", "So, by Fact REF , we know that $\\pi _n$ is stable under every block $B$ , hence stable.", "Next, we prove that the relation $R$ induced by the stable partition $\\pi _n$ is a bisimulation relation.", "Assume states $s,t\\in S$ with $sRt$ are in block $B\\in \\pi _n$ .", "Consider a transition $s\\rightarrow s^{\\prime }$ with $s^{\\prime }\\in S$ .", "State $s^{\\prime }$ is in some block $B^{\\prime } \\in \\pi _n$ , and since the partition is stable under block $B^{\\prime }$ , and $s$ is able to reach $B^{\\prime }$ , by the definition of stability we know that $t$ is also able to reach $B^{\\prime }$ .", "Therefore, there must be a state $t^{\\prime }\\in B^{\\prime }$ such that $t\\rightarrow t^{\\prime }$ and $s^{\\prime }Rt^{\\prime }$ .", "Finally, since $R$ is an equivalence relation, it is also symmetric; therefore it is a bisimulation relation.", "Theorem 1 The partition resulting from executing Algorithm 2 forms the coarsest relational partition for a set of states $S$ and a transition relation $\\rightarrow : S 
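Before the formal correctness argument, the following sequential numpy sketch simulates the phases of Algorithm 2; the array layout is our own, and taking the first unstable index stands in for the arbitrary concurrent write that selects the splitter.

```python
import numpy as np

def refine(source, target, block, unstable):
    """Simulate Algorithm 2: block[i] is the leader (label) of state i's
    block; unstable[b] says whether the partition may be unstable w.r.t.
    the block labelled b. Runs until every block is stable."""
    while unstable.any():
        C = int(np.flatnonzero(unstable)[0])     # stand-in for the write race
        unstable[C] = False                      # line 19
        mark = np.zeros(block.size, dtype=bool)
        mark[source[block[target] == C]] = True  # lines 15-16: states reaching C
        split = mark != mark[block]              # line 20: compare with leader
        leader_of = {}
        for i in np.flatnonzero(split):          # line 21: elect one new leader
            leader_of.setdefault(int(block[i]), int(i))
        for i in np.flatnonzero(split):          # lines 22-24: move the states
            b = int(block[i])
            unstable[b] = True
            block[i] = leader_of[b]
            unstable[block[i]] = True
    return block
```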
\\times S$ , solving RCPP.", "By Lemma 2, the resulting partition is a bisimulation relation.", "Lemma 1 implies that it is the coarsest refinement which is a bisimulation." ], [ "Complexity analysis", "Every step in the body of the while-loop can be executed in constant time, so the asymptotic complexity of this algorithm is given by the number of iterations.", "Theorem 2 RCPP on an input with $m$ transitions and $n$ states is solved by Algorithm 2 in $\\mathcal {O}(n)$ time using $max(m,n)$ CRCW PRAM processors.", "In iteration $k \\in {\\mathbb {N}}$ of the algorithm, let $N_k \\in {\\mathbb {N}}$ be the total number of blocks and $U_k \\in {\\mathbb {N}}$ the number of unstable blocks.", "Initially, $N_0 = U_0 = |\\pi _0|$ .", "In every iteration $k$ , a number of blocks $l_k \\in {\\mathbb {N}}$ is split, resulting in $l_k$ new blocks, so the new total number of blocks at the end of iteration $k$ is $ N_{k+1} = N_k +l_k $ .", "First, the current block $C$ of iteration $k$ , which was unstable, is set to stable, which causes the number of unstable blocks to decrease by one.", "In this iteration $k$ , the $l_k$ blocks $B_1, \\dots , B_{l_k}$ are split, resulting in $l_k$ newly created blocks, which are all unstable.", "A number $l_k^{\\prime } \\le l_k$ of the blocks $B_1, \\dots , B_{l_k}$ were stable and are set to unstable again.", "The block $C$ that was set to stable is possibly one of these $l_k^{\\prime }$ blocks that were stable and set to unstable again.", "The total number of unstable blocks at the end of iteration $k$ is therefore $U_{k+1} = U_{k} + l_k + l_k^{\\prime } - 1$ .", "For all $k\\in {\\mathbb {N}}$ , at iteration $k$ the total number of blocks is $N_k = \\sum _{i=0}^{k-1}l_i+|\\pi _0|$ and the number of unstable blocks is $U_k = \\sum _{i=0}^{k-1}(l_i + l_i^{\\prime }) - k + |\\pi _0|$ .", "The number of iterations is hence given by $k = \\sum _{i=0}^{k-1}(l_i + l_i^{\\prime }) - U_k + |\\pi _0|$ .", "By definition, $l_i^{\\prime } \\le l_i$ , and the total number of newly created blocks is $\\sum _{i=0}^{k-1}l_i = N_k - |\\pi _0|$ , hence $\\sum _{i=0}^{k-1}(l_i + l_i^{\\prime })\\le 2\\sum _{i=0}^{k-1}l_i\\le 2N_k-2|\\pi _0|$ .", "The number of unstable blocks is always non-negative, i.e., $U_k \\ge 0$ , and the total number of blocks can never be larger than the number of states, i.e., $N_k \\le n$ , so the total number of iterations $z$ is bounded by $z \\le 2N_z - |\\pi _0| \\le 2n - |\\pi _0|$ ." ], [ "Bisimulation Coarsest Refinement Problem", "In this section we extend our algorithm to the Bisimulation Coarsest Refinement Problem (BCRP), i.e., to LTSs with multiple action labels.", "Solving BCRP can in principle be done by translating an LTS to a Kripke structure, for instance by using the method described in [18].", "This translation introduces a new state for every transition, resulting in a Kripke structure with $n+m$ states.", "If the number of transitions is significantly larger than the number of states, then the number of iterations of our algorithm increases undesirably."
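For illustration, here is a minimal sketch of the kind of translation meant here: one fresh, action-labelled state per transition, giving n + m states and an unlabelled relation. The exact construction of [18] may differ, so treat the function below as an assumption-laden illustration only.

```python
def lts_to_kripke(transitions, n):
    """Sketch: each labelled transition s -a-> t is replaced by a fresh
    intermediate state carrying label a, so the action information moves
    into state labels and the relation becomes unlabelled."""
    edges, label = [], {}
    fresh = n                        # new states get indices n, n+1, ...
    for (s, a, t) in transitions:
        label[fresh] = a             # the action becomes a state label
        edges.append((s, fresh))
        edges.append((fresh, t))
        fresh += 1
    return edges, label
```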
], [ "The PRAM Algorithm", "Instead of introducing more states, we introduce multiple marks per state, but in total we have no more than $m$ marks.", "For each state $s$ , we use a mark variable for each different outgoing action label relevant for $s$ , i.e., for each $a$ for which there is a transition $s \\mathrel {\\text{$\\xrightarrow{}$}} s^{\\prime }$ to some state $s^{\\prime }$ .", "Each state may have a different set of outgoing action labels and thus a different set of marks.", "Therefore, we first perform a preprocessing procedure in which we group together states that have the same set of outgoing action labels.", "This is valid, since two bisimilar states must have the same outgoing actions.", "That two states of the same block have the same set of action labels is then an invariant of the algorithm, since in the sequence of produced partitions, each partition is a refinement of the previous one.", "For the extended algorithm, we need to maintain extra information in addition to the information needed for Algorithm REF .", "For an input LTS $A = (S, Act, \\mathrel {\\text{$\\xrightarrow{}$}})$ with $n$ states and $m$ transitions this is the following extra information: Each action label has an index $a \\in \\lbrace 0, \\dots ,|Act| -1\\rbrace $ .", "The following is stored in lists of size $m$ , for each transition $s\\mathrel {\\text{$\\xrightarrow{}$}} t$ with transition index $i \\in \\lbrace 0, \\dots , m-1\\rbrace $ : $a_i := a$ $\\mathit {order}_i : {\\mathbb {N}}$ , the order of this action label, with respect to the source state $s$ .", "E.g., if a state s has the list $[1, 3, 6]$ of outgoing action labels, and transition $i$ has label 3, then $\\mathit {order}_i$ is 1 (we start counting from 0).", "$\\mathit {mark} : [{\\mathbb {B}}]$ , a list of up to $m$ marks, in which there is a mark for every state, action pair for which it holds that the state has at least one outgoing transition labelled with that action.", "This list can be interpreted as the concatenation of lists $\\mathit {mark}_s$ for all states $s \\in S$ .", "Essentially, we have for each state $s \\in S$ : $\\mathit {off}(s) : {\\mathbb {N}}$ , the offset to access the marks of a given state $s$ in $\\mathit {mark}$ .", "$\\mathit {mark}_{\\mathit {off}(s)} : [ {\\mathbb {B}}]$ , a list of marks (the list starting at position $\\mathit {off}(s)$ in $\\mathit {mark}$ ), where each mark indicates if the state can reach the current block with the corresponding action.", "We also refer to this list as $\\mathit {mark}_s$ .", "E.g., if state $s$ has actions $[1, 3, 6]$ and only actions 1 and 6 can reach the current block, this list has the contents $[{\\mathrm {true}}, {\\mathrm {false}}, {\\mathrm {true}}]$ .", "$\\mathit {nr\\_marks}_s$ , the number of marks this state has, thus the length of list $\\mathit {mark}_s$ .", "$\\mathit {mark\\_length}$ : The total length of all the $\\mathit {mark}_s$ lists together, i.e., the sum of all the $\\mathit {nr\\_marks}_s$ .", "This allows us to reset all marks in constant time using $\\mathit {mark\\_length}$ processors.", "This number is not larger than the number of transitions ($\\mathit {mark\\_length} \\le m$ ).", "In a list of size $n$ , we store for each state $s \\in S$ a variable $\\mathit {split}_s$ .", "This indicates if the state will be split off from its block.", "With this extra information, we can alter Algorithm REF to work with labels.", "The new version is given in Algorithm REF .", "The changes involve the following: Lines REF -REF : Reset the 
$\\mathit {mark}$ list.", "Line REF : Reset the $\\mathit {split}$ list.", "Line REF : When marking the transitions, we do this for the correct action label, using $\\mathit {order}_i$ .", "Note the indexing into $\\mathit {mark}$ : it involves the offset for the state $\\mathit {source}_i$ , and $\\mathit {order}_i$ .", "Lines REF -REF : We tag a state to be split off when it differs from the block leader for any action label.", "Line REF : If a state was tagged to be split off in the previous step, it is split off from its leader.", "Line REF : If any block was split, the partition may not be stable w.r.t. the splitter.", "Algorithm 3 (the algorithm for BCRP; the lines marked $\\dagger $ differ from Algorithm 2):
if $i < n$ then  (initialization)
  $\\mathit {unstable}_i := \\mathrm {false}$
  $\\mathit {unstable}_{\\mathit {block}_i} := \\mathrm {true}$
do
  $C := \\bot $
  if $i < \\mathit {mark\\_length}$ then  $\\dagger $
    $\\mathit {mark}_i := {\\mathrm {false}}$  $\\dagger $
  if $i < n$ then
    $\\mathit {split}_i := {\\mathrm {false}}$  $\\dagger $
    if $\\mathit {unstable}_i$ then
      $C := i$
  if $i < m$ and $\\mathit {block}_{\\mathit {target}_i} = C$ then
    $\\mathit {mark}_{\\mathit {off}({\\mathit {source}_i})+\\mathit {order}_i} := {\\mathrm {true}}$  $\\dagger $
  if $i < m$ and $\\mathit {mark}_{\\mathit {off}({\\mathit {source}_i})+\\mathit {order}_i} \\ne \\mathit {mark}_{\\mathit {off}(\\mathit {block}_{\\mathit {source}_i})+\\mathit {order}_i}$ then  $\\dagger $
    $\\mathit {split}_{\\mathit {source}_i} := {\\mathrm {true}}$  $\\dagger $
  if $i < n$ and $C \\ne \\bot $ then
    $\\mathit {unstable}_{C} := \\mathrm {false}$
    if $\\mathit {split}_{i}$ then  $\\dagger $
      $\\mathit {new\\_leader}_{\\mathit {block}_i} := i$
      $\\mathit {unstable}_{\\mathit {block}_i} := \\mathrm {true}$
      $\\mathit {block}_i := \\mathit {new\\_leader}_{\\mathit {block}_i}$
      $\\mathit {unstable}_{\\mathit {block}_i} := \\mathrm {true}$
      $\\mathit {unstable}_C := \\mathrm {true}$  $\\dagger $
while $C \\ne \\bot $", "Figure: An example LTS and its derived preprocessing information."
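The per-(state, action) indexing of Algorithm 3 can be made concrete with a short sequential sketch; the function name and signature are our own, and the plain loops stand in for the m parallel processors.

```python
def mark_and_tag(source, target, order, off, block, mark, split, C):
    """Sequential sketch of the marking phase of Algorithm 3. mark is the
    concatenated per-(state, action) mark list; off[s] is state s's offset
    and order[i] the position of transition i's action label within the
    label list of its source state."""
    for i in range(len(source)):          # done by the m processors at once
        if block[target[i]] == C:
            mark[off[source[i]] + order[i]] = True
    for i in range(len(source)):
        s = source[i]
        # The leader block[s] has the same label set as s (the invariant
        # established by preprocessing), so order[i] addresses the
        # corresponding mark of the leader.
        if mark[off[s] + order[i]] != mark[off[block[s]] + order[i]]:
            split[s] = True
```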
], [ "Preprocessing.", "To use the above algorithm, we need to perform two preprocessing steps.", "First, we need to partition the states w.r.t. their sets of outgoing action labels.", "This can be done with an altered version of Algorithm 2: instead of splitting on a block, we split on an action $a \\in Act$ .", "We visit all transitions, and we mark the source state of each transition that has action label $a$ .", "This can be found in Algorithm 4.", "Algorithm 4 (marking the source states for action label $a$ ):
if $i < m$ and $a_{i} = a$ then
  $\\mathit {mark}_{\\mathit {source_i}} := {\\mathrm {true}}$", "After executing Algorithm 4, each block can be split into two blocks: a block that contains the states that have $a$ as an outgoing action label, and a block with the states that do not.", "After doing this for all different action labels, we end up with a partition in which all states of a block have the same set of outgoing action labels, and each pair of states from different blocks has different sets of outgoing action labels.", "Using $m$ processors, this partition can be constructed in $\\mathcal {O}(|Act|)$ time.", "For the second preprocessing step, we need to gather the extra information that is needed in Algorithm 3.", "Only $a_i$ is part of the input; the others need to be calculated.", "We start our preprocessing by sorting the transitions by $(\\mathit {source}_i, a_i)$ , which can be done in $\\mathcal {O}(\\log m)$ time with $m$ processors, for instance using a parallel merge sort [5].", "In order to calculate $\\mathit {order}_i$ and $\\mathit {nr\\_marks}_s$ , we first calculate $\\mathit {action\\_switch}_i$ for each transition $i$ , which is done in Algorithm 5.", "See Figure REF for an example.", "Now, $\\mathit {order}_i$ can be calculated with a parallel segmented prefix sum [19] (also called a segmented scan) of $\\mathit {action\\_switch}$ .", "A parallel segmented sum can be performed on $\\mathit {action\\_switch}$ to calculate $\\mathit {nr\\_marks}_s$ , where we make sure to set $\\mathit {nr\\_marks}_s$ to 0 if state $s$ has no outgoing transitions.", "Finally, the list of mark offsets $\\mathit {off}_s$ is calculated by applying a parallel prefix sum to $\\mathit {nr\\_marks}_s$ .", "The code in Algorithm 5 takes $\\mathcal {O}(1)$ time on $m$ processors, and a parallel segmented (prefix) sum takes $\\mathcal {O}(\\log m)$ time [19].", "In total, the preprocessing takes $\\mathcal {O}(|Act| + \\log m)$ time.", "Algorithm 5 (preprocessing step needed for Algorithm 3; we calculate $\\mathit {action\\_switch}_i$ , which is needed for the $\\mathit {order}_i$ and $\\mathit {nr\\_marks}_s$ variables):
if $i < m$ then
  if $i = 0$ or $\\mathit {source}_i \\ne \\mathit {source}_{i-1}$ or $a_i = a_{i-1}$ then
    $\\mathit {action\\_switch}_i := 0$
  else
    $\\mathit {action\\_switch}_i := 1$"
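The whole second preprocessing step can be sketched sequentially with numpy; the function name, the segmented-scan encoding, and the return layout are our own choices, standing in for the parallel sort and segmented (prefix) sums described above.

```python
import numpy as np

def preprocess(source, action, n):
    """Compute order_i, nr_marks_s and off_s from a transition list.
    Sequential numpy sketch; on a PRAM the sort and the scans below
    take O(log m) time with m processors."""
    idx = np.lexsort((action, source))        # sort by (source, action)
    src, act = source[idx], action[idx]
    m = src.size
    new_state = np.ones(m, dtype=bool)        # first transition of a state
    new_state[1:] = src[1:] != src[:-1]
    switch = np.zeros(m, dtype=np.int64)      # action_switch_i
    switch[1:] = (~new_state[1:]) & (act[1:] != act[:-1])
    c = np.cumsum(switch)
    starts = np.flatnonzero(new_state)
    seg = np.cumsum(new_state) - 1            # segment id per transition
    order = c - c[starts][seg]                # segmented prefix sum
    nr_marks = np.zeros(n, dtype=np.int64)    # 0 for states w/o transitions
    last = np.append(starts[1:] - 1, m - 1)   # last transition per state
    nr_marks[src[starts]] = order[last] + 1
    off = np.concatenate(([0], np.cumsum(nr_marks)[:-1]))  # mark offsets
    return idx, order, nr_marks, off          # idx gives the sorted layout
```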
], [ "Complexity & Correctness", "For Algorithm 3, we need to show that it takes a linear number of steps to construct the final partition.", "This is subtle, as an iteration of the algorithm does not necessarily produce a stable block.", "Theorem 3 Algorithm 3 on an input LTS with $n$ states and $m$ transitions will terminate in $\\mathcal {O}(n + |Act|)$ steps.", "The total preprocessing takes $\\mathcal {O}(|Act| + \\log m)$ steps, after which the while-loop is executed on a partition $\\pi _0$ that is the result of the preprocessing on the partition $\\lbrace S\\rbrace $ .", "Every iteration of the while-loop is still executed in constant time.", "Using the structure of the proof of Theorem 2, we derive a bound on the number of iterations.", "At the start of iteration $k\\in {\\mathbb {N}}$ , the total number of blocks and the number of unstable blocks are $N_k,U_k\\in {\\mathbb {N}}$ ; initially $U_0 = N_0 = |\\pi _0|$ .", "In iteration $k$ , a number $l_k$ of blocks is split in two, resulting in $l_k$ new blocks, meaning that $N_{k+1} = N_{k} + l_k$ .", "All $l_k$ new blocks are unstable, and a number $l_k^{\\prime } \\le l_k$ of the old blocks that are split were stable at the start of iteration $k$ and are now unstable.", "If $l_k = l_k^{\\prime } = 0$ , no blocks are split and the current block $C$ becomes stable.", "We indicate this with a variable $c_k$ : $c_k=1$ if $l_k = 0$ , and $c_k = 0$ otherwise.", "The total number of iterations up to iteration $k$ in which no block is split is given by $\\sum _{i=0}^{k-1} c_i$ .", "The number of iterations in which at least one block is split is given by $k - \\sum _{i=0}^{k-1} c_i$ .", "If in an iteration $k$ at least one block is split, the total number of blocks at the end of iteration $k$ is strictly higher than at the beginning, hence for all $k\\in {\\mathbb {N}}$ , $N_k \\ge k - \\sum _{i=0}^{k-1}c_i$ .", "Hence, $N_k+\\sum _{i=0}^{k-1}c_i$ is an upper bound for $k$ .", "We derive an upper bound for the number of iterations in which no blocks are split using the total number of unstable blocks.", "In iteration $k$ there are $U_k = \\sum _{i=0}^{k-1}(l_i + l_i^{\\prime }) - \\sum _{i=0}^{k-1} c_i + |\\pi _0|$ unstable blocks.", "Since the sum of newly created blocks is $\\sum _{i=0}^{k-1}l_i = N_k-|\\pi _0|$ and $l_i^{\\prime } \\le l_i$ , the number of unstable blocks $U_k$ is bounded by $2N_k-\\sum _{i=0}^{k-1}c_i - |\\pi _0|$ .", "Since $U_k\\ge 0$ , we have the bound $\\sum _{i=0}^{k-1}c_i\\le 2N_k-|\\pi _0|$ .", "This gives the bound on the total number of iterations $z \\le 3N_z-|\\pi _0| \\le 3n - |\\pi _0|$ .", "Together with the time for preprocessing, this makes the total run time complexity $\\mathcal {O}(n + |Act| + \\log m)$ .", "Since the total number of transitions $m$ is bounded by $|Act| \\times n^2$ , this simplifies to $\\mathcal {O}(n + |Act|)$ .", "Concerning correctness, we need to address two things.", "Firstly, as argued above, we start with a different partition compared to Algorithm 2, but this is a valid choice since states with different outgoing labels can never be bisimilar.", "Secondly, although the partition may not become stable w.r.t. the splitter in a single iteration, this will eventually occur, and the algorithm will only stop once the partition is stable w.r.t. all blocks.", "Therefore, the algorithm will produce the coarsest bisimulation relation."
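Condensed into displayed form, the counting argument of this proof chains together as follows; the middle implication uses that the sum of the l_i' is at most the sum of the l_i, which equals N_k minus the size of the initial partition.

```latex
\begin{aligned}
k &\le N_k + \sum_{i=0}^{k-1} c_i,\\
0 \le U_k &= \sum_{i=0}^{k-1}(l_i + l_i') - \sum_{i=0}^{k-1} c_i + |\pi_0|
  \;\Longrightarrow\; \sum_{i=0}^{k-1} c_i \le 2N_k - |\pi_0|,\\
z &\le N_z + \bigl(2N_z - |\pi_0|\bigr) = 3N_z - |\pi_0| \le 3n - |\pi_0|.
\end{aligned}
```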
], [ "Experimental Results", "In order to validate the proposed algorithm, we implemented Algorithm 3.", "The implementation targets graphics processing units (GPUs), since a GPU closely resembles a PRAM and supports a large amount of parallelism.", "The algorithm is implemented in CUDA version 11.1, using the Thrust library; the source code can be found at https://github.com/sakehl/gpu-bisimulation.", "As input, we chose all benchmarks of the VLTS benchmark suite (https://cadp.inria.fr/resources/vlts/) for which the implementation produced a result within 10 minutes.", "The VLTS benchmarks are LTSs that have been derived from real concurrent system models.", "The experiments were run on an NVIDIA Titan RTX with 24 GB memory and 72 Streaming Multiprocessors, each supporting up to 1,024 threads in flight.", "Although this GPU supports 73,728 threads in flight, it is very common to launch a GPU program with one or even several orders of magnitude more threads, in particular to achieve load balancing between the Streaming Multiprocessors and to hide memory latencies.", "In fact, the performance of a GPU program usually relies on that many threads being launched.", "Our implementation is purely a proof of concept, meant to show that our algorithm can be mapped to actual hardware and to understand how the algorithm scales with the number of states and transitions.", "Table: Benchmark results for Algorithm 3 on a GPU, times ($T$ ) are in ms.", "In the implementation, we have to make a few adjustments, since a GPU differs in some aspects from a PRAM.", "To make memory updates globally visible, we need to synchronize at certain points of Algorithm 3; otherwise the changes in the memory are not consistent.", "We do this by splitting up the algorithm into different kernels (functions that execute in parallel on a GPU), since after a kernel run all processors (threads) are synchronized.", "To be precise, in Algorithm 3 we need to synchronize after: Line REF : to make sure the $\\mathit {mark}$ and $\\mathit {split}$ lists are reset and the splitter ($C$ ) is the same for all threads.", "Line REF : to make sure every thread has the same view of the $\\mathit {mark}$ list.", "Line REF : to synchronize the $\\mathit {mark}$ list.", "Line REF : to make sure the next leader for states that split off ($\\mathit {new\\_leader}_{\\mathit {block}_i}$ ) is chosen consistently among threads.", "We have chosen to allow race conditions in our implementation, for instance at line 6, where multiple blocks can mark themselves as current ($C$ ).", "Strictly speaking, this is not safe in the CUDA programming model, but it does work for 32-bit words.", "This can easily be adjusted using atomic instructions, although this will result in sequentializing write accesses to the same memory location, meaning that a write need not be in constant time anymore.", "To ensure the implementation also works when $n$ and/or $m$ is larger than the number of threads $d$ on the GPU, we encapsulate the if-then blocks at lines 1-4, 7-9, 10-15, 16-18, 19-21 and 22-31 of Algorithm 3 each in a for-loop, in which every thread accesses not only the data elements associated with its global index $i$ , but also, if needed, the elements with index $i+d$ , $i+2d$ , etc., as long as the indices are valid (a sketch of this access pattern is given at the end of this section).", "Table REF shows the results of the experiments we conducted.", "The $|\\mathit {Blocks}|$ column indicates the number of different blocks at the end of the algorithm, where each block contains only bisimilar states.", "With $\\#\\mathit {It}$ we refer to the number of while-loop iterations that were executed (see Algorithm 3) before all blocks became stable.", "The $T_\\mathit {pre}$ column gives the preprocessing times in milliseconds, which include the memory transfers to the GPU, the sorting of the transitions, and the partitioning.", "The $T_\\mathit {alg}$ column gives the times of the core algorithm, in milliseconds.", "The $T_\\mathit {total}$ column is the sum of the preprocessing and algorithm times, in milliseconds.", "We have not included the loading times for the files and the first CUDA API call that initializes the device.", "We ran each benchmark 10 times and took the averages.", "The standard deviation of the total times varied between 0% and 3% of the average, thus 10 runs are sufficient.", "All the times are rounded with respect to the standard error of the mean.", "We see that the bound as proven in Section REF ($k \\le 3n$ ) is indeed respected: $\\#\\mathit {It}/n$ is at most 2.20, and most of the time below that.", "The number of iterations is tightly related to the number of blocks that the final partition has; the $\\#\\mathit {It} / |\\mathit {Blocks}|$ column varies between 1.00 and 2.53.", "This can be understood from the fact that each iteration either splits one or more blocks or marks a block as stable, and all blocks must be checked for stability at least once.", "This also means that for certain LTSs the algorithm scales better than linearly in $n$ .", "The preprocessing often takes roughly the same amount of time (a few milliseconds); exceptions are the cases with a large number of actions and/or transitions.", "Concerning the GPU run times, it is not true that each iteration takes the same amount of time: a GPU is not a perfect PRAM machine.", "There are two key differences.", "Firstly, we suspect that the algorithm is memory-bound, since it performs only a limited amount of computation.", "The memory accesses are irregular, i.e., random, which caches can only partially compensate for, and for sufficiently large $n$ and $m$ , the caches cannot contain all the data.", "This means that as the LTSs become larger, memory accesses become relatively slower.", "Secondly, at a certain moment the maximum number of threads that a GPU can run in parallel is reached, and adding more threads will mean more run time.", "These two effects can best be seen in the column headed $T_\\mathit {alg}/\\#\\mathit {It}$ , which corresponds to the time per iteration.", "The values are around $0.02$ up to $300,000$ transitions, but for a higher number of states and transitions, the amount of time per iteration increases."
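The for-loop wrapping mentioned above, in which thread i also handles indices i+d, i+2d, and so on, is the standard grid-stride pattern; the following Python sketch only simulates the idea sequentially, with hypothetical helper names rather than the actual CUDA kernels.

```python
def grid_stride(tid, d, total):
    """Grid-stride indexing: thread tid of d threads handles the elements
    tid, tid + d, tid + 2d, ... so inputs with n or m larger than the
    thread count are still covered."""
    return range(tid, total, d)

# e.g. the mark-reset step of one kernel, simulated sequentially:
def reset_marks(mark, d):
    for tid in range(d):                 # on the GPU these run in parallel
        for i in grid_stride(tid, d, len(mark)):
            mark[i] = False
```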
], [ "Experimental comparison", "We compared our implementation (Par-BCRP) with an implementation of the algorithm by Lee and Rajasekaran (LR) [13] on GPUs, and with the optimized GPU implementation by Wijs based on signature-based bisimilarity checking [2], both with multi-way splitting (Wms) and with single-way splitting (Wss) [23].", "Multi-way splitting means that a block is split into multiple blocks at once, which is achieved in signature-based algorithms by computing a signature for each state in every partition refinement iteration, and splitting each block into sets of states, each containing all the states with the same signature.", "The signature of a state is derived from the labels of the blocks that this state can reach in the current partition.", "Table: Comparison of the different algorithms, with times in ms.", "The running times of the different algorithms can be found in Table REF .", "As in our previous benchmarks, the algorithms were run 10 times on the same machine using the same VLTS benchmark suite, with a time-out of 10 minutes.", "In some cases, the non-deterministic behaviour of the algorithms Wms and Wss led to high variations in the runs.", "In cases where the standard error of the mean was more than 5% of the mean value, we have added the standard error in parentheses in Table REF .", "Furthermore, all the results are rounded with respect to the standard error of the mean.", "As a preprocessing step for the LR, Wms and Wss algorithms, the input LTSs need to be sorted.", "We did not include this in the reported times, nor the reading of files and the first CUDA API call (which initializes the GPU).", "The comparison confirms the expectation that our algorithm outperforms LR in all cases (except one small LTS), and that LR is not suitable for massively parallel devices such as GPUs.", "Furthermore, the comparison shows that in most cases our algorithm (Par-BCRP) outperforms Wss.", "In some benchmarks (Cwi_1_2, Cwi_214_684, Cwi_2165_8723 and Cwi_2416_17605) Wss is more than twice as fast, but in 16 other cases our algorithm is more than twice as fast.", "The last comparison shows that our algorithm does not outperform Wms, which employs multi-way splitting, a technique known to be very effective in practice.", "Contrary to our implementation, Wms is highly optimized for GPUs, while the focus of the current work is to improve the theoretical bounds and describe a general algorithm.", "In order to better understand the difference between Wms and our algorithm, we analysed the complexity of Wms [23].", "In general this algorithm is quadratic in time, and the linearity claim in [23] depends on the assumption that the fan-out of `practical' transition systems is bounded, i.e., every state has no more than $c$ outgoing transitions for a (low) constant $c$ .", "We designed the transition systems $\\textit {Fan\\_out}_n$ for $n\\in {\\mathbb {N}}^+$ to illustrate the difference.", "The LTS $\\textit {Fan\\_out}_n = (S, \\lbrace a,b\\rbrace , \\rightarrow )$ has $n$ states: $S=\\lbrace 0,\\dots ,n-1\\rbrace $ .", "The transition relation contains $i \\rightarrow i+1$ for all states $1 <i< n-1$ .", "Additionally, from states 0 and 1 there are transitions to every state: $0\\rightarrow i$ and $1\\rightarrow i$ for all $i\\in S$ .", "This LTS has $n$ states, $3n-3$ transitions and a maximum out-degree of $n$ transitions.", "Figure REF shows the 
results of calculating the bisimulation equivalence classes for $\\textit {Fan\\_out}_n$ with Wms and Par-BCRP.", "It is clear that the run time for Wms increases quadratically as the number of states grows linearly, already becoming untenable for a small number of states.", "On the other hand, in conformance with our analysis, our algorithm scales linearly." ], [ "Weaker PRAM models", "Algorithm 2 relies on concurrent writes to perform the constant-time leader election and the choice of splitter.", "This means that the algorithm does not work on a weaker PRAM model.", "In this section we describe a modification for the common CRCW PRAM and a limitation for the ERCW PRAM.", "It is shown in [12] that any priority CRCW PRAM using $n$ processors and $m$ memory cells can be simulated by a common CRCW PRAM with $\\mathcal {O}(n^2)$ processors and $\\mathcal {O}(m^2)$ memory cells.", "For our problem, a common CRCW PRAM with $\\mathcal {O}(n^2)$ processors and no extra memory can solve leader election.", "This leader election on the common CRCW PRAM is given in Algorithm 6.", "Every processor is indexed as $P_{i,j}$ with $i,j\\in \\lbrace 0, \\dots , n -1\\rbrace $ , giving exactly $n^2$ processors.", "First, if the state with index $i$ is eligible to be the leader of a new block (line 1), processor $P_{i,j}$ writes $\\mathit {block}_i$ , i.e., the index of the block of which the state is currently a member, to position $i$ in a list $\\mathit {new\\_leader}$ .", "In the next step, $P_{i,j}$ replaces $\\mathit {new\\_leader}_j$ with 0 if $\\mathit {new\\_leader}_i = \\mathit {new\\_leader_j}$ and $i < j$ .", "In other words, if $P_{i,j}$ encounters two states that can become the new leader, it selects the one with the smallest index.", "This is possibly a concurrent write, but all writes involve the same value 0, hence this is allowed by the common CRCW PRAM.", "Next, if for $P_{i,j}$ we have $\\mathit {new\\_leader}_i \\ne 0$ , it writes the value $i$ to $\\mathit {new\\_leader}_{\\mathit {block}_i}$ at line 7.", "For a given block $\\mathit {block}_i$ , the condition at line 6 only holds for the state with the smallest index among the states that are split from $\\mathit {block}_i$ , hence at most one value is written.", "Algorithm 6 (leader election for a common CRCW PRAM, executed by each processor $P_{i,j}$ ):
 1: if $\\mathit {mark}_i \\ne \\mathit {mark}_{\\mathit {block}_i}$ then
 2:   $\\mathit {new\\_leader}_i := \\mathit {block}_i$
 3: if $i < j$ and $\\mathit {new\\_leader}_i = \\mathit {new\\_leader}_j$ then
 4:   $\\mathit {new\\_leader}_j := 0$
 5: if $\\mathit {mark}_i \\ne \\mathit {mark}_{\\mathit {block}_i}$ then
 6:   if $\\mathit {new\\_leader}_i \\ne 0$ then
 7:     $\\mathit {new\\_leader}_{\\mathit {block}_i} := i$
 8:     $\\mathit {unstable}_{\\mathit {block}_i} := \\mathrm {true}$
 9:   $\\mathit {block}_i := \\mathit {new\\_leader}_{\\mathit {block}_i}$
10:   $\\mathit {unstable}_{\\mathit {block}_i} := \\mathrm {true}$", "Leader election on the ERCW PRAM is not possible in constant time, which follows from a result by Cook et al. [6].", "This result states that all functions that have a critical input require $\\Omega (\\log n)$ time on ERCW PRAMs.", "A bit sequence $I$ of size $n$ is critical for a function $f: \\lbrace 0, 1\\rbrace ^n \\rightarrow \\lbrace 0, 1 \\rbrace $ iff for any $I^{\\prime }$ obtained by flipping exactly one bit in $I$ we have $f(I) \\ne f(I^{\\prime })$ .", "Leader election can be seen as a function $f: S\\rightarrow \\lbrace 0,1\\rbrace $ , where $f(i) = 1$ iff $i$ is elected as the new leader.", "This function has a critical input, namely the leader that is to be chosen."
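A sequential simulation of this election may help; in the sketch below we use -1 instead of the listing's 0 as the null marker (0 is itself a valid block label), and the nested loop mirrors the n^2 processors P_{i,j}. Within each round, all concurrent writes store the same value, as the common CRCW model requires.

```python
NONE = -1  # null marker; the listing above uses 0 instead

def common_crcw_leader_election(block, mark, unstable):
    """Simulate Algorithm 6: conceptually one processor per pair (i, j)."""
    n = len(block)
    new_leader = [NONE] * n
    for i in range(n):                   # lines 1-2: eligible states bid
        if mark[i] != mark[block[i]]:
            new_leader[i] = block[i]
    for i in range(n):                   # lines 3-4: one pair per P_{i,j}
        for j in range(n):
            if i < j and new_leader[i] != NONE and new_leader[i] == new_leader[j]:
                new_leader[j] = NONE     # keep only the smallest bidder
    for i in range(n):                   # lines 6-8: publish the leader
        if new_leader[i] != NONE:
            new_leader[block[i]] = i
            unstable[block[i]] = True
    for i in range(n):                   # lines 9-10: adopt the new block
        if mark[i] != mark[block[i]]:
            block[i] = new_leader[block[i]]
            unstable[block[i]] = True
    return block
```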
], [ "Related work", "In [13], Lee and Rajasekaran study RCPP.", "They implement a parallel version of Kanellakis-Smolka that runs in $\\mathcal {O}(n \\log n)$ time on $\\frac{m}{\\log n}\\log \\log n$ CRCW PRAM processors.", "In [17], they present a different algorithm, based on Paige and Tarjan's algorithm [16], that has the same run time of $\\mathcal {O}(n \\log n)$ but uses only $\\frac{m}{n}\\log n$ CREW processors.", "Jeong et al. [10] presented a linear-time parallel algorithm, but it is probabilistic in the sense that it has a non-zero chance of outputting the wrong result.", "Furthermore, Wijs [23] presented a GPU implementation of an algorithm to solve the strong and branching bisimulation partition refinement problems; although efficient in many practical cases, it has a quadratic time complexity.", "In a distributed setting, Blom and Orzan studied algorithms for refinement [2].", "Those algorithms use message passing to communicate between different workers in a network and rely on a small number of processors.", "Therefore, they are very different in nature from our algorithm.", "Those algorithms were later extended and optimized for branching bisimulation [3]." ], [ "Conclusion", "We proposed and implemented an algorithm for RCPP and BCRP.", "We proved that the algorithm stops in $\\mathcal {O}(n + |Act|)$ steps on $max(n,m)$ CRCW PRAM processors.", "We implemented the algorithm for BCRP in CUDA, and conducted experiments that show the potential to compute bisimulation in practice in linear time.", "Further advances in parallel hardware will make this even more feasible.", "For future work, it is interesting to investigate whether RCPP can be solved in sublinear time, that is, in $\\mathcal {O}(n^{\\epsilon })$ for an $\\epsilon < 1$ , as asked for in [13].", "It is also intriguing to see whether the practical effectiveness of the algorithm in [23], which splits blocks simultaneously, can be combined with our algorithm while preserving the linear time upper bound.", "Furthermore, it remains an open question whether our algorithm can be generalised to weaker bisimulations, such as weak and branching bisimulation [22], [9].", "The main challenge here is that the transitive closure of so-called internal steps needs to be taken into account." ], [ "Acknowledgments", "This work is carried out in the context of the NWO AVVA project 612.001751 and the NWO TTW ChEOPS project 17249." ] ]
2105.11788
[ [ "A unified framework based on graph consensus term for multi-view\n learning" ], [ "Abstract In recent years, multi-view learning technologies for various applications have attracted a surge of interest.", "Due to the more compatible and complementary information from multiple views, existing multi-view methods can achieve more promising performance than conventional single-view methods in most situations.", "However, there is still insufficient research on unified frameworks in existing multi-view works.", "Meanwhile, how to efficiently integrate multi-view information remains challenging.", "In this paper, we propose a novel multi-view learning framework, which aims to leverage most existing graph embedding works into a unified formula by introducing the graph consensus term.", "In particular, our method explores the graph structure in each view independently to preserve the diversity property of graph embedding methods.", "Meanwhile, we choose heterogeneous graphs to construct the graph consensus term to explore the correlations among multiple views jointly.", "To this end, the diversity and complementary information among different views can be simultaneously considered.", "Furthermore, the proposed framework is utilized to implement the multi-view extension of Locality Linear Embedding, named Multi-view Locality Linear Embedding (MvLLE), which can be efficiently solved by applying the alternating optimization strategy.", "Empirical validations conducted on six benchmark datasets demonstrate the effectiveness of our proposed method." ], [ "Introduction", "With the rapid development of the information era, more and more data can be obtained from different domains or described from various perspectives; thus, multi-view learning technologies [1], [2] have gained extensive attention from researchers in recent years.", "For example, an image can be represented by different visual descriptors [3], [4], [5] to reveal its color, texture, and shape information; a document can be translated into different versions in various languages [6], [7]; a web page is usually composed of texts, images, and videos.", "These heterogeneous features depict the data from different perspectives and provide complementary information for its description, indicating that each view may contain some knowledge that the other views do not involve.", "However, classical methods are usually proposed under the single-view scenario and cannot be straightforwardly applied to the multi-view setting.", "A common solution is to concatenate different views together as one view and then employ single-view algorithms directly.", "But this concatenation not only lacks physical meaning, owing to the specific statistical property of each view, but also ignores the complementary nature of different views.", "Therefore, the main challenge for multi-view learning is how to effectively combine the information of multiple views and exploit the underlying structures within the data.", "In recent years, a large number of multi-view learning approaches have been well investigated in many applications (e.g., classification [8], [9], [10], clustering [11], [12], [13], etc.).", "Among existing multi-view learning works, one representative category of methods is based on the graph, which is the main focus of this paper.", "One popular solution [14], [15], [16], [17] is to consider the weighted combination of different views to explore a common latent space shared by all views when 
integrating multi-view information.", "For example, Multiview Spectral Embedding (MSE) [14] was proposed to extend Laplacian Eigenmaps (LE) [18] into the multi-view setting, combining it with multi-view data to find common low-dimensional representations.", "Nevertheless, such methods cannot guarantee the complementary effects across different views.", "For this reason, algorithms in the co-training [19], [20] and co-regularization [21], [22] styles have been developed to explore the complementary information among different views.", "The former iteratively maximizes the mutual agreement across different views to guarantee the consistency among views.", "The latter adds co-regularization terms over the discriminant functions to the objective function to ensure the consensus among distinct views.", "Unfortunately, these methods may produce unsatisfactory results when facing multiple views that are highly related but slightly different from each other.", "More notably, there is still insufficient research on generalized multi-view frameworks that conveniently extend existing single-view graph embedding methods to multi-view tasks, so the advantages of those single-view works cannot be fully exploited.", "What's more, the framework of graph embedding [23] implies that most subspace learning methods [24], [25], [26] and their kernel extensions [27], [28], [29] can also be cast as special graph-based embedding methods.", "Besides, many graph-based deep learning technologies [30], [31] have been widely investigated in recent years.", "However, these graph embedding methods cannot be extended to the multi-view setting directly.", "Therefore, how to extend these works to the multi-view setting is a key yet challenging point.", "To handle the issues above, we propose a novel model for multi-view learning problems that simultaneously exploits both the diversity and the complementary information among different views.", "Importantly, this model attempts to leverage most existing single-view graph embedding works into a unified formulation.", "Specifically, to preserve the diversity property of the intrinsic information in each view, this model explores the intrinsic graph structure in each view independently; to fully exploit the complementary information among different learned representations, we introduce the graph consensus term, based on heterogeneous graphs, to consider the correlations among multiple views jointly.", "That is to say, we can utilize the graph consensus term to regularize the dependence among different views and simultaneously obtain, for each view, the intrinsic structure based on its graph structure or embedding representations.", "To this end, we formulate the above concerns into a unified framework, named Graph Consensus Multi-view Learning Framework (GCMLF).", "To facilitate related research, the proposed framework is utilized to implement the multi-view extension of Locality Linear Embedding [32], named Multi-view Locality Linear Embedding (MvLLE).", "Correspondingly, an algorithm based on the alternating direction optimization strategy is provided to efficiently solve MvLLE, which converges to a local optimum.", "Finally, extensive experiments on the applications of document classification, face recognition, and image retrieval validate the effectiveness of our proposed method.", "In summary, our contributions in this paper can be listed as follows: We propose a novel unified framework for 
multi-view learning problems, which leverages most existing graph-based single-view works into a unified formula and utilizes the graph consensus term based on heterogeneous graphs to regularize the dependence among different views.", "To obtain a feasible solution of GCMLF, a solving paradigm based on the iterative alternating strategy is proposed, which can be shown to converge to a local optimum within a limited number of iterations.", "GCMLF is utilized to implement the multi-view extension of Locality Linear Embedding, named Multi-view Locality Linear Embedding (MvLLE), which can be efficiently solved by referring to the solving paradigm for GCMLF.", "The remainder of this paper is organized as follows: in Section II, we briefly review the background of the multi-view setting and some methods closely related to our method; in Section III, we describe the construction procedure of our proposed method and its optimization algorithm; in Section IV, the proposed framework is utilized to implement the multi-view extension of Locality Linear Embedding; in Section V, extensive experiments on six datasets evaluate the effectiveness of our proposed approach; in Section VI, we conclude this paper." ], [ "Related work", "In this section, we first briefly review the works closely related to the proposed method.", "Then we introduce a multi-view learning method called co-regularized multi-view spectral clustering (Co-reg) [21] in detail." ], [ "Multi-view learning", "Generally, most multi-view learning methods belong to the category of graph-based methods.", "Among them, one representative group of multi-view methods [33], [15], [34] aims to fuse multiple features into a single representation by exploiting the common latent space shared by all views.", "For example, multi-view sparse coding [33], [34] combines the multi-view information into a shared latent representation through a series of linear maps serving as dictionaries.", "Similarly, Multiple Kernel Learning (MKL) [35], [36], [37] is also a natural way to integrate different views by directly combining them; for instance, the work [35] learns a common low-dimensional representation with unsupervised or supervised information.", "However, these methods usually map different views to a common space, which might produce unsatisfactory results because they cannot guarantee the complementarity across different views.", "Another typical group of multi-view methods aims to integrate complementary information among different views.", "Among these works, there are two classes of multi-view methods related to our work, which are based on Canonical Correlation Analysis (CCA) [38] and the Hilbert-Schmidt Independence Criterion (HSIC) [39], respectively.", "Suppose that two sets $\\mathbf {X}$ and $\\mathbf {Y}$ , each consisting of $N$ observations, are drawn jointly from a probability distribution.", "The former [40], [41], [42] employs CCA to project the two views into a common subspace by maximizing the cross-correlation between the two views, which can be expressed as follows: $\\begin{array}{l}Corr({\\mathbf {X}},{\\mathbf {Y}}) = tr\\left( {{\\mathbf {W}_X}^T\\mathbf {X}{\\mathbf {Y}}^T{\\mathbf {W}_Y}} \\right)\\\\\\end{array}$ where $\\mathbf {W}_X$ and $\\mathbf {W}_Y$ denote the projection matrices of $\\mathbf {X}$ and $\\mathbf {Y}$ , respectively, and $tr(\\cdot )$ denotes the trace of a matrix.", "In particular, Multi-View Discriminant Analysis [42] is proposed to extend LDA [25], [29] into 
a multi-view setting, which projects multi-view features into one discriminative common subspace.", "Generalized Multiview Analysis (GMA) [41] solves a joint, relaxed problem in the form of a quadratically constrained quadratic program (QCQP) over different feature spaces to obtain a common linear subspace, which generalizes CCA to the multi-view scenario, e.g., cross-view classification and retrieval.", "However, the dimensionalities of different views must be equal in this case.", "The latter [43], [44], [45] explores complementary information by utilizing HSIC to measure the correlations of different views.", "HSIC measures the dependence of the learned representations of different views by mapping variables into a reproducing kernel Hilbert space, which can be expressed as follows: $\\begin{array}{l}HSIC({\\mathbf {X}},{\\mathbf {Y}}) = (N-1)^{-2}tr\\left( \\mathbf {K}_X \\mathbf {H} \\mathbf {K}_Y \\mathbf {H} \\right)\\\\\\end{array}$ where $\\mathbf {K}_X$ and $\\mathbf {K}_Y$ denote the Gram matrices of $\\mathbf {X}$ and $\\mathbf {Y}$ , respectively, and $\\mathbf {H}=\\mathbf {I}-N^{-1}\\mathbf {1}\\mathbf {1}^T$ centers the Gram matrix $\\mathbf {K}_X$ or $\\mathbf {K}_Y$ to have zero mean in the feature space.", "Compared to the methods based on CCA, such methods relax the restriction of equal dimensionalities for different views.", "In particular, the work [43] employs the kernel dependence measure of HSIC to quantify the alternativeness between the clustering solutions of two views, and iteratively discovers alternative clusterings.", "Similarly, the work [45] exploits the complementary information of multiple views based on HSIC to enhance the correlations (or penalize the disagreement) across different views during dimensionality reduction, while exploring the correlations within each view independently.", "However, these works usually incorporate the inner-product kernel to construct the HSIC term, which might fail to yield satisfactory performance in nonlinear cases.", "Differing from the methods above, our proposed graph consensus term can not only overcome the limitation of equal dimensionality across views but also fully discover the intrinsic structure information of each view and the complementary information among different views."
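To make the HSIC-based co-regularization concrete, here is a small numpy sketch of the empirical HSIC formula above; the function name and the linear-kernel usage note are our own illustration.

```python
import numpy as np

def hsic(K_x, K_y):
    """Empirical HSIC between two views, computed from their N x N Gram
    matrices as (N-1)^{-2} tr(K_x H K_y H), with the centring matrix
    H = I - (1/N) 1 1^T, following the formula above."""
    N = K_x.shape[0]
    H = np.eye(N) - np.ones((N, N)) / N
    return np.trace(K_x @ H @ K_y @ H) / (N - 1) ** 2

# e.g. with inner-product (linear) kernels for features of shape (d, N):
# K_x = X.T @ X; K_y = Y.T @ Y; dependence = hsic(K_x, K_y)
```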
], [ "Co-regularized Multi-view Spectral Clustering", "Co-regularized Multi-view Spectral Clustering (Co-reg) [21] aims to provide a spectral clustering framework for the multi-view setting.", "To achieve this goal, Co-reg works with the cross-view assumption that the true underlying clustering should assign corresponding points in each view to the same cluster.", "Taking the two-view case as an example for ease of exposition, the cost function measuring the disagreement between the clusterings given by the learned embeddings $\\mathbf {U}^v$ and $\\mathbf {U}^w$ of the $v$ th view and the $w$ th view can be defined as follows: $D\\left( {{\\mathbf {U}^v},{\\mathbf {U}^w}} \\right) = \\left\\Vert \\frac{\\mathbf {K}_{\\mathbf {U}^v}}{\\left\\Vert \\mathbf {K}_{\\mathbf {U}^v} \\right\\Vert _F^2}-\\frac{\\mathbf {K}_{\\mathbf {U}^w}}{\\left\\Vert \\mathbf {K}_{\\mathbf {U}^w}\\right\\Vert _F^2} \\right\\Vert _F^2$ where $\\mathbf {K}_{\\mathbf {U}^v}$ is the similarity matrix for the $v$ th view and $\\left\\Vert \\cdot \\right\\Vert _F$ denotes the Frobenius norm of a matrix.", "For convenience of optimization, the linear kernel is chosen as the similarity measure, that is, $\\mathbf {K}_{\\mathbf {U}^v}={\\mathbf {U}^v}{\\mathbf {U}^v}^{^T}$ .", "Substituting this in Eq. (REF ) and ignoring the constant additive and scaling terms that depend on the number of clusters, the disagreement term $D\\left( {{\\mathbf {U}^v},{\\mathbf {U}^w}} \\right)$ can be expressed as: $D\\left( {{\\mathbf {U}^v},{\\mathbf {U}^w}} \\right) = -tr\\left( {{\\mathbf {U}^v}{\\mathbf {U}^v}^{^T}{\\mathbf {U}^w}{\\mathbf {U}^w}^{^T}} \\right)$ Co-reg builds on standard spectral clustering by appealing to the co-regularized framework, which makes the clustering relationships of different views agree with each other.", "Therefore, combining Eq. (REF ) with the spectral clustering objectives of all views, we get the following joint minimization problem for $M$ views: $\\begin{split}&\\mathop {\\min }\\limits _{{\\mathbf {U}^1},{\\mathbf {U}^2}, \\ldots ,{\\mathbf {U}^M} \\in {\\mathbb {R}^{N \\times k}}} \\sum \\limits _{v = 1}^M {tr({\\mathbf {U}^v}^{^T}{\\mathbf {L}^v}{\\mathbf {U}^v})} - \\vspace{28.45274pt}\\\\&\\hspace{20.0pt}\\lambda \\sum \\limits _{1 \\le v \\ne w \\le M} {tr\\left( {{\\mathbf {U}^v}{\\mathbf {U}^v}^{^T}{\\mathbf {U}^w}{\\mathbf {U}^w}^{^T}} \\right)} \\\\&\\hspace{30.0pt}s.t.\\hspace{15.0pt}{\\mathbf {U}^v}^{^T}{\\mathbf {U}^v}{ = I,} \\forall 1 \\le v \\le M \\\\\\end{split}$ where ${\\mathbf {L}^v}$ is the normalized graph Laplacian matrix of the $v$ th view and $\\lambda $ is a non-negative hyperparameter to trade off the spectral clustering objectives and the spectral embedding disagreement terms across different views.", "In this way, Co-reg implements a spectral clustering framework for the multi-view setting.", "However, the linear kernel might lack the ability to capture the nonlinear relationships among samples in the multi-view setting, and there is the additional limitation that the dimensionalities of all views must be the same.", "Figure: The flow chart of the proposed Graph Consensus Multi-view Learning Framework (GCMLF).", "Given a collection of samples with $M$ views, e.g., $\\lbrace \\mathbf {X}^1, \\mathbf {X}^2, \\ldots , \\mathbf {X}^M\\rbrace $ , GCMLF first explores the graph structure in each view by a graph embedding model independently, which aims to preserve the diversity property of graph structure 
information in each view.", "Then, it utilizes the graph consensus term to regularize the dependence among different views, which makes different views learn from each other.", "Taking view 1 as an example, we can not only explore the intra-view graph information according to $\\mathbf {X}^1$ , but also fully consider the inter-view graph structure information in a more flexible and robust way.", "In this way, GCMLF can consider the complementarity among different views and simultaneously obtain the graph embedding for each view." ], [ "Methodology", "In this section, we discuss the intuition of our proposed framework, named Graph Consensus Multi-view Learning Framework (GCMLF).", "Here, we propose to introduce the graph consensus term, based on heterogeneous graphs, to regularize the dependence among different views.", "We first work with two views to formulate the graph consensus term.", "Then, the unified multi-view framework is developed for the case of more than two views to enforce multiple views to be close to each other.", "For clarity, the flow chart of GCMLF is shown in Fig. REF .", "Correspondingly, a solving paradigm based on the iterative alternating strategy is proposed for GCMLF, which can be shown to converge to a local optimum.", "Specifically, we provide one typical case based on two heterogeneous graphs, called Multi-view Locality Linear Embedding (MvLLE).", "Following the scheme for solving GCMLF, the optimization procedure for MvLLE is presented to complete the case.", "For convenience, the important notations used in the remainder of this paper are summarized in Table REF .", "Table: Important notations used in this paper." ], [ "Problem Definition", "Assume that we are given a dataset consisting of $M$ views; the data in the $v$ th view ($1 \\le v \\le M$ ) is denoted as $\\mathbf {X}^v = \\lbrace \\mathbf {x}_1^v, \\mathbf {x}_2^v, \\ldots , \\mathbf {x}_N^v\\rbrace $ , in which $N$ is the number of samples.", "The proposed method aims to obtain the graph structure or the embedding in each view under the multi-view setting.", "We employ $\\mathbf {G}^v \\in \\mathbb {R}^{N \\times N}$ and $\\mathbf {U}^v \\in \\mathbb {R}^{d^v \\times N}$ to denote the graph structure and the embedding in the $v$ th view, respectively, where $d^v$ is the dimensionality of the embedding in the $v$ th view.", "Differing from the graph $\\mathbf {G}^v$ defined on $\\mathbf {X}^v$ , $\\mathbf {G}_{\\ast }^w$ is the graph constructed from the learned embedding $\\mathbf {U}^w$ .", "For the multi-view setting, a naive way is to incorporate all views directly as follows: $\\begin{split}&\\mathop {\\min }\\limits _{ \\left\\lbrace \\mathbf {U}^v \\in \\mathcal {\\mathbf {C}}^v, 1\\le v \\le M \\right\\rbrace } \\sum _{v=1}^M \\left( \\mathcal {F}(\\mathbf {G}^v, \\mathbf {U}^v)+\\lambda \\mathbf {\\Omega }(\\mathbf {U}^v) \\right) \\\\\\end{split}$ where $\\mathcal {\\mathbf {C}}^v$ denotes the constraints on the embedding $\\mathbf {U}^v$ .", "$\\mathcal {F}(\\cdot , \\cdot )$ is the loss function defined on the embedding $\\mathbf {U}^v$ and the graph $\\mathbf {G}^v$ , and $\\mathbf {\\Omega }(\\cdot )$ stands for the smooth regularization term of the embedding $\\mathbf {U}^v$ .", "The positive parameter $\\lambda $ trades off the loss function $\\mathcal {F}(\\mathbf {G}^v, \\mathbf {U}^v)$ and the smooth regularization term $\\mathbf {\\Omega }(\\mathbf {U}^v)$ .", "Intuitively, this naive way implements the graph embedding problem for each view independently and fails to exploit the diversity information of these multiple views.",
information of these multiple views.", "More importantly, this way neglects the correlations of these multiple views, so that the complementary information among multiple views cannot be fully exploited to enforce all views to learn from each other.", "Accordingly, how to efficiently discover the complementary information among views is the key point.", "Besides those works based on CCA or HSIC, traditional solutions usually minimize the difference between the embeddings of pairwise views directly.", "However, such methods are only suitable for the case where the dimensionalities of different views are equal.", "For these reasons, it is necessary and worthwhile to develop a novel co-regularization term with better scalability and robustness to enforce different views to learn from each other mutually." ], [ "Graph regularization term", "In this paper, we investigate measuring the dependence among all views based on graph structures, which reveal the relationships among all samples in each view.", "Specifically, we attempt to construct the view-structure consensus in terms of heterogeneous graphs to regularize the dependence between two views.", "Taking the two-view case consisting of the $v$ th view and the $w$ th view as an example, if two graphs are obtained by the same style of graph approaches, discovering similar properties of each individual view, we call them homogeneous graphs; in contrast, if two graphs are solved by different styles of graph approaches, we call them heterogeneous graphs.", "When facing the case of homogeneous graphs, directly minimizing the gap between two graphs makes the relationships among all samples, computed from the $v$ th view and the $w$ th view, as consistent as possible.", "However, the diversity information from multiple views might be reduced in this way.", "For this reason, we introduce the heterogeneous graph consensus term to consider the correlations among multiple views.", "For the case of heterogeneous graphs, it is unsuitable to straightforwardly minimize the semantic gap between the graphs from two views owing to their different construction styles.", "By design, the graph coefficients reflect the intrinsic geometric properties of one given view, which are invariant to transformations such as rotations and translations.", "Therefore, we expect their characterization of the geometric structure in one view to be equally valid for the other view on the manifold.", "That is to say, the relationship between two samples in the $v$ th view is expected to be closer if their similarity in the $w$ th view is larger.", "Accordingly, we propose the following cost function as a measure of dependence between two views: $\\begin{split}& Reg(\\mathbf {U}^v, \\mathbf {G}_\\ast ^w) = \\sum \\limits _{i,j=1}^{N}{\\left\\Vert {\\mathbf {U}_{i}^v-\\mathbf {U}_{j}^v} \\right\\Vert _2^2 \\mathbf {G}_{\\ast _{ij}}^w}\\\\& \\quad \\quad \\quad \\quad \\quad \\quad =tr\\left( \\mathbf {U}^v (\\mathbf {D}_\\ast ^w-\\mathbf {G}_\\ast ^w) {\\mathbf {U}^v}^T \\right) \\\\\\end{split}$ where $\\mathbf {D}_\\ast ^w$ denotes a diagonal matrix, in which the $i$ th diagonal element in $\\mathbf {D}_\\ast ^w$ is the sum of all elements in the $i$ th row of $\\mathbf {G}_\\ast ^w$ .", "Besides, when the graph structure specifically reflects the reconstruction relationships among samples, i.e.", "Low-Rank Representation (LRR) [46], we try to solve the self-representation issue by the following form: $\\begin{split}& \\mathbf {U}^v = \\mathbf {U}^v\\mathbf {G}_\\ast ^v+\\mathbf
{E}^v\\\\\\end{split}$ where $\\mathbf {E}^v$ denotes the error term of sample reconstruction.", "At this time, we investigate measuring the dependence between two views from the aspect of space reconstruction.", "That is, we expect that the reconstruction relationships among samples in one view are equally preserved in the other view on the manifold.", "Therefore, we can additionally utilize the following cost function to measure the consensus between the $v$ th view and the $w$ th view: $\\begin{split}& Reg(\\mathbf {U}^v, \\mathbf {G}_\\ast ^w) = {\\left\\Vert {\\mathbf {U}^v-\\mathbf {U}^v\\mathbf {G}_\\ast ^w} \\right\\Vert _F^2} \\\\& \\quad \\quad \\quad \\quad \\quad \\quad =tr\\left( \\mathbf {U}^v (\\mathbf {I}_N - \\mathbf {G}_\\ast ^w) {({\\mathbf {I}_N} - \\mathbf {G}_\\ast ^w)}^T {\\mathbf {U}^v}^T \\right) \\\\\\end{split}$ For convenience, we can further summarize the graph consensus term into a unified form $Reg(\\mathbf {U}^v, \\mathbf {G}_\\ast ^w)=tr\\left( \\mathbf {U}^v \\mathbf {L}^w {\\mathbf {U}^v}^T \\right)$ through Eq.", "(REF )-Eq.", "(REF ), where $\\mathbf {L}^w$ depends only on the graph $\\mathbf {G}_\\ast ^w$ .", "In the above discussion, we provide two formulas of $\\mathbf {L}^w$ based on the consistent preservation between two views.", "To sum up, we can utilize the graph consensus term $Reg(\\mathbf {U}^v, \\mathbf {G}_\\ast ^w)$ to co-regularize the dependence among different views and simultaneously obtain the graph structure or embedding for each view." ], [ "Multi-view learning framework based on graph consensus term", "To fully explore the correlations and complementary information among multiple views, we employ the graph consensus term in Eq.", "(REF )-Eq.", "(REF ) to encourage the new representations of different views to be close to each other.", "Accordingly, combining the graph embedding loss term in each view with the graph consensus term among all views, the overall objective function can be formulated as follows: $\\begin{split}&\\mathop {\\min }\\limits _{ \\left\\lbrace \\mathbf {U}^v \\in \\mathcal {\\mathbf {C}}^v, 1\\le v \\le M \\right\\rbrace } \\underbrace{\\sum _{v=1}^M \\left( \\mathcal {F}(\\mathbf {G}^v, \\mathbf {U}^v) \\right)}_{Graph \\ embedding \\ loss} + \\underbrace{\\lambda _{R} \\sum _{v=1}^M {\\mathbf {\\Omega }(\\mathbf {U}^v)}}_{Normalization \\ term}\\\\& + \\underbrace{ \\lambda _{C} \\sum _{v \\ne w} {Reg(\\mathbf {U}^v, \\mathbf {G}_{\\ast }^w)}}_{Graph \\ consensus \\ term} \\\\\\end{split}$ where $\\lambda _{R}>0$ and $\\lambda _{C}>0$ are two trade-off parameters corresponding to the smooth regularized term and the graph consensus term, respectively.", "Under the assumption that space structures in different views reflect intrinsic properties diversely, the first term ensures that the graphs are constructed from homogeneous structures.", "The second term guarantees the smoothness within each view independently, and the third term enforces that the learned representations $\\left\\lbrace \\mathbf {U}^v, 1\\le v \\le M \\right\\rbrace $ learn from each other to minimize the gap between them.", "In this way, when facing multi-view issues, our proposed framework can deal with the diversity information, smooth regularized terms, and complementary information among multiple views jointly.", "Optimization procedure: With the alternating optimization strategy, Eq.", "(REF ) can be approximately solved.", "That is to say, we solve one view at a time while fixing the other views.",
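Before detailing the per-view subproblem, the two consensus terms above can be made concrete. The following is a minimal NumPy sketch; the function names, dense matrices, and the $(d^v, N)$ layout of the embedding (columns as samples) are our illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def reg_laplacian(U_v, G_w):
    """Graph consensus term tr(U^v (D*^w - G*^w) U^v^T) of Eq. (REF).

    U_v : (d_v, N) embedding of the v-th view, columns are samples.
    G_w : (N, N) graph built from the learned embedding of the w-th view.
    """
    D_w = np.diag(G_w.sum(axis=1))             # diagonal row-sum matrix D*^w
    return np.trace(U_v @ (D_w - G_w) @ U_v.T)

def reg_reconstruction(U_v, G_w):
    """Reconstruction-style consensus term ||U^v - U^v G*^w||_F^2."""
    N = G_w.shape[0]
    M = np.eye(N) - G_w                        # I_N - G*^w
    return np.trace(U_v @ M @ M.T @ U_v.T)     # tr(U (I-G)(I-G)^T U^T)
```

Both functions are translation-friendly in the sense discussed above: they depend on the embedding only through pairwise relations encoded by the graph, not through any fixed spatial alignment.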
"Specifically, with all views but $\\mathbf {U}^v$ fixed, we get the following optimization problem for the $v$ th view: $\\begin{split}&\\mathop {\\min }\\limits _{ \\mathbf {U}^v \\in \\mathcal {\\mathbf {C}}^v } \\mathcal {F}(\\mathbf {G}^v, \\mathbf {U}^v) + \\lambda _{R} \\mathbf {\\Omega }(\\mathbf {U}^v)+ \\\\& \\lambda _{C} \\sum _{1 \\le v \\ne w}^M { ( Reg(\\mathbf {U}^v, \\mathbf {G}_{\\ast }^w)+Reg(\\mathbf {U}^w, \\mathbf {G}_{\\ast }^v) )} \\\\\\end{split}$ Note that in $Reg(\\mathbf {U}^w, \\mathbf {G}_{\\ast }^v)$ , $\\mathbf {G}_{\\ast }^v$ is dependent on the target variable $\\mathbf {U}^v$ and Eq.", "(REF ) couldn't be directly solved.", "But if $\\mathbf {G}_{\\ast }^v$ is set to be stationary, $Reg(\\mathbf {U}^w, \\mathbf {G}_{\\ast }^v)$ will be reduced a constant term on $\\mathbf {U}^v$ .", "Without considering the constant terms, Eq.", "(REF ) will reduce to the following equation: $\\begin{split}&\\mathop {\\min }\\limits _{ \\mathbf {U}^v \\in \\mathcal {\\mathbf {C}}^v } \\mathcal {F}(\\mathbf {G}^v, \\mathbf {U}^v) + \\lambda _{R} \\mathbf {\\Omega }(\\mathbf {U}^v)+ \\lambda _{C} \\sum _{1 \\le v \\ne w}^M {Reg(\\mathbf {U}^v, \\mathbf {G}_{\\ast }^w)} \\\\\\end{split}$ which looks simpler to be solved.", "Suppose that $\\mathbf {U}^v$ could be effectively calculated by solving the Eq.", "(REF ), this $\\mathbf {U}^v$ could be continuously used to update $\\mathbf {G}_{\\ast }^v$ according to the construction manner of chosen homogeneous graph method, which inspires us to compute $\\mathbf {U}^v$ and $\\mathbf {G}_{\\ast }^v$ iteratively.", "Hereto, all the variables $\\lbrace \\mathbf {U}^v,\\mathbf {G}_{\\ast }^v, 1\\le v \\le M\\rbrace $ have been updated completely.", "The whole procedure to solve Eq.", "(REF ) is summarized in Algorithm REF.", "The optimization for GCMLF The multi-view data $\\lbrace \\mathbf {X}^v,\\forall 1\\le v \\le M \\rbrace $ , the hyperparameters $\\lambda _{R}$ and $\\lambda _{C}$ , the loss function $\\mathcal {F}(\\cdot , \\cdot )$ , the constraint $\\mathcal {\\mathbf {C}}^v$ , the homogeneous graph manner for $\\mathbf {G}_{\\ast }$ .", "v=1:M Construct $\\mathbf {G}^v$ in the loss function $\\mathcal {F}(\\cdot , \\cdot )$ .", "Initialize $\\mathbf {U}^v$ by minimizing the loss function $\\mathcal {F}(\\cdot , \\cdot )$ under the constraint $\\mathcal {\\mathbf {C}}^v$ .", "not converged v=1:M Update $\\mathbf {G}_{\\ast }^v$ for the $v$ th view according to the construction manner of the chosen homogeneous graph method.", "v=1:M Update $\\mathbf {U}^v$ for the $v$ th view by solving Eq.", "(REF ).", "Learned representations {$\\mathbf {U}^v, 1\\le v \\le M$ }.", "Convergence analysis: Because we adopt the alternating optimization strategy to solve our proposed framework, it's essential to analyze its convergence.", "Theorem 1.", "The objective function in Eq.", "(REF ) is bounded.", "The proposed optimization algorithm monotonically decreases the loss value in each step, which makes the solution converge to a local optimum.", "Proof: In most cases of graph embedding loss function in $v$ th view, $\\mathcal {F}(\\mathbf {G}^v, \\mathbf {U}^v)$ is positive.", "Thus, it's readily to be satisfied that there must exist one view which can make $\\mathcal {F}_{min}=\\mathcal {F}(\\mathbf {G}^v, \\mathbf {U}^v)>0$ to be smallest among all views.", "Similarly, we also find that the smooth regularized term $\\mathbf {\\Omega }(\\mathbf {U}^v)$ must be greater than 0.", "For the graph consensus terms among views, we could verify 
that $tr\\left( \\mathbf {U}^v \\mathbf {L}^w {\\mathbf {U}^v}^T \\right)$ is a positive definite quadratic function if $\\mathbf {L}^w$ is a positive definite matrix.", "Fortunately, this condition usually holds.", "Similar to the discussion of the loss function in each view, there must exist two closest views whose consensus term $\\mathcal {C}_{min}=tr\\left( \\mathbf {U}^v \\mathbf {L}^w {\\mathbf {U}^v}^T \\right)>0$ is the smallest among all pairwise views.", "And because the hyperparameters $\\lambda _{R}>0$ and $\\lambda _{C}>0$ , it is provable that the objective value in Eq.", "(REF ) is greater than $M \\mathcal {F}_{min}+\\lambda _{C}M(M-1)\\mathcal {C}_{min}$ .", "Therefore, the objective function in Eq.", "(REF ) has a lower bound.", "For each iteration of the optimization problem in Eq.", "(REF ), we obtain the learned representations {$\\mathbf {U}^v, 1 \\le v \\le M$ } by iteratively solving Eq.", "(REF ), which correspond to the exact minimum points of Eq.", "(REF ) for all views, respectively.", "Under the condition that $\\mathbf {G}_{\\ast }^v$ is set to be stationary, the value of the objective function in Eq.", "(REF ) is non-increasing in each iteration of Algorithm REF.", "Thus the objective in Eq.", "(REF ) is monotonically non-increasing under the alternating optimization procedure.", "Denote the value of the loss function in Eq.", "(REF ) as $\\mathcal {H}$ , and let ${\\lbrace \\mathcal {H}^t\\rbrace }_{t=1}^T$ be a sequence generated by the iteration steps in Algorithm REF, where $T$ is the length of this sequence.", "Based on the above analysis, ${\\lbrace \\mathcal {H}^t\\rbrace }_{t=1}^T$ is a monotonically non-increasing sequence that is bounded from below.", "According to the bounded monotone convergence theorem [47], which asserts the convergence of every bounded monotone sequence, the proposed optimization algorithm converges.", "Accordingly, Theorem 1 is proved."
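The alternating procedure whose convergence is proved above can be summarized as a Python skeleton. The callables solve_view, build_graph, and objective below are placeholders we introduce for illustration; they stand for the per-view subproblem of Eq. (REF ), the chosen homogeneous graph manner, and the overall loss of Eq. (REF ), respectively.

```python
import numpy as np

def optimize_gcmlf(solve_view, build_graph, objective, M, max_iter=50, tol=1e-6):
    """Skeleton of the alternating optimization for GCMLF (Algorithm REF).

    solve_view : callable (v, G_star) -> U_v; G_star=None means the
                 consensus-free initialization step.
    build_graph: callable U_v -> G*_v, the homogeneous graph manner.
    objective  : callable (U, G_star) -> float, the overall loss value.
    M          : number of views.
    """
    U = [solve_view(v, None) for v in range(M)]          # per-view init
    G_star = [build_graph(U[v]) for v in range(M)]
    prev = np.inf
    for _ in range(max_iter):
        G_star = [build_graph(U[v]) for v in range(M)]   # update all graphs
        U = [solve_view(v, G_star) for v in range(M)]    # update all embeddings
        loss = objective(U, G_star)
        if prev - loss < tol:      # loss is monotonically non-increasing
            break
        prev = loss
    return U, G_star
```

The early-stopping test mirrors the proof: the loss sequence is non-increasing and bounded below, so the gap between consecutive iterations must vanish.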
], [ "Discussion with other related methods", "For the proposed graph consensus term, we give a more comprehensive explanation by comparing it with other related methods in this section.", "Compared with the variants based on CCA, our method is not limited by the dimensional equivalent across different views and more applicable to those nonlinear cases.", "For the HSIC term in Eq.", "(REF ), linear kernel is usually used to implement $\\mathbf {K}_X$ and $\\mathbf {K}_Y$ .", "Even though this way is convenient to obtain the optimal solution, the optimization for the nonlinear case is not efficient.", "Besides, Co-reg might meet the similar issue when facing nonlinear cases.", "Note that, when the graph consensus term focuses on the similarity among samples in other views, HSIC term and the disagreement term $D\\left( {{\\mathbf {U}^v},{\\mathbf {U}^w}} \\right)$ in Co-reg could be seen as special cases of the graph consensus term.", "For example, if $Reg(\\mathbf {U}^v, \\mathbf {G}_{\\ast }^w)={\\mathbf {U}^v} \\mathbf {H} {\\mathbf {K}^w} \\mathbf {H} {\\mathbf {U}^v}^{^T}$ , it's equivalent to the definition of HSIC term with linear kernel.", "Differently, we could flexibly choose the common kernel function as similarity measure for $\\mathbf {K}^w$ , such as polynomial kernel, Gaussian kernel, etc, which is more applicable for the nonlinear case than HSCI term.", "Specifically, our proposed method is a more general and robust way to enforce the agreement among different views.", "In summary, our proposed framework has the following advantages in terms of exploitation for multi-view information and the flexibility of general framework: GCMLF is a unified framework to project multi-view data into ideal subspace for most graph embedding methods, which makes full use of the diversity and complementary information among different views.", "Differing from those methods minimizing the difference of learned representations among views directly, our proposed framework co-regularizes different views to be close to each other by the graph consensus term based on heterogeneous graphs, meanwhile steadily preserves the intrinsic property of each view on homogeneous graphs.", "For most of existing multi-view learning frameworks, the limitation of dimensional equivalent makes it not flexible for the extensions of those works.", "Differing from those methods that only hold under this condition to limit their performance, we could flexibly formulate the dimensionality of each view, which eliminates this limitation.", "Besides, adopting a suitable graph manner to explore the complementary information among multiple views is beneficial to obtain more robust and promising performance.", "More importantly, GCMLF could incorporate nonlinear universal cases by exploiting the graph structure information based on learned representations.", "In this section, we choose two heterogeneous graph embedding methods, consisting of LE [48] and LLE [32], to provide a typical implement for our proposed framework, named Multi-view Locality Linear Embedding (MvLLE).", "In fact, LLE and LE are used to construct the graph learning loss term and difference term between two views in Eq.", "(REF ), respectively." 
], [ "The construction process of MvLLE", "LLE lies on the manifold structure of the samples space to preserve the relationships among samples.", "Based on the assumption that each sample and its neighbors to lie on or close to a locally linear patch of the manifold, then we obtain the weights matrix $\\mathbf {S}^v \\in \\mathbb {R}^{N \\times N}$ by minimizing the following reconstruction error: $Error \\left( \\mathbf {S}^v \\right){=}{\\sum \\limits _{i = 1}^N {\\Vert {\\mathbf {X_i^v} - \\sum \\limits _{j \\in Neighbors\\lbrace i\\rbrace } {{\\mathbf {S}_{ij}^v}{\\mathbf {X}_j^v}} } \\Vert _2 ^2}}$ where $Neighbors\\lbrace i\\rbrace $ denotes the neighbors of the $i$ th sample $\\mathbf {X}_i^v$ .", "By solving the above equation, we could obtain graph structure $\\mathbf {S}^v$ to reflect intrinsic properties of the samples space.", "We expect their characterization of local geometry in the original space to be equally valid for local patches on the manifold.", "Each original sample $\\mathbf {X}_i^v$ is mapped to a new representation.", "This is done by choosing $d^v$ -dimensional coordinates to minimize the following embedding cost function: $Error \\left( \\mathbf {U}^v \\right){=}{\\sum \\limits _{i = 1}^N {\\Vert {{\\mathbf {U}_i^v} - \\sum \\limits _{j \\in Neighbors\\lbrace i\\rbrace } {{\\mathbf {S}_{ij}^v}{\\mathbf {U}_j^v}} } \\Vert _2 ^2}}$ Additionally, we constrain the learned representations $\\mathbf {U}_i^v, 1 \\le i \\le N$ to have unit covariance.", "With simple algebraic formulation, the above cost problem can be further transformed as follows: $\\begin{array}{l}\\mathop {\\min }\\limits _{\\mathbf {U}^v} \\hspace{5.0pt}tr(\\mathbf {U}^{v}\\mathbf {{({I-S^v})}^T(I-S^v)}\\mathbf {U^{v^T}})\\\\\\hspace{5.0pt}s.t.\\hspace{10.0pt}\\mathbf {U}^{v}\\mathbf {U}^{v^T} = \\mathbf {I}_N\\end{array}$ Hereto, we determine that $\\mathcal {F}(\\mathbf {U}^v)$ and $\\mathcal {\\mathbf {C}}^v$ are responding to $tr(\\mathbf {U}^{v}\\mathbf {{({I-S^v})}^T(I-S^v)}\\mathbf {U^{v^T}})$ and $\\mathbf {U}^{v}\\mathbf {U}^{v^T} = \\mathbf {I}_N$ respectively.", "LE aims at preserving the local neighborhood structure on the data manifold, which constructs the weight matrix that describes the relationships among the samples.", "Specifically, the similarity matrix $\\mathbf {K}$ is to denote the weight coefficients, which could choose the common kernel function as our similarity measure, such as linear kernel, polynomial kernel, Gaussian kernel and etc.", "Combining this with the graph consensus term in Eq.", "(REF ) between the $v$ view and $w$ th view, we could define $\\mathbf {L}^w$ as follows: $\\begin{split}& \\mathbf {L}^w = \\mathbf {D}^w-\\mathbf {K}^w\\\\\\end{split}$ where $\\mathbf {D}^w$ denotes a diagonal matrix and ${\\mathbf {D}_{ii}^w}=\\sum \\limits _j {{\\mathbf {K}_{ij}^w}}$ .", "By rewriting the normalized matrix $\\mathbf {L}^w$ , we could get $\\mathbf {L}^w=\\mathbf {I}_N - {\\mathbf {D}^w}^{ - 1/2}\\mathbf {K}^w{\\mathbf {D}^w}^{ - 1/2}$ .", "According to the above discussion, we have specified each term in objective function in Eq.", "(REF ) and its constraint terms.", "In this way, we could extend single-view based LLE into multi-view setting, named Multi-view Locality Linear Embedding (MvLLE).", "Based on the above, the whole objective function for MvLLE could be formulated as follows: $\\begin{split}& \\mathop {\\min }\\mathcal {\\mathbf {O}}\\left( \\mathbf {U}^1, \\mathbf {U}^2, \\ldots , \\mathbf {U}^M \\right) = \\\\& \\sum _{v=1}^M tr(\\mathbf 
{U}^{v}\\mathbf {{({I-S^v})}^T(I-S^v)} \\mathbf {U^{v^T}}) + \\lambda _{R} \\sum _{v=1}^M \\mathbf {\\Omega }(\\mathbf {U}^v)\\\\& + \\lambda _{C} \\sum _{v \\ne w} {tr\\left( \\mathbf {U}^v (\\mathbf {I}_N - {\\mathbf {D}^w}^{ - 1/2}\\mathbf {K}^w{\\mathbf {D}^w}^{ - 1/2}) {\\mathbf {U}^v}^T \\right)}\\\\&\\hspace{5.0pt}s.t.\\hspace{10.0pt}\\mathbf {U}^{v}\\mathbf {U}^{v^T} = \\mathbf {I}_N, 1 \\le v \\le M \\\\\\end{split}$ Because the constraint terms normalize the scale of $\\lbrace \\mathbf {U}^1, \\mathbf {U}^2, \\ldots , \\mathbf {U}^M\\rbrace $ , the smooth regularized term $\\mathbf {\\Omega }(\\mathbf {U}^v)$ can be neglected in the objective function of MvLLE.", "That is, the above equation reduces to the following: $ \\begin{split}& \\mathop {\\min }\\mathcal {\\mathbf {O}}\\left( \\mathbf {U}^1, \\mathbf {U}^2, \\ldots , \\mathbf {U}^M \\right) = \\\\& \\sum _{v=1}^M tr(\\mathbf {U}^{v}\\mathbf {{({I-S^v})}^T(I-S^v)} \\mathbf {U^{v^T}}) \\\\& + \\lambda _{C} \\sum _{v \\ne w} {tr\\left( \\mathbf {U}^v (\\mathbf {I}_N - {\\mathbf {D}^w}^{ - 1/2}\\mathbf {K}^w{\\mathbf {D}^w}^{ - 1/2}) {\\mathbf {U}^v}^T \\right)}\\\\&\\hspace{5.0pt}s.t.\\hspace{10.0pt}\\mathbf {U}^{v}\\mathbf {U}^{v^T} = \\mathbf {I}_N, 1 \\le v \\le M \\\\\\end{split}$" ], [ "Optimization", "Referring to the optimization procedure for GCMLF, Eq.", "(REF ) can be approximately solved.", "When solving the $v$ th view, with all views fixed but $\\mathbf {U}^v$ , we get the following optimization for the $v$ th view: $\\begin{split}& \\mathop {\\min }\\mathcal {\\mathbf {O}}\\left( \\mathbf {U}^v \\right) = tr\\left(\\mathbf {U}^{v}\\mathbf {{({I-S^v})}^T(I-S^v)} \\mathbf {U^{v^T}}\\right) \\\\& + \\lambda _{C} \\sum _{1 \\le v \\ne w}^M {tr\\left( \\mathbf {U}^v (\\mathbf {I}_N - {\\mathbf {D}^w}^{ - 1/2}\\mathbf {K}^w{\\mathbf {D}^w}^{ - 1/2}) {\\mathbf {U}^v}^T \\right)}\\\\&\\hspace{5.0pt}s.t.\\hspace{10.0pt}\\mathbf {U}^{v}\\mathbf {U}^{v^T} = \\mathbf {I}_N \\\\\\end{split}$ Due to the properties of the matrix trace, the above equation is equivalent to the following optimization problem: $\\begin{split}& \\mathop {\\min }\\mathcal {\\mathbf {O}}\\left( \\mathbf {U}^v \\right) = tr(\\mathbf {U}^{v} ( \\mathbf {{({I-S^v})}^T(I-S^v)} + \\\\& \\lambda _{C} \\sum _{1 \\le v \\ne w}^M { (\\mathbf {I}_N - {\\mathbf {D}^w}^{ - 1/2}\\mathbf {K}^w{\\mathbf {D}^w}^{ - 1/2}) } ) \\mathbf {U^{v^T}}) \\\\&\\hspace{5.0pt}s.t.\\hspace{10.0pt}\\mathbf {U}^{v}\\mathbf {U}^{v^T} = \\mathbf {I}_N \\\\\\end{split}$ Under the constraint condition $\\mathbf {U}^{v}\\mathbf {U}^{v^T} = \\mathbf {I}_N$ , the above equation can be efficiently solved by eigenvalue decomposition.", "In this way, we can solve all the variables $\\lbrace \\mathbf {U}^v,\\mathbf {G}_{\\ast }^v, 1\\le v \\le M\\rbrace $ iteratively, and the whole procedure to solve MvLLE is summarized in Algorithm REF.", "According to the convergence analysis for our framework in Section REF , it can be easily verified that Algorithm REF for MvLLE will converge within a limited number of iteration steps.", "We also conduct extensive experiments to verify the convergence property of the proposed method.", "Fig.", "REF shows the relation between the objective values and the iterations.", "As shown in Fig.", "REF , we can see that as the iterations increase, the objective function value of the proposed method decreases quickly and reaches a stable point after a few iterations, while the classification accuracy increases dramatically during the first few iterations and
then reaches a stable high level on these four benchmark databases.", "For example, for the Holidays dataset, the proposed method reaches the stable point in terms of classification accuracy within about fifteen iterations.", "Both the theoretical proof and the experiments demonstrate that the proposed method can obtain the local optimum quickly and has a good convergence property.", "Algorithm (The optimization procedure for MvLLE).", "Input: the multi-view data $\\lbrace \\mathbf {X}^v,\\forall 1\\le v \\le M \\rbrace $ , the hyperparameter $\\lambda _{C}$ , and the kernel function $\\mathbf {\\kappa }(\\cdot , \\cdot )$ for the similarity matrix $\\mathbf {K}$ .", "Initialization: for $v=1:M$ , construct $\\mathbf {S}^v$ by solving Eq.", "(REF ) and initialize $\\mathbf {U}^v$ by solving Eq.", "(REF ).", "While not converged: for $v=1:M$ , update $\\mathbf {K}^v$ for the $v$ th view according to the kernel function $\\mathbf {\\kappa }(\\cdot , \\cdot )$ ; then, for $v=1:M$ , update $\\mathbf {U}^v$ by using eigenvalue decomposition to solve Eq.", "(REF ).", "Output: the learned representations $\\lbrace \\mathbf {U}^v, 1\\le v \\le M \\rbrace $ .", "Figure: Corel-1K dataset" ], [ "Time complexity", "The computational cost for MvLLE is mainly composed of two parts.", "One is the construction of the variables $\\lbrace \\mathbf {S}^v, 1 \\le v \\le M \\rbrace $ and the initialization of the variables $\\lbrace \\mathbf {U}^v, 1 \\le v \\le M \\rbrace $ , which solves $\\mathbf {S}^v$ and $\\mathbf {U}^v$ according to Eq.", "(REF ) and Eq.", "(REF ).", "The other is to iteratively update $\\mathbf {K}^v$ and $\\mathbf {U}^v$ , which needs to perform the computation of the similarity matrix and the eigenvalue decomposition in each iteration, respectively.", "It is easy to see that the time complexity of Algorithm REF is mainly influenced by the number of iterations and the eigenvalue decomposition process.", "Therefore, its time complexity is about O($T \\times M \\times N^3$ ), where $T$ is the number of iterations of the alternating optimization procedure.", "Note that, based on the convergence of Algorithm REF, the number of iterations $T$ is limited." ], [ "Discussion", "LLE and LE are two heterogeneous graph embedding methods, in which LLE is used to construct the graph learning loss term and LE is used to regularize the dependence between two views in Eq.", "(REF ), respectively.", "Note that LLE is based on manifold space reconstruction, which aims to preserve the reconstruction relationships among samples.", "Therefore, when LE is utilized to construct the graph learning loss term, LLE can alternatively be used to construct the graph consensus term between two views by Eq.", "(REF ).", "To facilitate the solution, we choose the former to specify the graph learning loss term in Eq.", "(REF ) in this paper.", "Figure: Some examples in Corel-1K dataset" ], [ "Experiments", "In this section, we introduce the details of several experiments on document classification, face recognition, and image retrieval to verify the effectiveness of our proposed framework.", "First, six benchmark datasets and the related methods for comparison are described in detail in Section REF .", "Then, we evaluate the performance of our framework by comparing all methods in Section REF , Section REF , and Section REF , respectively.", "Finally, we discuss the performance of MvLLE based on the experimental results on the six benchmark datasets in Section REF ."
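Before presenting the experimental setup, the per-view update of MvLLE in Eq. (REF ), solved by eigenvalue decomposition as described above, can be sketched concretely. This is a minimal NumPy illustration; the function name and the dense-matrix assumption are our own.

```python
import numpy as np

def solve_mvlle_view(S_v, K_others, lam_c, d):
    """Per-view update of MvLLE via eigendecomposition (Eq. (REF)).

    S_v      : (N, N) LLE reconstruction weights of the v-th view.
    K_others : list of (N, N) kernel similarity matrices of the other views.
    lam_c    : trade-off parameter lambda_C.
    d        : embedding dimensionality; returns U_v of shape (d, N).
    """
    N = S_v.shape[0]
    I = np.eye(N)
    M_mat = (I - S_v).T @ (I - S_v)           # LLE embedding cost matrix
    for K in K_others:                        # add the consensus Laplacians
        d_inv_sqrt = 1.0 / np.sqrt(K.sum(axis=1))
        M_mat += lam_c * (I - (d_inv_sqrt[:, None] * K) * d_inv_sqrt[None, :])
    M_mat = 0.5 * (M_mat + M_mat.T)           # symmetrize for stability
    eigvals, eigvecs = np.linalg.eigh(M_mat)  # eigenvalues in ascending order
    return eigvecs[:, :d].T                   # d smallest eigenvectors as rows
```

Under the orthonormality constraint, the trace objective is minimized by the eigenvectors associated with the d smallest eigenvalues, which is what the final line returns.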
], [ "Datasets and Compared Methods", "Datasets: In our experiments, six datasets are used to validate the superior performance of our framework, including document datasets (3Sourcehttp://mlg.ucd.ie/datasets/3sources.html and Corahttp://lig-membres.imag.fr/grimal/data.html), face datasets(ORLhttp://www.uk.research.att.com/facedatabase.html and Yalehttp://cvc.yale.edu/projects/yalefaces/yalefaces.html), and image datasets(Corel-1Khttps://sites.google.com/site/dctresearch/Home/content-based-image-retrieval and Holidayshttp://lear.inrialpes.fr/jegou/data.php).", "Two document datasets are two benchmark multi-view datasets.", "For the face and image datasets, we utilize different descriptors to extract their corresponding multi-view features, in which some samples in these datasets are shown in Fig.", "REF .", "The detailed information of these datasets are summarized as follows: 3Source consists of three well-known news organizations: BBC, Reuters, and Guardian, where each news is manually annotated with one of six labels.", "Because each news source can be used as one view, we choose these news sources as a multi-view benchmark dataset.", "Cora contains 2708 scientific publications of seven categories, where each publication document could be described by content and citation.", "Thus, Cora could be considered as a two-view benchmark dataset.", "ORL is collected from 40 distinct subjects, where ten different images are gathered for each subject.", "For each person, the images are taken at different times, varying the lighting, facial expressions, and facial details.", "Yale is composed of 165 faces from 15 peoples, which has been widely used in face recognition.", "Each person has eleven images, with different facial expressions and facial details.", "Corel-1K manually collects one thousand images corresponding to ten categories, such as human beings, buildings, landscapes, buses, dragons, elephants, horses, flowers, mountains, and foods.", "And there are one hundred images in each category.", "Holidays consists of 1491 images corresponding to 500 categories, which are mainly captured for sceneries.", "To demonstrate the superior performance of our framework, we compare MvLLE with the following methods, where the first two are single-view methods with the most informative view, and the others are multi-view learning methods.", "BLE is Laplacian Eigenmaps (LE) [48] with the most informative view, i.e., one that achieves the best performance with LE.", "BLLE is Locality Linear Embedding (LLE) [32] with the most informative view, similar to BLE.", "MSE [14] is a multi-view spectral method based on global coordinate alignment.", "CCA [38] is used to deal with multi-view problems by maximizing the cross correlation between two views.", "Co-reg [21] is a multi-view spectral embedding by regularizing different views to be close to each other.", "AMGL [15] is an auto-weighted multiple graph learning method, which could allocate ideal weight for each view automatically." 
], [ "Document Classification", "In this section, we evaluate the experimental results of the document classification tasks on 3Source and Cora datasets.", "For these two datasets, we randomly select 50% of the samples as training samples and the remaining 50% of the dataset as testing samples every time.", "All the methods are conducted to project all samples to the same dimensionality.", "Specifically, the dimensions of the embedding obtained by all methods all maintain 20 and 30 dimensions.", "We adopt 1NN as the classifier to classify the testing ones.", "After conducting this experiment 30 times with different random training samples and testing samples, we calculate the mean classification accuracy (MEAN) and max classification accuracy (MAX) on 3Source and Cora datasets as the evaluation index for all methods.", "Then, we can summary the evaluation indexes of MEAN and MAX results in Table REF and Table REF .", "Table: The classification accuracy on 3Source dataset.Table: The classification accuracy on Cora dataset.Through the experimental results of Tables REF -REF , it's clear that the proposed MvLLE is significantly superior to its counterparts in most situations.", "Among the comparing methods, CCA is close to the proposed MvLLE on classification performance, which might take more advantages of complementary information than other compared methods on 3Source and Cora datasets.", "Compared with other multi-view methods, the performance of our MvLLE is more stable.", "For example, Co-reg achieves promising results on 3Source dataset while the performance degrades sharply on the Cora dataset." ], [ "Face Recognition", "In this section, we evaluate the experimental results of the face recognition tasks on Yale and ORL datasets.", "For these two datasets, we first extract their multi-view features by the different image descriptors including EDH [5], LBP [3] and Gist [4].", "Then, all the methods are conducted to project all samples to the same dimensionality and the 1NN classifier is adopted to calculate the recognition results, where the dimension of the embedding obtained by all methods all maintains 30 dimensions.", "Note that we randomly select 50% of the samples as training samples and the remaining 50% of the samples as testing samples every time and run all methods 30 times with different random training samples.", "Because the task of face recognition mainly cares about the recognition accuracy, we choose the recognition accuracy as the evaluation index in this part.", "The boxplot figures of accuracy values of all methods on Yale and ORL datasets are shown in Fig.", "REF and Fig.", "REF .", "Figure: The face recognition accuracy on Yale dataset.Figure: The face recognition accuracy on ORL dataset.Through the experiment results of the above two experiments in Figs.", "REF -REF , the multiple view performances are usually better than the independent view.", "This demonstrates that multiple views can improve the performance of face recognition.", "Among these multi-view methods, we can find that MvLLE outperforms its comparing methods in most situations, which shows the superiority of the proposed framework.", "Besides our MvLLE, Co-reg obtains stably better than other methods on the performance of face recognition, which takes more advantages of complementary information than other comparing methods on Yale and ORL face datasets." 
], [ "Image Retrieval", "In this section, we conduct two experiments on Holidays and Corel-1K datasets for image retrieval.", "For these two datasets, we both employ three image descriptors of MSD [49], Gist [4], and HOC [50] to extract multi-view features for all images.", "All the methods are conducted to project all samples to the same dimensionality.", "In this part, the dimensions of the embedding obtained by all methods maintain 30 dimensions.", "Besides, $\\mathop {l}_1$ distance is utilized to measure similarities between samples.", "At the aspect of the validation index, we choose several common indexes, including average precision rate (Precision), average recall rate (Recall), mean average precision (MAP), and $F_1$ -Measure, to validate the performances for image retrieval.", "Actually, high Precision and Recall are required and $F_1$ -Measure is put forward as the overall performance measurement.", "Then, we conducted this experiment on these two datasets repeatedly for twenty times.", "For Holidays dataset, we summarize these experiment results, including Precision, Recall, MAP, and $F_1$ -Measure, on top 2 retrieval results in Table REF .", "For Corel-1K dataset, we randomly select 10 images as query ones for each category.", "Afterward, the relation curves on validation indexes are drawn in Fig.", "REF .", "Table: The image retrieval accuracy on Holidays dataset.Figure: F 1 F_1-MeasureThrough these experimental results in Table REF and Fig.", "REF , it can be readily found that our proposed MvLLE achieves better performance than the other compared methods in most situations in the field of image retrieval.", "Our proposed method MvLLE could integrate compatible and complementary information from multiple views and obtain a better embedding from these views.", "Therefore, the results in Table REF and Fig.", "REF could show that our framework can achieve good performance in the field of face recognition.", "Note that the performance of BLE is bad because of its unreasonable way to deal with multi-view features." ], [ "Discussion", "For the experiment results in Table REF and Table REF on text classification, we can find that MvLLE outperforms other comparing methods in most situations.", "Similar to the performance validation in text classification, our proposed MvLLE also obtain promising performance in face recognition tasks through the evaluations in Figs.REF -REF .", "As shown in Table REF and Fig.REF , our method could be also utilized to execute the image retrieval task.", "From the above evaluations, it's readily seen that the representations obtained by our method could be more effective and suitable for multi-view features.", "Besides, other multi-view methods outperform the other single-view methods in most situations, which could show multi-view learning is a valuable research field indeed.", "Compared with BLLE, MvLLE could achieve significantly better performance by integrating the complementary information among different views meanwhile preserving its intrinsic characteristic in each view.", "Note that the experimental results of our proposed MvLLE on six datasets are without fine-tuning, and usage of fine-tuning might further improve its performance.", "Besides, we find that MvLLE could converge within limited iterations in most experiments, which empirically indicates the fast speed of the convergence for our method." 
], [ "Conclusion", "In this paper, we propose a novel unified multi-view framework, named Graph Consensus Multi-view Learning Framework (GCMLF), to extend most of graph embedding works based on single view into the multi-view setting.", ", It encourages all views to learn with each other according to the complementarity among views and explores the heterogeneous graph structure in each view independently to preserve the diversity property among all views.", "Based on the sufficient theoretical analysis, we show that GCMLF is a more robust and flexible multi-view learning framework than those existing multi-view methods.", "Correspondingly, an algorithm based on alternating direction strategy is proposed to solve GCMLF, and the relative proof guarantees that it can converge to a local optimal solution.", "Furthermore, we provide one typical implement based on two heterogeneous graph embedding methods of LLE and LE, called Multi-view Locality Linear Embedding (MvLLE).", "Extensive experimental results demonstrate that the proposed MvLLE can effectively explore the diversity information and underlying complementary information of the given multi-view data, and outperforms its compared methods.", "With the rapid development of graph neural networks [51], [52], [53], how to extend our framework into this domain is very meaningful yet full of challenges, and we will consider it in our future work." ], [ "Acknowledgements", "The authors would like to thank the anonymous reviewers for their insightful comments and suggestions to significantly improve the quality of this paper.", "This work was supported by National Natural Science Foundation of PR China(61672130, 61972064) and LiaoNing Revitalization Talents Program(XLYC1806006).", "[Figure: NO_CAPTION [Figure: NO_CAPTION [Figure: NO_CAPTION" ] ]
2105.11781
[ [ "Feature Space Targeted Attacks by Statistic Alignment" ], [ "Abstract By adding human-imperceptible perturbations to images, DNNs can be easily fooled.", "As one of the mainstream methods, feature space targeted attacks perturb images by modulating their intermediate feature maps, for the discrepancy between the intermediate source and target features is minimized.", "However, the current choice of pixel-wise Euclidean Distance to measure the discrepancy is questionable because it unreasonably imposes a spatial-consistency constraint on the source and target features.", "Intuitively, an image can be categorized as \"cat\" no matter the cat is on the left or right of the image.", "To address this issue, we propose to measure this discrepancy using statistic alignment.", "Specifically, we design two novel approaches called Pair-wise Alignment Attack and Global-wise Alignment Attack, which attempt to measure similarities between feature maps by high-order statistics with translation invariance.", "Furthermore, we systematically analyze the layer-wise transferability with varied difficulties to obtain highly reliable attacks.", "Extensive experiments verify the effectiveness of our proposed method, and it outperforms the state-of-the-art algorithms by a large margin.", "Our code is publicly available at https://github.com/yaya-cheng/PAA-GAA." ], [ "Introduction", "Deep neural networks (DNNs) [9], [10], [22], [24] have made impressive achievements in these years, and various fields are dominated by them, e.g., object detection [19].", "However, recent works demonstrate that DNNs are highly vulnerable to the adversarial examples [23], [1] which are only added with human-imperceptible perturbations.", "To find out the insecure “bugs\" in the DNNs, many works pay attention to the generation of adversarial examples.", "In general, the attack methods can be grouped into three broad categories: white-box, gray-box, and black-box attacks.", "For the white-box setting [18], [2], the adversaries can access all information (e.g., the architectures and parameters) of the victim's models.", "Thus the update directions of the adversarial examples are accurate.", "For the gray-box setting [11], [20], only the output logits or labels are available.", "Therefore, most of the works craft adversarial examples through a considerable amount of queries.", "However, in many scenarios, both the white-box and the gray-box attacks are infeasible owing to the opaque deployed models.", "For the black-box setting, all information of the victim's models is unavailable.", "Since the decision boundaries of different DNNs are similar, the resultant adversarial examples crafted for the substitute models, e.g., well-trained models, are also practical for others, which is called the transferability of adversarial examples.", "Most black-box attack methods [3], [12], [5], [6], [17] aim at enhancing the transferability of adversarial examples depending on information from the classification layers of the substitute models.", "However, it is still challenging to improve the success rate of black-box targeted attacks, i.e., induce the victim's models to predict the pre-set target labels.", "To tackle the poor effectiveness of black-box targeted attacks, researchers [21], [12] delve into the feature space targeted attacks, which perturb images by modulating their intermediate feature maps.", "For example, given a source image, [12] first select a single sample of the target label whose intermediate activation is furthest from 
the source one under Euclidean distance.", "Then, the perturbation is crafted by minimizing the Euclidean distance between the source and target features.", "However, since Euclidean distance prefers to focus on the spatial gap between two features, it will select the spatially furthest target image rather than the most semantically distant one.", "For instance, considering a source image with a cat on the left and the target label \"dog\", under the above setting, the algorithm tends to choose a target image that has a dog on the right instead of on the left.", "When it comes to the generation of the perturbation, the algorithm then has to both align the semantic meanings of the source and target features and minimize the huge spatial discrepancy between them.", "Overall, the current choice of pixel-wise Euclidean distance to measure the discrepancy is questionable, as it unreasonably imposes a spatial-consistency constraint on the source and target features.", "To produce spatial-agnostic measurements, we propose two novel approaches called Pair-wise Alignment Attack and Global-wise Alignment Attack, which attempt to measure similarities between features by high-order statistics with translation invariance.", "From this perspective, we treat feature space targeted attacks as a problem of statistic alignment.", "By aligning the source and target high-order statistics, rather than depending on the Euclidean distance, we can make the two feature maps semantically close without introducing an excessive spatial gap in feature space.", "To sum up, our contributions are three-fold: 1) We point out that the current choice of pixel-wise Euclidean Distance to measure the discrepancy between two features is questionable, for it unreasonably imposes a spatial-consistency constraint on the source and target features.", "By exploring high-order statistics with translation invariance, two novel methods are proposed: a) Pair-wise Alignment Attack and b) Global-wise Alignment Attack, which deal with feature space targeted attacks as a problem of statistic alignment; 2) To obtain high-reliability results, we systematically analyze the layer-wise transferability.", "Furthermore, to set all images under the same transfer difficulty, which ranges from the easiest to the hardest, we assign target labels of the same difficulty level to them and give a comprehensive evaluation of our methods;", "and 3) Extensive experimental results show the effectiveness of our methods, which outperform the state-of-the-art by $6.92\\%$ at most and $1.70\\%$ on average in typical setups."
], [ "Related Works", "After the discovery of adversarial examples [23], [1], many excellent works are proposed.", "Generally, based on different goals, attack methods can be divided into non-targeted attacks and targeted attacks.", "For non-targeted attacks (e.g., [26]), all need to do is fooling DNNs to misclassify the perturbed images.", "For targeted attacks, the adversaries must let the DNNs predict specific untrue labels for the adversarial examples.", "[16] apply Poincar$\\acute{e}$ distance and Triplet loss to regularize the targeted attack process.", "[7] propose staircase sign method to utilize the gradients of the substitute models effectively.", "The above methods craft adversarial examples by directly using the outputs of the classification layers, i.e., logits (un-normalized log probability).", "In addition to these, researchers [27] observe that distorting the features in the intermediate layers of DNNs can also generate transferable adversarial examples.", "Based on this, [12] generate adversarial examples by minimizing the Euclidean distance between the source and target feature maps.", "[13] leverage class-wise and layer-wise deep feature distributions of substitute models .", "[14] extract feature hierarchy of DNNs to boost the performance of targeted adversarial attacks further.", "However, the above methods need to train specific auxiliary classifiers for each target label, thus suffering from expensive computation costs." ], [ "Methodology", "In this section, we first give some notations of targeted attacks, and the untargeted version can be simply derived.", "Then we describe our proposed methods, i.e., Pair-wise Alignment Attack and Global-wise Alignment Attack, in Subsection REF and REF .", "The attack process is detailed in Subsection REF ." ], [ "Adversarial targeted attacks.", "This task aims at fooling a DNN $\\mathcal {F}$ to misclassify perturbed image $x^{adv} = x + \\delta $ , where $x$ is the original image of label $y$ , $ \\delta $ is an imperceptible perturbation added on $x$ .", "In our work, $\\ell _\\infty $ -norm is applied to evaluate the imperceptibility of perturbation, i.e., $\\left\\Vert \\delta \\right\\Vert _\\infty \\le \\epsilon $ .", "Different from the untargeted attacks that only need to let $\\mathcal {F}$ will not perform correct recognition, targeted attacks restrict the misclassified label to be $y^{tgt}$ .", "The constrained optimization of targeted attacks can be written as: $x^{adv}=\\operatornamewithlimits{arg\\,min}\\mathcal {L}(x^{adv}, y^{tgt}),\\mathit {s.t.}", "\\left\\Vert x^{adv}-x\\right\\Vert _\\infty \\le \\epsilon ,$ where $\\mathcal {L(\\cdot ,\\cdot )}$ is the loss function to calculate perturbations." ], [ "Perceptions of DNNs.", "DNNs, especially convolutional neural networks (CNNs), have their patterns to perceive and understand images [28], which is caused by the mechanism of convolutional layers.", "As introduced in [25], convolution kernels do not perform a one-time transformation to produce result from the input.", "Instead, a small region of input is perceived iteratively so that features at every layer still hold local structures similar to that of the input (see Appendix ).", "This property of convolution leads to the translation homogeneity of intermediate feature maps.", "Therefore, measuring only the Euclidean distance between two feature maps will be inaccurate when there are translations, rotations, etc." 
], [ "Pair-wise Alignment Attack", "Given an image $x^{tgt}$ of target label $y^{tgt}$ , a specific intermediate layer $l$ from network $\\mathcal {F}$ .", "We use $S^l\\in \\mathbb {R}^{{N_l}\\times {M_l}}$ to denote the feature of $x^{adv}$ at layer $l$ of $\\mathcal {F}$ .", "Similarly, $T^l\\in \\mathbb {R}^{{N_l}\\times {M_l}}$ is the feature of $x^{tgt}$ .", "Specifically, $N_l$ is the number of channels and $M_l$ is the product of the height and width of features.", "As described before, since Euclidean distance imposes unreasonable spatial-consistency constraint on $S^l$ and $T^l$ , choosing it as the metric leads to redundant efforts on spatial information matching.", "To handle this, we propose the Pair-wise Alignment Attack ($\\mathbf {PAA}$ ).", "Assuming that the label information is modeled by highly abstract features, we denote $S^l$ and $T^l$ are under two distributions $p$ and $q$ , which models the label information $y$ and $y^{tgt}$ , respectively.", "Naturally, an arbitrary feature extracted from $\\mathcal {F}$ is treated as a sample set of a series of feature vectors over corresponding distribution.", "So the problem is how to utilize these samples to further estimate the difference between $p$ and $q$ .", "Empirically, source and target sample sets $\\Omega \\sim p$ , $Z \\sim q$ are built by splitting $S^l$ , $T^l$ into individual vectors, where $\\Omega \\!=\\!\\lbrace {s_{\\cdot i}}\\rbrace _{i=1}^{M_l}$ , $Z\\!=\\!\\lbrace {t_{\\cdot j}}\\rbrace _{j=1}^{M_l}$ .", "Another way of splitting in where $\\Omega \\!=\\!\\lbrace {s_{ i\\cdot }}\\rbrace _{i=1}^{N_l}$ , $Z\\!=\\!\\lbrace {t_{j\\cdot }}\\rbrace _{j=1}^{N_l}$ is analysed in Appendix  After that, through measuring the similarity of $\\Omega $ and $Z$ , the discrepancy between $p$ and $q$ is estimated.", "Typically, this is a two-sample problem [8].", "As introduced in [8], $\\operatorname{MMD}^{2}$ has been explored for the two-sample problem.", "Let $\\mathcal {H}$ be a reproducing kernel Hilbert space (RKHS) with an associated continuous kernel $k(\\cdot ,\\cdot )$ .", "For all $f\\!\\in \\!\\mathcal {H}$ , the mean embedding of $p$ in $\\mathcal {H}$ is an unique element $\\mu _{p}$ which satisfies the condition of ${\\mathbb {E}_{\\omega \\sim {p}}f\\!=\\!\\langle f,\\mu _{p}\\rangle }_\\mathcal {H}$ .", "Then in our task, $\\operatorname{MMD}^{2}[p, q]$ is defined as the RKHS distance between $\\mu _{p}$ and $\\mu _{q}$ : $\\begin{split}&\\operatorname{MMD}^{2}[p, q]=~~ \\Vert \\mu _{p}-\\mu _{q}\\Vert ^2_{\\mathcal {H}} \\\\[3.5pt]=&\\; {\\langle \\mu _{p},\\mu _{p}\\rangle }_{\\mathcal {H}}+{\\langle \\mu _{q},\\mu _{q}\\rangle }_{\\mathcal {H}}-2{\\langle \\mu _{p},\\mu _{q}\\rangle }_{\\mathcal {H}}\\\\[1pt]=&\\frac{1}{M_l^2}\\sum \\limits _{i,j = 1}^{M_l}{k({s_{\\cdot i}},s_{\\cdot j}})+\\frac{1}{M_l^2}\\sum \\limits _{i,j = 1}^{M_l} {k(t_{\\cdot i},t_{\\cdot j}})\\\\[1pt]&- \\frac{2}{M_l^2}\\sum \\limits _{i,j = 1}^{M_l,M_l}{k(s_{\\cdot i},t_{\\cdot j}}).\\end{split}$ Specifically, $\\operatorname{MMD}^{2}$ is calculated by two kinds of pairs: a) intra-distribution pairs $\\left(s_{\\cdot i}, s_{\\cdot j}\\right)$ , $\\left(t_{\\cdot i}, t_{\\cdot j}\\right)$ and b) inter-distribution pair $\\left(s_{\\cdot i}, t_{\\cdot j}\\right)$ .", "Obviously, $\\operatorname{MMD}^{2}$ is not affected by spatial translations, i.e., shifting or rotation will not change the result of equation REF , which is the key difference from Euclidean distance.", "Furthermore, based on the critical property 
$\\operatorname{MMD}^{2}[p, q]\\!=\\!0\\,\\mathit {iff}\\, p\\!=\\!q$  [8], minimizing Eq.", "(REF ) is equivalent to modulating the source feature toward the target's: $\\begin{split}\\mathcal {L_P}{(S^l, T^l)}=&\\operatorname{MMD}^{2}[p, q].\\end{split}$ Since the kernel choice plays a key role in mean embedding matching [8], three kernel functions are studied in our experiments to evaluate their effectiveness in statistic alignment: Linear kernel $\\mathbf {PAA_\\ell }$ : $k(s,t)\\!=\\!{s^{\\scriptscriptstyle T}}t$ .", "Polynomial kernel $\\mathbf {PAA_p}$ : $k(s,t)\\!=\\!{(s^{\\scriptscriptstyle T}t+c)}^d$ .", "Gaussian kernel $\\mathbf {PAA_g}$ : $k(s,t)\\!=\\!\\exp {(-{\\Vert s - t\\Vert _2^2 \\over {2\\sigma ^2}})}$ , where the bias $c$ , the power $d$ , and the variance $\\sigma ^2$ are hyper-parameters.", "Following [12], by randomly sampling images from each label, a gallery is maintained for picking target images.", "With the help of the gallery, the pipeline of getting $x^{tgt}$ by $\\mathbf {PAA}$ is as follows: Given a source image $x$ , we obtain $y^{tgt}$ by using different strategies of target label selection.", "After that, $x^{tgt}$ is chosen from the corresponding sub-gallery by finding the image with the largest loss $\\mathcal {L_P}$ .", "It is worth noting that we adopt the linear-time unbiased estimation of $\\operatorname{MMD}^{2}[p, q]$ from [8] to decrease the space and computation complexity during the selection of the target image $x^{tgt}$ ." ], [ "Global-wise Alignment Attack", "Since Pair-wise Alignment Attack involves time-consuming pair-wise computation, we propose another efficient approach that achieves comparable performance.", "Unlike the previous one, Global-wise Alignment Attack (GAA) explicitly matches the moments of the source and target sample sets $\\Omega $ and $Z$ .", "Specifically, we employ two global statistics: a) the first-order raw moment (mean) and b) the second-order central moment (variance) to guide the modulation of features.", "Let $\\mu _{S^l}^i$ , $\\mu _{T^l}^i$ , $\\sigma _{S^l}^i$ , $\\sigma _{T^l}^i$ be the means and variances of the $i$ th channel of $S^l$ and $T^l$ , respectively: $&\\mu _{S^l}^i = \\frac{1}{M_l}\\sum \\nolimits _{j = 1}^{M_l} {(S^l)}_{ij},\\; {\\sigma _{S^l}^i} = \\operatorname{Var}((S^l)_{i \\cdot }), \\\\&\\mu _{T^l}^i = \\frac{1}{M_l}\\sum \\nolimits _{j = 1}^{M_l} {(T^l)_{ij}}, \\;{\\sigma _{T^l}^i} = \\operatorname{Var}({(T^l)}_{i \\cdot }).$ Minimizing the gaps in these two moments between $\\Omega $ and $Z$ is equivalent to aligning the source and target features globally: ${\\begin{array}{c}\\delta _\\mu = \\left\\Vert \\mu _{S^l}^i - \\mu _{T^l}^i\\right\\Vert ,\\delta _\\sigma = \\left\\Vert \\sigma _{S^l}^i - \\sigma _{T^l}^i\\right\\Vert , \\\\[3.5pt]\\mathcal {L_G}{(S^l, T^l)} = \\delta _\\mu + \\delta _\\sigma .\\end{array}}$ The reasons for performing Global-wise Alignment are: 1) the two moments are practical for estimating the distribution over a dataset, just like what batch-normalization does; and 2) when the architectures of DNNs go deeper, these two moments contain more complicated traits to represent different distributions [15].", "Similar to $\\mathbf {PAA}$ , $\\mathbf {GAA}$ also chooses the target image from the gallery by calculating Equation (REF ).", "Figure: Performance (tSuc and tTR) of $\\mathbf {PAA_p}$ w.r.t.", "$\\mathit {2nd}$ , $10th$ , $100th$ , $500th$ , and $1000th$ settings.", "Target label of higher ranking leads to better performance."
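A minimal PyTorch sketch of the two losses defined above may read as follows. The function names are our own, and the aggregation of the per-channel moment gaps in GAA by a single norm is one illustrative choice of how the per-channel terms can be combined.

```python
import torch

def paa_poly_loss(S, T, c=0.0, d=2):
    """Biased estimate of MMD^2 in Eq. (REF) with the polynomial kernel.

    S, T : (N_l, M_l) source/target feature maps; the columns are the
           feature vectors s_{.i} and t_{.j}.
    """
    k_ss = (S.t() @ S + c) ** d               # intra-source pairs
    k_tt = (T.t() @ T + c) ** d               # intra-target pairs
    k_st = (S.t() @ T + c) ** d               # inter-distribution pairs
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()

def gaa_loss(S, T):
    """GAA loss of Eq. (REF): align channel-wise means and variances."""
    d_mu = torch.norm(S.mean(dim=1) - T.mean(dim=1))
    d_var = torch.norm(S.var(dim=1) - T.var(dim=1))
    return d_mu + d_var
```

Note that both losses read the feature maps only through column inner products or per-channel statistics, so neither is affected by a spatial permutation of the columns, matching the translation-invariance argument above.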
], [ "Attack Algorithm", "Motivated by MIFGSM [3] which using momentum to memorize previous gradients and follow the setting of AA [12], we integrate momentum to the pipeline of perturbation generation.", "Specifically, for two kinds of attacks, i.e., PAA and GAA, we firstly calculate gradients step-by-step: $g_\\nu = {\\nabla _{x^{adv}_{_\\nu }}} \\mathcal {L}(S^l_\\nu ,T^l),$ where $\\nu $ is the current step during the whole iteration, $S_{\\nu }^l$ is the intermediate feature of the perturbed image $x^{adv}_\\nu $ at iteration $\\nu $ , and $x_0^{adv}\\!=\\!", "x$ .", "Then the momentum term is accumulated by previous gradients: $\\begin{split}{\\beta _{\\nu + 1}} = {\\mu \\cdot \\beta _\\nu } + \\frac{g_\\nu }{{\\left\\Vert {g_\\nu } \\right\\Vert }},\\end{split}$ where $\\mu $ refers to the decay factor, ${\\beta _ \\nu }$ is the momentum term at iteration $\\nu $ and ${\\beta _ 0}$ is initialized to 0.", "Finally, under the $\\ell _\\infty $ -norm constraint, adversarial examples are crafted by performing the above calculations iteratively: $\\begin{split}{x_{\\nu + 1}^{adv}} = \\operatorname{clip}_{x,\\epsilon }({x^{adv}_\\nu } - \\alpha \\cdot \\operatorname{sign}({\\beta _{\\nu +1} })),\\end{split}$ where $\\alpha $ is a given step size.", "Table: Quantitative comparisons with state-of-the-art attacks under the random sample strategy of target label selection.", "Ours achieve the best performance in most cases.Table: Transferability (tSuc and tTR) w.r.t.", "2𝑛𝑑\\mathit {2nd}, 10th10th, 100th100th, 500th500th, and 1000th1000th settings.", "Formally, different target labels lead to different performance and those of lower-ranking lead to worse performance.Figure: Performance (tSuc and tTR) of 𝐆𝐀𝐀\\mathbf {GAA} w.r.t.", "2𝑛𝑑\\mathit {2nd}, 10th10th, 100th100th, 500th500th, and 1000th1000th settings.", "Target label of higher ranking leads to better performance.Figure: tSuc and tTR performance w.r.t.", "relative layer depth for multiple transfer scenarios.", "The figure is split into four phases: upper left, upper right, bottom left, and bottom right, corresponding to black-box attacks transferring from Den121, Inc-v3, VGG19, and Res50.", "All of our proposed methods outperform AA in most cases, which indicates the effectiveness of statistic alignment on various layers.Figure: tSuc results w.r.t.", "bias cc for 𝐏𝐀𝐀 𝐩 \\mathbf {PAA_p} transferring from Den121 (white-box model) to VGG19, Inc-v3, and Res50 (black-box model).", "We observe the highest results when c=0c\\!=\\!0, i.e., polynomial with pure second-order terms." ], [ "Experiments", "To make comprehensive comparisons with state-of-the-arts, we conduct a series of experiments to evaluate performance.", "Specifically, baselines include a feature space targeted attack method: AA [12] and two FGSM-based methods: MIFGSM [3] and TIFGSM [4].", "Appendix  give comparisons with other FGSM-based methods." ], [ "ImageNet models.", "For a better evaluation of transferability, four ImageNet-trained models with different architectures are chosen: VGG-19 with batch-normalization (VGG19) [22], DenseNet-121 (Den121) [10], ResNet-50 (Res50) [9], Inception-v3 (Inc-v3) [24]." ], [ "Dataset.", "Attacking images that have already been misclassified is pointless.", "Hence for each of all 1000 labels in the ImageNet validation set, we randomly select five images (5,000 in total) to perturb, which are correctly classified by all the networks we considered." 
], [ "Layer decoding scheme.", "Following AA, a scheme for layer decoding is employed to present better which layer is chosen for the attack.", "Generally, layers are arranged from shallow to deep and numbered by relative layer depths, e.g., layer 0 of Res50 (denoted as Res50[0]) is near the input layer, and Res50[16] is closed to the classification layer.", "Appendix  details the scheme." ], [ "Target label selection.", "There are two strategies for target label selection: a) random sample adopted in AA.", "b) choose by ranking.", "Previous feature space targeted attack methods, e.g., [12], gain relatively poor performance.", "Given the prior knowledge that different target labels involve different transfer difficulties, randomly sampling the target label will lead to fluctuating transfer results (see Appendix  for more analysis).", "For instance, given an image of “cat\", it is easier to fool a model to predict it as a dog than an airplane.", "To avoid this, we assign $y^{tgt}$ by ranking.", "For example, $\\mathit {2nd}$ indicates that the label of the second high confidence is chosen to be $y^{tgt}$ .", "To give an exhaustive comparison, $2nd$ , $10th$ , $100th$ , $500th$ , and $1000th$ settings are adopted.", "We also report results under the random sample strategy to reveal the raw performance." ], [ "Implementation details.", "To make a fair comparison, all methods are set to identical $\\ell _\\infty $ constraint $\\epsilon \\!=\\!", "0.07$ , the number of iterations $T \\!=\\!", "20$ , and step size $\\alpha \\!=\\!", "\\epsilon /T\\!=\\!0.0035$ .", "The gallery size is set to $20\\times 1000$ .", "For $\\mathbf {PAA_g}$ , we set variance $\\sigma ^2$ as the mean of squared $\\ell _2$ distances of those pairs.", "For $\\mathbf {PAA_p}$ , we set bias $c\\!=\\!0$ , and only study the case of power $d=2$ .", "For TIFGSM, we adopt the default kernel length as 15.", "For MIFGSM, we set the decay factor as $\\mu \\!=\\!1.0$ ." ], [ "Evaluation metrics.", "Following AA, we adopt two metrics, i.e., targeted success rate (tSuc) and targeted transfer rate (tTR), to evaluate the transferability of adversarial examples.", "For tSuc, it equals the percentage of adversarial examples that successfully fool the victim's DNNs.", "For tTR, given an image set that contains adversarial examples that attack the substitute model successfully, tTR is the ratio that how many examples of this set can fool the black-box model too." ], [ "Comparisons with State-of-the-Art Attacks", "In this section, to comprehensively evaluate adversarial examples' transferability, we firstly attack different white-box models using the random sample strategy for target labels, then transfer the resultant adversarial examples to black-box models.", "For instance, Den121$\\rightarrow $ Res50 indicates that we generate adversarial examples from Den121 and transfer them to Res50.", "Empirically, attack performance varies according to the choice of layers.", "Under random sample strategy, VGG19[10], Den121[23], Res50[11] and Inc-v3[8] perform the best, their experimental results are shown in Table REF ." 
], [ "Effectiveness of $\\mathbf {PAA}$ with Different Kernel Functions", "As demonstrated in Table REF , all pair-wise alignments show their success in feature space targeted attack.", "Specifically, comparing with Linear kernel and Gaussian kernel, Polynomial kernel brings the best performance, and our $\\mathbf {PAA_p}$ outperforms state-of-the-arts by 6.92% at most and 1.70% on average, which shows the effectiveness of our pair-wise alignment.", "As for the reasons of the performance gains, compared with FGSM-based methods, i.e., TIFGSM and MIFGSM, we exploit the information in the intermediate feature maps to perform highly transferable attacks.", "Compared with AA, it adopts Euclidean distance for measuring differences so that shows worse performance than ours, demonstrating the effectiveness of our proposed statistic alignment." ], [ "Effectiveness of $\\mathbf {GAA}$", "Although $\\mathbf {GAA}$ requires quite simple computations to perform attacks, it still shows convincing performance against all black-box models.", "Specifically, $\\mathbf {GAA}$ outperforms the state-of-the-arts by 3.98% at most and 0.73% on average, which shows the effectiveness of global alignment between statistics from target and source.", "Moreover, when choosing Den121 and Res50 as white-box models, it shows comparable performance with $\\mathbf {PAA_\\ell }$ .", "When it becomes VGG19 or Inc-v3, $\\mathbf {GAA}$ achieves the second-best results in most cases." ], [ "Transferability w.r.t. Target Labels", "Considering different difficulties of target label ${y^{tgt}}$ , for $\\mathbf {PAA_{p}}$ and $\\mathbf {GAA}$ , we study how layer-wise transferability varies with \"$\\mathit {2nd}$ \", \"$\\mathit {10th}$ \", \"$\\mathit {100th}$ \", \"$\\mathit {500th}$ \", \"$\\mathit {1000th}$ \" setup.", "As illustrated in Figure REF and Figure REF , tSuc and tTR w.r.t.", "relative layer depth under above settings are evaluated.", "Obviously, the independence of layer-wise transferability from different target labels maintains.", "In other words, different target labels do not affect the layer-wise transferability trends, although further ${y^{tgt}}$ away from ground truth $y$ leads to a more challenging transfer-based attack.", "For case Den121$\\rightarrow $ Res50 under $\\mathit {2nd}$ , we report the results for the optimal layer of Den121 (Den121[22]) in Table REF .", "Formally, target labels of different ranks lead to different performance , and the lower-ranking leads to worse performance.", "Specifically, $2nd$ is the best case, $1000th$ refers to the worst case." ], [ "Transferability w.r.t. 
Layers", "In this section, transferability w.r.t.", "relative layer depth under $\\mathit {2nd}$ is investigated.", "Involved methods contain $\\mathbf {PAA_\\ell }$ , $\\mathbf {PAA_p}$ , $\\mathbf {PAA_g}$ , $\\mathbf {GAA}$ , and AA.", "Specifically, given the white-box and black-box model pair, each subfigure of Figure REF illustrates performance under different metric w.r.t.", "relative layer depth.", "As demonstrated in the figure, compared with the Linear kernel, the Polynomial kernel brings about better attack ability on Res50, Inc-v3, and Dense121 white-box.", "As for the VGG19 white-box, they achieve comparable results.", "Furthermore, in most of the chosen layers, all of our methods are superior to the baseline AA by a large margin.", "Similar to what is stated in [12], given a white-box model, our layer-wise transferability still holds a similar trend regardless of which black-box models we test.", "Specifically, for Den121, a deeper layer yields more transferability.", "For Inc-v3, Vgg19, and Res50, the most powerful attack comes from perturbations generated from optimal middle layers.", "This phenomenon indicates that adversarial examples generated by our optimal layers can be well transferred to truly unknown models.", "From the experimental results, under $2nd$ , we simply adopt VGG19[14], Den121[22], Res50[14], and Inc-v3[11] as our optimal layers." ], [ "Transferability w.r.t. Orders", "As mentioned above, the Polynomial kernel leads to the most powerful attack.", "Since larger bias c $(c\\ge 0)$ results in a greater proportion of lower-order terms in the polynomial, in this section, we study the appropriate value of $c$ under $\\mathit {2nd}$ and Den121[22] setup.", "Specifically, we attack Den121 using $\\mathbf {PAA_p}$ parameterized by $c$ ranging from 0.0 to 2.0 with a granularity 0.1.", "As illustrated in Figure REF , from the monotonically decreasing curves, we can achieve the most effective attack when $c=0.0$ , where tSuc is 37.00%, 24.00%, 37.78% for VGG19, Inc-v3 , and Res50.", "Once $c\\!=\\!1.3$ or larger, tSuc maintains stable.", "The overall average tSuc for VGG19, Inc-v3, Res50 are 30.78%, 19.68%, and 32.12%." ], [ "Conclusion", "In this paper, we propose a novel statistic alignment for feature space targeted attacks.", "Previous methods utilize Euclidean distance to craft perturbations.", "However, because of the spatial-related property of this metric, it unreasonably imposes a spatial-consistency constraint on the source and target features.", "To address this problem, two novel methods, i.e., Pair-wise Alignment Attack and Global-wise Alignment Attack are proposed by employing high-order translation-invariant statistics.", "Moreover, since randomly selecting target labels results in fluctuating transfer results, we further analyze the layer-wise transferability with different transfer difficulties to obtain highly reliable attacks.", "Extensive experimental results show the effectiveness of our methods." ], [ "Acknowledgements", "This work is supported by National Key Research and Development Program of China (No.2018AAA0102200), the National Natural Science Foundation of China (Grant No.61772116, No.61872064, No.62020106008), Sichuan Science and Technology Program (Grant No.2019JDTD0005), The Open Project of Zhejiang Lab (Grant No.2019KD0AB05) and Open Project of Key Laboratory of Artificial Intelligence, Ministry of Education (Grant No.AI2019005)." 
], [ "Layer Decoding", "Table REF gives the detailed information for the chosen layers of all models we test on.", "Den121 follows the implementation here: https://github.com/pytorch/vision/blob/master/torchvision/models/densenet.py,Den121 has four denseblocks, each of them has [6,12,24,16] denselayers.", "e.g., given layer 1, we extract the output of the 2nd denselayer of the first denseblock as the feature map, for layer 23, the feature map corresponding to the output of the 14nd denselayer of the fourth denseblock.", "Notably, layer 0 means that we extract the final output before the first block.", "The VGG19 model follows the implementation here:https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py.", "The complete layer array for VGG19 is [64, 64, 'M', 128, 128, 'M', 256, 256, 256, 256, 'M', 512, 512, 512, 512, 'M', 512, 512, 512, 512, 'M'], 'M' refers to the MaxPool2d layer, and each number corresponding to how many channels a 3x3 convolutional layer has.", "FC1, FC2, FC3 refer to the last three linear layers of VGG19.", "Some layers of the arrays are not chosen since perturbing images based on these layers is helpless.", "The Inc-v3 model follows the implementation here: https://github.com/pytorch/vision/blob/master/torchvision/models/inception.py.", "The layer array of Inc-v3 is [C1a3x3, C2a3x3, C2b3x3,M,C3b1x1, C4a3x3, M, Mi5b, Mi5c, Mi5d, Mi6a, Mi6b, Mi6c, Mi6d, Mi6e], 'M' refers to the MaxPool2d layer, others start with 'C' refers to convolutional layers, and the rest represents different inception blocks.", "The Res50 model follows the implementation here: https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py.", "Res50 has 4 layer groups, each layer group has 3, 4, 6, and 3 Bottlenecks, FC is the last linear layer of Res50.", "Layer 0 means that we choose the output of the first Bottlenecks from 1st layer group.", "Figure: Visualization of adversarial examples with Den121 as the white-box.", "Original class: goldfish, targeted class: axolotl.", "From left to right: Raw, MIFGSM, AA and 𝐏𝐀𝐀 𝐩 \\mathbf {PAA_p} (both on Den121[22])." ], [ "Visualization of Adversarial Examples", "Please refer to Figure REF .", "Obviously, $\\mathbf {PAA_p}$ obtains better performance and higher stealthiness than MIFGSM and AA." ], [ "Different Sample Strategies", "There are two strategies to sampling feature maps: pixel-wise and channel-wise.", "Given source and target feature maps $S^l\\in \\mathbb {R}^{{N_l}\\times {M_l}}$ , $T^l\\in \\mathbb {R}^{{N_l}\\times {M_l}}$ , by following different sample strategy, corresponding sample sets $\\Omega $ and $Z$ are obtained.", "Channel-wise strategy means that we sample from the feature maps channel-by-channel (i.e.", "$\\Omega \\!=\\!\\lbrace {s_{ i \\cdot }}\\rbrace _{i=1}^{N_l}$ , $Z\\!=\\!\\lbrace {t_{j \\cdot }}\\rbrace _{j=1}^{N_l}$ ).", "Point-wise strategy changes to pixel-by-pixel (i.e.", "$\\Omega \\!=\\!\\lbrace {s_{\\cdot i}}\\rbrace _{i=1}^{M_l}$ , $Z\\!=\\!\\lbrace {t_{\\cdot j}}\\rbrace _{j=1}^{M_l}$ ).", "Given Dense121 as the white-box model, the transferability results under different strategies are shown in Fig.", "REF .", "Obviously, the tSuc results of Channel-wise strategy don't vary with the choices of layers.", "And for all the layers we choose, Point-wise strategy always outperforms Channel-wise strategy.", "Thus in our experiments we adopt the Point-wise strategy." 
], [ "Demonstration of the Extracted Feature Maps from Inc-v3", "Convolution kernels do not perform a one-time transformation to produce results from the input.", "Instead, a small region of input is perceived iteratively, so that feature at every layer still holds a local structure similar to that of the input.", "As Figure REF demonstrates.", "This property of convolution leads to the translation homogeneity of intermediate feature maps.", "Therefore, measuring only the Euclidean distance between two feature maps will be inaccurate when there are translations, rotations etc.", "Table: Quantitative comparisons with the state-of-the-art attacks.", "All methods are attacked by choosing the 2nd2nd target class.", "For FGSM-based methods, we report the reproduced results; For Feature space attacks, the best results they ever met are presented.", "Specifically, optimal layers of four white-boxes are: Den121[23], VGG19[10], Inc-v3[8], Res50[11].", "Compared with settings with randomly selected target classes, the attack performance is much higher.", "Nevertheless, Ours still achieves the best performance in all cases, indicating the high reliable of our proposed methods.Table: Quantitative comparisons with other FGSM-based methods (NIFGSM, SIFGSM, and PIFGSM).", "Ours still achieves the best performance in all cases, indicating the highly reliable of our methods.Figure: tSuc, and tTR rates w.r.t.", "layer depth for multiple transfer scenarios.", "The figure is split into four phases: upper left, upper right, bottom left, bottom right, corresponding to black-box attacks transferring from Den121, Inc-v3, VGG19, Res50." ], [ "The Effectiveness of $\\mathbf {PAA}$ and {{formula:18f83560-2967-4b4b-9a55-3cd6af25375c}} under 2nd", "In this section, under $2nd$ , we craft the adversarial perturbations for different white-box models by using the features from their optimal layers, then evaluate the resultant adversarial examples on black-box models.", "Specifically, optimal layers of four white-boxes are: Den121[23], VGG19[10], Inc-v3[8], Res50[11], and Den121$\\rightarrow $ Res50 indicates that we firstly generate adversarial examples from Den121 and then transfer them to Res50.", "As shown in Table REF , we report the quantitative comparisons with the state-of-the-art attacks.", "Compared with settings that randomly select target labels, the attack performance is much higher.", "Nevertheless, Ours still achieves the best performance in all cases, indicating the high reliability of our proposed methods." 
], [ "Fluctuate Transfer Results under Random Targeted Class Selection", "For targeted class selection, there are two strategies: a) random sample, which is adopted in previous methods.", "b) choose by ranking.", "Given the prior knowledge that different targeted class involves different transfer difficulty.", "For instance, given an image of “cat”, it is easier to fool a model to predict it as a dog than an airplane.", "Under this circumstance, the random selection of targeted class will lead to fluctuating transfer results.", "With the random sample strategy, we study the transferability w.r.t.", "relative layer depth.", "All the experiments are under settings of $\\mathit {2nd}$ and $c\\!=\\!0$ .", "Involved methods containing $\\mathbf {PAA_\\ell }$ , $\\mathbf {PAA_p}$ , $\\mathbf {PAA_g}$ , $\\mathbf {GAA}$ , MIFGSM, TIFGSM, and AA.", "Given the white-box and black-box model pair, each subfigure of Figure REF illustrates performances under different metrics w.r.t.", "relative layer depth, obviously, it is hard to see the details.", "The randomness influences the transfer results a lot." ], [ "Comparisons with other FGSM-based Methods", "In this section, to further evaluate the effectiveness of our proposed methods ($\\mathbf {GAA}$ and $\\mathbf {PAA_p}$ ), given Res50 as the white-box model and using random sample strategy for target label selection, we report the comparisons with other FGSM-based methods, e.g., SIFGSM [17], NIFGSM [17] and PIFGSM [5] in Tabel REF .", "Notably, AA which integrates with MIFGSM is selected as our crucial baseline, and thus the direct FGSM-based competitor is naturally MIFGSM.", "Only when we modify AA, PAA, and GAA by replacing MIFGSM with other algorithms, e.g., NIFGSM, SIFGSM, and PIFGSM, would it be possible to compare with more state-of-the-arts, otherwise it would be unfair.", "For simplicity, we denote AA-NI, GAA-NI, and $\\mathbf {PAA_p\\text{-}NI}$ as the algorithms which replace MIFGSM with NIFGSM in AA, GAA and $\\mathbf {PAA_p}$ .", "Specifically, we firstly conduct attacks on the optimal layer of Res50 (Res50[11]) using different methods and then transfer the adversarial examples to black-box models, including Den121, Inc-v3, and VGG19.", "As Tabel REF demonstrates, after modifying AA by replacing MIFGSM with NIFGSM and PIFGSM, the new versions of AA achieve lower attack performance compared with original AA.", "As for SIFGSM, AA-SI obtains a slightly performance gain.", "Nevertheless, our adversarial examples have better transferability and our $\\mathbf {PAA_ps}$ still achieve the most powerful targeted attacks and $\\mathbf {GAAs}$ are the second.", "Specifically, our $\\mathbf {PAA_p}$ outperforms the FGSM-based methods by 3.12% at most and 1.46%, surpass the AAs by 4.48% at most and 1.06% on average, indicating the effectiveness of our statics alignment.", "Table: Transferability (tSuc) w.r.t.", "2𝑛𝑑\\mathit {2nd}, 10th10th, 100th100th, 500th500th, and 1000th1000th settings for other FGSM-based methods (e.g., NIFGSM, SIFGSM and PIFGSM).", "Our 𝐆𝐀𝐀\\mathbf {GAA} and 𝐏𝐀𝐀 𝐩 𝐬\\mathbf {PAA_ps} obtain noticeable performance gains compared with AA for all cases.", "𝐏𝐀𝐀 𝐩 𝐬\\mathbf {PAA_ps} achieve comparable performance to FGSM-based methods under 10th10th, and outperforms them by a large margin under 100th100th, 500th500th, and 1000th1000th setting.Table: Transferability (tTR) w.r.t.", "2𝑛𝑑\\mathit {2nd}, 10th10th, 100th100th, 500th500th, and 1000th1000th settings for other FGSM-based methods (e.g., NIFGSM, SIFGSM and PIFGSM).", "Our 
], [ "Transferability w.r.t. Target Labels on other FGSM-based Methods", "Following the setting in Appendix , except that the white-box model is changed to Den121, we report the results (tSuc and tTR) on the black-box model Res50 under $2nd$ , $10th$ , $100th$ , $500th$ , and $1000th$ for the optimal layer of Den121 (Den121[23]) in Table REF and Table REF .", "As demonstrated in the tables, formally, a lower ranking leads to worse performance.", "Intuitively, the distributions of target images from a lower-ranking class differ more from those of the source images, which leads to a larger discrepancy to align as well as lower performance.", "Compared with the feature space targeted attack method AA, our methods obtain noticeable performance gains on both tSuc and tTR, which shows the effectiveness of the proposed statistic alignment.", "Specifically, our $\\mathbf {PAA_p}$ variants obtain the best performance, and $\\mathbf {GAA}$ takes second place.", "Compared with FGSM-based methods, for tSuc, our $\\mathbf {PAA_p}$ variants achieve comparable performance under $10th$ and outperform them by a large margin under the $100th$ , $500th$ , and $1000th$ settings.", "For tTR, the $\\mathbf {PAA_p}$ variants surpass all FGSM-based methods by a large margin under the $10th$ , $100th$ , $500th$ , and $1000th$ settings.", "It is worth noting that FGSM-based methods perform better than ours under the $2nd$ setting.", "However, since $2nd$ refers to the easiest transfer difficulty, it is less applicable in the real world than the other settings, i.e., $10th$ , $100th$ , $500th$ , and $1000th$ .", "Investigating this phenomenon requires further theoretical study, which is not the main purpose of this paper." ] ]
2105.11645
[ [ "Dark photon portal into mirror world" ], [ "Abstract Dark photons and mirror matter are well-motivated dark matter candidates.", "It is possible that both of them arose during the compactification and symmetry breaking scenario of the heterotic $E_8\\times E_8$ string theory and are related to each other.", "In this case, dark photons can become a natural portal into the mirror world.", "Unfortunately, the expected magnitude of the induced interactions of ordinary matter with mirror matter is too small to be of phenomenological interest." ], [ "introduction", "There is overwhelming evidence for the existence of dark matter at all astrophysical length scales, from galactic to cosmological [1].", "At the same time, the true nature and composition of dark matter remain unknown.", "All evidence for the existence of dark matter so far is based on its gravitational effects.", "Many proposed models of dark matter assume other non-gravitational very weak interactions between dark matter particles and ordinary matter.", "This motivates dark matter direct-detection experiments [2], [3].", "The sensitivity of such experiments has improved tremendously.", "However, no clear experimental evidence for the existence of dark matter particles has been provided yet from these experiments.", "A generic feature of string theory is the prediction of extra $U(1)$ gauge factors beyond the Standard Model group [4].", "This fact makes dark photons well-motivated candidates for dark matter [5].", "These dark photons can kinetically mix with the Standard Model photon [6] and thus lead to potentially observable effects in astrophysical and cosmological phenomena, as well as in laboratory experiments [5], [7].", "Another well-motivated candidate for dark matter is mirror matter.", "The mirror partners of ordinary particles were first introduced by Lee and Yang in their famous paper [8] in an attempt to preserve the left-right symmetry of the world.", "Kobzarev, Okun and Pomeranchuk realized [9] that mirror particles could make up a hidden sector of the world, which communicates with the visible world mainly through gravity.", "The idea was rediscovered in the modern context of renormalizable gauge theories by Foot, Lew and Volkas [10] and revived in the context of neutrino oscillations in both mirror-symmetric [11] and mirror-asymmetric [12] forms.", "For recent reviews of the mirror matter theory and related references, see, for example, [13], [14], [15], [16].", "Mirror matter, like dark photons, can also be motivated by string theory.", "One of the heterotic string theories, which makes it possible to construct a coherent quantum theory that unifies all interactions, including gravity, is based on the group $E_8\\times E_8$ [17], [18].", "The second $E_8$ after compactification can lead to the existence of shadow matter, which interacts only gravitationally with ordinary matter [19].", "Mirror matter is a special case of shadow matter when the symmetry breaking patterns of the second $E_8$ exactly mirror the patterns of the first $E_8$ , which leads to an exciting possibility of a mirror world with invisible stars, planets and galaxies [9], [20].", "Since both dark photons and mirror matter may have their origins in string theory, it can be assumed that they are related in some way.", "In this short note, we present the simplest model of this type." 
], [ "Mirror world with dark photons", "Calabi-Yau compactifications of the heterotic $E_8\\times E_8$ superstring model may naturally lead to the gauge group $E_6$ in the visible sector [20], [21].", "Further symmetry breaking patterns such as $E_6\\rightarrow SO(10)\\times U(1)$ or $E_6\\rightarrow SU(5)\\times SU(2)\\times U(1)$ can introduce the gauge group $U(1)$ of dark photons into play.", "It is usually assumed that after Calabi-Yau compactification, the gauge group in the hidden sector is different from the visible sector gauge group, since, as noted in [19], a fully symmetric hidden sector contradicts observations, in particular Big Bang nucleosynthesis constraints.", "However, the limitations from the Big Bang nucleosynthesis can be avoided if the mirror sector has a lower temperature than the ordinary one, and an inflationary scenario can be envisaged that could explain the different initial temperatures of the two sectors [16].", "As a result, although the microphysics of these two sectors are identical, their macroscopic properties in relation to the most important epochs, such as baryogenesis, nucleosynthesis, etc., will be completely different [16].", "Therefore, we assume that it is possible to keep the mirror symmetry between the visible and hidden sectors unbroken after compactification and up to the electroweak scale.", "Then during, say, $E_6\\times E_6\\rightarrow (SO(10)\\times U(1))\\times (SO(10)\\times U(1))$ symmetry breaking stage the two $U(1)$ -s of the dark photon and mirror dark photon can get mixed, for example, through the Higgs portal [22].", "These considerations motivate to consider the following effective Lagrangian density ${\\cal {L}}=L+\\tilde{L}-\\frac{\\epsilon _2}{2}F_{b\\mu \\nu }\\tilde{F}^{b\\mu \\nu }-e_aJ_\\mu A^{a\\mu }-e_a\\tilde{J}_\\mu \\tilde{A}^{a\\mu },$ where tilde denotes mirror fields, $J_\\mu $ and $\\tilde{J}_\\mu $ are ordinary and mirror electric currents, and $\\begin{aligned}& L=-\\frac{1}{4}F_{a\\mu \\nu }F^{a\\mu \\nu }-\\frac{1}{4}F_{b\\mu \\nu }F^{b\\mu \\nu }+\\frac{1}{2}m_b^2A_{b\\mu } A^{b\\mu }-\\frac{\\epsilon _1}{2}\\,F_{a\\mu \\nu }F^{b\\mu \\nu }, \\\\& \\tilde{L}=-\\frac{1}{4}\\tilde{F}_{a\\mu \\nu }\\tilde{F}^{a\\mu \\nu }-\\frac{1}{4}\\tilde{F}_{b\\mu \\nu } \\tilde{F}^{b\\mu \\nu }+\\frac{1}{2}m_b^2\\tilde{A}_{b\\mu }\\tilde{A}^{b\\mu }-\\frac{\\epsilon _1}{2}\\,\\tilde{F}_{a\\mu \\nu }\\tilde{F}^{b\\mu \\nu }.\\end{aligned}$ The dark photon masses can arise, for example by Stückelberg mechanism [23].", "The Lagrangian (REF ) can be diagonalized in three steps.", "First, we introduce the fields $A_\\mu $ and $B^\\prime _\\mu $ along their mirror counterparts via $\\begin{aligned}& A_{a\\mu }=A_\\mu -\\frac{\\epsilon _1}{\\sqrt{1-\\epsilon _1^2}}\\,B^\\prime _\\mu ,\\;\\;\\;A_{b\\mu }=\\frac{1}{\\sqrt{1-\\epsilon _1^2}}\\,B^\\prime _\\mu , \\\\& \\tilde{A}_{a\\mu }=\\tilde{A}_\\mu -\\frac{\\epsilon _1}{\\sqrt{1-\\epsilon _1^2}}\\,\\tilde{B}^\\prime _\\mu ,\\;\\;\\;\\tilde{A}_{b\\mu }=\\frac{1}{\\sqrt{1-\\epsilon _1^2}}\\,\\tilde{B}^\\prime _\\mu .\\end{aligned}$ In terms of these fields Lagrangians (REF ) take the diagonal forms $L=-\\frac{1}{4}F_{\\mu \\nu }F^{\\mu \\nu }-\\frac{1}{4}B^\\prime _{\\mu \\nu }B^{\\prime \\mu \\nu }+\\frac{1}{2}\\,\\frac{m_b^2}{1-\\epsilon _1^2}\\,B^\\prime _{\\mu } B^{\\prime \\mu },\\;\\;\\tilde{L}=-\\frac{1}{4}\\tilde{F}_{\\mu \\nu }\\tilde{F}^{\\mu \\nu }-\\frac{1}{4}\\tilde{B}^\\prime _{\\mu \\nu }\\tilde{B}^{\\prime \\mu \\nu }+\\frac{1}{2}\\,\\frac{m_b^2}{1-\\epsilon 
However, the visible and hidden gauge fields still remain kinetically intermixed thanks to the $-\\frac{\\epsilon _2}{2(1-\\epsilon ^2_1)}B^\\prime _{\\mu \\nu }\\tilde{B}^{\\prime \\mu \\nu }$ term.", "This term can be diagonalized by the transformation $B^\\prime _{\\mu }=\\frac{1}{\\sqrt{2}}\\left(B^{\\prime \\prime }_{\\mu }+\\tilde{B}^{\\prime \\prime }_{\\mu }\\right),\\;\\;\\;\\tilde{B}^\\prime _{\\mu }=\\frac{1}{\\sqrt{2}}\\left(B^{\\prime \\prime }_{\\mu }-\\tilde{B}^{\\prime \\prime }_{\\mu }\\right).$ Being an orthogonal transformation, (REF ) does not spoil the diagonality of the sum of Lagrangians (REF ).", "However, we end up with incorrect coefficients of the kinetic terms of $B^{\\prime \\prime }_{\\mu }$ and $\\tilde{B}^{\\prime \\prime }_{\\mu }$ .", "To restore the correct normalization of these terms, we rescale the fields: $B^{\\prime \\prime }_{\\mu }=\\sqrt{\\frac{1-\\epsilon _1^2}{1-\\epsilon _1^2+\\epsilon _2}}\\,B_\\mu ,\\;\\;\\;\\tilde{B}^{\\prime \\prime }_{\\mu }=\\sqrt{\\frac{1-\\epsilon _1^2}{1-\\epsilon _1^2-\\epsilon _2}}\\,\\tilde{B}_\\mu .$ Finally, in terms of the mass-eigenstate physical fields (ordinary photon $A_\\mu $ , mirror photon $\\tilde{A}_\\mu $ , dark photon $B_\\mu $ and mirror dark photon $\\tilde{B}_\\mu $ ) our original Lagrangian density (REF ) takes the form ${\\cal {L}}=-\\frac{1}{4}F_{\\mu \\nu }F^{\\mu \\nu }-\\frac{1}{4}B_{\\mu \\nu }B^{\\mu \\nu }-\\frac{1}{4}\\tilde{F}_{\\mu \\nu }\\tilde{F}^{\\mu \\nu }-\\frac{1}{4}\\tilde{B}_{\\mu \\nu }\\tilde{B}^{\\mu \\nu }+\\frac{1}{2}\\,\\mu ^2B_\\mu B^\\mu +\\frac{1}{2}\\,\\tilde{\\mu }^2\\tilde{B}_\\mu \\tilde{B}^\\mu +{\\cal {L}}_{\\mathrm {int}}.$ Here, as usual, $F_{\\mu \\nu }=\\partial _\\mu A_\\nu -\\partial _\\nu A_\\mu $ , $B_{\\mu \\nu }=\\partial _\\mu B_\\nu -\\partial _\\nu B_\\mu $ , and $\\mu ^2=\\frac{m_b^2}{1-\\epsilon _1^2+\\epsilon _2},\\;\\;\\;\\tilde{\\mu }^2=\\frac{m_b^2}{1-\\epsilon _1^2-\\epsilon _2}.$ The most interesting piece is the interaction term ${\\cal {L}}_{\\mathrm {int}}=-e_aJ_\\mu \\left[A^\\mu -\\frac{\\epsilon _1}{\\sqrt{2}}\\left(\\frac{B^\\mu }{\\sqrt{1-\\epsilon _1^2+\\epsilon _2}}+\\frac{\\tilde{B}^\\mu }{\\sqrt{1-\\epsilon _1^2-\\epsilon _2}}\\right)\\right]-e_a\\tilde{J}_\\mu \\left[\\tilde{A}^\\mu -\\frac{\\epsilon _1}{\\sqrt{2}}\\left(\\frac{B^\\mu }{\\sqrt{1-\\epsilon _1^2+\\epsilon _2}}-\\frac{\\tilde{B}^\\mu }{\\sqrt{1-\\epsilon _1^2-\\epsilon _2}}\\right)\\right].$ As we can see, the effect of the $\\sim \\epsilon _2$ mixing term in (REF ) is twofold: it removes the degeneracy between the dark photon and mirror dark photon, resulting in the mass splitting $\\mu ^2-\\tilde{\\mu }^2=-\\frac{2\\epsilon _2\\,m_b^2}{(1-\\epsilon _1^2)^2-\\epsilon _2^2}\\approx -2\\,\\epsilon _2\\,m_b^2,$ and it interconnects the visible and hidden sectors (if $\\epsilon _2=0$ , the interconnection implied by (REF ) is unphysical and can be rotated away by an orthogonal transformation of the physical fields)."
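As a sanity check, the first diagonalization step can be verified symbolically by treating the field strengths as commuting scalars in the quadratic form; this is only our own consistency check of the algebra, not part of the original derivation.

```python
import sympy as sp

# Scalar stand-ins for the field strengths F_{a mu nu}, F_{b mu nu}:
# for checking the quadratic form this replacement is harmless.
Fa, Fb, e1 = sp.symbols('F_a F_b epsilon_1', real=True)
F, Bp = sp.symbols('F Bprime', real=True)

# Kinetic part of L, up to the overall factor -1/4:
kinetic = Fa**2 + Fb**2 + 2*e1*Fa*Fb

# Substitution induced by A_a = A - e1/sqrt(1-e1^2) B',
#                         A_b = B'/sqrt(1-e1^2)
sub = {Fa: F - e1/sp.sqrt(1 - e1**2)*Bp, Fb: Bp/sp.sqrt(1 - e1**2)}
print(sp.simplify(sp.expand(kinetic.subs(sub))))   # -> F**2 + Bprime**2
```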
], [ "Concluding remarks", "Previously, several different “portals” were considered that connect the ordinary and mirror worlds: photon portal (photon-mirror photon oscillations) [24], [25], neutrino portal [11], [12], [26], Higgs portal [27], axion portal [28], neutron portal (neutron-mirror neutron oscillations) [29].", "Even in the absence of all these portals, the visible and mirror sectors become interconnected (albeit very weakly) by quantum gravity effects [30].", "The dark photon portal proposed in this note is, in our opinion, very natural.", "Combining two dark entities is also helpful in explaining the dark matter mystery.", "For light dark photons to be the dominant part of dark matter, it is necessary to identify a well-motivated mechanism for their production.", "Although some production mechanisms have been proposed (misalignment mechanism, production due to fluctuations of the metric during inflation of the early universe, temperature-dependent instabilities in the hidden-sector (pseudo)scalar field coupled to a dark photon), they are not without difficulties [31].", "In the case of mirror matter, there are no such difficulties, and mirror baryons could easily provide the dominant component of dark matter in the universe [16].", "Dark photons and mirror matter can combine to form a multicomponent dark matter with properties as diverse as ordinary matter in the universe, and perhaps astrophysical observations do indicate diverse behavior of dark matter in galaxy cluster collisions [32].", "Dark photons and mirror matter help each other to stay hidden from direct experimental searches, such as experiments like XENON1T [33], or in the search for invisible decays of positronium [34].", "Indeed, it is clear from (REF ) that positronium can oscillate into mirror positronium through $B_\\mu $ or $\\tilde{B}_\\mu $ exchange.", "However, the amplitude of this transition is proportional to (assuming ultralight dark photons, $m_a\\ll m_e$ ) ${\\cal {A}}(Ps\\rightarrow \\widetilde{Ps})\\sim \\left(\\frac{\\epsilon _1}{\\sqrt{2}}\\,\\frac{1}{\\sqrt{1-\\epsilon _1^2+\\epsilon _2}}\\right)^2-\\left(\\frac{\\epsilon _1}{\\sqrt{2}}\\,\\frac{1}{\\sqrt{1-\\epsilon _1^2-\\epsilon _2}}\\right)^2=-\\frac{\\epsilon _1^2\\epsilon _2}{(1-\\epsilon _1^2)^2-\\epsilon _2^2}\\approx -\\epsilon _1^2\\epsilon _2.$ Bounds on $\\epsilon _1$ from the dark photon searches are quite tight [35].", "It is probably safe to assume that $|\\epsilon _1|<10^{-7}$ .", "Then (REF ) is unobservably small for any reasonable value of $\\epsilon _2$ .", "Contrariwise, with mirror matter at hand, there is no need to assume that dark photons represent a significant fraction of galactic dark matter.", "Therefore, dark photon direct-search experiments can only rely on intense photon sources such as the Sun.", "If the dark photon portal is a dominant mechanism connecting visible and hidden sectors, and if $\\epsilon _1$ and $\\epsilon _2$ are both too small, then unfortunately experiments to directly search for dark matter will have no more chance of success than finding black cat in a dark room." ] ]
2105.11814
[ [ "The Perturbed Prox-Preconditioned SPIDER algorithm for EM-based large\n scale learning" ], [ "Abstract Incremental Expectation Maximization (EM) algorithms were introduced to design EM for the large scale learning framework by avoiding the full data set to be processed at each iteration.", "Nevertheless, these algorithms all assume that the conditional expectations of the sufficient statistics are explicit.", "In this paper, we propose a novel algorithm named Perturbed Prox-Preconditioned SPIDER (3P-SPIDER), which builds on the Stochastic Path Integral Differential EstimatoR EM (SPIDER-EM) algorithm.", "The 3P-SPIDER algorithm addresses many intractabilities of the E-step of EM; it also deals with non-smooth regularization and convex constraint set.", "Numerical experiments show that 3P-SPIDER outperforms other incremental EM methods and discuss the role of some design parameters." ], [ "Introduction", "EM [1], [2] is a very popular computational tool, designed to solve non convex minimization problems on $\\mathbb {R}^d$ when the objective function is not explicit but defined by an integral $F(\\theta ) = -\\log \\int _\\mathsf {Z}G(z;\\theta ) \\mathrm {d}\\mu (z)$ .", "EM is a Majorize-Minimization algorithm which, based on the current value of the parameter $\\theta _\\mathrm {curr}$ , defines a majorizing function $\\theta \\mapsto Q(\\theta ;\\theta _\\mathrm {curr})$ through a Kullback-Leibler argument; then, the new point is chosen as the/a minimum of $Q(\\cdot ;\\theta _\\mathrm {curr})$ .", "The computation of $Q$ is straightforward when there exist (known and explicit) functions $\\mathsf {R},\\phi ,\\mathsf {s}$ such that $Q(\\cdot ; \\theta _\\mathrm {curr}) = \\mathsf {R}(\\cdot ) -\\left\\langle \\bar{\\mathsf {s}}(\\theta _\\mathrm {curr}),\\phi (\\cdot )\\right\\rangle $ and $\\bar{\\mathsf {s}}(\\tau ) \\propto \\int _\\mathsf {Z}\\mathsf {s}(z) G(z;\\tau ) \\mathrm {d}\\mu (z)$ is the expectation of the function $\\mathsf {s}$ with respect to (w.r.t.)", "the probability measure $G(\\cdot ; \\tau ) \\exp (-F(\\tau ))\\mathrm {d}\\mu $ .", "In these cases, the vector $\\bar{\\mathsf {s}}(\\theta _\\mathrm {curr})$ defines the function $Q$ .", "It may happen that the vector $\\bar{\\mathsf {s}}(\\theta _\\mathrm {curr})$ is not explicit (see e.g.", "[3]); a natural idea is to substitute $\\bar{\\mathsf {s}}$ for an approximation, possibly random.", "A first level of intractability occurs when the integral $\\bar{\\mathsf {s}}(\\theta _\\mathrm {curr})$ is not explicit.", "Many stochastic EM versions were proposed and studied to overcome this intractability: among them, let us cite Monte Carlo EM [4], [5] where $\\bar{\\mathsf {s}}$ is approximated by a Monte Carlo integration; and SA EM [6], [7] where $\\bar{\\mathsf {s}}$ is approximated by a Stochastic Approximation (SA) scheme [8].", "With the Big Data era, a second level of intractability occurred: EM applied to statistical learning evolved into online versions and large scale versions in order to minimize a loss function associated to a set of observations (also called examples).", "In large scale versions, the number of training data $n$ is too large to be processed at each iteration of EM: for example, when the majorizing function $Q$ of EM is of the form $Q(\\theta ; \\theta _\\mathrm {curr}) = \\mathsf {R}(\\theta ) -\\left\\langle \\bar{\\mathsf {s}}(\\theta _\\mathrm {curr}),\\phi (\\theta )\\right\\rangle $ , the vector $\\bar{\\mathsf {s}}(\\theta _\\mathrm {curr})$ often has the form $n^{-1} \\sum 
and the sum over $n$ terms cannot be computed at each iteration of EM.", "To overcome this intractability in this so-called finite-sum setting, incremental EM-based algorithms were proposed: let us cite incremental EM [9], Online-EM [10], sEM-vr [11], FIEM [12] (see also [13] for opt-FIEM) and SPIDER-EM [14], [15].", "The three algorithms sEM-vr, FIEM and SPIDER-EM can be seen as an Online-EM algorithm combined with a variance reduction technique through the construction of a control variate; they all improve on Online-EM (see e.g.", "[15]).", "However, these EM-based algorithms designed for the finite-sum framework all assume that the functions $\\theta \\mapsto \\bar{\\mathsf {s}}_i(\\theta )$ can be explicitly evaluated for any $\\theta $ and $i=1, \\cdots , n$ , even though they are defined as expectations.", "This paper introduces a novel EM-based procedure, named Perturbed Prox-Preconditioned SPIDER (3P-SPIDER), which tackles two difficulties: (i) the finite-sum setting; (ii) the intractability of the quantities $\\bar{\\mathsf {s}}_i(\\theta _\\mathrm {curr})$ .", "It is proved in [14] that the complexity bounds of SPIDER-EM, expressed as the number of optimization steps and as the number of evaluations of the quantities $\\bar{\\mathsf {s}}_i(\\theta _\\mathrm {curr})$ required to reach an $\\epsilon $ -approximate stationary point of $F$ , improve over the state of the art.", "Therefore, our algorithm builds on SPIDER-EM.", "It is also designed to address a composite problem with a non-smooth term.", "3P-SPIDER is introduced in Section , with an emphasis on the case where the quantities $\\bar{\\mathsf {s}}_i(\\theta _\\mathrm {curr})$ are approximated by a Monte Carlo sum.", "In Section , the algorithm is applied to the logistic regression problem; insights on the choice of some design parameters are also given.", "It is shown that this perturbed version of SPIDER-EM improves on the perturbed version of Online-EM, thus illustrating that the effect of the variance reduction technique is still perceptible.", "This benefit is all the more visible when the error in approximating the $\\bar{\\mathsf {s}}_i(\\theta _\\mathrm {curr})$ 's is small.", "Finally, since 3P-SPIDER combines two approximations to address the intractability of the $\\bar{\\mathsf {s}}_i(\\theta _\\mathrm {curr})$ 's and the finite-sum setting, it is advocated to regularly refresh the control-variate approximation with a full screening of the data set.", "The complexity analysis of this algorithm is provided in [16]: under conditions on the approximations of the $\\bar{\\mathsf {s}}_i(\\theta _\\mathrm {curr})$ 's, which are satisfied, for example, by a Monte Carlo approximation, it is shown that 3P-SPIDER has the same complexity bounds as SPIDER-EM.", "In that sense, it remains optimal among the (perturbed) incremental EM algorithms.", "Notations $\\mathbb {R}_+^\\star $ and ${\\mathbb {N}}^\\star $ denote respectively (resp.) the positive real line and the set of positive integers.", "For $n \\in {\\mathbb {N}}^\\star $ , set $[n]^\\star \\stackrel{\\mathrm {def}}{=}\\lbrace 1, \\cdots ,n\\rbrace $ and $[n] \\stackrel{\\mathrm {def}}{=}\\lbrace 0, \\cdots , n\\rbrace $ .", "For $x \\in \\mathbb {R}$ , $\\lceil x\\rceil $ is the smallest integer greater than or equal to $x$ .", "Vectors are column-vectors; for $a,b$ in $\\mathbb {R}^\\ell $ , $\\left\\langle a,b\\right\\rangle $ denotes the Euclidean scalar product, and $\\Vert a\\Vert $ the associated norm.", "For a matrix $A$ , $A^T$ and $A^{-1}$ are resp. its transpose and its inverse."
"$\\mathrm {I}_d$ is the $d \\times d$ identity matrix.", "The random variables are defined on a probability space $(\\Omega , \\mathcal {A},\\mathbb {P})$ ; $\\mathbb {E}$ denotes the associated expectation.", "For random variables $U,V$ , $\\mathbb {E}[U \\vert V]$ is the conditional expectation of $U$ given $V$ .", "For a smooth function $f$ , $\\nabla _x f$ (or simply $\\nabla f$ when clear from the context) is the gradient of $f$ with respect to the variable $x$ ; $\\nabla ^2 f$ is its Hessian.", "For a proper lower semi-continuous convex function $g$ and $x$ in its (assumed) non-empty domain, $\\partial g(x)$ is the subdifferential of $g$ at $x$ ." ], [ "The optimization problem", "We address the minimization of an objective function $F: \\Theta \\rightarrow \\mathbb {R}$ : $\\theta \\mapsto \\frac{-1}{n} \\sum _{i=1}^n \\log \\int _{\\mathsf {Z}} h_i(z)\\exp ( \\left\\langle \\mathsf {s}_i(z),\\phi (\\theta )\\right\\rangle ) \\mathrm {d}\\mu (z) + \\mathsf {R}(\\theta )$ where $\\Theta $ is an open subset of $\\mathbb {R}^d$ , $(\\mathsf {Z}, \\mathcal {Z})$ is a measurable space, $\\mathcal {Z}$ denoting a $\\sigma $ -algebra over $\\mathsf {Z}$ ; the functions $\\phi :\\Theta \\rightarrow \\mathbb {R}^q$ , $\\mathsf {R}: \\Theta \\rightarrow \\mathbb {R}$ and, for all $i \\in [n]^\\star $ , $\\mathsf {s}_i:\\mathsf {Z}\\rightarrow \\mathbb {R}^q$ and $ h_i: \\mathsf {Z}\\rightarrow \\mathbb {R}_+^\\star $ are measurable; and $\\mu $ is a dominating measure on $(\\mathsf {Z}, \\mathcal {Z})$ .", "The minimization of the negative log-likelihood in latent variable models provides examples of such a problem.", "As a first example, consider the maximum likelihood estimate of a mixture of densities from the curved exponential family (see e.g. [15] for the Gaussian mixture model).", "As a second example, consider the following logistic regression model: given $\\mathbb {R}^d$ -valued covariate vectors $\\lbrace X_i, i\\in [n]^\\star \\rbrace $ , for any $\\theta \\in \\Theta \\stackrel{\\mathrm {def}}{=}\\mathbb {R}^d$ , the binary observations $\\lbrace Y_i, i \\in [n]^\\star \\rbrace $ are independent with distribution $p_\\theta (y_i) \\propto \\int _{\\mathbb {R}^d} (1+\\exp (-y_i\\left\\langle X_i,z_i\\right\\rangle ))^{-1} \\\\ \\times \\exp \\left(-(2 \\sigma ^2)^{-1} \\Vert z_i- \\theta \\Vert ^2 \\right) \\mathrm {d}z_i \\;,$ for any $i \\in [n]^\\star $ , $y_i \\in \\lbrace -1,1\\rbrace $ .", "In words, each individual $\\# i$ in the training set has an individual predictor $Z_i$ .", "Given $Z_i$ , the success probability $\\mathbb {P}(Y_i = 1 \\mid Z_i)$ is $(1+\\exp (-\\left\\langle X_i,Z_i\\right\\rangle ))^{-1}$ .", "The individual predictors $Z_1,\\cdots , Z_n$ are assumed to have a Gaussian distribution with expectation $\\theta $ , assumed to be unknown, and (known) diagonal covariance matrix $\\sigma ^2 \\mathrm {I}_d$ .", "The ridge-regularized negative log-likelihood, given by $-n^{-1} \\sum _{i=1}^n \\log p_\\theta (Y_i) +\\tau \\Vert \\theta \\Vert ^2$ , may be written as (REF ) with $\\mathsf {Z}\\stackrel{\\mathrm {def}}{=}\\mathbb {R}$ , $\\phi (\\theta ) \\stackrel{\\mathrm {def}}{=}\\theta $ , $\\mathrm {d}\\mu (z) \\stackrel{\\mathrm {def}}{=}\\exp (- z^2/(2\\sigma ^2)) \\mathrm {d}z$ , $ h_i(z) & \\stackrel{\\mathrm {def}}{=}\\left( 1 + \\exp \\left(-Y_i \\Vert X_i\\Vert z \\right) \\right)^{-1} \\;,\\quad \\mathsf {s}_i(z) \\stackrel{\\mathrm {def}}{=}z \\, \\frac{X_i}{\\sigma ^2 \\Vert X_i\\Vert } \\;,\\\\ \\mathsf {R}(\\theta ) & \\stackrel{\\mathrm {def}}{=}\\frac{1}{2} \\theta ^T \\left(\\frac{1}{\\sigma ^2 n} \\sum _{i=1}^n \\frac{X_iX_i^T}{\\Vert X_i\\Vert ^2} + 2\\tau \\mathrm {I}_d \\right) \\theta \\;.$"
], [ "EM in the expectation space", "For solving this optimization problem, EM defines a sequence $\\lbrace \\theta _k, k \\ge 0 \\rbrace $ taking values in $\\Theta $ , by repeating (i) E-step: compute $Q(\\theta ; \\theta _k) \\stackrel{\\mathrm {def}}{=}- \\frac{1}{n} \\sum _{i=1}^n \\int _\\mathsf {Z}\\left\\langle \\mathsf {s}_i(z),\\phi (\\theta )\\right\\rangle \\, p_i(z; \\theta _k) \\mathrm {d}\\mu (z)+\\mathsf {R}(\\theta )$ where for any $z \\in \\mathsf {Z}$ , $\\theta \\in \\Theta $ , $i \\in [n]^\\star $ , $p_i(z;\\theta ) \\propto h_i(z) \\exp (\\left\\langle \\mathsf {s}_i(z),\\phi (\\theta )\\right\\rangle )$ is a probability density; (ii) M-step: compute the minimum $\\theta _{k+1} \\stackrel{\\mathrm {def}}{=}\\mathrm {argmin}_{\\theta \\in \\Theta } Q(\\theta ;\\theta _k), \\quad Q(\\theta ; \\theta _k) = \\mathsf {R}(\\theta ) -\\left\\langle \\bar{\\mathsf {s}}(\\theta _k),\\phi (\\theta )\\right\\rangle ,$ with $\\bar{\\mathsf {s}}(\\theta ) \\stackrel{\\mathrm {def}}{=}\\frac{1}{n} \\sum _{i=1}^n \\bar{\\mathsf {s}}_i(\\theta ),\\qquad \\bar{\\mathsf {s}}_i(\\theta ) \\stackrel{\\mathrm {def}}{=}\\int _\\mathsf {Z}\\mathsf {s}_i(z) \\, p_i(z;\\theta )\\mathrm {d}\\mu (z) \\;.$ Hereafter, we assume that for any $\\theta _\\mathrm {curr}\\in \\Theta $ , $\\theta \\mapsto Q(\\theta ;\\theta _\\mathrm {curr})$ possesses a unique minimum and we define for any $s$ in a closed convex set $\\mathcal {S}\\supseteq \\bar{\\mathsf {s}}(\\Theta )$ , $\\mathsf {T}(s) \\stackrel{\\mathrm {def}}{=}\\mathrm {argmin}_{\\theta \\in \\Theta } \\left( \\mathsf {R}(\\theta ) -\\left\\langle s,\\phi (\\theta )\\right\\rangle \\right) \\;.$ With these notations, it holds: $\\theta _{k+1} = \\mathsf {T}\\circ \\bar{\\mathsf {s}}(\\theta _k)$ .", "In the logistic regression example, $p_i(z; \\theta ) \\mathrm {d}\\mu (z)$ is the a posteriori distribution of the hidden variable $Z_i$ given the observation $Y_i$ ; $q=d$ ; $\\theta \\mapsto \\mathsf {R}(\\theta ) -\\left\\langle s,\\phi (\\theta )\\right\\rangle $ possesses a unique minimum; for any $s \\in \\mathcal {S}\\stackrel{\\mathrm {def}}{=}\\mathbb {R}^d$ , $\\mathsf {T}(s) = \\Omega s$ where $\\Omega \\stackrel{\\mathrm {def}}{=}\\left( \\frac{1}{\\sigma ^2 n}\\sum _{i=1}^n \\frac{X_iX_i^T}{\\Vert X_i\\Vert ^2} + 2 \\tau \\mathrm {I}_d\\right)^{-1} \\;.$ When such a map $\\mathsf {T}$ exists, it is well known that EM can be equivalently defined in the expectation space: the computation of the $\\Theta $ -valued sequence $\\lbrace \\theta _k, k \\ge 0\\rbrace $ through $\\theta _{k+1}=\\mathsf {T}\\circ \\bar{\\mathsf {s}}(\\theta _k)$ is equivalent to the computation of the $\\bar{\\mathsf {s}}(\\Theta )$ -valued sequence $\\lbrace s_k, k \\ge 0\\rbrace $ through $s_{k+1} = \\bar{\\mathsf {s}}\\circ \\mathsf {T}(s_k)$ .", "The limiting points of these sequences are resp. the roots of $\\theta \\mapsto \\mathsf {T}\\circ \\bar{\\mathsf {s}}(\\theta ) - \\theta $ and $s \\mapsto \\bar{\\mathsf {s}}\\circ \\mathsf {T}(s) -s$ (see e.g. [7]).", "Hereafter, we will see EM as an algorithm in the expectation space: EM is an iterative procedure designed to find the roots of the mean field $\\mathsf {h}: \\mathcal {S}\\rightarrow \\mathbb {R}^q$ , $\\mathsf {h}(s) \\stackrel{\\mathrm {def}}{=}\\bar{\\mathsf {s}}\\circ \\mathsf {T}(s) -s = \\frac{1}{n} \\sum _{i=1}^n\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(s) -s \\;.$
In the large scale learning setting, $\\bar{\\mathsf {s}}$ has a prohibitive computational cost since it involves a sum over the full data set of size $n$ : EM cannot be applied exactly.", "A popular alternative in the literature is to replace EM iterations with SA iterations, where the SA algorithm is designed to find the roots of $\\mathsf {h}$ [7].", "3P-SPIDER is in the same vein." ], [ "The Perturbed Prox-Preconditioned SPIDER algorithm", "Given a sequence of positive step sizes $\\lbrace \\gamma _k, k \\ge 0\\rbrace $ , SA defines a sequence $\\lbrace \\widehat{S}_k, k \\ge 0 \\rbrace $ such that $\\widehat{S}_{k+1} = \\widehat{S}_k + \\gamma _{k+1} H_{k+1}$ where $H_{k+1}$ is an approximation of $\\mathsf {h}(\\widehat{S}_k)$ .", "Observing that $\\bar{\\mathsf {s}}(\\theta ) = \\mathbb {E}\\left[\\bar{\\mathsf {s}}_I(\\theta )\\right]$ for some $[n]^\\star $ -valued uniform random variable $I$ , a natural idea to mimic the asymptotic behavior of EM is the definition $H_{k+1} \\stackrel{\\mathrm {def}}{=}\\frac{1}{\\mathsf {b}} \\sum _{i \\in \\mathcal {B}_{k+1}}\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(\\widehat{S}_k) - \\widehat{S}_k$ where $\\mathcal {B}_{k+1}$ is a batch of size $\\mathsf {b}$ sampled uniformly from $[n]^\\star $ (with or without replacement) and independently of $\\widehat{S}_{k}$ .", "Such a strategy corresponds to the Online-EM algorithm.", "The incremental EM-based algorithms with variance reduction techniques use the property $\\mathsf {h}(\\widehat{S}_k) = \\mathbb {E}\\left[H_{k+1} + V \\vert \\widehat{S}_k \\right]$ for any (conditionally) centered random variable $V$ .", "This implies that, thanks to an adequate construction of the control variate $V$ , the variance of the approximation of $\\mathsf {h}(\\widehat{S}_k)$ can be reduced (see e.g. [17] for an introduction to variance reduction methods in Monte Carlo sampling).", "This is the essence of sEM-vr, FIEM and SPIDER-EM, which essentially differ in the definition of $V$ .", "3P-SPIDER is described in Algorithm REF .", "As in SPIDER-EM, the control variate is refreshed regularly, let us say at the beginning of each outer loop $\\# t$ (see lines REF and REF ).", "In SPIDER-EM, it is defined as $\\bar{\\mathsf {s}}\\circ \\mathsf {T}(\\widehat{S}_{t,-1}) = n^{-1} \\sum _{i=1}^n\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(\\widehat{S}_{t,-1})$ .", "Here, two perturbations are allowed: the approximation of $\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(\\widehat{S}_{t,-1})$ with a quantity denoted by $\\hat{\\mathsf {s}}^{t,-1}_i$ , and an error $\\mathcal {E}_t$ which may account, for example, for the situation where a sub-sample of the $n$ examples is used to compute the sum instead of the full data set.", "At each inner loop $\\# (k+1)$ , the control variate is modified in order to track the ideal quantity $\\bar{\\mathsf {s}}\\circ \\mathsf {T}(\\widehat{S}_{t,k})$ : note indeed that $\\mathsf {S}_{t,0} \\approx \\bar{\\mathsf {s}}\\circ \\mathsf {T}(\\widehat{S}_{t,-1})$ and, from line REF , $\\mathsf {S}_{t,k+1} - \\mathsf {S}_{t,k} \\approx \\bar{\\mathsf {s}}\\circ \\mathsf {T}(\\widehat{S}_{t,k}) - \\bar{\\mathsf {s}}\\circ \\mathsf {T}(\\widehat{S}_{t,k-1})$ .", "The sequence of interest $\\lbrace \\widehat{S}_{t,k}, t \\in [k_\\mathrm {out}]^\\star , k \\in [k_\\mathrm {in}] \\rbrace $ is updated first by a SA step (see Line REF ) followed by a proximal step (see Line REF ).
", "In the SA step, the mean field $\\mathsf {h}(\\widehat{S}_{t,k}) = \\bar{\\mathsf {s}}\\circ \\mathsf {T}(\\widehat{S}_{t,k}) - \\widehat{S}_{t,k}$ is approximated with (see Lines REF and REF ) $H_{k+1} \\stackrel{\\mathrm {def}}{=}\\frac{1}{\\mathsf {b}} \\sum _{i \\in \\mathcal {B}_{t,k+1}}\\hat{\\mathsf {s}}_i^{t,k} +V_{k+1} - \\widehat{S}_{t,k}$ where $V_{k+1} \\stackrel{\\mathrm {def}}{=}\\mathsf {S}_{t,k} - \\mathsf {b}^{-1} \\sum _{i \\in \\mathcal {B}_{t,k+1}} \\hat{\\mathsf {s}}_i^{t,k-1}$ is a control variate.", "Here again, $\\mathcal {B}_{t,k+1}$ is a batch of size $\\mathsf {b}$ sampled from $[n]^\\star $ , with or without replacement and independently of the past of the algorithm.", "The proximal step in lines REF and REF is a novelty (with respect to SPIDER-EM) introduced to force the path of the algorithm $\\lbrace \\widehat{S}_{t,k}, t \\in [k_\\mathrm {out}]^\\star , k \\in [k_\\mathrm {in}] \\rbrace $ to remain in the set $\\mathcal {S}$ and possibly to inherit other properties from an adequate definition of $g$ (see Section  for an example).", "The proof of the convergence in expectation of the algorithm (see [16]) relies on the observation that the algorithm (REF ) is a perturbed preconditioned-gradient method: by setting $\\operatorname{W}(s) \\stackrel{\\mathrm {def}}{=}F \\circ \\mathsf {T}(s)$ , we have under regularity assumptions on the functions $\\phi , \\mathsf {s}, \\mathsf {R}$ that $\\nabla \\operatorname{W}(s) = - B(s) \\mathsf {h}(s)$ for any $s \\in \\mathcal {S}$ , where (see e.g. [13]) $B(s) \\stackrel{\\mathrm {def}}{=}\\left( \\nabla \\mathsf {T}(s)\\right)^T \\,\\nabla ^2_\\theta \\left(\\mathsf {R}(\\theta ) - \\left\\langle s,\\phi (\\theta )\\right\\rangle \\right) \\vert _{\\theta = \\mathsf {T}(s)} \\ \\left( \\nabla \\mathsf {T}(s)\\right) \\;,$ is a positive-definite matrix.", "Therefore, given a lower semi-continuous proper convex function $g: \\mathcal {S}\\rightarrow \\mathbb {R}\\cup \\lbrace +\\infty \\rbrace $ , we use a weighted proximal operator defined by $\\mathrm {Prox}_{B,\\gamma g}(s^{\\prime }) \\stackrel{\\mathrm {def}}{=}\\mathrm {argmin}_{s \\in \\mathcal {S}} \\left(\\gamma g(s) +\\frac{1}{2} (s-s^{\\prime })^T B (s-s^{\\prime })\\right)$ for any $\\gamma >0$ and any $q \\times q$ positive-definite matrix $B$ .", "3P-SPIDER extends SPIDER-EM in the following directions.", "First, in the definition of the control variates $\\mathsf {S}_{t,k}$ , it allows the intractable $\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(\\widehat{S}_{t,k})$ to be replaced with an approximation $\\hat{\\mathsf {s}}^{t,k}_i$ .", "Second, it adds a proximal step in order to force the sequence $\\lbrace \\widehat{S}_{t,k}, t \\in [k_\\mathrm {out}]^\\star , k \\in [k_\\mathrm {in}]\\rbrace $ to have some properties (see [18], [19] for a similar idea applied to the SPIDER algorithm, with $B(s) =\\mathrm {I}_d$ ).", "Finally, it allows a perturbation $\\mathcal {E}_t$ when initializing the control variate $\\mathsf {S}_{t,0}$ .", "Algorithm: The Perturbed Prox-Preconditioned SPIDER (3P-SPIDER) algorithm.", "Input: $k_\\mathrm {out}, k_\\mathrm {in}\\in {\\mathbb {N}}^\\star $ ; $\\widehat{S}_\\mathrm {init}\\in \\mathcal {S}$ ; $\\gamma _{t,0} \\ge 0$ , $\\gamma _{t,k} >0$ for $t \\in [k_\\mathrm {out}]^\\star $ , $k \\in [k_\\mathrm {in}]^\\star $ ; a lower semi-continuous proper convex function $g$ .", "Output: the 3P-SPIDER sequence $\\lbrace \\widehat{S}_{t,k}, t\\in [k_\\mathrm {out}]^\\star , k \\in [k_\\mathrm {in}]\\rbrace $ .", "1: $\\widehat{S}_{1,0} = \\widehat{S}_{1,-1} =\\widehat{S}_\\mathrm {init}$ .", "2: $\\mathsf {S}_{1,0} = n^{-1} \\sum _{i=1}^n \\hat{\\mathsf {s}}^{1,-1}_i+\\mathcal {E}_1$ .", "3: For $t=1, \\cdots ,k_\\mathrm {out}$ :", "4: For $k=0, \\ldots ,k_\\mathrm {in}-1$ :", "5: Sample a mini batch $\\mathcal {B}_{t,k+1}$ of size $\\mathsf {b}$ in $[n]^\\star $ .", "6: $\\mathsf {S}_{t,k+1} =\\mathsf {S}_{t,k} + \\mathsf {b}^{-1} \\sum _{i \\in \\mathcal {B}_{t,k+1}} \\left(\\hat{\\mathsf {s}}_i^{t,k} - \\hat{\\mathsf {s}}_i^{t,k-1} \\right)$ .", "7: $\\widehat{S}_{t,k+1/2} = \\widehat{S}_{t,k} +\\gamma _{t,k+1} \\left( \\mathsf {S}_{t,k+1} - \\widehat{S}_{t,k}\\right)$ .", "8: $\\widehat{S}_{t,k+1} =\\mathrm {Prox}_{B(\\widehat{S}_{t,k}), \\gamma _{t,k+1} g}\\left( \\widehat{S}_{t,k+1/2}\\right)$ .", "9: End for.", "10: $\\widehat{S}_{t+1,-1} =\\widehat{S}_{t,k_\\mathrm {in}}$ .", "11: $\\mathsf {S}_{t+1,0}= n^{-1} \\sum _{i=1}^n\\hat{\\mathsf {s}}_i^{t+1,-1} + \\mathcal {E}_{t+1}$ .", "12: $\\widehat{S}_{t+1,-1/2} = \\widehat{S}_{t+1,-1} + \\gamma _{t+1,0} \\left(\\mathsf {S}_{t+1,0} - \\widehat{S}_{t+1,-1} \\right)$ .", "13: $\\widehat{S}_{t+1,0} =\\mathrm {Prox}_{B(\\widehat{S}_{t+1,-1}), \\gamma _{t+1,0}g}(\\widehat{S}_{t+1,-1/2})$ .", "14: End for.
", "In [20] (see also [16]), the convergence in expectation of the sequence $\\lbrace \\widehat{S}_{t,k}, t \\in [k_\\mathrm {out}]^\\star , k \\in [k_\\mathrm {in}] \\rbrace $ towards the set $\\mathcal {L} &\\stackrel{\\mathrm {def}}{=}\\lbrace s: \\mathrm {Prox}_{B(s), \\gamma g}(s + \\gamma \\mathsf {h}(s)) = s\\rbrace \\qquad \\forall \\gamma >0 \\;,\\\\ &= \\lbrace s: 0 \\in \\partial g(s) -B(s) \\mathsf {h}(s) \\rbrace = \\lbrace s: 0 \\in \\partial g(s) + \\nabla \\operatorname{W}(s) \\rbrace $ is proved.", "In the case where $g$ is the indicator function of a closed convex set $\\mathcal {K}$ and $B(s)$ is invertible for any $s \\in \\mathcal {K} \\cap \\mathcal {S}$ , the limiting points are the roots of $\\nabla \\operatorname{W}$ that lie in $\\mathcal {K}$ , that is, the roots of $\\mathsf {h}$ in $\\mathcal {K}$ : 3P-SPIDER has the same asymptotic behavior as EM."
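To fix ideas, here is a minimal Python sketch of the loop structure, under simplifying assumptions that are ours and not in [16]: the perturbations $\mathcal{E}_t$ are omitted, $\gamma_{t,0}=0$ so that $\widehat{S}_{t,0}=\widehat{S}_{t,-1}$, and a constant step size is used; `approx_stat` and `prox` are user-supplied callables standing for $\hat{\mathsf{s}}_i$ and $\mathrm{Prox}_{B(\cdot),\gamma g}$.

```python
import numpy as np

def three_p_spider(s_init, approx_stat, prox, n, k_out, k_in, b, gamma,
                   seed=0):
    """Simplified sketch of 3P-SPIDER (our own rendering of Algorithm 1).
    approx_stat(i, s): approximation of \\bar s_i o T(s);
    prox(s, gamma):    the weighted proximal map Prox_{B(s), gamma g}."""
    rng = np.random.default_rng(seed)
    s_cur = np.asarray(s_init, dtype=float).copy()
    for t in range(k_out):
        s_prev = s_cur                      # \hat S_{t,0} = \hat S_{t,-1}
        # refresh the control variate with a full pass over the n terms
        S = np.mean([approx_stat(i, s_cur) for i in range(n)], axis=0)
        for k in range(k_in):
            batch = rng.choice(n, size=b, replace=True)
            # SPIDER-type update of the control variate
            S = S + np.mean([approx_stat(i, s_cur) - approx_stat(i, s_prev)
                             for i in batch], axis=0)
            # SA half-step followed by the weighted proximal step
            s_prev, s_cur = s_cur, prox(s_cur + gamma * (S - s_cur), gamma)
    return s_cur
```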
], [ "Case of a Monte Carlo approximation", "The intractable quantity $\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(s)$ is defined by $\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(s) \\stackrel{\\mathrm {def}}{=}\\int _\\mathsf {Z}\\mathsf {s}_i(z) p_i(z; \\mathsf {T}(s))\\mathrm {d}\\mu (z) \\;,$ where $p_i(z; \\mathsf {T}(s)) \\, \\mathrm {d}\\mu (z)$ is the distribution defined by (REF ).", "When this integral is not explicit, a natural idea is to approximate it by a Monte Carlo (MC) sum.", "For example, $\\hat{\\mathsf {s}}^{t,k}_i \\stackrel{\\mathrm {def}}{=}\\frac{1}{m_{t,k+1}} \\sum _{r=1}^{m_{t,k+1}}\\mathsf {s}_i\\left( Z_{r}^{i,t,k} \\right) \\;,$ where for $t \\in [k_\\mathrm {out}]^\\star , k \\in [k_\\mathrm {in}]$ and $i \\in [n]^\\star $ , $\\lbrace Z_r^{i,t,k}, r \\ge 1 \\rbrace $ is a Markov chain designed to be ergodic with unique invariant distribution $p_i(z; \\mathsf {T}(\\widehat{S}_{t,k})) \\mathrm {d}\\mu (z)$ .", "Such a chain can be obtained by running a Markov chain Monte Carlo sampler (see e.g.", "[21], [22]); note that the independent and identically distributed (i.i.d) setting is a special case of the Markovian setting.", "When the random variables $\\lbrace Z_r^{i,t,k}, r \\ge 1 \\rbrace $ are i.i.d., we have $\\mathbb {E}\\left[\\hat{\\mathsf {s}}^{t,k}_i \\vert \\widehat{S}_{t,k} \\right] = \\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(\\widehat{S}_{t,k})$ ; when the random variables $\\lbrace Z_r^{i,t,k}, r \\ge 1\\rbrace $ are a Markov chain, the approximation is biased: $\\mathbb {E}\\left[\\hat{\\mathsf {s}}^{t,k}_i \\vert \\widehat{S}_{t,k} \\right] \\ne \\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(\\widehat{S}_{t,k})$ .", "In this biased case, the algorithm still converges to $\\mathcal {L}$ but its theoretical analysis is more technical (see [20])." 
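As an illustration only (names and signatures are ours, not the paper's), the Monte Carlo approximation above amounts to the following small helper; with i.i.d. draws the estimate of $\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(\\widehat{S}_{t,k})$ is unbiased, while feeding it successive states of an MCMC chain with the right invariant distribution yields the biased variant discussed above.

```python
import numpy as np

def mc_statistic(s_i, sampler, m, rng):
    r"""Monte Carlo approximation \hat{s}_i^{t,k} = m^{-1} sum_r s_i(Z_r) (sketch).

    s_i     -- callable z -> s_i(z), the complete-data statistic of example i
    sampler -- callable rng -> one draw Z_r; either i.i.d. from p_i(z; T(S)) d mu(z)
               (unbiased estimate) or successive states of an MCMC chain with that
               invariant distribution (biased estimate, as discussed above)
    m       -- number of Monte Carlo points m_{t,k+1}
    """
    return np.mean([np.asarray(s_i(sampler(rng))) for _ in range(m)], axis=0)
```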
], [ "Application: Inference in the Logistic Regression Model", "Let us consider the logistic regression model described in Section REF .", "Since $\\mathbb {P}(Y_i = y_i) \\le 1$ , it can be proved that the minima of $F$ are in the set $\\lbrace \\theta \\in \\mathbb {R}^d:\\tau \\Vert \\theta \\Vert ^2 \\le \\ln 4 \\rbrace $ .", "Therefore, in the expectation space, the minima of $\\operatorname{W}= F \\circ \\mathsf {T}$ are in the set $\\lbrace s \\in \\mathbb {R}^d:\\tau s^T \\Omega ^2 s \\le \\ln 4 \\rbrace $ which is included in $\\mathcal {K}\\stackrel{\\mathrm {def}}{=}\\lbrace s \\in \\mathbb {R}^d: \\tau s^T \\Omega s \\le \\ln 4 /\\lambda _{\\min }\\rbrace $ where $\\lambda _{\\min }$ is the positive minimal eigenvalue of $\\Omega $ (see (REF )).", "3P-SPIDER is applied with $g \\stackrel{\\mathrm {def}}{=}\\chi _{\\mathcal {K}}$ , the characteristic function of the compact convex set $\\mathcal {K}$ .", "With this definition of $\\mathcal {K}$ and since $B(s) = \\Omega $ for any $s \\in \\mathcal {S}\\stackrel{\\mathrm {def}}{=}\\mathbb {R}^d$ , the computation of the operator $\\mathrm {Prox}_{B(s), \\gamma g}$ is explicit.", "$\\bar{\\mathsf {s}}_i(\\Omega s)$ is equal to $\\frac{X_i}{\\sigma ^2 \\Vert X_i\\Vert } \\frac{1}{Z(s)} \\, \\int _\\mathbb {R}z \\,\\frac{\\exp (z \\left\\langle X_i,\\Omega s\\right\\rangle /(\\sigma ^2 \\Vert X_i\\Vert ))}{1+ \\exp (- Y_i\\Vert X_i\\Vert z)} \\, \\exp (\\frac{-z^2}{2 \\sigma ^2}) \\mathrm {d}z$ where the normalizing constant $Z(s)$ is given by $Z(s) \\stackrel{\\mathrm {def}}{=}\\int _\\mathbb {R}\\frac{\\exp (z \\left\\langle X_i,\\Omega s\\right\\rangle /(\\sigma ^2\\Vert X_i\\Vert ))}{1+ \\exp (- Y_i \\Vert X_i\\Vert z)} \\, \\exp (-z^2/(2 \\sigma ^2)) \\mathrm {d}z \\;.$ These integrals are not explicit; we consider the approximation $\\hat{\\mathsf {s}}_i^{t,k}$ of $\\bar{\\mathsf {s}}_i(\\Omega \\widehat{S}_{t,k})$ given by an MC sum as described in Section REF ; the samples $Z_r^{i,t,k}$ are obtained by the Gibbs sampler given in [23].", "The numerical illustrations use the MNIST data set: the class \"1\" contains the $12 \\, 873$ images in the training set labeled 1 and 3 and the class \"$-1$ \" contains the $12 \\, 116$ images in the training set labeled 7 and 8; hence $n=24 \\, 989$ .", "The 784 pixels are compressed into 50 features through PCA (see [13] for the details).", "An intercept is included in the covariates so $d=51$ .", "3P-SPIDER is run with $\\sigma ^2=0.1$ , $\\tau =1$ , $k_\\mathrm {out}= 20$ , $k_\\mathrm {in}= \\lceil \\sqrt{n}/10\\rceil =16$ and $\\mathsf {b}= \\lceil 10 \\sqrt{n} \\rceil =1581$ .", "Note that $k_\\mathrm {in}\\times \\mathsf {b}\\approx n$ so that each outer loop requires about $n$ examples; it corresponds to an epoch.", "We study the quantity $\\mathcal {D}_{t,k} \\stackrel{\\mathrm {def}}{=}\\mathbb {E}\\left[ \\frac{\\Vert \\widehat{S}_{t,k+1} -\\widehat{S}_{t,k}\\Vert ^2}{\\gamma _{t,k+1}^2} \\right], \\qquad t \\in [k_\\mathrm {out}]^\\star , k \\in [k_\\mathrm {in}-1],$ which quantifies how far 3P-SPIDER is from its limiting set (see the definition of $\\mathcal {L}$ ); this expectation is estimated by an MC sum over 25 independent runs.", "All the runs start from the same value $\\widehat{S}_\\mathrm {init}$ .", "For the computation of the quantity $\\mathsf {S}_{t,0}$ at each outer loop $\\# t$ , all the examples are used and the expectations $\\bar{\\mathsf {s}}_i \\circ \\mathsf {T}(\\widehat{S}_{t,-1})$ are approximated by an MC sum with $m^{\\prime } = 10 \\lceil \\sqrt{n} \\rceil = 1590$ 
points.", "Hence, $\\mathcal {E}_t=0$ except when specified (see the second analysis below); the computation of $\\mathsf {S}_{t,0}$ requires $n$ examples: it corresponds to an epoch.", "First, 3P-SPIDER is run with $m_{t,k} = 2 \\lceil \\sqrt{n}\\rceil $ and $\\gamma _{t,k} = \\gamma _{t,0} = 0.1$ ; Online-EM is run with a step size equal to $\\gamma _t = 0.1$ , a batch size $\\mathsf {b}=\\lceil 10 \\sqrt{n} \\rceil $ (case \"sqr\") and $\\mathsf {b}=n$ (case \"full\"), and an MC approximation for $\\bar{\\mathsf {s}}_i$ computed with $2 \\lceil \\sqrt{n} \\rceil $ points.", "Figure REF (a) displays $\\mathcal {D}_{t,0}$ for 3P-SPIDER and $\\Vert \\widehat{S}_{t+1} -\\widehat{S}_{t}\\Vert ^2 /\\gamma _{t}^2$ for Online-EM.", "The x-axis scales as the number of epochs, that is, the use of $n$ examples.", "The plot shows that, even when the expectations $\\bar{\\mathsf {s}}_i$ have to be replaced with approximations, 3P-SPIDER is far more efficient than Online-EM (in which the exact expectations are also replaced with MC approximations).", "Second, we analyze the role of $\\mathcal {E}_t$ when initializing the control variate $\\mathsf {S}_{t,0}$ .", "We run 3P-SPIDER with $\\gamma _{t,k}= \\gamma _{t,0}= 0.1$ and a number of MC points $m_{t,k} = 2 \\lceil \\sqrt{n} \\rceil $ ; the quantity $\\mathcal {D}_{t,k}$ is displayed on Figure REF (b) vs the cumulated number of inner loops; the squares, circles and diamonds indicate $\\mathcal {D}_{t,k_\\mathrm {in}}$ for every outer loop.", "The case \"full\" corresponds to $\\mathcal {E}_t=0$ , the case \"half\" (resp.", "\"quarter\") corresponds to $\\mathsf {S}_{t,0}$ computed with a batch of size $\\lceil n/2 \\rceil $ examples (resp.", "$\\lceil n/4 \\rceil $ ).", "The control variate is too poor in the cases \"half\" and \"quarter\" and, after the transient phase when the possibly bad initialization is forgotten, it weakens the benefit of its use: we definitely advise $\\mathcal {E}_t =0$ .", "Third, we analyze how the variability of the MC approximation and the choice of the step sizes affect the rate of convergence of 3P-SPIDER.", "In \"Case 1\", the values are the same as in Figure REF (a).", "In \"Case 2\", $\\gamma _{t,k} = \\gamma _{t,0} = 0.1$ during the first three outer loops and then $\\gamma _{t,k} = \\gamma _{t,0}=10^{-3}$ ; $m_{t,k}$ is as in \"Case 1\" until the outer loop $\\# 10$ and then $m_{t,k}$ is multiplied by 5.", "In \"Case 3\", the step sizes and the number of MC points are as in \"Case 2\", except that the step size decreases later, at outer loop $\\# 6$ .", "On Figure REF (c), we display $\\mathcal {D}_{t,k}$ vs the cumulated number of inner loops, starting from the inner loop $\\# 32$ (that is, at the end of the second outer loop); the diamonds, circles and squares indicate $\\mathcal {D}_{t,k_\\mathrm {in}}$ .", "First, 3P-SPIDER is improved when the number of MC points increases; when the fluctuations of the algorithm are of the same order as the fluctuations of the MC errors, 3P-SPIDER cannot make further progress toward a more precise estimation of the parameter (compare \"Case 1\" and \"Case 3\").", "Small step sizes penalize the algorithm (compare \"Case 1\" and \"Case 2\").", "Finally, we discuss the strategies $\\gamma _{t,0} = 0$ and $\\gamma _{t,0}\\ne 0$ .", "3P-SPIDER run as in Figure REF (a) corresponds to \"Case 1\".", "In \"Case 2\" and \"Case 3\", the number of MC points is multiplied by 5 from the outer loop $\\# 11$ , and $\\gamma _{t,k} = 0.1$ for any $k>0$ .", "In \"Case 1\" and \"Case 2\", 
$\\gamma _{t,0} = 0.1$ and in \"Case 3\", $\\gamma _{t,0} =0$ .", "On Figure REF (d), we display $\\mathcal {D}_{t,k}$ vs the cumulated number of inner loops, starting from the loop $\\# 32$ ; the diamonds, circles and squares indicate $\\mathcal {D}_{t,k_\\mathrm {in}}$ .", "Here again, we observe the benefit of reducing the MC variability by increasing the number of MC points (compare \"Case 1\" to the other cases); \"Case 2\" and \"Case 3\" are almost similar, maybe with a slightly better behavior for \"Case 2\".", "Figure: [(a) top left] Comparison of algorithms; [(b) top right] Role of the size of the batch when computing $\\mathsf {S}_{t,0}$ ; [(c) bottom left] Role of the step sizes $\\gamma _{t,k}$ and the number of Monte Carlo points when computing $\\hat{\\mathsf {s}}_i^{t,k}$ ; [(d) bottom right] Role of $\\gamma _{t,0}$ " ] ]
2105.11732
[ [ "The Average Size of Ramanujan Sums over Cubic Number Fields" ], [ "Abstract Let K be a cubic number field.", "In this paper, we study the Ramanujan sums c_{J}(I), where I and J are integral ideals in O_{K}.", "The asymptotic behaviour of sums of c_{J}(I) over both I and J is investigated." ], [ "Ramanujan sums over the rationals", "For positive integers $m$ and $n$ the Ramanujan sum $c_{m}(n)$ is defined as $c_{m}(n)=\\sum _{\\begin{array}{c}1\\leqslant j\\leqslant m\\\\gcd(j,m)=1\\end{array}}e\\Big (\\frac{jn}{m}\\Big )=\\sum _{d|gcd(m,n)}d\\mu \\Big (\\frac{m}{d}\\Big ),$ where $e(z)=e^{2\\pi i z}$ and $\\mu (\\cdot )$ is the Möbius function.", "In 2012, Chan and Kumchev [1] studied the average order of $c_{m}(n)$ with respect to both $m$ and $n$ .", "They proved that $\\begin{aligned}S_{1}(X,Y)&=\\sum _{1\\leqslant m\\leqslant X}\\sum _{1\\leqslant n\\leqslant Y}c_{m}(n)\\\\&=Y-\\frac{3}{2\\pi ^{2}}X^{2}+O(XY^{1/3}\\log X)+O(X^{3}Y^{-1}),\\end{aligned}$ for large real numbers $Y\\geqslant X \\geqslant 3$ , and $\\begin{aligned}S_{1}(X,Y)& \\sim {\\left\\lbrace \\begin{array}{ll}Y, & \\text{if $\\delta >2$,}\\\\-\\frac{3}{2\\pi ^{2}}X^{2}, & \\text{if $1< \\delta <2$, }\\end{array}\\right.}\\end{aligned}$ if $Y\\asymp X^{\\delta }$ ." ], [ "Ramanujan sums in fields", "Let $\\textit {K}$ be a number field and $\\mathcal {O}_{\\textit {K}}$ denote its ring of algebraic integers.", "For any nonzero integral ideal $\\mathcal {I}$ in $\\mathcal {O}_{\\textit {K}}$ , the Möbius function is defined as follows: $\\mu (\\mathcal {I})=0$ if there exists a prime ideal $\\mathcal {P}$ such that $\\mathcal {P}^{2}$ divides $\\mathcal {I}$ , and $\\mu (\\mathcal {I})=(-1)^{r}$ if $\\mathcal {I}$ is a product of $r$ distinct prime ideals.", "For any ideal $\\mathcal {I}$ , the norm of $\\mathcal {I}$ is denoted by $\\textit {N}(\\mathcal {I})$ .", "For nonzero integral ideals $\\mathcal {I}$ and $\\mathcal {J}$ , the Ramanujan sum in fields is defined by $c_{\\mathcal {J}}(\\mathcal {I})=\\sum _{\\begin{array}{c}\\mathcal {M}\\in \\mathcal {O}_{\\textit {K}}\\\\\\mathcal {M}|\\mathcal {I},\\mathcal {M}|\\mathcal {J}\\end{array}}\\textit {N}(\\mathcal {M})\\mu \\Big (\\frac{\\mathcal {J}}{\\mathcal {M}}\\Big ),$ which is an analogue of (REF ).", "For each $n\\geqslant 1$ , let $a_{\\textit {K}}(n)$ denote the number of integral ideals in $\\mathcal {O}_{\\textit {K}}$ of norm $ n$ .", "Then $\\sum _{n\\leqslant x}a_{\\textit {K}}(n)=\\rho _{\\textit {K}} x +P_{\\textit {K}}(x),\\qquad P_{\\textit {K}}(x)=O(x^{\\frac{\\mathbf {d}-1}{\\mathbf {d}+1}}),$ where $\\rho _{\\textit {K}}$ is a constant depending only on the field $\\textit {K}$ and $\\mathbf {d}$ is the degree of the field extension $\\textit {K}/\\mathbb {Q}$ .", "This is a classical result of Landau (see [9]).", "Let $X \\geqslant 3$ and $Y \\geqslant 3$ be two large real numbers.", "Define $S_{\\textit {K}}(X,Y):=\\sum _{1\\leqslant \\textit {N}(\\mathcal {J})\\leqslant X}\\sum _{1\\leqslant \\textit {N}(\\mathcal {I})\\leqslant Y}c_{\\mathcal {J}}(\\mathcal {I}),$ which is an analogue of (REF ).", "When $\\textit {K}$ is a quadratic number field, some authors studied the asymptotic behaviour of $S_{\\textit {K}}(X,Y)$ (see [11], [13], [14]).", "In [11], Nowak proved $S_{\\textit {K}}(X,Y)\\sim \\rho _{\\textit {K}} Y$ provided that $Y>X^{\\delta }$ for some $\\delta >\\frac{1973}{820}$ .", "In [13], Zhai improved Nowak's results and proved that (REF ) holds provided that $Y>X^{\\delta }$ for some $\\delta 
>\\frac{79}{34}$ .", "Recently Zhai [14] proved that (REF ) holds for $Y>X^{2+\\varepsilon }$ .", "In this paper, we consider the asymptotic behaviour of $S_{\\textit {K}}(X,Y)$ for a cubic field $\\textit {K}$ .", "We shall prove the following results.", "Theorem 1 Let $\\textit {K}$ be a cubic number field.", "Suppose that $Y\\geqslant X\\geqslant 3$ are large real numbers.", "Then $S_{\\textit {K}}(X,Y)=\\rho _{\\textit {K}} Y+O(X^{\\frac{8}{5}}Y^{\\frac{2}{5}+\\varepsilon }+X^{\\frac{11}{8}}Y^{\\frac{1}{2}+\\varepsilon })$ provided that $Y>X^{11/4}$ .", "Theorem 2 Let $\\textit {K}$ be a cubic number field.", "Suppose that $T\\geqslant X\\geqslant 3$ are two large real numbers such that $T\\geqslant 10X$ .", "Then we have $\\int _{T}^{2T}|\\mathfrak {R}_{\\textit {K}}(X,Y)|^{2}dY=c(X)\\int _{T}^{2T}Y^{\\frac{2}{3}}dY+O(X^{\\frac{31}{9}}T^{\\frac{14}{9}+\\varepsilon }+X^{\\frac{26}{9}}T^{\\frac{29}{18}+\\varepsilon }),$ where $\\mathfrak {R}_{\\textit {K}}(X,Y):=S_{\\textit {K}}(X,Y)-\\rho _{\\textit {K}} Y$ and $c(X)$ is defined by (REF ).", "Remark.", "From (REF ) we can see that $c(X)\\ll X^{\\frac{7}{3}+\\varepsilon } $ .", "From this estimate and Theorem 2 we get that the asymptotic formula (REF ) holds on average provided that $Y>X^{\\frac{7}{3}+\\varepsilon }$ .", "Notation.", "Let $[x]$ denote the greatest integer less than or equal to $x$ .", "The notation $U\\ll V$ means that there exists a constant $C>0$ such that $|U|\\leqslant CV$ , which is equivalent to $U=O(V)$ .", "The notations $U\\gg V$ (which implies $U\\geqslant 0$ and $V\\geqslant 0$ ), $U\\asymp V$ (which means that we have both $U\\ll V$ and $U\\gg V$ ) are defined similarly.", "Let $\\zeta (s)$ denote the Riemann zeta-function and $\\tau _{r}(n)$ the number of ways of writing $n$ as a product of $r$ factors.", "In particular, $\\tau _{2}(n)=\\tau (n)$ is the Dirichlet divisor function.", "Finally, let $z_{n}~(n\\geqslant 1)$ denote a sequence of complex numbers.", "We set $\\Big |\\sum _{N<n\\leqslant 2N}z_{n}\\Big |^{*}:=\\max _{N\\leqslant N_{1}<N_{2}\\leqslant 2N}\\Big |\\sum _{N_{1}<n\\leqslant N_{2}}z_{n}\\Big |.$" ], [ "Some lemmas", "In this section, we make preparations for the proofs of our theorems.", "From now on, we always suppose that $\\textit {K}$ is a cubic number field.", "The Dedekind zeta-function of $\\textit {K}$ is defined by $\\zeta _{\\textit {K}}(s):=\\sum _{\\begin{array}{c}\\mathcal {I}\\in \\mathcal {O}_{\\textit {K}}\\\\\\mathcal {I}\\ne 0\\end{array}}\\frac{1}{\\textit {N}^{s}(\\mathcal {I})}\\quad (\\Re s >1).$ Then $\\zeta _{\\textit {K}}(s)=\\sum _{n=1}^{\\infty }\\frac{a_{\\textit {K}}(n)}{n^{s}} \\quad (\\Re s >1),$ where $a_{\\textit {K}}(n)$ is the number of integral ideals in $\\mathcal {O}_{\\textit {K}}$ of norm $ n$ .", "The function $\\mu _{\\textit {K}}(n)$ is defined by $\\frac{1}{\\zeta _{\\textit {K}}(s)}:=\\sum _{n=1}^{\\infty }\\frac{\\mu _{\\textit {K}}(n)}{n^{s}} \\quad (\\Re s>1).$ Define $\\textit {M}_{\\textit {K}}(x):=\\sum _{n\\leqslant x}\\mu _{\\textit {K}}(n).$ Then there is a trivial bound $\\textit {M}_{\\textit {K}}(x)\\ll x.$ We collect the algebraic properties of cubic number fields in the following lemma.", "Lemma 2.1 Let $\\textit {K}$ be a cubic number field over $\\mathbb {Q}$ and $D=df^{2}$ ($d$ squarefree) its discriminant; then (a) $\\textit {K}/\\mathbb {Q}$ is a normal extension if and only if $D=f^{2}$ .", "In this case $\\zeta _{\\textit {K}}(s)=\\zeta (s)L(s,\\chi _{1})\\overline{L(s,\\chi _{1})},$ where $\\zeta (s)$ is the Riemann zeta-function and 
$L(s,\\chi _{1})$ is an ordinary Dirichlet series (over $\\mathbb {Q}$ ) corresponding to a primitive character $\\chi _{1}$ modulo $f$ .", "(b) If $\\textit {K}/\\mathbb {Q}$ is not a normal extension, then $d\\ne 1$ and $\\zeta _{\\textit {K}}(s)=\\zeta (s)L(s,\\chi _{2}),$ where $L(s,\\chi _{2})$ is a Dirichlet $L$ -function over the quadratic field $F =\\mathbb {Q}(\\sqrt{d})$ : $L(s,\\chi _{2})=\\sum _{\\varrho }\\chi _{2}(\\varrho )N_{F}(\\varrho )^{-s}, \\quad (\\Re s >1).$ Here the summation is taken over all ideals $\\varrho \\ne 0$ in $F$ and $N_{F}$ denotes the (absolute) ideal norm in $F$ .", "This is Lemma 1 in [10].", "Remark 1.", "To describe the character $\\chi _{2}$ , let $H$ be the ideal group in $F$ according to which the normal extension $\\textit {K}(\\sqrt{d})$ is the class field.", "Then $H$ divides the set $A^{f}$ of all ideals $\\varrho \\subseteq F$ with $(\\varrho ,f)=1$ into three classes $A^{f}=H\\cup C\\cup C^{\\prime }$ , and ($\\omega =e^{2\\pi i/3}$ ) $\\begin{aligned}\\chi _{2}(\\varrho )&={\\left\\lbrace \\begin{array}{ll}1, \\qquad \\varrho \\in H,\\\\\\omega , \\qquad \\varrho \\in C,\\\\\\overline{\\omega }, \\qquad \\varrho \\in C^{\\prime },\\\\0, \\qquad (\\varrho ,f) \\ne 1.\\end{array}\\right.}\\end{aligned}$ The substitution $\\gamma =(\\sqrt{d} \\mapsto -\\sqrt{d})$ in $F$ maps $C$ onto $C^{\\prime }$ .", "Remark 2.", "The factorization of $\\zeta _{\\textit {K}}(s)$ in Lemma REF gives $a_{\\textit {K}}(n)=\\sum _{m|n}b(m),$ where in the case of a normal extension $b(m)=\\sum _{xy=m}\\chi _{1}(x)\\overline{\\chi _{1}(y)}$ ($\\chi _{1}$ is the primitive character modulo $f$ ).", "Otherwise $b(m)$ is equal to the number of ideals $\\varrho \\in H$ with $N_{F}(\\varrho )=m$ minus two times the number of ideals $\\varrho \\in C$ with $N_{F}(\\varrho )=m$ .", "In both cases, $|b(m)|\\ll m^{\\varepsilon }$ .", "Lemma 2.2 Let $\\textit {K}$ be an algebraic number field of degree $\\mathbf {d}$ , then $a_{\\textit {K}}(n)\\ll (\\tau (n))^{\\mathbf {d}-1},$ where $\\tau (n)$ is the Dirichlet divisor function and $\\mathbf {d}=[K:\\mathbb {Q}]$ .", "This is (68) in [2].", "Corollary.", "Let $\\textit {K}$ be a cubic field, then we have $a_{\\textit {K}}(n)\\ll \\tau ^{2}(n).$ Lemma 2.3 Suppose $1\\ll N\\ll Y$ , then we have $P_{\\textit {K}}(Y)=\\frac{Y^{1/3}}{\\sqrt{3}\\pi }\\sum _{n\\leqslant N}\\frac{a_{\\textit {K}}(n)}{n^{{2}/{3}}}\\cos (6\\pi (nY)^{{1}/{3}})+O(Y^{{2}/{3}+\\varepsilon }N^{-{1}/{3}}),$ where the $O$ -constant depends on $\\varepsilon $ .", "This is a special case of Proposition 3.2 of Friedlander and Iwaniec [4].", "Lemma 2.4 Let $T\\geqslant 10$ be a large parameter and $y$ a real number such that $T^{\\varepsilon }\\ll y\\ll T$ .", "Define for any $T\\leqslant Y \\leqslant 2T$ that $\\begin{split}&P_{1}(Y)=P_{1}(Y;y):=\\frac{Y^{1/3}}{\\sqrt{3}\\pi }\\sum _{n\\leqslant y}\\frac{a_{\\textit {K}}(n)}{n^{{2}/{3}}}\\cos (6\\pi (nY)^{{1}/{3}}),\\\\&P_{2}(Y)=P_{2}(Y;y):=P_{\\textit {K}}(Y)-P_{1}(Y).\\end{split}$ Then we have $\\int _{T}^{2T}|P_{2}(Y)|^{2}dY\\ll {T^{5/3+\\varepsilon }}{y^{-1/3}}\\quad (y\\ll T^{1/3}).$ We can prove that the estimate $\\int _{1}^{T}|\\zeta _{\\textit {K}}(7/12+it)|^{2}dt\\ll T^{1+\\varepsilon }$ holds.", "If $\\textit {K}/\\mathbb {Q}$ is a normal extension, then by Lemma REF we have $\\zeta _{\\textit {K}}(s)=\\zeta (s)L(s,\\chi _{1})\\overline{L(s,\\chi _{1})}$ .", "From Theorem 8.4 in [7] we get that $\\int _{1}^{T}|\\zeta 
(7/12+it)|^{6}dt\\ll T^{1+\\varepsilon }.$ The proof of Theorem 8.4 in [7] can be applied directly to $L(s,\\chi _{1})$ to derive $\\int _{1}^{T}|L(7/12+it,\\chi _{1})|^{6}dt\\ll T^{1+\\varepsilon }.$ From (REF ), (REF ) and Hölder's inequality we get $\\begin{split}&\\int _{1}^{T}|\\zeta _{\\textit {K}}(7/12+it)|^{2}dt\\\\&=\\int _{1}^{T}|\\zeta (7/12+it)|^{2}|L(7/12+it,\\chi _{1})|^{4}dt\\\\&\\ll \\Big (\\int _{1}^{T}|\\zeta (7/12+it)|^{6}dt\\Big )^{1/3}\\Big (\\int _{1}^{T}|L(7/12+it,\\chi _{1})|^{6}dt\\Big )^{2/3}\\\\&\\ll T^{1+\\varepsilon }.\\end{split}$ Now suppose that $\\textit {K}/\\mathbb {Q}$ is not a normal extension; then $\\zeta _{\\textit {K}}(s)=\\zeta (s)L(s,\\chi _{2})$ from Lemma REF .", "We know that $L(s,\\chi _{2})$ is an automorphic $L$ -function of degree 2 corresponding to a cusp form $F$ over $SL_{2}(\\mathbb {Z})$ (see, for example, Fomenko [5]).", "So from [3] (originally proved in [8]), we have $\\int _{1}^{T}|L(7/12+it,\\chi _{2})|^{3}dt\\ll T^{1+\\varepsilon }.$ By (REF ), (REF ) and Hölder's inequality we get $\\begin{split}&\\int _{1}^{T}|\\zeta _{\\textit {K}}(7/12+it)|^{2}dt\\\\&=\\int _{1}^{T}|\\zeta (7/12+it)|^{2}|L(7/12+it,\\chi _{2})|^{2}dt\\\\&\\ll \\Big (\\int _{1}^{T}|\\zeta (7/12+it)|^{6}dt\\Big )^{1/3}\\Big (\\int _{1}^{T}|L(7/12+it,\\chi _{2})|^{3}dt\\Big )^{2/3}\\\\&\\ll T^{1+\\varepsilon }.\\end{split}$ Now we give a short proof of (REF ).", "For simplicity, we follow the proof of Theorem 1 in [3].", "Take $d=3$ , $a(n)=a_{\\textit {K}}(n)$ , $N=[T^{5-\\varepsilon }]$ and $M=[T^{2/3}]$ .", "From (REF ) we can take $\\sigma ^{\\ast }=7/12$ .", "As in the proof of Theorem 1 in [3], we can write $P_{2}(Y)=R_{1}^{\\ast }(Y;y)+\\sum _{j=2}^{7}R_{j}(Y),$ where $R_{1}^{\\ast }(Y;y):=\\frac{Y^{1/3}}{\\sqrt{3}\\pi }\\sum _{y<n\\leqslant M}\\frac{d_{3}(n)}{n^{2/3}}\\cos (6\\pi (nY)^{1/3})$ and $R_{j}(Y)$ (j = 2, 3, 4, 5, 6, 7) are defined on page 2129 of [3].", "Similarly to formula (8.11) of [3], we have the estimate (noting that $y\\ll T^{1/3}$ ) $\\begin{split}&\\int _{T}^{2T}(R_{1}^{\\ast }(x;y)+R_{2}(x))^{2}dx\\\\&\\ll \\sum _{y<n\\leqslant M}\\frac{d_{3}^{2}(n)}{n^{4/3}}\\int _{T}^{2T}x^{2/3}dx+T^{5/3+\\varepsilon }M^{-1/6}+T^{4/3+\\varepsilon }M^{1/3}\\\\&\\ll T^{5/3+\\varepsilon }y^{-1/3}+T^{14/9+\\varepsilon }\\ll T^{5/3+\\varepsilon }y^{-1/3},\\end{split}$ which, combined with (8.17) of [3], gives (REF ).", "Next, we consider the following exponential sums $S_{0}=\\sum _{H<h\\leqslant 2H}\\sum _{N<n\\leqslant 2N}a(h,n)\\sum _{{M<m\\leqslant 2M}}b(m)e\\Big (U\\frac{h^{\\beta }n^{\\gamma }m^{\\alpha }}{H^{\\beta }N^{\\gamma }M^{\\alpha }}\\Big )$ and $S_{1}=\\sum _{H<h\\leqslant 2H}\\sum _{N<n\\leqslant 2N}a(h,n)\\Big |\\sum _{{M<m\\leqslant 2M}}e\\Big (U\\frac{h^{\\beta }n^{\\gamma }m^{\\alpha }}{H^{\\beta }N^{\\gamma }M^{\\alpha }}\\Big )\\Big |^{*},$ where $H, N, M$ are positive integers, $U$ is a real number greater than one, $a(h,n)$ and $b(m)$ are complex numbers of modulus at most one; moreover, $\\alpha , \\beta ,\\gamma $ are fixed real numbers such that $\\alpha (\\alpha -1)\\beta \\gamma \\ne 0$ .", "Then we have Lemma 2.5 $S_{0}\\ll (HNM)^{1+\\varepsilon }\\Big (\\Big (\\frac{U}{HNM^{2}}\\Big )^{1/4}+\\frac{1}{(HN)^{1/4}}+\\frac{1}{M^{1/2}}+\\frac{1}{U^{1/2}}\\Big ),$ and $S_{1}\\ll (HNM)^{1+\\varepsilon }\\Big (\\Big (\\frac{U}{HNM^{2}}\\Big )^{1/4}+\\frac{1}{M^{1/2}}+\\frac{1}{U}\\Big ).$ This is contained in [12].", "Lemma 2.6 Suppose that $L(H)=\\sum _{i=1}^{m}A_{i}H^{a_{i}}+\\sum _{j=1}^{n}B_{j}H^{-b_{j}},$ where 
$A_{i},B_{j},a_{i},$ and $b_{j}$ are positive.", "Assume that $H_{1}\\leqslant H_{2}$ .", "Then there is some $H$ with $H_{1}\\leqslant H \\leqslant H_{2}$ and $L(H)\\ll \\sum _{i=1}^{m}\\sum _{j=1}^{n}(A_{i}^{b_{j}}B_{j}^{a_{i}})^{{1}/{(a_{i}+b_{j})}}+\\sum _{i=1}^{m}A_{i}H_{1}^{a_{i}}+\\sum _{j=1}^{n}B_{j}H_{2}^{-b_{j}}.$ The implied constants depend only on $m$ and $n$ .", "See Lemma 2.4 in [6].", "Lemma 2.7 Let $l\\geqslant 2$ and $q\\geqslant 1$ be two fixed integers.", "Then we have $\\sum _{n\\leqslant x}\\tau _{l}^{q}(n)\\ll x(\\log x)^{l^{q}-1}.$ See Lemma 2.4 in [13].", "Lemma 2.8 Let $T \\geqslant 2$ be a real number.", "Then we have $\\sum _{\\begin{array}{c}m,n\\leqslant T\\\\ m\\ne n\\end{array}}\\frac{\\tau _{4}^{2}(m)\\tau _{4}^{2}(n)}{(mn)^{\\frac{2}{3}}|\\@root 3 \\of {m}-\\@root 3 \\of {n}|}\\ll T^{\\frac{1}{3}+\\varepsilon }.$ Firstly, we write $\\sum _{\\begin{array}{c}m,n\\leqslant T\\\\ m\\ne n\\end{array}}\\frac{\\tau _{4}^{2}(m)\\tau _{4}^{2}(n)}{(mn)^{\\frac{2}{3}}|\\@root 3 \\of {m}-\\@root 3 \\of {n}|}=S_{1}+S_{2},$ where $S_{1}=\\sum _{\\begin{array}{c}m,n\\leqslant T\\\\ |\\@root 3 \\of {m}-\\@root 3 \\of {n}|\\geqslant (mn)^{1/6}/10\\end{array}}\\frac{\\tau _{4}^{2}(m)\\tau _{4}^{2}(n)}{(mn)^{\\frac{2}{3}}|\\@root 3 \\of {m}-\\@root 3 \\of {n}|},$ $S_{2}=\\sum _{\\begin{array}{c}m,n\\leqslant T\\\\ 0<|\\@root 3 \\of {m}-\\@root 3 \\of {n}|< (mn)^{1/6}/10\\end{array}}\\frac{\\tau _{4}^{2}(m)\\tau _{4}^{2}(n)}{(mn)^{\\frac{2}{3}}|\\@root 3 \\of {m}-\\@root 3 \\of {n}|}.$ Applying Lemma REF with $l=4$ and $q=2$ , we have $S_{1}\\ll \\sum _{m,n\\leqslant T}\\frac{\\tau _{4}^{2}(m)\\tau _{4}^{2}(n)}{(mn)^{\\frac{5}{6}}}\\ll T^{\\frac{1}{3}+\\varepsilon },$ where we used partial summation.", "Secondly, $0<|\\@root 3 \\of {m}-\\@root 3 \\of {n}|< {(mn)^{1/6}}/{10}$ implies that $m\\asymp n$ .", "And from the Lagrange theorem we have $|\\@root 3 \\of {m}-\\@root 3 \\of {n}|\\asymp (mn)^{-1/3}|m-n|$ .", "By the formula $ab\\leqslant (a^{2}+b^{2})/2$ and Lemma REF with $l=4$ and $q=4$ we get that $\\begin{aligned}S_{2}&\\ll \\sum _{m\\asymp n\\leqslant T}\\frac{\\tau _{4}^{2}(m)\\tau _{4}^{2}(n)}{(mn)^{{1}/{3}}|m-n|}\\\\&\\ll \\sum _{m\\asymp n\\leqslant T}\\Big (\\frac{\\tau _{4}^{4}(m)}{m^{{2}/{3}}}+\\frac{\\tau _{4}^{4}(n)}{n^{{2}/{3}}}\\Big )\\frac{1}{|m-n|}\\\\&\\ll \\sum _{m\\leqslant T}\\frac{\\tau _{4}^{4}(m)}{m^{{2}/{3}}}\\sum _{m\\asymp n}\\frac{1}{|m-n|}\\ll T^{\\frac{1}{3}+\\varepsilon }.\\end{aligned}$" ], [ "The proof of Theorem 1", "We begin the proof with formula (2.3) in [11], which reads $\\begin{aligned}S_{\\textit {K}}(X,Y)&=\\rho _{\\textit {K}} Y+ \\sum _{\\begin{array}{c}\\mathcal {M},\\mathcal {L}\\in \\mathcal {O}_{\\textit {K}}\\\\1\\leqslant N(\\mathcal {M}\\mathcal {L})\\leqslant X\\end{array}}N(\\mathcal {M})\\mu (\\mathcal {L})P_{\\textit {K}}\\Big (\\frac{Y}{N(\\mathcal {M})}\\Big )\\\\&=\\rho _{\\textit {K}} Y+\\sum _{\\begin{array}{c}\\mathcal {M},\\mathcal {L}\\in \\mathcal {O}_{\\textit {K}}\\\\1\\leqslant N(\\mathcal {M})N(\\mathcal {L})\\leqslant X\\end{array}}N(\\mathcal {M})\\mu (\\mathcal {L})P_{\\textit {K}}\\Big (\\frac{Y}{N(\\mathcal {M})}\\Big ).\\end{aligned}$ Let $\\mathfrak {R}=\\mathfrak {R}_{\\textit {K}}(X,Y)$ denote the last sum in (REF ).", "We have $\\begin{aligned}\\mathfrak {R}&=\\sum _{1\\leqslant ml \\leqslant X}ma_{\\textit {K}}(m)\\mu _{K}(l)P_{\\textit {K}}\\Big (\\frac{Y}{m}\\Big )\\\\&=\\sum _{1\\leqslant l \\leqslant X}\\mu _{K}(l)\\sum _{1\\leqslant m \\leqslant X/l}ma_{\\textit {K}}(m)P_{\\textit 
{K}}\\Big (\\frac{Y}{m}\\Big )\\\\&=\\mathfrak {R_{1}^{\\dag }}+\\mathfrak {R_{2}^{\\dag }},\\end{aligned}$ where $\\begin{aligned}&\\mathfrak {R_{1}^{\\dag }}:=\\sum _{1\\leqslant l \\leqslant X^{1-\\varepsilon }}\\mu _{K}(l)\\sum _{1\\leqslant m \\leqslant X/l}ma_{\\textit {K}}(m)P_{\\textit {K}}\\Big (\\frac{Y}{m}\\Big ),\\\\&\\mathfrak {R_{2}^{\\dag }}:=\\sum _{X^{1-\\varepsilon }< l \\leqslant X}\\mu _{K}(l)\\sum _{1\\leqslant m \\leqslant X/l}ma_{\\textit {K}}(m)P_{\\textit {K}}\\Big (\\frac{Y}{m}\\Big ).\\end{aligned}$ Firstly, we bound $\\mathfrak {R_{2}^{\\dag }}$ .", "Müller [10] proved that $P_{\\textit {K}}(x)=O(x^{\\frac{43}{96}+\\varepsilon })$ .", "So we can easily derive that $\\mathfrak {R_{2}^{\\dag }}\\ll XY^{43/96+\\varepsilon }.$ Secondly, we consider $\\mathfrak {R_{1}^{\\dag }}$ .", "We can write $\\mathfrak {R_{1}^{\\dag }}:=\\sum _{1\\leqslant l \\leqslant X^{1-\\varepsilon }}\\mu _{K}(l)\\mathfrak {R_{1}}(X_{l},Y),$ where $\\mathfrak {R_{1}}(X_{l},Y)=\\sum _{1\\leqslant m \\leqslant X_{l}}ma_{\\textit {K}}(m)P_{\\textit {K}}\\Big (\\frac{Y}{m}\\Big ),\\qquad X_{l}=X/l.$ Using (REF ), we can write $\\mathfrak {R_{1}}(X_{l},Y)=\\sum _{1\\leqslant m_{1}m_{2}\\leqslant X_{l}}m_{1}m_{2}b(m_{2})P_{\\textit {K}}\\Big (\\frac{Y}{m_{1}m_{2}}\\Big ).$ By a splitting argument, $\\mathfrak {R_{1}}(X_{l},Y)$ can be written as a sum of the following $R(M_{1},M_{2}):=\\sum _{\\begin{array}{c}1\\leqslant m_{1}m_{2}\\leqslant X_{l}\\\\M_{j}<m_{j}\\leqslant 2M_{j}(j=1,2)\\end{array}}m_{1}m_{2}b(m_{2})P_{\\textit {K}}\\Big (\\frac{Y}{m_{1}m_{2}}\\Big ).$ Suppose that $y\\ll Y/(M_{1}M_{2})$ is a parameter to be determined.", "By Lemma REF , we have $\\begin{aligned}R(M_{1},M_{2})&=\\frac{Y^{\\frac{1}{3}}}{\\sqrt{3}\\pi }\\sum _{\\begin{array}{c}1\\leqslant m_{1}m_{2}\\leqslant X_{l}\\\\M_{j}<m_{j}\\leqslant 2M_{j}(j=1,2)\\end{array}}(m_{1}m_{2})^{\\frac{2}{3}}b(m_{2})\\sum _{n\\leqslant y}\\frac{a_{\\textit {K}}(n)}{n^{2/3}}\\cos \\Big (6\\pi \\@root 3 \\of {\\frac{nY}{m_{1}m_{2}}}\\Big )\\\\&\\qquad \\qquad +O((M_{1}M_{2})^{4/3}Y^{2/3+\\varepsilon }y^{-1/3}).\\end{aligned}$ By applying a splitting argument to the sum over $n$ we get that $\\begin{aligned}R(M_{1},M_{2})&\\ll Y^{\\frac{1}{3}}(M_{1}M_{2})^{\\frac{2}{3}+\\varepsilon }N^{-\\frac{2}{3}+\\varepsilon }|R^{*}(M_{1},M_{2},N)|\\\\&+O((M_{1}M_{2})^{4/3}Y^{2/3+\\varepsilon }y^{-1/3})\\end{aligned}$ for some $1\\ll N\\ll y$ , where $\\begin{aligned}R^{*}(M_{1},M_{2},N)=\\sum _{\\begin{array}{c}1\\leqslant m_{1}m_{2}\\leqslant X_{l}\\\\M_{j}<m_{j}\\leqslant 2M_{j}(j=1,2)\\end{array}}\\Big (\\frac{m_{1}}{M_{1}}\\Big )^{\\frac{2}{3}}\\Big (\\frac{m_{2}}{M_{2}}\\Big )^{\\frac{2}{3}}\\frac{b(m_{2})}{M_{2}^{\\varepsilon }}\\sum _{N<n \\leqslant 2N}c(n)e\\Big (6\\pi \\@root 3 \\of {\\frac{nY}{m_{1}m_{2}}}\\Big )\\end{aligned}$ with $c(n)=\\frac{a_{\\textit {K}}(n)}{N^{\\varepsilon }}\\Big (\\frac{N}{n}\\Big )^{\\frac{2}{3}}.$ Now, we give our first estimate for the sum $R^{*}(M_{1},M_{2},N)$ .", "Obviously, we have $R^{*}(M_{1},M_{2},N)\\ll R^{\\dag }(M_{1},M_{2},N),$ where $R^{\\dag }(M_{1},M_{2},N)=\\sum _{M_{2}<m_{2}\\leqslant 2M_{2}}\\sum _{N<n\\leqslant 2N}\\Big |\\sum _{M_{1}<m_{1}\\leqslant 2M_{1}}e\\Big (6\\pi \\@root 3 \\of {\\frac{nY}{m_{1}m_{2}}}\\Big )\\Big |^{*}.$ By taking $(H,N,M)=(M_{2},N,M_{1})$ and $U=\\@root 3 \\of {NY}/\\@root 3 \\of {M_{1}M_{2}}$ in Lemma REF , we get that $R^{\\dag }(M_{1},M_{2},N)Y^{-\\varepsilon }\\ll 
N^{\\frac{5}{6}}Y^{\\frac{1}{12}}{M_{1}}^{\\frac{5}{12}}M_{2}^{\\frac{2}{3}}+N{M_{1}}^{\\frac{1}{2}}M_{2}+N^{\\frac{2}{3}}Y^{-\\frac{1}{3}}(M_{1}M_{2})^{\\frac{4}{3}},$ which combines (REF ) gives $\\begin{aligned}&R^{*}(M_{1},M_{2},N)Y^{-\\varepsilon }\\\\&\\ll N^{\\frac{5}{6}}Y^{\\frac{1}{12}}{M_{1}}^{\\frac{5}{12}}M_{2}^{\\frac{2}{3}}+N{M_{1}}^{\\frac{1}{2}}M_{2}+N^{\\frac{2}{3}}Y^{-\\frac{1}{3}}(M_{1}M_{2})^{\\frac{4}{3}},\\\\&=N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{2}^{\\frac{1}{4}}+N(M_{1}M_{2})^{\\frac{1}{2}}M_{2}^{\\frac{1}{2}}+N^{\\frac{2}{3}}Y^{-\\frac{1}{3}}(M_{1}M_{2})^{\\frac{4}{3}}.\\end{aligned}$ Next, we give another estimate for $R^{*}(M_{1}, M_{2}, N)$ .", "Clearly we have $R^{*}(M_{1}, M_{2}, N)\\ll R^{\\ddag }(M_{1}, M_{2}, N),$ where $R^{\\ddag }(M_{1}, M_{2}, N)=\\sum _{M_{1}<m_{1}\\leqslant 2M_{1}}\\sum _{N<n\\leqslant 2N}\\sum _{M_{2}<m_{2}\\leqslant 2M_{2}}e\\Big (6\\pi \\@root 3 \\of {\\frac{nY}{m_{1}m_{2}}}\\Big ).$ By taking $(H, N, M)=(M_{1}, N, M_{2})$ and $U=\\@root 3 \\of {NY}/\\@root 3 \\of {M_{1}M_{2}}$ in Lemma REF , we get that $\\begin{split}R^{\\ddag }(M_{1},M_{2},N)Y^{-\\varepsilon }&\\ll N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{1}^{\\frac{1}{4}}+N^{\\frac{3}{4}}(M_{1}M_{2})^{\\frac{3}{4}}M_{2}^{\\frac{1}{4}}\\\\&\\qquad \\qquad +N(M_{1}M_{2})^{\\frac{1}{2}}M_{1}^{\\frac{1}{2}}+N^{\\frac{5}{6}}Y^{-\\frac{1}{6}}(M_{1}M_{2})^{\\frac{7}{6}}\\\\&\\ll N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{1}^{\\frac{1}{4}}+N^{\\frac{3}{4}}(M_{1}M_{2})^{\\frac{3}{4}}(M_{1}M_{2})^{\\frac{1}{4}}\\\\&\\qquad \\qquad +N(M_{1}M_{2})^{\\frac{1}{2}}M_{1}^{\\frac{1}{2}}+N^{\\frac{5}{6}}Y^{-\\frac{1}{6}}(M_{1}M_{2})^{\\frac{7}{6}}\\\\&\\ll N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{1}^{\\frac{1}{4}}+N^{\\frac{3}{4}}(M_{1}M_{2})\\\\&\\qquad \\qquad +N(M_{1}M_{2})^{\\frac{1}{2}}M_{1}^{\\frac{1}{2}}+N^{\\frac{5}{6}}Y^{-\\frac{1}{6}}(M_{1}M_{2})^{\\frac{7}{6}}.\\end{split}$ So $\\begin{split}R^{*}(M_{1},M_{2},N)Y^{-\\varepsilon }&\\ll N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{1}^{\\frac{1}{4}}+N(M_{1}M_{2})^{\\frac{1}{2}}M_{1}^{\\frac{1}{2}}\\\\&\\qquad \\qquad +N^{\\frac{3}{4}}(M_{1}M_{2})+N^{\\frac{5}{6}}Y^{-\\frac{1}{6}}(M_{1}M_{2})^{\\frac{7}{6}}.\\end{split}$ From (REF ) and (REF ), we get $\\begin{split}R^{*}(M_{1},M_{2},N)Y^{-\\varepsilon }&\\ll J_{1}+J_{2}+J_{3}+J_{4}+N^{\\frac{5}{6}}Y^{-\\frac{1}{6}}(M_{1}M_{2})^{\\frac{7}{6}}\\\\&\\qquad +N^{\\frac{3}{4}}(M_{1}M_{2})+N^{\\frac{2}{3}}Y^{-\\frac{1}{3}}(M_{1}M_{2})^{\\frac{4}{3}},\\end{split}$ where $\\begin{aligned}&J_{1}=\\min \\Big (N(M_{1}M_{2})^{\\frac{1}{2}}M_{2}^{\\frac{1}{2}},N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{1}^{\\frac{1}{4}}\\Big ),\\\\&J_{2}=\\min \\Big (N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{2}^{\\frac{1}{4}},N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{1}^{\\frac{1}{4}}\\Big ),\\\\&J_{3}=\\min \\Big (N(M_{1}M_{2})^{\\frac{1}{2}}M_{2}^{\\frac{1}{2}},N(M_{1}M_{2})^{\\frac{1}{2}}M_{1}^{\\frac{1}{2}}\\Big ),\\\\&J_{4}=\\min \\Big (N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{2}^{\\frac{1}{4}},N(M_{1}M_{2})^{\\frac{1}{2}}M_{1}^{\\frac{1}{2}}\\Big ).\\end{aligned}$ Noticing the fact that $\\min (X_{1},\\ldots ,X_{k}) \\leqslant X_{1}^{a_{1}} \\ldots X_{k}^{a_{k}},$ where $X_{1}, \\ldots ,X_{k}>0$ , $a_{1},\\ldots ,a_{k}\\geqslant 0$ satisfying $a_{1}+\\ldots +a_{k}=1$ , we have $\\begin{aligned}J_{1}&\\leqslant \\Big 
(N(M_{1}M_{2})^{\\frac{1}{2}}M_{2}^{\\frac{1}{2}}\\Big )^{\\frac{1}{3}}\\Big (N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{1}^{\\frac{1}{4}}\\Big )^{\\frac{2}{3}}\\leqslant N^{\\frac{8}{9}}Y^{\\frac{1}{18}}(M_{1}M_{2})^{\\frac{11}{18}},\\\\J_{2}&\\leqslant \\Big (N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{2}^{\\frac{1}{4}}\\Big )^{\\frac{1}{2}}\\Big (N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{1}^{\\frac{1}{4}}\\Big )^{\\frac{1}{2}}\\leqslant N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{13}{24}},\\\\J_{3}&\\leqslant \\Big (N(M_{1}M_{2})^{\\frac{1}{2}}M_{2}^{\\frac{1}{2}}\\Big )^{\\frac{1}{2}}\\Big (N(M_{1}M_{2})^{\\frac{1}{2}}M_{1}^{\\frac{1}{2}}\\Big )^{\\frac{1}{2}}\\leqslant N(M_{1}M_{2})^{\\frac{3}{4}},\\\\J_{4}&\\leqslant \\Big (N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{12}}M_{2}^{\\frac{1}{4}}\\Big )^{\\frac{2}{3}}\\Big (N(M_{1}M_{2})^{\\frac{1}{2}}M_{1}^{\\frac{1}{2}}\\Big )^{\\frac{1}{3}}\\leqslant N^{\\frac{8}{9}}Y^{\\frac{1}{18}}(M_{1}M_{2})^{\\frac{11}{18}}.\\end{aligned}$ It now follows that $\\begin{split}&R^{*}(M_{1},M_{2},N)Y^{-\\varepsilon }\\\\&\\ll N^{\\frac{8}{9}}Y^{\\frac{1}{18}}(M_{1}M_{2})^{\\frac{11}{18}}+N(M_{1}M_{2})^{\\frac{3}{4}}+N^{\\frac{5}{6}}Y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{13}{24}}\\\\&\\qquad +N^{\\frac{5}{6}}Y^{-\\frac{1}{6}}(M_{1}M_{2})^{\\frac{7}{6}}+N^{\\frac{3}{4}}(M_{1}M_{2})+N^{\\frac{2}{3}}Y^{-\\frac{1}{3}}(M_{1}M_{2})^{\\frac{4}{3}}.\\end{split}$ Combining (REF ) with (REF ), we get (recalling $N\\ll y$ ) $\\begin{split}&R(M_{1},M_{2})Y^{-\\varepsilon }\\\\&\\ll N^{\\frac{2}{9}}Y^{\\frac{7}{18}}(M_{1}M_{2})^{\\frac{23}{18}}+ N^{\\frac{1}{3}}Y^{\\frac{1}{3}}(M_{1}M_{2})^{\\frac{17}{12}}+ N^{\\frac{1}{6}}Y^{\\frac{5}{12}}(M_{1}M_{2})^{\\frac{29}{24}}\\\\&\\qquad + N^{\\frac{1}{6}}Y^{\\frac{1}{6}}(M_{1}M_{2})^{\\frac{11}{6}}+ N^{\\frac{1}{12}}Y^{\\frac{1}{3}}(M_{1}M_{2})^{\\frac{5}{3}}+ Y^{\\frac{2}{3}}(M_{1}M_{2})^{\\frac{4}{3}}y^{-\\frac{1}{3}}+(M_{1}M_{2})^{2}\\\\&\\ll Y^{\\frac{7}{18}}y^{\\frac{2}{9}}(M_{1}M_{2})^{\\frac{23}{18}}+ Y^{\\frac{1}{3}}y^{\\frac{1}{3}}(M_{1}M_{2})^{\\frac{17}{12}}+ Y^{\\frac{5}{12}}y^{\\frac{1}{6}}(M_{1}M_{2})^{\\frac{29}{24}}\\\\&\\qquad + Y^{\\frac{1}{6}}y^{\\frac{1}{6}}(M_{1}M_{2})^{\\frac{11}{6}}+ Y^{\\frac{1}{3}}y^{\\frac{1}{12}}(M_{1}M_{2})^{\\frac{5}{3}}+ Y^{\\frac{2}{3}}(M_{1}M_{2})^{\\frac{4}{3}}y^{-\\frac{1}{3}}+(M_{1}M_{2})^{2}.\\end{split}$ By choosing a best $y$ with Lemma REF  (recalling that $X_{l}=X/l$ ), we get that $\\begin{aligned}R(M_{1},M_{2})Y^{-\\varepsilon }&\\ll Y^{\\frac{1}{2}}(M_{1}M_{2})^{\\frac{13}{10}}+Y^{\\frac{1}{2}}(M_{1}M_{2})^{\\frac{11}{8}}+Y^{\\frac{1}{2}}(M_{1}M_{2})^{\\frac{5}{4}}\\\\&\\qquad \\qquad +Y^{\\frac{1}{3}}(M_{1}M_{2})^{\\frac{5}{3}}+Y^{\\frac{2}{5}}(M_{1}M_{2})^{\\frac{8}{5}}+(M_{1}M_{2})^{2}\\\\&\\ll Y^{\\frac{1}{2}}{X_{l}}^{\\frac{13}{10}}+Y^{\\frac{1}{2}}{X_{l}}^{\\frac{11}{8}}+Y^{\\frac{1}{2}}{X_{l}}^{\\frac{5}{4}}+Y^{\\frac{1}{3}}{X_{l}}^{\\frac{5}{3}}+Y^{\\frac{2}{5}}{X_{l}}^{\\frac{8}{5}}+{X_{l}}^{2}.\\end{aligned}$ From (REF )-(REF ) and (REF ), we get $\\begin{aligned}\\mathfrak {R}_{1}^{\\dag }Y^{-\\varepsilon }&\\ll {X}^{\\frac{13}{10}}Y^{\\frac{1}{2}}+{X}^{\\frac{11}{8}}Y^{\\frac{1}{2}}+{X}^{\\frac{5}{4}}Y^{\\frac{1}{2}}+{X}^{\\frac{5}{3}}Y^{\\frac{1}{3}}+{X}^{\\frac{8}{5}}Y^{\\frac{2}{5}}+{X}^{2}\\\\&\\ll {X}^{\\frac{11}{8}}Y^{\\frac{1}{2}}+{X}^{\\frac{8}{5}}Y^{\\frac{2}{5}}\\end{aligned}$ by noting that $Y\\geqslant X$ .", "This together with (REF ) and (REF ) yields 
$\\mathfrak {R}Y^{-\\varepsilon }\\ll {X}^{\\frac{11}{8}}Y^{\\frac{1}{2}}+{X}^{\\frac{8}{5}}Y^{\\frac{2}{5}}+XY^{\\frac{43}{96}}\\ll {X}^{\\frac{11}{8}}Y^{\\frac{1}{2}}+{X}^{\\frac{8}{5}}Y^{\\frac{2}{5}}.$ This completes the proof of Theorem 1." ], [ "The proof of Theorem 2", "We begin with the first expression of $\\mathfrak {R}$ in (REF ) $\\begin{aligned}\\mathfrak {R}&=\\sum _{1\\leqslant ml \\leqslant X}ma_{\\textit {K}}(m)\\mu _{K}(l)P_{\\textit {K}}\\Big (\\frac{Y}{m}\\Big )\\\\&=\\sum _{1\\leqslant m \\leqslant X}ma_{\\textit {K}}(m)\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m}\\Big )P_{\\textit {K}}\\Big (\\frac{Y}{m}\\Big )\\\\&=\\mathfrak {R}_{1}+\\mathfrak {R}_{2},\\end{aligned}$ where $\\begin{aligned}&\\mathfrak {R}_{1}=\\frac{Y^{1/3}}{\\sqrt{3}\\pi }\\sum _{m\\leqslant X}m^{2/3}a_{\\textit {K}}(m)\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m}\\Big )\\sum _{n\\leqslant y}\\frac{a_{\\textit {K}}(n)}{n^{{2}/{3}}}\\cos \\Big (6\\pi \\@root 3 \\of {\\frac{nY}{m}}\\Big ),\\\\&\\mathfrak {R}_{2}=\\sum _{1\\leqslant m \\leqslant X}ma_{\\textit {K}}(m)\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m}\\Big )P_{2}\\Big (\\frac{Y}{m}\\Big ).\\end{aligned}$ A.", "Evaluation of $\\int _{T}^{2T}\\mathfrak {R}_{2}^{2}dY$ .", "Suppose that $0<y<\\big (\\frac{T}{X}\\big )^{1/3}$ ; then it is not hard to see that $\\begin{split}\\mathfrak {R}_{2}&\\ll \\sum _{ m\\sim M}ma_{\\textit {K}}(m)\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m}\\Big )P_{2}\\Big (\\frac{Y}{m}\\Big )\\log X\\\\&\\ll X\\sum _{ m\\sim M}a_{\\textit {K}}(m)P_{2}\\Big (\\frac{Y}{m}\\Big )\\log X\\end{split}$ for some $1\\ll M\\ll X$ , where we used the trivial bound $\\textit {M}_{\\textit {K}}(t)\\ll t$ .", "By Cauchy's inequality we get $\\begin{split}\\mathfrak {R}_{2}^{2}&\\ll X^{2}\\sum _{ m\\sim M}a_{\\textit {K}}(m)\\sum _{ m\\sim M}a_{\\textit {K}}(m)P_{2}^{2}\\Big (\\frac{Y}{m}\\Big )\\log ^{2}X\\\\&\\ll X^{2}M\\sum _{ m\\sim M}a_{\\textit {K}}(m)P_{2}^{2}\\Big (\\frac{Y}{m}\\Big )\\log ^{2}X,\\end{split}$ which together with $Xy^{3}\\ll T$ implies that $\\begin{split}\\int _{T}^{2T}\\mathfrak {R}_{2}^{2}dY&\\ll X^{2}M\\sum _{ m\\sim M}a_{\\textit {K}}(m)\\log ^{2}X\\int _{T}^{2T}P_{2}^{2}\\Big (\\frac{Y}{m}\\Big )dY\\\\&\\ll X^{2}M\\sum _{ m\\sim M}a_{\\textit {K}}(m)m\\log ^{2}X\\int _{T}^{2T}P_{2}^{2}\\Big (\\frac{Y}{m}\\Big )d\\Big (\\frac{Y}{m}\\Big )\\\\&\\ll X^{2}M\\sum _{ m\\sim M}a_{\\textit {K}}(m)m\\Big (\\frac{T}{m}\\Big )^{\\frac{5}{3}+\\varepsilon }y^{-\\frac{1}{3}}\\log ^{2}X\\\\&\\ll X^{2}M^{\\frac{4}{3}}T^{\\frac{5}{3}+\\varepsilon }y^{-\\frac{1}{3}}\\\\&\\ll X^{\\frac{10}{3}}T^{\\frac{5}{3}+\\varepsilon }y^{-\\frac{1}{3}}.\\end{split}$ B.", "Evaluation of $\\int _{T}^{2T}\\mathfrak {R}_{1}^{2}dY$ .", "Noticing that $\\begin{aligned}\\mathfrak {R}_{1}^{2}=\\frac{Y^{\\frac{2}{3}}}{3\\pi ^{2}}&\\sum _{1\\leqslant m_{1},m_{2}\\leqslant X}(m_{1}m_{2})^{\\frac{2}{3}}a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{2}}\\Big )\\\\&\\times \\sum _{n_{1},n_{2}\\leqslant y}\\frac{a_{\\textit {K}}(n_{1})}{n_{1}^{{2}/{3}}}\\frac{a_{\\textit {K}}(n_{2})}{n_{2}^{{2}/{3}}}\\cos \\Big (6\\pi \\@root 3 \\of {\\frac{n_{1}Y}{m_{1}}}\\Big )\\cos \\Big (6\\pi \\@root 3 \\of {\\frac{n_{2}Y}{m_{2}}}\\Big )\\end{aligned}$ and using the elementary formula $\\cos \\alpha \\cos \\beta =\\frac{1}{2}\\big (\\cos (\\alpha -\\beta )+\\cos (\\alpha +\\beta )\\big )$ gives $\\mathfrak {R}_{1}^{2}=Q_{1}(Y)+Q_{2}(Y)+Q_{3}(Y),$ where $\\begin{aligned}Q_{1}(Y)& 
:=\\frac{Y^{\\frac{2}{3}}}{6\\pi ^{2}}\\sum _{\\begin{array}{c}m_{1},m_{2}\\leqslant X;n_{1},n_{2}\\leqslant y\\\\n_{1}m_{2}=n_{2}m_{1}\\end{array}}(m_{1}m_{2})^{\\frac{2}{3}}a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})\\\\&\\qquad \\qquad \\qquad \\qquad \\times \\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{2}}\\Big )\\frac{a_{\\textit {K}}(n_{1})}{n_{1}^{{2}/{3}}}\\frac{a_{\\textit {K}}(n_{2})}{n_{2}^{{2}/{3}}},\\\\ Q_{2}(Y)& :=\\frac{Y^{\\frac{2}{3}}}{6\\pi ^{2}}\\sum _{\\begin{array}{c}m_{1},m_{2}\\leqslant X;n_{1},n_{2}\\leqslant y\\\\n_{1}m_{2}\\ne n_{2}m_{1}\\end{array}}(m_{1}m_{2})^{\\frac{2}{3}}a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{2}}\\Big )\\\\ &\\qquad \\qquad \\qquad \\qquad \\times \\frac{a_{\\textit {K}}(n_{1})}{n_{1}^{{2}/{3}}}\\frac{a_{\\textit {K}}(n_{2})}{n_{2}^{{2}/{3}}}\\cos \\Big (6\\pi \\@root 3 \\of {Y}\\Big (\\@root 3 \\of {\\frac{n_{1}}{m_{1}}}-\\@root 3 \\of {\\frac{n_{2}}{m_{2}}}\\Big )\\Big ),\\\\ Q_{3}(Y)& :=\\frac{Y^{\\frac{2}{3}}}{6\\pi ^{2}}\\sum _{\\begin{array}{c}m_{1},m_{2}\\leqslant X\\\\n_{1},n_{2}\\leqslant y\\end{array}}(m_{1}m_{2})^{\\frac{2}{3}}a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{2}}\\Big )\\\\& \\qquad \\qquad \\qquad \\qquad \\times \\frac{a_{\\textit {K}}(n_{1})}{n_{1}^{{2}/{3}}}\\frac{a_{\\textit {K}}(n_{2})}{n_{2}^{{2}/{3}}}\\cos \\Big (6\\pi \\@root 3 \\of {Y}\\Big (\\@root 3 \\of {\\frac{n_{1}}{m_{1}}}+\\@root 3 \\of {\\frac{n_{2}}{m_{2}}}\\Big )\\Big ).\\end{aligned}$ Firstly, we consider $Q_{3}(Y)$ .", "By using the first derivative test, (REF ) and the elementary formula $a+b\\geqslant 2\\sqrt{ab}$ ($a>0, b>0$ ), we get $\\begin{aligned}\\int _{T}^{2T}Q_{3}(Y)dY&\\ll T^{\\frac{4}{3}}\\sum _{\\begin{array}{c}m_{1},m_{2}\\leqslant X\\\\n_{1},n_{2}\\leqslant y\\end{array}}(m_{1}m_{2})^{\\frac{2}{3}}a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})\\Big |\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{2}}\\Big )\\Big |\\\\& \\qquad \\qquad \\qquad \\times \\frac{a_{\\textit {K}}(n_{1})}{n_{1}^{{2}/{3}}}\\frac{a_{\\textit {K}}(n_{2})}{n_{2}^{{2}/{3}}}\\times \\frac{1}{\\@root 3 \\of {\\frac{n_{1}}{m_{1}}}+\\@root 3 \\of {\\frac{n_{2}}{m_{2}}}}\\\\&\\ll X^{2}T^{\\frac{4}{3}}\\sum _{m_{1},m_{2}\\leqslant X}\\frac{a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})}{(m_{1}m_{2})^{{1}/{6}}}\\sum _{n_{1},n_{2}\\leqslant y}\\frac{a_{\\textit {K}}(n_{1})}{n_{1}^{{5}/{6}}}\\frac{a_{\\textit {K}}(n_{2})}{n_{2}^{{5}/{6}}}\\\\&\\ll X^{\\frac{11}{3}}T^{\\frac{4}{3}}y^{\\frac{1}{3}},\\end{aligned}$ where in the last step we used (REF ) and partial summation.", "Secondly, we consider $Q_{2}(Y)$ .", "By the first derivative test and (REF ) again we get with the help of Lemma REF that $\\begin{aligned}&\\int _{T}^{2T}Q_{2}(Y)dY\\\\&\\ll T^{\\frac{4}{3}}\\sum _{\\begin{array}{c}m_{1},m_{2}\\leqslant X;n_{1},n_{2}\\leqslant y\\\\n_{1}m_{2}\\ne n_{2}m_{1}\\end{array}}(m_{1}m_{2})^{\\frac{2}{3}}a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})\\Big |\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{m_{2}}\\Big )\\Big |\\\\& \\qquad \\qquad \\qquad \\times \\frac{a_{\\textit {K}}(n_{1})}{n_{1}^{{2}/{3}}}\\frac{a_{\\textit 
{K}}(n_{2})}{n_{2}^{{2}/{3}}}\\times \\frac{1}{\\Big |\\@root 3 \\of {\\frac{n_{1}}{m_{1}}}-\\@root 3 \\of {\\frac{n_{2}}{m_{2}}}\\Big |}\\\\&\\ll X^{2}T^{\\frac{4}{3}} \\sum _{\\begin{array}{c}m_{1},m_{2}\\leqslant X;n_{1},n_{2}\\leqslant y\\\\n_{1}m_{2}\\ne n_{2}m_{1}\\end{array}}\\frac{a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})a_{\\textit {K}}(n_{1})a_{\\textit {K}}(n_{2})}{(n_{1}n_{2})^{{2}/{3}}|\\@root 3 \\of {n_{1}m_{2}}-\\@root 3 \\of {n_{2}m_{1}}|}\\\\&\\ll X^{\\frac{10}{3}}T^{\\frac{4}{3}}\\sum _{\\begin{array}{c}m_{1},m_{2}\\leqslant X;n_{1},n_{2}\\leqslant y\\\\n_{1}m_{2}\\ne n_{2}m_{1}\\end{array}}\\frac{a_{\\textit {K}}(m_{1})a_{\\textit {K}}(m_{2})a_{\\textit {K}}(n_{1})a_{\\textit {K}}(n_{2})}{(m_{1}m_{2})^{{2}/{3}}(n_{1}n_{2})^{{2}/{3}}|\\@root 3 \\of {n_{1}m_{2}}-\\@root 3 \\of {n_{2}m_{1}}|}\\\\&\\ll X^{\\frac{10}{3}}T^{\\frac{4}{3}}\\sum _{\\begin{array}{c}l_{1},l_{2}\\leqslant Xy\\\\l_{1}\\ne l_{2}\\end{array}}\\frac{\\tau _{4}^{2}(l_{1})\\tau _{4}^{2}(l_{2})}{l_{1}^{2/3}l_{2}^{2/3}|\\@root 3 \\of {l_{1}}-\\@root 3 \\of {l_{2}}|}\\\\&\\ll T^{\\frac{4}{3}}X^{\\frac{10}{3}}(Xy)^{\\frac{1}{3}+\\varepsilon }\\\\&\\ll X^{\\frac{11}{3}}T^{\\frac{4}{3}+\\varepsilon }y^{\\frac{1}{3}},\\end{aligned}$ where we used the estimate $a_{\\textit {K}}(m)a_{\\textit {K}}(n)\\leqslant \\tau ^{2}(m)\\tau ^{2}(n)\\leqslant \\tau _{4}^{2}(mn)$ .", "Finally, we consider $Q_{1}(Y)$ .", "Let $m=(m_{1},m_{2})$ .", "Write $m_{1}=mm_{1}^{*},~ m_{2}=mm_{2}^{*}$ such that $(m_{1}^{*}, ~m_{2}^{*})=1$ .", "If $n_{1}m_{2}=n_{2}m_{1}$ , we immediately get that $n_{1}=nm_{1}^{*}, ~n_{2}=nm_{2}^{*}$ for some positive integer $n$ .", "It follows that $\\begin{aligned}Q_{1}(Y)&=\\frac{Y^{\\frac{2}{3}}}{6\\pi ^{2}}\\sum _{\\begin{array}{c}mm_{1},mm_{2}\\leqslant X\\\\gcd(m_{1},m_{2})=1\\end{array}}m^{4/3}a_{\\textit {K}}(mm_{1})a_{\\textit {K}}(mm_{2})\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{mm_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big ({\\frac{X}{mm_{2}}}\\Big )\\\\& \\qquad \\qquad \\qquad \\times \\sum _{n\\leqslant \\min (\\frac{y}{m_{1}},\\frac{y}{m_{2}})}\\frac{a_{\\textit {K}}(nm_{1})a_{\\textit {K}}(nm_{2})}{n^{4/3}}\\\\&=c(X)Y^{\\frac{2}{3}}+E(Y),\\end{aligned}$ where $\\begin{aligned}&c(X)=\\frac{1}{6\\pi ^{2}}\\sum _{\\begin{array}{c}mm_{1},mm_{2}\\leqslant X\\\\gcd(m_{1},m_{2})=1\\end{array}}m^{4/3}a_{\\textit {K}}(mm_{1})a_{\\textit {K}}(mm_{2})\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{mm_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{mm_{2}}\\Big )\\\\& \\qquad \\qquad \\qquad \\times \\sum _{n=1}^{\\infty }\\frac{a_{\\textit {K}}(nm_{1})a_{\\textit {K}}(nm_{2})}{n^{4/3}},\\\\&E(Y)=\\frac{Y^{\\frac{2}{3}}}{6\\pi ^{2}}\\sum _{\\begin{array}{c}mm_{1},mm_{2}\\leqslant X\\\\gcd(m_{1},m_{2})=1\\end{array}}m^{4/3}a_{\\textit {K}}(mm_{1})a_{\\textit {K}}(mm_{2})\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{mm_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{mm_{2}}\\Big )\\\\& \\qquad \\qquad \\qquad \\times \\sum _{n> \\min (\\frac{y}{m_{1}},\\frac{y}{m_{2}})}\\frac{a_{\\textit {K}}(nm_{1})a_{\\textit {K}}(nm_{2})}{n^{4/3}}.\\end{aligned}$ Noting that $a_{\\textit {K}}(mn)\\leqslant \\tau ^{2}(mn)\\leqslant \\tau ^{2}(m)\\tau ^{2}(n)$ , we get that $\\begin{aligned}E(Y)&\\ll Y^{\\frac{2}{3}}\\sum _{\\begin{array}{c}mm_{1},mm_{2}\\leqslant X\\\\gcd(m_{1},m_{2})=1\\end{array}}m^{4/3}a_{\\textit {K}}(mm_{1})a_{\\textit {K}}(mm_{2})\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{mm_{1}}\\Big )\\textit {M}_{\\textit {K}}\\Big (\\frac{X}{mm_{2}}\\Big )\\\\& \\qquad \\qquad \\qquad 
\\times \\sum _{n> \\min (\\frac{y}{m_{1}},\\frac{y}{m_{2}})}\\frac{a_{\\textit {K}}(nm_{1})a_{\\textit {K}}(nm_{2})}{n^{4/3}}\\\\&\\ll X^{2}Y^{\\frac{2}{3}}\\sum _{m\\leqslant X}\\frac{\\tau ^{4}(m)}{m^{2/3}}\\sum _{\\begin{array}{c}m_{1}\\leqslant \\frac{X}{m},m_{2}\\leqslant \\frac{X}{m}\\\\gcd(m_{1},m_{2})=1\\end{array}}\\frac{\\tau ^{4}(m_{1})\\tau ^{4}(m_{2})}{m_{1}m_{2}}\\sum _{n>\\min (\\frac{y}{m_{1}},\\frac{y}{m_{2}})}\\frac{\\tau ^{4}(n)}{n^{4/3}}\\\\&\\ll X^{2}Y^{\\frac{2}{3}}\\sum _{m\\leqslant X}\\frac{\\tau ^{4}(m)}{m^{2/3}}\\sum _{m_{1}\\leqslant m_{2}\\leqslant \\frac{X}{m}}\\frac{\\tau ^{4}(m_{1})\\tau ^{4}(m_{2})}{m_{1}m_{2}}\\times \\Big (\\frac{m_{2}}{y}\\Big )^{1/3-\\varepsilon }\\\\&\\ll X^{2}Y^{\\frac{2}{3}}{y}^{\\varepsilon -\\frac{1}{3}}\\sum _{m\\leqslant X}\\frac{\\tau ^{4}(m)}{m^{2/3}}\\sum _{m_{2}\\leqslant \\frac{X}{m}}\\frac{\\tau ^{4}(m_{2})}{m_{2}^{2/3+\\varepsilon }}\\sum _{m_{1}\\leqslant m_{2}}\\frac{\\tau ^{4}(m_{1})}{m_{1}}\\\\&\\ll X^{\\frac{7}{3}}T^{\\frac{2}{3}+\\varepsilon }y^{-\\frac{1}{3}}.\\end{aligned}$ This together with (REF ) yields $\\int _{T}^{2T}Q_{1}(Y)dY=c(X)\\int _{T}^{2T}Y^{\\frac{2}{3}}dY+O(X^{\\frac{7}{3}}T^{\\frac{5}{3}+\\varepsilon }y^{-\\frac{1}{3}}).$ Similar to (REF ), we obtain the estimate $c(X)\\ll X^{\\frac{7}{3}+\\varepsilon }.$ From (REF )-(REF ) and (REF ), we get $\\int _{T}^{2T}\\mathfrak {R}_{1}^{2}dY=c(X)\\int _{T}^{2T}Y^{\\frac{2}{3}}dY+O(X^{\\frac{7}{3}}T^{\\frac{5}{3}+\\varepsilon }y^{-\\frac{1}{3}}+X^{\\frac{11}{3}}T^{\\frac{4}{3}+\\varepsilon }y^{\\frac{1}{3}}).$ C. Evaluation of $\\int _{T}^{2T}\\mathfrak {R}^{2}dY$ From (REF ), (REF ), (REF ) and Cauchy's inequality, we get $\\begin{aligned}\\int _{T}^{2T}\\mathfrak {R}_{1}\\mathfrak {R}_{2}dY\\ll X^{\\frac{17}{6}}T^{\\frac{5}{3}+\\varepsilon }y^{-\\frac{1}{6}}+X^{\\frac{7}{2}}T^{\\frac{3}{2}+\\varepsilon }.\\end{aligned}$ Combining (REF ), (REF ) and (REF ), we finally get $\\begin{split}\\int _{T}^{2T}\\mathfrak {R}^{2}dY&=c(X)\\int _{T}^{2T}Y^{\\frac{2}{3}}dY+O(X^{\\frac{11}{3}}T^{\\frac{4}{3}+\\varepsilon }y^{\\frac{1}{3}}+X^{\\frac{10}{3}}T^{\\frac{5}{3}+\\varepsilon }y^{-\\frac{1}{3}}\\\\&\\qquad \\qquad \\qquad \\qquad \\qquad \\qquad +X^{\\frac{17}{6}}T^{\\frac{5}{3}+\\varepsilon }y^{-\\frac{1}{6}}+X^{\\frac{7}{2}}T^{\\frac{3}{2}+\\varepsilon }).\\end{split}$ By choosing a best $y\\in (1,(T/X)^{1/3})$ via Lemma REF , we get $\\int _{T}^{2T}|\\mathfrak {R}_{\\textit {K}}(X,Y)|^{2}dY=c(X)\\int _{T}^{2T}Y^{\\frac{2}{3}}dY+O(X^{\\frac{31}{9}}T^{\\frac{14}{9}+\\varepsilon }+X^{\\frac{26}{9}}T^{\\frac{29}{18}+\\varepsilon }),$ where $c(X)$ is defined by (REF ).", "This completes the proof of Theorem 2." ] ]
2105.11699
[ [ "Intermediate Models of Magidor-Radin Forcing- Part II" ], [ "Abstract We continue the work done by the authors and before that by the second author, Kanovei and Koepke.", "We prove that for every set of ordinals $A$ in a Magidor-Radin generic extension using a coherent sequence such that $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , there is $C'\\subseteq C_G$ such that $V[A]=V[C']$ .", "Also we prove that the supremum of a fresh set added by Prikry, tree Prikry, Magidor, Magidor-Radin or Radin forcing has cofinality $\\omega $ in the generic extension." ], [ "Introduction", "A basic fact about the Cohen and Random forcings is that every subforcing of the Cohen (Random) forcing is equivalent to it.", "Kanovei, Koepke and the second author showed in [9] that the same is true for the standard Prikry forcing.", "The result was generalized to the Magidor forcing in [5].", "This was pushed further to versions of the Magidor-Radin forcing with $o^{\\vec{U}}(\\kappa )< \\kappa $ in [4].", "The result for $o^{\\vec{U}}(\\kappa )<\\kappa $ splits into two parts.", "The first is to prove that for every $V$ -generic filter $G$ for the Magidor-Radin forcing and $A\\in V[G]$ , there is a subsequence of the generic club $C\\subseteq C_G$ such that $V[A]=V[C]$ .", "Thus, in order to analyse the intermediate models of $V[G]$ , it suffices to study models of the form $V[C]$ , where $C\\subseteq C_G$ .", "The second part is to show that each model of the form $V[C]$ is a $V$ -generic extension for a Magidor-Radin-like forcing.", "The main purpose of the present paper is to study sets in generic extensions of the version of Magidor-Radin forcing for $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ .", "It turns out that the first statement holds and every set in the extension is equivalent to a subsequence of a generic Magidor-Radin sequence.", "There are considerable additional difficulties here and new ideas are used to overcome them.", "However, we do not give here a classification for models of the form $V[C]$ .", "The major difference between the case $o^{\\vec{U}}(\\kappa )<\\kappa $ and $o^{\\vec{U}}(\\kappa )\\ge \\kappa $ is that we cannot split $\\mathbb {M}[\\vec{U}]$ into the part below $o^{\\vec{U}}(\\kappa )$ and the part above it.", "As proven in [4], this decomposition provided the ability to run over all possible extension types.", "In terms of $C_G$ this means that we cannot split $C_G$ below $\\kappa $ in a way that will determine which measures are used in the construction of $C_G$ .", "The classical example for such a sequence is $C_G(0),C_G(C_G(0)),C_G(C_G(C_G(0))),...$ in which every element in the sequence is taken from a measure which depends on the previous element in the sequence.", "This example suggests that some sort of tree construction is needed in order to refer to such sequences in the ground model.", "In the context of [4] and [5], we are working by induction on $\\kappa $ .", "Formally we prove the following inductive step: Theorem 1.1 Let $\\vec{U}$ be a coherent sequence with maximal measurable $\\kappa $ , such that $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ .", "Assume the inductive hypothesis: $(IH)$ For every $\\delta <\\kappa $ , any coherent sequence $\\vec{W}$ with maximal measurable $\\delta $ and any set $A\\in V[H]$ for a $V$ -generic filter $H\\subseteq \\mathbb {M}[\\vec{W}]$ , there is $C\\subseteq C_H$ such that $V[A]=V[C]$ .", "Then for every $V$ -generic filter $G\\subseteq \\mathbb {M}[\\vec{U}]$ and any set $A\\in V[G]$ , there is $C\\subseteq C_G$ such that $V[A]=V[C]$ .", "As a corollary of this, we obtain the main result 
of this paper: Theorem 1.2 Let $\\vec{U}$ be a coherent sequence such that $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ .", "Then for every $V$ -generic filter $G\\subseteq \\mathbb {M}[\\vec{U}]$ such that $\\forall \\alpha \\in C_G.\\ o^{\\vec{U}}(\\alpha )<\\alpha ^+$ and every $A\\in V[G]$ , there is $C\\subseteq C_G$ such that $V[A]=V[C]$ .", "In contrast to the case where $o^{\\vec{U}}(\\kappa )<\\kappa $ , we do not have a classification of exactly which subforcings generate the models $V[C^{\\prime }]$ .", "Let us give some examples of subforcings of $\\mathbb {M}[\\vec{U}]$ in the case of $o^{\\vec{U}}(\\kappa )=\\kappa $ .", "Example 1.3 Let $G$ be generic, let $C_G$ be the generic club added by $\\mathbb {M}[\\vec{U}]$ , and consider the increasing continuous enumeration of $C_G$ , $\\langle C_G(i)\\mid i<\\kappa \\rangle $ .", "Assume that $C_G(0)>0$ , and consider again the sequence $\\langle \\kappa _n\\mid n<\\omega \\rangle $ which is defined as follows: $\\kappa _0=C_G(0), \\ \\kappa _{n+1}=C_G(\\kappa _n).$ Consider the following tree of measures: $\\vec{W}=\\langle W_{\\vec{\\alpha }}\\mid \\vec{\\alpha }\\in [\\kappa ]^{<\\omega }\\rangle $ where $W_{\\vec{\\alpha }}=U(\\kappa ,{\\rm max}(\\vec{\\alpha }))$ .", "Note here that since $o^{\\vec{U}}(\\kappa )=\\kappa $ , this is well defined.", "It is not hard to check the Mathias criteria for the tree-Prikry forcing with $\\vec{W}$ , given in [3], to conclude that $\\langle \\kappa _n\\mid n<\\omega \\rangle $ is a tree-Prikry generic sequence with respect to $\\vec{W}$ ; the first steps of this recursion are unfolded at the end of this introduction.", "Note that, since the sequence of measures $\\langle U(\\kappa ,i)\\mid i<\\kappa \\rangle $ is a discrete family of normal measures, this tree-Prikry forcing falls under the framework of [13] and therefore the model $V[\\langle \\kappa _n\\mid n<\\omega \\rangle ]$ is minimal above $V$ .", "This phenomenon does not occur in generic extensions of $\\mathbb {M}[\\vec{U}]$ with $o^{\\vec{U}}(\\kappa )<\\kappa $ .", "Example 1.4 The previous example can be made more complex.", "Let $f:[\\kappa ]^{<\\omega }\\rightarrow \\kappa $ be any function.", "Then $\\langle \\alpha _n\\mid n<\\omega \\rangle $ is defined as follows: $\\alpha _0=C_G({\\langle }{\\rangle })$ and $\\alpha _{n+1}$ is obtained by applying $f$ to some finite $\\vec{C}_n\\in [C_G]^{<\\omega }$ , i.e.", "$\\alpha _{n+1}=C_G(f(\\vec{C}_n))$ .", "Another theorem proven in section 6 regards fresh sets in Prikry, Magidor, Magidor-Radin and Radin extensions.", "Theorem 1.5 Assume that $\\mathbb {P}$ is either Prikry, tree Prikry, Magidor, Magidor-Radin or Radin forcing.", "Let $G\\subseteq \\mathbb {P}$ be $V$ -generic.", "If $A\\in V[G]$ is a fresh set of ordinals with respect to $V$ , then $cf^{V[G]}({\\rm sup}(A))=\\omega $ .", "The paper is organized as follows: Section 2: Subsections $2.1,2.2$ consist of basic definitions and properties of the forcing.", "Then $2.3$ provides several general definitions and previous results.", "In subsection $2.4$ we develop the theory of fat trees.", "Section 3: We deal with the case of sets with cardinality less than $\\kappa $ .", "Section 4: The proof for subsets of $\\kappa $ is presented.", "Section 5: In $5.1$ an argument for general sets is given.", "In $5.2$ , we prove some general results about the quotient forcings of several Prikry-type forcings.", "Section 6: Is devoted to the proof of REF .", "Section 7: Presents further research directions and open questions related to this paper." 
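As promised in Example 1.3, we unfold the first steps of that recursion; this is only a restatement of the example in the notation just introduced, not an additional claim. Since $C_G(i)\\ge i$ for every $i$ , the sequence $\\langle \\kappa _n\\mid n<\\omega \\rangle $ is non-decreasing, hence for every $n<\\omega $ the point $\\kappa _{n+1}=C_G(\\kappa _n)$ enters through the measure $W_{{\\langle }\\kappa _0,...,\\kappa _n{\\rangle }}=U(\\kappa ,{\\rm max}({\\langle }\\kappa _0,...,\\kappa _n{\\rangle }))=U(\\kappa ,\\kappa _n).$ So the measure applied at stage $n+1$ is determined by the previous point $\\kappa _n$ alone, which is exactly the kind of dependence that no splitting of $C_G$ fixed in advance in the ground model can predict.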
], [ "Preliminaries", "Most of the basic definitions are identical to [4] and [7]." ], [ "Magidor forcing", "Let $\\vec{U}=\\langle U(\\alpha ,\\beta )\\mid \\alpha \\le \\kappa \\ ,\\beta <o^{\\vec{U}}(\\alpha )\\rangle $ be a coherent sequence.", "For every $\\alpha \\le \\kappa $ , denote $\\cap \\vec{U}(\\alpha )=\\underset{i<o^{\\vec{U}}(\\alpha )}{\\bigcap }U(\\alpha ,i)$ Definition 2.1 $\\mathbb {M}[\\vec{U}]$ consist of elements $p$ of the form $p=\\langle t_1,...,t_n,\\langle \\kappa ,B\\rangle \\rangle $ .", "For every $1\\le i\\le n $ , $t_i$ is either an ordinal $\\kappa _i$ if $ o^{\\vec{U}}(\\kappa _i)=0$ or a pair $\\langle \\kappa _i,B_i\\rangle $ if $o^{\\vec{U}}(\\kappa _i)>0$ .", "$B\\in \\cap \\vec{U}(\\kappa )$ , ${\\rm min}(B)>\\kappa _n$ .", "For every $1\\le i\\le n$ .", "$\\langle \\kappa _1,...,\\kappa _n\\rangle \\in [\\kappa ]^{<\\omega }$ (increasing finite sequence below $\\kappa $ ).", "$B_i\\in \\cap \\vec{U}(\\kappa _i)$ .", "${\\rm min}(B_i)>\\kappa _{i-1}$ $(i>1)$ .", "Definition 2.2 For $p=\\langle t_1,t_2,...,t_n,\\langle \\kappa ,B\\rangle \\rangle ,q=\\langle s_1,...,s_m,\\langle \\kappa ,C\\rangle \\rangle \\in \\mathbb {M}[\\vec{U}]$ , define $p \\le q$ ($q$ extends $p$ ) iff: $n \\le m$ .", "$B \\supseteq C$ .", "$\\exists 1 \\le i_1 <...<i_n \\le m$ such that for every $1 \\le j \\le m$ : If $\\exists 1\\le r\\le n$ such that $i_r=j$ then $\\kappa (t_r)=\\kappa ( s_{i_r})$ and $C(s_{i_r})\\subseteq B(t_r)$ .", "Otherwise $\\exists \\ 1 \\le r \\le n+1$ such that $ i_{r-1}<j<i_{r}$ then $\\kappa (s_j) \\in B(t_r)$ .", "$B(s_j)\\subseteq B(t_r)\\cap \\kappa (s_j)$ .", "We also use “p directly extends q\", $p \\le ^{*} q$ if: $p \\le q$ $n=m$ Let us add some notation, for a pair $t=\\langle \\alpha , X\\rangle $ we denote by $\\kappa (t)=\\alpha ,\\ B(t)=X$ .", "If $t=\\alpha $ is an ordinal then $\\kappa (t)=\\alpha $ and $B(t)=\\emptyset $ .", "For a condition $p=\\langle t_1,...,t_n,\\langle \\kappa ,B\\rangle \\rangle \\in \\mathbb {M}[\\vec{U}]$ we denote $n=l(p)$ , $p_i=t_i$ , $B_i(p)=B(t_i)$ and $\\kappa _i(p)=\\kappa (t_i)$ for any $1\\le i\\le l(p)$ , $t_{l(p)+1}=\\langle \\kappa ,B\\rangle $ , $t_0=0$ .", "Also denote $\\kappa (p)=\\lbrace \\kappa _i(p)\\mid i\\le l(p)\\rbrace \\text{ and }B(p)=\\bigcup _{i\\le l(p)+1}B_i(p)$ Remark 2.3 In [5],[4] we had another requirement in definition REF , that given a condition $p$ , if we would like to add an ordinal $\\alpha $ to the sequence in the interval $(\\kappa _{i-1}(p),\\kappa _i(p))$ then we needed to make sure that $o^{\\vec{U}}(\\alpha )<o^{\\vec{U}}(\\kappa _i(p))$ .", "This condition is not essential as any condition $p$ can be directly extended to a condition in the set $D=\\lbrace q\\in \\mathbb {M}[\\vec{U}]\\mid \\forall i\\le l(q)+1.", "\\forall \\alpha \\in B_i(q).", "o^{\\vec{U}}(\\alpha )<o^{\\vec{U}}(\\kappa _i(q))\\rbrace $ The order defined in REF on elements of $D$ automatically satisfy the extra requirement.", "For this reason we will point out along this section some points where this assumption changes properties of $\\mathbb {M}[\\vec{U}]$ .", "The major one, is in propositions REF ,REF .", "Definition 2.4 Let $p\\in \\mathbb {M}[\\vec{U}]$ .", "For every $ i\\le l(p)+1$ , $\\alpha \\in B_{i}(p)$ with $o^{\\vec{U}}(\\alpha )>0$ , and $B\\in \\cap \\vec{U}(\\alpha )$ , define $p^{\\frown }{\\langle }\\alpha ,B{\\rangle }=\\langle p_1,...,p_{i-1},\\langle \\alpha ,B_{i}(p)\\cap B\\rangle ,\\langle \\kappa _{i}(p),B_{i}(p)\\setminus (\\alpha +1)\\rangle 
,p_{i+1},...,p_{l(p)+1}\\rangle $ Also $p^{}{\\langle }\\alpha {\\rangle }=p^{}{\\langle }\\alpha ,\\alpha {\\rangle }$ .", "If $o^{\\vec{U}}(\\alpha )=0$ , define $p^{\\frown }\\langle \\alpha \\rangle =\\langle p_1,...,p_{i-1},\\alpha ,\\langle \\kappa _{i}(p),B_{i}(p)\\setminus (\\alpha +1)\\rangle ,...,p_{l(p)+1}\\rangle $ For $\\langle \\alpha _1,...,\\alpha _n\\rangle \\in [\\kappa ]^{<\\omega }$ and ${\\langle }B_1,...,B_n{\\rangle }$ , where $B_i\\in \\cap \\vec{U}(\\alpha _i)$ , define recursively, $p^{}{\\langle }{\\langle }\\alpha _1,...,\\alpha _n{\\rangle },{\\langle }B_1,...,B_n{\\rangle }{\\rangle }=(p^{}{\\langle }{\\langle }\\alpha _1,...,\\alpha _{n-1}{\\rangle },{\\langle }B_1,...,B_{n-1}{\\rangle }{\\rangle })^{}{\\langle }\\alpha _n,B_n{\\rangle }$ $p^{\\frown }\\langle \\alpha _1,...,\\alpha _n\\rangle =(p^{\\frown }\\langle \\alpha _1,...,\\alpha _{n-1}\\rangle )^{\\frown }\\langle \\alpha _n\\rangle $ For $\\vec{\\alpha }={\\langle }\\alpha _1,...,\\alpha _n{\\rangle }$ , denote by $|\\vec{\\alpha }|=n$ and $\\vec{\\alpha }(i)=\\alpha _i$ .", "If $I\\subseteq \\lbrace 1,..,n\\rbrace $ then $\\vec{\\alpha }\\upharpoonright I={\\langle }\\vec{\\alpha }(i_1),...,\\vec{\\alpha }(i_k){\\rangle }$ where $\\lbrace i_1,i_2,...,i_k\\rbrace $ is the increasing enumeration of $I$ .", "For $Y\\subseteq \\omega $ , $\\vec{\\alpha }\\upharpoonright Y=\\vec{\\alpha }\\upharpoonright (Y\\cap \\lbrace 1,..,n\\rbrace )$ .", "We will usually identify $\\vec{\\alpha }$ with the set $\\lbrace \\alpha _1,..,\\alpha _n\\rbrace $ .", "Note that if we add a pair of the form $\\langle \\alpha , B\\cap \\alpha \\rangle $ then in $B\\cap \\alpha $ there might be many ordinals which are irrelevant to the forcing and cannot be added.", "Namely, ordinals $\\beta $ such that $B\\cap \\beta \\notin \\cap \\vec{U}(\\beta )$ ,.", "Note that we no longer have to require $o^{\\vec{U}}(\\beta )\\ge o^{\\vec{U}}(\\alpha )$ .", "We can avoid such ordinals by shrinking the large sets.", "Proposition 2.5 Let $\\alpha \\le \\kappa $ , and $A\\in \\cap \\vec{U}(\\alpha )$ .", "Then there exists $A^*\\subseteq A$ such that: $A^*\\in \\cap \\vec{U}(\\alpha )$ For every $x\\in A^*$ , $A^*\\cap x\\in \\cap \\vec{U}(x)$ .", "Proof.", "For any $i<o^{\\vec{U}}(\\alpha )$ , $Ult(V,U(\\alpha ,j))\\models A=j_{U(\\alpha ,j)}(A)\\cap \\alpha \\in \\underset{i<j}{\\bigcap }U(\\alpha ,i)$ Coherency of the sequence imply that $A^{\\prime }:=\\lbrace \\alpha <\\kappa \\mid A\\cap \\alpha \\in \\cap \\vec{U}(\\alpha )\\rbrace \\in U(\\alpha ,j)$ , this is for every $j<o^{\\vec{U}}(\\alpha )$ .", "Define inductively $A^{(0)}=A$ , $A^{(n+1)}=A^{^{\\prime }(n)}$ .", "By definition, $\\forall \\alpha \\in A^{(n+1)}_j$ , $A^{(n)}\\cap \\alpha \\in \\cap \\vec{U}(\\alpha )$ .", "Define $A^*=\\underset{n<\\omega }{\\bigcap }A^{(n)}\\in \\cap \\vec{U}(\\kappa )$ , this set has the required property.", "$\\blacksquare $ The conditions $p^{}\\vec{\\alpha }$ and $p^{}{\\langle }\\vec{\\alpha },\\vec{B}{\\rangle }$ are minimal extensions of $p$ is a sense given in the following proposition.", "The proof of the proposition is a direct verification of REF ,REF .", "Proposition 2.6 Let $p\\in \\mathbb {M}[\\vec{U}]$ , $\\vec{\\alpha }\\in [\\kappa ]^{<\\omega }$ and $\\vec{B}={\\langle }B_1,...,B_n{\\rangle }$ , where $B_i\\in \\cap \\vec{U}(\\vec{\\alpha }(i))$ .", "$p^{}{\\langle }\\vec{\\alpha },\\vec{B}{\\rangle }\\in \\mathbb {M}[\\vec{U}]\\leftrightarrow \\forall i\\le |\\vec{\\alpha }|.", "\\ \\exists j\\le l(p)$ such that 
$\\vec{\\alpha }(i)\\in (\\kappa _j(p),\\kappa _{j+1}(p))$ and $ B_{j+1}(p)\\cap \\vec{\\alpha }(i)\\in \\cap \\vec{U}(\\vec{\\alpha }(i))$ Assume $p^{\\frown }{\\langle }\\vec{\\alpha },\\vec{B}{\\rangle }\\in \\mathbb {M}[\\vec{U}]$ then $\\forall p\\le q$ , if $\\kappa (p)\\cup \\vec{\\alpha }\\subseteq \\kappa (q)$ .", "$B(p)\\cup (\\uplus _{1\\le i\\le n} B_i)\\subseteq B(q)$ .", "For $j\\le l(q)$ , if there is $r$ such that ${\\rm max}\\lbrace \\kappa _i(p),\\vec{\\alpha }(r-1)\\rbrace <\\kappa _j(q)<\\vec{\\alpha }(r)<\\kappa _{i+1}(p)$ , then $\\kappa _j(q)\\in B_r$ .", "$p^{\\frown }{\\langle }\\vec{\\alpha },\\vec{B}{\\rangle }\\le q$ .", "This proposition also gives the criteria for $p^{}\\vec{\\alpha }\\in \\mathbb {M}[\\vec{U}]$ , and the minimality of the extension $p^{}\\vec{\\alpha }$ : for every $p\\le q$ , if $\\kappa (q)\\cup \\vec{\\alpha }\\subseteq \\kappa (q)$ then $p^{}\\vec{\\alpha }\\le q$ .", "Definition 2.7 Let $p\\in \\mathbb {M}[\\vec{U}]$ , $\\alpha <\\kappa $ and let $i\\le l(p)$ be such that $\\alpha \\in [\\kappa _{i}(p),\\kappa _{i+1}(p))$ $p\\upharpoonright \\alpha = \\langle p_1,...,p_i\\rangle \\ and \\ p\\upharpoonright (\\alpha ,\\kappa )=\\langle p_{i+1},...,p_{l(p)+1}\\rangle $ Also, for $\\lambda $ with $o^{\\vec{U}}(\\lambda )>0$ define $\\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda =\\Big \\lbrace p\\upharpoonright \\lambda \\mid p\\in \\mathbb {M}[\\vec{U}]\\ and \\ \\lambda \\ apears \\ in \\ p\\Big \\rbrace $ $\\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )=\\lbrace p\\upharpoonright (\\lambda ,\\kappa )\\mid p\\in \\mathbb {M}[\\vec{U}]\\ and \\ \\lambda \\ apears \\ in \\ p\\rbrace $ Note that $\\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda $ is just Magidor forcing on $\\lambda $ and $\\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ is a subset of $\\mathbb {M}[\\vec{U}]$ which generates a Magidor club in the interval $(\\lambda ,\\kappa )$ .", "Remark 2.8 Let $\\lambda <\\kappa $ which $o^{\\vec{U}}(\\lambda )>0$ and let $p\\in \\mathbb {M}[\\vec{U}]$ be such that $\\lambda $ appears in $p$ .", "Let $H\\subseteq \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda $ with $p\\upharpoonright \\lambda \\in H$ , then in $V[H]$ we added new (bounded) subsets of $\\kappa $ , hence $\\vec{U}$ is no longer a sequence of ultrafilter.", "However, for the relevant interval $(\\lambda ,\\kappa )$ , $\\vec{U}\\upharpoonright (\\lambda ,\\kappa )$ generates a coherent sequence of ultrafilter $\\vec{W}$ and formally we force with $\\mathbb {M}[\\vec{W}]$ .", "Note that the ground model forcing $\\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ is dense in $\\mathbb {M}[\\vec{W}]$ , hence can simply force with $\\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ over $V[H]$ to complete to a generic extension of $\\mathbb {M}[\\vec{U}]$ .", "The following propositions can be found in [4]: Proposition 2.9 Let $p\\in \\mathbb {M}[\\vec{U}]$ and $\\langle \\lambda ,B\\rangle $ a pair in $p$ .", "Then $\\mathbb {M}[\\vec{U}]/p\\simeq \\Big (\\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda \\Big )/\\Big (p\\upharpoonright \\lambda \\Big )\\times \\Big (\\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )\\Big )/\\Big (p\\upharpoonright (\\lambda ,\\kappa )\\Big )$ Proposition 2.10 Let $p\\in \\mathbb {M}[\\vec{U}]$ and $\\langle \\lambda ,B\\rangle $ a pair in $p$ .", "Then the order $\\le ^*$ in the forcing $\\Big (\\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )\\Big )/\\Big (p\\upharpoonright 
(\\lambda ,\\kappa )\\Big )$ is $\\delta $ -directed where $\\delta ={\\rm min}(\\nu >\\lambda \\mid o^{\\vec{U}}(\\nu )>0)$ .", "Meaning that for every $X\\subseteq \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ such that $|X|<\\delta $ and for every $q\\in X, \\ p\\le ^* q$ , there is an $\\le ^*$ -upper bound for $X$ .", "Lemma 2.11 $\\mathbb {M}[\\vec{U}]$ satisfies $k^+$ -c.c.", "The following lemma is the well known Prikry condition: Lemma 2.12 $\\mathbb {M}[\\vec{U}]$ satisfy the Prikry condition i.e.", "for any statement in the forcing language $\\sigma $ and any $p\\in \\mathbb {M}[\\vec{U}]$ there is $p\\le ^*p^*$ such that $p^*||\\sigma $ i.e.", "either $p^*\\Vdash \\sigma $ or $p\\Vdash \\lnot \\sigma $ .", "The next lemma can be found in [14] and the proof in [4]: Lemma 2.13 Let $G\\subseteq \\mathbb {M}[\\vec{U}]$ be generic and suppose that $A\\in V[G]$ is such that $A\\subseteq V_\\alpha $ .", "Let $p\\in G$ and ${\\langle }\\lambda ,B{\\rangle }$ a pair in $p$ such that $\\alpha <\\lambda $ , then $A\\in V[G\\upharpoonright \\lambda ]$ .", "Corollary 2.14 $\\mathbb {M}[\\vec{U}]$ preserves all cardinals.", "Definition 2.15 Let $G\\subseteq \\mathbb {M}[\\vec{U}]$ be generic, define the Magidor club $C_{G}=\\lbrace \\nu \\mid \\exists \\ A\\exists p\\in G \\ s.t.", "\\ \\langle \\nu ,A\\rangle \\in p\\rbrace $ We will abuse notation by sometimes considering $C_G$ as a the canonical enumeration of the set $C_G$ .", "The set $C_{G}$ is closed and unbounded in $\\kappa $ , therefore, the order type of $C_{G}$ determines the cofinality of $\\kappa $ in $V[G]$ .", "The next propositions can be found in [7].", "Proposition 2.16 Let $G\\subseteq \\mathbb {M}[\\vec{U}]$ be generic.", "Then $G$ can be reconstructed from $C_{G}$ as follows $ G=\\lbrace p\\in \\mathbb {M}[\\vec{U}]\\mid (\\kappa (p)\\subseteq C_{G}) \\wedge (C_{G}\\setminus \\kappa (p)\\subseteq B(p))\\rbrace $ In particular $V[G]=V[C_{G}]$ .", "Proposition 2.17 Let $G\\subseteq \\mathbb {M}[\\vec{U}]$ be generic.", "$C_G$ is a club at $\\kappa $ .", "For every $\\delta \\in C_G$ , $o^{\\vec{U}}(\\delta )>0$ iff $\\delta \\in Lim(C_G)$The set of limit points of $X\\subseteq \\kappa $ is $Lim(X):=\\lbrace \\alpha \\mid {\\rm sup}\\ \\alpha \\cap X)=\\alpha \\rbrace \\subseteq \\kappa +1$.", "For every $\\delta \\in Lim(C_G)$ , and every $A\\in \\cap \\vec{U}(\\delta )$ , there is $\\xi <\\delta $ such that $C_G\\cap (\\xi ,\\delta )\\subseteq A$ .", "If ${\\langle }\\delta _i\\mid i<\\theta {\\rangle }$ is an increasing sequence of elements of $C_G$ , let $\\delta ^*={\\rm sup}_{i<\\theta }\\delta _i$ , then $o^{\\vec{U}}(\\delta ^*)\\ge \\limsup _{i<\\theta }o^{\\vec{U}}(\\delta _i)+1$ .For a sequence of ordinals ${\\langle }\\rho _j\\mid j<\\gamma {\\rangle }$ , $\\limsup _{j<\\gamma }\\rho _j={\\rm min}({\\rm sup}_{i<j<\\gamma }\\rho _j\\mid i<\\gamma )$ .", "Let $\\delta \\in Lim(C_G)$ and let $A$ be a positive set, $A\\in (\\cap \\vec{U}(\\delta ))^+$ .", "i.e.", "$\\delta \\setminus A\\notin \\cap \\vec{U}(\\delta )$ .", "Equivalently, if there is some $i<o^{\\vec{U}}(\\delta )$ such that $A\\in U(\\delta ,i)$ .", "Then, ${\\rm sup}(A\\cap C_G)=\\delta $ .", "If $A\\subseteq V_\\alpha $ , then $A\\in V[C_G\\cap \\lambda ]$ , where $\\lambda ={\\rm max}(Lim(C_G)\\cap \\alpha +1)$ .", "For every $V$ -regular cardinal $\\alpha $ , if $cf^{V[G]}(\\alpha )<\\alpha $ then $\\alpha \\in Lim(C_G)$ .", "Proof.", "The proof of $(1),(2),(3),(5),(6),(7)$ can be found in [7] and does not use the special property 
of REF .", "To see $(4)$ , use closure of $C_G$ , to find $q\\in G$ such that $\\delta ^*$ appears in $q$ .", "Clearly, $A:=\\lbrace \\alpha <\\delta ^*\\mid o^{\\vec{U}}(\\alpha )<o^{\\vec{U}}(\\delta )\\rbrace \\in \\cap \\vec{U}(\\delta ^*)$ , thus by $(3)$ , there is $\\xi <\\delta ^*$ such that $C_G\\cap (\\xi ,\\delta ^*)\\subseteq A$ .", "Let $i<\\theta $ be such that for every $j>i$ , $o^{\\vec{U}}(\\delta _j)<o^{\\vec{U}}(\\delta ^*)$ .", "By definition of $limsup$ , $\\limsup _{j<\\theta }o^{\\vec{U}}(\\delta _j)+1\\le {\\rm sup}_{i<j<\\theta }o^{\\vec{U}}(\\delta _j)+1\\le o^{\\vec{U}}(\\delta ^*)$ $\\blacksquare $ Proposition 2.18 Let $G\\subseteq \\mathbb {M}[\\vec{U}]$ be $V$ -generic filter and $C_{G}$ the corresponding Magidor sequence.", "Let $p\\in G$ , then for every $i\\le l(p)+1$ If $o^{\\vec{U}}(\\kappa _i(p))\\le \\kappa _i(p)$ , and $\\forall \\alpha \\in B_i(p)$ , $o^{\\vec{U}}(\\alpha )<o^{\\vec{U}}(\\kappa _i(p))$ , then ${\\rm otp}( [\\kappa _{i-1}(p),\\kappa _i(p))\\cap C_{G} )=\\omega ^{o^{\\vec{U}}(\\kappa _i(p))}$ If $o^{\\vec{U}}(\\kappa _i(p))\\ge \\kappa _i(p)$ , then ${\\rm otp}( [\\kappa _{i-1}(p),\\kappa _i(p))\\cap C_{G} )=\\kappa _i(p)$ Proof.", "The same as in [4], replacing the usage of definition REF with the assumption the $\\forall \\alpha \\in B_i(p)$ , $o^{\\vec{U}}(\\alpha )<o^{\\vec{U}}(\\kappa _i(p))$ .$\\blacksquare $ Proposition REF suggest a connection between the index in $C_G$ of ordinals appearing in $p$ and Cantor normal form.", "Definition 2.19 Let $p\\in G$ .", "For each $i\\le l(p)$ define $\\gamma _i(p)=\\sum _{j=1}^{i}\\omega ^{o^{\\vec{U}}(\\kappa _j(p))}$ Corollary 2.20 Let G be $\\mathbb {M}[\\vec{U}]$ -generic and $C_{G}$ the corresponding Magidor sequence.", "Let $p\\in G$ , such that for every $1\\le i\\le l(p)$ , and every $\\alpha \\in B_i(p)$ , $o^{\\vec{U}}(\\alpha )<o^{\\vec{U}}(\\kappa _i(p))$ , then $p\\Vdash \\underaccent{\\sim }{C}_G(\\gamma _i(p))=\\kappa (t_i)$ For more details and basic properties of Magidor forcing see [14],[7], [5] or [4]." 
], [ "Magidor forcing with $o(\\kappa )<\\kappa ^+$", "When we assume $o^{\\vec{U}}(\\kappa )<\\kappa $ , the measure $U(\\kappa ,\\xi )$ concentrates on measurable $\\alpha $ with $o^{\\vec{U}}(\\alpha )=\\xi $ , which is a canonical discrete family for those measures.", "In our more general situation, $ o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , we can still separate the measures but the decomposition is not canonical.", "More precisely, for every $\\alpha \\le \\kappa $ , we would like to have sets which witness the fact that the sequence of ultrafilters ${\\langle }U(\\alpha ,\\beta )\\mid \\beta <o^{\\vec{U}}(\\alpha ){\\rangle }$ is discrete.", "Proposition 2.21 Assume $o^{\\vec{U}}(\\alpha )<\\alpha ^+$ , then there are pairwise disjoint sets $\\langle X^{(\\alpha )}_i\\mid i<o^{\\vec{U}}(\\alpha )\\rangle $ such that $X^{(\\alpha )}_i\\in U(\\alpha ,i)$ .", "Proof.", "By assumption, $|o^{\\vec{U}}(\\alpha )|\\le \\alpha $ .", "Enumerate the measures $\\lbrace U(\\alpha ,i)\\mid i<o^{\\vec{U}}(\\alpha )\\rbrace =\\lbrace W_j\\mid j<\\rho \\rbrace $ where $\\rho \\le \\alpha $ .", "For every $i\\ne j$ below $\\rho $ , find $Y_i\\in W_i\\setminus W_j$ .", "By normality $Y_i=\\Delta _{j<\\rho } W_{i,j}\\in W$ .", "Also, for $j\\ne i$ , $Y_i\\notin W_j$ since $Y_i\\subseteq Y_{i,j}\\cup j\\notin W_j$ .", "Set $Z_i=Y_i\\setminus (\\cup _{j<i}Y_j)$ , then $Z_i\\in W_i$ and ${\\langle }Z_i\\mid i<\\rho {\\rangle }$ are pairwise disjoint.", "Finally, define $X^{(\\alpha )}_\\xi =Z_i$ where $\\xi <o^{\\vec{U}}(\\kappa )$ is such that $W_i=U(\\alpha ,\\xi )$ .$\\blacksquare $ .", "Definition 2.22 Let $\\alpha \\le \\kappa $ .", "For $o^{\\vec{U}}(\\alpha )\\le \\alpha $ define for every $i<o^{\\vec{U}}(\\alpha )$ $X^{(\\alpha )}_i=\\lbrace x<\\alpha \\mid i=o^{\\vec{U}}(x)\\rbrace \\in U(\\alpha ,i)$ For $\\alpha < o^{\\vec{U}}(\\alpha )<\\alpha ^+$ fix a decomposition of $\\alpha $ , ${\\langle }X^{(\\alpha )}_i\\mid i<o^{\\vec{U}}(\\alpha ){\\rangle }$ guaranteed by the previous propositions such that $X^{(\\alpha )}_i\\in U(\\alpha ,i)$ .", "For $\\beta <\\alpha $ denote by $o^{(\\alpha )}(\\beta )=\\xi $ for the unique $\\xi <o^{\\vec{U}}(\\alpha )$ such that $\\beta \\in X^{(\\alpha )}_\\xi $ .", "Also let $o^{(\\alpha )}(\\alpha )=o^{\\vec{U}}(\\alpha )$ .", "Note that if $o^{\\vec{U}}(\\alpha )\\le \\alpha $ then $o^{(\\alpha )}(\\beta )=o^{\\vec{U}}(\\beta )$ .", "Proposition 2.23 For every $V$ -generic $G\\subseteq \\mathbb {M}[\\vec{U}]$ and for every $\\kappa _0\\in Lim(C_G)$ (Recall that $\\kappa \\in Lim(C_G)$ ) such that $o^{\\vec{U}}(\\kappa _0)<\\kappa _0^+$ , there is $\\xi <\\kappa _0$ such that and every $\\alpha \\in Lim(C_G)\\cap (\\xi ,\\kappa _0]$ $o^{(\\kappa _0)}(\\alpha )\\ge limsup(o^{(\\kappa _0)}(\\beta )+1\\mid \\beta \\in C_G\\cap \\alpha )$ In other words, there is $\\xi _\\alpha <\\alpha $ such that for every $\\beta \\in C_G\\cap (\\xi _\\alpha ,\\alpha )$ , $o^{(\\kappa _0)}(\\beta )<o^{(\\kappa _0)}(\\alpha )$ .", "Proof.", "If $o^{\\vec{U}}(\\kappa _0)<\\kappa _0$ , then $o^{(\\kappa _0)}(\\alpha )=o^{\\vec{U}}(\\alpha )$ and the proposition follows from REF .4.", "Also if $\\alpha =\\kappa _0$ , then clearly for every $\\beta <\\kappa _0$ , $o^{(\\kappa _0)}(\\beta )<o^{\\vec{U}}(\\kappa _0)$ by definition.", "Assume that $\\kappa _0\\le o^{\\vec{U}}(\\kappa _0)<\\kappa _0^+$ and let $\\pi :\\kappa _0\\longleftrightarrow o^{\\vec{U}}(\\kappa _0)$ be a bijection.", "For every $\\rho <o^{\\vec{U}}(\\kappa _0)$ denote by $E_\\rho =\\pi ^{-1^{\\prime \\prime 
}}\\rho \\subseteq \\kappa _0$ and for every $\\alpha <\\kappa _0$ define $Y_\\alpha =X^{(\\kappa _0)}_{\\pi (\\alpha )}$ .", "In $M_{U(\\kappa _0,\\rho )}$ , define $j_{U(\\kappa _0,\\rho )}({\\langle }Y_\\alpha \\mid \\alpha <\\kappa _0{\\rangle })={\\langle }Y^{\\prime }_\\alpha \\mid \\alpha <j_{U(\\kappa _0,\\rho )}(\\kappa _0){\\rangle }$ Since $crit(j_{U(\\kappa _0,\\rho )})=\\kappa _0$ , for $\\alpha <\\kappa _0$ , $Y^{\\prime }_\\alpha =j_{U(\\kappa _0,\\rho )}(Y_\\alpha )$ .", "Moreover, $j_{U(\\kappa _0,\\rho )}(E_{\\rho })\\cap \\kappa _0=E_{\\rho }$ and $j_{U(\\kappa _0,\\rho )}(Y_\\alpha )\\cap \\kappa _0= Y_\\alpha $ .", "Hence $\\cup _{\\alpha \\in j_{U(\\kappa _0,\\rho )}(E_{\\rho })\\cap \\kappa } Y^{\\prime }_\\alpha \\cap \\kappa _0=\\cup _{\\alpha \\in E_\\rho }j_{U(\\kappa _0,\\rho )}(Y_\\alpha )\\cap \\kappa _0=\\cup _{\\alpha \\in E_\\rho }Y_\\alpha =\\cup _{\\xi <\\rho }X^{(\\kappa _0)}_\\xi $ By coherency, $\\cap _{\\xi <\\rho }U(\\kappa _0,\\xi )=\\cap j_{U(\\kappa _0,\\rho )}(\\vec{U})(\\kappa _0)$ , thus $(*) \\ \\ \\ \\ \\ M_{U(\\kappa _0,\\rho )}\\models \\cup _{\\alpha \\in j_{U(\\kappa _0,\\rho )}(E_{\\rho })\\cap \\kappa _0} Y^{\\prime }_\\alpha \\cap \\kappa _0\\in \\cap j_{U(\\kappa _0,\\rho )}(\\vec{U})(\\kappa _0)$ Reflecting $(*)$ we get $X^{\\prime }_\\rho =\\Big \\lbrace \\beta \\in X^{(\\kappa _0)}_\\rho \\mid \\cup _{\\alpha \\in E_{\\rho }\\cap \\beta } Y_\\alpha \\cap \\beta \\in \\cap \\vec{U}(\\beta ) \\Big \\rbrace \\in U(\\kappa _0,\\rho )$ Now let $\\xi <\\kappa _0$ be such that $C_G\\cap (\\xi ,\\kappa _0)\\subseteq \\cup _{\\rho <o^{\\vec{U}}(\\kappa _0)}X^{\\prime }_{\\rho }$ .", "and let $\\alpha \\in Lim(C_G)\\cap (\\xi ,\\kappa _0)$ .", "Denote by $o^{(\\kappa _0)}(\\alpha )=\\rho $ , and since $X^{(\\kappa _0)}_i$ are pairwise disjoint, $\\alpha \\in X^{\\prime }_{\\rho }$ .", "By definition of $X^{\\prime }_{\\rho }$ , $\\cup _{i\\in E_{\\rho }\\cap \\alpha } Y_i\\cap \\alpha \\in \\cap \\vec{U}(\\alpha )\\text{ and } \\forall i\\in E_\\rho \\cap \\alpha .", "Y_i\\cap \\beta \\in (\\cap \\vec{U}(\\alpha ))^+$ By REF .3 there is $\\xi _\\alpha <\\alpha $ such that $C_G\\cap (\\xi _\\alpha ,\\alpha )\\subseteq \\cup _{i\\in E_{\\rho }\\cap \\alpha } Y_i\\cap \\alpha $ .", "In particular, for every $\\beta \\in C_G\\cap (\\xi _\\alpha ,\\alpha )$ , there is $i\\in E_{\\rho }\\cap \\alpha $ such that $\\beta \\in Y_i=X^{(\\kappa _0)}_{\\pi (i)}$ .", "Since $i\\in E_\\rho $ , $\\pi (i)<\\rho $ so $o^{(\\kappa _0)}(\\beta )<\\rho =o^{(\\kappa _0)}(\\alpha )$ , hence $limsup_{\\beta \\in C_G\\cap \\alpha }(o^{(\\kappa _0)}(\\beta )+1)\\le o^{(\\kappa _0)}(\\alpha )$ .", "$\\blacksquare $ Corollary 2.24 For every $V$ -generic $G\\subseteq \\mathbb {M}[\\vec{U}]$ and for every $\\kappa _0\\in Lim(C_G)$ with $o^{\\vec{U}}(\\kappa _0)<\\kappa _0^+$ there is $\\eta <\\kappa _0$ such that for every $\\alpha \\in Lim(C_G)\\cap (\\eta ,\\kappa _0]$ the following holds: If $o^{(\\kappa _0)}(\\alpha )=\\beta +1$ is a successor ordinal, then there is $\\xi <\\alpha $ such that ${\\rm otp}(C_G\\cap X^{(\\kappa _0)}_{\\beta }\\cap (\\xi ,\\alpha ))=\\omega $ , hence $cf^{V[G]}(\\alpha )=\\omega $ .", "If $cf^V(o^{(\\kappa _0)}(\\alpha ))=\\lambda <\\kappa _0$ , then $\\lambda <\\alpha $ and let ${\\langle }\\rho _i\\mid i<\\lambda {\\rangle }$ be cofinal in $o^{(\\kappa _0)}(\\alpha )$ , then there is $\\xi <\\alpha $ such that the sequence $x_i={\\rm min}(C_G\\cap X^{(\\kappa )}_{\\rho _i}\\setminus \\xi )$ is increasing, and unbounded in $\\alpha $ , 
hence $cf^{V[G]}(\\alpha )=cf^{V[G]}(\\lambda )$ .", "Assume that $cf^V(o^{\\kappa _0}(\\alpha ))=\\kappa $ , and let ${\\langle }\\rho _i\\mid i<\\kappa {\\rangle }$ be cofinal in $o^{\\vec{U}}(\\alpha )$ , then there is $\\xi <\\alpha $ such that the sequence $x_0={\\rm min}(C_G\\cap (\\xi ,\\kappa _0))$ and $x_{n+1}={\\rm min}( C_G\\cap X^{(\\kappa )}_{\\rho _{x_n}})$ is increasing and cofinal in $\\alpha $ , hence $cf^{V[G]}(\\alpha )=\\omega $ .", "Proof.", "For each successor $\\rho =\\beta +1$ consider the set $S_\\rho =\\lbrace \\alpha \\in X^{(\\kappa _0)}_\\rho \\mid X^{(\\kappa _0)}_\\beta \\cap \\alpha \\in (\\cap \\vec{U}(\\alpha ))^+\\rbrace $ Since $j_{U(\\kappa _0,\\rho )}(X^{(\\kappa _0)}_\\beta )\\cap \\kappa _0=X^{(\\kappa _0)}_\\beta \\in U(\\kappa _0,\\beta )$ , then by coherency, $j_{U(\\kappa _0,\\rho )}(X^{(\\kappa _0)}_\\beta )\\cap \\kappa _0\\in \\cap j_{U(\\kappa _0,\\rho )}(\\vec{U})(\\kappa _0))^+$ .", "By elementarity, $\\kappa _0\\in j_{U(\\kappa _0,\\rho )}(S_\\rho )$ , hence $S_{\\rho }\\in U(\\kappa ,\\rho )$ .", "For $\\rho $ such that $cf^{V}(\\rho )=:\\lambda <\\kappa _0$ , fix a cofinal sequence ${\\langle }\\rho _i\\mid i<\\lambda {\\rangle }\\in V$ .", "Consider the set $S^{\\prime }_{\\rho }=\\lbrace \\alpha \\in X^{(\\kappa _0)}_\\rho \\mid \\forall i<\\lambda .", "X^{(\\kappa _0)}_{\\rho _i}\\cap \\alpha \\in (\\cap \\vec{U}(\\alpha ))^+\\rbrace $ Also $S^{\\prime }_\\rho \\in U(\\kappa _0,\\rho )$ .", "Indeed, since $\\lambda <\\kappa _0$ $j_{U(\\kappa _0,\\rho )}({\\langle }X^{(\\kappa _0)}_{\\rho _i}\\mid i<\\lambda {\\rangle })={\\langle }j_{U(\\kappa _0,\\rho )}(X^{(\\kappa _0)}_{\\rho _i})\\mid i<\\lambda {\\rangle }$ As before, for every $i<\\lambda $ it follow that $j_{U(\\kappa _0,\\rho )}(X^{(\\kappa _0)}_{\\rho _i})\\cap \\kappa _0\\in (\\cap j_{U(\\kappa _0,\\rho )}(\\vec{U})(\\kappa _0))^+$ , thus $S^{\\prime }_\\rho \\in U(\\kappa _0,\\rho )$ .", "We shrink $S^{\\prime }_{\\rho }$ a bit more, consider $S_{\\rho }=\\Big \\lbrace \\alpha \\in S^{\\prime }_{\\rho }\\mid \\lbrace \\beta <\\alpha \\mid \\forall i<\\lambda .", "\\beta \\in X^{(\\kappa _0)}_{\\rho _i}\\rightarrow \\forall j<i.X^{(\\kappa _0)}_{\\rho _j}\\cap \\beta \\in (\\cap \\vec{U}(\\beta ))^+\\rbrace \\in \\cap \\vec{U}(\\alpha )\\Big \\rbrace $ To see that $S_\\rho \\in U(\\kappa _0,\\rho )$ , for every $i<\\lambda $ consider the set $E_{\\rho _i}=\\lbrace \\beta \\in X^{(\\kappa _0)}_{\\rho _i}\\mid \\forall j<i.X^{(\\kappa _0)}_{\\rho _j}\\cap \\beta \\in (\\cap \\vec{U}(\\beta ))^+\\rbrace $ In $M_{U(\\kappa _0,\\rho _i)}$ , for every $j<i$ , $j_{U(\\kappa _0,\\rho _i)}(X^{(\\kappa _0)}_{\\rho _j})\\cap \\kappa _0\\in (\\cap j_{U(\\kappa _0,\\rho _i)}(\\vec{U})(\\kappa _0))^+$ it follows that $E_{\\rho _i}\\in U(\\kappa _0,\\rho _i)$ .", "For $y\\in \\rho \\setminus \\lbrace \\rho _i\\mid i<\\lambda \\rbrace $ , set $E_{y}=X^{(\\kappa _0)}_y$ .", "Then $E:=\\cup _{y<\\rho }E_y\\in \\cap _{y<\\rho }U(\\kappa _0,y)$ .", "The set $E$ has the property that for every $\\beta \\in E$ , if $\\beta \\in X^{(\\kappa )}_{\\rho _i}$ for some $i<\\lambda $ , then $\\beta \\in E_{\\rho _i}$ and therefore $\\forall j<i.X^{(\\kappa _0)}_{\\rho _j}\\cap \\beta \\in (\\cap \\vec{U}(\\beta ))^+$ .", "In $M_{U(\\kappa _0,\\rho )}$ , by coherency $o^{j_{U(\\kappa _0,\\rho )}(\\vec{U})}(\\kappa _0)=\\rho $ and for every $\\beta <\\kappa _0$ , $\\cap j_{U(\\kappa _0,\\rho )}(\\vec{U})(\\beta )=\\cap \\vec{U}(\\beta )$ .", "Also $E\\in M_{U(\\kappa _0,\\rho )}$ (by $\\kappa $ -closure) 
and $E\\in \\cap j_{U(\\kappa ,\\rho )}(\\vec{U})(\\kappa _0)$ .", "Denote by $X^{\\prime }_i=j_{U(\\kappa _0,\\rho )}(X^{(\\kappa _0)}_{\\rho _i})$ , then for every $\\beta \\le \\kappa _0$ , $X^{\\prime }_i\\cap \\beta = X^{(\\kappa _0)}_{\\rho _i}\\cap \\beta $ .", "It follows that $M_{U(\\kappa _0,\\rho )}\\models \\lbrace \\beta <\\kappa _0\\mid \\forall i<\\lambda .", "\\beta \\in X^{\\prime }_{i}\\rightarrow \\forall j<i.X^{\\prime }_j\\cap \\beta \\in (\\cap j_{U(\\kappa _0,\\rho )}(\\vec{U})(\\beta ))^+\\rbrace \\in \\cap j_{U(\\kappa _0,\\rho )}(\\vec{U})(\\kappa _0) $ Reflecting this, we get that $S_\\rho \\in U(\\kappa _0,\\rho )$ .", "If $cf^V(\\rho )=\\kappa _0$ , fix a continuous cofinal sequence ${\\langle }\\rho _i\\mid i<\\kappa _0{\\rangle }\\in V$ , consider $S^{\\prime }_\\rho =\\lbrace \\alpha \\in X^{(\\kappa _0)}_\\rho \\mid \\forall i<\\alpha .", "X^{(\\kappa _0)}_{\\rho _i}\\cap \\alpha \\in (\\cap \\vec{U}(\\alpha ))^+\\rbrace $ Then as before $S^{\\prime }_\\rho \\in U(\\kappa _0,\\rho )$ .", "Next, consider $S_\\rho =\\Big \\lbrace \\alpha \\in S^{\\prime }_{\\rho }\\mid \\lbrace \\beta <\\alpha \\mid \\exists \\zeta <\\beta .\\cup _{i<\\rho _{\\zeta }}X^{(\\kappa _0)}_i\\cap \\beta \\in \\cap \\vec{U}(\\beta )\\rbrace \\in \\cap \\vec{U}(\\alpha )\\Big \\rbrace $ To see that $S_{\\rho }\\in U(\\kappa _0,\\rho )$ , let $\\xi <\\rho $ , find $\\zeta <\\kappa _0$ be such that $\\rho _\\zeta >\\xi $ .", "Denote $j_{U(\\kappa _0,\\xi )}({\\langle }X^{(\\kappa _0)}_i\\mid i<o^{\\vec{U}}(\\kappa _0){\\rangle })=\\langle X^{\\prime }_i\\mid i<o^{j(\\vec{U})}(j_{U(\\kappa _0,\\xi )}(\\kappa _0)){\\rangle }, \\ j_{U(\\kappa _0,\\xi )}({\\langle }\\rho _i\\mid i<\\kappa _0{\\rangle })={\\langle }\\rho ^{\\prime }_i\\mid i<j_{U(\\kappa _0,\\xi )}(\\kappa _0){\\rangle }$ then $\\rho _\\zeta \\le j_{U(\\kappa _0,\\xi )}(\\rho _\\zeta )=\\rho ^{\\prime }_{\\zeta }$ .", "If follows that $\\cup _{i<\\rho ^{\\prime }_{\\zeta }}X^{\\prime }_i\\cap \\kappa _0\\in \\cap _{i<\\xi }U(\\kappa _0,i)=\\cap j_{U(\\kappa _0,\\xi )}(\\vec{U})(\\kappa _0)$ .", "To see this, note that for every $y<\\xi $ , $j_{U(\\kappa _0,\\xi )}(y)<j_{U(\\kappa _0,\\xi )}(\\rho _{\\zeta })=\\rho ^{\\prime }_{\\zeta }$ , hence $X^{(\\kappa _0)}_y=j_{U(\\kappa _0,\\xi )}(X^{(\\kappa _0)}_y)\\cap \\kappa _0=X^{\\prime }_{j_{U(\\kappa _0,\\xi )}(y)}\\cap \\kappa _0\\subseteq \\cup _{i<\\rho ^{\\prime }_{\\zeta }}X^{\\prime }_i\\cap \\kappa _0$ This means that in $M_{U(\\kappa _0,\\xi )}$ , $\\exists \\zeta <\\kappa _0.", "\\ \\cup _{i<\\rho ^{\\prime }_\\zeta }X^{\\prime }_i\\cap \\kappa _0\\in \\cap j_{U(\\kappa _0,\\xi )}(\\vec{U})(\\kappa _0)$ Reflecting this, we get that for every $\\xi <\\rho $ , $\\lbrace \\beta <\\kappa _0\\mid \\exists \\zeta <\\beta .\\cup _{i<\\rho _\\zeta }X^{(\\kappa _0)}_i\\cap \\beta \\in \\cap \\vec{U}(\\beta )\\rbrace \\in U(\\kappa _0,\\xi )$ Now in $M_{U(\\kappa _0,\\rho )}$ using coherency it follows that $\\lbrace \\beta <\\kappa _0\\mid \\exists \\zeta <\\beta .\\cup _{i<\\rho _\\zeta }X^{(\\kappa _0)}_i\\cap \\beta \\in \\cap \\vec{U}(\\beta )\\rbrace \\in \\cap _{\\xi <\\rho }U(\\kappa _0,\\xi )=\\cap j_{U(\\kappa _0,\\rho )}(\\vec{U})(\\kappa _0)$ Finally, reflect this to conclude that $S_\\rho \\in U(\\kappa _0,\\rho )$ .", "By REF .3 there is $\\eta ^{\\prime }$ such that $C_G\\cap (\\eta ^{\\prime },\\kappa _0)\\subseteq \\cup _{\\rho <o^{\\vec{U}}(\\kappa _0)}S_\\rho $ , define $\\eta ={\\rm max}\\lbrace \\eta ^{\\prime },\\xi \\rbrace <\\kappa _0$ where $\\xi $ is 
from proposition REF .", "Let $\\alpha \\in Lim(C_G)\\cap (\\eta ,\\kappa _0)$ , then $\\alpha \\in S_{o^{(\\kappa _0)}(\\alpha )}$ .", "Since $\\alpha >\\xi $ , there is $\\xi \\le \\xi _\\alpha <\\alpha $ such that for every $\\nu \\in C_G\\cap (\\xi _\\alpha ,\\alpha )$ , $o^{(\\kappa _0)}(\\nu )<o^{(\\kappa _0)}(\\alpha )$ .", "If $o^{(\\kappa _0)}(\\alpha )=\\beta +1$ then $X^{(\\kappa _0)}_\\beta \\cap \\alpha \\in (\\cap \\vec{U}(\\alpha ))^+$ hence by REF .5, ${\\rm sup}(X^{(\\kappa _0)}_{\\beta }\\cap \\alpha \\cap C_G)=\\alpha $ .", "Let us argue that ${\\rm otp}(X^{(\\kappa _0)}_\\beta \\cap C_G\\cap (\\xi _\\alpha ,\\alpha ))=\\omega $ .", "Just otherwise denote by $\\mu $ the $\\omega $ -th element of $X^{(\\kappa _0)}_\\beta \\cap C_G\\cap (\\xi _\\alpha ,\\alpha )$ , then $\\mu <\\alpha $ .", "Since $\\mu >\\xi $ , proposition REF implies that $o^{\\vec{U}}(\\mu )\\ge \\beta +1$ .", "On the other hand, $\\mu >\\xi \\alpha $ , thus $o^{(\\kappa _0)}(\\mu )<o^{(\\kappa _0)}(\\alpha )$ , contradiction.", "If $cf^V(o^{(\\kappa _0)}(\\alpha )):=\\lambda <\\kappa _0$ , then by definition of $S_{o^{(\\kappa _0)}(\\alpha )}$ , $\\forall i<\\lambda .", "X^{(\\kappa _0)}_{\\rho _i}\\cap \\alpha \\in (\\cap \\vec{U}(\\alpha ))^+$ .", "hence by REF .5, for every $ i<\\lambda $ , ${\\rm sup}(X^{(\\kappa _0)}_{\\rho _i}\\cap \\alpha \\cap C_G)=\\alpha $ , thus the sequence of $x_i$ 's defined in the proposition starting above any $\\xi <\\alpha $ is well defined.", "The second property of $S_{o^{(\\kappa _0)}(\\alpha )}$ is that $Y:=\\lbrace \\beta <\\alpha \\mid \\forall i<\\lambda .", "\\beta \\in X^{(\\kappa _0)}_{\\rho _i}\\rightarrow \\forall j<i.X^{(\\kappa _0)}_j\\cap \\beta \\in (\\cap \\vec{U}(\\beta ))^+\\rbrace \\in \\cap \\vec{U}(\\alpha )$ By REF .3 there is $\\xi _\\alpha \\le \\zeta _\\alpha <\\alpha $ such that $C_G\\cap (\\zeta _\\alpha ,\\alpha )\\subseteq Y$ .", "Start the definition of $x_i$ 's above $\\zeta _\\alpha $ .", "To see it is increasing, note that $x_i\\in C_G\\cap (\\zeta _\\alpha ,\\alpha )\\cap X^{(\\kappa _0)}_{\\rho _i}$ so by definition of $Y$ , $\\forall j<i$ , $X^{(\\kappa _0)}_{\\rho _j}\\cap x_i\\in (\\cap \\vec{U}(x_i))^+$ , again by REF .5, for every $j<i$ ${\\rm sup}(X^{(\\kappa _0)}_{\\rho _j}\\cap x_i\\cap C_G)=x_i$ and therefore by minimality of $x_j$ it follows that for $j<i$ , $x_j<x_i$ .", "To see that the sequence of $x_i$ 's is unbounded, just otherwise its limit point would be some $\\zeta \\in (\\zeta _\\alpha ,\\alpha )$ .", "Since the $x_i$ 's are increasing and by proposition REF , $o^{(\\kappa _0)}(\\zeta )\\ge limsup_{i<\\lambda } o^{(\\kappa _0)}(x_i)+1 =limsup_{i<\\lambda }\\rho _i+1= o^{(\\kappa _0)}(\\alpha )$ contradicting the choice of $\\xi _\\alpha $ .", "Finally, if $cf^V(o^{(\\kappa _0)}(\\alpha ))=\\kappa _0$ , then $\\forall i<\\alpha .", "X^{(\\kappa _0)}_{\\rho _i}\\cap \\alpha \\in (\\cap \\vec{U}(\\alpha ))^+$ hence by REF .5, $\\forall i<\\alpha .", "{\\rm sup}(X^{(\\kappa _0)}_{\\rho _i}\\cap \\alpha \\cap C_G)=\\alpha $ .", "If the limit $x^*$ of the $x_n$ 's defined in the proposition would be less than $\\alpha $ , then by the definition of $S_{o^{(\\kappa _0)}(\\alpha )}$ there is $\\zeta <x^*$ such that $\\cup _{i<\\rho _{\\zeta }}X^{(\\kappa _0)}_i\\cap x^*\\in \\cap \\vec{U}(x^*)$ .", "To see the contradiction, on one hand there is $\\sigma < x^*$ such that $C_G\\cap (\\sigma ,x^*)\\subseteq \\cup _{i<\\rho _{\\zeta }}X^{(\\kappa _0)}_i\\cap \\zeta $ So there is $N<\\omega $ such that $\\forall n\\ge N$ , 
$x_n\\in \\cup _{i<\\rho _{\\zeta }}X^{(\\kappa _0)}_i\\cap \\zeta $ .", "On the other find $N\\le n<\\omega $ such that $x_n>\\zeta $ , then $o^{(\\kappa _0)}(x_{n+1})=\\rho _{x_n}>\\rho _{\\zeta }$ , which implies $x_{n+1}\\notin \\cup _{i<\\rho _{\\zeta }}X^{(\\kappa _0)}_i\\cap \\zeta $ .$\\blacksquare $ Corollary 2.25 Let $G\\subseteq \\mathbb {M}[\\vec{U}]$ be $V$ -generic.", "Assume that $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , then for every $V$ -regular cardinal $\\alpha $ , $cf^{V[G]}(\\alpha )<\\alpha $ iff $\\alpha \\in C_G\\cup \\lbrace \\kappa \\rbrace $ and $0<o^{\\vec{U}}(\\alpha )<\\alpha ^+$ ." ], [ "Other preliminaries", "Definition 2.26 Let $X,X^{\\prime }$ be sets of ordinals such that $X^{\\prime }\\subseteq X\\subseteq On$ .", "Let $\\alpha =otp(X,\\in )$ be the order type of $X$ and $\\phi :\\alpha \\rightarrow X$ be the order isomorphism witnessing it.", "The indices of $X^{\\prime }$ in $X$ are $Ind(X^{\\prime },X)=\\phi ^{-1^{\\prime \\prime }}X^{\\prime }=\\lbrace \\beta <\\alpha \\mid \\phi (\\beta )\\in X^{\\prime }\\rbrace $ In the last part of the proof we will need the definition of quotient forcing.", "Definition 2.27 Let $\\underaccent{\\sim }{D}$ be a $\\mathbb {M}[\\vec{U}]$ -name for a subset of $\\kappa $ and let $D\\subseteq \\kappa $ such that $\\underaccent{\\sim }{D}_G=D$ .", "Define $\\mathbb {P}_{\\underaccent{\\sim }{D}}$ , the complete subalgebra of ${\\langle }RO(\\mathbb {M}[\\vec{U}]),\\le _B{\\rangle }$ $RO(\\mathbb {M}[\\vec{U}])$ is the set of all regular open cuts of $\\mathbb {M}[\\vec{U}]$ (see for example [11]), as usual we identify $\\mathbb {M}[\\vec{U}]$ as a dense subset of $RO(\\mathbb {M}[\\vec{U}])$ .", "The order $\\le _B$ is in the standard position of Boolean algebras orders i.e.", "$p\\le _B q$ means $p\\Vdash q\\in \\hat{G}$ .", "generated by the conditions $X=\\lbrace ||\\alpha \\in \\underaccent{\\sim }{D}||\\mid \\alpha <\\kappa \\rbrace $ .", "By [11], $V[D]=V[H]$ for some $V$ -generic filter $H$ of $\\mathbb {P}_{\\underaccent{\\sim }{D}}$ .", "In fact $D=\\lbrace \\alpha <\\kappa \\mid ||\\alpha \\in \\underaccent{\\sim }{D}||\\in X\\cap H\\rbrace $ As for the other direction, $H$ is definable and uniquely determined by the set $X\\cap H=\\lbrace ||\\alpha \\in \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{D}}}||\\mid \\alpha \\in D\\rbrace $ which belongs to $V[D]$ (see [11]).", "Denote this $H$ by $H_D$ .", "Definition 2.28 Define the function $\\pi :\\mathbb {M}[\\vec{U}]\\rightarrow \\mathbb {P}_{\\underaccent{\\sim }{D}}$ by $\\pi (p)=\\inf \\lbrace b\\in \\mathbb {P}_{\\underaccent{\\sim }{D}}\\mid p\\le _B b\\rbrace $ It not hard to check that $\\pi $ is a projection i.e.", "$\\pi $ is order preserving.", "$\\forall p\\in \\mathbb {M}[\\vec{U}].\\forall q\\le _B\\pi (p).\\exists p^{\\prime }\\ge p.\\pi (p^{\\prime })\\le _B q$ .", "$Im(\\pi )$ is dense in $\\mathbb {P}_{\\underaccent{\\sim }{D}}$ .", "Definition 2.29 Let $\\mathbb {P},\\mathbb {Q}\\in V$ be forcing notions, $\\pi :\\mathbb {P}\\rightarrow \\mathbb {Q}$ be any projection and let $H\\subseteq \\mathbb {Q}$ be $V$ -generic.", "Define the quotient forcing $\\mathbb {P}/H=\\pi ^{-1^{\\prime \\prime }}H$ Also if $G\\subseteq \\mathbb {P}$ is a $V$ -generic filter, the projection of $G$ is the filter $\\pi _*(G):=\\lbrace q\\in \\mathbb {Q}\\mid \\exists p\\in G. 
q\\le _{\\mathbb {Q}}p\\rbrace $ We abuse notation by defining $\\mathbb {M}[\\vec{U}]/D=\\mathbb {M}[\\vec{U}]/H_D$ , where $H_D$ is the $V$ -generic filter (definable from $D)$ for $\\mathbb {P}_{\\underaccent{\\sim }{D}}$ such that $V[H_D]=V[D]$ .", "It is important to note that $\\mathbb {M}[\\vec{U}]/D$ depends on the choice of the name $\\underaccent{\\sim }{D}$ .", "Proposition 2.30 Let $\\pi :\\mathbb {P}\\rightarrow \\mathbb {Q}$ be a projection, then: If $G\\subseteq \\mathbb {P}$ is $V$ -generic then $\\pi _*(G)$ is $V$ -generic filter for $\\mathbb {Q}$ If $G\\subseteq \\mathbb {P}$ is $V$ -generic then $G\\subseteq \\mathbb {P}/\\pi _*(G)$ is $V[\\pi _*(G)]$ -generic filter.", "If $G\\subseteq \\mathbb {P}/H$ is $V[H]$ -generic, then $\\pi _*(G)=H$ and $G\\subseteq \\mathbb {P}$ is $V$ -generic.", "Definition 2.31 We denote $X\\subseteq ^* Y$ if $X\\setminus Y$ is finite.", "Also define $X=^*Y$ if $X\\subseteq ^* Y\\wedge Y\\subseteq ^* X$ , equivalently, if $X\\bigtriangleup Y$ is finite.", "In the next theorem, we will need the Erdös-Rado theorem[6], which is stated here for the convenience of the reader (For the proof see [12] or [10]).", "Theorem 2.32 If $\\theta $ is a regular cardinal then for every $\\rho <\\theta $ $(2^{<\\theta })^+\\rightarrow (\\theta +1)^2_{\\rho }$ i.e.", "for every $f:[(2^{<\\theta })^+]^2\\rightarrow \\rho $ there is $H\\subseteq (2^{<\\theta })^+$ such that ${\\rm otp}(H)=\\theta +1$ and $f\\upharpoonright [H]^2$ is constant.", "Theorem 2.33 Let $\\aleph _0<\\lambda $ be a strong limit cardinal, and $\\mu >\\lambda $ be regular.", "Let $\\langle D_\\alpha \\mid \\alpha <\\mu \\rangle $ be any $\\subseteq ^*$ -increasing sequence of subsets of $\\lambda $ .", "Then the sequence $=^*$ -stabilizes i.e.", "there is $\\alpha ^*<\\mu $ such that for every $\\alpha ^*\\le \\alpha <\\mu $ , $D_\\alpha =^*D_{\\alpha ^*}$ .", "Remark 2.34 The theorem fail for $\\lambda =\\aleph _0$ .", "Let us construct a counter example: Define ${\\langle }D_i\\mid i<\\omega _1{\\rangle }$ a sequence of subsets of $\\omega $ by induction, such that: ${\\langle }D_i\\mid i<\\omega _1{\\rangle }$ is $\\subseteq ^*$ -increasing.", "For all $i<j<\\omega _1$ , $|D_j\\setminus D_i|=\\aleph _0$ .", "For every $i<\\omega _1$ , $|\\omega \\setminus D_i|=\\aleph _0$ Let $D_0=\\emptyset $ .", "Assume that for $\\alpha <\\omega _1$ , ${\\langle }D_i\\mid i<\\alpha {\\rangle }$ is $\\subseteq ^*$ -increasing, and let us define $D_\\alpha $ .", "If $\\alpha =\\beta +1$ , then by $(3)$ , $|\\omega \\setminus D_\\beta |=\\aleph _0$ , Let $\\omega \\setminus D_{\\beta }=X\\uplus Y$ where $|X|=|Y|=\\aleph _0$ .", "Define $D_\\alpha =D_{\\beta }\\cup X$ .", "If $\\alpha $ is limit, then $cf(\\alpha )=\\omega $ , let ${\\langle }\\alpha _n\\mid n<\\omega {\\rangle }$ be increasing and cofinal in $\\alpha $ and denote by $E_n=D_{\\alpha _n}$ .", "We construct natural numbers $x_n,y_n$ .", "By $(3)$ ,$|\\omega \\setminus E_0|=\\omega $ , let $x_0,y_0\\in \\omega \\setminus E_0$ be distinct.", "Assume that $x_k, y_k$ are defined for every $k\\le n$ , then $Z=\\omega \\setminus (E_{n+1}\\cup \\lbrace x_k,y_k\\mid k\\le n\\rbrace )$ is infinite, pick $x_{n+1},y_{n+1}\\in Z$ distinct.", "Clearly $|\\lbrace x_n\\mid n<\\omega \\rbrace |=|\\lbrace y_n\\mid n<\\omega \\rbrace |=\\aleph _0\\text{ and }\\lbrace x_n\\mid n<\\omega \\rbrace \\cap \\lbrace y_n\\mid n<\\omega \\rbrace =\\emptyset $ Let $D_{x,\\alpha }=\\omega \\setminus \\lbrace x_n\\mid n<\\omega \\rbrace $ and $D_{y,\\alpha }=\\omega 
\\setminus \\lbrace y_n\\mid n<\\omega \\rbrace $ .", "We claim the for every $n<\\omega $ , $E_n\\subseteq ^* D_{x,\\alpha },D_{y,\\alpha }$ .", "By symmetry it suffices to show it for $D_{x,\\alpha }$ .", "If $r\\in E_n\\setminus D_{x,\\alpha }$ , then there is $m$ such that $r=x_m$ , since for every $m\\ge n$ , $x_m\\notin E_n$ , it follows that $m<n$ .", "Thus $E_n\\setminus D_{x,\\alpha }\\subseteq \\lbrace x_m\\mid m<n\\rbrace $ , implying $E_n\\subseteq ^* D_{x,\\alpha }$ .", "Let us argue that Either for every $n<\\omega $ , $|D_{x,\\alpha }\\setminus E_n|=\\omega $ , or for every $n<\\omega $ , $|D_{y,\\alpha }\\setminus E_n|=\\omega $ .", "Assume otherwise, there is $n<\\omega $ such that $D_{x,\\alpha }=^*E_n$ and there is $k<\\omega $ such that $D_{y,\\alpha }=^*E_k$ .", "For every $n\\le m<\\omega $ , $D_{x,\\alpha }=^*E_n\\subseteq ^* E_m\\subseteq ^*D_{x,\\alpha }$ Hence $E_m=^*D_{x,\\alpha }$ .", "In the same way we see that for every $k\\le m<\\omega $ , $E_m=^*D_{y,\\alpha }$ .", "Let $m>{\\rm max}\\lbrace n,k\\rbrace $ .", "Then $D_{y,\\alpha }=^*E_m=^*D_{x,\\alpha }$ , contradiction.", "Without loss of generality, assume that for every $n<\\omega $ , $|D_{x,\\alpha }\\setminus E_n|=\\omega $ .", "Define $D_\\alpha =D_{x,\\alpha }$ .", "Let us prove $(1),(2),(3)$ .", "To see $(1)$ , for each $\\beta <\\alpha $ find $n<\\omega $ such that $\\beta <\\alpha _n$ , then $D_\\beta \\subseteq ^* D_{\\alpha _n}\\subseteq ^* D_\\alpha $ .", "Also $D_{\\alpha }\\setminus D_{\\alpha _n}\\subseteq (D_{\\alpha }\\setminus D_{\\beta })\\cup ( D_{\\beta }\\setminus D_{\\alpha _n})$ .", "Since $|D_{\\alpha }\\setminus D_{\\alpha _n}|=\\omega $ and $|D_{\\beta }\\setminus D_{\\alpha _n}|<\\omega $ it follows that $|D_{\\alpha }\\setminus D_{\\beta }|=\\omega $ , so $(2)$ holds.", "Finally, $(3)$ follows since $\\lbrace x_n\\mid n<\\omega \\rbrace \\subseteq \\omega \\setminus D_\\alpha $ .", "Proof of REF .", "Toward a contradiction, assume that the theorem fails, then by regularity of $\\mu $ , there is $Y\\subseteq \\mu $ such that $|Y|=\\mu $ and for every $\\alpha ,\\beta \\in Y$ , if $\\alpha <\\beta $ then $D_\\alpha \\subseteq ^* D_\\beta $ and $|D_\\beta \\setminus D_\\alpha |\\ge \\omega $ .", "For every $i<\\lambda $ , there is $E_i\\subseteq C_G\\cap i$ such that the set $X_i:=\\lbrace \\nu <\\mu \\mid D_\\nu \\cap i=E_i\\rbrace $ is unbounded in $\\mu $ , set $\\alpha _i:={\\rm min}(X_i)$ .", "Since $D_i$ is $\\subseteq ^*$ -increasing, for every $\\alpha _i\\le \\alpha <\\kappa ^+$ , $D_\\alpha \\cap i=^* E_i$ .", "To see this, find $\\beta \\in X_i$ such that $\\alpha _i\\le \\alpha \\le \\beta $ , then $D_{\\alpha _i}\\subseteq ^* D_{\\alpha }\\subseteq ^* D_{\\beta }$ Hence $E_i=D_{\\alpha _i}\\cap i\\subseteq ^* D_{\\alpha }\\cap i\\subseteq ^* D_{\\beta }\\cap i=E_i$ Set $\\alpha ^*={\\rm sup}\\lbrace \\alpha _i\\mid i<\\lambda \\rbrace $ , by regularity, $\\alpha ^*<\\mu $ .", "It follows that $(*) \\ \\ \\text{For every }\\delta <\\lambda \\text{ and every }\\alpha ^*\\le \\beta _1<\\beta _2<\\mu .", "\\ D_{\\beta _1}\\cap \\delta =^*E_{\\delta }=^* D_{\\beta _1}\\cap \\delta $ It follows that $(**) \\ \\ \\ \\ \\ \\text{For every }\\alpha ^*\\le \\beta _1<\\beta _2<\\mu .", "\\ \\ \\ |D_{\\beta _1}\\Delta D_{\\beta _2}|\\le \\omega \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ $ To see $(**)$ , assume otherwise, then there are $\\beta _1,\\beta _2$ such that $|D_{\\beta _1}\\Delta D_{\\beta _2}|\\ge \\omega _1$ .", "Thus there is $\\delta <\\lambda $ such that $|D_{\\beta 
_1}\\cap \\delta \\Delta D_{\\beta _2}\\cap \\delta |\\ge \\aleph _0$ contradiction $(*)$ .", "Also $cf(\\lambda )=\\aleph _0$ , since for any distinct $\\beta _1,\\beta _2\\in Y\\setminus \\alpha ^*$ , $|D_{\\beta _1}\\Delta D_{\\beta _2}|\\ge \\aleph _0$ , and by $(**)$ , $|D_{\\beta _1}\\Delta D_{\\beta _2}|\\le \\aleph _0$ so by Cantor–Bernstein $|D_{\\beta _1}\\Delta D_{\\beta _2}|=\\aleph _0$ .", "Since $\\beta _1,\\beta _2>\\alpha ^*$ , $D_{\\beta _1}\\Delta D_{\\beta _2}$ cannot be bounded, hence $cf(\\lambda )=\\aleph _0$ .", "Denote by $\\chi :=(2^{<\\aleph _1})^+=(2^{\\aleph _0})^+$ .", "Since $\\lambda >\\aleph _0$ is strong limit then $\\chi <\\lambda <\\mu $ .", "Fix any $X\\subseteq Y\\setminus \\alpha ^*$ such that $|X|=\\chi $ .", "Define a partition $f:[X]^2\\rightarrow \\omega $ : Let ${\\langle }\\eta _n\\mid n<\\omega {\\rangle }$ be cofinal in $\\lambda $ .", "For any $i<j$ in $X$ , $D_i\\subseteq ^* D_j$ , hence there is $n_{i,j}<\\omega $ such that $(D_{\\alpha _i}\\setminus \\eta _{n_{i,j}})\\subseteq D_{\\alpha _j}$ .", "Simply pick some $\\eta _{n_{i,j}}$ above finitely many elements in $D_{\\alpha _i}\\setminus D_{\\alpha _j}$ .", "Then set $f(i,j)=n_{i,j}$ Apply the Erdös-Rado theorem and find $I\\subseteq X$ such that ${\\rm otp}(I)=\\omega _1+1$ which is homogeneous with color $n^*<\\omega $ .", "This means that for any $i<j$ in $I$ , $D_{i}\\setminus \\eta _{n^*}\\subseteq D_{j}\\setminus \\eta _{n^*}$ .", "Let $\\langle i_\\rho \\mid \\rho <\\omega _1+1\\rangle $ be the increasing enumeration of $I$ .", "We will prove that $|D_{i_{\\omega _1}}\\setminus D_{i_0}|\\ge \\omega _1$ , and since $i_0,i_{\\omega _1}\\ge \\alpha ^*$ , this is a contradiction to $(**)$ .", "Indeed, for every $r<\\omega _1$ , pick any $\\delta _r\\in D_{i_{r+1}}\\setminus (D_{i_r}\\cup \\eta _{n^*})$ .", "Such $\\delta _r$ exists, since by $(*)$ , $D_{i_{r+1}}\\cap \\eta _{n^*}=^* D_{i_r}\\cap \\eta _{n^*}$ and since $i_{r},i_{r+1}\\in Y$ , $\\aleph _0\\le |D_{i_{r+1}}\\setminus D_{i_r}|$ .", "Let us argue that for every $\\beta \\le r<\\alpha \\le \\omega _1$ , $\\delta _r\\in D_{i_{\\alpha }}\\setminus D_{i_\\beta }$ .", "Since $i_\\beta \\le i_r<i_{r+1}\\le i_\\alpha \\in I$ then $(\\star ) \\ \\ D_{i_{r+1}}\\setminus \\eta _{n^*}\\subseteq D_{i_{\\alpha }}\\text{ and }D_{i_\\beta }\\setminus \\eta _{n^*}\\subseteq D_{i_r}$ By definition $\\delta _r\\in D_{i_{r+1}}\\setminus \\eta _{n^*}$ and $\\delta _r\\notin D_{i_r}$ .", "By $(\\star )$ , $\\delta _r\\in D_{i_\\alpha }\\setminus \\eta _{n^*}$ and $\\delta _r\\notin D_{i_\\beta }\\setminus \\eta _{n^*}$ .", "Since $\\delta _r>\\eta _{n^*}$ , it follows that $\\delta _r\\notin D_{i_\\beta }$ .", "Therefore $\\delta _r\\in D_{i_\\alpha }\\setminus D_{i_\\beta }$ .", "In particular, for for every $r<\\omega _1$ , $\\delta _r\\in D_{i_{\\omega _1}}\\setminus D_{i_0}$ so the map $r\\mapsto \\delta _r$ is well defined from $\\omega _1$ to $D_{i_{\\omega _1}}\\setminus D_{i_0}$ .", "Also if $r_1<r_2<\\omega _1$ , then $\\delta _{r_2}\\notin D_{i_{r_1}+1}$ and $\\delta _{r_1}\\in D_{i_{r_1+1}}$ so $\\delta _1\\ne \\delta _2$ .", "Thus we found an injection of $\\omega _1$ to $D_{i_{\\omega _1}}\\setminus D_{i_0}$ , contradicting $(**)$ .$\\blacksquare $" ], [ "Fat Trees", "In case $o^{\\vec{U}}(\\kappa )$ is for example $\\omega _1$ , the strong Prikry property for $\\mathbb {M}[\\vec{U}]$ insures that given $p\\in \\mathbb {M}[\\vec{U}]$ and a dense open set $D\\subseteq \\mathbb {M}[\\vec{U}]$ , there is a choice of measures 
$U(\\kappa _1,i_1),..,U(\\kappa ,_n,i_n)$ where $\\kappa _1\\le ...\\le \\kappa _n\\le \\kappa $ and a direct extension $p\\le ^*p^*$ such that for every choice $\\vec{\\alpha }\\in A_1\\times ...\\times A_n $ from the typical sets associated to $U(\\kappa _1,i_1),..,U(\\kappa _n,i_n)$ , $p^{*}{\\langle }\\alpha _1,...,\\alpha _n{\\rangle }\\in D$ .", "This means that in the ground model we can determine measures which are necessary to enter $D$ .", "For higher order of $\\kappa $ this is no longer the case.", "For example, assume that $o^{\\vec{U}}(\\kappa )=\\kappa $ and consider the first element of $C_G$ i.e.", "$C_G(0)$ .", "Since ${\\rm otp}(C_G)=\\kappa $ , consider $C_G(C_G(0))$ .", "Let $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}$ such that $\\Vdash _{\\mathbb {M}[\\vec{U}]} \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}=C_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{G}}}}(C_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{G}}}}(0))$ .", "Consider any condition of the form $p={\\langle }{\\langle }\\kappa ,A{\\rangle }{\\rangle }$ .", "There is no choice of measures in the ground model and no direct extension of $p$ which determine $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}$ .", "Instead, we can construct a tree $T$ with two levels.", "Then first, is simply all the ordinals which can be $C_G(0)$ , namely ${\\rm Lev}_1(T)=\\lbrace \\alpha \\in A\\mid o^{\\vec{U}}(\\alpha )=0\\rbrace \\in U(\\kappa ,0)$ .", "Now any extension of the form $p^{}\\alpha $ for $\\alpha \\in A$ forces that $C_G(0)=\\alpha $ , so to determine $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}$ we only need to pick dome ordinal in the set $\\lbrace \\beta \\in A\\setminus \\alpha +1\\mid o^{\\vec{U}}(\\beta )=\\alpha \\rbrace \\in U(\\kappa ,\\alpha )$ .", "Hence we define ${\\rm Succ}_{T}({\\langle }\\alpha {\\rangle })=\\lbrace \\beta \\in A\\setminus \\alpha +1\\mid o^{\\vec{U}}(\\beta )=\\alpha \\rbrace $ .", "Since the measure used in the second level is different for every choice of $\\alpha $ , we cannot find a single measure that will turn this tree into a product.", "This section is devoted to the study of some combinatorical aspects of such trees.", "Definition 2.35 Let $\\vec{U}$ be a coherent sequence of normal measures and $\\theta _1\\le ...\\le \\theta _n$ be measurables with $o^{\\vec{U}}(\\theta _i)>0$ .", "A $\\vec{U}-fat \\ tree$ on $\\theta _1\\le ...\\le \\theta _n$ is a tree $\\langle T, \\le _{T}\\rangle $ such that $T\\subseteq \\prod _{i=1}^n\\theta _i$ and $\\langle \\ \\rangle \\in T$ .", "$\\le _{T}$ is end-extension i.e.", "$t\\le _{T}s \\Leftrightarrow t=s\\cap {\\rm max}(t)+1$ $T$ is downward closed with respect to end-extension.", "For any $t\\in T$ one of the following holds: $|t|=n$ $|t|<n$ and there is $\\beta < o^{\\vec{U}}(\\theta _{|t|+1})$ such that $\\lbrace \\alpha \\mid t^{\\frown }\\langle \\alpha \\rangle \\in T\\rbrace \\in U(\\theta _{|t|+1},\\beta )$ .", "Some usual notations of trees: ${\\rm Succ}_T(t)=\\lbrace \\alpha \\mid t^{}\\langle \\alpha \\rangle \\in T\\rbrace $ .", "For each $t\\in T$ with $|t|<n$ , choose $\\xi (t)$ such that $suc_T(t)\\in U(\\theta _{|t|+1},\\xi (t))$ , and define $U^{(T)}_t=U(\\theta _{|t|+1},\\xi (t))$ (We drop the script $(T)$ when there is no risk of confusion).", "Note that if the measures in $\\vec{U}$ can be separated i.e.", "there are $\\langle X(\\alpha ,\\beta )\\mid \\langle \\alpha ,\\beta \\rangle \\in Dom(\\vec{U})\\rangle $ 
such that $X_i\\in U_i\\wedge \\forall j\\ne i X_i\\notin U_j$ , then we can intersect each set of the form ${\\rm Succ}_T(t)$ with appropriate $X_i$ and then $\\xi (t)$ has a unique choice.", "$ht(t)={\\rm otp}(s\\in T\\mid s<_T t)$ ${\\rm Lev}_i(T)=\\lbrace t\\in T\\mid ht(t)=i\\rbrace $ .", "The height of a tree is $ht(T)={\\rm max}(\\lbrace n<\\omega \\mid {\\rm Lev}_n(T)\\ne \\emptyset \\rbrace )$ .", "We will assume that if $\\theta _i<\\theta _{i+1}$ then for every $t\\in {\\rm Lev}_i(T)$ , ${\\rm min}({\\rm Succ}_T(t))>\\theta _i$ .", "For $t\\in T$ the tree above $t$ is $T/t=\\lbrace s\\in T\\mid t\\le _Ts\\rbrace $ .", "We identify $T/t$ with the $\\vec{U}$ -fat tree $\\lbrace s\\setminus t\\mid s\\in T/t\\rbrace $ .", "The set of all maximal branches of $T$ is denoted by $mb(T)={\\rm Lev}_{ht(T)}(T)$ .", "Note that $mb(T)$ completely determine $T$ .", "Let $J\\subseteq \\lbrace 0,1,...,ht(T)\\rbrace $ then $T\\upharpoonright J=\\lbrace t\\upharpoonright J\\mid t\\in T\\rbrace $ .", "For every $\\vec{U}$ -fat tree $T$ in $\\theta _1\\le ...\\le \\theta _n$ of height $n$ , define the iteration associated to $T$, ${\\langle }j^{(T)}_{m,k},M_k\\mid 0\\le m\\le k\\le n{\\rangle }$ , usually we drop the superscript $T$ .", "Let $V=M_0$ , $j_1=j_{0,1}:=j_{U^{(T)}_{{\\langle }{\\rangle }}}:V\\rightarrow Ult(V,U^{(T)}_{{\\langle }{\\rangle }})\\simeq M_{U^{(T)}_{{\\langle }{\\rangle }}}:=M_1$ then $crit(j_{1})=\\theta _1\\in j_{1}({\\rm Succ}_T({\\langle }{\\rangle }))={\\rm Succ}_{j_{1}(T)}({\\langle }{\\rangle })$ .", "Thus ${\\langle }\\theta _1{\\rangle }\\in {\\rm Lev}_1(j_1(T))$ .", "Assume that ${\\langle }j_{m^{\\prime },m},M_m\\mid 0\\le m\\le m^{\\prime }\\le k{\\rangle }$ is defined for some $k<n$ , for every $1\\le i\\le k<n$ , denote $\\kappa _i:=crit(j_{i-1,i})=j_{i-1}(\\theta _i)$ and assume ${\\langle }\\kappa _1,..,\\kappa _k{\\rangle }\\in {\\rm Lev}_k(j_k(T))$ .", "Let $j_{k,k+1}:= j_{U^{(j_k(T))}_{{\\langle }\\kappa _1,..,\\kappa _k{\\rangle }}}:M_k\\rightarrow Ult(M_k,U^{(j_k(T))}_{{\\langle }\\kappa _1,..,\\kappa _k{\\rangle }})\\simeq M_{k+1}$ $j_{i,k+1}=j_{k,k+1}\\circ j_{i,k}$ and $j_{k+1}=j_{0,k+1}$ .", "Note that ${\\rm Succ}_{j_k(T)}({\\langle }\\kappa _1,..,.\\kappa _k{\\rangle })\\in U^{(j_k(T))}_{{\\langle }\\kappa _1,..,\\kappa _k{\\rangle }}$ which is a normal measure on $j_{k}(\\theta _k+1)$ .", "Thus $\\kappa _{k+1}:=j_{k}(\\theta _{k+1})={\\rm crit}(j_{k,k+1})\\in j_{k,k+1}({\\rm Succ}_{j_k(T)}({\\langle }\\kappa _1,..,.\\kappa _k{\\rangle }))={\\rm Succ}_{j_{k+1}(T)}({\\langle }\\kappa _1,..,\\kappa _k{\\rangle })$ Therefore, ${\\langle }\\kappa _1,..,.\\kappa _k,\\kappa _{k+1}{\\rangle }\\in {\\rm Lev}_{k+1}(j_{k+1}(T))$ .", "We denote $j_T=j_n$ and $M_T=M_n$ .", "More generally, an tree iteration of $\\vec{U}$ -measures is a finite iteration ${\\langle }j_{m,k},M_k\\mid 0\\le m\\le k\\le n{\\rangle }$ of $V$ such that for some measurable cardinals $\\theta _1\\le ...\\le \\theta _n$ , for every $0\\le m<n$ , there is normal measure $W_{m+1}\\in j_m(\\vec{U})$ on $j_m(\\theta _{m+1})$ such that $j_{m,m+1}=j_{W_m}: M_m\\rightarrow Ult(M_m,W_m)\\simeq M_{m+1}$ Denote by $\\kappa _m=j_{m-1}(\\theta _m)$ and derive an ultrafilter $U$ on $\\prod _{i=1}^n\\theta _i$ by the formula: $X\\in U\\longleftrightarrow {\\langle }\\kappa _1, \\kappa _2,...,\\kappa _n{\\rangle }\\in j_n(X)$ Let us verify some standard properties of such an iteration: Proposition 2.36 Let ${\\langle }j_{m,k},M_k\\mid 0\\le m\\le k\\le n{\\rangle }$ be a tree iteration of $\\vec{U}$ 
-measures.", "Then: $U$ is a $\\theta _1$ -complete ultrafilter on $\\prod _{i=1}^n\\theta _i$ .", "For any formula $\\Phi (y_1,...,y_{m})$ and any $f_1,...,f_m:\\prod _{i=1}^n\\theta _i\\rightarrow V$ , $M_n\\models \\Phi (j_n(f_1)(\\kappa _1,...,\\kappa _n),...,j_n(f_m)(\\kappa _1,...,\\kappa _n))\\Leftrightarrow \\lbrace \\vec{\\alpha }\\in \\prod _{i=1}^n\\theta _i\\mid \\Phi (f_1(\\vec{\\alpha }),...,f_m(\\vec{\\alpha })\\rbrace \\in U$ Let $j_U:V\\rightarrow Ult(V,U)\\simeq M_U$ be the elementary embedding associated to $U$ , then $M_U=M_n$ and $j_U=j_n$ .", "For every $R\\in U$ there is a $\\vec{U}$ -fat tree $S$ such that $mb(S)\\subseteq R$ , $mb(S)\\in U$ .", "Moreover, if $j_{i-1}(f_i)(\\kappa _1,..,\\kappa _{i-1})=W_i$ (the ultrafilter used in $j_{i,i-1}$ ), then for every $s\\in {\\rm Lev}_{i-1}(S)$ , ${\\rm Succ}_{S}(s)\\in f_i(s)$ .", "Proof.", "$(1)$ is a standard consequence of the critical point of the iteration being $\\theta _1$ .", "For $(2)$ , by elementarity of $j_n$ , $j_n(\\lbrace \\vec{\\alpha }\\in \\prod _{i=1}^n\\theta _i\\mid \\Phi (f_1(\\vec{\\alpha }),...,f_m(\\vec{\\alpha }))\\rbrace )=\\lbrace \\vec{\\alpha }\\in \\prod _{i=1}^nj_n(\\theta _i)\\mid M_n\\models \\Phi (j_n(f_1)(\\vec{\\alpha }),...,j_n(f_m)(\\vec{\\alpha }))\\rbrace $ Note that $\\kappa _i=j_{i-1}(\\theta _i)=crit(j_{i,i+1})$ , thus $\\kappa _i<j_{i,i+1}(j_{i-1}(\\theta _i))\\le j_n(\\theta _i)$ .", "By definition of $U$ , $M_n\\models \\Phi (j_n(f_1)(\\kappa _1,...,\\kappa _n),...,j_n(f_m)(\\kappa _1,...,\\kappa _n))\\leftrightarrow $ $\\leftrightarrow {\\langle }\\kappa _1,..,\\kappa _n{\\rangle }\\in j_n(\\lbrace \\vec{\\alpha }\\in \\prod _{i=1}^n\\theta _i\\mid \\Phi (f_1(\\vec{\\alpha }),...,f_m(\\vec{\\alpha }))\\rbrace )\\leftrightarrow $ $\\leftrightarrow \\lbrace \\vec{\\alpha }\\in \\prod _{i=1}^n\\theta _i\\mid \\Phi (f_1(\\vec{\\alpha }),...,f_m(\\vec{\\alpha }))\\rbrace \\in U$ For $(3)$ , it suffices to prove $M_U\\simeq M_n$ via an isomorphism $k:M_U\\rightarrow M_n$ such that $k\\circ j_U=j_n$ .", "Define $k([f]_U)=j_n(f)(\\kappa _1,..,\\kappa _n)$ .", "By $(2)$ , $k$ is well defined and elementary embedding.", "Moreover, by elementarity of $j_n$ , if $c_x$ is constant function with value $x$ then $j_n(c_x)$ is constant with value $j_n(x)$ .", "Thus, $k(j_U(x))=k([c_x]_U)=j_n(c_x)(\\kappa _1,..,\\kappa _n)=j_n(x)$ To see $j_U$ is onto, let $x\\in M_n$ , since $M_n$ is the ultrapower of $M_{n-1}$ by $W_n$ , there is $f_{n-1}\\in M_{n-1}$ , $f_{n-1}:j_{n-1}(\\theta _n)\\rightarrow M_{n-1}$ , such that $j_{n,n-1}(f_{n-1})(\\kappa _n)=x$ .", "Inductively, assume that $x=j_{n,i}(f_i)(\\kappa _{i+1},..,\\kappa _n)$ , where $f_i:\\prod _{k=i+1}^nj_i(\\theta _k)\\rightarrow M_i$ .", "Since $M_i$ is the ultrapower of $W_{i-1}$ , there is $g_{i-1}: j_{i-1}(\\theta _i)\\rightarrow M_{i-1}$ such that $j_{i,i-1}(g_{i-1})(\\kappa _i)=f_i$ .", "By elementarity, for every $\\alpha <j_{i-1}(\\theta _i)$ , $g_{i-1}(\\alpha ):\\prod _{k=i+1}^nj_{i-1}(\\theta _k)\\rightarrow M_{i-1}$ .", "Define $f_{i-1}:\\prod _{k=i}^nj_{i-1}(\\theta _k)\\rightarrow M_{i-1}\\text{ by } f_{i-1}(\\alpha _i,..,\\alpha _n)=g(\\alpha _i)(\\alpha _{i+1},...,\\alpha _n)$ Since $\\kappa _i=crit(j_{i,i-1})<crit(j_{n,i})=\\kappa _{i+1}$ , $j_{n,i}(\\kappa _i)=\\kappa _i$ and $j_{n,i-1}(f_{i-1})(\\kappa _i,..,\\kappa _n)=j_{n,i}(j_{i,i-1}(g_{i-1})(\\kappa _i))(\\kappa _{i+1},...,\\kappa _n)=j_{n,i}(f_{i})(\\kappa _{i+1},...,\\kappa _n)=x$ We conclude that there is $f_0:\\prod _{i=1}^n\\theta _i:\\rightarrow V$ such that 
$k([f]_U)=j_n(f_0)(\\kappa _1,..,\\kappa _n)=x$ .", "To see $(4)$ , let $W_{i}\\in M_{i-1}$ be the ultrafilter used in $j_{i,i-1}$ .", "Apply $(3)$ , and fix for every $1\\le i\\le n$ $f_i:\\prod _{k=1}^{i-1}\\theta _k\\rightarrow V$ such that $j_{i-1}(f_i)(\\kappa _1,...,\\kappa _{i-1})=W_i$ .", "We prove $(4)$ by induction on the length of the iteration $n$ .", "For $n=1$ we can take $S$ such that ${\\rm Lev}_1(S)=R$ , also ${\\rm Succ}_{S}({\\langle }{\\rangle })\\in W_1=j_0(f_1)({\\langle }{\\rangle })=f_1({\\langle }{\\rangle })$ .", "Assume this holds for iterations of length $i-1$ .", "Let $R\\in U$ , where $U$ is derived from an iteration of length $i$ .", "Since $R\\in U$ , by definition ${\\langle }\\kappa _1,..,\\kappa _i{\\rangle }\\in j_i(R)$ .", "It follows that $\\kappa _i\\in j_{i,i-1}(\\lbrace \\alpha <\\kappa _i\\mid {\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle }^{}\\alpha \\in R\\rbrace )$ .", "Since $j_{i,i-1}$ is the ultrapower by $W_i$ , $(\\star ) \\ \\ Z:=\\lbrace \\alpha <\\kappa _i\\mid {\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle }^{}\\alpha \\in j_{i-1}(R)\\rbrace \\in W_i=j_{i-1}(f_i)({\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle })$ Let $R^{\\prime }=\\lbrace \\vec{\\alpha }\\mid \\lbrace \\alpha <\\theta _i\\mid \\vec{\\alpha }^{}\\alpha \\in R\\rbrace \\in f_i(\\vec{\\alpha })\\rbrace $ , then by $(\\star )$ , ${\\langle }\\kappa _1,...,\\kappa _{i-1}{\\rangle }\\in j_{i-1}(R^{\\prime })$ .", "Apply induction to $R^{\\prime }$ and $j_{i-1}$ to find $S^{\\prime }$ , $\\vec{U}$ -fat tree such that $mb(S^{\\prime })\\subseteq R^{\\prime }$ , ${\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle }\\in j_{i-1}(mb(S^{\\prime }))$ and for every $s\\in {\\rm Lev}_{k-1}(S^{\\prime })$ , ${\\rm Succ}_{S^{\\prime }}(s)\\in f_k(s)$ .", "Define $S\\upharpoonright \\lbrace 1,..,i-1\\rbrace =S^{\\prime }\\text{ and for every }s\\in mb(S^{\\prime }), \\ {\\rm Succ}_{S}(s)=\\lbrace \\alpha <\\theta _i\\mid s^{}\\alpha \\in R\\rbrace $ Clearly $mb(S)\\subseteq R$ and by definition of $R^{\\prime }$ , for every $ s\\in {\\rm Lev}_{i-1}(S)$ , ${\\rm Succ}_{S}(s)\\in f_i(s)$ , which is a $\\vec{U}$ -measure over $\\theta _i$ .", "Together with the induction hypothesis, we conclude that $S$ is $\\vec{U}$ -fat tree on $\\theta _1\\le ...\\le \\theta _i$ .", "Finally, ${\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle }\\in j_{i-1}(mb(S^{\\prime }))=j_{i-1}({\\rm Lev}_{i-1}(S))$ , and by elementarity, ${\\rm Succ}_{j_{i-1}(S)}({\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle })=\\lbrace \\alpha <\\kappa _i\\mid {\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle }^{}\\alpha \\in j_{i-1}(R)\\rbrace \\in W_i$ Hence $\\kappa _i\\in j_{i,i-1}({\\rm Succ}_{j_{i-1}(S)}({\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle }))={\\rm Succ}_{j_{i}(S)}({\\langle }\\kappa _1,..,\\kappa _{i-1}{\\rangle })$ .", "It follow that ${\\langle }\\kappa _1,..,\\kappa _i{\\rangle }\\in j_i(mb(S))$ as wanted.$\\blacksquare $ If $T$ is a $\\vec{U}$ -tree then by definition, the iteration of $T$ is a tree iteration of $\\vec{U}$ -measures.", "We denote by $U_T$ the ultrafilter derived from $j_T:V\\rightarrow M_n$ .", "Proposition 2.37 Let $T$ be a $\\vec{U}$ -fat tree on $\\theta _1\\le ...\\le \\theta _n$ .", "Then: $mb(T)\\in U_T$ .", "If $S\\subseteq T$ is such that $ht(S)=ht(T)=n$ .", "${\\rm Succ}_S({\\langle }{\\rangle })\\in U^{(T)}_{{\\langle }{\\rangle }}$ .", "For every $\\alpha \\in {\\rm Succ}_S({\\langle }{\\rangle })$ , $mb(S/{{\\langle }\\alpha {\\rangle }})\\in U_{T/{{\\langle }\\alpha 
{\\rangle }}}$ Then $mb(S)\\in U_T$ .", "If $S\\subseteq T$ is such that $ht(S)=ht(T)=n$ .", "$mb(S\\upharpoonright \\lbrace 1,...,n-1\\rbrace )\\in U_{T\\upharpoonright \\lbrace 1,..,n-1\\rbrace }$ .", "For every $s\\in {\\rm Lev}_{n-1}(S)$ , ${\\rm Succ}_{S}(s)\\in U^{(T)}_{{\\langle }s{\\rangle }}$ If ${\\langle }U_i\\mid i<\\lambda {\\rangle }$ is a sequence of $\\lambda $ -complete ultrafilters over a set $B$ and $U$ is an $\\lambda $ -complete ultrafilter over $\\lambda $ .", "Then $U-lim_{i<\\lambda }U_i$ is an $\\lambda $ -complete ultrafilter over $\\lambda \\times B$ , defined: $ U-lim_{i<\\lambda }U_i:=\\Big \\lbrace X\\subseteq \\lambda \\times B\\mid \\lbrace i<\\lambda \\mid \\lbrace b\\in B\\mid (i,b)\\in X\\rbrace \\in U_i\\rbrace \\in U\\Big \\rbrace $ We can inductively conclude that $U_T=U^{(T)}_{{\\langle }{\\rangle }}-lim_{\\alpha _1<\\theta _1} U^{(T)}_{{\\langle }\\alpha {\\rangle }}-lim_{\\alpha _1<\\theta _2}U^{(T)}_{{\\langle }\\alpha _1,\\alpha _2{\\rangle }}-... -lim_{\\alpha _{n-1}<\\theta _{n-1}}U^{(T)}_{{\\langle }\\alpha _1,..,\\alpha _{n-2},\\alpha _{n-1}{\\rangle }}$ Then $mb(S)\\in U_T$ .", "If $S\\subseteq T$ is such that $ht(S)=ht(T)=n$ .", "For every $s\\in S\\setminus mb(S)$ , ${\\rm Succ}_{S}(s)\\in U^{(T)}_{{\\langle }s{\\rangle }}$ Then $mb(S)\\in U_T$ .", "If $S$ is a $\\vec{U}$ -fat tree, and $mb(S)\\in U_T$ , then there is a choice of measures $U^{(S)}_s$ such that $j^{(S)}_n=j^{(T)}_n$ and in particular, $U_S=U_T$ .", "Proof.", "For $(1)$ , by definition of $j_T$ , we have that ${\\langle }\\kappa _1,..,\\kappa _n{\\rangle }\\in mb(j_n(T))=j_n(mb(T))$ , hence by definition of $U_T$ , $mb(T)\\in \\vec{U}(T)$ .", "For $(2)$ , note that in $M_1$ we have the tree $j_1(T)/{{\\langle }\\kappa _1{\\rangle }}$ .", "By $(b),(c)$ it follow that in $M_1$ , $ mb(j_1(S)/{{\\langle }\\kappa _1{\\rangle }})\\in U_{j_1(T)/{{\\langle }\\kappa _1{\\rangle }}}$ .", "By definition, the iteration defined inside $M_1$ of $j_1(T)_{{\\langle }\\kappa _1{\\rangle }}$ is simply the iteration $j_T$ starting from the second step inside $M_1$ , namely, ${\\langle }j_{m,k}\\mid 1\\le k\\le m\\le n{\\rangle }$ .", "Hence ${\\langle }\\kappa _2,..,\\kappa _n{\\rangle }\\in j_{n,1}(mb(j_1(S)/{{\\langle }\\kappa _1{\\rangle }}))=mb(j_n(S)/{{\\langle }\\kappa _1{\\rangle }})$ It follows that ${\\langle }\\kappa _1,..,\\kappa _n{\\rangle }\\in mb(j_n(S))$ and by definition $mb(S)\\in U_T$ .", "As for $(3)$ , note that $j_{T\\upharpoonright \\lbrace 1,..,n-1\\rbrace }$ is by definition the first $n-1$ steps of the iteration of $j_T$ .", "By $(b)$ , $mb(S\\upharpoonright \\lbrace 1,..,n-1\\rbrace )\\in U_{T\\upharpoonright \\lbrace 1,..,n-1\\rbrace }$ , thus ${\\langle }\\kappa _1,..,\\kappa _{n-1}{\\rangle }\\in {\\rm Lev}_{n-1}(j_{n-1}(S))$ .", "By $(c)$ , and elementarity of $j_{n-1}$ , it follows that ${\\rm Succ}_{j_{n-1}(S)}({\\langle }\\kappa _1,...,\\kappa _{n-1}{\\rangle })\\in U^{(j_{n-1}(T))}_{{\\langle }\\kappa _1,...,\\kappa _{n-1}{\\rangle }}$ , hence $\\kappa _n\\in j_{n,n-1}({\\rm Succ}_{j_{n-1}(S)}({\\langle }\\kappa _1,...,\\kappa _{n-1}{\\rangle }))={\\rm Succ}_{j_{n}(S)}({\\langle }\\kappa _1,...,\\kappa _{n-1}{\\rangle })$ .", "In other words, ${\\langle }\\kappa _1,..,\\kappa _n{\\rangle }\\in j_n(mb(S))$ and by definition $mb(S)\\in U_T$ .", "For $(4)$ , by induction on $i\\le n$ let us argue that ${\\rm Lev}_i(S)=mb(S\\upharpoonright \\lbrace 1,...,i\\rbrace )\\in U_{T\\upharpoonright \\lbrace 1,...,i\\rbrace }$ .", "If $i=1$ then ${\\rm Lev}_1(S)={\\rm 
Succ}_{S}({\\langle }{\\rangle })\\in U^{(T)}_{{\\langle }{\\rangle }}$ .", "Assume that $mb(S\\upharpoonright \\lbrace 1,...,i-1\\rbrace )\\in \\vec{U}_{T\\upharpoonright \\lbrace 1,...,i-1\\rbrace }$ .", "By $(b)$ , for every $s\\in {\\rm Lev}_{i-1}(S)$ , ${\\rm Succ}_{S}(s)\\in U^{(T)}_{s}$ , now apply $(3)$ to $S\\upharpoonright \\lbrace 1,..,i\\rbrace $ and $T\\upharpoonright \\lbrace 1,..,i\\rbrace $ to conclude that $mb(S\\upharpoonright \\lbrace 1,...,i\\rbrace )\\in U_{T\\upharpoonright \\lbrace 1,..,i\\rbrace }$ .", "To see $(5)$ , again argue by induction on $i$ that $j^{(T)}_{i}=j^{(S)}_{i}$ .", "Since $mb(S)\\in U_T$ , ${\\langle }\\kappa _1,...,\\kappa _n{\\rangle }\\in mb(j_n(S))$ , hence $\\kappa _1\\in {\\rm Lev}_1(j_n(S))$ .", "Since $crit(j_{1,n})=\\kappa _2$ , $\\kappa _1\\in {\\rm Lev}_1(j_1(S))$ , and therefore ${\\rm Lev}_1(S)\\in U^{(T)}_{{\\langle }{\\rangle }}$ , choose $U^{(S)}_{{\\langle }{\\rangle }}=U^{(T)}_{{\\langle }{\\rangle }}$ which implies that $j^{(T)}_{0,1}=j^{(S)}_{0,1}$ .", "Assume that $j^{(T)}_i=j^{(S)}_i=j_i$ .", "Since $\\kappa _{i+1}\\in {\\rm Succ}_{j^{(T)}_n(S)}({\\langle }\\kappa _1,...,\\kappa _i{\\rangle })$ then $\\kappa _{i+1}\\in j^{(T)}_{i+1,i}({\\rm Succ}_{j_i(S)}({\\langle }\\kappa _1,..,\\kappa _i{\\rangle }))$ thus $(*)\\ \\ \\ \\ {\\rm Succ}_{j_i(S)}({\\langle }\\kappa _1,..,\\kappa _i{\\rangle })\\in U^{(j_i(T))}_{{\\langle }\\kappa _1,...,\\kappa _i{\\rangle }}$ Back in $V$ , for every $s\\in {\\rm Lev}_i(S)$ , if ${\\rm Succ}_S(s)\\in U^{(T)}_s$ , let $U^{(S)}_s=U^{(T)}_s$ , otherwise, we pick a random ultrafilter.", "Then by $(*)$ , and elementarity $U^{(j_i(S))}_{{\\langle }\\kappa _1,..,\\kappa _i{\\rangle }}=U^{(j_1(T))}_{{\\langle }\\kappa _1,..,\\kappa _i{\\rangle }}$ hence $j^{(T)}_{i+1}=j^{(S)}_{i+1}$ .", "$\\blacksquare $ The following lemma is a generalization of a combinatorical property that was proven in [5] for product of measures.", "It can be stated for more general trees, however, let us restrict the attention to our needs.", "Lemma 2.38 Let $\\vec{U}$ be a sequence of normal measures and let $T$ be a $\\vec{U}$ -fat tree on $\\theta _1\\le \\theta _2\\le ...\\le \\theta _n$ .", "For $f:mb(T)\\rightarrow \\theta _1$ regressive i.e.", "$f(t)<{\\rm min}(t)$ there is a $\\vec{U}$ -fat tree $T^{\\prime }\\subseteq T$ such that $mb(T^{\\prime })\\in U_T$ and $f\\upharpoonright mb(T^{\\prime })=const$ .", "Proof.", "By induction on the height of the tree.", "If $ht(T)=1$ it is the case of one normal measure, namely $U_{\\langle \\rangle }$ , which is well known.", "Assume the lemma holds for $n$ and fix $T,f$ such that $ht(T)=n+1$ .", "For $\\vec{\\alpha }\\in {\\rm Lev}_{n}(T)$ consider ${\\rm Succ}_T(\\vec{\\alpha })\\in U^{(T)}_{\\vec{\\alpha }}$ .", "Define $f_{\\vec{\\alpha }}:suc_T(\\vec{\\alpha })\\rightarrow \\lambda $ by $f_{\\vec{\\alpha }}(\\beta )=f(\\vec{\\alpha }^{\\frown }\\beta )$ .", "Then there exist $H_{\\vec{\\alpha }}\\in U_{\\vec{\\alpha }}$ homogeneous for $f_{\\vec{\\alpha }}$ with color $c_{\\vec{\\alpha }}<{\\rm min}(\\vec{\\alpha })$ .", "Consider the regressive function $g:mb(T\\upharpoonright \\lbrace 1,...,n\\rbrace )\\rightarrow \\lambda \\ \\ \\ g(\\vec{\\alpha })=c_{\\vec{\\alpha }}$ Since $ht(T\\upharpoonright \\lbrace 1,...,n\\rbrace )=n$ we can apply the induction hypothesis to $g$ , so let $T^{\\prime }\\subseteq T\\upharpoonright \\lbrace 1,...n\\rbrace $ such that $mb(T^{\\prime })\\in U_{T\\upharpoonright \\lbrace 1,..,n\\rbrace }$ be an homogeneous $\\vec{U}$ -fat with color 
$c^*$ .", "Extend $T^{\\prime }$ by adjoining $H_{\\vec{\\alpha }}$ as the successors of $\\vec{\\alpha }\\in mb(T^{\\prime })$ , denote the resulting tree by $T^*$ .", "Note that by the induction, $T^*\\subseteq T$ is a $\\vec{U}$ -fat tree with $ht(T^*)=n+1$ , and by REF .3 $mb(T^*)\\in U_T$ and $f\\upharpoonright mb(T^*)$ is constantly $c^*$ .", "$\\blacksquare $ The second combinatorical property, formulated in corollary REF , generalizes corollary REF , which is a consequence of the Weak compactness property of normal measures: Proposition 2.39 (folklore) Let $U$ be a normal ultrafilter over $\\kappa $ , and $f:[A]^2\\rightarrow \\lbrace 0,1\\rbrace $ such that $A\\in U$ .", "Then there is $A^{\\prime }\\in U$ such that $f\\upharpoonright [A^{\\prime }]^2$ is constant.$\\blacksquare $ Corollary 2.40 Let $U$ be a normal ultrafilter over $\\kappa $ , and $f:A\\rightarrow X$ any function such that $A\\in U$ .", "Then there is $A^{\\prime }\\subseteq A$ such that $A^{\\prime }\\in U$ and $f\\upharpoonright A^{\\prime }$ is either constant or $1-1$ .", "Proof.", "Define $g:[A]^2\\rightarrow \\lbrace 0,1\\rbrace $ by $g(\\alpha ,\\beta )=1\\leftrightarrow f(\\alpha )=f(\\beta )$ By weak compactness, there is $A^{\\prime }\\subseteq A$ , $A^{\\prime }\\in U$ , and $c\\in \\lbrace 0,1\\rbrace $ such that for every $\\alpha ,\\beta \\in A^{\\prime }$ , $\\alpha <\\beta $ , $g(\\alpha ,\\beta )=c$ .", "If $c=1$ , then $f\\upharpoonright A^{\\prime }$ is constant and if $c=0$ then $f\\upharpoonright A^{\\prime }$ is $1-1$ .$\\blacksquare $ In this argument we compare $f(\\alpha ),f(\\beta )$ for distinct $\\alpha ,\\beta $ .", "It is always the case that $\\alpha <\\beta \\vee \\beta <\\alpha $ hence we can think about this comparison as a function defined on $[A]^2$ which is a set in $U\\times U$ .", "One problem to generalize this argument to $\\vec{U}$ -fat trees is the following: Although for a given a function $f:mb(T)\\rightarrow X$ , a $\\vec{U}$ -fat tree $T$ , and distinct pair $t,t^{\\prime }\\in mb(T)$ we can be identify this pair as a branch of some $\\vec{U}$ -fat tree $S$ , $S$ might vary for different $t,t^{\\prime }$ .", "For example, if $t=\\langle \\alpha _1,\\alpha _2,\\alpha _3\\rangle $ and $t^{\\prime }=\\langle \\alpha _1^{\\prime },\\alpha _2^{\\prime },\\alpha _3^{\\prime }\\rangle $ the following is a possible such interweaving: $\\alpha _1<\\alpha _1^{\\prime }=\\alpha _2<\\alpha _2^{\\prime }<\\alpha _3^{\\prime }<\\alpha _3$ then we can think of $t,t^{\\prime }$ as a single branch from a tree $S$ of height 5 such that any branch $s={\\langle }s_1,s_2,s_3,s_4,s_5{\\rangle }\\in mb(S)$ decomposes back to $t={\\langle }s_1,s_2,s_5{\\rangle }$ ans $t^{\\prime }={\\langle }s_2,s_3,s_4{\\rangle }$ .", "However there can be different interweaving of $t,t^{\\prime }$ for which we need a different tree.", "Generally, if $t:={\\langle }\\alpha _1,...,\\alpha _n{\\rangle }, \\ t^{\\prime }:={\\langle }\\alpha ^{\\prime }_1,...,\\alpha ^{\\prime }_n{\\rangle }\\in mb(T)$ , the set $\\lbrace \\alpha _1,...,\\alpha _n\\rbrace \\cup \\lbrace \\alpha ^{\\prime }_1,...,\\alpha ^{\\prime }_n\\rbrace $ naturally orders in one of finitely many ways and induces an interweaving of $t,t^{\\prime }$ : Definition 2.41 $p$ is an interweaving of $T$, if it is a pair of order embedding $\\langle g,g^{\\prime }\\rangle $ where $g,g^{\\prime }:ht(T)\\rightarrow \\lbrace 1,...,k\\rbrace $ so that $Im(g)\\cup Im(g^{\\prime })=\\lbrace 1,...,k\\rbrace $ .", "Denote $A_p=Im(g), A_p^{\\prime 
}=Im(g^{\\prime })$ and $k=|p|$ .", "Let $T$ be a $\\vec{U}$ -fat tree on $\\theta _1\\le ...\\le \\theta _n$ .", "For every interweaving $p={\\langle }g,g^{\\prime }{\\rangle }$ , define the iteration associated with $p$ , $j_p=j^{(T)}_p$ : The length of the iteration is $|p|$ .", "Let $M_0=V$ and $j_0=Id$ .", "Assume that we are at the $m$ th step of the iteration and denote the critical points $\\kappa _1,...,\\kappa _m$ .", "Also assume inductively that $\\langle k_i\\mid i\\in A_p\\cap \\lbrace 1,...,m\\rbrace \\rangle \\in j_m(T),\\langle k_i\\mid i\\in A^{\\prime }_p\\cap \\lbrace 1,...,m\\rbrace \\rangle \\in j_m(T)$ If $m+1\\in A_p\\setminus A_p^{\\prime }$ , let $r\\le ht(T)$ be such that $g(r)=m+1$ .", "Then $\\langle k_i\\mid i\\in A_p\\cap \\lbrace 1,...,m\\rbrace \\rangle \\in {\\rm Lev}_{r-1}(j_m(T))$ and the ultrafilter $\\vec{U}^{(j_m(T))}_{\\langle \\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,...,m\\rbrace \\rangle }$ which is an ultrafilter over $j_m(\\theta _{r})$ is defined in $M_m$ and for every $i\\in A_p\\cap \\lbrace 1,...,m\\rbrace $ , $\\kappa _i<j_m(\\theta _r)$ .", "If there is $i\\in A^{\\prime }_p\\cap \\lbrace 1,...,m\\rbrace $ such that $\\kappa _i\\ge j_m(\\theta _r)$ , then declare that the iteration is undefined.", "Otherwise, perform the ultrapower of $M_m$ by $\\vec{U}^{(j_m(T))}_{\\langle \\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,...,m\\rbrace \\rangle }$ .", "It follows that $\\kappa _{m+1}:=crit(j_{m,m+1})=j_m(\\theta _r)$ and $\\langle \\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,...,m+1\\rbrace \\rangle =\\langle \\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,...m\\rbrace \\rangle ^{}\\kappa _{m+1}\\in j_{m+1}(T)$ If $m+1\\in A^{\\prime }_p\\setminus A_p$ we perform the symmetric procedure.", "If $m+1\\in A_p\\cap A^{\\prime }_p$ , let $r,r^{\\prime }\\le ht(T)$ be such that $m+1=g(r)=g^{\\prime }(r^{\\prime })$ there are two possibilities, either $\\vec{U}^{(j_m(T))}_{\\langle \\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,...,m\\rbrace \\rangle }\\ne \\vec{U}^{(j_m(T))}_{\\langle \\kappa _j\\mid j\\in A^{\\prime }_p\\cap \\lbrace 1,...,m\\rbrace \\rangle }$ In this case, declare that the iteration is undefined.", "Otherwise $\\vec{U}^{(j_m(T))}_{\\langle \\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,...,m\\rbrace \\rangle }= \\vec{U}^{(j_m(T))}_{\\langle \\kappa _j\\mid j\\in A^{\\prime }_p\\cap \\lbrace 1,...,m\\rbrace \\rangle }$ then $j_m(\\theta _r)=j_m(\\theta _{r^{\\prime }})$ and perform the ultrapower with this measure.", "Thus for every $i\\le m$ , $\\kappa _{m+1}:=crit(j_{m,m+1})=j_m(\\theta _r)=j_m(\\theta _{r^{\\prime }})>\\kappa _i$ and $\\langle \\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,...,m+1\\rbrace \\rangle \\in j_{m+1}(T), \\ \\langle \\kappa _i\\mid i\\in A^{\\prime }_p\\cap \\lbrace 1,...,m+1\\rbrace \\rangle \\in j_{m+1}(T)$ In any case we denote $\\theta (m)=\\theta _r$ so that $\\kappa _m=j_{m-1}(\\theta (m))$ , by construction, if $m=g(r)$ then $\\theta (m)=\\theta _r$ and if $m=g^{\\prime }(r^{\\prime })$ then $\\theta (m)=\\theta _{r^{\\prime }}$ .", "If $j_p$ is defined then $\\theta (1)<j_1(\\theta (2))<...<j_{|p|-1}(\\theta (|p|))$ and since $j_{m-1}(\\theta (m))=crit(j_{m,m-1})$ , $\\theta (1)\\le \\theta (2)\\le ...\\le \\theta (|p|)$ .", "It follows that $j_p$ is a tree iteration of $\\vec{U}$ -measures.", "Proposition 2.42 Let $T$ be a $\\vec{U}$ -fat tree and fix an interweaving $p={\\langle }g,g^{\\prime }{\\rangle }$ such that $j_p$ is defined.", "Then There is a $\\vec{U}$ -fat tree, $S_p$ , with $ht(S_p)=|p|$ and for every 
$s\\in mb(S_p)$ , $s\\upharpoonright A_p,s\\upharpoonright A^{\\prime }_p\\in mb(T)$ interweave as $p$ .", "Moreover, for every $r\\in {\\rm Lev}_m(S_p)$ , if $m\\in A_p$ then $U^{(S_p)}_r=U^{(T)}_{r\\upharpoonright A_p\\cap \\lbrace 1,...,m\\rbrace }$ and if $m\\in A_p^{\\prime }$ then $U^{(s_p)}_r=U^{(T)}_{r\\upharpoonright A^{\\prime }_p\\cap \\lbrace 1,...,m\\rbrace }$ .", "We can shrink $T$ to $R$ such that $mb(R)\\in \\vec{U}_T$ and if $t,t^{\\prime }\\in mb(R)$ interweave as $p$ then $t\\cup t^{\\prime }\\in S_{p}$ If in $g^{\\prime }(1)<g(1)$ , then we can shrink $T$ to $R$ such that $mb(R)\\in \\vec{U}_T$ and for every $t\\in mb(R)$ and $\\alpha \\in {\\rm Succ}_R(\\langle \\rangle )\\cap {\\rm min}(t)$ there is $t^{\\prime }\\in mb(T)$ such that $t,t^{\\prime }$ interweave as $p$ and ${\\rm min}(t^{\\prime })=\\alpha $ .", "Proof.", "For $(1)$ , if the iteration $j_p$ is defined, then in particular for every $m$ , $j_{m,m+1}$ is the ultrapower by $U^{(j_{m}(T))}_{{\\langle }\\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,..,m\\rbrace {\\rangle }}$ or by $U^{(j_{m}(T))}_{{\\langle }\\kappa _i\\mid i\\in A^{\\prime }_p\\cap \\lbrace 1,..,m\\rbrace {\\rangle }}$ which is a measure over $j_m(\\theta _{r_{m+1}})$ for some $r_{m+1}\\le ht(T)$ .", "Since $j_p$ is defined, we can derive the ultrafilter $U_p$ from $j_p$ over $\\prod _{i=1}^{|p|}\\theta (i)$ .", "In $M_{|p|}$ we have that ${\\langle }\\kappa _1,...,\\kappa _{|p|}{\\rangle }\\upharpoonright A_p, {\\langle }\\kappa _1,...,\\kappa _{|p|}{\\rangle }\\upharpoonright A^{\\prime }_p\\in mb(j_p(T))\\text{ interweave as }p$ Then by REF .2, $R=\\lbrace \\vec{\\alpha }\\in \\prod _{i=1}^{|p|}\\theta (i)\\mid \\vec{\\alpha }\\upharpoonright A_p,\\vec{\\alpha }\\upharpoonright A^{\\prime }_p\\in mb(T)\\text{ interweave as } p\\rbrace \\in \\vec{U}_p$ .", "By construction of $j_p$ , if $m\\in A_p$ then the function $f_m(t)=U^{(T)}_{t\\upharpoonright A_p\\cap \\lbrace 1,...m-1\\rbrace }$ satisfy that the measure $j_{m-1}(f_m)({\\langle }\\kappa _1,..,\\kappa _{m-1}{\\rangle })$ is the one applied at the $m$ -th step of the iteration.", "If $m\\in A_p^{\\prime }$ define a similar function $f^{\\prime }_m$ depending on $t\\upharpoonright A_p^{\\prime }\\cap \\lbrace 1,...,m-1\\rbrace $ .", "By REF .4, there is a $\\vec{U}$ -fat tree $S_p$ such that $mb(S_p)\\subseteq R$ and $mb(S_p)\\in U_p$ .", "Then any $s\\in mb(S_p)$ is in $R$ and therefore $s\\upharpoonright A_p,\\ s\\upharpoonright A_p^{\\prime }$ interweave as $p$ .", "Moreover, for every $r\\in {\\rm Lev}_{m-1}(S_p)$ , $U^{(S_p)}_r=f_m(r)=U^{(T)}_{r\\upharpoonright A_p\\cap \\lbrace 1,..,m-1\\rbrace }$ or $U^{(S_p)}_r=f^{\\prime }_m(r)=U^{(T)}_{r\\upharpoonright A^{\\prime }_p\\cap \\lbrace 1,...,m-1\\rbrace }$ .", "To see $(2)$ , for every $\\vec{\\alpha }\\in {\\rm Lev}_{m+1}(S_p)$ define $t(\\vec{\\alpha })\\in T$ to be $\\vec{\\alpha }\\upharpoonright A_p\\cap \\lbrace 1,...,m\\rbrace $ and $t^{\\prime }(\\vec{\\alpha })=\\vec{\\alpha }\\upharpoonright A_p^{\\prime }\\cap \\lbrace 1,...,m\\rbrace $ .", "From $(1)$ it follows that if $m+1\\in A_p$ then ${\\rm Succ}_{S_p}(\\vec{\\alpha })\\in U^{(T)}_{t(\\vec{\\alpha })}$ and similarly for $m+1\\in A_p^{\\prime }$ .", "Define $R$ inductively, the levels of $S_p$ which corresponds to the first level are $g(1)$ and $g^{\\prime }(1)$ are the successors of nodes at levels $g(1)-1$ and $g^{\\prime }(1)-1$ .", "Note that at least one of $g(1),g^{\\prime }(1)$ must be 1.", "Also note that for every $\\vec{\\alpha }\\in {\\rm 
Lev}_{g(1)-1}(S_p)$ , $t(\\vec{\\alpha })={\\langle }{\\rangle }$ and that for every $\\vec{\\beta }\\in {\\rm Lev}_{g^{\\prime }(1)-1}(S_p)$ , $t^{\\prime }(\\vec{\\beta })={\\langle }{\\rangle }$ .", "Define $B_{{\\langle }{\\rangle }}=\\Delta _{\\vec{\\alpha }\\in {\\rm Lev}_{g(1)-1}(S_p)}{\\rm Succ}_{S_p}(\\vec{\\alpha }), C_{{\\langle }{\\rangle }}=\\Delta _{\\vec{\\alpha }\\in {\\rm Lev}_{g^{\\prime }(1)-1}(S_p)}{\\rm Succ}_{S_p}(\\vec{\\alpha })\\in U^{(T)}_{\\langle \\rangle }$ Let $suc_{R}(\\langle \\rangle )=B_{{\\langle }{\\rangle }}\\cap C_{{\\langle }{\\rangle }}\\in U^{(T)}_{{\\langle }{\\rangle }}$ .", "Moreover, at least one of $B_{{\\langle }{\\rangle }},C_{{\\langle }{\\rangle }}$ is simply ${\\rm Succ}_{S_p}({\\langle }{\\rangle })$ .", "Assume $r\\in {\\rm Lev}_{m}(R)$ is defined, the levels of $S_p$ which corresponds to the $m$ th level are $g(m),g^{\\prime }(m)$ (Which might be that same level), thus for every $\\vec{\\alpha }\\in {\\rm Lev}_{g(m)}(S_p)$ , $t(\\vec{\\alpha })\\in {\\rm Lev}_m(T)$ and for every $\\vec{\\beta }\\in {\\rm Lev}_{g(m^{\\prime })}(S_p)$ , $t^{\\prime }(\\vec{\\beta })\\in {\\rm Lev}_m(T)$ .", "Define ${\\rm Succ}_R(r)=\\underset{\\vec{\\alpha }\\in {\\rm Lev}_{g(m)}(S_p), t(\\vec{\\alpha })=r}{\\Delta }{\\rm Succ}_{S_p}(\\vec{\\alpha })\\cap \\underset{\\vec{\\alpha }\\in {\\rm Lev}_{g(m)}(S_p), t^{\\prime }(\\vec{\\alpha })=r}{\\Delta }{\\rm Succ}_{S_p}(\\vec{\\alpha })\\in U^{(T)}_r$ By REF .4 $mb(R)\\in U_T$ .", "If $t,t^{\\prime }\\in mb(R)$ interweave as $p$ , we prove inductively that $(t\\cup t^{\\prime })\\upharpoonright \\lbrace 1,...,k\\rbrace \\in {\\rm Lev}_k(S_p)$ .", "Clearly $(t\\cup t^{\\prime })\\upharpoonright \\lbrace 1\\rbrace ={\\langle }\\alpha {\\rangle }\\in {\\rm Lev}_1(S_p)$ , as $\\alpha \\in B_{{\\langle }{\\rangle }}\\cap C_{{\\langle }{\\rangle }}\\subseteq {\\rm Succ}_{S_p}({\\langle }{\\rangle })$ .", "Assume that $(t\\cup t^{\\prime })\\upharpoonright \\lbrace 1,..,k\\rbrace \\in {\\rm Lev}_k(S_p)$ , if $k+1\\in A_p$ , let $r$ be such that $g(r)=k+1$ , then $(t\\cup t^{\\prime })(k+1)=t(r)>(t\\cup t^{\\prime })(k)$ .", "Also $t((t\\cup t^{\\prime })\\upharpoonright \\lbrace 1,...,k\\rbrace )=t\\upharpoonright \\lbrace 1,...,r-1\\rbrace $ .", "By definition of diagonal intersection and $R$ it follows that $t(r)\\in {\\rm Succ}_{R}(t\\upharpoonright \\lbrace 1,...,r-1\\rbrace )\\subseteq {\\rm Succ}_{S_p}((t\\cup t^{\\prime })\\upharpoonright \\lbrace 1,...,k\\rbrace ))$ hence $(t\\cup t^{\\prime })\\upharpoonright \\lbrace 1,...,k+1\\rbrace \\in {\\rm Lev}_{k+1}(S_p)$ .", "The case where $k+1\\in A_p^{\\prime }$ is similar.", "To see $(3)$ , suppose that $g^{\\prime }(1)<g(1)$ .", "Define a sequence inductively, let $\\vec{\\eta }_1=\\langle \\beta _1,...,\\beta _{g(1)-1}\\rangle \\in S_p$ .", "Then by 1, ${\\rm Succ}_{S_p}(\\vec{\\eta }_1)\\in U^{(T)}_{\\langle \\rangle }$ , thus by definition of $j_T$ , $\\kappa _1\\in j_1({\\rm Succ}_{S_p}(\\vec{\\eta }_1))={\\rm Succ}_{j_1(S_p)}(\\vec{\\eta }_1)$ Consider $\\vec{\\eta }_1^{\\frown }\\langle \\kappa _1\\rangle \\in {\\rm Lev}_{g(1)}(j_1(S_p))$ , pick any $\\vec{\\eta }_2$ such that $\\vec{\\eta }_1^{\\frown }\\langle \\kappa _1\\rangle ^{\\frown }\\vec{\\eta }_2\\in {\\rm Lev}_{g(2)-1}(j_1(S_p))$ , then ${\\rm Succ}_{j_1(S_p)}(\\vec{\\eta }_1^{\\frown }\\langle \\kappa _1\\rangle ^{\\frown }\\vec{\\eta }_2)\\in j_1(\\vec{U})^{(j_1(T))}_{{\\langle }\\kappa _1{\\rangle }}\\text{ thus }\\kappa _2\\in suc_{j_2(S_p)}(\\vec{\\eta }_1^{\\frown }\\langle \\kappa 
_1\\rangle ^{\\frown }\\vec{\\eta }_2)$ continuing in this fashion we end up with a witness for the statement $M_n\\models \\exists t\\in mb(j_n(T))\\ s.t.", "\\ \\langle \\kappa _1,...,\\kappa _n\\rangle ,t\\text{ interweave as } p$ Since $\\beta _1\\in {\\rm Succ}_{S_p}(\\langle \\rangle )={\\rm Succ}_T(\\langle \\rangle )={\\rm Succ}_{j_n(T)}({\\langle }{\\rangle })\\cap \\kappa _1$ was arbitrary, it follows that $M_n\\models \\forall \\beta \\in {\\rm Succ}_{j_n(T)}(\\langle \\rangle )\\cap \\kappa _1\\exists t\\in mb(j_n(T)) \\ s.t.", "\\ min(t)=\\beta \\wedge \\langle \\kappa _1,...,\\kappa _n\\rangle ,t \\text{ interweave as }p$ By REF .2 $\\lbrace s\\in mb(T)\\mid \\forall \\beta \\in {\\rm Succ}_{T}({\\langle }{\\rangle })\\cap s_1\\exists t\\in mb(T).min(t)=\\beta \\wedge s,t\\text{ interweave as }p\\rbrace \\in U_T$ By REF .4 we can find $R$ as wanted.", "$\\blacksquare $ Proposition 2.43 Let $T$ be a $\\vec{U}$ -fat tree, and let $p={\\langle }g,g^{\\prime }{\\rangle }$ b an interweaving.", "If $j_p$ is undefined then there is $T^{\\prime }\\subseteq T$ such that $mb(T^{\\prime })\\in U_T$ and any every $t,t^{\\prime }\\in mb(T^{\\prime })$ do not interweave as $p$ .", "Proof.", "Let $m$ be the step of the iteration where we declared that $j_p$ is undefined.", "By definition, there are two cases to consider: Case 1: Assume that $m+1\\in A_p\\setminus A^{\\prime }_p$ and there is $i\\in A^{\\prime }_p\\cap \\lbrace 1,...,m\\rbrace $ such that $j_{i-1}(\\theta (i))\\ge j_m(\\theta (m+1))$ .", "Then $\\theta (i)>\\theta (m+1)$ , otherwise, $\\theta (m+1)\\ge \\theta (i)$ hence $j_{i-1}(\\theta (m+1))\\ge j_{i-1}(\\theta (i))\\ge j_m(\\theta (m+1))\\ge j_{i-1}(\\theta (m+1))$ hence $j_{i-1}(\\theta (m+1))=j_{i-1}(\\theta (i))$ and $\\theta (m+1)=\\theta (i)$ .", "But $j_{i-1}(\\theta (i))=crit(j_{i,i-1})$ $j_{i,i-1}(j_{i-1}(\\theta (m+1))>j_{i-1}(\\theta (m+1))$ hence $j_m(\\theta (m+1))\\ge j_i(\\theta (m+1))>j_{i-1}(\\theta (m+1))=>j_{i-1}(\\theta (i))$ , contradiction, thus $\\theta (i)>\\theta (m+1)$ .", "Let $r_1.r_2\\le ht(T)$ such that $g(r_1)=m+1$ and $g^{\\prime }(r_2)=i$ .", "Then $\\theta _{r_1}=\\theta (m+1)$ and $\\theta _{r_2}=\\theta (i)$ .", "The tree $T^{\\prime }$ is obtained from $T$ by shrinking ${\\rm Succ}_T(t)$ for each $t\\in Lev_{r_2-1}(T)$ such that ${\\rm min}({\\rm Succ}_{T^{\\prime }}(t))>\\theta (m+1)$ .", "To see that $T^{\\prime }$ is as wanted, assume that $s,s^{\\prime }\\in mb(T^{\\prime })$ interweave as $p$ .", "Then $s^{\\prime }(r_2)=(s\\cup s^{\\prime })(i)<(s\\cup s^{\\prime })(m+1)=s(r_1)$ On the other hand, $s^{\\prime }(r_2)\\in {\\rm Succ}_{T^{\\prime }}(s^{\\prime }\\upharpoonright \\lbrace 1,...,r_2-1\\rbrace )$ , hence $s(r_1)<\\theta (m+1)<s^{\\prime }(r_2)$ , contradiction.", "Case 2: Assume that $m+1\\in A_p\\cap A^{\\prime }_p$ and there $U^{(j_m(T))}_{{\\langle }\\kappa _i\\mid i\\in A_p\\cap \\lbrace 1,..,m\\rbrace {\\rangle }}\\ne U^{(j_m(T))}_{{\\langle }\\kappa _i\\mid i\\in A^{\\prime }_p\\cap \\lbrace 1,..,m\\rbrace {\\rangle }}$ .", "These are measures over $j_m(\\theta (m+1)),j_m(\\theta ^{\\prime }(m+1))$ respectively.", "If $\\theta (m+1)\\ne \\theta ^{\\prime }(m+1)$ , then for example $\\theta (m+1)<\\theta ^{\\prime }(m+1)$ and we can shrink $T$ as in case 1 to eliminate such an interweaving, hence assume $\\theta (m+1)=\\theta ^{\\prime }(m+1)$ .", "Consider the first $m$ steps of the iteration $j_p$ , let $A_p\\cap \\lbrace 1,..,m\\rbrace =\\lbrace g(1),..,g(k)\\rbrace , \\ A_p^{\\prime }\\cap \\lbrace 
1,..,m\\rbrace =\\lbrace g^{\\prime }(1),...,g^{\\prime }(k^{\\prime })\\rbrace $ , then $\\theta _{k+1}=\\theta (m+1)=\\theta ^{\\prime }(m+1)=\\theta _{k^{\\prime }+1}$ .", "Similar to REF .1, $M_m\\models {\\langle }\\kappa _1,...\\kappa _m{\\rangle }\\upharpoonright \\lbrace g(1),..,g(k)\\rbrace \\in {\\rm Lev}_k(j_m(T)),\\ {\\langle }\\kappa _1,...\\kappa _m{\\rangle }\\upharpoonright \\lbrace g^{\\prime }(1),..,g^{\\prime }(k^{\\prime })\\rbrace \\in {\\rm Lev}_{k^{\\prime }}(j_m(T))$ Moreover, $M_m\\models U^{(j_m(T))}_{{\\langle }\\kappa _1,...\\kappa _m{\\rangle }\\upharpoonright \\lbrace g(1),..,g(k)\\rbrace }\\ne U^{(j_m(T))}_{{\\langle }\\kappa _1,...\\kappa _m{\\rangle }\\upharpoonright \\lbrace g^{\\prime }(1),..,g^{\\prime }(k^{\\prime })\\rbrace }$ since the iteration up to $m$ is defined we can find a $\\vec{U}$ -fat tree a tree $S$ such that: ${\\langle }\\kappa _1,...,\\kappa _m{\\rangle }\\in j_m(mb(S))$ .", "For every $s\\in mb(S)$ , $s\\upharpoonright \\lbrace g(1),..,g(k)\\rbrace \\in {\\rm Lev}_k(T),\\ s\\upharpoonright \\lbrace g^{\\prime }(1),..,g^{\\prime }(k^{\\prime })\\rbrace \\in {\\rm Lev}_{k^{\\prime }}(T)$ .", "$U^{(T)}_{s\\upharpoonright \\lbrace g(1),..,g(k)\\rbrace }\\ne U^{(T)}_{s\\upharpoonright \\lbrace g^{\\prime }(1),..,g^{\\prime }(k^{\\prime })\\rbrace }$ .", "$s\\upharpoonright \\lbrace g(1),..,g(k)\\rbrace ,\\ s\\upharpoonright \\lbrace g^{\\prime }(1),..,g^{\\prime }(k^{\\prime })\\rbrace $ interweave as in ${\\langle }g\\upharpoonright \\lbrace 1,..,k\\rbrace ,g^{\\prime }\\upharpoonright \\lbrace 1,...,k^{\\prime }\\rbrace {\\rangle }$ .", "For every $s\\in {\\rm Lev}_r(S)$ , let $t(s):=s\\upharpoonright A_p\\cap \\lbrace 1,...,r\\rbrace $ and $t^{\\prime }(s):=s\\upharpoonright A^{\\prime }_p\\cap \\lbrace 1,...,r\\rbrace $ .", "Then $U^{(S)}_s$ is either $U^{(T)}_{t(s)}$ if $r+1\\in A_p$ or $U^{(T)}_{t^{\\prime }(s)}$ if $r+1\\in A^{\\prime }_p$ .= Since $T$ mentions at most $|T\\cap [\\theta _{k+1}]^{<\\omega }|\\le \\theta _{k+1}$ measure on $\\theta _{k+1}$ we can use the normality of the measures to separate them.", "Namely, for every $r\\in T$ such that $U^{(T)}_r$ is a measure on $\\theta _{k+1}$ , find $X_r\\in U^{(T)}_r$ such that if $U^{(T)}_r\\ne U^{(T)}_{r^{\\prime }}$ then $X_r\\cap X_{r^{\\prime }}=\\emptyset $ .", "Now we shrink the tree $T$ similar to REF .2, from $(5)$ it follows that if $j\\in A_p$ then for every $\\vec{\\alpha }\\in {\\rm Lev}_{j-1}(S)$ , ${\\rm Succ}_{S}(\\vec{\\alpha })\\in U^{(T)}_{t(\\vec{\\alpha })}$ and similarly for $j\\in A_p^{\\prime }$ .", "Define $R\\subseteq T$ inductively, $B_{{\\langle }{\\rangle }}=\\Delta _{\\vec{\\alpha }\\in {\\rm Lev}_{g(1)-1}(S)}{\\rm Succ}_{S}(\\vec{\\alpha }), C_{{\\langle }{\\rangle }}=\\Delta _{\\vec{\\alpha }\\in {\\rm Lev}_{g^{\\prime }(1)-1}(S)}{\\rm Succ}_{S}(\\vec{\\alpha })\\in U^{(T)}_{\\langle \\rangle }$ Let $suc_{R}(\\langle \\rangle )=B_{{\\langle }{\\rangle }}\\cap C_{{\\langle }{\\rangle }}\\in U^{(T)}_{{\\langle }{\\rangle }}$ .", "As before, at least one of $B_{{\\langle }{\\rangle }},C_{{\\langle }{\\rangle }}$ is simply ${\\rm Succ}_{S}({\\langle }{\\rangle })\\subseteq {\\rm Succ}_T({\\langle }{\\rangle })$ .", "Given $r\\in {\\rm Lev}_{j}(R)$ , define $B_{r}={\\left\\lbrace \\begin{array}{ll}\\underset{\\vec{\\alpha }\\in {\\rm Lev}_{g(j)}(S),\\ t(\\vec{\\alpha })=r}{\\Delta }{\\rm Succ}_{S}(\\vec{\\alpha }) & j<k\\\\ {\\rm Succ}_T(r)\\cap X_r & j=k\\\\ {\\rm Succ}_T(r) & j>k\\end{array}\\right.", "}$ $C_{r}={\\left\\lbrace 
\\begin{array}{ll}\\underset{\\vec{\\alpha }\\in {\\rm Lev}_{g^{\\prime }(j)}(S),\\ t^{\\prime }(\\vec{\\alpha })=r}{\\Delta }{\\rm Succ}_{S}(\\vec{\\alpha }) & j<k^{\\prime }\\\\ {\\rm Succ}_T(r)\\cap X_r & j=k^{\\prime }\\\\ {\\rm Succ}_T(r) & j>k^{\\prime }\\end{array}\\right.", "}$ Then $B_r,C_r\\in U^{(T)}_r$ and let ${\\rm Succ}_{R}(r)=B_r\\cap C_r$ .", "So by REF .4 $mb(R)\\in U_T$ .", "Let us argue that $R$ is as wanted.", "Toward a contradiction, assume that $t,t^{\\prime }\\in mb(R)$ interweave as $p$ , in particular $t\\upharpoonright \\lbrace 1,..,k\\rbrace ,t^{\\prime }\\upharpoonright \\lbrace 1,..,k^{\\prime }\\rbrace $ interweave as ${\\langle }g\\upharpoonright \\lbrace 1,..,k\\rbrace ,g^{\\prime }\\upharpoonright \\lbrace 1,..,k^{\\prime }\\rbrace {\\rangle }$ , as in the proof of REF .2, we conclude that $s=(t\\upharpoonright \\lbrace 1,..,k\\rbrace )\\cup (t^{\\prime }\\upharpoonright \\lbrace 1,..,k^{\\prime }\\rbrace )\\in mb(S)$ and by $(3)$ $U^{(T)}_{s\\upharpoonright \\lbrace g(1),..,g(k)\\rbrace }\\ne U^{(T)}_{s\\upharpoonright \\lbrace g^{\\prime }(1),..,g^{\\prime }(k^{\\prime })\\rbrace }$ .", "However, $s\\upharpoonright \\lbrace g(1),..,g(k)\\rbrace =t\\upharpoonright \\lbrace 1,..,k\\rbrace $ , $s\\upharpoonright \\lbrace g^{\\prime }(1),..,g^{\\prime }(k^{\\prime })\\rbrace =t^{\\prime }\\upharpoonright \\lbrace 1,..,k^{\\prime }\\rbrace $ hence ${\\rm Succ}_{R}(t\\upharpoonright \\lbrace 1,..,k\\rbrace )\\subseteq X_{t\\upharpoonright \\lbrace 1,..,k\\rbrace }$ is disjoint from ${\\rm Succ}_{R}(t\\upharpoonright \\lbrace 1,..,k^{\\prime }\\rbrace )\\subseteq X_{t\\upharpoonright \\lbrace 1,..,k^{\\prime }\\rbrace }$ .", "On the other hand, $p$ impose that $t(k^{\\prime }+1)=t(k+1)$ is a member of the intersection, contradiction.$\\blacksquare $ To illustrate the second problem of generalizing weak compactness, consider for example the function $f:mb(T)\\rightarrow \\kappa $ , $f(\\alpha ,\\beta )=\\alpha $ .", "No matter how we shrink $T$ to $S$ , $f\\upharpoonright mb(S)$ will be neither constant nor $1-1$ .", "However, we can ignore the coordinate $\\beta $ and obtain a $1-1$ function.", "Generally, we will argue that $f$ might depend on some of the levels of the tree and the other levels can be ignored.", "Let us formulate this precisely: Definition 2.44 Let $T$ be a tree of height $n$ .", "For every $I\\subseteq \\lbrace 1,...,n\\rbrace $ define an equivalence relation $\\sim _I$ on $mb(T)$ by $t\\sim _I t^{\\prime }\\leftrightarrow t\\upharpoonright I=t^{\\prime }\\upharpoonright I$ .", "For $f:mb(T)\\rightarrow X$ , the induced function denoted by $f_I:mb(T\\upharpoonright I)\\rightarrow X$ is the relation $\\lbrace {\\langle }t\\upharpoonright I,f(t){\\rangle }\\mid t\\in mb(T)\\rbrace $ .", "Clearly $f_I$ is a well defined function if and only if $f$ is constant of equivalence classes of $\\sim _I$ .", "For example, if $I=\\emptyset $ and $f_\\emptyset $ is well defined then $f$ is constant.", "Definition 2.45 Let $T$ be a $\\vec{U}$ -fat tree of height $n$ , and let $f:mb(T)\\rightarrow B$ be any function.", "A coordinate $i\\in \\lbrace 1,...,n\\rbrace $ is called an important coordinate for $f$ if $\\forall t_1,t_2\\in mb(T)$ , $t_1(i)\\ne t_2(i)$ implies $f(t_1)\\ne f(t_2)$ .", "The set of important coordinates for $f$ is the set $I(T,f)=\\lbrace i\\in \\lbrace 1,..,n\\rbrace \\mid i\\text{ is an important coordinate}\\rbrace $ We say that $I(T,f)$ is complete if $f_{I(T,f)}$ is well defined i.e.", "$\\forall t,t^{\\prime }\\in mb(T).", "t\\sim 
_{I(T,f)} t^{\\prime }$ implies $f(t)=f(t^{\\prime })$ .", "Also we say that $I(T,f)$ is consistent if for every $\\vec{U}$ -fat tree $S\\subseteq T$ such that $mb(S)\\in U_T$ , $I(S,f\\upharpoonright mb(S))\\subseteq I(T,f)$ .", "Remark 2.46 The structure of the tree $T$ , impose some dependency between the levels of the tree which are not related to the function.", "For example, assume that $o^{\\vec{U}}(\\kappa )=\\kappa $ and that ${\\langle }X^{(\\kappa )}_i\\mid i<\\kappa {\\rangle }$ is a discrete family for ${\\langle }U(\\kappa ,i)\\mid i<\\kappa {\\rangle }$ .", "Let $T$ be the tree of height 2 such that: ${\\rm Succ}_T({\\langle }{\\rangle })=X^{(\\kappa )}_0$ and for every $\\alpha \\in {\\rm Succ}_T({\\langle }{\\rangle })$ , ${\\rm Succ}_T({\\langle }\\alpha {\\rangle })=X^{(\\kappa )}_\\alpha $ .", "Define the function $f:mb(T)\\rightarrow \\kappa $ by $f({\\langle }\\alpha ,\\beta {\\rangle })=\\beta $ .", "Clearly, we see that the function $f$ depends only on the second coordinate i.e.", "for every ${\\langle }\\alpha ,\\beta {\\rangle },{\\langle }\\gamma ,\\delta {\\rangle }\\in mb(T)$ , $f({\\langle }\\alpha ,\\beta {\\rangle })=f({\\langle }\\gamma ,\\delta {\\rangle })\\leftrightarrow \\beta =\\delta $ and $f_{\\lbrace 2\\rbrace }$ is well defined.", "However, the structure of the tree is such that if $\\alpha \\ne \\gamma $ then $X^{(\\kappa )}_\\alpha \\cap X^{(\\kappa )}_\\gamma =\\emptyset $ and $\\beta \\ne \\delta $ , which impose that 1 is important.", "Note that in this case, by definition, $I(T,f)=\\lbrace 1,2\\rbrace $ .", "If $S\\subseteq T$ then $I(T,f)\\subseteq I(S,f\\upharpoonright mb(S))$ .", "Hence if $I(T,f)$ is complete then also $I(S,f\\upharpoonright mb(S))$ is complete, and if $I(T,F)$ is consistent, then $I(T,f)=I(S,f\\upharpoonright mb(S))$ and also $I(S,f\\upharpoonright mb(S))$ is consistent.", "Lemma 2.47 Let $T$ be a $\\vec{U}$ -fat tree on $\\theta _1\\le ...\\le \\theta _n$ and $f:mb(T)\\rightarrow B$ where B is any set.", "Then there is a $\\vec{U}$ -fat tree $T^{\\prime }\\subseteq T$ , with $mb(T^{\\prime })\\in U_T$ and $I\\subseteq \\lbrace 1,...,ht(T)\\rbrace $ such that for any $t,t^{\\prime }\\in mb(T^{\\prime })$ $t\\upharpoonright I=t^{\\prime }\\upharpoonright I \\Leftrightarrow f(t)=f(t^{\\prime })$ Before proving the lemma, let us state as a corollary the generalization we desired: Corollary 2.48 Let $T$ be a $\\vec{U}$ -fat tree on $\\theta _1\\le ...\\le \\theta _n$ and $f:mb(T)\\rightarrow B$ where B is any set.", "Then there is a $\\vec{U}$ -fat tree $T^{\\prime }\\subseteq T$ , with $mb(T^{\\prime })\\in U_T$ such that the set of important coordinates $I(T^{\\prime },f\\upharpoonright mb(T^{\\prime }))$ is complete and consistent.", "In particular $(f\\upharpoonright mb(T^{\\prime }))_{I^*}$ is well defined and $1-1$ .", "Proof of corollary REF.", "Let $I\\subseteq \\lbrace 1,..,n\\rbrace $ guaranteed from REF , then $I\\subseteq I(T^{\\prime },f\\upharpoonright mb(T^{\\prime }))$ .", "Indeed, every $i\\in I$ is important, since if $t_1,t_2\\in mb(T^{\\prime })$ , $t_1(i)\\ne t_2(i)$ then $t_1\\upharpoonright I\\ne t_2\\upharpoonright I$ , thus $f(t_1)\\ne f(t_2)$ .", "Therefore, $f_{I(T^{\\prime },f\\upharpoonright mb(T^{\\prime }))}$ is well defined, since for every $t_1,t_2\\in mb(T^{\\prime })$ , $t_1\\upharpoonright I(T^{\\prime },f\\upharpoonright mb(T^{\\prime })) \\lbrace i\\rbrace =t_2\\upharpoonright I(T^{\\prime },f\\upharpoonright mb(T^{\\prime }))$ implies that $t_1\\upharpoonright I=t_2\\upharpoonright I$ , 
hence $f(t_1)=f(t_2)$ .", "We conclude that $I(T^{\\prime },f\\upharpoonright mb(T^{\\prime }))$ is complete.", "To insure consistency, we shrink $T^{\\prime }$ even more.", "For every $i\\notin I(T^{\\prime },f\\upharpoonright mb(T^{\\prime }))$ if there is $R\\subseteq T^{\\prime }$ , $mb(R)\\in U_T$ such that $i\\in I(R,f\\upharpoonright mb(R)$ , pick any such $R$ and denote it by $R_i$ , otherwise let $R_i=T^{\\prime }$ .", "Define $X^*=\\cap _{i\\notin I(T^{\\prime },f\\upharpoonright mb(T^{\\prime }))}mb(R_i)$ .", "Clearly $mb(X^*)\\in U_T$ .", "By REF .4 there is a $\\vec{U}$ -fat tree $T^*$ such that $mb(T^*)\\subseteq X^*$ and $mb(T^*)\\in U_T$ .", "It follows that $T^*\\subseteq R_i\\subseteq T^{\\prime }$ for every $i$ .", "By REF , $ I(T^{\\prime },f\\upharpoonright mb(T^{\\prime }))\\subseteq I(T^*,f\\upharpoonright mb(T^*))$ and therefore $I(T^*,f\\upharpoonright mb(T^*))$ is also complete.", "To see it if consistent, let $S\\subseteq T^*$ , $mb(S)\\in U_T$ , and let $i\\in I(S,f\\upharpoonright mb(S))$ , then $S\\subseteq T$ , so by definition of $R_i$ , $i\\in I(R_i,f\\upharpoonright mb(R_i))$ .", "Since $T^*\\subseteq R_i$ , then $I(R_i,f\\upharpoonright mb(R_i))\\subseteq I(T^*,f\\upharpoonright mb(T^*))$ .", "$\\blacksquare $ Proof of lemma REF .", "Again we go by induction on $ht(T)$ .", "For $ht(T)=1$ it is well known.", "Assume $ht(T)=n+1$ and fix $\\alpha \\in {\\rm Lev}_1(T)$ consider the function $f_{\\alpha }:mb(T/\\langle \\alpha \\rangle )\\rightarrow B \\ \\ f_\\alpha (\\vec{\\beta })=f(\\alpha ^{\\frown }\\vec{\\beta })$ By the induction hypothesis there is $T^{\\prime }_\\alpha \\subseteq T/\\langle \\alpha \\rangle $ $mb(T^{\\prime }_\\alpha )\\in U_{T/{{\\langle }\\alpha {\\rangle }}}$ and $I_\\alpha \\subseteq \\lbrace 2,...,n+1\\rbrace $ such that $(\\star ) \\ \\ \\forall t_1,t_2\\in mb(T^{\\prime }_\\alpha ).", "t_1\\upharpoonright I_\\alpha =t_2\\upharpoonright I_\\alpha \\leftrightarrow f_\\alpha (t_1)=f_\\alpha (t_2)$ Find $H\\in U^{(T)}_{\\langle \\rangle }$ and $I^{\\prime }\\subseteq \\lbrace 2,...,n\\rbrace $ such that $I_\\alpha =I^{\\prime }$ for $\\alpha \\in H$ .", "Let $S$ be the tree with ${\\rm Lev}_1(S)=H$ and for every $\\alpha \\in H$ , $S/{{\\langle }\\alpha {\\rangle }}=T^{\\prime }_\\alpha $ , then by REF .2, $mb(S)\\in U_T$ .", "It follows that for every $t,s\\in mb(S)$ , if $t\\upharpoonright \\lbrace 1\\rbrace \\cup I^{\\prime }=s\\upharpoonright \\lbrace 1\\rbrace \\cup I^{\\prime }$ then $f(t)=f_{t(1)}(t\\upharpoonright \\lbrace 2,...,n\\rbrace )=f_{s(1)}(s\\upharpoonright \\lbrace 2,..,n\\rbrace )=f(s)$ If the implication $f(t)=f(t^{\\prime })\\rightarrow t \\upharpoonright \\lbrace 1\\rbrace \\cup I^{\\prime }=t^{\\prime }\\upharpoonright \\lbrace 1\\rbrace \\cup I^{\\prime }$ holds for every $t,t^{\\prime }\\in mb(S)$ , then we can take $I=I^{\\prime }\\cup \\lbrace 1\\rbrace $ and we are done.", "However there can still be a counter example i.e.", "$t,t^{\\prime }\\in mb(S)$ , such that $t\\upharpoonright I^{\\prime }\\cup \\lbrace 1\\rbrace \\ne t^{\\prime }\\upharpoonright I^{\\prime }\\cup \\lbrace 1\\rbrace \\wedge f(t)=f(t^{\\prime })$ Our strategy will be to go over all possible interweaving of counter examples and shrink the tree $S$ to eliminate them.", "We will see that if we fail to do so, then we can take $I=I^{\\prime }$ .", "Note that if $t(1)=t^{\\prime }(1)$ then by the construction of $S$ , $t,t^{\\prime }$ cannot be a counter example, hence a counter example is one with $t(1)\\ne t^{\\prime }(1)$ .", "Fix 
any interweaving $p={\\langle }g,g^{\\prime }{\\rangle }$ with $g(1)\\ne g^{\\prime }(1)$ , and consider the iteration, $j_p$ .", "If this iteration is undefined then by REF we can shrink $S$ such that we have eliminated this kind interweaving.", "If the iteration is defined, compare $j_p(f)(\\langle \\kappa _i\\mid i\\in A_p\\rangle ),j_p(f)(\\langle \\kappa _{j}\\mid j\\in A^{\\prime }_p\\rangle )$ .", "Suppose the interweaving is such that for some $i\\in I^{\\prime }$ , $g(i)\\ne g^{\\prime }(i)$ we claim that $(\\star \\star ) \\ \\ j_p(f)(\\langle \\kappa _i\\mid i\\in A_p\\rangle )\\ne j_p(f)(\\langle \\kappa _{j}\\mid j\\in A^{\\prime }_p\\rangle )$ Otherwise by REF .1 find $\\vec{U}$ -fat tree $S_p$ such that $(\\star \\star )$ holds for maximal branches of $S_p$ .", "Let $i$ be maximal such that $g(i)\\ne g^{\\prime }(i)$ , without loss of generality, suppose that $g^{\\prime }(i)<g(i)$ .", "Note that $q=g(i)\\in A_p\\setminus A_p^{\\prime }$ , otherwise, if $q\\in A^{\\prime }_p$ , then for some $j>i$ , $g^{\\prime }(j)=g(i)$ and therefore $g(j)>g(i)=g^{\\prime }(j)$ hence $g(j)\\ne g^{\\prime }(j)$ contradicting the maximality of $i$ .", "We construct recursively $t,r\\in S_p$ , pick any element in $s\\in {\\rm Lev}_{q-1}(S_p)$ , set $t\\upharpoonright \\lbrace 1,..,.q-1\\rbrace =s=r\\upharpoonright \\lbrace 1,..,q-1\\rbrace $ Pick $t(q)<r(q)\\in suc_{S_p}(t)$ , since $q\\notin A^{\\prime }_p$ , then $t\\upharpoonright A^{\\prime }_p\\cap \\lbrace 1,...,q\\rbrace = r\\upharpoonright A^{\\prime }_p\\cap \\lbrace 1,...,q\\rbrace $ .", "Assume that $t\\upharpoonright \\lbrace 1,..,k\\rbrace ,r\\upharpoonright \\lbrace 1,..,k\\rbrace \\in {\\rm Lev}_k(S_p)$ are defined such that $t\\upharpoonright A^{\\prime }_p\\cap \\lbrace 1,...,k\\rbrace =r\\upharpoonright A^{\\prime }_p\\cap \\lbrace 1,...,k\\rbrace $ If $k+1\\in A^{\\prime }_p$ then $U^{(S_p)}_{t\\upharpoonright \\lbrace 1,..,k\\rbrace }=U^{(S_p)}_{r\\upharpoonright \\lbrace 1,..,k\\rbrace }$ , as it depends only on $t\\upharpoonright A^{\\prime }_p\\cap \\lbrace 1,...,k\\rbrace $ .", "Thus we can choose $t(k+1)=r(k+1)\\in suc_{S_p}(t\\upharpoonright \\lbrace 1,..,k\\rbrace )\\cap suc_{S_p}(r\\upharpoonright \\lbrace 1,..,k\\rbrace )$ If $k+1\\in A_p\\setminus A^{\\prime }_p$ , pick $t(k+1)\\in {\\rm Succ}_{S_p}(t\\upharpoonright \\lbrace 1,..,k\\rbrace )$ and $r(k+1)\\in {\\rm Succ}_{S_p}(r\\upharpoonright \\lbrace 1,..,k\\rbrace )$ randomly.", "Note that in any case $t\\upharpoonright A_p^{\\prime }\\cap \\lbrace 1,..,k+1\\rbrace =r\\upharpoonright \\lbrace 1,..,k+1\\rbrace $ Eventually we obtain $t,r\\in mb(S_p)$ with $t\\upharpoonright A^{\\prime }_p=r\\upharpoonright A^{\\prime }_p=\\vec{\\alpha }^{\\prime }$ and ${\\rm min}(t)={\\rm min}(r)={\\rm min}(s)$ .", "Hence $t\\upharpoonright A_p,r\\upharpoonright A_p,\\vec{\\alpha }^{\\prime }\\in mb(S)$ , note that both $t\\upharpoonright A_p,\\vec{\\alpha }^{\\prime }$ and $r\\upharpoonright A_p,\\vec{\\alpha }^{\\prime }$ interweave as $p$ .", "Consequently, $f(t\\upharpoonright A_p)=f(\\vec{\\alpha }^{\\prime })=f(r\\upharpoonright A_p)$ This means we found a counter example with the same first coordinate which is a contradiction, concluding that $j_p(f)(\\langle \\kappa _i\\mid i\\in A_p\\rangle )\\ne j_p(f)(\\langle \\kappa _{j}\\mid j\\in A^{\\prime }_p\\rangle )$ .", "By REF .1 and REF .2 we can shrink $S$ so that for every $t,t^{\\prime }$ which interweaves as $p$ , $f(t)\\ne f(t^{\\prime })$ , in other words, we have eliminated all counter examples which interweave 
as $p$ .", "Next, consider $p$ for which $g(i)=g^{\\prime }(i)$ for every $i\\in I^{\\prime }$ .", "If $j_p(f)(\\langle \\kappa _i\\mid i\\in A_p\\rangle )= j_p(f)(\\langle \\kappa _{j}\\mid j\\in A^{\\prime }_p\\rangle )$ then we can shrink $S$ so that whenever $t,t^{\\prime }\\in mb(S)$ interweave as $p$ , $f(t)=f(t^{\\prime })$ .", "By REF .3 we can shrink $S$ further to $S^*$ so that for every $t\\in mb(S^*)$ and $\\alpha <{\\rm min}(t)$ there is $s\\in mb(S)$ so that ${\\rm min}(s)=\\alpha \\wedge t,s$ interweave as $p$ .", "We claim that we can drop 1 i.e.", "$I^{\\prime }=I$ is the set desired.", "To see this, assume that $t,t^{\\prime }\\in mb(S^*)$ .", "Without loss of generality, assume that ${\\rm min}(t^{\\prime })=\\alpha <{\\rm min}(t)$ , by the construction of $S^*$ , there is $t^{\\prime \\prime }\\in mb(S)$ such that $t,t^{\\prime \\prime }$ interweave as $p$ .", "$t\\upharpoonright I= t^{\\prime \\prime }\\upharpoonright I$ ${\\rm min}(t^{\\prime })=\\alpha ={\\rm min}(t^{\\prime \\prime })$ .", "Hence $f(t)=f(t^{\\prime })\\Leftrightarrow ^{(1)} f(t^{\\prime \\prime })=f(t^{\\prime })\\Leftrightarrow ^{(3)} t^{\\prime \\prime }\\upharpoonright I=t^{\\prime }\\upharpoonright I\\Leftrightarrow ^{(2)} t\\upharpoonright I=t^{\\prime }\\upharpoonright I$ Finally if $j_p(f)(\\langle \\kappa _i\\mid i\\in A_p\\rangle )\\ne j_p(f)(\\langle \\kappa _{j}\\mid j\\in A^{\\prime }_p\\rangle )$ then we shrink $S$ and eliminate counter examples which interweave as $p$ .", "Obviously, if we went through all possible interweaving of a counter examples and eliminated them, then $I=I^{\\prime }\\cup \\lbrace 1\\rbrace $ will be as desired.", "$\\blacksquare $ Lemma 2.49 Let $T$ and $S$ be $\\vec{U}$ -fat trees on $\\kappa _1\\le ...\\le \\kappa _n$ , $\\theta _1\\le ...\\le \\theta _m$ respectively.", "Suppose $F:mb(T)\\rightarrow \\kappa $ and $G:mb(S)\\rightarrow \\kappa $ are any functions such that $I:=I(T,F), \\ J:=I(S,G)$ are complete and consistent.", "Then there exists $\\vec{U}$ -fat subtrees $T^*,S^*$ with $mb(T^*)\\in U_T$ and $ mb(S^*)\\in U_S$ such that one of the following holds: $mb(T^*)\\upharpoonright I=mb(S^*)\\upharpoonright J$Denote $mb(T)\\upharpoonright I=\\lbrace t\\upharpoonright I\\mid t\\in mb(T)\\rbrace $ .", "and $(F\\upharpoonright mb(T^*))_{I}=(G\\upharpoonright mb(S^*))_{J}$ $Im(F\\upharpoonright mb(T^*))\\cap Im(G\\upharpoonright mb(S^*))=\\emptyset $ Proof.", "The argument is similar to product of measures version in [4].", "Fix $F,G$ , we proceed by induction on $\\langle ht(T),ht(S)\\rangle =:{\\langle }n,m{\\rangle }$ .", "Let us first deal with some trivial cases: If $I=J=\\emptyset $ i.e.", "$F,G$ are constantly $d_F,d_G$ , respectively.", "Either $d_F\\ne d_G$ and $(2)$ holds, or $d_F=d_G$ and $(1)$ holds.", "If $I=\\emptyset $ and $j_0\\in J\\ne \\emptyset $ , then $F$ constantly $d_F$ .", "If $d_F\\notin Im(G)$ then $(2)$ holds, otherwise, there is $\\vec{\\beta }\\in mb(S)$ such that $G(\\vec{\\beta })=d_F$ , remove $\\vec{\\beta }(j_0)$ from ${\\rm Lev}_{j_0}(S)$ i.e.", "define: $S^*\\upharpoonright \\lbrace 1,...,j_0-1\\rbrace :=S\\upharpoonright \\lbrace 1,...,j_0-1\\rbrace $ .", "For every $t\\in {\\rm Lev}_{j_0-1}(S)$ , define ${\\rm Succ}_{S^*}(t):={\\rm Succ}_{S}(t)\\setminus \\lbrace \\vec{\\beta }(j_0)\\rbrace $ .", "For every $t\\in {\\rm Lev}_{j_0}(S^*)$ , $S^*_t:=S_t$ By REF , $mb(S^*)\\in U_S$ .", "If $\\vec{\\beta }^{\\prime }\\in mb(S^*)$ , then $G(\\vec{\\beta }^{\\prime })\\ne d_F$ , just otherwise, $\\vec{\\beta }^{\\prime 
}\\upharpoonright J=\\vec{\\beta }\\upharpoonright J$ and in particular $\\vec{\\beta }(j_0)=\\vec{\\beta }^{\\prime }(j_0)$ , contradiction, then again $(2)$ holds.", "Similarly, if $J=\\emptyset $ and $I\\ne \\emptyset $ then we can insure $(2)$ .", "This argument include the case that one of the trees is $\\lbrace {\\langle }{\\rangle }\\rbrace $ in which case the functions are constantly $f({\\langle }{\\rangle })$ or $g({\\langle }{\\rangle })$ .", "Thus we can assume that $n,m\\ge 1$ .", "Without loss of generality, assume that $\\theta _1\\le \\kappa _1$ .", "For every $\\beta \\in {\\rm Succ}_S({\\langle }{\\rangle })$ , consider the functionNote that if $m=1$ then $S/{\\langle }\\beta {\\rangle }=\\lbrace {\\langle }{\\rangle }\\rbrace $ and $G_{\\beta }$ is constant.", "$G_{\\beta }: mb(S/{\\langle }\\beta {\\rangle })\\rightarrow X, \\ G_\\beta (\\vec{\\beta })=G(\\beta ^{}\\vec{\\beta })$ Then for every $\\beta \\in {\\rm Succ}_S({\\langle }{\\rangle })$ , $I(S/{\\langle }\\beta {\\rangle },G_\\beta )\\supseteq J\\setminus \\lbrace 1\\rbrace $ .", "Shrink ${\\rm Succ}_S({\\langle }{\\rangle })$ to stabilize $I(S/{\\langle }\\beta {\\rangle },G_\\beta )=J^*$ .", "Then $J^*=J\\setminus \\lbrace 1\\rbrace $ , since if we let $S^*$ be the tree obtained from $S$ by shrinking ${\\rm Succ}_S({\\langle }{\\rangle })$ , and $S^*/{\\langle }\\beta {\\rangle }=S/{\\langle }\\beta {\\rangle }$ , then by REF .4 $mb(S^*)\\in U_S$ .", "By coherency $I(S^*,G\\upharpoonright mb(S^*))\\subseteq J$ .", "So if $j\\in J^*$ then it follows by definition of important coordinate that $j\\in I(S^*,G)$ , hence $j\\in J$ .", "It follows now that for every $\\beta $ , $I(S/{\\langle }\\beta {\\rangle },G_\\beta )$ is complete.", "For consistency, the argument given in corollary REF applies by shrinking $S/{\\langle }\\beta {\\rangle }$ if necessary.", "To ease notation we keep denoting the shrinked tree by $S$ .", "Apply induction to $F$ and $G_\\beta $ , $I,J^*$ , to find $T^\\beta \\subseteq T,\\ S^{\\beta }\\subseteq S/{\\langle }\\beta {\\rangle }$ for which $mb(T^\\beta )\\in U_T,\\ mb(S^{\\beta })\\in U_{S/{\\langle }\\beta {\\rangle }}$ such that one of the following: $mb(T^\\beta )\\upharpoonright I=mb(S^\\beta )\\upharpoonright J^*$ and $(F\\upharpoonright mb(T^\\beta ))_I=(G_{\\beta }\\upharpoonright mb(S^\\beta ))_{J^*}$ .", "$Im(F\\upharpoonright T^\\beta )\\cap Im(G_{\\beta }\\upharpoonright mb(S^\\beta ))=\\emptyset $ .", "Denote by $i_\\beta \\in \\lbrace 1,2\\rbrace $ the relevant case.", "There is $H\\subseteq suc_S(\\langle \\rangle )$ , $H\\in U^{(S)}_{\\langle \\rangle }$ and $i^*\\in \\lbrace 1,2\\rbrace $ such that for every $\\beta \\in H$ , $i_\\beta =i^*$ .", "Let $S^*$ be the tree such that $suc_{S^*}(\\langle \\rangle )=H$ and for every $\\beta \\in H$ , $S^*/{\\langle }\\beta {\\rangle }=S^\\beta \\in \\vec{U}_{S/{\\langle }\\beta {\\rangle }}$ .", "By REF .2, $S^*\\subseteq S$ and $mb(S^*)\\in U_S$ .", "If $i^*=1$ , let $T^*=\\cup _{\\beta \\in H}T^\\beta \\subseteq T$ then $mb(T^*)\\in U_T$ .", "Argue that $1\\notin J$ and therefore $J^*=J$ .", "Indeed, fix some $\\beta _1<\\beta _2\\in H$ , Pick some $t\\in mb(T^{\\beta _1})\\cap mb(T^{\\beta _2})$ (this is possible since they are both in $U_T$ ) then $t\\upharpoonright I\\in ( mb(T^{\\beta _1})\\upharpoonright I)\\cap ( mb(T^{\\beta _2})\\upharpoonright I)$ Since for every $\\beta \\in H$ , $mb(T^\\beta )\\upharpoonright I=mb(S^\\beta )\\upharpoonright J^*$ there are $s_1\\in mb(S^{\\beta _1})$ and $s_2\\in mb(S^{\\beta 
_2})$ such that $s_1\\upharpoonright J^*=t\\upharpoonright I=s_2\\upharpoonright J^*$ .", "Hence $\\beta _1^{}s_1,\\beta _2^{}s_2\\in mb(S)$ and $G(\\beta _1^{}s_1)=G_{\\beta _1}(s_1)=(G_{\\beta _1})_{J^*}(s_1\\upharpoonright J^*)=F_I(t\\upharpoonright I)=(G_{\\beta _2})_{J^*}(s_1\\upharpoonright J^*)=G_{\\beta _2}(s_2)=G(\\beta _2^{}s_2)$ we found two maximal branches $x,y\\in mb(S)$ which differs on $\\lbrace 1\\rbrace $ such that $G(x)=G(y)$ , by the definition of important coordinates it follows that $1\\notin J$ .", "Moreover, $mb(T^*)\\upharpoonright I=mb(J^*)\\upharpoonright J$ and that $(F\\upharpoonright mb(T^*))_I=(G\\upharpoonright mb(S^*))_J$ , namely, $(1)$ holds.", "To see this, $mb(T^*)\\upharpoonright I=\\cup _{\\beta \\in H} mb(T^\\beta )\\upharpoonright I=\\cup _{\\beta \\in H} mb(S^\\beta )\\upharpoonright J^*=\\cup _{\\beta \\in {\\rm Succ}_{S^*}({\\langle }{\\rangle })}mb(S^*_{{\\langle }\\beta {\\rangle }})\\upharpoonright J=mb(S^*)\\upharpoonright J$ Also if $\\rho \\in mb(T^*)\\upharpoonright I=mb(S^*)\\upharpoonright J$ , there is $\\beta \\in H$ such that $\\rho \\in mb(T^{\\beta })\\upharpoonright I=mb(S^\\beta )\\upharpoonright J$ , hence $(G\\upharpoonright mb(S^*))_J(\\rho )=(G_{\\beta }\\upharpoonright mb(S^{\\beta }))_J(\\rho )=(F\\upharpoonright mb(T^{\\beta }))_I(\\rho )=(F\\upharpoonright mb(T^*))_I(\\rho )$ Assume $i^*=2$ .", "We repeat the same process, consider now $F_\\alpha $ for every $\\alpha \\in {\\rm Succ}_{T}({\\langle }{\\rangle })$ , we can shrink $T$ so that $I\\setminus \\lbrace 1\\rbrace =I(T/{{\\langle }\\alpha {\\rangle }},F_\\alpha )$ is complete and consistent.", "Apply induction to $F_\\alpha ,G$ .", "such that for every $\\alpha $ , we have $j_\\alpha \\in \\lbrace 1,2\\rbrace $ which correspond to $i_\\beta $ .", "We shrink ${\\rm Succ}_{T}({\\langle }{\\rangle })$ to some $W$ and stabilize $j_\\alpha $ .", "If $j^*=1$ then $1\\notin I$ , and we can find $S^*\\subseteq S$ , $T^*\\subseteq T$ such that $mb(S^*)\\in U_S$ and $mb(T^*)\\in U_T$ such that $mb(S^*)\\upharpoonright J= mb(T^*)\\upharpoonright I\\text{ and }(F\\upharpoonright mb(T^*))_I=(G\\upharpoonright mb(S^*))_J$ so $(1)$ holds.", "Assume that $j^*=2$ .", "Case 1: Assume $\\theta _1<\\kappa _1$: shrink ${\\rm Succ}_{T}({\\langle }{\\rangle })$ so that ${\\rm min}({\\rm Succ}_{T}({\\langle }{\\rangle }))>\\theta _1$ .", "Since $U_T$ is $\\kappa _1$ -complete and $|H|=\\theta _1$ , $\\cap _{\\beta \\in H}mb(T^{\\beta })\\in U_T$ .", "By REF .4 there is a $\\vec{U}$ -fat tree $T^*$ such that $mb(T^*)\\in U_T$ and $mb(T^*)\\subseteq \\cap _{\\beta \\in H}mb(T^{\\beta })$ in particular $T^*\\subseteq T$ .", "It follows that $(\\star ) \\ \\ \\ \\forall t\\in mb(T^*)\\forall s\\in mb(S^*).", "F(t)\\ne G(s)$ To see this, note that $s(1)\\in {\\rm Succ}_{S^*}({\\langle }{\\rangle })=H$ , $t\\in mb(T^{s(1)})$ and $s\\upharpoonright \\lbrace 2,...,n\\rbrace \\in mb(S^{s(1)})$ .", "Since $i^*=2$ , $Im(F\\upharpoonright mb(T^{s(1)}))\\cap Im(G_\\beta \\upharpoonright mb(S^{s(1)}))=\\emptyset $ , hence $F(t)\\ne G_{s(1)}(s\\upharpoonright \\lbrace 2,...,n\\rbrace )=G(s)$ .", "Case 2: Assume that $\\theta _1=\\kappa _1$.", "Shrink the trees $T$ and $S$ in the following way: ${\\rm Succ}_{T^{\\prime }}({\\langle }{\\rangle })=\\Delta _{\\beta \\in H} {\\rm Succ}_{T^{\\beta }}({\\langle }{\\rangle })\\in U^{(T)}_{{\\langle }{\\rangle }}, \\ {\\rm Succ}_{S^{\\prime }}({\\langle }{\\rangle })=\\Delta _{\\alpha \\in W}{\\rm Succ}_{S^{\\alpha }}({\\langle }{\\rangle })\\in 
U^{(S)}_{{\\langle }{\\rangle }}$ .", "Also for every $\\alpha \\in {\\rm Succ}_{T^{\\prime }}({\\langle }{\\rangle })$ , find a $\\vec{U}$ -fat tree $T^{\\prime }/{\\langle }\\alpha {\\rangle }$ such that $mb(T^{\\prime }/{\\langle }\\alpha {\\rangle })\\subseteq \\cap _{\\beta \\in H\\cap \\alpha } mb(T^{\\beta }/\\alpha )$ .", "In the same fashion for every $\\beta \\in {\\rm Succ}_{S^{\\prime }}({\\langle }{\\rangle })$ , find $S^{\\prime }/{\\langle }\\beta {\\rangle }$ such that $mb(S^{\\prime }/{\\langle }\\beta {\\rangle })\\subseteq \\cap _{\\alpha \\in W\\cap \\beta } mb(S^{\\alpha }/{\\langle }\\beta {\\rangle })$ .", "Then we claim the following: $(\\star \\star ) \\ \\ \\ \\forall t\\in mb(T^{\\prime })\\forall s\\in mb(S^{\\prime }).", "t(1)\\ne s(1)\\rightarrow F(t)\\ne G(s)$ To see this, assume for example that $s(1)<t(1)$ (the case $t(1)<s(1)$ is symmetric), note that $s(1)\\in {\\rm Succ}_{S^*}({\\langle }{\\rangle })=H$ , and by the definition of diagonal intersection, $t(1)\\in {\\rm Succ}_{T^{s(1)}}({\\langle }{\\rangle })$ .", "Also, $t\\upharpoonright \\lbrace 2,...,n\\rbrace \\in mb(T^{s(1)}/{\\langle }t(1){\\rangle })$ and therefore $t\\in T^{s(1)}$ .", "Clearly, $s\\upharpoonright \\lbrace 2,...,n\\rbrace \\in mb(S^{\\prime }/{\\langle }s(1){\\rangle })=mb(S^{s(1)})$ .", "Since $i^*=2$ , $Im(F\\upharpoonright mb(T^{s(1)}))\\cap Im(G_{s(1)}\\upharpoonright mb(S^{s(1)}))=\\emptyset $ , hence $F(t)\\ne G_{s(1)}(s\\upharpoonright \\lbrace 2,..,n\\rbrace )=G(s)$ .", "So we are left with the situation that $s={\\rm min}(s)={\\rm min}(t)$ .", "If $U^{(S)}_{{\\langle }{\\rangle }}\\ne U^{(T)}_{{\\langle }{\\rangle }}$ we can shrink ${\\rm Succ}_{T^*}({\\langle }{\\rangle }),{\\rm Succ}_{S^*}({\\langle }{\\rangle })$ so that they are disjoint, avoid this situation and conclude $(2)$ .", "If $U^{(T)}_{\\langle \\rangle }=U^{(S)}_{\\langle \\rangle }$ , let $A=suc_{T^{\\prime }}(\\langle \\rangle )\\cap suc_{S^{\\prime }}(\\langle \\rangle )$ .", "For every $\\alpha \\in A$ , apply the induction hypothesis to the functions $F_\\alpha ,G_\\alpha $ , $I\\setminus \\lbrace 1\\rbrace ,J\\setminus \\lbrace 1\\rbrace $ we obtain $T^\\alpha \\subseteq T/{{\\langle }\\alpha {\\rangle }}$ and $S^\\alpha \\subseteq S_{{\\langle }\\alpha {\\rangle }}$ such that $(1)$ or $(2)$ holds.", "We denote the relevant case by $r_\\alpha $ .", "Again, shrink $A$ to $A^*$ and find $r^*\\in \\lbrace 1,2\\rbrace $ so that for every $\\alpha \\in A^*$ , $r_\\alpha =r^*$ .", "Define ${\\rm Succ}_{T^*}({\\langle }{\\rangle })={\\rm Succ}_{S^*}({\\langle }{\\rangle })=A^*$ and for every $\\alpha \\in A^*$ , $T^*_{{\\langle }\\alpha {\\rangle }}=T^{\\alpha }$ and $S^*_{{\\langle }\\alpha {\\rangle }}=S^{\\alpha }$ .", "Clearly $T^*\\subseteq T$ , $S^*\\subseteq S$ and $mb(T^*)\\in \\vec{U}_T,\\ mb(S^*)\\in \\vec{U}_S$ .", "If $r^*=2$ , For every $\\alpha ^{}t\\in mb(T^*),\\ \\alpha ^{}s\\in mb(S^*)$ , we have that $r_{\\alpha }=2$ , then $F(\\alpha ^{}t)=F_\\alpha (t)\\in Im(F_\\alpha \\upharpoonright mb(T^{\\alpha }))$ and $G(\\alpha ^{}s)=G_\\alpha (s)\\in Im(G_\\alpha \\upharpoonright mb(S^{\\alpha }))$ .", "By $r_{\\alpha }=2$ , $G(\\alpha ,s)\\ne F(\\alpha ,t)$ and we have eliminated the possibility of $F(t)=G(s)$ where ${\\rm min}(s)={\\rm min}(t)$ , we conclude that $(2)$ holds.", "Finally, assume $r^*=1$ , namely that for $I\\setminus \\lbrace 1\\rbrace = I^*\\subseteq \\lbrace 2,...,ht(T)\\rbrace ,J\\setminus \\lbrace 1\\rbrace = J^*\\subseteq \\lbrace 2,...,ht(S)\\rbrace $ , and every $\\alpha 
\\in A^*$ $mb(T^\\alpha )\\upharpoonright I^*=mb(S^\\alpha )\\upharpoonright J^*\\ \\wedge \\ (F_\\alpha \\upharpoonright mb(T^\\alpha ))_{I^*}=(G_{\\alpha }\\upharpoonright mb(S^\\alpha ))_{J^*}$ It follows that $(\\triangle ) \\ \\ \\ mb(T^*)\\upharpoonright I^*\\cup \\lbrace 1\\rbrace =\\cup _{\\alpha \\in A^*}\\lbrace \\alpha \\rbrace \\times mb(T^\\alpha )\\upharpoonright I^*=\\cup _{\\alpha \\in A^*}\\lbrace \\alpha \\rbrace \\times mb(S^\\alpha )\\upharpoonright J^*=mb(S^*)\\upharpoonright J^*\\cup \\lbrace 1\\rbrace $ Moreover, for every ${\\langle }\\alpha {\\rangle }^{}\\rho \\in mb(T^*)\\upharpoonright I^*\\cup \\lbrace 1\\rbrace $ , $(\\triangle \\triangle ) \\ \\ \\ (F\\upharpoonright _{ mb(T^*)})_{I^*\\cup \\lbrace 1\\rbrace }(\\alpha ,\\rho )=(F_\\alpha \\upharpoonright _{mb(T^{\\alpha })})_{I^*}(\\rho )=(G_\\alpha \\upharpoonright _{ mb(S^{\\alpha })})_{J^*}(\\rho )=(G\\upharpoonright _{ mb(S^*)})_{J^*\\cup \\lbrace 1\\rbrace }(\\alpha ,\\rho )$ If $1\\notin I$ then 1 is not an important coordinate for $F\\upharpoonright mb(T^*)$ and by definition this means that there are $t_1,t_2\\in mb(T^*)$ such that $t_1(1)\\ne t_2(1)$ and $F(t_1)=F(t_2)$ .", "Then $t_1\\upharpoonright I\\in mb(T^{t_1(1)})\\upharpoonright I=mb(S^{(t_1(1)})\\upharpoonright J^*$ $t_2\\upharpoonright I\\in mb(T^{t_2(1)})\\upharpoonright I=mb(S^{(t_2(1)})\\upharpoonright J^*$ So there are $s_1,s_2\\in mb(S^*)$ such that $s_1(1)=t_1(1), s_2(1)=t_2(1)$ and $s_1\\upharpoonright J^*=t_1\\upharpoonright I,s_2\\upharpoonright J^*=t_2\\upharpoonright I$ .", "It follows that $G(s_1)=G_{s_1(1)}(s_1\\upharpoonright J^*)=F_{t_1(1)}(t_1\\upharpoonright I)=F(t_1)\\ne F(t_2)=F_{t_2(1)}(t_2\\upharpoonright I)=G_{s_2(1)}(s_2\\upharpoonright J^*)=G(s_1)$ So 1 is not important for $G\\upharpoonright mb(S^*)$ , hence $1\\notin J$ .", "In a similar way, we conclude that If $1\\notin J$ then $1\\notin I$ .", "In either case, from $(\\triangle ),(\\triangle \\triangle )$ we conclude that $(1)$ holds.", "$\\blacksquare $" ], [ "The proof for short sequences", "Let us return to $\\mathbb {M}[\\vec{U}]$ and use the combinatorical tools developed in the last section.", "Definition 3.1 Let $p\\in \\mathbb {M}[\\vec{U}]$ be a condition.", "A tree of extension of $p$ is a $\\vec{U}$ -fat tree $T$ on $\\theta _1\\le ...\\le \\theta _n$ , such that for every $1\\le i\\le n$ , $\\theta _i\\in \\kappa (p)$ and each $t\\in T$ is a legal extension of $p$ i.e.", "$p^{\\frown }t\\in \\mathbb {M}[\\vec{U}]$ .", "Denote by $\\xi (t),\\kappa (t)$ the ordinals such that ${\\rm Succ}_{T}(t)\\in U(\\kappa (t),\\xi (t))$ .", "If $T$ is a tree of extensions of $p$ and $T^{\\prime }\\subseteq T$ is a $\\vec{U}$ -fat tree such that $mb(T^{\\prime })\\in U_T$ then $T^{\\prime }$ is also a tree of extensions of $p$ .", "Let $p^{}\\vec{\\alpha }\\in \\mathbb {M}[\\vec{U}]$ , and for every $r\\le |\\vec{\\alpha }|=:n$ let $B_r\\in \\cap \\vec{U}(\\vec{\\alpha }(r))$ .", "Define $p^{}{\\langle }\\vec{\\alpha },\\vec{B}^{\\vec{\\alpha }}{\\rangle }:=p^{}{\\langle }\\vec{\\alpha }(1),B_1\\cap \\vec{\\alpha }(1){\\rangle }^{}....^{}{\\langle }\\vec{\\alpha }(n),B_{n}\\cap ( \\vec{\\alpha }(n-1),\\vec{\\alpha }(n)){\\rangle }$ Proposition 3.2 Let $T$ be a $\\vec{U}$ -fat tree of extensions of $p$ , and let for every $t\\in mb(T)$ , $p_t\\ge ^* p^{\\frown }t$ be a condition.", "Then there are $p^*,T^*$ and $B^s$ for $s\\in T^*\\setminus mb(T^*)$ such that: $p\\le ^* p^*$ .", "$T^*\\subseteq T$ is a $\\vec{U}$ -fat tree of extensions for $p^*$ with $mb(T^*)\\in 
U_T$ .", "$B^{s}\\in \\cap _{\\xi <\\xi (s)}U(\\kappa (s),\\xi )$ .", "For every $t\\in mb(T^*)$ $p_t\\le ^*p^{*}{\\langle }t,\\vec{B}^{t}{\\rangle }:=p^{*}{\\langle }t(1), B^{{\\langle }{\\rangle }}\\cap t(1){\\rangle }^...^{}{\\langle }t(n),B^{t\\upharpoonright \\lbrace 1,...,n-1\\rbrace }\\cap t(n){\\rangle }$ Proof.", "Assume that $T$ is on on $\\kappa _{j_1}(p)\\le ...\\le \\kappa _{j_n}(p)$ , and let us proceed by induction on $ht(T)$ .", "If $ht(T)=1$ , then for every $\\alpha \\in {\\rm Succ}_{T}({\\langle }{\\rangle })\\in U(\\kappa _{j_1}(p),\\xi ({\\langle }{\\rangle }))$ denote $p^{}\\alpha \\le ^* p_\\alpha ={\\langle }p_\\alpha \\upharpoonright \\kappa _{j_1-1}(p),{\\langle }\\alpha , B_\\alpha {\\rangle }, {\\langle }\\kappa _{j_1}(p), C_\\alpha {\\rangle }p_\\alpha \\upharpoonright (\\kappa _{j_1}(p),\\kappa ){\\rangle }$ The order $\\le ^*$ is more than $\\kappa _{j_1}(p)$ -closure in $\\mathbb {M}[\\vec{U}]\\upharpoonright (\\kappa _{j_1}(p),\\kappa )$ , so we can find $p^*_>\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\kappa _{j_1}(p),\\kappa )$ such that $p_\\alpha \\upharpoonright (\\kappa _{j_1}(p),\\kappa )\\le p^*_>$ for every $\\alpha \\in {\\rm Succ}_T({\\langle }{\\rangle })$ .", "For the lower part, shrink ${\\rm Succ}_T({\\langle }{\\rangle })$ to $H\\in U(\\kappa _{j_1}(p),\\xi ({\\langle }{\\rangle }))$ and find $p^*_<\\in \\mathbb {M}[\\vec{U}]\\upharpoonright \\kappa _{j_1-1}(p)$ such that for every $\\alpha \\in H$ , $p^*_<=p_\\alpha \\upharpoonright \\kappa _{j_1-1}(p)$ .", "Next, by normality $C:=\\Delta _{\\alpha <\\kappa _{j_1}(p)}C_\\alpha \\in \\cap \\vec{U}(\\kappa _{j_1}(p))$ Use REF to find $C^*\\subseteq C$ such that for every $\\alpha \\in C^*$ , $C^*\\cap \\alpha \\in \\vec{U}(\\alpha )$ .", "As for the $B_\\alpha $ 's, for every $\\alpha \\in H$ , $B_\\alpha \\in \\cap \\vec{U}(\\alpha )$ .", "Use ineffability and shrink $H$ to $H^{\\prime }\\in U(\\kappa _{j_1}(p),\\xi ({\\langle }{\\rangle }))$ and find a single set $X$ such that for every $\\alpha \\in H^{\\prime }$ , $X\\cap \\alpha =B_\\alpha $ , it follows that, $B^{{\\langle }{\\rangle }}:=C^*\\cap X\\in \\cap _{j<\\xi ({\\langle }{\\rangle })}U(\\kappa _{j_1}(p),j)$ .", "Set ${\\rm Succ}_{T^*}({\\langle }{\\rangle })=H^{\\prime }\\cap C^*$ and let $p\\le ^*{\\langle }p^*_{<},{\\langle }\\kappa _{j_1}(p),C^*{\\rangle },p^*_>{\\rangle }=:p^*$ To see that $p^*,B^{{\\langle }{\\rangle }},T^*$ is as wanted, let $\\alpha \\in {\\rm Succ}_{T^*}({\\langle }{\\rangle })$ .", "Since $\\alpha \\in H^{\\prime }$ , $B^{{\\langle }{\\rangle }}\\cap \\alpha =B_\\alpha \\cap C^*\\subseteq B_\\alpha $ .", "Since $\\alpha \\in H$ , $p_\\alpha \\upharpoonright \\kappa _{j_1-1}=p^*_<$ and since $\\alpha \\in {\\rm Succ}_{T}({\\langle }{\\rangle })$ , $p_\\alpha \\upharpoonright (\\kappa _{j_1},\\kappa )\\le ^*p^*_>$ .", "Finally note that $B_{j_1}(p^*)\\setminus \\alpha +1=C^*\\setminus \\alpha +1\\subseteq C_\\alpha $ Thus $p_\\alpha \\le ^*p^{*}{\\langle }\\alpha , B^{{\\langle }{\\rangle }}\\cap \\alpha {\\rangle }$ .", "Assume that $n=ht(T)>1$ , then for every $t\\in T\\setminus mb(T)$ , and for every $\\alpha \\in {\\rm Succ}_{T}(t)$ , we are given some condition $p^{}t^{}\\alpha \\le ^*p_{t^{}\\alpha }$ .", "Apply the case $ht(T)=1$ to $p^{}t$ and ${\\rm Succ}_{T}(t)$ to find $p^{}t\\le ^* p^*_t$ , ${\\rm Succ}_{T^*}(t)$ and a set $B^{t}\\in \\cap _{\\xi <\\xi (t)}U(\\kappa (t),\\xi )$ such that for for every $\\alpha \\in {\\rm Succ}_{T^*}(t)$ , $p_{t^{}\\alpha }\\le ^* p_t^{*}{\\langle }\\alpha , 
B^t\\cap \\alpha {\\rangle }$ .", "Apply induction hypothesis to $p, T\\setminus mb(T)$ , to find $p\\le ^*p^*$ , $T^*\\subseteq T\\setminus mb(T)$ and sets $B^s$ such that for every $t\\in mb(T^*)$ , $p^*_t\\le ^* p^{*}{\\langle }t,\\vec{B}^t{\\rangle }$ .", "Hence for every $\\alpha \\in {\\rm Succ}_{T^*}(t)$ , $p_t\\le ^*p_t^{*}{\\langle }\\alpha ,B^t\\cap \\alpha {\\rangle }\\le ^*p^{*}{\\langle }t,\\vec{B}^t{\\rangle }^{}{\\langle }\\alpha ,B^t\\cap \\alpha {\\rangle }=p^{*}{\\langle }t^{}\\alpha ,\\vec{B}^{t^{}\\alpha }{\\rangle }$ It follows that $p^*$ , $T^*$ and $B^{t}$ are as wanted.", "$\\blacksquare $ The following lemma is the strong Prikry property for $\\mathbb {M}[\\vec{U}]$ .", "Lemma 3.3 Let $D\\subseteq \\mathbb {M}[\\vec{U}]$ be dense open, and let $p\\in \\mathbb {M}[\\vec{U}]$ be any condition, then there is $p\\le ^* p^*$ and a tree of extensions of $p^*$ , $T$ and sets $B^s\\in \\cap _{\\xi <\\xi (s)}U(\\kappa (s),\\xi )$ for every $s\\in T\\setminus mb(T)$ such that for every $t\\in mb(T)$ , $p^{*}{\\langle }t,\\vec{B}^t{\\rangle }\\in D$ .", "Proof.", "Let $r\\le l(p)+1$ , $\\vec{\\alpha }\\in [\\kappa _r(p)]^{<\\omega }$ , such that $p^{\\frown }\\vec{\\alpha }\\in \\mathbb {M}[\\vec{U}]$ is a condition.", "Set $A^0_r(\\vec{\\alpha })=\\lbrace \\alpha \\in B_r(p)\\setminus ({\\rm max}(\\vec{\\alpha })+1)\\mid \\exists q\\ge ^* p^{\\frown }\\vec{\\alpha }^{}{\\langle }\\alpha {\\rangle }.", "\\ q\\in D\\rbrace , \\ \\ A^1_r(\\vec{\\alpha })=B_r(p)\\setminus A^0_r(\\vec{\\alpha })$ For every $i< o^{\\vec{U}}(\\kappa _r(p))$ , only one of $A^0_r(\\vec{\\alpha }),A^1_r(\\vec{\\alpha })$ is in $U(\\kappa _r(p),i)$ .", "Denote it by $A_{r,i}(\\vec{\\alpha })$ and let $C_{r,i}(\\vec{\\alpha })\\in \\lbrace 0,1\\rbrace $ such that $A_{r,i}(\\vec{\\alpha })=A_r^{C_{r,i}(\\vec{\\alpha })}(\\vec{\\alpha })$ .", "Define $A_{r,i}=\\underset{\\vec{\\alpha }\\in [\\kappa _r(p)]^{<\\omega }}{\\Delta }A_{r,i}(\\vec{\\alpha })\\cap B_{r}(p)\\in U(\\kappa _r(p),i)$ so far $A_{r,i}$ has the property that for $\\vec{\\alpha }\\in [\\kappa _r(p)]^{<\\omega }$ if $\\exists \\alpha \\in A_{r,i}$ and $p^{\\frown }\\vec{\\alpha }^{}{\\langle }\\alpha {\\rangle }\\le ^* q\\in D$ then for every $\\alpha \\in A_{r,i}$ there is $p^{}\\vec{\\alpha }^{}{\\langle }\\alpha {\\rangle }\\le ^*q\\in D$ .", "For every $\\langle \\alpha _1,...,\\alpha _{n-1}{\\rangle }\\in [\\kappa _r(p)]^{n-1}$ , define $D_{r,i}^{(1)}(\\alpha _1,...,\\alpha _{n-1},*):A_{r,i}\\rightarrow \\lbrace 0,1\\rbrace $ by $D_{r,i}^{(1)}(\\alpha _1,...,\\alpha _{n-1},\\alpha )=0 \\Leftrightarrow \\exists r\\le s\\le l(p)+1\\exists j<o^{\\vec{U}}(\\kappa _s(p)) \\ C_{s,j}(\\alpha _1,..,\\alpha _{n-1},\\alpha )=0$ Find an homogeneous set for $D^{(1)}_{r,i}$ , $A^{(1)}_{r,i}(\\alpha _1,...,\\alpha _{n-1})\\in U(\\kappa _r(p),i)$ with color $C^{(1)}_{r,i}(\\alpha _1,...,\\alpha _{n-1})$ .", "Define $A^{(1)}_{r,i}=\\underset{\\vec{\\alpha }\\in [\\kappa _r(p)]^{n-1}}{\\Delta } A^{(1)}_{r,i}(\\vec{\\alpha })\\cap B_r(p)\\in U(\\kappa _r(p),i)$ In similar fashion, define recursively for $k\\le n$ $D_{r,i}^{(k)}(\\alpha _1,...,\\alpha _{n-k},\\alpha )=0\\Leftrightarrow \\exists r\\le s\\le l(p)+1\\exists j<o^{\\vec{U}}(\\kappa ) \\ C_{s,j}^{(k-1)}(\\alpha _1,..,\\alpha _{n-k},\\alpha )=0$ find homogeneous $A_{r,i}^{(k)}(\\alpha _1,...,\\alpha _{n-k})\\in U(\\kappa _r(p),i)$ with color $C_{r,i}^{(k)}(\\alpha _1,...,\\alpha _{n-k})$ and let $A^{(k)}_{r,i}=\\underset{\\vec{\\alpha }\\in [\\kappa _r(p)]^{n-k}}{\\Delta } 
A^{(k)}_{r,i}(\\vec{\\alpha })\\cap B_r(p)\\in U(\\kappa _r(p),i)$ Eventually, set $A_{r,i,n}=\\underset{k\\le n}{\\bigcap }A^{(k)}_i, \\ A_{r,i}=\\underset{n<\\omega }{\\bigcap }A_{r,i,n}\\in U(\\kappa _r(p),i)\\text{ and } A_r=\\underset{i<o^{\\vec{U}}(\\kappa _r(p))}{\\bigcup }A_{r,i}$ Let $p\\le ^*p_1$ , where $p_1$ is obtained from $p$ by shrinking $B_r(p)$ to the set obtained from REF to $A_r$ such that for every $\\alpha \\in B_r(p_1)$ , $\\alpha \\cap B_r(p_1)\\in \\cap \\vec{U}(\\alpha )$ .", "By density, there exists $p^{\\prime }\\ge p_1$ such that $p^{\\prime }\\in D$ .", "There is $\\langle \\vec{\\alpha },\\alpha \\rangle \\in [B(p^*)]^{<\\omega }$ such that $p_1^{\\frown }\\langle \\vec{\\alpha },\\alpha \\rangle \\le ^*p^{\\prime }$ .", "Find $s_1\\le ...\\le s_n\\le r$ , $i_j\\le o^{\\vec{U}}(\\kappa _{s_j}(p))$ and $j<o^{\\vec{U}}(\\kappa _{r}(p))$ such that $\\alpha \\in A_{r,j}$ and $\\vec{\\alpha }=\\langle \\alpha _1,...,\\alpha _{n-1}\\rangle \\in \\prod ^{n-1}_{j=1}A_{s_j,i_j}$ .", "It follows that $A_{r,j}(\\vec{\\alpha })=A^0_{r,j}(\\vec{\\alpha })$ .", "Hence, $C_{r,j}(\\vec{\\alpha })=0\\Rightarrow D_{s_n,i_n}^{(1)}(\\alpha _1,..,\\alpha _{n})=0\\Rightarrow C^{(1)}_{s_n,i_n}(\\alpha _1,..,\\alpha _{n-1})=0\\Rightarrow D^{(2)}_{s_{n-1},i_{n-1}}(\\alpha _1,..,\\alpha _{n-1})=0\\Rightarrow $ $ C^{(2)}_{s_{n-1},i_{n-1}}(\\alpha _1,...,\\alpha _{n-2})=0\\Rightarrow ...\\Rightarrow D^{(n)}_{s_1,i_1}(\\alpha _1)=0\\Rightarrow C^{(n)}_{s_1,i_1}(\\langle \\rangle )=0$ Define the tree $T^{\\prime }$ : Let $s({\\langle }{\\rangle })=s_1$ , $\\xi ({\\langle }{\\rangle })=i_1$ and define ${\\rm Succ}_{T^{\\prime }}(\\langle \\rangle )=A_{s({\\langle }{\\rangle }),\\xi ({\\langle }{\\rangle })}\\cap B_{s({\\langle }{\\rangle })}(p_1)\\in U(\\kappa _{s({\\langle }{\\rangle })}(p),\\xi ({\\langle }{\\rangle }))$ Since $A_{s_1,i_1}\\subseteq A^{(n)}_{s_1,i_1}({\\langle }{\\rangle })$ is homogeneous, $D^{(n)}_{i_1}(x)=0$ for every $x\\in A_{s_1,i_1}$ .", "Hence, there are $\\kappa _{s(x)}(r)$ and $\\xi (x)$ such that $D^{(n-1)}_{s(x),\\xi (x)}(x,*)$ takes the color 0 on $A_{s(x),\\xi (x)}$ .", "Let ${\\rm Succ}_{T^{\\prime }}({\\langle }\\alpha {\\rangle })=A_{s(\\alpha ),\\xi (\\alpha )}\\cap B_{s(\\alpha )}(p_1)$ Recursively, define the other levels in a similar fashion.", "By REF , for every $t\\in mb(T^{\\prime })$ , $p_1\\le p_1^{}t\\in \\mathbb {M}[\\vec{U}]$ .", "Consider the function $t\\in mb(T^{\\prime })\\mapsto {\\langle }s(t\\upharpoonright 0),s(t\\upharpoonright 1),...,s(t\\upharpoonright n){\\rangle }$ , then by REF , we can find a $\\vec{U}$ -fat tree $T^{\\prime \\prime }\\subseteq T^{\\prime }$ , $mb(T^{\\prime \\prime })\\in U_{T^{\\prime }}$ such that ${\\langle }s(t\\upharpoonright 0),s(t\\upharpoonright 1),...,s(t\\upharpoonright n){\\rangle }$ is stabilized for $t\\in mb(T^{\\prime \\prime })$ .", "By the construction of the tree $T^{\\prime \\prime }$ , for every $t\\in mb(T^{\\prime \\prime })$ there is $p_1^{\\frown }t\\le ^* p_t$ such that $p_t\\in D$ .", "By proposition REF we can amalgamate all those $p_t$ 's and find a single $p\\le ^* p^*$ , shrink $T^{\\prime \\prime }$ to $T^*$ and find $B^s$ for $s\\in T^*\\setminus mb(T^*)$ such that for every $t\\in mb(T^*)$ , $p_t\\le ^*p^{*\\frown }{\\langle }t, \\vec{B}^t{\\rangle }$ .", "Since $D$ is open then $p^{*\\frown }{\\langle }t, \\vec{B}^t{\\rangle }\\in D$ .", "$\\blacksquare $ Proposition 3.4 Let $p\\in \\mathbb {M}[\\vec{U}]$ be a condition, $T$ a $\\vec{U}$ -fat tree of extensions of $p$ , and 
sets $B^{s}\\in \\cap _{\\xi <\\xi (s)}U(\\kappa (s),\\xi )$ for every $s\\in T\\setminus mb(T)$ such that for every $t\\in mb(T)$ , $p\\le p^{}{\\langle }t,\\vec{B}^t{\\rangle }\\in \\mathbb {M}[\\vec{U}]$ .", "Then there are $p\\le ^*p^*$ , a tree $T^*\\subseteq T$ of extensions of $p^*$ , $mb(T^*)\\in U_T$ and sets $A^s\\subseteq B^s$ , $A^s\\in \\cap _{\\xi <\\xi (s)}U(\\kappa (s),\\xi )$ such that $D_{T^*,\\vec{A}}:=\\lbrace p^{* \\frown }{\\langle }t,\\vec{A}^t{\\rangle }\\mid t\\in mb(T)\\rbrace $ is a pre-dense above $p^*$ .", "In particular, for any generic $G$ with $p^*\\in G$ , $G\\cap D_{T^*}\\ne \\emptyset $ .", "Proof.", "Assume that $T$ is on $\\kappa _{j_1}(p)\\le ...\\le \\kappa _{j_n}(p)$ and again we argue by induction on $ht(T)$ .", "Assume that $ht(T)=1$ , use REF to find $A_<\\subseteq B^{{\\langle }{\\rangle }}\\cap B_{j_1}(p)$ such that $A_<\\in \\cap _{\\xi <\\xi ({\\langle }{\\rangle })} U(\\kappa _{j_1}(p),\\xi )$ and for every $\\alpha \\in A_<$ , $\\alpha \\cap A_<\\in \\cap \\vec{U}(\\alpha )$ .", "Consider the sets $A_{\\xi ({\\langle }{\\rangle })}={\\rm Succ}_T({\\langle }{\\rangle })\\cap B_{j_1}(p)\\cap \\lbrace \\alpha <\\kappa _{j_1}(p)\\mid A_{<}\\cap \\alpha \\in \\cap \\vec{U}(\\alpha )\\rbrace \\in U(\\kappa _{j_1}(p),\\xi ({\\langle }{\\rangle }))$ $A_>=B_{j_1}(p)\\cap \\lbrace \\alpha <\\kappa _{j_1}(p)\\mid \\exists A_{\\xi ({\\langle }{\\rangle })}\\cap \\alpha \\in (\\cap \\vec{U}(\\alpha ))^+\\rbrace \\in \\bigcap _{\\xi ({\\langle }{\\rangle })<\\xi <o^{\\vec{U}}(\\kappa _{j_1}(p))}U(\\kappa _{j_1}(p),\\xi )$ Let $p\\le ^*p^*$ be the condition obtained from $p$ by shrinking $B_{j_1}(p)$ to $B_{j_1}(p^*):=A_<\\cup A_{\\xi ({\\langle }{\\rangle })}\\cup A_>$ let $A^{{\\langle }{\\rangle }}:=A_<$ and shrink ${\\rm Succ}_{T}({\\langle }{\\rangle })$ to ${\\rm Succ}_{T^*}({\\langle }{\\rangle }):=A_{\\xi ({\\langle }{\\rangle })}$ .", "Clearly, $T^*$ is a tree of extension for $p^*$ as for every $\\alpha \\in {\\rm Succ}_{T^*}({\\langle }{\\rangle })$ , $A_<\\cap \\alpha \\in \\cap \\vec{U}(\\alpha )$ and $A_<\\cap \\alpha \\subseteq B_{j_1}(p^*)\\cap \\alpha $ .", "To see that $p^*,T^*,A^{{\\langle }{\\rangle }}$ are as wanted, let $p^*\\le q$ .", "Let $\\vec{\\alpha }$ be such that $p^{*}\\vec{\\alpha }\\le ^* q$ .", "Without loss of generality, assume that $\\vec{\\alpha }\\in [(\\kappa _{j_1-1}(p),\\kappa _{j_1}(p))]^n$ and let $X_i$ denote the sets of the pairs ${\\langle }\\vec{\\alpha }(i),X_i{\\rangle }$ and ${\\langle }\\kappa _{j_1}(p),X{\\rangle }$ appearing in $q$ .", "If $\\vec{\\alpha }\\in [ A_<]^n$ , since $X\\in \\cap \\vec{U}(\\kappa _{j_1}(p))$ , then $X^*:=X\\cap {\\rm Succ}_{T^*}({\\langle }{\\rangle })\\cap \\lbrace \\alpha \\mid \\alpha \\cap X\\in \\cap \\vec{U}(\\alpha )\\rbrace \\in U(\\kappa _{j_1}(p)),\\xi ({\\langle }{\\rangle }))$ In particular $X^*$ is unbounded and we can find $\\alpha \\in X^*\\setminus {\\rm max}(\\vec{\\alpha })+1$ .", "It follows that $p^{*}{\\langle }\\alpha , A^{{\\langle }{\\rangle }}\\cap \\alpha {\\rangle }\\in D_{T^*\\vec{A}}$ .", "We claim that $q,p^{*}{\\langle }\\alpha , A^{{\\langle }{\\rangle }}\\cap \\alpha {\\rangle }\\le q^{\\prime }$ , where $q^{\\prime }=p^{*}{\\langle }\\vec{\\alpha }(1),X_1\\cap A_<{\\rangle }^{}...^{}{\\langle }\\vec{\\alpha }(n),X_{n}\\cap A_<{\\rangle }^{}{\\langle }\\alpha , X\\cap A_{<}\\cap \\alpha {\\rangle }$ Indeed, for every $\\beta \\in A_<$ , $\\beta \\cap A_<\\in \\cap \\vec{U}(\\beta )$ .", "In particular for every $i$ , $\\vec{\\alpha }(i)\\cap 
A_<\\in \\cap \\vec{U}(\\vec{\\alpha }(i))$ , thus $X_i\\cap A_<\\in \\cap \\vec{U}(\\vec{\\alpha }(i))$ .", "Also by definition of $X^*$ , $\\alpha \\cap X\\in \\cap \\vec{U}(\\alpha )$ and by definition of ${\\rm Succ}_{T^*}({\\langle }{\\rangle })$ , $A_<\\cap \\alpha \\in \\cap \\vec{U}(\\alpha )$ .", "By REF , $q\\le q^{\\prime }$ and $p^{*}{\\langle }\\alpha ,A^{{\\langle }{\\rangle }}\\cap \\alpha {\\rangle }\\le q^{\\prime }$ .", "If there is $j\\le n$ such that $\\vec{\\alpha }(j)\\notin A_<$ , let $r$ be the minimal such $j$ .", "Since $\\vec{\\alpha }(r)\\in B_{j_1}(p)$ , there are two cases here, either $\\vec{\\alpha }(r)\\in A_{\\xi ({\\langle }{\\rangle })}$ or $\\vec{\\alpha }(r)\\in A_>$ .", "If $\\vec{\\alpha }(r)\\in A_{\\xi ({\\langle }{\\rangle })}={\\rm Succ}_{T^*}({\\langle }{\\rangle })$ , then $p^{*}{\\langle }\\vec{\\alpha }(r),A^{{\\langle }{\\rangle }}\\cap \\alpha {\\rangle }\\in D_{T^*,\\vec{A}}$ and we claim that $p^{*}{\\langle }\\vec{\\alpha }(r),A^{{\\langle }{\\rangle }}\\cap \\vec{\\alpha }(r){\\rangle },q\\le q^{\\prime }$ where $q^{\\prime }=p^{*}{\\langle }\\vec{\\alpha }(1),X_1\\cap A_<{\\rangle }^{}...^{}{\\langle }\\vec{\\alpha }(r),A_<\\cap X_r{\\rangle }^{}{\\langle }\\vec{\\alpha }(r+1),X_{r+1}{\\rangle }^{}...^{}{\\langle }\\vec{\\alpha }(n),X_{n}{\\rangle }$ By minimality of $r$ , $\\vec{\\alpha }(i)\\in A_<$ for every $i<r$ and the same argument as before justifies that, $X_i\\cap A_<\\in \\cap \\vec{U}(\\vec{\\alpha }(i))$ .", "Since $\\vec{\\alpha }(r)\\in A_{\\xi ({\\langle }{\\rangle })}$ , by definition we have that $A_<\\cap \\vec{\\alpha }(r)\\in \\cap \\vec{U}(\\vec{\\alpha }(r))$ , hence $X_r\\cap A_<\\in \\cap \\vec{U}(\\vec{\\alpha }(r))$ , then again we use REF .", "Finally, if $\\vec{\\alpha }(r)\\in A_>$ , then $A_{\\xi ({\\langle }{\\rangle })}\\cap \\vec{\\alpha }(r)\\in (\\cap \\vec{U}(\\vec{\\alpha }(r)))^+$ .", "In particular $X^*:=A_{\\xi ({\\langle }{\\rangle })}\\cap X_r\\cap \\lbrace \\alpha \\mid \\alpha \\cap X_r\\in \\cap \\vec{U}(\\alpha )\\rbrace \\in (\\cap \\vec{U}(\\vec{\\alpha }(r)))^+$ hence there is $\\alpha \\in X_r\\cap A_{{\\langle }\\xi ({\\langle }{\\rangle })}\\setminus \\vec{\\alpha }(r-1)+1$ .", "This time, the witness for the compatibility of $p^{*}{\\langle }\\alpha , A^{{\\langle }{\\rangle }}\\cap \\alpha {\\rangle },q$ will be $q^{\\prime }=p^{*}{\\langle }\\vec{\\alpha }(1),X_1\\cap A_<{\\rangle }^{}...^{}{\\langle }\\vec{\\alpha }(r-1),A_<\\cap X_{r-1}{\\rangle }^{}{\\langle }\\alpha , X_r\\cap A_<\\cap \\alpha {\\rangle }^{}{\\langle }\\vec{\\alpha }(r),X_{r}\\setminus \\alpha {\\rangle }^{}...^{}{\\langle }\\vec{\\alpha }(n),X_{n}{\\rangle }$ This conclude the case $ht(T)=1$ .", "Let $T$ be such that $n=ht(T)>1$ , for every $s\\in T\\setminus mb(T)$ , apply the case $n=1$ to ${\\rm Succ}_{T}(s)$ and the condition $p^{}s$ to find $p^{}s\\le p^*_s,\\text{ a set } A^{s}\\subseteq B^{s},\\text{ and } {\\rm Succ}_{T^*}(s)\\subseteq {\\rm Succ}_T(s), \\ {\\rm Succ}_{T^*}(s)\\in U(\\kappa _{j_n}(p),\\xi (s))$ such that $\\lbrace p_s^{*}{\\langle }\\alpha , A^s\\cap \\alpha \\mid \\alpha \\in {\\rm Succ}{T^*}(s)\\rbrace $ is pre-dense above $p^*_s$ .", "Apply REF , and find a condition $p\\le ^* p_1$ , $T_1\\subseteq T\\setminus mb(T)$ , $mb(T_1)\\in U_{T\\setminus mb(T)}$ and sets $C^s\\in \\cap _{\\xi <\\xi (s)}U(\\kappa (s),\\xi )$ such that for every $t\\in mb(T_1)$ , $p^*_t\\le ^*p_1^{}{\\langle }t,\\vec{C}^t{\\rangle }$ .", "Now apply induction hypothesis to $p_1$ , $T_1$ and the sets 
$B^{s}\\cap C^{s}$ , find $p_1\\le ^* p^*$ and $T^*\\upharpoonright \\lbrace 1,...,n-1\\rbrace \\subseteq T_1$ and sets $A^s$ such that $\\lbrace p^{*}{\\langle }s,\\vec{A}^s{\\rangle }\\mid s\\in mb(T^*\\upharpoonright \\lbrace 1,...,n-1\\rbrace )\\rbrace $ is pre-dense.", "Let us prove that above $p^*$ , $\\lbrace p^{*} {\\langle }t,\\vec{A}^t{\\rangle }\\mid t\\in mb(T^*)\\rbrace $ is pre-dense above $p^*$ .", "Let $p^*\\le q$ , then there is $s\\in mb(T^*\\upharpoonright \\lbrace 1,..,n-1\\rbrace )$ such that $p^{*}{\\langle }s,\\vec{A}^s{\\rangle }$ and $q$ are compatible via some $q^{\\prime }$ .", "Since $A^s\\subseteq C^s$ , it follows that $p^{*}_s\\le ^* p_1^{}{\\langle }s,\\vec{C}^s{\\rangle }\\le ^* p^{*}{\\langle }s,\\vec{A}^s{\\rangle }\\le q^{\\prime }$ Therefore, there is $\\alpha \\in {\\rm Succ}_{T^*}(s)$ such that $p_s^{*}{\\langle }\\alpha ,A^s\\cap \\alpha {\\rangle },q^{\\prime }$ are compatible via $q^{\\prime \\prime }$ .", "It follows that $p_s^{*}{\\langle }\\alpha ,A^s\\cap \\alpha {\\rangle }\\le q^{\\prime \\prime }$ and also $p^{*} {\\langle }s,\\vec{A}^s{\\rangle }\\le q^{\\prime }\\le q^{\\prime \\prime }$ .", "So ${\\langle }\\alpha ,A^s\\cap \\alpha {\\rangle }$ can be added to $p^{*}{\\langle }s,\\vec{A}^s{\\rangle }$ and $p^{*}{\\langle }s^{}\\alpha ,\\vec{A}^{s^{}\\alpha }{\\rangle }=p^{*}{\\langle }s,\\vec{A}^s{\\rangle }^{}{\\langle }\\alpha , A^s\\cap \\alpha {\\rangle }\\le q^{\\prime \\prime }$ .", "We conclude that $q^{\\prime \\prime }$ is a witness for the compatibility of $q$ and $p^{*}{\\langle }s^{}\\alpha ,\\vec{A}^{s^{}\\alpha }{\\rangle }$ .$\\blacksquare $ We will often have a two conditions $p\\le ^* p^*$ and a tree of extensions $T$ of $p$ as in REF , so there are sets $B^t$ such that $D_{T,\\vec{B}}$ is pre-dense above $p$ .", "We would like to remove some of the branches in $T$ to get a tree of extensions of $p^*$ , $T^*\\subseteq T$ , such that $D_{T^*,\\vec{B}}$ is pre-dense above $p^*$ .", "$T^*$ can simply be defined as: $T^*=\\lbrace t\\in mb(T)\\mid p^{*}{\\langle }t,\\vec{B}^t{\\rangle }\\in mb(T)\\rbrace $ It is no hard to check that $T^*$ is a $\\vec{U}$ -fat tree and $mb(T^*)\\in U_T$ .", "To see that $D_{T^*,\\vec{B}}$ is pre-dense, let $p^*\\le q$ then there is $t\\in mb(T)$ such that $p^{}{\\langle }t,\\vec{B}^t{\\rangle },q$ are compatible via a condition $q^{\\prime \\prime }$ .", "Since $t$ appears in $q^{\\prime \\prime }$ and $p^*\\le q\\le q^{\\prime \\prime }$ , it follows by REF that $t\\in mb(T^*)$ and $p^{*}{\\langle }t,\\vec{B}^t{\\rangle },q\\le q^{\\prime \\prime }$ .", "Corollary 3.5 Let $p\\in \\mathbb {M}[\\vec{U}]$ and $\\langle \\lambda ,B\\rangle $ in the steam of p. 
Consider the decomposition, $p=\\langle q,r\\rangle $ , where $q\\in \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda \\wedge r\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ .", "Let $\\underaccent{\\sim }{x}$ be a $\\mathbb {M}[\\vec{U}]$ -name for an ordinal.", "Then there is $r\\le ^*r^*\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ such that for any $q\\le q^{\\prime }\\in \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda $ if there exist $r^*\\le r^{\\prime }\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ such that $\\langle q^{\\prime },r^{\\prime }\\rangle \\ || \\underaccent{\\sim }{x}$ then there is a tree of extensions of $r^*$ , $T/{q^{\\prime }}$ , and sets $B^{t,q^{\\prime }}$ such that $D_{T_{q^{\\prime }},\\vec{B}^{t,q^{\\prime }}}$ is pre-dense above $r^*$ and $\\forall t\\in mb(T/{q^{\\prime }}).", "\\ {\\langle }q^{\\prime },r^{*\\frown }{\\langle }t,\\vec{B}^t{\\rangle }{\\rangle }\\ || \\underaccent{\\sim }{x}$ Proof.", "For every $q\\in \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda $ , let $D_q=\\Big \\lbrace p^{\\prime }\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )\\ \\big | \\ (\\langle q,p^{\\prime }\\rangle ||\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}})\\vee (\\forall p^{\\prime \\prime }\\ge p^{\\prime }.", "{\\langle }q,p^{\\prime \\prime }{\\rangle }\\text{ does not decide }\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}})\\Big \\rbrace $ Clearly, $D_q\\subseteq \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ is dense open, hence by the strong Prikry property, there is $r\\le ^* r_q$ , a tree of extensions $T^{\\prime }_q$ and sets $A^{s,q}$ for $s\\in T^{\\prime }_q\\setminus mb(T^{\\prime }_q)$ such that for every $t\\in mb(T^{\\prime }_q)$ , $r_q^{}{\\langle }t,\\vec{A}^{t,q}{\\rangle }\\in D_q$ .", "For each $t\\in mb(T^{\\prime }_q)$ one of the following holds: $ {\\langle }q,r_q^{}{\\langle }t,\\vec{A}^{t,q}{\\rangle }{\\rangle }||\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}$ .", "$\\forall p^{\\prime \\prime }\\ge r_q^{}{\\langle }t,\\vec{A}^{t,q}_<{\\rangle }.", "{\\langle }q,p^{\\prime \\prime }{\\rangle }\\text{ does not decide }\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}$ .", "Denote by $i_{t}\\in \\lbrace 1,2\\rbrace $ the case which holds.", "This defines a function $g:mb(T^{\\prime }_q)\\rightarrow \\lbrace 1,2\\rbrace $ .", "Apply REF , shrink $T^{\\prime }_q$ to $T^{\\prime \\prime }_q$ and find $i^*\\in \\lbrace 1,2\\rbrace $ such that for every $t\\in mb(T^{\\prime \\prime }_q)$ $i_{t}=i^*$ .", "Finally, apply REF , extend $r_q$ to $r^{*}_q$ shrink $T^{\\prime \\prime }_q$ to $T^*_q$ and find sets $B^{s,q}\\subseteq A^{s,q}$ so that $D_{T,\\vec{B}^q}=\\lbrace r^{*}_q{\\langle }s,\\vec{B}^{s,q}{\\rangle }\\mid s\\in mb(T^*_q)\\rbrace $ is pre-dense above $r^*_q$ .", "There is sufficient $\\le ^*$ -closure in $\\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ to find a single $r^*$ such that $r_q\\le ^* r^*$ for every $q\\in \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda $ .", "Let us prove that $r^*$ is as wanted.", "We can shrink the trees $T^*_q$ to $T_q$ as in the discussion before REF , to be extension trees of $r^*$ such that $D_{r^*,\\vec{B}^q}$ is pre-dense.", "To see that $r^*,T_q,B^{s,q}$ are as wanted, let $q^{\\prime }\\ge q$ and assume that there is $r^{\\prime }\\ge r^*$ such that ${\\langle }q^{\\prime },r^{\\prime }{\\rangle }||\\smash{\\underset{\\raisebox 
{1.2pt}[0cm][0cm]{\\sim }}{{x}}}$ .", "Since the set $\\lbrace r^{*}{\\langle }t,\\vec{B}^{t,q}{\\rangle }\\mid t\\in mb(T_{q^{\\prime }})\\rbrace $ is pre-dense above $r^*$ , there is $t\\in mb(T_{q^{\\prime }})$ such that $r^{*}{\\langle }t,\\vec{B}^{t,q}{\\rangle },r^{\\prime }$ are compatible.", "In particular, there is $r^{\\prime \\prime }\\ge r^{*}{\\langle }t,\\vec{B}^{t,q}{\\rangle }$ such that ${\\langle }q^{\\prime },r^{\\prime \\prime }{\\rangle }||\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}$ , indicating that $i^*=i_{t}=1$ .", "Hence for every $s\\in mb(T^*_{q^{\\prime }})$ , $i_{s}=1$ , thus ${\\langle }q^{\\prime },r^{*}{\\langle }s,\\vec{B}^{s,q}{\\rangle }{\\rangle }||\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}$ $\\blacksquare $ The next lemma is the first step toward theorem REF .", "Recall the inductive hypothesis $(IH)$ : for every $\\mu <\\kappa $ and every coherent sequence $\\vec{W}$ with maximal element $\\mu $ , every $V$ -generic $G_\\mu \\subseteq \\mathbb {M}[\\vec{W}]$ and $X\\in V[G]$ there is $C^{\\prime }\\subseteq C_{G_\\mu }$ such that $V[X]=V[C^{\\prime }]$ .", "Lemma 3.6 Let $G\\subseteq \\mathbb {M}[\\vec{U}]$ be $V$ -generic filter and assume $(IH)$ .", "Let $A\\in V[G]$ be such that $|A|<\\kappa $ .", "Then there exists $C^{\\prime }\\subseteq C_G$ , $|C^{\\prime }|\\le |A|$ , such that $V[A]=V[C^{\\prime }]$ .", "Proof.", "Let $A=\\langle a_i \\mid i<\\lambda \\rangle $ where $\\lambda =|A|<\\kappa $ be an enumeration of $A$ .", "In $V$ , pick a sequence of $\\mathbb {M}[\\vec{U}]$ -names for $A$ , $\\langle \\underaccent{\\sim }{a}_i\\mid i<\\lambda \\rangle $ .", "We proceed by a density argument, let $p\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ be any condition, using lemma REF , find a $\\le ^*$ -increasing sequence $\\langle p_i \\mid i<\\lambda \\rangle $ above $p$ and maximal antichains $Z_i\\subseteq \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda $ such that for every $q\\in Z_i$ there is a $\\vec{U}$ -fat tree $T_{q,i}$ and sets $B^{s,q}_i$ such that any extension of $p_i$ from $mb(T_{q,i})$ together with $q$ and the sets $B^{s,q}_i$ decides $\\underaccent{\\sim }{a}_i$ , and the set $D_{T_{q,i},\\vec{B}^q_i}:=\\lbrace p_i^{}{\\langle }t,\\vec{B}^{t,q}_i{\\rangle }\\mid t\\in mb(T_{q,i})\\rbrace $ is pre-dense above $p_i$ .", "The forcing $\\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ is sufficiently $\\le ^*$ -closure to find $p^{\\prime }$ such that for every $i<\\lambda , \\ p_i\\le ^* p^{\\prime }$ .", "Define the function $F_{q,i}:mb(T_{q,i})\\rightarrow On$ by: $F_{q,i}(t)=\\gamma \\ \\ \\ \\Leftrightarrow \\ \\ \\ {\\langle }q,p^{*}{\\langle }t, \\vec{B}^{t,q}_i{\\rangle }{\\rangle }\\Vdash \\underaccent{\\sim }{a}_i=\\check{\\gamma }$ By lemma REF , we can find $T^{\\prime }_{q,i}\\subseteq T_{q,i}$ , $mb(T^{\\prime }_{q,i})\\in U_{T_{q,i}}$ such that $I_{q,i}:=I(T^{\\prime }_{q,i},F_{q_i}\\upharpoonright mb(T^{\\prime }_{q,i}))$ is complete and consistent.", "For any $q,q^{\\prime }\\in Z_i$ apply lemma REF to the functions $F_{q,i},F_{q^{\\prime },i}$ and shrink $T^{\\prime }_{q,i},T^{\\prime }_{q^{\\prime },i}$ to $T^{q,q^{\\prime }}_{q,i},T^{q,q^{\\prime }}_{q^{\\prime },,i}$ , $mb(T^{q,q^{\\prime }}_{q,i})\\in U_{T_{q,i}},mb(T^{q,q^{\\prime }}_{q^{\\prime },i})\\in U_{T_{q^{\\prime },i}}$ so that either $mb(T^{q,q^{\\prime }}_{q,i})\\upharpoonright I_{q,i}=mb(T^{q,q^{\\prime }}_{q^{\\prime },i})\\upharpoonright I_{q^{\\prime },i}$ and 
$(F_{q,i}\\upharpoonright mb(T^{q,q^{\\prime }}_{q,i}))_{I_{q,i}}=(F_{q^{\\prime },i}\\upharpoonright mb(T^{q,q^{\\prime }}_{q^{\\prime },i}))_{I_{q^{\\prime },i}}$ .", "$Im(F_{q,i}\\upharpoonright mb(T^{q,q^{\\prime }}_{q,i}))\\cap Im(F_{q^{\\prime },i}\\upharpoonright mb(T^{q,q^{\\prime }}_{q^{\\prime },i}))=\\emptyset $ .", "The ultrafilter $U_{T_{q,i}}$ is sufficiently closed to ensure that $X^*_q=\\cap _{q^{\\prime }\\in \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda }mb(T^{q,q^{\\prime }}_{q,i})\\in U_{T_{q,i}}$ and by REF there is a $\\vec{U}$ -fat tree $T^{\\prime }_{q,i}\\subseteq T_{q,i}$ such that $mb(T^{\\prime }_{q,i})\\subseteq X^*_q,$ and $mb(T^{\\prime }_{q,i})\\in U_{T_{q,i}}$ .", "By REF , there is $p^{\\prime }\\le ^*p^*_q$ , $T^*_{q,i}$ and $A^{s,q}_i\\subseteq B^{s,q}_i$ such that $D_{T^*_{q,i},\\vec{A}^{q}_i}$ is dense above $p^*_q$ .", "Since $|\\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda |$ is small enough there is a single $p^*\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ such that $p^*_{q,}\\le ^* p^*$ for every $q\\in \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda $ .", "Restrict the trees to this condition $p^*$ as in the discussion before REF , so that $D_{T^*_q,\\vec{B}^q_i}$ are pre-dense above $p^*$ .", "we abuse notation here by keeping the same notation after the restriction.", "Denote by $G=G_<\\times G_>$ so that $G_<\\subseteq \\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda $ is $V$ -generic and $G_>\\subseteq \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ is $V[G_<]$ -generic.", "By density, find $p^*\\in G_>$ as above.", "For every $i<\\lambda $ , since $Z_i$ is a maximal antichain, there is $q_i$ such that $G_<\\cap Z_i=\\lbrace q_i\\rbrace $ .", "Since $D_{T^*_{q_i,i},\\vec{A}^{q_i}_i}$ is pre-dense above $p^*$ , find $t_i\\in mb(T^*_{q_i,q})$ such that $p^{*\\frown }{\\langle }t_i,\\vec{A}^{t_i,q_i}_i{\\rangle }\\in G_>$ , define $C_i=t_i\\upharpoonright I_{q_i,i}$ and let $C^{\\prime }=\\underset{i<\\lambda }{\\bigcup }C_i\\subseteq C_{G_>}$ .", "Clearly $|C^{\\prime }|\\le \\lambda =|A|$ .", "Let us prove that $\\langle C_i\\mid i<\\lambda \\rangle \\in V[A]$ .", "Indeed, define in $V[A]$ the sets $M_i=\\lbrace q\\in Z_i\\mid a_i\\in Im(F_{q,i})\\rbrace $ then, for any $q,q^{\\prime }\\in M_i$ , $a_i\\in Im(F_{q_i})\\cap Im(F_{q^{\\prime },i})\\ne \\emptyset $ .", "Hence $(1)$ must hold for $F_{q,i},F_{q^{\\prime },i}$ i.e.", "$mb(T^*_{q,i})\\upharpoonright I_{q,i}=mb(T^*_{q^{\\prime },i})\\upharpoonright I_{q^{\\prime },i}\\wedge (F_{q,i}\\upharpoonright mb(T^*_{q,i}))_{I_{q,i}}=(F_{q^{\\prime },i}\\upharpoonright mb(T^*_{q^{\\prime },i}))_{I_{q^{\\prime },i}}$ This means that no matter how we pick $q_i^{\\prime }\\in M_i$ , we will end up with the same function $(F_{q^{\\prime }_i,i}\\upharpoonright nb(T^*_{q^{\\prime }_i,i})_{I_{q_i^{\\prime },i}}$ and the same important values $mb(T^*_{q^{\\prime }_i,i})\\upharpoonright I_{q_i,i}$ .", "In $V[A]$ , choose any $q^{\\prime }_i\\in M_i$ , let $D_i^{\\prime }\\in F_{q_i^{\\prime },i}^{-1^{\\prime \\prime }}\\lbrace a_i\\rbrace \\cap mb(T^*_{q_i^{\\prime },i})$ and $C_i^{\\prime }=D^{\\prime }_i\\upharpoonright I_{q_i^{\\prime },i}$ .", "Since $q_i,q^{\\prime }_i\\in M_i$ we have $ C_i=C_i^{\\prime }$ , hence $\\langle C_i\\mid i<\\lambda \\rangle \\in V[A]$ .", "In order the reconstruct $A$ from the union $C^{\\prime }$ we still have to code some information from the part of $G_<$ , namely, $\\lbrace q^{\\prime }_i\\mid i<\\lambda \\rbrace ,\\langle 
Ind(C_i,C^{\\prime })\\mid i<\\lambda \\rangle \\in V[A]$ .", "These sets can be coded as a subset of ordinals below $(2^{\\lambda })^+$ , by REF .6 $\\lbrace q^{\\prime }_i\\mid i<\\lambda \\rbrace ,\\langle Ind(C_i,C^{\\prime })\\mid i<\\lambda \\rangle \\in V[G_<]$ By the induction hypothesis applied to $G_<$ , we can find $C^{\\prime \\prime }\\subseteq C_{G_<}$ such that $V[\\lbrace q^{\\prime }_i\\mid i<\\lambda \\rbrace ,\\langle Ind(C_i,C^{\\prime })\\mid i<\\lambda \\rangle ]=V[C^{\\prime \\prime }]$ Also $|C^{\\prime \\prime }|\\le |C_{G_<}|\\le \\lambda $ hence $C:=C^{\\prime }\\uplus C^{\\prime \\prime }$ is of cardinality at most $\\lambda $ .", "Note that $C^{\\prime },C^{\\prime \\prime }\\in V[C]$ as $C^{\\prime \\prime }=C\\cap \\lambda , \\ C^{\\prime }=C\\setminus \\lambda $ .", "Finally, all the information about the function $F_{q,i}$ needed to restore $A$ is coded in $C^{\\prime },C^{\\prime \\prime }$ .", "Namely, $A=\\lbrace (F_{q^{\\prime }_i,i})_{I_{q_i,i}}(C^{\\prime }\\upharpoonright Ind(C_i,C^{\\prime }))\\mid i<\\lambda \\rbrace $ .", "Hence $V[A]=V[C]$ .", "$\\blacksquare $ Corollary 3.7 Suppose that $p\\in \\mathbb {M}[\\vec{U}]$ and $\\underaccent{\\sim }{x}$ is a name such that $p\\Vdash \\underaccent{\\sim }{x}\\in \\underaccent{\\sim }{C}_G$ .", "Then there is $p^*\\ge ^* p$ either $p^* || \\underaccent{\\sim }{x}$ or there is a $\\vec{U}$ -fat tree, $T$ and sets $A^{s}$ such that $\\forall t\\in mb(T)$ $p^{\\frown }{\\langle }t,\\vec{A}^{t}{\\rangle }\\Vdash \\underaccent{\\sim }{x}={\\rm max}(t)$ .", "Moreover, in the later case, let $i\\le l(p)+1$ be such that $mb(T)$ splits on $\\kappa _i(p)$ and assume that $o^{\\vec{U}}(\\kappa _i(p))<\\kappa _i(p)^+$ , then for every $t\\in {\\rm Lev}_{ht(T)-1}(T)$ , $p^{*\\frown }\\langle t,\\vec{A}^{t}\\rangle || o^{(\\kappa _i(p))}(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}})$ In other words, there is $\\gamma <o^{\\vec{U}}(\\kappa _i(p))$ such that $p^{*\\frown }\\langle t,\\vec{A}^{t}\\rangle \\Vdash \\underaccent{\\sim }{x}\\in X^{(\\kappa _i(p))}_\\gamma $ Proof.", "Assume that there is no $p^*\\ge ^* p$ which decides $\\underaccent{\\sim }{x}$ .", "By REF find $T$ with minimal $ht(T)$ so there is $p^*\\ge p$ , sets $B^{s}$ and for every $t\\in mb(T)$ , $p^{*\\frown }{\\langle }t,B^{s}{\\rangle }|| \\underaccent{\\sim }{x}$ .", "Assume that $\\kappa (p)=\\lbrace \\nu _1,...,\\nu _n\\rbrace $ are the ordinals appearing in $p$ , denote by $x_t$ the forced value and shrink $T$ so that the function $f(t)={\\left\\lbrace \\begin{array}{ll}i & x_t=\\nu _i\\\\n+1 & x_t\\notin \\lbrace \\nu _1,...,\\nu _n\\rbrace \\end{array}\\right.", "}$ is constant.", "If $f$ would be constantly some $i\\le n$ then by proposition REF there is $p\\le ^* p^{\\prime }$ , $T^{\\prime }\\subseteq T$ and sets $A^s\\subseteq B^s$ such that $\\lbrace p^{^{\\prime }}{\\langle }t,\\vec{A}^t{\\rangle }\\mid t\\in mb(T^{\\prime })\\rbrace $ is pre-dense, it follows that $p^{\\prime }\\Vdash \\underaccent{\\sim }{x}=\\nu _i$ , contradiction.", "So we may assume that $x_t\\notin \\lbrace \\nu _1,...,\\nu _n\\rbrace $ .", "Keep shrinking $T$ so that there is a unique $i\\le ht(T)$ , such that $x_t\\in [t(i),t(i+1))$ (where $t(ht(T)+1)=\\kappa $ ).", "If $i<ht(T)$ then for every $t\\in {\\rm Lev}_i(T)$ , the function $g_t:mb(T/t)\\rightarrow \\kappa $ , define by $g_t(s)=x_{t^{}s}$ is regressive and therefore by REF can be stabilized on some $S_t\\subseteq T/t$ , $mb(S_t)\\in U_{T/t}$ so that for every $t\\in S_t$ , 
$x_{t^{}s}=y_t$ , depending only on $t$ .", "As in the situation that $f$ was constant, for every $t\\in {\\rm Lev}_i(T)$ we can find $p^{*}t\\le ^*p_t$ such that $p_t\\Vdash \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}=x_t$ .", "By REF , there is $T^*\\subseteq T\\upharpoonright \\lbrace 1,..,i\\rbrace $ , $p^{*}\\le ^*p^{**}$ and sets $Z^s\\subseteq A^s$ such that for every $t\\in {\\rm Lev}_i(t)$ , $p_t\\le ^*p^{**}{\\langle }t,\\vec{Z}^t{\\rangle }$ , this contradict the minimality of $ht(T)$ .", "Hence it most be that for every $t\\in mb(T)$ , $x_t\\ge t(ht(T))={\\rm max}(t)$ .", "It is impossible that $x_t>t$ , otherwise, $x_t\\notin \\lbrace \\nu _1,...,\\nu _n\\rbrace \\cup t$ and we can remove from the large sets of the condition $p^{*}{\\langle }t,\\vec{A}^t{\\rangle }$ the single ordinal $x_t$ and obtain a condition $q$ such that $q\\Vdash \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}=x_t\\notin C_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{G}}}}$ , but $p\\le q$ , then $q\\Vdash \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}\\in C_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{G}}}}$ , contradiction.", "We conclude that $\\forall t\\in mb(T).x_t={\\rm max}(t)$ .", "Which is what we desired.", "For the second part, assume that for $i\\le l(p)+1$ , $mb(T)$ splits on $\\kappa _i(p)$ and that $o^{\\vec{U}}(\\kappa _i(p))<\\kappa _i(p)^+$ .", "It follows that the measures in $\\vec{U}(\\kappa _i(p))$ are separated by the sets $X^{(\\kappa _i(p))}_{\\gamma }$ .", "For every $t\\in {\\rm Lev}_{ht(T)-1}(T)$ , shrink ${\\rm Succ}_T(t)\\in U(\\kappa _i(p),\\xi (t))$ to ${\\rm Succ}_{T^*}(t)={\\rm Succ}_{T}(t)\\cap X^{(\\kappa _i(p))}_{\\xi (t)}$ .", "It follows that for every $t\\in {\\rm Lev}_{ht(T)-1}(T)$ , and for every $\\beta \\in {\\rm Succ}_{T^*}(t)$ , $\\beta \\in X^{(\\kappa _i(p))}_{\\xi (t)}$ .", "Since $p^{\\frown }{\\langle }t,\\vec{A}^t{\\rangle }\\Vdash \\underaccent{\\sim }{x}\\in {\\rm Succ}_{T^*}(t)$ , we conclude that $p^{\\frown }{\\langle }t,\\vec{A}^t{\\rangle }\\Vdash o^{(\\kappa _i(p))}(\\underaccent{\\sim }{x})=\\xi (t)$ $\\blacksquare $ The following lemma is analogous to a lemma proven in [3] for Prikry forcing.", "Lemma 3.8 Let $G\\subseteq \\mathbb {M}[\\vec{U}]$ and let $\\delta \\le \\kappa $ be a limit point of $C_G$ .", "Then for every set of ordinals $D\\in V[C_G]$ such that $|D|<\\delta \\wedge C_G\\cap D=\\emptyset $ there is $X\\in \\bigcap \\vec{U}(\\delta )$ such that $X\\cap D=\\emptyset $ .", "Proof.", "Let $\\lambda :=|D|$ , note that $D\\in V[C_G\\cap \\delta ]$ and since $C_G\\cap \\delta $ is $V$ -generic for $\\mathbb {M}[\\vec{U}]\\upharpoonright \\delta $ , we can assume without loss of generality that $\\delta =\\kappa $ .", "We start with a single $\\mathbb {M}[\\vec{U}]$ -name of an ordinal $\\underaccent{\\sim }{x}$ and $p\\in G$ such that $p\\Vdash \\underaccent{\\sim }{x}\\notin \\underaccent{\\sim }{C}_G$ and.", "Assume that $p=\\langle q_0,r{\\rangle }$ , is a decomposition of $p$ such that ${\\rm max}(\\kappa (q_0))\\ge \\lambda $ .", "Then by REF there is $r\\le ^*r^*$ and a maximal antichain $Z\\subseteq \\mathbb {M}[\\vec{U}]\\upharpoonright {\\rm max}(q_0)$ above $q_0$ , such that for every $q\\in Z$ there is a tree $T_q$ and sets $A^{s,q}$ for which the set $\\lbrace r^{*}{\\langle }t,\\vec{A}^{t,q}{\\rangle }\\mid t\\in mb(T_{q})\\rbrace $ is pre-dense above $r^*$ and for every $t\\in mb(T_q)$ , ${\\langle }q,r^{*}{\\langle 
}t,\\vec{A}^{s,q}{\\rangle }{\\rangle }\\Vdash \\underaccent{\\sim }{x}=f_q(t)$ Since $p\\Vdash \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}\\notin C_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{G}}}}$ , for every $\\vec{b}\\in mb(T_q)$ , $f_q(\\vec{b})\\notin \\vec{b}$ hence it falls in one of the intervals $(0,\\vec{b}(1)),(\\vec{b}(1),\\vec{b}(2)),...,(\\vec{b}(ht(T_q)),\\kappa )$ let $n_{\\vec{b}}$ be the number of this interval.", "Apply REF to find a tree $T^{\\prime }_q\\subseteq T_q$ , $mb(T^{\\prime }_q)\\in U_{T_q}$ on which the value $n_{\\vec{b}}$ is constantly $n^*_q$ .", "Since for every $t\\in {\\rm Lev}_{n^*_q}(T_q)$ , the function $s\\mapsto f_q(ts)$ defined in $mb((T^{\\prime }_q)/t)$ , is regressive, apply REF , obtain a tree $(T^*_q)_t\\subseteq (T^{\\prime }_q)/t$ on which the value is constant.", "Let $T^*_q\\upharpoonright \\lbrace 1,...,n^*_q\\rbrace =T^{\\prime }_q\\upharpoonright \\lbrace 1,..,n^*_q\\rbrace $ and for every for every $t\\in {\\rm Lev}_{n^*_q}(T^*_q)$ , $(T^*_q)/t=(T^*_q)_t$ is defined as above.", "Then on $T^*_q$ , $f_q(t)$ depends only on $t\\upharpoonright \\lbrace 1,...,n^*_q\\rbrace $ and $f_q(t\\upharpoonright \\lbrace 1,...,n^*\\rbrace )>t(n^*_q)$ .", "Extend $r^*_0\\le ^*r^*_q$ , $S_q\\subseteq T^*_q$ a tree of extensions of $r^*$ and find $B^{q,s}\\subseteq A^{q,s}$ such that for every $s\\in {\\rm Lev}_{n^*}(S^*_q)$ , $D_{S^*_q,\\vec{B}^{q}}$ is pre-dense above $r^*_q$ and ${\\langle }q,r_1^{*}{\\langle }s,\\vec{B}^{q,s}{\\rangle }{\\rangle }\\Vdash {\\rm max}(s)<\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}=f_q(s)$ .", "Finally find a single $r^*$ such that $r^*_q\\le ^*r^*$ , shrink the trees and sets to this condition and denote by $S_q\\upharpoonright \\lbrace 1,...,n^*_q\\rbrace =S^*_q$ .", "Apply REF and let $A_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}}=\\lbrace \\alpha \\in B_{l(r)+1}(r^*)\\mid \\alpha \\cap B_{l(r)+1}(r^*)\\in \\cap \\vec{U}(\\alpha )\\rbrace \\in \\cap \\vec{U}(\\kappa )$ .", "It must be that for every $q\\in Z$ and every $s\\in mb(S^*_q)$ , $f_q(s)\\notin A_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}}\\setminus {\\rm max}(s)$ , otherwise, add the the ordinal $f_q(s)$ and obtain the condition $\\langle q, r^{*}\\langle s,\\vec{B}^{q,s}_<{\\rangle }^{}{\\langle }f_q(s){\\rangle }{\\rangle }\\Vdash \\underaccent{\\sim }{x}=f_q(s)\\in C_G$ contradiction.", "Since $f_q(s)>{\\rm max}(s)$ , we conclude that $f_q(s)\\notin A_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}}$ .", "we claim that $p\\le ^*\\langle q_0,r^*\\rangle \\Vdash \\underaccent{\\sim }{x}\\notin A_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}}$ Otherwise, there is $q\\in Z$ , $s\\in mb(S^*_q)$ and $p^{\\prime }$ such that $\\langle q,r^{*\\frown }{\\langle }s, B^{s,q}{\\rangle }\\le p^{\\prime }\\Vdash \\underaccent{\\sim }{x}\\in A_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}}$ but also $p^{\\prime }\\Vdash \\underaccent{\\sim }{x}=f_q(s)$ so $f_q(s)\\in A_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{x}}}}$ which is a contradiction.", "Now the lemma follows easily, let $\\lbrace d_i\\mid i<\\lambda <\\kappa \\rbrace \\in V[C_G]$ be some set of ordinals such that $C_G\\cap \\lbrace d_i\\mid i<\\lambda \\rbrace =\\emptyset $ then we can take names $\\lbrace \\underaccent{\\sim }{d}_i\\mid i<\\lambda \\rbrace $ and some $p={\\langle }q_0,r_0{\\rangle }$ forcing $\\forall i<\\lambda .", 
"\\underaccent{\\sim }{d}_i\\notin \\underaccent{\\sim }{C}_G$ , as before we can define the sets $A_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{d}}}_i}\\in \\cap \\vec{U}(\\kappa )$ and find an increasing $\\le ^*$ sequence $\\langle q_0, r_i\\rangle $ , find $p^*$ which bounds all of them and $A^*=\\underset{i<\\lambda }{\\bigcap }A_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{d}}}_i}\\in \\cap \\vec{U}(\\kappa )$ , then $p^*$ forces that $\\forall i<\\lambda $ $\\underaccent{\\sim }{d}_i\\notin A^*$ .", "By density argument we can find such $p^*$ in $G$ .$\\blacksquare $" ], [ "The proof for subsets of $\\kappa $", "Let $A\\in V[G]$ , we do not assume that $A\\subseteq \\kappa $ , since some of the result will be applied for other type of sets.", "Define $\\kappa ^*:={\\rm max}\\lbrace \\alpha \\in Lim(C_G)\\mid o^{\\vec{U}}(\\alpha )\\ge \\alpha ^+\\rbrace $ If $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , then $\\lbrace \\beta <\\kappa \\mid o^{\\vec{U}}(\\beta )<\\beta ^+\\rbrace \\in \\cap \\vec{U}(\\kappa )$ , it follows that $\\kappa ^*<\\kappa $ is well defined.", "Moreover, for every $\\alpha \\in C_G\\setminus \\kappa ^*+1$ , $o^{\\vec{U}}(\\alpha )<\\alpha ^+$ and thus $o^{(\\alpha )}$ is defined.", "Definition 4.1 Let $A\\in V[G]$ be any set.", "In $V[A]$ , consider the crucial set $X_A=\\lbrace \\nu \\mid \\nu \\text{ is }V-\\text{regular and } \\nu > cf^{V[A]}(\\nu )\\rbrace $ Denote by $\\overline{X}_A=X_A\\cup Lim(X_A)\\subseteq \\kappa \\cup \\lbrace \\kappa \\rbrace $ .", "Proposition 4.2 $\\overline{X}_A\\subseteq Lim(C_G)$ .", "$X_A\\in V[A]$ .", "If $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , $X_A\\setminus \\kappa ^*+1$ is closed i.e.", "for every $\\kappa ^*<\\alpha \\le \\kappa $ , if ${\\rm sup}(X_A\\cap \\alpha )=\\alpha $ then $\\alpha \\in X_A$ .", "If $C\\subseteq ^* C_G$ and $C\\in V[A]$ , then $Lim(C)\\subseteq \\overline{X}_A$ Proof.", "For every $\\alpha \\in X_A$ , $cf^{V[G]}(\\alpha )\\le cf^{V[A]}(\\alpha )<\\alpha $ , and $\\alpha $ is $V$ -regular, it follows by REF .7 that $X_A\\subseteq Lim(C_G)$ , and since $Lim(C_G)$ is closed, then $\\overline{X}_A\\subseteq Lim(C_G)$ .", "$(2)$ is trivial as the definition if $X_A$ occurs in $V[A]$ .", "As for $(3)$ , by induction on $\\alpha \\in {\\rm Lim}(X_A\\setminus \\kappa ^*)$ .", "Suppose $\\alpha ={\\rm sup}(X_A\\cap \\alpha )$ , then by induction, $X_A\\cap (\\kappa ^*,\\alpha )$ is a club at $\\alpha $ and by $(1)$ , $\\alpha \\in Lim(C_G)\\setminus \\kappa ^*$ .", "Define in $V[A]$ , $o_A(\\alpha )=limsup_{\\gamma \\in X_A\\cap \\alpha }o^{(\\alpha )}(\\gamma )+1$ By definition of $o^{(\\alpha )}$ , $o_A(\\alpha )\\le o^{\\vec{U}}(\\alpha )<\\alpha ^+$ , hence $cf^{V}(o_A(\\alpha ))\\le \\alpha $ .", "By the definition of $limsup$ , $o_A(\\alpha )$ satisfy two properties: For every $\\nu <\\alpha $ and every $j<o_A(\\alpha )$ there is $j\\le j^{\\prime }<\\mu $ such that $X_A\\cap X^{(\\alpha )}_{j^{\\prime }}\\cap (\\nu ,\\alpha )\\ne \\emptyset $ .", "There is some $\\xi _\\alpha <\\alpha $ such that for every $\\nu \\in X_A\\cap (\\xi _\\alpha ,\\alpha )$ , $o^{(\\alpha )}(\\nu )<o_A(\\alpha )$ .", "We split into cases: If $o_A(\\alpha )=\\beta +1$ , then by property $(1)$ ${\\rm sup}(X_A\\cap X^{(\\alpha )}_\\beta \\cap (\\xi ,\\alpha ))=\\alpha $ .", "Let us argue that ${\\rm otp}( X_A\\cap X^{(\\alpha )}_\\beta \\cap (\\xi ,\\alpha ))=\\omega $ , this is enough to conclude $ cf^{V[A]}(\\alpha )=\\omega $ , hence $\\alpha \\in X_A$ .", "In the interval $(\\xi _\\alpha ,\\alpha 
)$ it is impossible to have a limit point $\\zeta $ of $X^{(\\alpha )}_\\beta \\cap X_A$ .", "Otherwise, by induction $\\zeta \\in X_A$ and by REF , $o^{(\\alpha )}(\\zeta )\\ge \\beta +1$ contradicting property $(2)$ .", "If $\\lambda :=cf^V(o_A(\\alpha ))<\\alpha $ , let ${\\langle }\\lambda _i\\mid i<\\lambda {\\rangle }\\in V$ be increasing and cofinal in $o_A(\\alpha )$ .", "Define inductively ${\\langle }x_i\\mid i<\\lambda {\\rangle }$ , first, $x_0={\\rm min}(X_A\\cap (\\xi _\\alpha ,\\alpha ))<\\alpha $ .", "At successor step, $i+1$ , $x_i\\in X_A\\cap (\\xi _\\alpha ,\\alpha )$ is defined, by property $(1)$ , there is $\\lambda _{i+1}\\le j^{\\prime }<\\alpha \\text{ and }x_{i+1}\\in X_A\\cap (x_i,\\alpha )\\cap X^{(\\alpha )}_{j^{\\prime }}$ At limit step $\\delta <\\lambda $ , if ${\\rm sup}(x_i\\mid i<\\delta )$ is unbounded in $\\alpha $ , then clearly $\\alpha $ changes cofinality in $V[A]$ .", "Otherwise, let $y_\\delta ={\\rm sup}_{i<\\delta }x_i<\\alpha $ and there is some $x_{\\delta }\\in X_A\\cap X^{(\\alpha )}_{\\lambda _\\delta }\\cap (y_{\\delta },\\alpha )$ .", "Assume that ${\\langle }x_i\\mid i<\\lambda {\\rangle }$ is defined, if $x^*={\\rm sup}_{i<\\lambda }x_i\\in (\\xi ,\\alpha )<\\alpha $ , then by induction hypothesis $x^*\\in X_A\\cap (\\xi _\\alpha ,\\alpha )$ and $o^{(\\alpha )}(x^*)\\ge limsup_{i<\\lambda } o^{(\\alpha )}(x_i)+1=limsup_{i<\\lambda }\\lambda _i+1=o_A(\\alpha )$ contradicting property $(2)$ .", "Finally, if $cf^{V}(o_A(\\alpha ))=\\alpha $ , we take ${\\langle }\\alpha _i\\mid i<\\alpha {\\rangle }\\in V$ cofinal continuous sequence in $o_A(\\alpha )$ which witnesses this.", "Let $Z:=\\lbrace \\beta <\\alpha \\mid o^{(\\alpha )}(\\beta )<\\alpha _{\\beta }\\rbrace $ let us argue that $Z\\in \\cap _{i<o_A(\\alpha )}U(\\alpha ,i)$ .", "Let $i<o_A(\\alpha )$ , denote $j_{U(\\alpha ,i)}({\\langle }\\alpha _\\xi \\mid \\xi <\\alpha {\\rangle })={\\langle }\\alpha ^{\\prime }_\\xi \\mid \\xi <j_{U(\\alpha ,i)}(\\alpha ){\\rangle }, \\ \\ j_{U(\\alpha ,i)}({\\langle }X^{(\\alpha )}_\\xi \\mid \\xi <o^{\\vec{U}}(\\alpha ){\\rangle })={\\langle }X^{\\prime }_\\xi \\mid \\xi < j_{U(\\alpha ,i)}(o^{\\vec{U}}(\\alpha )){\\rangle }$ Since $X^{(\\alpha )}_i\\in U(\\alpha ,i)$ it follow that $\\alpha \\in j_{U(\\alpha ,i)}(X^{(\\alpha )}_i)=X^{\\prime }_{j_{U(\\alpha ,i)}(i)}$ which by definition implies that $(\\star ) \\ \\ \\ o^{(j_{U(\\alpha ,i)}(\\alpha ))}(\\alpha )=j_{U(\\alpha ,i)}(i)$ Also, since $i< o_A(\\alpha )$ , then $j_{U(\\alpha ,i)}(i)<\\cup j_{U(\\alpha ,i)}^{\\prime \\prime }[o_A(\\alpha )]=\\cup _{\\xi <\\alpha }j_{U(\\alpha ,i)}(\\alpha _\\xi )=\\cup _{\\xi <\\alpha }\\alpha ^{\\prime }_\\xi $ By elementary, the sequence ${\\langle }\\alpha ^{\\prime }_\\xi \\mid \\xi <j_{U(\\alpha ,i)}(\\alpha ){\\rangle }$ is also continuous, hence $(\\star \\star ) \\ \\ j_{U(\\alpha ,i)}(i)<\\cup _{z<\\alpha }\\alpha ^{\\prime }_z=\\alpha ^{\\prime }_\\alpha $ We conclude from $(\\star ),(\\star \\star )$ that $o^{j_{U(\\alpha ,i)}(\\alpha )}(\\alpha )=j_{U(\\alpha ,i)}(i)<\\alpha ^{\\prime }_\\alpha $ Hence $\\alpha \\in j_{U(\\alpha ,i)}(Z)$ so $Z\\in U(\\alpha ,i)$ as wanted.", "Consider the set $Z_*:=Z\\uplus (\\cup _{o_A(\\alpha )\\le j<o^{\\vec{U}}(\\alpha )}X^{(\\alpha )}_j)$ .", "Then $Z_*\\in \\cap \\vec{U}(\\alpha )$ and by REF .3, there is $\\eta <\\alpha $ such that $C_G\\cap (\\eta ,\\alpha )\\subseteq Z_*$ .", "In particular $X_A\\cap (\\eta ,\\alpha )\\subseteq Z_*$ .", "By property $(2)$ , if $\\rho \\in X_A\\cap (\\xi _\\alpha 
,\\alpha )$ , then $o^{(\\alpha )}(\\rho )<o_A(\\alpha )$ , hence $\\rho \\in Z$ and $X_A\\cap ({\\rm max}\\lbrace \\eta ,\\xi _\\alpha \\rbrace ,\\alpha )\\subseteq Z$ .", "By definition of $Z$ , for every $\\rho \\in ({\\rm max}\\lbrace \\eta ,\\xi _\\alpha \\rbrace ,\\alpha )\\cap X_A$ , $o^{(\\alpha )}(\\rho )<\\alpha _\\rho $ .", "Now to see that $cf^{V[A]}(\\alpha )=\\omega $ , define $x_0={\\rm min}(X_A\\cap ({\\rm max}\\lbrace \\eta ,\\xi _\\alpha \\rbrace ,\\alpha ))$ , recursively assume that $x_n<\\alpha $ is defined.", "Then by property $(1)$ , there is $x_{n}^{\\prime }\\ge x_n$ and some $x_{n+1}\\in X_A\\cap X^{(\\alpha )}_{\\alpha _{x_n^{\\prime }}}\\cap (x_n,\\alpha )$ .", "To see that ${\\langle }x_n\\mid n<\\omega {\\rangle }$ is unbounded in $\\alpha $ , assume otherwise, then $x^*={\\rm sup}_{n<\\omega }x_n<\\alpha $ and by induction $x^*\\in X_A\\cap ({\\rm max}\\lbrace \\eta ,\\xi _\\alpha \\rbrace ,\\alpha )$ hence $x^*\\in Z$ .", "By proposition REF $o^{(\\alpha )}(x^*)\\ge limsup_{n<\\omega } o^{(\\alpha )}(x_n)=limsup_{n<\\omega } \\alpha _{x^{\\prime }_n}\\ge \\alpha _{x^*}$ contradiction the definition of $Z$ .", "To see $(4)$ , if $C\\setminus C_G$ is finite then clearly $Lim(C)\\subseteq Lim(C_G)$ and every $\\delta \\in Lim(C_G)$ is $V$ -regular.", "Let $\\delta \\in Lim(C)$ , it suffices to prove that $X_A$ is unbounded in $\\delta $ .", "Fix any $\\rho <\\delta $ , and let $\\rho ^{\\prime }={\\rm min}(Lim(C\\setminus \\rho +1))$ , then $\\rho <\\rho ^{\\prime }\\le \\delta $ , and also by minimality ${\\rm otp}(C\\cap (\\rho ,\\rho ^{\\prime }))=\\omega $ .", "Since $C\\in V[A]$ , it follows that $cf^{V[A]}(\\rho ^{\\prime })=\\omega $ and since $\\rho ^{\\prime }\\in Lim(C)$ , it is $V$ -regular.", "By definition it follows that $\\rho ^{\\prime }\\in X_A\\cap (\\rho ,\\delta ]$ .", "$\\blacksquare $ It is possible that $X_A$ below $\\kappa ^*$ is not closed: Example 4.3 If there is $\\alpha \\in C_G$ such that $o^{\\vec{U}}(\\alpha )=\\alpha ^+$ , then $\\alpha $ stays regular in $V[G]$ .", "Set $A=C_G$ , then $X_A\\cap \\alpha $ will be unbounded in $\\alpha $ , but $\\alpha \\notin X_A$ .", "There are trivial examples for $A$ in which the set $X_A$ is bounded.", "However the following definition filters this situation.", "Definition 4.4 Let $A\\subseteq On$ , we say that $A$ stabilizes if there is $\\beta <\\kappa $ such that $\\forall \\alpha <{\\rm sup}(A), \\ A\\cap \\alpha \\in V[G\\upharpoonright \\beta ]$ This definition is a more general then the notion of fresh set: Definition 4.5 Let $M\\subseteq M^{\\prime }$ be two $ZFC$ models.", "A set $X\\in M^{\\prime }\\setminus M$ is Fresh with respect to $M$ if $\\forall \\alpha <{\\rm sup}(X).", "X\\cap \\alpha \\in M$ .", "Proposition 4.6 Suppose that $A\\in V[G]$ such that $A$ does not stabilize.", "Assume that $\\forall \\beta <{\\rm sup}(A)$ there is $C_\\beta \\subseteq C_G$ such that $V[C_\\beta ]=V[A\\cap \\beta ]$ .", "Then: If $A\\subseteq \\kappa $ , $X_A\\cap \\kappa $ is unbounded in $\\kappa $ .", "If $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , then $cf^{V[A]}(\\kappa )<\\kappa $ .", "Proof.", "The following argument works for both $(1),(2)$ , we try to prove that $X_A$ is unbounded.", "Let $\\kappa ^*\\le \\delta <\\kappa $ , take some $\\beta <{\\rm sup}(A)$ such that $A\\cap \\beta \\notin V[G\\upharpoonright \\delta ]$ which exists by our assumption that $A$ does not stabilize.", "By assumption, there exists $C_{\\beta }\\subseteq C_G$ such that $V[C_\\beta ]=V[A\\cap \\beta ]\\subseteq 
V[A]$ It is impossible that $C_\\beta \\setminus (C_G\\cap \\delta )$ is finite, otherwise $A\\cap \\beta \\in V[C_\\beta ]\\subseteq V[G\\upharpoonright \\delta ]$ which contradicts the choice of $\\beta $ .", "Let $\\gamma _\\delta $ be the first limit point of $C_\\beta $ above $\\delta $ .", "By minimality, ${\\rm otp}(C_\\beta \\cap (\\delta ,\\gamma _\\delta ))=\\omega $ , hence $cf^{V[A]}(\\gamma _\\delta )=\\omega $ and $\\gamma _\\delta \\in X_A\\setminus \\delta $ .", "To see $(1)$ , if $A\\subseteq \\kappa $ , then necessarily $\\gamma _\\delta <\\kappa $ for every $\\delta $ , this is since $\\gamma _\\delta \\in Lim(C_\\beta )$ , and $\\beta <{\\rm sup}(A)\\le \\kappa $ , so $V[C_\\beta ]=V[A\\cap \\beta ]\\subseteq V[C_G\\cap \\beta ]$ .", "This implies that $\\gamma _\\delta \\le \\beta $ , otherwise, in $V[C_G\\cap \\beta ]$ the cofinality of some measurable above $\\beta $ which contradicts $\\beta ^+$ -c.c.", "of $\\mathbb {M}[\\vec{U}]\\upharpoonright \\beta $ .", "To see $(2)$ , if some $\\gamma _\\delta =\\kappa $ , $\\kappa \\in X_A$ and $cf^{V[A]}(\\kappa )<\\kappa $ .", "Otherwise, $\\gamma _\\delta <\\kappa $ , and we conclude that $X_A$ is unbounded in $\\kappa $ .", "By the assumption $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , thus by REF .3, $X_A\\setminus \\kappa ^*$ is closed, and $\\kappa $ is a limit point of this set, so $\\kappa \\in X_A$ .", "$\\blacksquare $ Corollary 4.7 Assume $(IH)$ and suppose that $A\\in V[G]$ such that $A\\subseteq \\kappa $ does not stabilize and $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ .", "Then $X_A\\cap (\\kappa ^*,\\kappa )$ is a club at $\\kappa $ and $cf^{V[A]}(\\kappa )<\\kappa $ .", "Proof.", "Since $A\\subseteq \\kappa $ , then by REF , for every $\\beta <\\kappa $ , there is $C_\\beta $ such that $V[A\\cap \\beta ]=V[C_\\beta ]$ so we can apply REF .1, REF .2 and REF .3 applies to concludes that $X_A\\cap (\\kappa ^*,\\kappa )$ is a club and $cf^{V[A]}(\\kappa )<\\kappa $ .$\\blacksquare $ Note that it is possible that $cf^{V[G]}(\\kappa )< cf^{V[A]}(\\kappa )<\\kappa $ , however $cf^{V[A]}(\\kappa )$ most be some member of the generic club that will eventually change its cofinality to $cf^{V[G]}(\\kappa )$ .", "Example 4.8 Assume that $o^{\\vec{U}}(\\kappa )=\\kappa $ , then $cf^{V[G]}(\\kappa )=\\omega $ .", "Using the enumeration $C_G=\\langle C_G(i)\\mid i<\\kappa \\rangle $ and the canonical sequence $\\alpha _n$ that was defined in example REF , we can define in $V[G]$ the set $A=\\bigcup _{n<\\omega } \\lbrace C_G(\\alpha _n)+\\alpha \\mid \\alpha <C_G(n)\\rbrace $ then $A$ does not stabilize.", "Moreover, we cannot construct the sequence $\\langle \\alpha _n\\mid n<\\omega \\rangle $ or any other $\\omega $ -sequence unbounded in $\\kappa $ inside $V[A]$ since $A$ is generic for the forcing $\\mathbb {M}[\\vec{U}\\upharpoonright (\\kappa ,C_G(\\omega ))]$ which does not change the cofinality of $\\kappa $ to $\\omega $ .", "For this kind of examples the case $o^{\\vec{U}}(\\kappa )<\\kappa $ suffices.", "The following definition will allow us to refer to subsets of $C_G$ in $V[A]$ .", "Definition 4.9 Let $A\\in V[G]$ be any set.", "A set $D\\in V[A]$ is a Mathias set if $Lim(D)\\subseteq \\overline{X}_A$ .", "For every $\\delta \\in Lim(D)$ , every $Y\\in \\bigcap \\vec{U}(\\delta )$ there is $\\xi <\\delta $ such that $D\\cap (\\xi ,\\delta )\\subseteq Y$ .", "Lemma 4.10 For every $D\\in V[A]$ , $D$ is Mathias if and only if $D\\subseteq ^* C_G$ i.e.", "$D\\setminus C_G$ is finite.", "Proof.", "If $D\\setminus C_G$ is 
finite then by REF .4, $Lim(D)\\subseteq \\overline{X}_A$ .", "For the second condition of a Mathias set, simply use REF .3.", "In the other direction, assume that $D$ is a Mathias set.", "Toward a contradiction, assume $|D\\setminus C_G|\\ge \\omega $ , and let $\\delta \\le sup(D)$ be minimal such that $|D\\cap \\delta \\setminus C_G|\\ge \\omega $ then $\\delta \\in Lim(D)\\subseteq \\overline{X}_A\\subseteq Lim(C_G)$ .", "By minimality, $\\lbrace d_n\\mid n<\\omega \\rbrace =D\\cap \\delta \\setminus C_G$ is unbounded in $\\delta $ .", "By REF there is $Y\\in \\bigcap \\vec{U}(\\delta )$ such that $Y\\cap \\lbrace d_n\\mid n<\\omega \\rbrace =\\emptyset $ contradicting condition $(2)$ of the Mathias set $D$ .$\\blacksquare $ Proposition 4.11 Let $\\lambda <\\kappa $ , let $\\lambda _0:={\\rm max}(Lim(C_G)\\cap \\lambda +1)$ and assume $(IH)$ .", "Then there is a Mathias set $F_\\lambda \\subseteq \\lambda _0$ such that $V[F_\\lambda ]=V[A]\\cap V[C_G\\cap \\lambda ]$ .", "Proof.", "Consider in $V[A]$ the sets $B:=\\lbrace D\\subseteq \\lambda \\mid D\\text{ is a Mathias set}\\rbrace $ Then $|B|\\le 2^\\lambda $ , enumerate $B=\\lbrace D_i\\mid i<2^\\lambda \\rbrace $ , let $E=\\lbrace {\\langle }i, d{\\rangle }\\mid i<2^{\\lambda }, d\\in D_i\\rbrace \\subseteq 2^{\\lambda }\\times \\lambda $ , clearly $V[B]=V[E]$ and $E\\subseteq V_{2^{\\lambda }}$ .", "Also, since elements of $Lim(C_G)$ are strong limits in $V[C_G]$ , ${\\rm max}(Lim(C_G)\\cap 2^\\lambda +1)={\\rm max}(Lim(C_G)\\cap \\lambda +1)=\\lambda _0$ By proposition REF .6, $E\\in V[C_G\\cap \\lambda _0]$ and by induction hypothesis there is $F_\\lambda \\subseteq C_G\\cap \\lambda _0$ such that $V[F_\\lambda ]=V[E]$ .", "Since $E\\in V[A]$ , also $F_\\lambda \\in V[A]$ , and since $F_\\lambda \\subseteq C_G\\cap \\lambda _0$ , $F_\\lambda \\in V[C_G\\cap \\lambda _0]$ so $V[F_\\lambda ]\\subseteq V[A]\\cap V[C_G\\cap \\lambda _0]$ .", "For the other direction, if $X\\in V[A]\\cap V[C_G\\cap \\lambda _0]$ , then by induction there is $C\\subseteq C_G\\cap \\lambda _0$ such that $V[X]=V[C]$ , and also $C\\in V[A]$ .", "Then $C\\subseteq \\lambda $ is a Mathias set, hence $C\\in B$ , and therefore, $C\\in V[B]=V[F_\\lambda ]$ .", "$\\blacksquare $ The following lemma will be crucial to pack information given by two sets $D,C\\subseteq C_G$ into a single set $E\\subseteq C_G$ .", "Proposition 4.12 Assume that $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ and $(IH)$ .", "Let $D,E\\in V[A]$ be Mathias sets such that $\\lambda :=|D|<\\kappa $ .", "Denote by $\\theta ={\\rm max}(\\lambda ,\\kappa ^*)$ , Then there is $F\\in V[A]$ such that: $F$ is a Mathias set.", "$F\\cap \\theta = F_\\theta $ .", "$(D\\cup E)\\setminus \\theta \\subseteq F\\subseteq {\\rm sup}(D\\cup E)$ .", "$D,E\\in V[F]$ .", "Remark 4.13 Note that simply taking the union $D\\cup E$ will not suffice for the proposition: For example, assume that $o^{\\vec{U}}(\\kappa )=\\delta $ and $o^{\\vec{U}}(\\delta )=1$ , and pick any generic $G$ with the condition ${\\langle }{\\langle }\\delta , \\lbrace \\alpha <\\delta \\mid o^{\\vec{U}}(\\alpha )=0\\rbrace {\\rangle },{\\langle }\\kappa ,\\lbrace \\delta <\\alpha <\\kappa \\mid o^{\\vec{U}}(\\alpha )<\\delta \\rbrace {\\rangle }{\\rangle }\\in G$ .", "Then $G$ is generic such that ${\\rm otp}(C_G)=C_G(\\omega )=\\delta $ .", "Let $D=\\lbrace C_G(C_G(n))\\mid n<\\omega \\rbrace \\text{ and }E=\\lbrace C_G(\\alpha )\\mid \\omega \\le \\alpha <C_G(\\omega )\\rbrace \\setminus D$ Then $D\\cup E=\\lbrace C_G(\\alpha )\\mid \\omega 
\\le \\alpha < C_G(\\omega )\\rbrace $ , hence in $V[D\\cup E]$ , $C_G(\\omega )$ is still measurable.", "On the other hand, from $D$ , we can reconstruct ${\\langle }C_G(n)\\mid n<\\omega {\\rangle }$ as $o^{\\vec{U}}(C_G(C_G(n)))=C_G(n)$ .", "So it if impossible that $D\\in V[D\\cup E]$ .", "Proof of REF .", "Fix $\\mathbb {M}[\\vec{U}]$ -names $\\underaccent{\\sim }{E},\\langle \\underaccent{\\sim }{d}_i\\mid i<\\lambda \\rangle $ for the element of $E\\setminus \\theta $ and $D\\setminus \\theta $ respectively.", "Split the forcing at $\\theta $ and find ${\\langle }q^{\\prime },r^{\\prime }{\\rangle }\\in G$ such that $(1)\\ \\ \\ {\\langle }q^{\\prime },r^{\\prime }{\\rangle }\\Vdash \\underaccent{\\sim }{E},\\lbrace \\underaccent{\\sim }{d}_i\\mid i<\\lambda \\rbrace \\subseteq \\underaccent{\\sim }{C}_G\\setminus \\theta \\text{ and } \\forall \\alpha \\in \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{C}}}_G\\setminus \\kappa ^*.o^{\\vec{U}}(\\alpha )<\\alpha ^+$ The idea is that for every $\\delta \\in D\\setminus \\kappa (r^{\\prime })$ , there is $i\\le l(r^{\\prime })+1$ such that $\\delta \\in (\\kappa _{i-1}(r),\\kappa _i(r))$ .", "Then $\\delta $ is definable from $D\\cup E$ and two other parameters: $\\gamma (\\delta ):=o^{(\\kappa _i(r^{\\prime }))}(\\delta )\\text{ and }\\beta (\\delta ):={\\rm sup}(x\\in (D\\cup E)\\cap \\delta \\mid \\gamma (x)\\ge \\gamma (\\delta ))$ Indeed, $\\delta ={\\rm min}(y\\in (D\\cup E)\\setminus \\beta (\\delta )\\mid \\gamma (y)=\\gamma (\\delta ))$ Note that $\\beta (\\delta )<\\delta $ since we can always shrink the large set of $\\delta $ to avoid ordinals $\\beta $ with $\\gamma (\\beta )\\ge \\gamma (\\delta )$ , and by REF , we can code $\\gamma (\\delta )$ as a part of a branch below below $\\delta $ .", "After adding these finitely many element to $E$ , we repeat this process on the added points.", "This process should stabilize after $\\omega $ many steps, since we are creating a decreasing sequence of ordinals.", "Formally, proceed by a density argument, let $r^{\\prime }\\le r\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\theta ,\\kappa )$ .", "Define recursively for every $k<\\omega $ : $r\\le ^* r_k$ , maximal anti chains ${\\langle }Z^{(k)}_{i,j}\\mid i<\\lambda ,j<\\omega {\\rangle }$ , $\\mathbb {M}[\\vec{U}]$ -names ${\\langle }\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}\\mid i<\\lambda ,j<\\omega {\\rangle }\\text{ and }{\\langle }T^{(k)}_{q,i,j},I^{(k)}_{q,i,j},F^{(k)}_{q,i,j},\\vec{A}^{(k)}_{q,i,j}\\mid i<\\lambda ,j<\\omega ,q\\in Z^{(k)}_{i,j}{\\rangle }$ First for every $j<\\omega $ and $i<\\lambda $ , let $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(0)}_{i,j}=\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{d}}}_i$ .", "Assume $r\\le ^*r^*_k$ and $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}$ are defined such that such that for all $i<\\lambda ,j<\\omega $ , ${\\langle }q^{\\prime },r^*_k{\\rangle }\\Vdash \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}\\in \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{C}}}_G\\setminus \\theta $ .", "Fix $i<\\lambda ,j<\\omega $ , use REF to find $r^*_k\\le ^*r_{i,j}$ and a maximal antichain $Z^{(k)}_{i,j}\\subseteq \\mathbb {M}[\\vec{U}]\\upharpoonright \\theta $ above $q^{\\prime }$ , such that for every $q\\in Z^{(k)}_{i,j}$ , either ${\\langle }q,r_{i,j}{\\rangle }||\\smash{\\underset{\\raisebox 
{1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}$ , or there is a $\\vec{U}$ -fat tree of extensions of $r_{i,j}$ $T^{(k)}_{q,i,j}$ and sets $A^t_{q,i,j}$ such that $D_{T^{(k)}_{q,i,j},\\vec{A}^{(k)}_{q,i,j}}$ is pre-dense above $r_{i,j}$ , and for every $t^{}\\alpha \\in mb(T^{(k)}_{q,i,j})$ , $(2) \\ \\ \\ {\\langle }q,r_{i,j}^{}{\\langle }t^{}\\alpha ,\\vec{A}^{t^{}\\alpha }_{q,i,j}{\\rangle }{\\rangle }\\Vdash \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}=\\alpha , \\ \\ \\ {\\langle }q,r_{i,j}^{}{\\langle }t,\\vec{A}^t_{q,i,j}{\\rangle }{\\rangle }||\\ o^{(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j})}(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})$ Where $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j}$ is an $\\mathbb {M}[\\vec{U}]$ -name for the unique $\\kappa _y(r)$ , $y\\le l(r)+1$ such that $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}\\in (\\kappa _{y-1}(r),\\kappa _{y}(r))$ .", "Note that $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j}$ is also a $\\mathbb {M}[\\vec{U}]$ -name for the measurable on which $mb(T^{(k)}_{q^*,i,j})$ splits, for the unique $q^*$ in $Z^{(k)}_{i,j}\\cap G\\upharpoonright \\theta $ .", "Let $F^{(k)}_{q,i,j}:{\\rm Lev}_{ht(T^{(k)}_{q,i,j})-1}(T^{(k)}_{q,i,j})\\rightarrow \\kappa $ be the function defined by $(3) \\ \\ \\ F^{(k)}_{q,i,j}(s)=\\gamma \\leftrightarrow {\\langle }q,r_{i,j}^{}{\\langle }s,\\vec{A}^{s}_{q,i,j}{\\rangle }{\\rangle }\\Vdash o^{(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j})}(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})=\\gamma $ This notation work in case that ${\\langle }q,r_{i,j}{\\rangle }||\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}$ by taking the tree of height 0 and $F^{(k)}_{q,i,j}({\\langle }{\\rangle })$ is the decided value for $o^{\\vec{U}}(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})$ .", "Shrink $T^{(k)}_{q,i,j}$ , and find a complete and consistent set of important coordinates $I^{(k)}_{q,i,j}$ .", "Also as in REF , we shrink the trees even more so that for every $q_1,q_2\\in Z^{(k)}_{i,j}$ one of the following holds: $Im(F^{(k)}_{q_1,i,j})\\cap Im(F^{(k)}_{q_2,i,j})=\\emptyset $ .", "$T^{(k)}_{q_1,i,j}\\upharpoonright I^{(k)}_{q_1,i,j}=T^{(k)}_{q_2,i,j}\\upharpoonright I^{(k)}_{q_2,i,j}$ and $(F^{(k)}_{q_1,i,j})_{I^{(k)}_{q_1,i,j}}=(F^{(k)}_{q_2,i,j})_{I^{(k)}_{q_2,i,j}}$ .", "Note that for every $V$ -generic filter $H\\subseteq \\mathbb {M}[\\vec{U}]$ such that ${\\langle }q,r_{i,j}{\\rangle }\\in H$ , there is $t\\in mb(T^{(k)}_{q,i,j})$ such that ${\\langle }q,r_{i,j}^{}{\\langle }t,\\vec{A}^t_{q,i,j}{\\rangle }{\\rangle }\\in H$ , and if $t_1^{}\\alpha _1,t_2^{}\\alpha _2\\in mb(T^{(k)}_{q,i,j})$ are two such branches, then by $(2)$ $\\alpha _1=(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_H=\\alpha _2$ and in particular $F^{(k)}_{q,i,j}(t_1)=F^{(k)}_{q,i,j}(t_2)$ which implies that $t_1\\upharpoonright I^{(k)}_{q,i,j}=t_2\\upharpoonright I^{(k)}_{q,i,j}$ , thus $t_1\\upharpoonright I^{(k)}_{q,i,j}$ is unique.", "Let $\\underaccent{\\sim }{\\vec{\\alpha }}^{(k)}_{q,i,j}$ be a $\\mathbb {M}[\\vec{U}]$ -name such that $(5) \\ \\ \\ {\\langle }q,r_{i,j}{\\rangle }\\Vdash \\forall t\\in mb(T^{(k)}_{q,i,j}).", "\\ 
{\\langle }q,r_{i,j}^{}{\\langle }t,\\vec{A}^{t}_{q,i,j}{\\rangle }{\\rangle }\\in \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{G}}}\\rightarrow \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q,i,j}=t\\upharpoonright I^{(k)}_{q,i,j}$ Note that if $q_1,q_2\\in Z^{(k)}_{i,j}$ are such that $(4.2)$ holds, then both ${\\langle }q_1, r_{i,j}{\\rangle },{\\langle }q_2,r_{i,j}{\\rangle }$ forces that $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q_1,i,j}=\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q_2,i,j}$ .", "Moreover, it is forced by ${\\langle }q,r_{i,j}{\\rangle }$ that $|\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q,i,j}|=|I^{(k)}_{q,i,j}|$ so we assume that $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q,i,j}={\\langle }\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q,i,j}(w)\\mid w\\le |I^{(k)}_{q,i,j}|{\\rangle }$ .", "Next, let $\\underaccent{\\sim }{\\beta }^{(k)}_{q,i,j}$ be a $\\mathbb {M}[\\vec{U}]$ -name such that $(6) \\ \\ {\\langle }q,r_{i,j}{\\rangle }\\Vdash \\underaccent{\\sim }{\\beta }_{q,i,j}^{(k)}={\\rm sup}(\\lbrace x\\in (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{D}}}\\cup \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{E}}})\\cap \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}\\mid o^{(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j})}(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})\\le o^{(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j})}(x)\\rbrace \\cup \\lbrace \\theta \\rbrace )$ By definition of $\\underaccent{\\sim }{\\beta }^{(k)}_{q,i,j}$ and since we split the forcing at $\\theta $ , the trees $T^{(k)}_{q,i,j}$ are extension trees of $r_{i,j}$ and for every $w$ , $(7)\\ \\ \\ {\\langle }q,r_{i,j}{\\rangle }\\Vdash \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q,i,j}(w),\\underaccent{\\sim }{\\beta }^{(k)}_{q,i,j}\\in C_{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{G}}}}\\cap [\\theta ,\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})$ just otherwise, there is a generic $H$ with ${\\langle }q,r_{i,j}{\\rangle }\\in H$ and $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{i,j})_H=(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_H$ .", "However, by REF , $o^{((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j})_H)}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{i,j})_H)>o^{((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j})_H)}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{q,i,j})_H)$ , contradiction.", "By $\\le ^*$ -closure of $\\mathbb {M}[\\vec{U}]\\upharpoonright (\\theta ,\\kappa )$ , find a single $r^*_{k+1}$ such that $r_{i,j}\\le ^* r^*_{k+1}$ for every $i,j$ .", "We conclude that for every $q\\in Z^{(k)}_{i,j} $ , we have defined $T^{(k)}_{q,i,j},F^{(k)}_{q,i,j},I^{(k)}_{q,i,j}$ and names ${\\langle }\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q,i,j}(w)\\mid w\\le |I^{(k)}_{q,i,j}|{\\rangle },\\underaccent{\\sim }{\\beta }^{(k)}_{q,i,j}$ .", "We 
would like to turn these name to be independent of $q\\in Z^{(k)}_{i,j}$ .", "For $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{q,i,j}$ it is easy to find an $\\mathbb {M}[\\vec{U}]$ -names $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{i,j}$ such that for every $q\\in Z^{(k)}_{i,j}$ , ${\\langle }q,r^*_{k+1}{\\rangle }\\Vdash \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{q,i,j}=\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{i,j}$ .", "As for ${\\langle }\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q,i,j}(w)\\mid w\\le |I^{(k)}_{q,i,j}|{\\rangle }$ , the length $|I^{(k)}_{q,i,j}|$ might depend on $q$ , so we define $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{q,i,j}(w)=\\theta $ if $|I^{(k)}_{q,i,j}|<w<\\omega $ , and we can find names $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{i,j}(w)$ independent of $q$ .", "With these new names, in $(6),(7)$ we can replace ${\\langle }q,r_{i,j}{\\rangle }$ by ${\\langle }q^{\\prime },r^*_{k+1}{\\rangle }$ .", "Enumerate the names $\\lbrace \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{i,j}(w),\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{i,j}\\mid j,w<\\omega \\rbrace =\\lbrace \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k+1)}_{i,s}\\mid s<\\omega \\rbrace $ This conclude the inductive definition.", "Use $\\sigma $ -closure to find $r_n\\le ^* r_\\omega $ , and shrink all the trees to be extension trees of $r_{\\omega }$ such that for every $i<\\lambda ,\\ k,j<\\omega $ and $q\\in Z^{(k)}_{i,j}$ , $D_{T^{(k)}_{q,i,j},\\vec{A}^{(k)}_{q,i,j}}$ is pre-dense above $r_\\omega $ .", "By density there is such $r_\\omega \\in G$ .", "Define $\\langle (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G\\mid k,j<\\omega , i<\\lambda \\rangle $ By $(7)$ , ${\\langle }q^{\\prime },r_\\omega {\\rangle }\\Vdash \\underaccent{\\sim }{\\delta }^{(k)}_{i,j}\\in \\underaccent{\\sim }{C}_G\\setminus \\theta $ , thus $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G\\in C_G\\setminus \\theta $ .", "Claim 1 $\\langle (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G\\mid k,j<\\omega , i<\\lambda \\rangle \\in V[A]$ .", "Proof of claim: Work inside $V[A]$ , recall that $D,E\\in V[A]$ , therefore $\\langle (\\underaccent{\\sim }{\\delta }^{(0)}_{i,j})_G\\mid i<\\lambda ,j<\\omega \\rangle $ is in $V[A]$ .", "Assume we have successfully defined $\\langle (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G\\mid i<\\lambda ,j<\\omega \\rangle $ , let us define inside $V[A]$ from this sequence the sequence $\\langle (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k+1)}_{i,j})_G\\mid i<\\lambda ,j<\\omega \\rangle $ .", "First, in $V[G]$ , for each $i<\\lambda ,j<\\omega $ , let $Z^{(k)}_{i,j}\\cap G\\upharpoonright \\theta =\\lbrace q^G_{i,j}\\rbrace $ and let $t_{i,j}\\in mb(T^{(k)}_{q^G_{i,j},i,j})$ such that ${\\langle }q^G_{i,j},r_\\omega ^{}{\\langle }t_{i,j},\\vec{A}^{t_{i,j}}_{q^G_{i,j},i,j}{\\rangle }\\in G$ .", "Let $y\\le l(r_\\omega )+1$ be such that $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\kappa }}}^{(k)}_{i,j})_G=\\kappa _y(r_\\omega )$ , which is 
definable in $V[A]$ using $r_\\omega , (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G$ , as the unique $y\\le l(r_\\omega )+1$ such that $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G\\in (\\kappa _{y-1}(r_\\omega ),\\kappa _y(r_\\omega ))$ .", "By $(3)$ , ${\\langle }q^G_{i,j},r_\\omega ^{}{\\langle }t_{i,j}\\setminus \\lbrace {\\rm max}(t_{i,j})\\rbrace ,\\vec{A}^{t_{i,j}\\setminus \\lbrace {\\rm max}(t_{i,j})\\rbrace }_{q^G_{i,j},i,j}{\\rangle }\\Vdash o^{(\\kappa _y(r_\\omega ))}(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})=F^{(k)}_{q^G_{i,j},i,j}(t_{i,j}\\setminus \\lbrace {\\rm max}(t_{i,j})\\rbrace )$ hence it must be that $F^{(k)}_{q^G_{i,j},i,j}(t_{i,j}\\setminus \\lbrace {\\rm max}(t_{i,j})\\rbrace )=o^{(\\kappa _y(r_\\omega ))}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G)$ .", "Although the sequence ${\\langle }q^{G}_{i,j}\\mid i<\\lambda ,j<\\omega {\\rangle }$ might not be in $V[A]$ , we can do something similar to REF .", "Back in $V[A]$ , $o^{(\\kappa _y(r_\\omega ))}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G)$ is definable since in $V$ we have the decomposition ${\\langle }X^{(\\kappa _y(r_\\omega ))}_\\gamma \\mid \\gamma <o^{\\vec{U}}(\\kappa _y(r_\\omega )){\\rangle }$ and $o^{(\\kappa _y(r_\\omega ))}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G)$ is the unique $\\gamma _{i,j}<o^{\\vec{U}}(\\kappa _y(r_\\omega ))$ such that $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_{G}\\in X^{(\\kappa _y(r_\\omega ))}_{\\gamma _{i,j})}$ .", "Let $M^{(k)}_{i,j}=\\lbrace q\\in Z^{(k)}_{i,j}\\mid o^{(\\kappa _y(r_\\omega ))}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G)\\in Im(F^{(k)}_{q,i,j})\\rbrace $ Notice that $q^G_{i,j}\\in M^{(k)}_{i,j}$ , as witnessed by $t_{i,j}\\setminus \\lbrace {\\rm max}(t_{i,j})\\rbrace $ , hence $Im(F^{(k)}_{q,i,j})\\cap Im( F^{(k)}_{q^G_{i,j},i,j})\\ne \\emptyset $ for any $q\\in M^{(k)}_{i,j}$ and we conclude that $(4.2)$ must hold.", "Choose in $V[A]$ any $q^{(k)}_{i,j}\\in M^{(k)}_{i,j}$ and any $s^{(k)}_{i,j}\\in mb(T^{(k)}_{q^{(k)}_{i,j},i,j})$ such that $F_{q^{(k)}_{i,j},i,j}^{(k)}(s^{(k)}_{i,j})=o^{(\\kappa _y(r_\\omega ))}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G)$ .", "By $(5)$ , $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}_{i,j})_G=(t_{i,j})\\upharpoonright I^{(k)}_{q^G_{i,j},i,j}$ and since $(F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}(s_{i,j}\\upharpoonright I^{(k)}_{q^{(k)}_{i,j},i,j})=o^{(\\kappa _y(r_\\omega ))}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G)=(F^{(k)}_{q^G_{i,j},i,j})_{I^{(k)}_{q^G_{i,j},i,j}}(t_{i,j}\\upharpoonright I^{(k)}_{q^{(k)}_{i,j},i,j})$ it follows that $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}_{i,j})_G=s_{i,j}\\upharpoonright I^{(k)}_{q^{(k)}_{i,j},i,j}$ .", "Hence ${\\langle }(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\vec{\\alpha }}}}^{(k)}_{i,j}(w))_G\\mid w<\\omega {\\rangle }$ is definable in $V[A]$ .", "Also, by $(6)$ , $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{i,j})_G$ is definable from $(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta 
}}}^{(k)}_{i,j})_G$ , $\\kappa _{y}(r_{\\omega })$ and $D\\cup E$ which are all available in $V[A]$ .", "By definition of the sequence $\\langle (\\underaccent{\\sim }{\\delta }^{(k+1)}_{i,j})_G\\mid i<\\lambda ,j<\\omega \\rangle $ it is definable in $V[A]$ .", "So we conclude that $F_*\\in V[A]$ .", "$\\blacksquare _{claim}$ We keep the notation of $q^{(k)}_{i,j}$ from the proof of the claim, use proposition REF to find $F_{\\theta }$ such that $V[F_{\\theta }]=V[A]\\cap V[C_G\\cap \\theta ]$ Define $F_*=\\lbrace (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G\\mid k,j<\\omega , i<\\lambda \\rbrace , \\ \\ F^*=(E\\cup F_*)\\setminus \\theta \\uplus F_\\theta \\in V[A]$ Clearly, $F^*$ is a Mathis set and $F^*\\cap \\theta =F_\\theta $ .", "To see $(2)$ of the proposition, note that $D\\setminus \\theta =\\lbrace (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(0)}_i)_G\\mid i<\\lambda \\rbrace \\subseteq F^*$ , it follows that $D\\cup E\\setminus \\theta \\subseteq F^*$ .", "Moreover from $(6)$ it follows that for every $k,i,j$ , $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j}$ is forced by ${\\langle }q^{\\prime }, r_\\omega {\\rangle }\\in G$ to be below some $\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(0)}_{s,t}$ , so ${\\rm sup}(F_*)={\\rm sup}(D)$ , hence ${\\rm sup}(F^*)={\\rm sup}(D\\cup E)$ .", "To see $(3)$ , let $\\langle \\lambda _\\xi \\mid \\xi <otp(F_*)=:\\rho \\rangle $ be the increasing enumeration of $F_*$ , clearly $|\\rho |\\le \\lambda $ .Consider the function $R:\\rho \\rightarrow [\\rho ]^{<\\omega }$ defined by $R(\\xi )={\\langle }{\\langle }i_1,...,i_n{\\rangle },s{\\rangle }$ such that for some $i,j,k$ , $\\lambda _\\xi =(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G, \\ {\\langle }(\\vec{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\alpha }}}}^{(k)}_{i,j}(w))_G\\mid w\\le |I^{(k)}_{q^{(k)}_{i,j},i,j}|{\\rangle }={\\langle }\\lambda _{i_1},...,\\lambda _{i_n}{\\rangle }\\text{ and } (\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{i,j})_G=\\lambda _s$ By the claim, both ${\\langle }(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G\\mid k,j<\\omega , i<\\lambda {\\rangle }\\in V[A]$ , hence $R\\in V[A]$ , since $|\\rho |\\le \\lambda $ , then $R\\in V[A]\\cap V[C_G\\cap \\theta ]=V[F_\\theta ]\\subseteq V[F^*]$ .", "Notice that by $(7)$ , $i_1,...,i_n,s<i$ .", "Let us argue first that $F_*\\in V[F^*]$ , in $V[F^*]$ , we inductively define $\\langle \\beta _i\\mid i<\\rho \\rangle $ .", "Clearly $\\lbrace \\lambda _i\\mid i<\\rho \\rbrace \\cap \\theta +1=\\lbrace \\lambda _i\\mid i<\\epsilon \\rbrace \\in V[F_\\theta ]$ so we let $\\beta _i=\\lambda _i$ for $i<\\epsilon $ .", "Assume that $\\langle \\beta _j\\mid j<i\\rangle $ is defined, where $i>\\epsilon $ , in particular $\\beta _{i_1},...\\beta _{i_n}$ and $\\beta _s$ are defined.", "Let $I=Ind(F_*\\setminus D,F_*)\\subseteq \\rho $ , by the claim, $I\\in V[A]\\cap V[C_G\\cap \\theta ]= V[F_\\theta ]\\subseteq V[F^*]$ .", "Finally, note that $\\lbrace q^{(k)}_{i,j}\\mid i<\\lambda ,j<\\omega \\rbrace \\in V[A]\\cap V[C_G\\cap \\theta ]=V[F_\\theta ]\\subseteq V[F^*]$ and let $\\kappa _{i,j}$ be the measurable on which $mb(T^{(k)}_{q{(k)}_{i,j},i,j})$ splits on.", "Define $\\beta _i={\\rm min}(\\lbrace x\\in (F^*\\setminus \\lbrace \\beta _j\\mid j\\in I\\cap i\\rbrace 
)\\setminus \\beta _{s}+1\\mid \\ o^{(\\kappa _{i,j})}(x)\\ge (F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}(\\beta _{i_1},...,\\beta _{i_n})\\rbrace )$ This is a legitimate definition in $V[F^*]$ since we work hard to insure all the parameters used are there.", "Let us prove that $\\beta _\\xi =\\lambda _\\xi $ , inductively assume that $\\langle \\beta _j\\mid j<\\xi \\rangle =\\langle \\lambda _j\\mid j<\\xi \\rangle $ , we can assume that $\\xi >\\epsilon $ , then $\\lbrace \\beta _j\\mid j\\in I\\cap \\xi \\rbrace =\\lbrace \\lambda _j\\mid j\\in I\\cap \\xi \\rbrace =(F_*\\setminus D)\\cap \\lambda _\\xi $ and therefore $(F^*\\setminus \\lbrace \\beta _j\\mid j\\in I\\cap \\xi \\rbrace )\\cap (\\beta _s,\\lambda _\\xi )=[(E\\cup F_*)\\setminus (F_*\\setminus D)]\\cap (\\beta _s,\\lambda _\\xi )=(E\\cup D)\\cap (\\beta _s,\\lambda _\\xi )$ Assume that $i,k,j$ are such that $\\lambda _\\xi =(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G$ , then by induction hypothesis, $\\beta _s=\\lambda _s=(\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\beta }}}^{(k)}_{i,j})_G$ and ${\\langle }(\\vec{\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\alpha }}}}^{(k)}_{i,j}(w))_G\\mid w\\le |I^{(k)}_{q^{\\prime }_{i,j},i,j}|{\\rangle }={\\langle }\\lambda _{i_1},...,\\lambda _{i_n}{\\rangle }={\\langle }\\beta _{i_1},...,\\beta _{i_n}{\\rangle }$ By $(3)$ it follows that $(F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}({\\langle }\\beta _{i_1},...,\\beta _{i_n}{\\rangle })=o^{(\\kappa _{i,j})}((\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{\\delta }}}^{(k)}_{i,j})_G)=o^{(\\kappa _{i,j})}(\\lambda _\\xi )$ By $(6)$ , it follows that in the interval $(\\beta _s,\\lambda _\\xi )$ , there are no ordinals $x\\in F^*\\setminus \\lbrace \\beta _j\\mid j\\in I\\cap \\xi \\rbrace $ such that $(F^{(k)}_{q^{(k)}_{i,j},i,j})_{I^{(k)}_{q^{(k)}_{i,j},i,j}}({\\langle }\\beta _{i_1},...,\\beta _{i_n}{\\rangle })\\le o^{(\\kappa _{i,j})}(x)$ so $\\beta _\\xi \\ge \\lambda _\\xi $ .", "Also $\\lambda _\\xi \\in F^*\\setminus \\lbrace \\beta _j\\mid j\\in I\\cap \\xi \\rbrace $ and $F^{(k)}_{q^{\\prime }_{i,j},i,j}(\\beta _{i_1},...,\\beta _{i_k})=o^{(\\kappa _{i,j})}(\\lambda _\\xi )$ hence $\\lambda _\\xi = \\beta _\\xi $ .", "Thus $F_*\\in V[F^*]$ .", "From this $(3)$ easily follows, indeed, $D\\setminus \\theta ,F_*\\setminus E\\in V[F^*]$ since their indices inside $F_*$ are subsets of $\\theta $ , hence $E\\setminus \\theta =[(E\\cup F_*)\\setminus (F_*\\setminus E)]\\setminus \\theta =F^*\\setminus [\\theta \\cup (F_*\\setminus E)]\\in V[F^*]$ Also $D\\cap \\theta ,E\\cap \\theta \\in V[F_\\theta ]\\subseteq V[F^*]$ and therefore $D,E\\in V[F^*]$ which is what we needed.$\\blacksquare $ The following corollary provides a sufficient condition for the main result.", "It roughly says that given that $\\kappa $ changes cofinality in $V[A]$ , and given a single $C^{\\prime }\\subseteq C_G$ which captures all the initial segments of $A$ , we can glue the information needed to capture $A$ .", "Lemma 4.14 Assume $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ and $(IH)$ .", "Let $A \\in V[G] ,\\ A\\subseteq \\kappa $ and assume that $\\exists C^*\\subseteq C_G$ such that $C^*\\in V[A]$ and $\\forall \\alpha <\\kappa \\ A\\cap \\alpha \\in V[C^*]$ $cf^{V[A]}(\\kappa )<\\kappa $ Then $ \\exists C^{\\prime } \\subseteq C_G $ such that $ V[A]=V[C^{\\prime }]$ .", "Proof.", "Let $\\lambda :=cf^{V[A]}(\\kappa )<\\kappa $ and $\\langle \\alpha 
_i\\mid i<\\lambda \\rangle \\in V[A]$ unbounded and cofinal in $\\kappa $ witnessing this.", "By REF , there is $C_*\\subseteq C_G$ such that $|C_*|\\le \\lambda $ and $V[C_*]=V[\\langle \\alpha _i\\mid i<\\lambda \\rangle ]$ .", "Use REF to find $C_0\\subseteq C_G$ such that $C_0\\in V[A]$ and $C_*,C^*\\in V[C_0]$ .", "In $V[C_0]$ , let $\\pi _i:2^{\\alpha _i}\\leftrightarrow P(\\alpha _i)$ be any bijection.", "Since $A\\cap \\alpha _i\\in V[C_0]$ , there is $\\delta _i$ such that $\\pi _i(\\delta _i)=A\\cap \\alpha _i$ Note that the sequence $\\langle \\delta _i\\mid i<\\lambda \\rangle $ might not be inside $V[C_0]$ , but it is in $V[A]$ .", "Again by REF we can find $C^{\\prime \\prime }\\subseteq C_G$ such that $|C^{\\prime \\prime }|\\le \\lambda $ such that $V[\\langle \\delta _i\\mid i<\\lambda \\rangle ]=V[C^{\\prime \\prime }]$ By proposition REF , we can find some $C^{\\prime }\\subseteq C_G$ , $C^{\\prime }\\in V[A]$ , such that $C_0, C^{\\prime \\prime }\\in V[C^{\\prime }]$ .", "Now in $V[C^{\\prime }]$ we can compute $A$ as follows, since $C_0\\in V[C^{\\prime }]$ , also ${\\langle }\\pi _i\\mid i<\\lambda {\\rangle }\\in V[C^{\\prime }]$ , and since $C^{\\prime \\prime }\\in V[C^{\\prime }]$ also ${\\langle }\\delta _i\\mid i<\\lambda {\\rangle }\\in V[C^{\\prime }]$ .", "It follows that $A=\\cup _{i<\\lambda }A\\cap \\alpha _i=\\cup _{i<\\lambda }\\pi _i(\\delta _i)\\in V[C^{\\prime }]$ .", "$\\blacksquare $" ], [ "Subsets of $\\kappa $ which does not stabilize", "In this section we assume that $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , $A$ does not stabilize and $(IH)$ .", "We do not assume in general that $A\\subseteq \\kappa $ .", "However, if $A\\in V[G]$ is such that $A\\subseteq \\kappa $ and does not stabilize, then by REF , $cf^{V[A]}(\\kappa )<\\kappa $ .", "By lemma REF , to conclude the main result for $A$ , it remains to find $C^*\\in V[A]$ such that for every $\\alpha <\\kappa $ , $A\\cap \\alpha \\in V[C^*]$ .", "Along this chapter we construct such $C^*$ .", "The naive approach is to following: Fix a cofinal sequence ${\\langle }\\alpha _i\\mid i<cf^{V[A]}(\\kappa ){\\rangle }\\in V[A]$ , since for every $i,$ $A\\cap \\alpha _i$ is bounded, apply REF to find $C_i\\subseteq C_G$ such that $V[A\\cap \\alpha _i]=V[C_i]$ and let $C^*=\\cup _{i<cf^{V[A]}(\\kappa )}C_i$ .", "There are several reasons that $C^*$ in not the desired set: The sequence ${\\langle }C_i\\mid i<cf^{V[A]}(\\kappa ){\\rangle }$ is defined in $V[G]$ and by adding finitely many elements to each $C_i$ we might accumulate an infinite sequence which is not in $V[A]$ .", "As we have seen in REF , union of two sets might lose information, so it is possible that for some $j$ , $C_j\\notin V[\\cup _{i<cf^{V[A]}(\\kappa )}C_i]$ .", "For problem (I), we need to insure that the choice we make is inside $V[A]$ , for this we use the definition of a Mathias set, in $V[A]$ we can choose a sequence ${\\langle }D_i\\mid i<cf^{V[A]}(\\kappa ){\\rangle }$ such that $V[A\\cap \\alpha _i]=V[D_i]$ and each $D_i$ is a Mathias set.", "By proposition REF , $D_i\\subseteq ^* C_G$ , so it might be that $D_i\\setminus C_G\\ne \\emptyset $ .", "By fixing problem I, we have created a new problem: The sequence $D:=\\cup _{i<cf^{V[A]}(\\kappa )}D_i$ might accumulate infinite noise i.e.", "$|D\\setminus C_G|\\ge \\omega $ .", "Lemma REF and corollaries REF ,REF , show we can remove this noise and stay inside $V[A]$ .", "Lemma 4.15 Let $\\langle D_i\\mid i<\\theta \\rangle \\in V[A]$ such that $\\lambda <\\kappa $ and: $D_i$ is 
a Mathias set $min(D_i)\\ge \\lambda $ .", "Then there is $\\langle D^*_i\\mid i<\\lambda \\rangle \\in V[A]$ such that: $\\underset{i<\\lambda }{\\bigcup }D^*_i$ is Mathias.", "$\\forall i<\\lambda , D_i=^*D^*_i\\subseteq D_i$ .", "Proof.", "By removing finitely many elements from every $D_i$ , we can assume that $otp(D_i)$ is a limit ordinal.", "If every $D_i=\\emptyset $ , then the claim is trivial.", "Otherwise, since $D_i$ is a Mathias set, ${\\rm sup}(D_i)\\in \\overline{X}_A$ .", "Denote $D=\\underset{i<\\lambda }{\\bigcup }D_i$ and $\\nu ^*={\\rm sup}(D)>\\lambda $ .", "Note that $\\nu ^*\\in \\overline{X}_A$ , since $\\nu ^*={\\rm sup}({\\rm sup}(D_i)\\mid i<\\lambda )$ and $\\overline{X}_A$ is closed.", "Proceed by induction on $\\nu ^*$ , by lemma REF , $D_i\\setminus C_G$ is finite.", "It follows that $|D\\setminus C_G|\\le \\lambda <\\nu ^*$ .", "We would like to remove the noise accumulated in $D$ by intersecting it with sets in $\\cap \\vec{U}(\\nu ^*)$ .", "Since $\\nu ^*\\in Lim(C_G)$ , we can apply REF to $D\\setminus C_G$ and find a set $Y^*\\in \\cap \\vec{U}(\\nu ^*)$ such that $Y^*\\cap (D\\setminus C_G)=\\emptyset $ .", "Denote by $D^*=D\\cap Y^*\\subseteq C_G$ .", "Note that $D^*\\in V[A]$ since $D\\in V[A]$ and $Y^*\\in V$ .", "Consider the set $Z^{(0)}=\\lbrace \\nu <\\nu ^*\\mid Y^*\\cap \\nu \\in \\cap \\vec{U}(\\nu )\\rbrace $ to see that $Z^{(0)}\\in \\cap \\vec{U}(\\nu ^*)$ , let $i<o^{\\vec{U}}(\\nu ^*)$ , then $j_{U(\\nu ^*,i)}(Y^*)\\cap \\nu ^*=Y^{*}\\in \\underset{\\xi <i}{\\bigcap }U(\\nu ^*,\\xi )$ .", "By coherency, the order of $\\nu ^*$ in $j_{U(\\nu ^*,i)}(\\vec{U})$ is $i$ , which implies that $\\underset{\\xi <i}{\\cap }U(\\nu ^*,\\xi )=\\cap j(\\vec{U})(\\nu ^*)$ By definition $\\nu ^*\\in j(Z^{(0)})$ thus $Z^{(0)}\\in U(\\nu ^*,i)$ for every $i<o^{\\vec{U}}(\\nu ^*)$ and $Z^{(0)}\\in \\bigcap \\vec{U}(\\nu ^*)$ .", "By proposition REF , there is $\\eta _0<\\nu ^*$ such that $C_G\\cap (\\eta _0,\\nu ^*)\\subseteq Z^{(0)}$ .", "Consider the sequence of Mathias sets ${\\langle }D_i\\cap \\eta _0\\mid i<\\lambda {\\rangle }$ , apply the induction hypothesis to it and find $\\langle D^{\\prime }_i\\mid i<\\lambda \\rangle $ such that $\\underset{i<\\lambda }{\\bigcup }D^{\\prime }_i$ is Mathias.", "$D_i\\cap \\eta _0=^*D^{\\prime }_i\\subseteq \\eta _0$ .", "Define $D^*_i=D^{\\prime }_i\\uplus (D_i\\cap Y^{*}\\setminus \\eta _0)$ Let us argue that $\\langle D^*_i\\mid i<\\lambda {\\rangle }$ is as wanted: to see condition $(1)$ , note that the set $\\underset{i<\\lambda }{\\cup }D_i^*=D^*\\setminus \\eta _0\\cup (\\underset{i<\\lambda }{\\cup }D_i^{\\prime })$ is a Mathias sets as the union of two Mathias sets.", "For condition $(2)$ , it is clear that $D_i^*\\subseteq ^* D_i$ .", "Toward a contradiction, assume that there is $i<\\lambda $ and $\\delta \\le {\\rm sup}(D_i)$ is minimal such that $|(D_i\\cap \\delta )\\setminus (D^*_i\\cap \\delta )|\\ge \\omega $ By the definition of $D^*_i$ , $\\delta >\\eta _0$ and $\\delta \\in Lim(D_i)$ .", "By the definition of $\\eta _0$ , $\\delta \\in C_G\\cap (\\delta _0,\\nu ^*)\\in Z^{(0)}\\cup \\lbrace \\nu ^*\\rbrace $ which means that $\\delta \\cap Y^{*}\\in \\bigcap \\vec{U}(\\delta )$ .", "Since $D_i$ is Mathias, there is $\\xi <\\delta $ such that $D_i\\cap (\\xi ,\\delta )\\subseteq Y^{*}$ , in particular $D_i\\cap (\\xi ,\\delta )=D_i\\cap Y^{*}\\cap (\\xi ,\\delta )=D^*_i\\cap (\\xi ,\\delta )$ So $(D_i\\cap \\delta )\\setminus (D^*_i\\cap \\delta )=(D_i\\cap \\xi )\\setminus (D^*_i\\cap \\xi )$ , 
this is a contradiction to the minimality of $\\delta $ .", "$\\blacksquare $ Corollary 4.16 Let $\\langle D_i\\mid i<\\theta \\rangle \\in V[A]$ such that $\\theta <\\kappa ^+$ and: $D_i$ is a Mathias set.", "$D_i\\cap \\kappa ^*=F_{\\kappa ^*}$ where $V[F_{\\kappa ^*}]=V[A]\\cap V[C_G\\cap \\theta ]$ .", "$ {\\langle }D_i\\mid i<\\theta {\\rangle }$ is $\\subseteq ^*$ -increasing.", "Then there is $\\langle D^*_i\\mid i<\\theta \\rangle \\in V[A]$ such that: $\\underset{i<\\theta }{\\bigcup }D^*_i$ is a Mathias set.", "$\\forall i<\\theta , D_i=^*D^*_i\\subseteq D_i$ .", "$D_i^*\\cap \\kappa ^*=F_{\\kappa ^*}$ Proof.", "Let $\\lambda =cf^{V[A]}(\\theta )\\le \\kappa $ .", "Since $\\kappa $ is singular in $V[A]$ , $\\lambda <\\kappa $ and let ${\\langle }\\theta _i\\mid i<\\lambda {\\rangle }\\in V[A]$ be cofinal in $\\theta $ .", "We split each $D_{\\theta _i}$ to three intervals: $D_{\\theta _i}=D_{\\theta _i}\\cap \\kappa ^*\\uplus D_{\\theta _i}\\cap (\\kappa ^*,\\lambda )\\uplus D_{\\theta _i}\\setminus \\lambda $ Denote these sets by $A_i,B_i,C_i$ respectively.", "By assumption, $A_i$ is constantly $F_{\\kappa ^*}$ .", "Apply REF to the sequence ${\\langle }C_i\\mid i<\\lambda {\\rangle }$ to obtain ${\\langle }C^*_i\\mid i<\\lambda {\\rangle }\\in V[A]$ such that $C^*_i=^*C_i$ and $C^*:=\\cup _{i<\\lambda }C^*_i$ is Mathias.", "As for the sequence ${\\langle }B_i\\mid i<\\lambda {\\rangle }$ , either $\\lambda \\le \\kappa ^*$ in which case $B_i=\\emptyset $ .", "Otherwise $\\lambda >\\kappa ^*$ , and by removing finitely many points from $B_i$ , we can assume that ${\\rm sup}(B_i)\\in Lim(B_i)\\in \\overline{X}_A\\setminus \\kappa ^*\\subseteq X_A$ i.e.", "${\\rm sup}(B_i)$ is singular in $V[A]$ and also strong limit.", "Since $\\lambda >\\kappa ^*$ is regular in $V[A]$ , it follows that $\\lambda \\notin X_A$ , hence, ${\\rm sup}(B_i)\\le {\\rm max}(X_A\\cap \\lambda )<\\lambda $ .", "By theorem REF , there is $\\lambda ^{\\prime }<\\lambda $ such that for every $\\lambda ^{\\prime }\\le \\delta <\\lambda $ , $B_\\delta =^*B^*$ .", "Note that $F_{\\kappa ^*}\\cup B^*\\cup C^*$ is a Mathias set as the union of finitely many of them.", "Let $D_i^*:=D_i\\cap (F_{\\kappa ^*}\\cup B^*\\cup C^*)$ .", "First, since $\\cup _{i<\\theta }D^*_i\\subseteq F_{\\kappa ^*}\\cup B^*\\cup C^*$ , and $F_{\\kappa ^*}\\cup B^*\\cup C^*$ is Mathias, then also $\\cup _{i<\\theta }D^*_i$ by the criteria of REF .", "Also $(3)$ follows trivially.", "To see $(2)$ , It suffices to see that for each interval $D_i^*\\cap \\kappa ^*=D_i\\cap \\kappa ^*, \\ D_i^*\\cap (\\kappa ^*,\\lambda )=^*D_i\\cap (\\kappa ^*,\\lambda ), \\ D^*_i\\setminus \\lambda =^*D_i\\setminus \\lambda $ indeed $D_i^*\\cap \\kappa ^*=D_i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "Find $\\lambda ^{\\prime }\\le \\delta <\\lambda $ such that $i<\\theta _{\\delta }$ , then $D_i\\subseteq ^* D_{\\theta _\\delta }$ .", "In particular, $D_i\\setminus \\lambda \\subseteq ^* D_{\\theta _\\delta }\\setminus \\lambda = C_\\delta =^*C^*_\\delta \\subseteq C^*\\text{ and }D_i\\cap (\\kappa ^*,\\lambda )\\subseteq ^* D_{\\theta _\\delta }\\cap (\\kappa ^*,\\lambda )= B_\\delta =^*B^*$ So $D_i\\setminus \\lambda =^*D_i\\cap C^*\\setminus \\lambda =D^*_i\\setminus \\lambda \\text{ and }D_i\\cap (\\kappa ^*,\\lambda )=^*D_i\\cap B^*\\cap (\\kappa ^*,\\lambda )=D^*_i\\cap (\\kappa ^*,\\lambda )$ Therefore $D_i=^*D^*_i$ .$\\blacksquare $ Corollary 4.17 Let $\\langle D_i\\mid i<\\theta \\rangle \\in V[A]$ such that $\\theta <\\kappa ^+$ and: $D_i$ is a 
Mathias set.", "$D_i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "$ {\\langle }D_i\\mid i<\\theta {\\rangle }$ is $\\subseteq ^*$ -increasing.", "Then in $V[A]$ there is a Mathias set $E\\subseteq {\\rm sup}\\lbrace \\theta _i\\mid i<\\theta \\rbrace $ which is a $\\subseteq ^*$ -bounded for the sequence $\\langle D_i\\mid i<\\theta \\rangle $ such that $E\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "Proof.", "Simply apply REF , to find ${\\langle }D^*_i\\mid i<\\theta {\\rangle }$ then $E=\\cup _{i<\\theta }D^*_i$ will be as wanted.$\\blacksquare $ As for problem II mentioned in the beginning of this section.", "At the first step will be to take $C^*$ , the union of the $C_i$ 's.", "Then we $\\subseteq ^*$ increase every $C^*\\cap \\alpha _i\\subseteq ^* C^{(1)}_i$ so the $C_i\\in V[C^{(1)}_i]$ .", "Repeating this process transfinitely, this will eventually stabilize to obtain the desired set.", "Note also that this definition must take place inside $V[A]$ .", "The following three propositions formally describe this process, we prove them by induction on $\\nu \\in X_A$ .", "Recall that under the assumption of this section $\\kappa \\in X_A$ .", "Theorem 4.18 Assume that $\\nu \\in X_A$ , $\\theta <\\nu ^+$ and let $\\langle D_i\\mid i<\\theta \\rangle \\in V[A]$ such that: $D_i\\subseteq \\theta _i<\\nu $ is a Mathias set, ${\\langle }\\theta _i\\mid i<\\theta {\\rangle }$ is non decreasing.", "$D_i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "Then there is $\\langle D^*_i\\mid i<\\theta \\rangle \\in V[A]$ such that $D^*:=\\underset{i<\\theta }{\\bigcup }D^*_i$ is Mathias.", "$D_i\\subseteq ^*D^*_i\\subseteq \\theta _i$ and $D_i\\in V[D^*_i]$ .", "$D^*_i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "$\\langle D^*_i\\mid i<\\theta \\rangle $ is $\\subseteq ^*$ -increasing.", "Theorem 4.19 Assume that $\\nu \\in X_A$ , $\\theta <\\nu ^+$ and let $\\langle D_i\\mid i<\\theta \\rangle \\in V[A]$ such that: $D_i\\subseteq \\theta _i<\\nu $ is a bounded in $\\nu $ Mathias set, ${\\langle }\\theta _i\\mid i<\\theta {\\rangle }$ is non decreasing.", "$D_i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "Then there is $\\langle D^*_i\\mid i<\\theta \\rangle \\in V[A]$ such that $D^*:=\\underset{i<\\theta }{\\bigcup }D^*_i$ is Mathias.", "$\\forall i<\\theta .", "D^*_i\\in V[D^*]$ .", "$D_i\\subseteq ^*D^*_i\\subseteq \\theta _i$ and $D_i\\in V[D^*_i]$ .", "$D^*_i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "$\\langle D^*_i\\mid i<\\theta \\rangle $ is $\\subseteq ^*$ -increasing.", "Proposition 4.20 Assume that $\\nu \\in X_A$ , $D,D^{\\prime }\\in V[A]$ be such that, $D,D^{\\prime }\\subseteq \\nu $ are Mathias sets.", "$D\\cap \\kappa ^*=D^{\\prime }\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "Then there is $D^*\\in V[A]$ such that $D^*$ is a Mathias set $D\\cup D^{\\prime }\\subseteq D^*\\subseteq {\\rm sup}(D\\cup D^{\\prime })$ .", "$D,D^{\\prime }\\in V[D^*]$ .", "$D^*\\cap \\kappa ^*=F_{\\kappa ^*}$ All three theorems are clear in case $\\nu \\le \\kappa ^*$ .", "Assume that $\\nu >\\kappa ^*$ , in particular, $cf^{V[A]}(\\nu )<\\nu $ .", "First we prove a REF : Proof of REF .", "Proof.", "Let us define inductively in $V[A]$ the sequence ${\\langle }D^*_i\\mid i<\\theta {\\rangle }$ , define $D^*_0=D_0$ .", "At successor stage, use the induction hypothesis and apply proposition REF to the sets $D^*_\\alpha ,D_{\\alpha +1}$ , which by assumption are bounded in $\\nu $ , there is $D^*_{\\alpha +1}$ Mathias such that $D^*_\\alpha \\cup D_{\\alpha +1}\\subseteq D^*_{\\alpha +1}\\subseteq \\theta _{\\alpha +1}$ , $D^*_{\\alpha +1}\\cap \\kappa 
^*=F_{\\kappa ^*}$ and $D_{\\alpha +1}\\in V[D^*_{\\alpha +1}]$ .", "At limit stage $\\delta <\\theta $ , the sequence ${\\langle }D^*_i\\mid i<\\delta {\\rangle }$ is defined and $\\subseteq ^*$ increasing.", "By REF , there is a Mathias set $E^*$ such that $E^*\\cap \\kappa ^*=F_{\\kappa ^*}$ , $E\\subseteq {\\rm sup}\\lbrace \\theta _i\\mid i<\\delta \\rbrace \\le \\theta _\\delta <\\nu $ which is a $\\subseteq ^*$ -bound.", "Apply REF to $E^*,D_{\\delta }$ to obtain $D^*_{\\delta }\\subseteq \\theta _{\\delta }$ .", "Then $(2),(3),(4)$ are clear.", "At stage $\\theta $ , we also need to insure $(1)$ , by REF , we can change the constructed ${\\langle }D_i^*\\mid i<\\theta {\\rangle }$ to ${\\langle }D^{**}_i\\mid i<\\theta {\\rangle }$ such that $D^*_i=^*D^{**}_i$ , $D^{**}_i\\cap \\kappa ^*=F_{\\kappa ^*}$ and $(1)$ holds.", "It suffices to note that $(2),(3),(4)$ still hold if we only change finitely many elements of $D^*_i$ .$\\blacksquare _{\\ref {Making a sequence stae increasing}}$ Proof of REF .", "The crucial difference between REF and REF is requirement $(2)$ that $D^*_i\\in V[\\cup _{j<\\theta }D^*_j]$ .", "Apply REF to the sequence ${\\langle }D_i\\mid i<\\theta {\\rangle }$ get $\\langle D^0_i\\mid i<\\theta \\rangle $ such that: $\\underset{i<\\theta }{\\bigcup }D^0_i$ is Mathias.", "$D_i\\subseteq ^*D^0_i\\subseteq \\theta _i$ and $D_i\\in V[D^0_i]$ .", "$D^0_i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "$\\langle D^0_i\\mid i<\\theta \\rangle $ is $\\subseteq ^*$ -increasing.", "Define a matrix of sets $\\langle D^\\xi _i\\mid i<\\theta ,\\xi <\\nu ^+\\rangle $ recursively on the row $\\xi <\\nu ^+$ such that: For each $\\xi <\\nu ^+$ , $\\langle D^\\xi _i\\mid i<\\theta {\\rangle }$ is $\\subseteq ^*$ - increasing.", "(Each row is $\\subseteq ^*$ increasing) For each $i<\\theta $ , $\\langle D^\\xi _i\\mid \\xi <\\nu ^+\\rangle $ is $\\subseteq ^*$ -increasing.", "(Each column is $\\subseteq ^*$ increasing) $D^\\xi _i\\subseteq \\theta _i$ and $D_i\\in V[D^\\xi _i]$ .", "(sets in column $i$ are subsets of $\\theta _i$ ) $D^{(\\xi )}:=\\underset{j<\\lambda }{\\bigcup }D^\\xi _j$ is Mathias.", "(The union of each row is Mathias) $D^\\xi _i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "(All the sets are the same up to $\\kappa ^*$ ) For every $i<\\theta $ and every $\\xi <\\nu ^+$ , $D^{(\\xi )}\\cap \\theta _i\\subseteq ^* D^{(\\xi +1)}_i$ .", "(The $i$ -th set in a successor row, $\\subseteq ^*$ includes the union of the previous row up to $\\theta _i$ ) At successor row, assume $\\langle D^{\\alpha }_i\\mid i<\\theta \\rangle $ is defined.", "For each $i<\\theta $ apply REF to $D_i$ and $D^{(\\alpha )}\\cap \\theta _i$ to obtain the sets $E^{(\\alpha +1)}_i$ which satisfy $(2),(3),(5),(6)$ .", "Apply REF to the sequence ${\\langle }E^{(\\alpha +1)}_i\\mid i<\\theta {\\rangle }$ , obtain $E^{(\\alpha +1)}_i\\subseteq ^*D^{(\\alpha +1)}_i\\subseteq \\theta _i$ , then also $(1),(4)$ holds without ruining $(2),(3),(5),(6)$ .", "For limit $\\delta <\\nu ^+$ the sequences $\\langle D^{(\\rho )}_i\\mid i<\\theta \\rangle $ are defined for every $\\rho <\\delta $ .", "For each $i<\\theta $ , the sequence ${\\langle }D^{(\\rho )}_i\\mid \\rho <\\delta {\\rangle }$ is $\\subseteq ^*$ -increasing hence by corollary REF , there is a Mathias $E^{(\\delta )}_i\\subseteq \\theta _i$ which is a $\\subseteq ^*$ -bound, this insures $(2),(5)$ .", "Apply REF to $E^{(\\delta )}_i$ and $D_i$ to obtain $F^{(\\delta )}_i$ to insure $(3)$ and finally apply REF to the sequence ${\\langle }F^{(\\delta )}_i\\mid 
i<\\theta {\\rangle }$ , obtain the sequence ${\\langle }D^{(\\delta )}_i\\mid i<\\theta {\\rangle }$ which satisfy $(1)-(5)$ .", "Hence the sequence $\\langle D^{(\\xi )}_j\\mid j<\\theta \\rangle $ is defined for every $\\xi <\\nu ^+$ .", "For every column $j<\\theta $ , $\\langle D^{(\\xi )}_j\\mid \\xi <\\nu ^+\\rangle $ is a $\\subseteq ^*$ -increasing sequence of subsets of $\\theta _j$ , thus there is $\\xi _j<\\nu ^+$ from which this sequence stabilizes.", "Let $\\xi ^*={\\rm sup}(\\xi _j\\mid j<\\theta )<\\nu ^+$ .", "Let us prove that $D^{(\\xi ^*)}_i$ is as wanted.", "By the construction of the sequence $(1),(3),(4),(5)$ of the theorem follows directly.", "To see $(2)$ , for every $\\xi ^*\\le \\xi ^{\\prime }<\\nu ^+$ and for every $i<\\theta $ , $D^{(\\xi ^*)}_i=^*D^{\\xi ^{\\prime }}_i$ .", "In particular $D^{\\xi ^*+1}_i=^*D^{\\xi ^*}_i$ .", "Hence $D^{\\xi ^*}_i\\subseteq D^{(\\xi ^*)}\\cap \\theta _i\\subseteq ^* D^{\\xi ^*+1}_i=^* D^{\\xi ^*}_i$ Hence $D^{\\xi ^*}_i=^* D^{(\\xi ^*)}\\cap \\theta _i\\in V[D^{(\\xi ^*)}]$ .$\\blacksquare _{\\ref {maintheoremnonstab}}$ Proof of proposition REF .", "Let $cf^{V[A]}(\\nu )=\\lambda <\\nu $ and fix a cofinal sequence ${\\langle }\\nu _i\\mid i<\\lambda {\\rangle }\\in V[A]$ .", "For each $i<\\lambda $ , apply the induction hypothesis of REF to $D\\cap \\nu _i,D^{\\prime }\\cap \\nu _i\\subseteq E_i\\subseteq \\nu _i$ such that $D\\cap \\nu _i,D^{\\prime }\\cap \\nu _i\\in V[E_i]$ and $E_i\\cap \\kappa ^*=F_{\\kappa ^*}$ .", "Apply theorem REF to the sequence ${\\langle }E_i\\mid i<\\lambda {\\rangle }$ to find a sequence ${\\langle }E^*_i\\mid i<\\lambda {\\rangle }$ , such that $E_i\\subseteq ^* E_i^*$ , $E_i\\in V[E^*]$ , where $E^*:=\\cup _{i<\\lambda }E^*_i$ is a Mathias set.", "Then $|D\\cup D^{\\prime }\\setminus E^*|\\le \\lambda $ .", "As in the proof of REF , in the model $V[E^*]$ , $\\forall i<\\lambda D\\cap \\nu _i,D^{\\prime }\\cap \\nu _i\\in V[E^*]$ so we can code the sequences ${\\langle }D\\cap \\nu _i\\mid i<\\lambda {\\rangle },{\\langle }D^{\\prime }\\cap \\nu _i\\mid i<\\lambda {\\rangle }$ as sequence of ordinals ${\\langle }\\delta _i\\mid i<\\lambda {\\rangle }$ (fixing enumerations of $P^{V[E^*]}(\\nu _i)$ ).", "By REF , there is a Mathias set $R\\in V[A]$ , $|R|\\le \\lambda $ such that $V[R]=V[{\\langle }\\delta _i\\mid i<\\lambda {\\rangle }]$ .", "Apply REF to $D\\cup D^{\\prime }\\setminus E^*, R$ and $E^*$ to find $G\\in V[A]$ Mathias such that $D\\cup D^{\\prime }\\setminus \\lambda , E^*\\setminus \\lambda \\subseteq G $ , $G\\cap \\lambda =F_\\lambda $ and $E^*, R\\in V[G]$ .", "Let $G_0=F_{\\kappa ^*}\\cup G\\cap (\\kappa ^*,\\lambda )$ , apply induction hypothesis of REF to $G_0,(D\\cup D^{\\prime })\\cap \\lambda $ and find $G_1\\subseteq \\lambda $ such that $(D\\cup D^{\\prime })\\cap \\lambda ,G_0\\subseteq G_1$ , $G_1\\cap \\kappa ^*=F_{\\kappa ^*}$ and $G_0\\in V[G_1]$ .", "Finally let $D^*=F_{\\kappa ^*}\\cup (G_1\\cap (\\kappa ^*,\\lambda ))\\cup (G\\setminus \\lambda )$ Clearly, $D^*$ is a Mathias set, $D^*\\cap \\kappa ^*=F_{\\kappa ^*}$ thus $(1),(4)$ of theorem REF hold.", "For $(2)$ , ${\\rm sup}(D^*)={\\rm sup}(G)={\\rm sup}(D\\cup D^{\\prime })$ $(D\\cup D^{\\prime })\\cap \\kappa ^*=F_{\\kappa ^*}\\subseteq D^*, \\ D\\cup D^{\\prime }\\cap (\\kappa ^*,\\lambda )\\subseteq G_1\\cap (\\kappa ^*,\\lambda )\\subseteq D^*$ and $D\\cup D^{\\prime }\\setminus \\lambda \\subseteq G\\setminus \\lambda \\subseteq D^*$ Hence $D\\cup D^{\\prime }\\subseteq D^*$ .", "Finally to see $(3)$ , 
$D^*\\cap \\kappa ^*=F_{\\kappa ^*},\\ D^*\\cap (\\kappa ^*,\\lambda )=G_1\\cap (\\kappa ^*,\\lambda ), \\ D^*\\setminus \\lambda =G\\setminus \\lambda $ Hence $F_{\\kappa ^*},G_1\\cap (\\kappa ^*,\\lambda ),G\\setminus \\lambda \\in V[D^*]$ , so $G_1\\cap \\kappa ^*\\in V[A]\\cap V[C_G\\cap \\kappa ^*]=V[F_{\\kappa ^*}]\\subseteq V[D^*]$ , so $G_1\\in V[D^*]$ .", "It follows that $G_0\\in V[G_1]\\subseteq V[D^*]$ .", "By definition of $G_0$ , $G\\cap (\\kappa ^*,\\lambda )=G_0\\setminus \\kappa ^*\\in V[D^*]$ , and clearly $G\\cap \\kappa ^*\\in V[F_{\\kappa ^*}]\\subseteq V[D^*]$ .", "Therefore $G\\cap \\kappa ^*,G\\cap (\\kappa ^*,\\lambda ),G\\setminus \\lambda \\in V[D^*]\\text{ and }G\\in V[D^*]$ By definition of $G$ , $E^*,R\\in V[G]\\subseteq V[D^*]$ , hence ${\\langle }\\delta _i\\mid i<\\lambda {\\rangle }$ and the coding of $P^{V[E^*]}(\\nu _i)$ is in $V[D^*]$ so the sequences ${\\langle }D\\cap \\nu _i\\mid i<\\lambda {\\rangle },{\\langle }D^{\\prime }\\cap \\nu _i\\mid i<\\lambda {\\rangle }\\in V[D^*]$ .", "Therefore, $D,D^{\\prime }\\in V[D^*]$ , as wanted.$\\blacksquare _{\\ref {boundedUnionOfgenerics}}$ This conclude the induction for REF -REF for every $\\nu \\in X_A$ and in particular for $\\kappa $ .", "Let us conclude the main result for subsets of $\\kappa $ which do not stabilize: Corollary 4.21 Assume that $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , $(IH)$ , $A\\subseteq \\kappa $ , $A\\in V[G]$ and $A$ does not stabilize, then there is $C^{\\prime }\\subseteq C_G$ such that $V[A]=V[C^{\\prime }]$ .", "Proof.", "By REF , $\\lambda :=cf^{V[A]}(\\kappa )<\\kappa $ and let ${\\langle }\\beta _i\\mid i<\\lambda {\\rangle }\\in V[A]$ be cofinal.", "By REF there is a sequence of Mathias sets $\\langle D^{\\prime }_i\\mid i<\\lambda \\rangle \\in V[A]$ such that $V[D^{\\prime }_i]=V[A\\cap \\beta _i]$ and $D^{\\prime }_i\\subseteq \\beta _i$ and denote $D_i=D^{\\prime }_i\\setminus \\kappa ^*\\cup F_{\\kappa ^*}$ .", "Then the sequence $\\langle D_i\\mid i<\\lambda {\\rangle }\\in V[A]$ and $A\\cap \\beta _i\\in V[D_i]$ .", "Use REF to find $\\langle D^*_i\\mid i<\\lambda \\rangle $ and set $D^*=\\underset{i<\\lambda }{\\cup }D^*_i$ .", "Then $D^*$ is Mathias and therefore $D^*\\subseteq ^* C_G$ .", "Let $C^*=C_G\\cap D^*$ .", "Hence $C^*=^*D^*$ and $V[C^*]=V[D^*]$ .", "Finally, for every $\\alpha <\\kappa $ , find $i<\\lambda $ such that $\\alpha <\\beta _i$ .", "By the properties of $D^*$ , $D_i\\in V[D^*]$ , hence, $A\\cap \\beta _i\\in V[D^*]$ .", "Note that $A\\cap \\alpha =(A\\cap \\beta _i)\\cap \\alpha $ and therefore $A\\cap \\alpha \\in V[D^*]=V[C^*]$ .", "Finally, apply REF .$\\blacksquare $" ], [ "subsets of $\\kappa $ which stabilize", "In this section assume that $o^{\\vec{U}}(\\kappa )<\\kappa ^+$ , $(IH)$ hence by REF , $\\zeta _0:=cf^{V[G]}(\\kappa )<\\kappa $ .", "Let $A\\in V[G]$ is a subsets of $\\kappa $ such that $A$ stabilize i.e.", "there is $\\lambda <\\kappa $ such that $\\forall \\alpha <\\kappa \\ A\\cap \\alpha \\in V[C_G\\cap \\lambda ]$ Note that if $A\\in V[C_G\\cap \\beta ]$ for some $\\beta <\\kappa $ then we can use $(IH)$ , so we also assume that $A$ is fresh with respect to the model $V[C_G\\cap \\lambda ]$ .", "Again we would like to apply lemma REF , we will use freshness and work a little bit to prove $cf^{V[A]}(\\kappa )<\\kappa $ , while finding $C^*$ is easy: Increase $\\lambda $ if necessary, and assume ${\\rm max}\\lbrace \\kappa ^*,\\zeta _0\\rbrace \\le \\lambda <\\kappa $ .", "by proposition REF , find $F_\\lambda \\subseteq 
\\lambda $ a Mathias set such that $V[F_\\lambda ]=V[A]\\cap V[C_G\\cap \\lambda ]$ .", "Define $C^*=F_\\lambda \\cap C_G=^* F_\\lambda $ , then $C^*\\in V[A]$ and $\\forall \\alpha <\\kappa .", "A\\cap \\alpha \\in V[A]\\cap V[C_G\\cap \\lambda ]=V[F_\\lambda ]=V[C^*]$ It remains to see that: Proposition 4.22 $cf^{V[A]}(\\kappa )<\\kappa $ .", "Proof.", "By REF , let $\\mathbb {R}\\subseteq RO(\\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda )$ for which $V[C^*]=V[H_{C^*}]$ for some $V$ -generic filter $H_{C^*}\\subseteq \\mathbb {R}$ and denote the quotient forcing (definition REF ) by $\\mathbb {Q}:=(\\mathbb {M}[\\vec{U}]\\upharpoonright \\lambda )/H_{C^*}$ .", "To complete $V[C^*]$ to $V[G]$ , it remains to force above $V[C^*]$ with $\\mathbb {P}:=\\mathbb {Q}\\times \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ , let $H_{\\mathbb {Q}}\\times G\\upharpoonright (\\lambda ,\\kappa )\\subseteq \\mathbb {P}$ be $V[C^*]$ -generic such that $V[C^*][H_{\\mathbb {Q}}\\times G\\upharpoonright (\\lambda ,\\kappa )]=V[G]$ .", "Notice that for every $\\lambda \\le \\alpha <\\kappa $ with $o^{\\vec{U}}(\\alpha )>0$ we have $|\\mathbb {Q}\\times \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\alpha )|<{\\rm min}\\lbrace \\nu >\\alpha \\mid o^{\\vec{U}}(\\nu )=1\\rbrace $ Let $\\underaccent{\\sim }{A}$ be a $\\mathbb {P}$ -name for $A$ and assume that $\\Vdash _{\\mathbb {P}} \\underaccent{\\sim }{A}\\text{ is fresh}$ Let ${\\langle }c_i\\mid i<\\zeta _0\\rangle \\in V[G]$ be a cofinal continuous subsequence of $C_G$ such that $c_0>\\lambda $ .", "Fix $\\langle \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{c}}}_i\\mid i<\\zeta _0{\\rangle }\\in V[C^*]$ a sequence of $\\mathbb {P}$ -names for ${\\langle }c_i\\mid i<\\zeta _0\\rangle $ .", "Find $p={\\langle }p_0,p_1{\\rangle }\\in H_{\\mathbb {Q}}\\times G\\upharpoonright (\\lambda ,\\kappa )$ such that $p\\Vdash _{\\mathbb {P}} \\langle \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{c}}}_i\\mid i<\\zeta _0{\\rangle }\\text{ is a cofinal continuous subsequence of }\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{C_G}}}$ For every $i<\\zeta _0$ and $q\\in \\mathbb {Q}/p_0$ , consider the set $D_{i,q}$ of all conditions $p_1\\le r\\in \\mathbb {M}[\\vec{U}]\\upharpoonright (\\lambda ,\\kappa )$ such that one of the following: $ \\exists \\alpha .\\ {\\langle }q,r{\\rangle }\\Vdash _{\\mathbb {P}}\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{c}}}_i=\\alpha \\ \\wedge \\ \\exists B.", "\\ {\\langle }q,r{\\rangle }\\Vdash _{\\mathbb {P}} \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{A}}}\\cap \\alpha =B$ .", "Denote this statement by $\\phi _i(q,r)$ .", "For every $r^{\\prime }\\ge r$ , $\\lnot \\phi _i(q,r^{\\prime })$ .", "Then $D_{i,q}$ is clearly dense open.", "By the strong Prikry property there is $p_1\\le ^*p_{i,q}$ , $S_{i,q}$ and sets $A^s_{i,q}$ such that for every $t\\in mb(S_{i,q})$ , $p_{i,q}^{}{\\langle }t,\\vec{A}^t_{i,q}{\\rangle }\\in D_{i,q}$ .", "Define $g_{i,q}:mb(S_{i,q})\\rightarrow \\lbrace 0,1\\rbrace \\text{ by }g_{i,q}(t)=1\\leftrightarrow \\phi _i(q,p_{i,q}^{}{\\langle }t,\\vec{A}^t_{i,q}{\\rangle })\\text{ holds}$ Then we can shrink $S_{i,q}$ to $T_{i,q}$ such that $g_{i,q}$ is constant on $mb(T_{i,q})$ .", "Now for every $q\\in \\mathbb {Q}$ such that $g_{q,i}=1$ , and every $s\\in mb(T_{q,i})$ let $\\alpha _i(q,s),A_i(q,s)$ be the values decided by ${\\langle }q, p^{}{\\langle }s,\\vec{B}^s_{i,q}{\\rangle }{\\rangle }$ for $\\smash{\\underset{\\raisebox 
{1.2pt}[0cm][0cm]{\\sim }}{{c}}}_i,\\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{A}}}\\cap \\smash{\\underset{\\raisebox {1.2pt}[0cm][0cm]{\\sim }}{{c}}}_i$ respectively. Let $N_{i,q}=ht(T_{i,q})$. Then $\alpha_i(q,s)\in\lbrace\kappa_1(p),\ldots,\kappa_{l(p)}(p),s(1),\ldots,s(N_{i,q})\rbrace$, so we can extend $T_{i,q}$ if necessary so that ${\rm max}(s)\ge\alpha_i(q,s)$. In particular, $A_i(q,s)\subseteq{\rm max}(s)$. Define by recursion $A_i(q,s)$ for $s\in T_{q,i}\setminus mb(T_{q,i})$. Let $s\in{\rm Lev}_{N_{i,q}-1}(T_{q,i})$; by ineffability, we can shrink ${\rm Succ}_{T_{q,i}}(s)$ and find $A_i(q,s)$ such that for every $\alpha\in{\rm Succ}_{T_{q,i}}(s)$, $A_i(q,s^{\frown}\alpha)=A_i(q,s)\cap\alpha$. Generally, take $s\in T_{q,i}$ and assume that for every $\alpha$ in ${\rm Succ}_{T_{q,i}}(s)$, $A_i(q,s^{\frown}\alpha)$ is defined. We can find a single $A_i(q,s)$ and shrink ${\rm Succ}_{T_{q,i}}(s)$ such that $\forall\alpha\in{\rm Succ}_{T_{q,i}}(s).\ A_i(q,s^{\frown}\alpha)\cap\alpha=A_i(q,s)\cap\alpha$. We abuse notation by denoting the shrunken trees by $T_{q,i}$. Extend $p_{i,q}\le^* p^*_{i,q}$ and find $B^t_{i,q}\subseteq A^t_{i,q}$ such that the set of extensions from $D_{T_{i,q},\vec{B}_{i,q}}$ is pre-dense above $p^*_{i,q}$. Use the $\le^*$-closure of $\mathbb{M}[\vec{U}]\upharpoonright(\lambda,\kappa)$ to find a single $p^*$ such that for every $q\in\mathbb{Q}/p_0$ and $i<\zeta_0$, $p^*_{i,q}\le^* p^*$; in particular $p_1\le p^*$. As usual, shrink all the trees to $p^*$ and let $T_{i,q}$ be the resulting tree.

Claim 2 For every $i<\zeta_0$ and $q\in\mathbb{Q}/p_0$ there is $q^{\prime}\ge q$ such that $g_{i,q^{\prime}}\upharpoonright mb(T_{i,q})\equiv 1$, i.e. $\forall t\in mb(T_{i,q^{\prime}}).\ \exists\alpha,B.\ \langle q^{\prime},p^{*\frown}\langle t,\vec{B}^t_{i,q^{\prime}}\rangle\rangle\Vdash_{\mathbb{P}}\underaccent{\sim}{c}_i=\alpha\wedge\underaccent{\sim}{A}\cap\alpha=B$.

Proof. Let $p_0\le q_0$. Find some $\langle q_0,p^*\rangle\le\langle q,r\rangle$ and $\alpha$ such that $\langle q,r\rangle\Vdash_{\mathbb{P}}\underaccent{\sim}{c}_i=\alpha$. By the assumption on $\underaccent{\sim}{A}$, $\langle q,r\rangle\Vdash_{\mathbb{P}}\underaccent{\sim}{A}\text{ is fresh}$, which implies that there are some $B\in V[C^*]$ and some $\langle q^{\prime},r^{\prime}\rangle$ such that $\langle q,r\rangle\le\langle q^{\prime},r^{\prime}\rangle\Vdash B=\underaccent{\sim}{A}\cap\alpha$. Find some $t\in T_{i,q^{\prime}}$ such that $p^{*\frown}\langle t,\vec{B}^t_{i,q^{\prime}}\rangle$ and $r^{\prime}$ are compatible; then a common extension witnesses that $g_{i,q^{\prime}}(t)\ne 0$, hence $g_{i,q^{\prime}}(t)=1$ as wanted.$\blacksquare_{\text{Claim 2}}$

Move to $V[A]$ and let us compare the sets $A_i(q,s)$ with $A$. For every $i$ and $q$ such that $g_{q,i}=1$, define $\rho^i_q(k)$ for $k\le N_{q,i}$. Let $\rho^i_q(0)={\rm min}(A\Delta A_i(q,\langle\rangle))+1$, and recursively define $\rho^i_q(k+1)={\rm sup}\lbrace{\rm min}(A\Delta A_i(q,\langle\delta_1,...,\delta_k\rangle))+1\mid\langle\delta_1,...,\delta_k\rangle\in{\rm Lev}_k(T_{q,i})\cap\prod_{j=1}^k\rho^i_q(j)\rbrace$. Finally, we let $\rho^i(k)={\rm sup}\lbrace\rho^i_q(k)\mid q\in\mathbb{Q}\wedge g_{q,i}=1\rbrace$. By the claim, for each $i<\zeta_0$ there is $q_i\in H_{\mathbb{Q}}$ such that $g_{i,q_i}=1$, and since $D_{T_{q_i,i},\vec{B}_{q_i,i}}$ is pre-dense, there is some $\vec{c}_i\in mb(T_{q_i,i})$ such that $\langle q_i,p^{*\frown}\langle\vec{c}_i,\vec{B}^{\vec{c}_i}_{q_i,i}\rangle\rangle\in H_{\mathbb{Q}}\times G\upharpoonright(\lambda,\kappa)$. By the assumption on $T_{q_i,i}$, ${\rm max}(\vec{c}_i)\ge c_i:=(\underaccent{\sim}{c}_i)_{H_{\mathbb{Q}}\times G\upharpoonright(\lambda,\kappa)}$. Let us argue that for every $k\le N_i$, $\rho^i(k)>{\rm min}\lbrace c_i,\vec{c}_i(k)\rbrace$. By the construction of the tree $T_{q_i,i}$, $A\cap c_i=A_i(q_i,\vec{c}_i)\cap c_i$. For every $j\le N_i$, by definition, $A_i(q_i,\vec{c}_i\upharpoonright j)\cap\vec{c}_i(j)=A_i(q_i,\vec{c}_i\upharpoonright j+1)\cap\vec{c}_i(j)$. It follows that for every $j\le N_i$, $A_i(q_i,\vec{c}_i\upharpoonright j)\cap{\rm min}\lbrace c_i,\vec{c}_i(j)\rbrace=A\cap{\rm min}\lbrace c_i,\vec{c}_i(j)\rbrace$. In particular, $A\cap{\rm min}\lbrace c_i,\vec{c}_i(0)\rbrace=A_i(q_i,\langle\rangle)\cap{\rm min}\lbrace c_i,\vec{c}_i(0)\rbrace$. Since $A\cap\rho^i(0)\ne A_i(q_i,\langle\rangle)\cap\rho^i(0)$, it follows that ${\rm min}\lbrace c_i,\vec{c}_i(0)\rbrace<\rho^i(0)$. Inductively assume that ${\rm min}\lbrace c_i,\vec{c}_i(j)\rbrace<\rho^i(j)$ for every $j\le k$. If $c_i\le\vec{c}_i(k)$ then clearly we are done. Otherwise $\rho^i(j)>\vec{c}_i(j)$ for every $j\le k$, which implies that $\vec{c}_i\upharpoonright\lbrace 1,..,k\rbrace\in{\rm Lev}_k(T_{q_i,i})\cap\prod_{j=1}^k\rho^i(j)$, and since $A_i(q_i,\vec{c}_i\upharpoonright\lbrace 1,...,k\rbrace)\cap{\rm min}\lbrace c_i,\vec{c}_i(k+1)\rbrace=A\cap{\rm min}\lbrace c_i,\vec{c}_i(k+1)\rbrace$, then ${\rm min}\lbrace c_i,\vec{c}_i(k+1)\rbrace<{\rm min}(A_i(q_i,\vec{c}_i\upharpoonright\lbrace 1,..,k\rbrace)\Delta A)\le\rho^i(k+1)$. Since $\vec{c}_i(N_i)\ge c_i$, it follows that $\rho^i(N_i)>c_i$.

Next we argue that $\rho^i(k)<\kappa$, again by induction on $k$. For every $q\in\mathbb{Q}$ with $g_{q,i}=1$ we have $A\ne A_i(q,\langle\rangle)$, as $A_i(q,\langle\rangle)\in V[C^*]$ but $A\notin V[C^{\prime}]$, so $\rho^i_q(0)<\kappa$. Since $|\mathbb{Q}|<\kappa$ and $\kappa$ is regular in $V[C^*]$, it follows that $\rho^i(0)<\kappa$. Assume that this holds for every $j\le k$, and toward a contradiction assume that $\rho^i(k+1)=\kappa$. Again, since $|\mathbb{Q}|<\kappa$ and $\kappa$ is regular in $V[C^*]$, there must be $q\in\mathbb{Q}$ such that $g_{q,i}=1$ and $\rho^i_q(k+1)=\kappa$. Consider the collection $\lbrace A_i(q,\langle\alpha_1,...,\alpha_k\rangle)\mid\langle\alpha_1,...,\alpha_k\rangle\in{\rm Lev}_k(T_{q,i})\cap\prod_{j=1}^k\rho^i(j)\rbrace\in V[C^*]$. Then for every $\gamma<\kappa$ pick any distinct $\vec{\alpha}_1,\vec{\alpha}_2\in{\rm Lev}_k(T_{q,i})\cap\prod_{j=1}^k\rho^i(j)$ such that $A_i(q,\vec{\alpha}_1)\ne A_i(q,\vec{\alpha}_2)$, but $A_i(q,\vec{\alpha}_1)\cap\gamma=A_i(q,\vec{\alpha}_2)\cap\gamma$. To see that there are such $\vec{\alpha}_1,\vec{\alpha}_2$: by the assumption that $\rho^i_q(k+1)=\kappa$ there is $\vec{\alpha}_1$ such that $\eta_1:={\rm min}(A\Delta A_i(q,\vec{\alpha}_1))>\gamma$, hence $A_i(q,\vec{\alpha}_1)\cap\gamma=A\cap\gamma$. Let $\vec{\alpha}_2$ be such that ${\rm min}(A\Delta A_i(q,\vec{\alpha}_2))>\eta_1$. In particular, $A_i(q,\vec{\alpha}_1)\ne A_i(q,\vec{\alpha}_2)$, but $A_i(q,\vec{\alpha}_1)\cap\gamma=A\cap\gamma=A_i(q,\vec{\alpha}_2)\cap\gamma$. Since this is all defined in $V[C^*]$, where $\kappa$ is still measurable, and the number of pairs $\langle\vec{\alpha}_1,\vec{\alpha}_2\rangle$ is bounded by the induction hypothesis, we can find unboundedly many $\gamma$'s with the same $\vec{\alpha}_1,\vec{\alpha}_2$; but then $A_i(q,\vec{\alpha}_1)\cap\gamma=A_i(q,\vec{\alpha}_2)\cap\gamma$ for unboundedly many $\gamma$ yields $A_i(q,\vec{\alpha}_1)=A_i(q,\vec{\alpha}_2)$, which is clearly a contradiction. So we have found a sequence $\langle\rho^i(N_i)\mid i<\lambda\rangle\in V[A]$ such that $\rho^i(N_i)>c_i$. Hence $cf^{V[A]}(\kappa)\le\lambda$.$\blacksquare$

As a result of this section we obtain the following:

Corollary 4.23 Assume $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$. Let $A\in V[G]$, $A\subseteq\kappa$, be such that $A$ stabilizes. Then there is $C^{\prime}\subseteq C_G$ such that $V[A]=V[C^{\prime}]$.

The argument for a general set

Recall that the main theorem of this paper is:

Theorem 1.1 Let $\vec{U}$ be a coherent sequence with maximal measurable $\kappa$, such that $o^{\vec{U}}(\kappa)<\kappa^+$. Assume the inductive hypothesis: $(IH)$ for every $\delta<\kappa$, any coherent sequence $\vec{W}$ with maximal measurable $\delta$ and any set $A\in V[H]$ for a generic filter $H\subseteq\mathbb{M}[\vec{W}]$, there is $C\subseteq C_H$ such that $V[A]=V[C]$. Then for every $V$-generic filter $G\subseteq\mathbb{M}[\vec{U}]$ and any set $A\in V[G]$, there is $C\subseteq C_G$ such that $V[A]=V[C]$.

First note that it suffices to prove the theorem for sets of ordinals:

Lemma 5.1 Assume that for any set of ordinals $A\in V[G]$ there is $C\subseteq C_G$ such that $V[C]=V[A]$. Then for every set $A\in V[G]$, there is $C\subseteq C_G$ such that $V[A]=V[C]$.

Proof. Let $A$ be any set. By [11], there is a forcing notion $\mathbb{Q}\in V$ and a generic filter $H\subseteq\mathbb{Q}$ such that $V[A]=V[H]$. Let $\lambda=|\mathbb{Q}|$ and let $f:\mathbb{Q}\leftrightarrow\lambda$, $f\in V$, be a bijection. Let $f^{\prime\prime}H=X\subseteq\lambda$; then $V[H]=V[X]$. Since $X$ is a set of ordinals in $V[G]$, there is $C\subseteq C_G$ such that $V[X]=V[C]$. Clearly $V[A]=V[H]=V[X]=V[C]$.$\blacksquare$

Let $A$ be a set of ordinals; we prove theorem REF by induction on $\lambda:={\rm sup}(A)$. If $\lambda\le\kappa$ then apply REF, REF, REF. Assume that $\lambda>\kappa$, and let us first resolve the induction step for $cf^{V[G]}(\lambda)\le\kappa$:

Proposition 5.2 Assume $o^{\vec{U}}(\kappa)<\kappa^+$, $(IH)$, and $cf^{V[G]}(\lambda)\le\kappa$. Then there is $C\subseteq C_G$ such that $V[A]=V[C]$.

Proof. Since $\kappa$ is singular in $V[G]$, in fact $cf^{V[G]}(\lambda)<\kappa$. Since $\mathbb{M}[\vec{U}]$ satisfies the $\kappa^+$-c.c., we must have that $\nu:=cf^V(\lambda)\le\kappa$. Fix $\langle\gamma_i\mid i<\nu\rangle\in V$ cofinal in $\lambda$. Work in $V[A]$: for every $i<\nu$ find $d_i\subseteq\kappa$ such that $V[d_i]=V[A\cap\gamma_i]$. By induction, there exists $C^*\subseteq C_G$ such that $V[\langle d_i\mid i<\nu\rangle]=V[C^*]$; therefore $\forall i<\nu.\ A\cap\gamma_i\in V[C^*]$, and $C^*\in V[A]$. Work in $V[C^*]$: for $i<\nu$ fix a bijection $\pi_i:2^{\gamma_i}\leftrightarrow P^{V[C^*]}(\gamma_i)$ and find $\delta_i$ such that $\pi_i(\delta_i)=A\cap\gamma_i$. By the $\kappa^+$-c.c. of $\mathbb{M}[\vec{U}]$, there is a function $F:\nu\rightarrow P(\lambda)$ in $V$ such that for every $i<\nu$, $\delta_i\in F(i)$ and $|F(i)|\le\kappa$. Let $\epsilon_i<\kappa$ be the index of $\delta_i$ inside $F(i)$. Find $C^{\prime\prime}\subseteq C_G$ such that $V[C^{\prime\prime}]=V[\langle\epsilon_i\mid i<\nu\rangle]$; finally, we can find $C^{\prime}\subseteq C_G$ such that $V[C^{\prime}]=V[C^*,C^{\prime\prime}]$. To see that $V[A]=V[C^{\prime}]$: clearly $C^*\in V[A]$ and therefore $\langle\pi_i,\delta_i\mid i<\nu\rangle\in V[A]$. Since $F\in V$, $\langle\epsilon_i\mid i<\nu\rangle\in V[A]$, hence $C^{\prime\prime}\in V[A]$. It follows that $C^{\prime}\in V[A]$. For the other direction, $C^*,C^{\prime\prime}\in V[C^{\prime}]$, so $\langle\epsilon_i\mid i<\nu\rangle\in V[C^{\prime}]$, and since $F\in V$, $\langle\delta_i\mid i<\nu\rangle\in V[C^{\prime}]$. Since $C^*\in V[C^{\prime}]$, also $\langle\pi_i\mid i<\nu\rangle\in V[C^{\prime}]$, so $\langle\pi_i(\delta_i)\mid i<\nu\rangle\in V[C^{\prime}]$. It follows that $A=\cup_{i<\nu}\pi_i(\delta_i)\in V[C^{\prime}]$. $\blacksquare$

The Induction step for $cf(\lambda)>\kappa$

Assume that $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$. The idea for the induction step where ${\rm sup}(A)=\lambda$ with $cf(\lambda)>\kappa$ (the typical example is $\lambda=\kappa^+$) is the following: There is $C^*\subseteq C_G$ such that $C^*\in V[A]$ and for every $\alpha<\lambda$, $A\cap\alpha\in V[C^*]$. The quotient forcing $\mathbb{M}[\vec{U}]/C^*$ (which completes $V[C^*]$ to $V[G]$) is $\kappa^+$-c.c. (and therefore $cf(\lambda)$-c.c.) in $V[G]$. Then we will apply the following theorem:

Theorem 5.3 Let $W\models ZFC$ and let $\mathbb{P}\in W$ be a forcing notion. Let $T\subseteq\mathbb{P}$ be any $W$-generic filter and $\lambda$ a regular cardinal in $W[T]$. Assume $\mathbb{P}$ is $\lambda$-c.c. in $W[T]$. Then in $W[T]$ there are no fresh subsets with respect to $W$ of cardinals $\theta$ such that $cf(\theta)=\lambda$.

Remark 5.4 Note that it is crucial that $\mathbb{P}$ is $\lambda$-c.c. in the generic extension; otherwise there are trivial examples which contradict this. Namely, the forcing which adds a branch through a Suslin tree is c.c.c., but the branch added is a fresh subset of $\omega_1$.
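Before giving the proof, let us recall the notion of freshness as it is used in the argument below (this is only a restatement for the reader's convenience, matching the usage throughout):

\[
A\in W[T]\text{ is fresh with respect to }W\quad\iff\quad A\notin W\ \text{ and }\ \forall\beta<{\rm sup}(A).\ A\cap\beta\in W.
\]

In remark REF, the branch $b$ added through a Suslin tree is fresh in exactly this sense: every proper initial segment of $b$ is determined by a single node of the tree and hence lies in the ground model, while $b$ itself does not.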
Proof. Toward a contradiction, assume that $A\in W[T]\setminus W$ is a fresh subset of $\theta$, where $cf(\theta)=\lambda$. Let $\langle\theta_i\mid i<\lambda\rangle$ be increasing and cofinal in $\theta$, and let $\underaccent{\sim}{A}$ be a $\mathbb{P}$-name for $A$. For every $\alpha<\lambda$ define in $W$, $X_\alpha=\lbrace B\subseteq\theta_\alpha\mid ||\underaccent{\sim}{A}\cap\theta_\alpha=B||\ne 0\rbrace$, where the truth value is taken in $RO(\mathbb{P})$. Different $B$'s in $X_\alpha$ yield incompatible conditions of $\mathbb{P}$, and we have the $\lambda$-c.c. by assumption; thus (even in $W[T]$) $\forall\alpha<\lambda.\ |X_\alpha|<\lambda$. For every $B\in X_\alpha$ define $b(B)=||\underaccent{\sim}{A}\cap\theta_\alpha=B||$. Assume that $B^{\prime}\in X_\beta$ and $\alpha\le\beta$; then $B=B^{\prime}\cap\theta_\alpha\in X_\alpha$, and moreover $b(B^{\prime})\le_B b(B)$ (we switch to Boolean-algebra notation: $p\le_B q$ means $p$ extends $q$). Note that for such $B,B^{\prime}$, if $b(B^{\prime})<_B b(B)$, then there is $(\star)\ \ 0<p\le_B(b(B)\setminus b(B^{\prime}))\le_B b(B)$. Therefore $p\cap b(B^{\prime})\le_B(b(B)\setminus b(B^{\prime}))\cap b(B^{\prime})=0$, meaning $p\bot b(B^{\prime})$. Work in $W[T]$ and denote $A_\alpha=A\cap\theta_\alpha$. By freshness, $\forall\alpha<\lambda.\ A_\alpha\in W$, thus $A_\alpha\in X_\alpha$. Consider the $\le_B$-non-increasing sequence $\langle b(A_\alpha)\mid\alpha<\lambda\rangle$. If there exists some $\gamma^*<\lambda$ at which the sequence stabilizes, define $A^{\prime}=\bigcup\lbrace B\subseteq\theta\mid\exists\alpha.\ b(A_{\gamma^*})\Vdash\underaccent{\sim}{A}\cap\theta_\alpha=B\rbrace\in W$. We claim that $A^{\prime}=A$. Notice that if $B,B^{\prime},\alpha,\alpha^{\prime}$ are such that $b(A_{\gamma^*})\Vdash\underaccent{\sim}{A}\cap\theta_\alpha=B$ and $b(A_{\gamma^*})\Vdash\underaccent{\sim}{A}\cap\theta_{\alpha^{\prime}}=B^{\prime}$, with, without loss of generality, $\alpha\le\alpha^{\prime}$, then we must have $B^{\prime}\cap\theta_\alpha=B$; otherwise the nonzero condition $b(A_{\gamma^*})$ would force contradictory information. Since for every $\gamma^*\le\xi<\lambda$, $b(A_{\gamma^*})\Vdash\underaccent{\sim}{A}\cap\theta_\xi=A\cap\theta_\xi$, we get $A^{\prime}=A$. This is a contradiction to $A\notin W$. We conclude that the sequence $\langle b(A_\alpha)\mid\alpha<\lambda\rangle$ does not stabilize. By the regularity of $\lambda$, there exists a subsequence $\langle b(A_{i_\alpha})\mid\alpha<\lambda\rangle$ which is strictly decreasing. By $(\star)$, find $p_\alpha\le_B b(A_{i_\alpha})$ such that $p_\alpha\bot b(A_{i_{\alpha+1}})$. Since the $b(A_{i_\alpha})$ are decreasing, for any $\beta>\alpha$, $p_\alpha\bot b(A_{i_\beta})$, thus $p_\alpha\bot p_\beta$. This shows that $\langle p_\alpha\mid\alpha<\lambda\rangle\in W[T]$ is an antichain of size $\lambda$, which contradicts the assumption.$\blacksquare$

Corollary 5.5 Assume $(IH)$ and that $A\in V[G]$ stabilizes. Then there is $C\subseteq C_G$ such that $V[A]=V[C]$.

Proof. Let $\beta<\kappa$ be such that $\forall\alpha<{\rm sup}(A).\ A\cap\alpha\in V[G\upharpoonright\beta]$. If $A\in V[G\upharpoonright\beta]$, then we can apply $(IH)$ and we are done. Otherwise, $A\in V[G]$ is fresh with respect to the model $V[G\upharpoonright\beta]$. The forcing completing $V[G\upharpoonright\beta]$ to $V[G]$ is simply $\mathbb{M}[\vec{U}]\upharpoonright(\beta,\kappa)$, which clearly is $\kappa^+$-c.c. in $V[G]$ (since $\kappa^+$ is regular in $V[G]$); this is a contradiction to theorem REF.$\blacksquare$

Assume that $A$ does not stabilize. Since we assumed that $o^{\vec{U}}(\kappa)<\kappa^+$, and by the induction hypothesis on ${\rm sup}(A)=\lambda$, we can apply REF .2 to conclude that $cf^{V[A]}(\kappa)<\kappa$. Let $\langle\lambda_i\mid i<cf(\lambda)\rangle$ be cofinal in $\lambda$; then for each $\alpha<cf(\lambda)$ we choose some $D_\alpha\subseteq C_G$ such that $V[A\cap\lambda_\alpha]=V[D_\alpha]$. In previous results ([4],[5]), $o^{\vec{U}}(\kappa)<\kappa$ and $|C_G|<\kappa$, therefore $2^{|C_G|}<\kappa<cf(\lambda)$, and it followed that there is some $D_\alpha$ which repeats cofinally many times. Here, since $2^{|C_G|}\ge\kappa^+$, we will need, as before, to somehow accumulate all the information in a $\subseteq^*$-increasing way.

Proposition 5.6 Assume $o^{\vec{U}}(\kappa)<\kappa^+$, $(IH)$, and that $A\in V[G]$ does not stabilize. Let $\langle\lambda_i\mid i<cf(\lambda)\rangle$ be cofinal in $\lambda$ and let $\kappa^*<\kappa$ be such that for every $\alpha\in C_G\setminus\kappa^*$, $o^{\vec{U}}(\alpha)<\alpha^+$. Then there is a sequence $\langle D_\alpha\mid\alpha<cf(\lambda)\rangle\in V[A]$ such that: (1) $D_\alpha$ is a Mathias set and $D_\alpha\cap\kappa^*=F_{\kappa^*}$, where $V[F_{\kappa^*}]=V[A]\cap V[C_G\cap\kappa^*]$; (2) $\langle D_\alpha\mid\alpha<cf(\lambda)\rangle$ is $\subseteq^*$-increasing; (3) $A\cap\lambda_\alpha\in V[D_\alpha]$.

Proof. Work in $V[A]$. For every $\alpha<cf(\lambda)$, by the induction hypothesis there is a Mathias set $D^{\prime}_\alpha\subseteq^* C_G$ such that $A\cap\lambda_\alpha\in V[D^{\prime}_\alpha]$ and $D^{\prime}_\alpha\cap\kappa^*=F_{\kappa^*}$. Then $(1),(3)$ hold, but $(2)$ might fail. Let us construct the sequence $\langle D_\alpha\mid\alpha<cf(\lambda)\rangle$ more carefully to ensure condition $(2)$. We go by induction on $\beta<cf(\lambda)$. Assume the sequence $\langle D_\alpha\mid\alpha<\beta\rangle$ is defined. If $\beta=\alpha+1$, then use lemma REF with $D_\alpha$ and $D^{\prime}_\beta$ to find $D_\beta$ such that $D_\alpha\subseteq D_\beta$, $D^{\prime}_\beta\in V[D_\beta]$ and $D_\beta\cap\kappa^*=F_{\kappa^*}$. If $\beta$ is limit, let $\delta=cf^{V[A]}(\beta)$ and let $\langle\beta_i\mid i<\delta\rangle\in V[A]$ be cofinal. If $\delta>\kappa$, then by REF the sequence $\langle D_{\beta_\alpha}\mid\alpha<\delta\rangle$ $=^*$-stabilizes on some Mathias set $E^*_\beta$; in particular $E^*_\beta\cap\kappa^*=F_{\kappa^*}$, and since the sequence $\langle D_\alpha\mid\alpha<\beta\rangle$ is $\subseteq^*$-increasing, it also stabilizes on $E^*_\beta$. Then $E^*_\beta$ is a $\subseteq^*$-bound. If $\delta\le\kappa$, then since $\kappa$ is singular in $V[A]$, in fact $\delta<\kappa$. Apply lemma REF to the sequence $\langle D_{\beta_\alpha}\mid\alpha<\delta\rangle$ to find a single Mathias set $E^*_\beta\in V[A]$ which is a $\subseteq^*$-bound and satisfies $E^*_\beta\cap\kappa^*=F_{\kappa^*}$. In any case, apply lemma REF to $E^*_\beta,D^{\prime}_\beta$ and find a Mathias $D_\beta$ such that $E^*_\beta\subseteq D_\beta$, $D^{\prime}_\beta\in V[D_\beta]$ and $D_\beta\cap\kappa^*=F_{\kappa^*}$. Clearly the sequence $\langle D_\alpha\mid\alpha<cf(\lambda)\rangle$ is as wanted.$\blacksquare$

Corollary 5.7 There is $C^*\subseteq C_G$ such that $C^*\in V[A]$ and for every $\alpha<\lambda$, $A\cap\alpha\in V[C^*]$.

Proof. Consider the sequence $\langle D_\alpha\mid\alpha<cf(\lambda)\rangle\in V[A]$ from proposition REF, and use theorem REF to find $\alpha^*<cf(\lambda)$ such that for every $\alpha^*\le\beta<cf(\lambda)$, $D_\beta=^*D_{\alpha^*}$. In particular, $V[D_\beta]=V[D_{\alpha^*}]$. Then $C^*=D_{\alpha^*}\cap C_G$ is as wanted. $\blacksquare$

Let us turn to the proof that the quotient forcing is $\kappa^+$-c.c. in $V[G]$ (and therefore $cf(\lambda)$-c.c.). In [4] and [5], in order to prove the $\kappa^+$-c.c. of the quotient forcing, a concrete description of the quotient was given. Here we will give an abstract argument which avoids this description.

Example 5.8 It is tempting to try to discard the name $\underaccent{\sim}{C}^*$ and define $\mathbb{M}[\vec{U}]/C^*$ to consist of all $p$ such that there is a $V$-generic $H\subseteq\mathbb{M}[\vec{U}]$ with $p\in H$ and $C^*\subseteq C_H$. Formally, we suggest that $\mathbb{M}[\vec{U}]/C^*$ is $\mathbb{M}[\vec{U}]^{\prime}=\lbrace p\in\mathbb{M}[\vec{U}]\mid C^*\subseteq\kappa(p)\cup B(p)\rbrace$. Such a forcing is not $\kappa^+$-c.c., even above $V[C^*]$. Assume that $o^{\vec{U}}(\kappa)=\kappa$; then $cf^{V[G]}(\kappa)=\omega$. Take for example any $C^*=\lbrace c_n\mid n<\omega\rbrace\subseteq C_G$ unbounded in $\kappa$ such that for every $n$, $o^{\vec{U}}(c_n)=0$; basically, it is a Prikry sequence for the measure $U(\kappa,0)$. Now $V[C^*]\models\kappa^\omega=\kappa^+$, so let $\langle f_i\mid i<\kappa^+\rangle\in V[C^*]$ be an enumeration of all functions from $\omega$ to $\kappa$. We can factor the forcing so as to first pick $i<\kappa^+$; then the rest of the forcing ensures that $C_G(f_i(n)+1)=c_n$, which means that $f_i$ determines the places of the $c_n$'s in the sequence $C_G$. Since no choices of $i\ne j$ can be compatible, the first part is not $\kappa^+$-c.c., and therefore neither is the product.

Example 5.9 Let us consider another possible simplification of $\mathbb{M}[\vec{U}]/C^*$. First we enumerate $C^*=\lbrace c^*_\alpha\mid\alpha<\kappa\rbrace$ and find $\mathbb{M}[\vec{U}]$-names $\lbrace\underaccent{\sim}{c}^{\prime}_\alpha\mid\alpha<\kappa\rbrace$ for it: $\mathbb{M}[\vec{U}]^*=\lbrace q\in\mathbb{M}[\vec{U}]\mid\text{for every finite }a\subseteq\kappa\text{ there is }q_a\ge q,\ q_a\Vdash\underaccent{\sim}{c}^{\prime}_\alpha=\check{c}^*_\alpha\text{ for every }\alpha\in a\rbrace$. Let us prove that, for a suitable choice of names, $\mathbb{M}[\vec{U}]^*$ is not $\kappa^+$-c.c. For every $\alpha<\kappa$, let $X_\alpha=\lbrace\nu<\kappa\mid o^{\vec{U}}(\nu)=\alpha\rbrace$. Pick some distinct $\rho^0,\rho^1\in X_0$. The interplay will be between the two conditions $p^0=\langle\rho^0,\langle\kappa,\kappa\setminus\rho^0+1\rangle\rangle$ and $p^1=\langle\rho^1,\langle\kappa,\kappa\setminus\rho^1+1\rangle\rangle$. Above $p^0$ we do something simple: for example, let $\underaccent{\sim}{c}^{\prime}_\alpha$ be a name for the first element of $X_\alpha$ in the generic sequence $C_G$. Now above $p^1$ let us do something more sophisticated. We will build a $\kappa$-tree, each of whose branches corresponds to a direct extension of $p^1$ in $\mathbb{M}[\vec{U}]/C^{\prime}$, where $C^{\prime}:=\underaccent{\sim}{C}^{\prime}_H$ and $H\subseteq\mathbb{M}[\vec{U}]$ is a $V$-generic filter with $p^0\in H$. These extensions will be incompatible in $\mathbb{M}[\vec{U}]/C^{\prime}$. Start with a description of the first level: Fix $Y_1\in U(\kappa,1)$ such that $Y_1\subseteq X_1$ and $Z_1=X_1\setminus Y_1$ has cardinality $\kappa$, and split $Z_1$ into two disjoint non-empty sets $Z_{1,0},Z_{1,1}$. Now let $p^1$ extended by an element of $Y_1$ produce a value of $\underaccent{\sim}{c}^{\prime}_1$ different from those which $p^0$ defines; for example, let it be the first element of $X_2$ in $C_G$. For $i=0,1$, let $p^1$ extended by an element of $Z_{1,i}$ produce the same value of $\underaccent{\sim}{c}^{\prime}_1$ as $p^0$ does. The idea behind this is to ensure that $p^1{}^\frown(Z_{1,0}\cup Y_1)$ and $p^1{}^\frown(Z_{1,1}\cup Y_1)$ will be in $\mathbb{M}[\vec{U}]/C^{\prime}$, but only because of $Z_{1,i}$. Note that $p^1{}^\frown(Z_{1,0}\cup Y_1)$ and $p^1{}^\frown(Z_{1,1}\cup Y_1)$ are incompatible in $\mathbb{M}[\vec{U}]/C^{\prime}$, since $Z_{1,0}$ and $Z_{1,1}$ are disjoint. Continue in a similar fashion to define the rest of the levels: at the $\alpha$-th level we take $Y_\alpha\subseteq X_\alpha$ such that $Z_\alpha:=X_\alpha\setminus Y_\alpha$ has size $\kappa$, and we split $Z_\alpha$ into two disjoint non-empty sets $Z_{\alpha,0},Z_{\alpha,1}$. The definition of $\underaccent{\sim}{c}^{\prime}_\alpha$ is such that $p^1$ extended by elements of $Y_\alpha$ forces $\underaccent{\sim}{c}^{\prime}_\alpha$ to be the first member of $X_{\alpha+1}$ in $C_G$, while $p^1$ extended by elements of $Z_\alpha$ forces the same value as $p^0$ did. Note that the construction is completely inside $V$. Finally, there are $\kappa^+$ branches of length $\kappa$ in $T$. Let $p^h$ denote the extension of $p^1$ which corresponds to a $\kappa$-branch $h$, i.e. $p^h=\langle\rho^1,\langle\kappa,\bigcup_{\alpha<\kappa}Y_\alpha\uplus Z_{\alpha,h(\alpha)}\rangle\rangle$. Let $h_1,h_2$ be two different branches and let $\alpha<\kappa$ be the least such that $h_1(\alpha)\ne h_2(\alpha)$. Then $p^{h_1}$ and $p^{h_2}$ are incompatible in $\mathbb{M}[\vec{U}]/C^{\prime}$; this follows from the choice of $\underaccent{\sim}{c}^{\prime}_\alpha$ and the definitions of the conditions at the level $\alpha$. Note that every $p^h$ is in $\mathbb{M}[\vec{U}]^*$, since for every finite $a\subseteq\kappa$ we can extend $p^h$ to some $q_a$ using the elements from $Z_{\alpha,h(\alpha)}$.

Proposition 5.10 For every $q\in\mathbb{M}[\vec{U}]$, $q\in\mathbb{M}[\vec{U}]/C^*$ iff there is a $V$-generic $G^{\prime}\subseteq\mathbb{M}[\vec{U}]$ such that $q\in G^{\prime}$ and $\underaccent{\sim}{C^*}_{G^{\prime}}=C^*$.

Proof. Let $q\in\mathbb{M}[\vec{U}]/C^*=\mathbb{M}[\vec{U}]/H_{C^*}$, and let $G^{\prime}\subseteq\mathbb{M}[\vec{U}]/C^*$ be any $V[C^*]$-generic filter with $q\in G^{\prime}$. Then $G^{\prime}\subseteq\mathbb{M}[\vec{U}]$ is a $V$-generic filter and $\pi_*(G^{\prime})=\pi_*(G)=H_{C^*}$. To see that $\underaccent{\sim}{C^*}_{G^{\prime}}=C^*$, denote $C^{\prime}:=\underaccent{\sim}{C^*}_{G^{\prime}}$ and, toward a contradiction, assume that $s\in C^*\setminus C^{\prime}$. Then there is $q\le q^{\prime}\in G^{\prime}$ such that $q^{\prime}\Vdash s\notin\underaccent{\sim}{C^*}$, hence $\pi(q^{\prime})\le||s\notin\underaccent{\sim}{C^*}||$. It follows that $\pi(q^{\prime})\bot||s\in\underaccent{\sim}{C^*}||\in H_{C^*}$, therefore $\pi(q^{\prime})\in\pi_*(G^{\prime})\setminus H_{C^*}$, a contradiction. Also, if $s\in C^{\prime}\setminus C^*$, then there is $q\le q^{\prime}\in G^{\prime}$ such that $q^{\prime}\Vdash s\in\underaccent{\sim}{C^*}$; then $\pi(q^{\prime})\le||s\in\underaccent{\sim}{C^*}||$, so $\pi(q^{\prime})\bot||s\notin\underaccent{\sim}{C^*}||\in H_{C^*}$. In any case $\pi(q^{\prime})\in\pi_*(G^{\prime})\setminus H_{C^*}$, which is again a contradiction. For the other direction, if $q\in G^{\prime}$ for some $V$-generic $G^{\prime}\subseteq\mathbb{M}[\vec{U}]$ such that $\underaccent{\sim}{C^*}_{G^{\prime}}=C^*$, then $X\cap\pi_*(G^{\prime})=X\cap\pi_*(G)$, where $X=\lbrace||\alpha\in\underaccent{\sim}{D}||\mid\alpha<\kappa\rbrace$ is the generating set of $\mathbb{P}_{\underaccent{\sim}{C^*}}$. Since $\pi$ is a projection, $\pi_*(G^{\prime})$ is a $V$-generic filter for $\mathbb{P}_{\underaccent{\sim}{C^*}}$, and therefore it is uniquely determined by its intersection with the set of generators $X$. It follows that $\pi_*(G^{\prime})=\pi_*(G)=H_{C^*}$. Finally, for every $a\in G^{\prime}$, $\pi(a)\in\pi_*(G)$, thus $a\in\pi^{-1^{\prime\prime}}H_{C^*}:=\mathbb{M}[\vec{U}]/H_{C^*}$.$\blacksquare$

Remark 5.11 Example REF produces a much larger forcing than $\mathbb{M}[\vec{U}]/C^*$: we can obviously find $q\in\mathbb{M}[\vec{U}]^{\prime}$ such that $q\Vdash c^*_\alpha\ne\underaccent{\sim}{c}^{\prime}_\alpha$ for some $\alpha$. In example REF, the conditions $p^h$ constructed are not in $\mathbb{M}[\vec{U}]/C^*$. Otherwise, by the proposition, there is a generic $H$ such that $\lbrace(\underaccent{\sim}{c}^{\prime}_\alpha)_H\mid\alpha<\kappa\rbrace=C^*$ with $p^h\in H$. Since $Y^*:=\bigcup_{\alpha<\kappa}Y_\alpha\in\cap\vec{U}(\kappa)$, by proposition REF .3 there is $\xi<\kappa$ such that $C_H\setminus\xi\subseteq Y^*$. It follows that the interpretation $(\underaccent{\sim}{c}^{\prime}_\alpha)_H$ must be different from the one determined by $p^0$, a contradiction.

We will prove that the quotient forcing is $\kappa^+$-c.c. for more general Prikry-type forcings which use $P$-point ultrafilters.

Definition 5.12 Let $F$ be a uniform $\kappa$-complete filter over a regular uncountable cardinal $\kappa$. $F$ is called a $P$-point filter iff there is $\pi:\kappa\rightarrow\kappa$ such that: (1) $\pi$ is almost one-to-one, i.e. there is $X\in F$ such that for every $\alpha<\kappa$, $|\pi^{-1^{\prime\prime}}\alpha\cap X|<\kappa$; (2) for every $\lbrace A_i\mid i<\kappa\rbrace\subseteq F$, $\Delta^*_{i<\kappa}A_i=\lbrace\nu<\kappa\mid\forall i<\pi(\nu)\ (\nu\in A_i)\rbrace\in F$.

Clearly, every normal filter $F$ is a $P$-point, but there are many non-normal $P$-points as well; for example, take a normal filter $U$ and move it to a non-normal one using a permutation of $\kappa$. Also, if $F$ is an ultrafilter, then $\pi$ is just a function representing $\kappa$ in the ultrapower by $F$.
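To illustrate the definition, here is the computation behind the first remark (a sketch only, with $\pi$ taken to be the identity):

\[
\pi={\rm id}\ \Longrightarrow\ \Delta^*_{i<\kappa}A_i=\lbrace\nu<\kappa\mid\forall i<\nu\ (\nu\in A_i)\rbrace=\Delta_{i<\kappa}A_i,
\]

which is the usual diagonal intersection, and it belongs to $F$ whenever $F$ is normal. Clause (1) is immediate as well, since $\pi^{-1^{\prime\prime}}\alpha=\alpha$ has cardinality less than $\kappa$.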
Before proving the main result, we need a generalization of Galvin's theorem (see [2], or [8]):

Proposition 5.13 Suppose that $2^{<\kappa}=\kappa$ and let $F$ be a $P$-point filter over $\kappa$. Let $\langle X_i\mid i<\kappa^+\rangle$ be a sequence of sets such that for every $i<\kappa^+$, $X_i\in F$, and let $\langle Z_i\mid i<\kappa^+\rangle$ be any sequence of subsets of $\kappa$. Then there is $Y\subseteq\kappa^+$ of cardinality $\kappa$ such that: (1) $\bigcap_{i\in Y}X_i\in F$; (2) there is $\alpha\in Y$ such that $[Z_\alpha]^{<\omega}\subseteq\bigcup_{i\in Y\setminus\lbrace\alpha\rbrace}[Z_i]^{<\omega}$.

Proof. For every $\vec{\nu}\in[\kappa]^{<\omega}$, $\alpha<\kappa^+$ and $\xi<\kappa$, let $H_{\alpha,\xi,\vec{\nu}}=\lbrace i<\kappa^+\mid X_i\cap\xi=X_\alpha\cap\xi\wedge\vec{\nu}\in[Z_i]^{<\omega}\rbrace$.

Claim 3 There is $\alpha^*<\kappa^+$ such that for every $\xi<\kappa$ and $\vec{\nu}\in[Z_{\alpha^*}]^{<\omega}$, $|H_{\alpha^*,\xi,\vec{\nu}}|=\kappa^+$.

Proof of claim. Otherwise, for every $\alpha<\kappa^+$ there are $\xi_\alpha<\kappa$ and $\vec{\nu}_\alpha\in[Z_\alpha]^{<\omega}$ such that $|H_{\alpha,\xi_\alpha,\vec{\nu}_\alpha}|\le\kappa$. There are $X\subseteq\kappa^+$, $\vec{\nu}^*\in[\kappa]^{<\omega}$ and $\xi^*<\kappa$ such that $|X|=\kappa^+$ and for every $\alpha\in X$, $\vec{\nu}_\alpha=\vec{\nu}^*\wedge\xi_\alpha=\xi^*$. Since $2^{<\kappa}=\kappa$ and $\xi^*<\kappa$, there are at most $\kappa$ many possibilities for $X_\alpha\cap\xi^*$. Hence we can shrink $X$ to $X^{\prime}\subseteq X$ such that $|X^{\prime}|=\kappa^+$ and find a single set $E^*\subseteq\xi^*$ such that for every $\alpha\in X^{\prime}$, $X_\alpha\cap\xi^*=E^*$. It follows that for every $\alpha\in X^{\prime}$: $H_{\alpha,\xi_\alpha,\vec{\nu}_\alpha}=H_{\alpha,\xi^*,\vec{\nu}^*}=\lbrace i<\kappa^+\mid X_i\cap\xi^*=E^*\wedge\vec{\nu}^*\in[Z_i]^{<\omega}\rbrace$. Hence the set $H_{\alpha,\xi_\alpha,\vec{\nu}_\alpha}$ does not depend on $\alpha$, which means it is the same for every $\alpha\in X^{\prime}$; denote this set by $H^*$. To see the contradiction, note that for every $\alpha\in X^{\prime}$, $\alpha\in H_{\alpha,\xi_\alpha,\vec{\nu}_\alpha}=H^*$, thus $X^{\prime}\subseteq H^*$, hence $\kappa^+=|X^{\prime}|\le|H^*|\le\kappa$, a contradiction.$\blacksquare_{claim}$

End of the proof of proposition REF: Let $\alpha^*$ be as in the claim; let us define a $Y\subseteq\kappa^+$ that witnesses the proposition. First, enumerate $[Z_{\alpha^*}]^{<\omega}$ as $\langle\vec{\nu}_i\mid i<\kappa\rangle$. Let $\pi:\kappa\rightarrow\kappa$ be the function of definition REF guaranteed by $F$ being a $P$-point. There is a set $X\in F$ such that for every $\alpha<\kappa$, $X\cap\pi^{-1}{}^{\prime\prime}\alpha$ is bounded in $\kappa$, so for every $\alpha<\kappa$ we find $\rho_\alpha>{\rm sup}(\pi^{-1}{}^{\prime\prime}[\alpha+1]\cap X)$. By recursion, define $\beta_i$ for $i<\kappa$: at each step we pick $\beta_i\in H_{\alpha^*,\rho_i+1,\vec{\nu}_i}\setminus\lbrace\beta_j\mid j<i\rbrace$. It is possible to find such $\beta_i$, since the cardinality of $H_{\alpha^*,\rho_i+1,\vec{\nu}_i}$ is $\kappa^+$ while $\lbrace\beta_j\mid j<i\rbrace$ is of size less than $\kappa$. Let us prove that $Y=\lbrace\beta_i\mid i<\kappa\rbrace\cup\lbrace\alpha^*\rbrace$ is as wanted. Indeed, by definition it is clear that $|Y|=\kappa$. Also, if $\vec{\nu}\in[Z_{\alpha^*}]^{<\omega}$, then $\vec{\nu}=\vec{\nu}_i$ for some $i<\kappa$. By definition, $\beta_i\in H_{\alpha^*,\rho_i+1,\vec{\nu}_i}$, hence $\vec{\nu}\in[Z_{\beta_i}]^{<\omega}$, so $[Z_{\alpha^*}]^{<\omega}\subseteq\bigcup_{x\in Y\setminus\lbrace\alpha^*\rbrace}[Z_x]^{<\omega}$. Finally, we need to prove that $\bigcap_{\gamma\in Y}X_\gamma=X_{\alpha^*}\cap\bigcap_{\alpha<\kappa}X_{\beta_\alpha}\in F$. By the $P$-point assumption about $F$, $X^*:=X\cap X_{\alpha^*}\cap\Delta^*_{i<\kappa}X_{\beta_i}\in F$. Thus it suffices to prove that $X^*\subseteq\bigcap_{\alpha<\kappa}X_{\beta_\alpha}$. Let $\zeta\in X^*$; then for every $\alpha<\pi(\zeta)$, $\zeta\in X_{\beta_\alpha}$. For $\alpha\ge\pi(\zeta)$, $\zeta\in\pi^{-1^{\prime\prime}}(\alpha+1)\cap X$, and by the definition of $\rho_\alpha$, $\zeta<\rho_\alpha$. Recall that $\beta_\alpha\in H_{\alpha^*,\rho_\alpha+1,\vec{\nu}_\alpha}$, so $X_{\alpha^*}\cap(\rho_\alpha+1)=X_{\beta_\alpha}\cap(\rho_\alpha+1)$, and since $\zeta\in X_{\alpha^*}\cap(\rho_\alpha+1)$, $\zeta\in X_{\beta_\alpha}$. We conclude that $\zeta\in X_{\alpha^*}\cap\bigcap_{\alpha<\kappa}X_{\beta_\alpha}$, therefore $X^*\subseteq X_{\alpha^*}\cap\bigcap_{\alpha<\kappa}X_{\beta_\alpha}$. $\blacksquare$
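Taking the sequence $\langle Z_i\mid i<\kappa^+\rangle$ to be trivial (say $Z_i=\emptyset$), proposition REF specializes to the Galvin-style statement that is actually used below; this is only a restatement of clause (1), not an additional claim:

\[
2^{<\kappa}=\kappa\ \text{ and }\ \lbrace X_i\mid i<\kappa^+\rbrace\subseteq F\ \text{ a }P\text{-point filter}\ \Longrightarrow\ \exists Y\in[\kappa^+]^{\kappa}.\ \bigcap_{i\in Y}X_i\in F.
\]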
Theorem 5.14 Let $\pi:\mathbb{M}[\vec{U}]\rightarrow\mathbb{P}$ be a projection, let $G\subseteq\mathbb{M}[\vec{U}]$ be $V$-generic and let $H=\pi_*(G)$ be the induced generic for $\mathbb{P}$. Then $V[G]\models\mathbb{M}[\vec{U}]/H$ is $\kappa^+$-c.c.

Proof. Assume otherwise, and let $\langle p_i\mid i<\kappa^+\rangle\in V[G]$ be an antichain in $\mathbb{M}[\vec{U}]/H$. Let $\langle\underaccent{\sim}{p}_i\mid i<\kappa^+\rangle$ be a sequence of $\mathbb{M}[\vec{U}]$-names for them and let $r\in G$ be such that $r\Vdash\langle\underaccent{\sim}{p}_i\mid i<\kappa^+\rangle\text{ is an antichain in }\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$. Work in $V$: for every $i<\kappa^+$, let $r\le r_i\in\mathbb{M}[\vec{U}]$ and $\xi_i\in\mathbb{M}[\vec{U}]$ be such that $r_i\Vdash\underaccent{\sim}{p}_i=\xi_i$.

Claim 4 $\forall i<\kappa^+\ \exists q\ge\xi_i\ \forall q^{\prime}\ge q\ \exists r^{\prime\prime}\ge r_i.\ r^{\prime\prime}\Vdash q^{\prime}\in\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$

Proof of claim. Otherwise, there is $i$ such that for every $q\ge\xi_i$ there is $q^{\prime}\ge q$ such that for every $r^{\prime\prime}\ge r_i$, $r^{\prime\prime}\lnot\Vdash q^{\prime}\in\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$. In particular, the set $E=\lbrace q\ge\xi_i\mid\forall r^{\prime\prime}\ge r_i.\ r^{\prime\prime}\lnot\Vdash q\in\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}\rbrace$ is dense above $\xi_i$. To obtain a contradiction, let $G^{\prime}$ be any generic for $\mathbb{M}[\vec{U}]$ such that $r_i\in G^{\prime}$. Since $r_i\ge r$, $r\in G^{\prime}$ and therefore $\xi_i=(\underaccent{\sim}{p}_i)_{G^{\prime}}\in\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}_{G^{\prime}}$. Denote $H^{\prime}=\underaccent{\sim}{H}_{G^{\prime}}$. Then by proposition REF there is a $V$-generic filter $G^{\prime\prime}$ for $\mathbb{M}[\vec{U}]$ such that $\xi_i\in G^{\prime\prime}$ and $\underaccent{\sim}{H}_{G^{\prime\prime}}=H^{\prime}$. By the density of $E$, there is $\xi_i\le q\in E\cap G^{\prime\prime}$, and in particular $q\in\mathbb{M}[\vec{U}]/H^{\prime}$. Thus there is $r_i\le r^{\prime\prime}\in G^{\prime}$ such that $r^{\prime\prime}\Vdash q\in\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$, contradicting $q\in E$.$\blacksquare_{claim}$

For every $i<\kappa^+$ pick $q_i\ge\xi_i$ such that $(*)_i\ \ \ \ \ \forall q^{\prime}\ge q_i.\ \exists r^{\prime\prime}\ge r_i.\ r^{\prime\prime}\Vdash q^{\prime}\in\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$. Denote $q_i=\langle t_{i,1},...,t_{i,n_i},\langle\kappa,A(q_i)\rangle\rangle$ and $r_i=\langle s_{i,1},...,s_{i,m_i},\langle\kappa,A(r_i)\rangle\rangle$. Stabilize the sequences $\langle t_{i,1},...,t_{i,n_i}\rangle$ and $\langle s_{i,1},...,s_{i,m_i}\rangle$, i.e. find $X\subseteq\kappa^+$ with $|X|=\kappa^+$ and $\vec{t}=\langle t_1,..,t_n\rangle$, $\vec{s}=\langle s_1,...,s_m\rangle$ such that for every $i\in X$, $\langle t_{i,1},...,t_{i,n_i}\rangle=\langle t_1,..,t_n\rangle$ and $\langle s_{i,1},...,s_{i,m_i}\rangle=\langle s_1,...,s_m\rangle$. This means that for every $i\in X$, $q_i=\vec{t}^{\,\frown}\langle\kappa,A(q_i)\rangle$ and $r_i=\vec{s}^{\,\frown}\langle\kappa,A(r_i)\rangle$. Let $A^*(r_i)=\lbrace\nu\in A(r_i)\mid\nu\cap A(r_i)\in\cap\vec{U}(\nu)\rbrace$; by REF, $A^*(r_i)\in\cap\vec{U}(\kappa)$, and it follows that for every $\vec{\nu}\in[A^*(r_i)]^{<\omega}$, $r_i^{\,\frown}\vec{\nu}\in\mathbb{M}[\vec{U}]$. By lemma REF, there is $Y\subseteq X$ of cardinality $\kappa$ such that $\bigcap_{i\in Y}A(q_i)\in\cap\vec{U}(\kappa)$, and there is $\alpha^*\in Y$ such that $[A^*(r_{\alpha^*})]^{<\omega}\subseteq\bigcup_{i\in Y\setminus\lbrace\alpha^*\rbrace}[A^*(r_i)]^{<\omega}$. Consider the set $A=\bigcap_{i\in Y}A(q_i)$. For every $i\in Y$, $q_i\le\vec{t}^{\,\frown}\langle\kappa,A\rangle=:q^*$. Then by $(*)_{\alpha^*}$ there is $r^{\prime\prime}\ge r_{\alpha^*}$ such that $r^{\prime\prime}\Vdash q^*\in\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$. Hence there are $\vec{s}\le s^{\prime\prime}\in\mathbb{M}[\vec{U}]\upharpoonright{\rm max}(\kappa(\vec{s}))$, $k<\omega$, $\vec{\nu}\in[A(r_{\alpha^*})]^k$ and $B_1,...,B_k$ such that $r^{\prime\prime}=\langle s^{\prime\prime},\langle\nu_1,B_1\rangle,...,\langle\nu_k,B_k\rangle,\langle\kappa,A(r^{\prime\prime})\rangle\rangle$. Since $r^{\prime\prime}\in\mathbb{M}[\vec{U}]$, in fact $\vec{\nu}\in[A^*(r_{\alpha^*})]^k$, and by the property of $\alpha^*$, $\vec{\nu}\in\bigcup_{j\in Y\setminus\lbrace\alpha^*\rbrace}[A^*(r_j)]^{<\omega}$, so there is $j\in Y\setminus\lbrace\alpha^*\rbrace$ such that $\vec{\nu}\in[A^*(r_j)]^k$. Since $r_{\alpha^*}$ and $r_j$ have the same lower part and $\vec{\nu}\in[A^*(r_j)]^{<\omega}$, it follows that $r^{\prime\prime}$ and $r_j$ are compatible, as witnessed by the condition $r^*=\langle s^{\prime\prime},\langle\nu_1,B_1\cap A(r_j)\rangle,...,\langle\nu_k,B_k\cap A(r_j)\rangle,\langle\kappa,A(r_j)\cap A(r^{\prime\prime})\rangle\rangle$. To see the contradiction, note that $r^*\ge r_{\alpha^*},r_j$ and $r$, thus $r^*\Vdash\underaccent{\sim}{p}_{\alpha^*}=\xi_{\alpha^*},\ \underaccent{\sim}{p}_j=\xi_j\text{ are incompatible in }\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$, but also $r^*\ge r^{\prime\prime}$, therefore $r^*\Vdash q^*\in\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$. Since $q^*\ge q_{\alpha^*}\ge\xi_{\alpha^*}$ and $q^*\ge q_j\ge\xi_j$, it follows that $r^*\Vdash\underaccent{\sim}{p}_{\alpha^*},\underaccent{\sim}{p}_j$ are compatible in $\mathbb{M}[\vec{U}]/\underaccent{\sim}{H}$, a contradiction.$\blacksquare$

This suffices to finish the induction step for $cf(\lambda)>\kappa$ and, in turn, REF.

Corollary 5.15 Assume that $o^{\vec{U}}(\kappa)<\kappa^+$ and $(IH)$, and let $A\in V[G]$ be a set of ordinals such that $cf({\rm sup}(A))>\kappa$. Let $C^*$ be as in REF. Then $A\in V[C^*]$ and $V[A]=V[C^*]$.

Proof. By REF, $C^*\subseteq C_G$ is such that $C^*\in V[A]$ and $\forall\alpha<\lambda.\ A\cap\alpha\in V[C^*]$. Toward a contradiction assume that $A\notin V[C^*]$, and let $W:=V[C^*]$. The quotient forcing $\mathbb{M}[\vec{U}]/C^*\in W$ is $\kappa^+$-c.c., and therefore $cf(\lambda)$-c.c., in $V[G]=W[G]$, and $A$ is a fresh subset of $\lambda$, contradicting theorem REF.$\blacksquare_{\text{\ref{Inductionstephigh}}}$ $\blacksquare_{\text{\ref{MainResaultParttwo}}}$

The Quotient Forcing

To see that $\mathbb{M}[\vec{U}]/C^*$ is $\kappa^+$-c.c. in $V[C^*]$, we can use a more abstract and direct argument. Suppose we have an iteration $P*\underaccent{\sim}{Q}$ of forcing notions. It is a classical result about iterations that if, for a regular cardinal $\lambda$, we have that $P$ has the $\lambda$-c.c. and $\Vdash_P\underaccent{\sim}{Q}\text{ has the }\lambda\text{-c.c.}$, then $P*\underaccent{\sim}{Q}$ satisfies the $\lambda$-c.c. Also, if $P$ has the $\lambda$-c.c. and $P*\underaccent{\sim}{Q}$ has the $\lambda$-c.c., then $\Vdash_P\underaccent{\sim}{Q}\text{ has the }\lambda\text{-c.c.}$. Namely, suppose otherwise. Then there are $p\in P$ and a sequence of $P$-names $\langle\underaccent{\sim}{q}_\alpha\mid\alpha<\lambda\rangle$ such that $p\Vdash_P\langle\underaccent{\sim}{q}_\alpha\mid\alpha<\lambda\rangle\text{ is an antichain in }\underaccent{\sim}{Q}$. Consider now $\lbrace\langle p,\underaccent{\sim}{q}_\alpha\rangle\mid\alpha<\lambda\rbrace\subseteq P*\underaccent{\sim}{Q}$. By the $\lambda$-c.c., there are $\alpha,\beta<\lambda$, $\alpha\ne\beta$, such that $\langle p,\underaccent{\sim}{q}_\alpha\rangle$ and $\langle p,\underaccent{\sim}{q}_\beta\rangle$ are compatible. Hence there is $\langle p^{\prime},\underaccent{\sim}{q}^{\prime}\rangle\ge\langle p,\underaccent{\sim}{q}_\alpha\rangle,\langle p,\underaccent{\sim}{q}_\beta\rangle$. But then $p^{\prime}\Vdash_P\underaccent{\sim}{q}^{\prime}\text{ is stronger than both }\underaccent{\sim}{q}_\alpha,\underaccent{\sim}{q}_\beta$, which is impossible, since $p^{\prime}$ forces that they are members of an antichain.
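In summary, the chain condition transfers along an iteration in both directions; schematically (a compact restatement of the two classical facts just discussed, nothing more):

\[
\bigl(P\text{ is }\lambda\text{-c.c.}\ \wedge\ \Vdash_P\underaccent{\sim}{Q}\text{ is }\lambda\text{-c.c.}\bigr)\ \Longrightarrow\ P*\underaccent{\sim}{Q}\text{ is }\lambda\text{-c.c.}\ \Longrightarrow\ \Vdash_P\underaccent{\sim}{Q}\text{ is }\lambda\text{-c.c.}
\]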
However, in REF we address a different question: Suppose that $P*\underaccent{\sim}{Q}$ satisfies the $\lambda$-c.c. Let $G*H$ be a generic subset of $P*\underaccent{\sim}{Q}$ and consider the interpretation $Q$ of $\underaccent{\sim}{Q}$ in $V[G,H]$. Does it satisfy the $\lambda$-c.c.? Clearly, this is not true in general. For the simplest example, let $P$ be trivial and let $Q$ be the forcing which adds a branch to a Suslin tree. Then, in $V^Q$, $Q$ will no longer be c.c.c. Our attention in theorem REF is on subforcings and projections of $\mathbb{M}[\vec{U}]$; however, the argument given is more general:

Theorem 5.16 Suppose that $\mathcal{P}$ is either Prikry, Magidor, Magidor-Radin or Radin forcing, or Prikry forcing with a product of $P$-point ultrafilters, and $\underaccent{\sim}{Q}$ is a projection of $\mathcal{P}$. Let $G(\mathcal{P})$ be a generic subset of $\mathcal{P}$. Then the interpretation of $\underaccent{\sim}{Q}$ in $V[G(\mathcal{P})]$ satisfies the $\kappa^+$-c.c. there.

We do not know how to generalize this theorem to wider classes of Prikry-type forcing notions. For example, the following may be the first step:

Question 5.17 Is the result valid for a long enough Magidor iteration of Prikry forcings?

The problem is that there is no single complete enough filter here, and so the Galvin theorem (or its generalization) does not seem to apply.

Definition 5.18 Let $F$ be a $\kappa$-complete uniform filter over a set $X$, for a regular uncountable cardinal $\kappa$. We say that $F$ has: (1) the Galvin property iff every family of $\kappa^+$ members of $F$ has a subfamily of cardinality $\kappa$ with intersection in $F$; (2) the generalized Galvin property iff it satisfies the conclusion of REF.

The following question looks natural in this context:

Question 5.19 Characterize the filters (or ultrafilters) which satisfy the Galvin property (or the generalized Galvin property).
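For orientation, the classical positive instance, Galvin's theorem, of which proposition REF is a generalization, reads as follows (a restatement; see the references cited for proposition REF):

\[
\kappa^{<\kappa}=\kappa\ \Longrightarrow\ \text{for every }\lbrace C_i\mid i<\kappa^+\rbrace\subseteq Cub_\kappa\ \text{there is }Y\in[\kappa^+]^{\kappa}\text{ with }\bigcap_{i\in Y}C_i\in Cub_\kappa,
\]

that is, under $\kappa^{<\kappa}=\kappa$ the club filter $Cub_\kappa$ (and, by the same proof, any normal filter on $\kappa$) has the Galvin property. The cardinal-arithmetic hypothesis is essential, as the following construction shows.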
A construction by U. Abraham and S. Shelah may be relevant here. They constructed a model in which there is a sequence $\langle C_i\mid i<2^{\mu^+}\rangle$ in $Cub_{\mu^+}$ such that the intersection of any $\mu^+$ clubs in the sequence is of cardinality less than $\mu$. So the filter $Cub_{\mu^+}$ does not have the Galvin property; however, $GCH$ fails there. The following question seems to be open:

Question 5.20 Assume $GCH$. Let $\kappa$ be a regular uncountable cardinal. Is there a $\kappa$-complete filter over $\kappa$ which fails to satisfy the Galvin property?

Let us note that if the filter is not required to live on $\kappa$, then there is such a filter: namely, any fine $\kappa$-complete filter $U$ over $P_\kappa(\kappa^+)$ does not satisfy the Galvin property. For every $\alpha<\kappa^+$, let $X_\alpha=\lbrace Z\in P_\kappa(\kappa^+)\mid\alpha\in Z\rbrace$; then $X_\alpha\in U$ since $U$ is fine, but the intersection of any $\kappa$ elements of this sequence of sets is empty.
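Spelling out the last computation (immediate from fineness and the fact that $|Z|<\kappa$ for every $Z\in P_\kappa(\kappa^+)$):

\[
I\in[\kappa^+]^{\kappa}\ \Longrightarrow\ \bigcap_{\alpha\in I}X_\alpha=\lbrace Z\in P_\kappa(\kappa^+)\mid I\subseteq Z\rbrace=\emptyset,
\]

since no $Z$ of cardinality less than $\kappa$ can contain the $\kappa$-sized set $I$.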
A fine normal ultrafilter on $P_\kappa(\lambda)$ is used for the supercompact Prikry forcing (see [7] for the definition). Hence, the following question is natural:

Question 5.21 Assume $GCH$ and let $\lambda>\kappa$ be a regular cardinal. Is every quotient forcing of the supercompact Prikry forcing also $\lambda^+$-c.c. in the generic extension?

One particularly interesting case is that of filters which extend the closed unbounded filter.

Question 5.22 Assume $GCH$. Let $\kappa$ be a regular uncountable cardinal. Is there a $\kappa$-complete filter which extends the closed unbounded filter $Cub_\kappa$ and fails to satisfy the Galvin property?

Our prime interest is in $\kappa$-complete ultrafilters over a measurable cardinal $\kappa$. Note the following:

Proposition 5.23 It is consistent that every $\kappa$-complete (or even $\sigma$-complete) ultrafilter over a measurable cardinal $\kappa$ has the generalized Galvin property.

Proof. This holds in the model $L[U]$, where $U$ is the unique normal measure on $\kappa$. In this model every ultrafilter is Rudin-Keisler equivalent to a finite power of $U$ (see for example [11]). By REF, it is easy to see that all such ultrafilters satisfy the generalized Galvin property. $\blacksquare$

In the context of ultrafilters over a measurable cardinal, the following is unclear:

Question 5.24 Is it consistent to have a $\kappa$-complete ultrafilter over $\kappa$ which does not have the Galvin property?

Question 5.25 Is it consistent to have a measurable cardinal $\kappa$ carrying a $\kappa$-complete ultrafilter which extends the closed unbounded filter $Cub_\kappa$ (i.e., a $Q$-point) and fails to satisfy the Galvin property?

It is possible to produce more examples of ultrafilters (and filters) with the generalized Galvin property. The simplest example of this kind is $U\times W$, where $U,W$ are normal ultrafilters over $\kappa$. We will work in a slightly more general setting.

Definition 5.26 Let $F_1,...,F_n$ be $P$-point filters over $\kappa$, and let $\pi_1,...,\pi_n$ be the witnessing functions for them. Denote by $[\kappa]^{n*}$ the set of all $n$-tuples $\langle\alpha_1,..,\alpha_n\rangle$ such that for every $2\le i\le n$, $\alpha_{i-1}<\pi_i(\alpha_i)$. Note that if the $F_i$'s are normal, then $\pi_i=id$ and $[\kappa]^{n*}=[\kappa]^n$.

Definition 5.27 Let $F_1,...,F_n$ be $P$-point filters over $\kappa$, with witnessing functions $\pi_1,...,\pi_n$. Define a filter $\prod_{i=1}^{n*}F_i$ over $[\kappa]^{n*}$ recursively: for $X\subseteq[\kappa]^{n*}$, $X\in\prod_{i=1}^{n*}F_i\Leftrightarrow\Big\lbrace\alpha<\kappa\mid X_\alpha\in\prod_{i=2}^{n*}F_i\Big\rbrace\in F_1$, where $X_\alpha=\lbrace\langle\alpha_2,...,\alpha_n\rangle\in[\kappa]^{n-1*}\mid\langle\alpha,\alpha_2,...,\alpha_n\rangle\in X\rbrace$. Again, if the filters are normal, this is simply the product.

Proposition 5.28 Let $F_1,...,F_n$ be $P$-point filters over $\kappa$, with witnessing functions $\pi_1,...,\pi_n$. Then for every $X\in\prod_{i=1}^{n*}F_i$ there are $X_i\in F_i$ such that $\prod_{i=1}^{n*}X_i\subseteq X$.

Proof. By induction on $n$; for $n=1$ it is clear. Let $X\in\prod_{i=1}^{n*}F_i$ and let $X_1=\Big\lbrace\alpha<\kappa\mid X_\alpha\in\prod_{i=2}^{n*}F_i\Big\rbrace\in F_1$. For every $\alpha\in X_1$, find by the induction hypothesis $X_{\alpha,i}\in F_i$ for $2\le i\le n$ such that $\prod_{i=2}^{n*}X_{\alpha,i}\subseteq X_\alpha$. Define $X_i=\Delta^*_{\alpha<\kappa}X_{\alpha,i}$; since $F_i$ is a $P$-point, $X_i\in F_i$. Let us argue that $\prod_{i=1}^{n*}X_i\subseteq X$. Let $\langle\alpha_1,..,\alpha_n\rangle\in\prod_{i=1}^{n*}X_i$; then for every $2\le i\le n$, $\alpha_1<\pi_i(\alpha_i)$, hence $\alpha_i\in X_{\alpha_1,i}$. It follows that $\langle\alpha_2,...,\alpha_n\rangle\in\prod_{i=2}^{n*}X_{\alpha_1,i}\subseteq X_{\alpha_1}$. By the definition of $X_{\alpha_1}$, $\langle\alpha_1,\alpha_2,...,\alpha_n\rangle\in X$.$\blacksquare$
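For a concrete illustration, consider the normal case with $n=2$: here the $\pi_i$ are the identity, $[\kappa]^{2*}=[\kappa]^2$, and the definition unwinds to the usual product of two normal ultrafilters $U,W$ over $\kappa$ (a sketch of the simplest instance mentioned above):

\[
X\in U\times W\quad\iff\quad\lbrace\alpha<\kappa\mid\lbrace\beta<\kappa\mid\langle\alpha,\beta\rangle\in X\rbrace\in W\rbrace\in U.
\]

In this case proposition REF says that every $U\times W$-large set $X$ contains a set of the form $(X_1\times X_2)\cap[\kappa]^2$ with $X_1\in U$ and $X_2\in W$.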
Corollary 5.29 Let $F_1,...,F_n$ be $P$-point filters over $\kappa$, with witnessing functions $\pi_1,...,\pi_n$. Then $\prod_{i=1}^{n*}F_i$ also satisfies the generalized Galvin property of REF.

Proof. Let $\langle Y_\alpha\mid\alpha<\kappa^+\rangle$ and $\langle Z_\alpha\mid\alpha<\kappa^+\rangle$ be as in REF. By proposition REF, for every $1\le i\le n$ and $\alpha<\kappa^+$, find $X^{(\alpha)}_i\in F_i$ such that $\prod_{i=1}^{n*}X^{(\alpha)}_i\subseteq Y_\alpha$. For every $\vec{\alpha}=\langle\alpha_1,...,\alpha_n\rangle\in[\kappa]^{n*}$, every $\vec{\nu}\in[\kappa]^{<\omega}$ and every $\xi<\kappa^+$, define $H_{\xi,\vec{\alpha},\vec{\nu}}=\Big\lbrace\gamma<\kappa^+\mid\forall 1\le i\le n.\ X^{(\gamma)}_i\cap\alpha_i=X^{(\xi)}_i\cap\alpha_i\text{ and }\vec{\nu}\in[Z_\gamma]^{<\omega}\Big\rbrace$. As in REF, since there are at most $\kappa$ many possibilities for $\langle X^{(\gamma)}_1\cap\alpha_1,X^{(\gamma)}_2\cap\alpha_2,...,X^{(\gamma)}_n\cap\alpha_n\rangle$, we can find $\alpha^*<\kappa^+$ such that for every $\vec{\alpha}$ and $\vec{\nu}$, $|H_{\alpha^*,\vec{\alpha},\vec{\nu}}|=\kappa^+$. Enumerate $[Z_{\alpha^*}]^{<\omega}$ by $\langle\vec{\nu}_i\mid i<\kappa\rangle$. Also, each $F_i$ is a $P$-point, so there is a set $B_i\in F_i$ such that for every $j<\kappa$ we can pick $\rho^{(j)}_i>{\rm sup}(\pi_i^{-1^{\prime\prime}}[j]\cap B_i)$. Define the sequence $\beta_j$ by induction: $\beta_j\in H_{\alpha^*,\langle\rho^{(j)}_1,...,\rho^{(j)}_n\rangle,\vec{\nu}_j}\setminus\lbrace\beta_k\mid k<j\rbrace$. We claim once again that $Y_{\alpha^*}\cap\bigcap_{j<\kappa}Y_{\beta_j}\in\prod_{i=1}^{n*}F_i$. To see this, define for every $1\le i\le n$, $C_i:=X^{(\alpha^*)}_i\cap\Delta^*_{j<\kappa}X^{(\beta_j)}_i\in F_i$. Let $\vec{\alpha}\in\prod_{i=1}^{n*}C_i$ and let $j<\kappa$. For every $1\le i\le n$, if $j<\pi_i(\alpha_i)$ then $\alpha_i\in X^{(\beta_j)}_i$. If $\pi_i(\alpha_i)\le j$, then $\alpha_i<\rho^{(j)}_i$, so $\alpha_i\in X^{(\alpha^*)}_i\cap\rho^{(j)}_i$. Since $\beta_j\in H_{\alpha^*,\langle\rho^{(j)}_1,...,\rho^{(j)}_n\rangle,\vec{\nu}_j}$, $\alpha_i\in X^{(\alpha^*)}_i\cap\rho^{(j)}_i=X^{(\beta_j)}_i\cap\rho^{(j)}_i$. Therefore $\vec{\alpha}\in\prod_{i=1}^{n*}X^{(\beta_j)}_i\subseteq Y_{\beta_j}$. The continuation is as in REF.$\blacksquare$

Fresh sets

Let us conclude this paper with the following result about fresh sets in Prikry, Magidor, Magidor-Radin and Radin extensions.

Theorem 6.1 Assume that $\mathbb{P}$ is either Prikry, Magidor, Magidor-Radin or Radin forcing. Let $G\subseteq\mathbb{P}$ be $V$-generic. If $A\in V[G]$ is a fresh set of ordinals with respect to $V$, then $cf^{V[G]}({\rm sup}(A))=\omega$. (Conversely, if $\kappa$ changes its cofinality to $\omega$, then any sequence cofinal in $\kappa$ of order type $\omega$ is fresh.)
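Before the proof, it is worth making the parenthetical remark explicit (a sketch; here $\langle c_n\mid n<\omega\rangle$ denotes any sequence in $V[G]$ of order type $\omega$ which is cofinal in $\kappa$, e.g. a Prikry sequence):

\[
A=\lbrace c_n\mid n<\omega\rbrace\ \Longrightarrow\ A\cap\beta\ \text{is finite for every }\beta<\kappa,\ \text{hence }A\cap\beta\in V,\ \text{while }A\notin V,
\]

since every proper initial segment of $A$ is a finite set of ordinals of $V$, whereas $A$ itself witnesses $cf(\kappa)=\omega$ and $\kappa$ is regular in $V$. So $A$ is fresh.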
Proof. By induction on $\kappa$, which is the supremum of $C_G$. Let $A$ be a fresh subset. If $A\in V[C_G\cap\alpha]$ for some $\alpha<\kappa$, we are done by the induction hypothesis. Assume that $\forall\alpha<\kappa.\ A\notin V[C_G\cap\alpha]$; in particular ${\rm sup}(A)\ge\kappa$. Let us start with the difficult part, where ${\rm sup}(A)=\kappa$.

Lemma 6.2 If $A\in V[G]$ is a fresh subset of $\kappa$ with respect to $V$ such that ${\rm sup}(A)=\kappa$, then $cf^{V[G]}(\kappa)=\omega$.

Proof. Toward a contradiction, assume that $\lambda:=cf^{V[G]}(\kappa)>\omega$ and let $\underaccent{\sim}{A}$ be a name for $A$. First we deal with the case that $\kappa$ is singular in $V[G]$, hence $\omega<\lambda<\kappa$. Since $\mathbb{P}$ decomposes into the part below $\lambda$ and the part above $\lambda$, we can ensure sufficient closure by working in $V[C_G\cap\lambda]$ and forcing with the part of the forcing above $\lambda$. Note that $A$ is fresh also with respect to $V[C_G\cap\lambda]$. Let $\langle c_\alpha\mid\alpha<\lambda\rangle$ be a cofinal continuous subsequence of $C_G$ such that $c_0>\lambda$. Let $\langle\underaccent{\sim}{c'}_\alpha\mid\alpha<\lambda\rangle$ be a sequence of $\mathbb{M}[\vec{U}]\upharpoonright(\lambda,\kappa)$-names for it. Find $p\in G\upharpoonright(\lambda,\kappa)$ such that
$p\Vdash\underaccent{\sim}{A}\text{ is fresh}\wedge\langle\underaccent{\sim}{c'}_\alpha\mid\alpha<\lambda\rangle\text{ is a cofinal continuous subsequence of }C_G.$
For every $i<\lambda$, the set
$D_i=\Big\lbrace q\mid\exists\vec{\alpha}\ \exists B.\ p^\frown\vec{\alpha}\le^* q\wedge q\Vdash\underaccent{\sim}{c}_i={\rm max}(\vec{\alpha})\wedge\underaccent{\sim}{A}\cap{\rm max}(\vec{\alpha})=B\Big\rbrace$
is dense. To see that, let $q_0\ge p$, and find $q\ge q_0$ and $\vec{\beta}$ such that $p^\frown\vec{\beta}\le^* q$ and $q\Vdash{\rm max}(\vec{\beta})=\underaccent{\sim}{c}_i$. Above ${\rm max}(\vec{\beta})$ there is enough closure to decide $\underaccent{\sim}{A}\cap{\rm max}(\vec{\beta})$. Find $q\upharpoonright({\rm max}(\vec{\beta}),\kappa)\le^* q_{>{\rm max}(\vec{\beta})}$ in $\mathbb{M}[\vec{U}]\upharpoonright({\rm max}(\vec{\beta}),\kappa)$ which decides $\underaccent{\sim}{A}\cap{\rm max}(\vec{\beta})$, and $q\upharpoonright{\rm max}(\vec{\beta})\le q_{\le{\rm max}(\vec{\beta})}$ in $\mathbb{M}[\vec{U}]\upharpoonright(\lambda,{\rm max}(\vec{\beta}))$ (not necessarily a direct extension) such that for some $B\subseteq{\rm max}(\vec{\beta})$,
$q^*:=\langle q_{\le{\rm max}(\vec{\beta})},q_{>{\rm max}(\vec{\beta})}\rangle\Vdash\underaccent{\sim}{A}\cap{\rm max}(\vec{\beta})=B\wedge\underaccent{\sim}{c}_i={\rm max}(\vec{\beta}).$
Let $\vec{\alpha}$ be such that $p^\frown\vec{\alpha}\le^* q^*$; then by construction ${\rm max}(\vec{\alpha})={\rm max}(\vec{\beta})$ and $q^*$ is as wanted. By REF, find a condition $p\le^* p_i$, a $\vec{U}$-fat tree $T_i$ of extensions of $p_i$, and sets $B^t_i$ such that for every $t\in mb(T_i)$ there is $A_i(t)\subseteq{\rm max}(t)$ such that
$p_i{}^\frown\langle t,\vec{B}^t_i\rangle\Vdash\underaccent{\sim}{A}\cap{\rm max}(t)=A_i(t)\wedge\underaccent{\sim}{c}_i={\rm max}(t).$
Since we have sufficient closure in the forcing above $\lambda$, we can find a single $p\le^* p^*\in G\upharpoonright(\lambda,\kappa)$ such that for every $i<\lambda$, $p_i\le^* p^*$. Keep defining by recursion sets $A_i(s)$ for $s\in T_i\setminus mb(T_i)$. Let $s\in{\rm Lev}_{ht(T_i)-1}(T_i)$; then we can shrink ${\rm Succ}_{T_i}(s)$ and find $A_i(s)$ such that for every $\alpha\in{\rm Succ}_{T_i}(s)$, $A_i(s^\frown\alpha)=A_i(s)\cap\alpha$. Generally, take $s\in T_i$ and assume that for every $\alpha$ in ${\rm Succ}_{T_i}(s)$, $A_i(s^\frown\alpha)$ is defined. We can find a single $A_i(s)$ and shrink ${\rm Succ}_{T_i}(s)$ such that
$(\star)\ \ \ \forall\alpha\in{\rm Succ}_{T_i}(s).\ A_i(s^\frown\alpha)\cap\alpha=A_i(s)\cap\alpha.$
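The stabilization step just performed is an ineffability-type argument; for orientation, here is its standard normal-measure form (a well-known fact, stated here only as a reading aid): if $U$ is a normal ultrafilter on $\kappa$ and $A_\alpha\subseteq\alpha$ for every $\alpha<\kappa$, let
$A=\lbrace\nu<\kappa\mid\lbrace\alpha<\kappa\mid\nu\in A_\alpha\rbrace\in U\rbrace.$
For each $\nu<\kappa$, the set $S_\nu=\lbrace\alpha<\kappa\mid\nu\in A\leftrightarrow\nu\in A_\alpha\rbrace$ belongs to $U$, so by normality $\Delta_{\nu<\kappa}S_\nu\in U$, and for every $\alpha$ in this diagonal intersection, $A\cap\alpha=A_\alpha\cap\alpha$. The proof applies this shape along the measures of the fat tree, shrinking ${\rm Succ}_{T_i}(s)$ to the stabilized set.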
Move to $V[A]$, and let us compare the sets $A_i(s)$ with $A$. For every $i$, define recursively $\rho^i(k)$ for $k\le N_i:=ht(T_i)$. Let $\rho^i(0)={\rm min}(A\Delta A_i(\langle\rangle))+1$, and recursively define
$\rho^i(k+1)={\rm sup}\Big({\rm min}(A\Delta A_i(\langle\delta_1,...,\delta_k\rangle))+1\mid\delta_1<\rho^i(0),...,\delta_k<\rho^i(k)\Big).$
Let $\vec{c}_i\in mb(T_i)$ be such that $p^{*\frown}\langle\vec{c}_i,\vec{B}^{\vec{c}_i}_i\rangle\in G$; let us argue that for every $k\le N_i$, $\rho^i(k)>\vec{c}_i(k)$. By the construction of the tree $T_i$, $c_i=(\underaccent{\sim}{c}_i)_G={\rm max}(\vec{c}_i)$ and $A\cap c_i=A_i(\vec{c}_i)\cap c_i$. By $(\star)$, for every $j\le N_i$,
$A_i(\vec{c}_i\upharpoonright j)\cap\vec{c}_i(j)=A_i(\vec{c}_i\upharpoonright j+1)\cap\vec{c}_i(j).$
It follows that for every $j\le N_i$,
$A_i(\vec{c}_i\upharpoonright j)\cap\vec{c}_i(j)=A\cap\vec{c}_i(j).$
In particular, $A\cap\vec{c}_i(0)=A_i(\langle\rangle)\cap\vec{c}_i(0)$. Since $A\cap\rho^i(0)\ne A_i(\langle\rangle)\cap\rho^i(0)$, it follows that $\vec{c}_i(0)<\rho^i(0)$. Inductively, assume that $\vec{c}_i(j)<\rho^i(j)$ for every $j\le k$. Since $A_i(\vec{c}_i\upharpoonright k+1)\cap\vec{c}_i(k+1)=A\cap\vec{c}_i(k+1)$, then
$\vec{c}_i(k+1)<{\rm min}(A_i(\vec{c}_i\upharpoonright k+1)\Delta A)\le\rho^i(k+1).$
Before proving that $cf^{V[G]}(\kappa)=\omega$, let us argue that $\rho^i(k)<\kappa$. Again by induction on $k$: $\rho^i(0)<\kappa$ since $A\ne A_i(\langle\rangle)$, as $A_i(\langle\rangle)\in V[C_G\cap\lambda]$ and $A\notin V[C_G\cap\lambda]$. Toward a contradiction, assume that $\rho^i(k+1)=\kappa$. Back in $V[C_G\cap\lambda]$, consider the collection
$\lbrace A_i(\langle\alpha_0,...,\alpha_k\rangle)\mid\alpha_0<\rho^i(0),...,\alpha_k<\rho^i(k)\rbrace.$
Then for every $\gamma<\kappa$ pick distinct $\vec{\alpha}_1,\vec{\alpha}_2$ such that $A_i(\vec{\alpha}_1)\ne A_i(\vec{\alpha}_2)$ but $A_i(\vec{\alpha}_1)\cap\gamma=A_i(\vec{\alpha}_2)\cap\gamma$. To see that there are such $\vec{\alpha}_1,\vec{\alpha}_2$: if $\rho^i(k+1)=\kappa$, there is $\vec{\alpha}_1$ such that $\eta_1:={\rm min}(A\Delta A_i(\vec{\alpha}_1))>\gamma$, hence $A_i(\vec{\alpha}_1)\cap\gamma=A\cap\gamma$. Let $\vec{\alpha}_2$ be such that ${\rm min}(A\Delta A_i(\vec{\alpha}_2))>\eta_1$. In particular, $A_i(\vec{\alpha}_1)\ne A_i(\vec{\alpha}_2)$, but $A_i(\vec{\alpha}_1)\cap\gamma=A\cap\gamma=A_i(\vec{\alpha}_2)\cap\gamma$. Since this is all in $V[C_G\cap\lambda]$, where $\kappa$ is still measurable, we can find unboundedly many $\gamma$'s with the same $\vec{\alpha}_1,\vec{\alpha}_2$, which is clearly a contradiction. So we have found a sequence $\langle\rho^i(N_i)\mid i<\lambda\rangle\in V[A]$ such that $\rho^i(N_i)>c_i$. Let $Z$ be the closure of $\lbrace\rho^i(N_i)\mid i<\lambda\rbrace$. Since $\lambda>\omega$, there is some limit $\alpha<\lambda$ such that $c_\alpha<\kappa$ is a limit point of $Z$. To see the contradiction, note that on the one hand $A\cap c_\alpha\in V[C_G\cap\lambda]$, and therefore the set $Z\cap c_\alpha$, which is cofinal in $c_\alpha$ and of size at most $\lambda$, is defined in $V[C_G\cap\lambda]$ from $A\cap c_\alpha$; on the other hand, $c_\alpha>\lambda$, thus $c_\alpha$ should stay measurable, and in particular regular, in $V[C_G\cap\lambda]$, a contradiction.
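The same branch-bounding mechanism recurs in the regular case below; its simplest instance, for a tree of height one, is worth isolating (notation as above, nothing new): if $A_i(\langle\rangle)\in V[C_G\cap\lambda]$ is the guessed value of $A$ and $\rho^i(0)={\rm min}(A\Delta A_i(\langle\rangle))+1$, then along the generic branch $A\cap\vec{c}_i(0)=A_i(\langle\rangle)\cap\vec{c}_i(0)$, so the first disagreement between $A$ and the guess occurs at or above $\vec{c}_i(0)$:
$\vec{c}_i(0)\le{\rm min}(A\Delta A_i(\langle\rangle))<\rho^i(0).$
Thus $\rho^i(0)$, computed in $V[A]$ without reference to $G$, bounds the first point of the branch, and the recursion for $\rho^i(k+1)$ propagates this bound through all $N_i$ levels.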
Next we eliminate the case that $\kappa$ is regular in $V[G]$, i.e. $\lambda=\kappa$. Many of the ideas from the case $\lambda<\kappa$ will also work here. We no longer work over the model $V[C_G\cap\lambda]$; instead, we simply force over $V$. Let $p\in G$ be such that
$p\Vdash\underaccent{\sim}{A}\text{ is fresh}.$
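In outline (all objects are constructed below, and the inequalities are proved in Claim 6): we will produce $\le^*$-increasing conditions $p_0\le^* p_1\le^*\cdots$ and ordinals $\eta_n<\kappa$, defined in $V[A]$, such that
$C_G(0)<\eta_0\le C_G(\eta_0)<\eta_1\le C_G(\eta_1)<\eta_2\le\cdots$
Then $\kappa^*={\rm sup}_{n<\omega}\eta_n$ is a limit point of $C_G$, while freshness gives $A\cap\kappa^*\in V$, so the sequence $\langle\eta_n\mid n<\omega\rangle$ can be reconstructed in $V$, contradicting the regularity of $\kappa^*$ in $V$.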
By induction we construct a $\le^*$-increasing sequence of conditions $p_n$ and a tree of trees, i.e. a tree $T_0$, trees $T_{1,t_0}$ for $t_0\in mb(T_0)$, and generally trees $T_{n+1,t_0,...,t_n}$ where
$t_0\in mb(T_0),\ t_1\in mb(T_{1,t_0}),\ ...,\ t_n\in mb(T_{n,t_0,...,t_{n-1}}).$
First find a condition $p\le^* p_0$ and take the tree $T_0$ to be simply the tree with one level which decides $C_{\underaccent{\sim}{G}}(0)$ if it is not already decided, or $T_0=\lbrace\langle\rangle\rbrace$ otherwise. Necessarily, for each $\alpha\in T_0$, ${\rm min}(\kappa(p^\frown\alpha))=\alpha$, hence there is enough $\le^*$-closure to decide $\underaccent{\sim}{A}\cap\alpha$, so we find $p^\frown\alpha\le^* p_\alpha$ and a set $A_0(\alpha)$ such that $p_\alpha\Vdash\underaccent{\sim}{A}\cap\alpha=A_0(\alpha)$. Then $p_0$ is obtained by diagonally intersecting all the sets appearing in the conditions $p_\alpha$, and $p_0$ has the following property:
$\forall\alpha\in T_0.\ p_0{}^\frown\alpha\Vdash\underaccent{\sim}{A}\cap\alpha=A_0(\alpha)\wedge C_{\underaccent{\sim}{G}}(0)=\alpha.$
For clarity, let us also present the construction of $p_1$ and $T_{1,t_0}$ for every $t_0\in mb(T_0)$; the proofs regarding the construction will be addressed later, in the general definition. If necessary, find a direct extension of $p_0$ and use ineffability to find a set $A_0(\langle\rangle)\subseteq\kappa$ such that for every $\alpha\in T_0$, $A_0(\langle\rangle)\cap\alpha=A_0(\alpha)\cap\alpha$. In $V[A]$, define $\eta_0={\rm min}(A\Delta A_0(\langle\rangle))$; since $A\notin V$ and $A_0(\langle\rangle)\in V$, $\eta_0<\kappa$ is well defined. Clearly, for every $V$-generic filter $H$ with $p_0\in H$, $\eta_0>C_H(0)$, since then $p_0^\frown C_H(0)\in H$ forces the correct value of $A$. Let $\underaccent{\sim}{\eta}_0$ be a name such that $p_0\Vdash\underaccent{\sim}{\eta}_0={\rm min}(\underaccent{\sim}{A}\Delta A_0(\langle\rangle))$. Fix $t_0\in mb(T_0)$ and consider $p_0^\frown t_0$. In the general case we will prove that we can find $p_0^\frown t_0\le^* p_{t_0}$, a tree $T_{1,t_0}$, and sets $Y^t_1$ for $t\in mb(T_{1,t_0})$ such that for every $t_1\in mb(T_{1,t_0})$ there is $A_1(t_0,t_1)\subseteq{\rm max}(t_1)$ such that
$p_{t_0}^\frown\langle t_1,\vec{Y}^{t_1}_1\rangle\Vdash A_1(t_0,t_1)\cap{\rm max}(t_1)=\underaccent{\sim}{A}\cap{\rm max}(t_1)\wedge{\rm max}(t_1)=C_{\underaccent{\sim}{G}}(\underaccent{\sim}{\eta}_0).$
Note that
$p_0^\frown t_0\Vdash{\rm max}(t_0)<\underaccent{\sim}{\eta}_0\le C_{\underaccent{\sim}{G}}(\underaccent{\sim}{\eta}_0),$
hence ${\rm max}(t_1)>{\rm max}(t_0)$. Find a single $p_0\le^* p_1$ such that for every $t_0\in mb(T_0)$, $p_{t_0}\le^* p_1^\frown t_0$. If necessary, directly extend $p_1$ so that there is $N_1<\omega$ such that for every $t_0\in mb(T_0)$, $ht(T_{1,t_0})=N_1$. Define the sets $A_1(t_0,s)$ for every $s\in T_{1,t_0}\setminus mb(T_{1,t_0})$. Let $s\in{\rm Lev}_{N_1-1}(T_{1,t_0})$; we can shrink ${\rm Succ}_{T_{1,t_0}}(s)$ and find $A_1(t_0,s)\subseteq\kappa$ such that for every $\alpha\in{\rm Succ}_{T_{1,t_0}}(s)$, $A_1(t_0,s^\frown\alpha)=A_1(t_0,s)\cap\alpha$. Recursively, let $s\in T_{1,t_0}\setminus mb(T_{1,t_0})$ and assume that for every $\alpha$ in ${\rm Succ}_{T_{1,t_0}}(s)$, $A_1(t_0,s^\frown\alpha)$ is defined. Find a single $A_1(t_0,s)$ and shrink ${\rm Succ}_{T_{1,t_0}}(s)$ such that
$\forall\alpha\in{\rm Succ}_{T_{1,t_0}}(s).\ A_1(t_0,s^\frown\alpha)\cap\alpha=A_1(t_0,s)\cap\alpha.$
In $V[A]$, define $\rho^1(k)$ for every $k\le N_1$. For $k=0$,
$\rho^1(0)={\rm sup}\big({\rm min}(A\Delta A_1(t_0,\langle\rangle))\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega}\big).$
Recursively,
$\rho^1(k+1)={\rm sup}\big({\rm min}(A\Delta A_1(t_0,\langle\alpha_0,...,\alpha_k\rangle))\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega}\wedge\alpha_i<\rho^1(i)\big).$
Note that for every $t_0\in mb(T_0)$ and $s\in T_{1,t_0}$, $A\ne A_1(t_0,s)$, as $A_1(t_0,s)\in V$ and $A\notin V$. Therefore $\rho^1(k)\le\kappa$ is well defined for every $k\le N_1$. In the general case we will also prove that $\rho^1(k)<\kappa$. Finally, define $\eta_1=\rho^1(N_1)$ and let $\underaccent{\sim}{\eta}_1$ be a name such that $p_1$ forces that $\underaccent{\sim}{\eta}_1$ is computed by comparing the sets $A_1(t_0,s)$ with $\underaccent{\sim}{A}$, in the way we defined it.
Now for the general definition: assume we have defined $p\le^* p_1\le^* p_2\le^*...\le^* p_n$, trees $T_{n,t_0,...,t_{n-1}}$ for $t_0\in mb(T_0), t_1\in mb(T_{1,t_0}),...,t_{n-1}\in mb(T_{n-1,t_0,...,t_{n-2}})$, sets $A_n(t_0,...,t_{n-1},t_n)$ for every $t_n\in mb(T_{n,t_0,...,t_{n-1}})$ and $Y^{t_0}_0,...,Y^{t_n}_n$, and also a name $\underaccent{\sim}{\eta}_{n-1}$ such that
$p_n^\frown\langle t_0,\vec{Y}^{t_0}_0\rangle^\frown\langle t_1,\vec{Y}^{t_1}_1\rangle^\frown...^\frown\langle t_n,\vec{Y}^{t_n}_n\rangle\Vdash\underaccent{\sim}{A}\cap{\rm max}(t_n)=A_n(t_0,...,t_{n-1},t_n)\cap{\rm max}(t_n)\wedge{\rm max}(t_n)=C_{\underaccent{\sim}{G}}(\underaccent{\sim}{\eta}_{n-1}).$
Define recursively the sets $A_n(t_0,...,t_{n-1},s)$ for $s\in T_{n,t_0,...,t_{n-1}}\setminus mb(T_{n,t_0,...,t_{n-1}})$. Assume that $A_n(t_0,...,t_{n-1},s^\frown\alpha)$ is defined for every $\alpha\in{\rm Succ}_{T_{n,t_0,...,t_{n-1}}}(s)$. Directly extend $p_n$ if necessary, shrink ${\rm Succ}_{T_{n,t_0,...,t_{n-1}}}(s)$, and find by ineffability $A_n(t_0,...,t_{n-1},s)$ so that for every $\alpha\in{\rm Succ}_{T_{n,t_0,...,t_{n-1}}}(s)$,
$A_n(t_0,...,t_{n-1},s)\cap\alpha=A_n(t_0,...,t_{n-1},s^\frown\alpha)\cap\alpha.$
In $V[A]$, we have defined $\eta_0,...,\eta_{n-1}$, and so we can define
$\rho^n(0)={\rm sup}\Big[{\rm min}\Big(A_n(t_0,...,t_{n-1},\langle\rangle)\Delta A\Big)\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega},...,t_{n-1}\in mb(T_{n-1})\cap[\eta_{n-1}]^{<\omega}\Big]$
and keep defining $\rho^n(k+1)$ recursively as
${\rm sup}\Big[{\rm min}\Big(A_n(t_0,...,t_{n-1},\vec{\alpha})\Delta A\Big)\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega},...,t_{n-1}\in mb(T_{n-1})\cap[\eta_{n-1}]^{<\omega},\ \vec{\alpha}\in\prod_{j=1}^k\rho^n(j)\Big].$
Finally, define $\eta_n=\rho^n(N_n)$. Again, note that $\rho^n(k)\le\kappa$ is a well-defined ordinal. Let us prove that $\rho^n(k)<\kappa$.

Claim 5 For every $k\le N_n$, $\rho^n(k)<\kappa$.

Proof of claim. The proof is similar to the case that $\kappa$ is singular in $V[G]$. Toward a contradiction, assume that $\rho^n(k)=\kappa$. Back in $V$, consider the collection
$\Big\lbrace A_n(t_0,...,t_{n-1},\vec{\alpha})\mid t_0\in mb(T_0)\cap[\eta_0]^{<\omega},...,t_{n-1}\in mb(T_{n-1})\cap[\eta_{n-1}]^{<\omega},\ \vec{\alpha}\in\prod_{j=1}^{k-1}\rho^n(j)\Big\rbrace.$
Then for every $\gamma<\kappa$ pick distinct $t_0,...,t_{n-1},\vec{\alpha}$ and $s_0,...,s_{n-1},\vec{\beta}$ such that
$A_n(t_0,...,t_{n-1},\vec{\alpha})\ne A_n(s_0,...,s_{n-1},\vec{\beta}),\text{ but } A_n(t_0,...,t_{n-1},\vec{\alpha})\cap\gamma=A_n(s_0,...,s_{n-1},\vec{\beta})\cap\gamma.$
To see that there are such $t_0,...,t_{n-1},\vec{\alpha}$ and $s_0,...,s_{n-1},\vec{\beta}$: by the assumption that $\rho^n(k)=\kappa$, there are $t_0,...,t_{n-1},\vec{\alpha}$ such that $\xi_1:={\rm min}(A\Delta A_n(t_0,...,t_{n-1},\vec{\alpha}))>\gamma$, hence
$A_n(t_0,...,t_{n-1},\vec{\alpha})\cap\gamma=A\cap\gamma.$
Find $s_0,...,s_{n-1},\vec{\beta}$ with ${\rm min}(A\Delta A_n(s_0,...,s_{n-1},\vec{\beta}))>\xi_1$. In particular,
$A_n(t_0,...,t_{n-1},\vec{\alpha})\ne A_n(s_0,...,s_{n-1},\vec{\beta})\text{ but }A_n(t_0,...,t_{n-1},\vec{\alpha})\cap\gamma=A\cap\gamma=A_n(s_0,...,s_{n-1},\vec{\beta})\cap\gamma.$
Since this is all in $V$, where $\kappa$ is measurable, we can find unboundedly many $\gamma$'s with the same $t_0,...,t_{n-1},\vec{\alpha},s_0,...,s_{n-1},\vec{\beta}$, which is clearly a contradiction.$\blacksquare_{\text{claim}}$

Find a name $\underaccent{\sim}{\eta}_n$ such that $p_n$ forces that $\underaccent{\sim}{\eta}_n$ is obtained by comparing $\underaccent{\sim}{A}$ with the sets $A_n(t_0,...,t_{n-1},\vec{\alpha})$ as above, using $\underaccent{\sim}{\eta}_0,...,\underaccent{\sim}{\eta}_{n-1}$.
Now for the definition of the trees, fix $t_0,...,t_n$ such that
$t_0\in mb(T_0),\ t_1\in mb(T_{1,t_0}),\ ...,\ t_n\in mb(T_{n,t_0,...,t_{n-1}}).$
The set $D$ of all conditions $q$ such that for some $\vec{\alpha}\in[\kappa]^{<\omega}$:
$p_n^\frown\langle t_0,\vec{Y}^{t_0}_0\rangle^\frown...^\frown\langle t_n,\vec{Y}^{t_n}_n\rangle^\frown\vec{\alpha}\le^* q$, and
$q\Vdash\underaccent{\sim}{A}\cap{\rm max}(\vec{\alpha})=A(\vec{\alpha})\wedge{\rm max}(\vec{\alpha})=C_{\underaccent{\sim}{G}}(\underaccent{\sim}{\eta}_n)$,
is dense above $p_n^\frown\langle t_0,\vec{Y}^{t_0}_0\rangle^\frown...^\frown\langle t_n,\vec{Y}^{t_n}_n\rangle$; the proof is as in the case that $\kappa$ is singular. By REF and REF, find a condition $p_n^\frown t_0^\frown...^\frown t_{n-1}{}^\frown t_n\le^* p_{t_0,...,t_n}$, a $\vec{U}$-fat tree $T_{n+1,t_0,...,t_n}$ of extensions of $p_{t_0,...,t_n}$, and sets $Y^s_{n+1}$, such that for every $t\in mb(T_{n+1,t_0,...,t_n})$ there is $A_{n+1}(t_0,...,t_n,t)\subseteq{\rm max}(t)$ for which
$p_{t_0,...,t_n}{}^\frown\langle t,\vec{Y}^t_{n+1}\rangle\Vdash\underaccent{\sim}{A}\cap{\rm max}(t)=A_{n+1}(t_0,...,t_n,t)\wedge C_{\underaccent{\sim}{G}}(\underaccent{\sim}{\eta}_n)={\rm max}(t),$
and the set $D_{T_{n+1,t_0,...,t_n},\vec{Y}_{n+1}}$ is dense above $p_{t_0,...,t_n}$. By REF, find a single $p_n\le^* p_{n+1}$ and shrink the trees so that for every $t_0,...,t_n$,
$p_{t_0,...,t_n}\le^* p_{n+1}^\frown t_0^\frown...^\frown t_n.$
By shrinking even more if necessary, we can assume that there is $N_{n+1}$ such that for every $t_0,...,t_n$, $ht(T_{n+1,t_0,...,t_n})=N_{n+1}$. This concludes the recursive definition. By $\sigma$-completeness, there is $p_\omega$ such that $p_n\le^* p_\omega$ for every $n<\omega$; by density, there is such $p_\omega\in G$. In $V[A]$ we have the sequence $\langle\eta_n\mid n<\omega\rangle$. Clearly, $C_G(\eta_n)\ge\eta_n$, and as we have seen, $C_G(0)<\eta_0$. Let us prove that $\eta_{n+1}>C_G(\eta_n)$.

Claim 6 For every $0<n<\omega$, $\eta_n>C_G(\eta_{n-1})$.

Proof of claim. Find $\vec{c}_0\in mb(T_0)$, $\vec{c}_1\in mb(T_{1,\vec{c}_0})$, ..., $\vec{c}_n\in mb(T_{n,\vec{c}_0,...,\vec{c}_{n-1}})$ such that
$p_\omega^\frown\langle\vec{c}_0,\vec{Y}^{\vec{c}_0}_0\rangle^\frown...^\frown\langle\vec{c}_n,\vec{Y}^{\vec{c}_n}_n\rangle\in G.$
It follows that $\vec{c}_n(N_n)=C_G(\eta_{n-1})$ and that $A\cap C_G(\eta_{n-1})=A_n(\vec{c}_0,...,\vec{c}_n)\cap C_G(\eta_{n-1})$. Since for every $j\le N_n$, by definition,
$A_n(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\upharpoonright j)\cap\vec{c}_n(j)=A_n(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\upharpoonright j+1)\cap\vec{c}_n(j),$
it follows that for every $j\le N_n$,
$A_n(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\upharpoonright j)\cap\vec{c}_n(j)=A\cap\vec{c}_n(j).$
In particular, $A\cap\vec{c}_n(0)=A_n(\vec{c}_0,...,\vec{c}_{n-1},\langle\rangle)\cap\vec{c}_n(0)$. Let us argue that for every $k\le N_n$, $\rho^n(k)>\vec{c}_n(k)$. Since by definition $A\cap\rho^n(0)\ne A_n(\vec{c}_0,...,\vec{c}_{n-1},\langle\rangle)\cap\rho^n(0)$, it follows that $\vec{c}_n(0)<\rho^n(0)$. Inductively, assume that $\vec{c}_n(j)<\rho^n(j)$ for every $j\le k$. Since $A_n(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\upharpoonright k+1)\cap\vec{c}_n(k+1)=A\cap\vec{c}_n(k+1)$, then
$\vec{c}_n(k+1)<{\rm min}(A_n(\vec{c}_0,...,\vec{c}_{n-1},\vec{c}_n\upharpoonright k+1)\Delta A)\le\rho^n(k+1).$
Hence $C_G(\eta_{n-1})=\vec{c}_n(N_n)<\rho^n(N_n)=\eta_n$.$\blacksquare_{\text{claim}}$
We conclude that
$C_G(0)<\eta_0\le C_G(\eta_0)<\eta_1\le C_G(\eta_1)<\cdots$
Let $\kappa^*={\rm sup}_{n<\omega}\eta_n$; then $\kappa^*\in Lim(C_G)$ and therefore $\kappa^*$ is regular in $V$. Also, by assumption, $cf^{V[G]}(\kappa)>\omega$, hence $\kappa^*<\kappa$. By freshness, $A\cap\kappa^*\in V$. This means that in $V$ we can construct the sequence $\langle\eta_n\mid n<\omega\rangle$, which is a contradiction, since this sequence is cofinal in $\kappa^*$ and thus witnesses $cf^V(\kappa^*)=\omega$. This concludes the proof for sets with supremum $\kappa$.$\blacksquare_{\text{Lemma}}$

Now for the remaining cases of theorem REF:

Lemma 6.3 If $A\in V[G]$ is a fresh set of ordinals with respect to $V$ such that ${\rm sup}(A)>\kappa$, then $cf^{V[G]}({\rm sup}(A))=\omega$.

Proof. Let $\mu:=cf^V({\rm sup}(A))$; by theorem REF, $\mu\le\kappa$. There is a fresh set $X\subseteq\kappa$ such that $V[A]=V[X]$. To see this, pick in $V$ a cofinal sequence $\langle\eta_i\mid i<\mu\rangle$ in ${\rm sup}(A)$. Then by $\kappa^+$-c.c., there is $F\in V$ such that $Dom(F)=\mu$, $|F(i)|=\kappa$ for every $i<\mu$, and $A\cap\eta_i\in F(i)$. For each $i<\mu$, find in $V$ an enumeration $\langle x^i_j\mid j<\kappa\rangle$ of $F(i)$ such that for every $W\in F(i)$, $\lbrace j<\kappa\mid x^i_j=W\rbrace$ is unbounded in $\kappa$. Move to $V[A]$ and inductively define an increasing sequence $\langle\gamma_i\mid i<\mu\rangle$ such that $x^i_{\gamma_i}=A\cap\eta_i$. Set $\gamma_0={\rm min}(j\mid x^0_j=A\cap\eta_0)$. Assuming that $\gamma_i$ was defined for every $i\le k<\mu$, define $\gamma_{k+1}={\rm min}(j>\gamma_k\mid x^{k+1}_j=A\cap\eta_{k+1})$. Note that at a limit stage $\delta$, the sequence $\langle\gamma_i\mid i<\delta\rangle$ is definable using only the enumerations and $A\cap\eta_\delta$, which are all available in $V$; hence $\gamma'_\delta={\rm sup}(\gamma_i\mid i<\delta)<\kappa$, and we define $\gamma_\delta={\rm min}(j>\gamma'_\delta\mid x^\delta_j=A\cap\eta_\delta)$. Let $X=\lbrace\gamma_i\mid i<\mu\rbrace\subseteq\kappa$. Since $\langle\gamma_i\mid i<\mu\rangle$ is increasing, $cf^{V[G]}({\rm sup}(X))=cf^{V[G]}(\mu)$, $V[A]=V[X]$ and $X$ is fresh. It follows by the proof for subsets of $\kappa$ that $cf^{V[G]}({\rm sup}(X))=\omega$, hence $cf^{V[G]}({\rm sup}(A))=\omega$.

Finally, suppose that $\mu>\kappa$; then $\mu$ remains regular. So there are an unbounded $X\subseteq\mu$ and $r\in V_\kappa$ such that for every $i\in X$ there is $B_i$ with
$r^\frown\langle\kappa,B_i\rangle\in G\text{ and } r^\frown\langle\kappa,B_i\rangle\parallel\underaccent{\sim}{A}\cap\eta_i.$
Now, in $V$, consider the set
$Z=\lbrace S\mid\exists B\ \exists i<\mu\ (r^\frown\langle\kappa,B\rangle\Vdash\underaccent{\sim}{A}\cap\eta_i=S)\rbrace.$
Then $\bigcup Z=A$, and so $A\in V$, a contradiction.$\blacksquare_{\text{Lemma 6.3}}$ $\blacksquare_{\text{Theorem 6.1}}$
Open problems

Here are some related open problems. In contrast to the case where $o^{\vec{U}}(\kappa)<\kappa$, we do not have here a classification of the subforcings of $\mathbb{M}[\vec{U}]$.

Question 7.1 Classify the subforcings of $\mathbb{M}[\vec{U}]$.

Using theorem REF, it remains to consider models of the form $V[C']$ for some $C'\subseteq C_G$, and to try to classify the forcings which generate these models. Our conjecture, at least for $o^{\vec{U}}(\kappa)=\kappa$, is the following:

Conjecture 7.2 Let $G\subseteq\mathbb{M}[\vec{U}]$ be a $V$-generic filter, where $\forall\alpha\le\kappa.\ o^{\vec{U}}(\alpha)\le\alpha$. If $V\subseteq M\subseteq V[G]$ is a transitive $ZFC$ model, then either it is a finite iteration of Magidor-like forcings as in [4], or there is a tree $T\subseteq[\kappa]^{<\omega}$ in $V$ such that $ht(T)=\omega$ and for every $t\in T$ and every $\alpha\in{\rm Succ}_T(t)$ there is a name $\underaccent{\sim}{\mathbb{M}[\vec{U}]^*}_{t^\frown\alpha}$ for a Magidor-like forcing, such that if $H$ is a $V$-generic filter for the forcing adding a branch through the tree $T$, along with the forcings $\underaccent{\sim}{\mathbb{M}[\vec{U}]^*}_{t^\frown\alpha}$ corresponding to the branch, then $M=V[H]$.

Question 7.3 Suppose that $o^{\vec{U}}(\kappa)=\kappa^+$. Is every set of ordinals in the extension still equivalent to a subsequence of the generic sequence?

Note that the situation here is more involved, since $\kappa$ stays regular in $V[G]$ and it is no longer possible to separate the measures.

Question 7.4 The same as REF, but with $o^{\vec{U}}(\kappa)\ge\kappa^+$.

Question 7.5 What can we say about other Prikry type forcing notions?

In [5], an example of a non-normal ultrafilter is given which adds a Cohen function to $\kappa$. So in general, not every intermediate model of a Prikry type extension is a Prikry type extension.

The following questions were stated in Section 5. In an attempt to generalize REF to a wider class of forcings, the simplest case would probably be to deal with a long enough Magidor iteration of Prikry forcings and to analyze its subforcings.

Question 7.6 Is the result of theorem REF valid for a long enough Magidor iteration of the Prikry forcings?

Question 7.7 Characterize the filters (or ultrafilters) which satisfy the Galvin property (or the generalized Galvin property).

Question 7.8 Assume $GCH$. Let $\kappa$ be a regular uncountable cardinal. Is there a $\kappa$-complete filter on $\kappa$ which fails to satisfy the Galvin property?

Question 7.9 Assume $GCH$. Let $\kappa$ be a regular uncountable cardinal. Is there a $\kappa$-complete filter which extends the closed unbounded filter $Cub_\kappa$ and fails to satisfy the Galvin property?

Question 7.10 Is it consistent to have a $\kappa$-complete ultrafilter over $\kappa$ which does not have the Galvin property?

Question 7.11 Is it consistent to have a measurable cardinal $\kappa$ carrying a $\kappa$-complete ultrafilter which extends the closed unbounded filter $Cub_\kappa$ (i.e., a $Q$-point) and fails to satisfy the Galvin property?

In Section 5 we have seen that a fine $\kappa$-complete ultrafilter on $P_\kappa(\lambda)$ does not satisfy the Galvin property. Indeed, if $U$ is a fine normal measure on $P_\kappa(\lambda)$, then the supercompact Prikry forcing is not $\kappa^+$-c.c.; however, under $GCH$ this forcing is $\lambda^+$-c.c.

Question 7.12 Assume $GCH$ and let $\lambda>\kappa$ be a regular cardinal. Is every quotient forcing of the supercompact Prikry forcing also $\lambda^+$-c.c. in the generic extension?